Work with Shadow AI – not in conflict

Something new is lurking in the shadows of the enterprise ecosystem. Beyond undisclosed apps and unsanctioned software – traditional shadow IT – employees are turning to ChatGPT and other cloud AI tools in droves.

About three-quarters of employees surveyed now use generative AI in the workplace, and nearly half (46%) started within the last six months. As a result, more employees are onboarding smart tools and uploading proprietary information, leaving administrators blind to security backdoors and to what happens inside data black boxes. The compliance, regulatory and brand risks speak for themselves.

The what and why of shadow AI

Shadow AI looks like a natural evolution of remote workers adopting ever more innovative tools. As during the COVID-19 pandemic, employees treat the tech stack as their own decision, picking and choosing favorite apps without telling administrators. But it also opens up security and privacy concerns, especially because there is still so much we don't know about AI.

For the record, I don't blame employees for striving for excellence or efficiency. These tools can deliver both, and workers (especially in tech) are usually encouraged to move fast and break things. Large language models are adept at uncovering insights from large volumes of text or code far more quickly than humans can. The problem is that this new wave of "preferred apps" can garble output, leak sensitive information and create regulatory and compliance nightmares.

Shadow AI gives administrators significantly less control over, and monitoring of, their ecosystem. If an employee shares sensitive data with these models, it is likely happening without encryption or other security measures. Similarly, there is no audit trail to prove who used what and why, and no visibility into where the information goes – a significant concern, given that researchers report 11% of files uploaded to AI tools contain sensitive corporate data.
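For teams that want to start rebuilding that missing audit trail, one low-effort option is to mine existing egress-proxy logs for AI-bound traffic. Below is a minimal sketch, assuming a simple space-delimited log format (timestamp, user, destination host) and an illustrative denylist of AI domains – both are assumptions you would adapt to your own proxy and tooling.

```python
# audit_shadow_ai.py - minimal sketch of an AI-usage audit built from proxy logs.
# Assumptions (adjust for your environment): each log line is space-delimited as
# "timestamp user destination_host", and AI_DOMAINS is an illustrative list only.
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def audit(log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair to see who is using what."""
    usage = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines rather than failing the audit
            _timestamp, user, host = parts[:3]
            if host in AI_DOMAINS:
                usage[(user, host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), count in audit("proxy.log").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Even a rough report like this answers the "who used what" question and highlights which tools are popular enough to deserve a sanctioned, enterprise-grade alternative.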

Why is shadow AI a problem?

The risks are heightened for enterprises handling sensitive information, as unregulated use could violate regulations such as GDPR or HIPAA. It's also worth remembering that the technology isn't perfect: biased answers and broken logic showing up in corporate output reflect poorly on the brand.

Unfortunately, you don't have to look far for AI horror stories in the enterprise – about 80% of IT organizations have reported negative consequences, such as incorrect results or data leaks, from employees' use of generative AI.

How can admins respond?

As with shadow IT, administrators should work with employees rather than against them to find solutions to shadow AI. Educate them on why unfettered AI access can be bad for business.

Also, help keep them on the straight and narrow by putting data guardrails in place, such as blocking access to questionable tools and preventing file uploads to unauthorized services.
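As one concrete illustration, guardrails like these can live in a forward proxy. The following is a minimal sketch of a mitmproxy addon; the BLOCKED_HOSTS and UPLOAD_RESTRICTED domain lists and the multipart-upload heuristic are illustrative assumptions, a starting point rather than a complete data-loss-prevention solution.

```python
# block_shadow_ai.py - minimal mitmproxy addon sketch for shadow-AI guardrails.
# Run with: mitmdump -s block_shadow_ai.py
# The domain lists and upload heuristic below are illustrative assumptions.
from mitmproxy import http

BLOCKED_HOSTS = {"chat.openai.com", "claude.ai"}   # unapproved AI tools
UPLOAD_RESTRICTED = {"gemini.google.com"}          # browsing allowed, uploads not

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host

    # Guardrail 1: block questionable tools outright.
    if host in BLOCKED_HOSTS:
        flow.response = http.Response.make(
            403, b"Blocked by IT policy: unapproved AI tool.",
            {"Content-Type": "text/plain"},
        )
        return

    # Guardrail 2: allow browsing, but stop file uploads (multipart POSTs).
    content_type = flow.request.headers.get("content-type", "")
    if (host in UPLOAD_RESTRICTED
            and flow.request.method == "POST"
            and "multipart/form-data" in content_type):
        flow.response = http.Response.make(
            403, b"Blocked by IT policy: file uploads to this service are not allowed.",
            {"Content-Type": "text/plain"},
        )
```

The advantage of enforcing policy at the proxy layer is that it works the same way regardless of which browser or device an employee chooses – exactly the kind of visibility shadow AI otherwise takes away.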

One way or another, generative AI adoption will keep accelerating, whether IT approves or not. Smart admins will treat unauthorized AI as market research – identify what employees need, evaluate the risks, and deploy enterprise-grade options with appropriate guardrails. That way, ecosystem orchestrators can shed light on the situation, enable innovation and protect the business (and themselves).
