The Hidden Threat of Shadow AI
In November 2022, ChatGPT launched, bringing new possibilities and challenges. As AI and GenAI have grown in popularity and use, businesses have had to grapple with how to adopt the technology and ensure it is used correctly and ethically. While the productivity gains are apparent to users, companies must set guidelines to protect themselves and their employees.
Think back to when cloud technology was introduced, and IT departments had to figure out how to deploy it. Employees began using cloud-based technologies before IT departments understood the benefits and limitations, leading to issues such as data privacy concerns, data loss, and security gaps. Fast-forward to today: employees have adopted AI and GenAI before internal rules and industry-wide regulations could be put in place. This Wild West mentality has resulted in “shadow AI”— an update to “shadow IT” whereby IT departments have no visibility or control over the applications and services employees use.
Shadow AI is the unauthorized use of AI technology in the workplace, usually unbeknownst to the organization’s IT department. Many employees use AI tools through personal accounts that IT departments have not approved. A recent study from Cyberhaven indicates that between March 2023 and March 2024, the amount of corporate data employees put into AI tools increased by 485 percent. More than a quarter of that data (27 percent) was considered sensitive, putting it at risk and creating more IT headaches. Sensitive data includes personally identifiable information (PII), financial data, intellectual property, business operations data, customer data, employee data, and legal documents.
With formal governance, security protocols, and oversight, employees can avoid exposing sensitive data when using AI tools or services to enhance productivity or solve a specific problem.
How GenAI is Being Used, With or Without IT Approval
Shadow AI sounds like a cloak-and-dagger headline for a spy novel. The reality is that businesses are in a tough spot if they don’t establish governance, oversight, or security protocols around AI and GenAI. Here are examples of how employees and departments are already tapping into GenAI:
- Content creation. A marketing team can leverage GenAI to create engaging videos with AI-powered scriptwriting, customized messaging, AI-generated graphics and animation, and voice and sound editing.
- Data analysis. Employees might use AI tools for data analysis to gain insights quickly without waiting for IT support. For example, a marketing team preparing for a product launch might need to analyze customer behavior to tailor messages to different audiences.
- Automation. Departments may implement AI-driven automation to streamline processes and reduce manual work. For example, the HR department may leverage AI in recruitment with resume screening, filtering, and interview scheduling automation.
- Customer interactions. Customer service teams might deploy AI chatbots or virtual assistants to enhance engagement and handle common inquiries and support requests.
What Risks Come with Shadow AI?
As promising as AI and GenAI are, using these technologies without proper parameters can pose significant risks to the business. Consider these:
- Data breaches and malware attacks. Tools that leverage GenAI may lack access controls and encryption, exposing sensitive data to unauthorized individuals and opening the door to malware and potential security breaches.
- Legal and compliance. Using unapproved AI technologies can lead to the misuse of personal data in violation of privacy laws. Using these tools without the necessary oversight can also result in non-compliance with regulations and standards such as GDPR, HIPAA, or PCI DSS, bringing hefty fines or, worse, legal consequences.
- Operational inefficiency. Fragmented data management or poor resource allocation can lead to data inconsistency, inaccurate information, or duplicate efforts.
Action and Education Are Key to Protecting Businesses
The best way for businesses to protect themselves from the dangers of misuse of AI and the exposure of sensitive data is to be proactive: create business practices and policies, provide IT-approved AI tools, and develop a culture of compliance and awareness.
Another way is for the IT department to run educational campaigns that raise awareness of the risks of using unauthorized AI technologies. Awareness campaigns, training workshops, and information sessions where IT experts discuss the consequences of unapproved AI tools can tackle these issues. Other recommendations include:
- Create clear policies and guidelines.
- Leverage employee-facing communication channels (e-learning modules, newsletters, intranet) to share tutorials and FAQs about the dangers of shadow AI.
- Develop real-world examples and case studies that illustrate the dangers of misusing AI. Consider highlighting internal incidents where shadow AI caused a problem.
- Task upper management with endorsing policies and best practices to lead by example.
By taking proactive measures to ensure that AI usage policies are in place and that employees understand the issues associated with shadow AI, the IT department can empower and protect the company and its employees while fostering a culture of security and compliance.
About the Author
Saugat Sindhu is Senior Partner and Global Head, Advisory Services, at Wipro Limited.
He leads a diverse group of practitioners globally, providing management consulting and business advisory services at Wipro focused on cybersecurity and risk, and related technology integration and transformation services for commercial and public sector clients. He leads strategy development and execution planning, industry motions, solution innovation, and client service for Wipro’s Cyber Advisory business. His primary industry expertise includes Media, Technology and Telecom.
Saugat can be reached online at https://www.linkedin.com/in/saugatsindhu/ and at our company website https://www.wipro.com/.