Generative AI in Business: Managing Risks in the Race for Innovation


Artificial intelligence has emerged as a game-changing force, with record amounts of funding fueling new innovations that are transforming industries and workflows at speeds we have never seen before.

According to data from Crunchbase, AI companies received $4.7 billion in venture funding in February 2024, more than double the $2.1 billion invested in February 2023. It’s money like this that is fueling innovations being adopted at record speed. Take code copilots as an example: adoption rates have exceeded 50%. Other big releases in 2024 included OpenAI’s GPT-4o and o1 series, along with AI video creation tools such as Google’s Veo.

While these new tools are helping build business efficiencies, they also introduce significant security and data privacy concerns.

A great example is Microsoft Copilot, which typifies the double-edged sword that is AI. On one hand, it delivers valuable workflow optimization capabilities: recent reports show that users can complete tasks such as writing, summarizing meetings, and searching for information nearly 30% faster.

On the other hand, there is a heightened risk of sensitive data exposure and potential data privacy violations. An article from GCS Technologies outlines the significant data security risks that come with using Microsoft Copilot: “research shows that 16% of businesses’ critical data is overshared. In fact, the average company has 802,000 files at risk of oversharing — typically with users or groups within the company.”

Some may wonder why Microsoft hasn’t addressed these risks and done more to protect its 20 to 30 million active Copilot users. Part of the problem is that companies are locked in intense competition to bring the next cutting-edge AI technologies to market, often at the expense of conducting thorough security assessments before deployment. This introduces new vulnerabilities and exposes organizations to potential threats.

Another great example of a recent innovation that sparked equal parts excitement and alarm is Anthropic’s Claude 3.5 Sonnet. One of the big AI releases of 2024, it takes AI beyond traditional assistants like ChatGPT and Google Gemini because it can execute complex tasks with minimal human intervention. Users embed Claude in a system, and from there it can leverage multiple tools and applications to carry out commands: anything from researching your next family vacation or building a new website for your business to advanced coding, software development, and context-sensitive customer support.
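To make the agentic pattern concrete, here is a minimal sketch of the loop such systems run: the model proposes an action, host code executes it, and the result is fed back until the task is done. Everything here is a hypothetical stand-in for illustration, not Anthropic’s actual API; a production agent would add authentication, sandboxing, and strict validation of every tool call.

```python
# Minimal, illustrative agent loop: the model plans, the host executes tools.
# call_model() and the TOOLS registry are hypothetical stand-ins, not a real
# LLM SDK; they exist only to show the shape of the loop.
from dataclasses import dataclass, field

@dataclass
class ModelReply:
    tool: str | None = None   # tool the model wants to run, or None when done
    args: dict = field(default_factory=dict)
    answer: str = ""          # final answer, set when tool is None

def call_model(history: list[dict]) -> ModelReply:
    """Hypothetical LLM call; a real system would send `history` to an API."""
    if not any(m["role"] == "tool" for m in history):
        return ModelReply(tool="web_search",
                          args={"query": "family-friendly destinations"})
    return ModelReply(answer="Here are three itineraries based on the results...")

TOOLS = {
    # Each entry is ordinary code the agent may invoke. Every tool call is an
    # attack surface, so inputs and outputs must be validated and logged.
    "web_search": lambda args: f"search results for {args['query']!r}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):            # hard cap prevents runaway loops
        reply = call_model(history)
        if reply.tool is None:
            return reply.answer
        result = TOOLS[reply.tool](reply.args)  # real side effects happen here
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"

print(run_agent("Research my next family vacation"))
```

The security-relevant line is the tool invocation in the middle of the loop: real actions are executed on the strength of text the model produced, which is exactly where the risks discussed below enter.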

It’s an exciting tool, but a closer look at Claude 3.5 Sonnet reveals significant security challenges.

The Risks of AI Agent-Based Programs

AI systems like Claude 3.5 Sonnet operate through extensive data transactions and retrieval techniques, where data security and privacy vulnerabilities abound. These systems are also being actively exploited: cybercriminals are leveraging AI to enhance their attack methodologies, targeting AI systems in ways that can have deleterious impacts on a business.

For example:

Model Extraction Attacks: A major concern is model extraction attacks, in which attackers reverse-engineer AI models to uncover how they work. With this insight, cybercriminals can create mimicked versions for malicious purposes, such as gaining unauthorized insights into a business’s operations or launching highly targeted attacks.

Prompt Injection Attacks: Another growing threat is prompt injection attacks, in which attackers manipulate an AI model’s inputs to produce unintended or harmful outputs. The problem is compounded by advanced AI models that can craft adaptive prompts, letting attackers continuously refine their injection techniques until they bypass filters and exploit vulnerabilities to disrupt operations or extract sensitive information. (A minimal defensive sketch follows this list.)

AI-Powered Ransomware: Another pressing concern is AI-powered ransomware, which attackers can leverage to identify weaknesses more efficiently. They can also use AI to develop ransomware that adapts and evolves, evading traditional detection methods and complicating remediation efforts.
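Of these, prompt injection is the easiest to illustrate with a defensive sketch. A common, if imperfect, first layer is to treat all user-supplied or retrieved text as untrusted and screen it before it reaches the model, while keeping trusted instructions and untrusted data clearly delimited. The patterns below are illustrative assumptions only; determined attackers evade keyword heuristics, so real deployments layer this with output filtering, least-privilege tool access, and human review.

```python
import re

# Heuristic phrases often seen in injection attempts. Illustrative only:
# attackers adapt quickly, so pattern matching is a speed bump, not a wall.
INJECTION_PATTERNS = [
    r"ignore (all|any|the|previous|prior) (instructions|prompts|rules)",
    r"disregard (the|your) (system|previous) (prompt|instructions)",
    r"you are now (in )?developer mode",
    r"reveal (the|your) (system prompt|hidden instructions)",
]

def screen_untrusted_text(text: str) -> list[str]:
    """Return the patterns matched by untrusted input (empty list = clean)."""
    return [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]

def build_prompt(system_rules: str, untrusted: str) -> str:
    hits = screen_untrusted_text(untrusted)
    if hits:
        # Quarantine for human review rather than passing it to the model.
        raise ValueError(f"possible prompt injection, matched: {hits}")
    # Delimit untrusted data so the model can be instructed never to obey it.
    return f"{system_rules}\n\n<untrusted_data>\n{untrusted}\n</untrusted_data>"

print(build_prompt("Summarize the document for the user.",
                   "Quarterly revenue grew 12% against a flat market."))
```

The separation of channels matters as much as the screening itself: if trusted instructions and untrusted data share one undifferentiated prompt, the model has no reliable way to tell them apart.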

The Growing Adoption of GenAI Systems

GenAI adoption in business environments is growing. According to an Altman Solon report, Putting Generative AI to Work, 65% of enterprises around the globe reported using GenAI tools regularly. That’s up from just 11% in early 2023 and represents a jaw-dropping 490% year-over-year jump in adoption.

The issue is that businesses lack the appropriate processes, guidelines, and formal governance structures needed to regulate AI use, which, at the end of the day, makes them prone to accidental security breaches. In many instances, the culprits are employees who introduce GenAI systems on corporate devices with no understanding of the risks involved, or of whether their use is even permitted under the company’s existing data security and privacy guidelines.

The fact that companies lack the necessary levels of oversight is problematic and can create unintended consequences. For example, it can lead to violations of data protection regulations (e.g., GDPR, the California Privacy Rights Act) or expose an organization to sophisticated cyberattacks. Righting the ship begins with security leaders, who must proactively map potential GenAI use cases to existing data security frameworks. By assessing risks and scoring them, teams can spot weaknesses and fix them before they’re exploited.
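A minimal sketch of that assess-and-score step is below, using a simple likelihood-times-impact model. The use cases, 1–5 scales, and review threshold are hypothetical placeholders; substitute whatever your existing risk framework defines.

```python
# Minimal risk-scoring sketch: score = likelihood x impact, each rated 1-5.
# Entries and numbers are hypothetical; populate from your own use-case register.
USE_CASES = {
    # name: (likelihood of sensitive-data exposure, impact if exposed)
    "marketing copy drafting":          (2, 2),
    "meeting-notes summarization":      (3, 4),  # notes often contain PII
    "code copilot on proprietary repo": (4, 5),
}

REVIEW_THRESHOLD = 12  # scores at or above this need controls before rollout

for name, (likelihood, impact) in sorted(
    USE_CASES.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    score = likelihood * impact
    status = "REVIEW" if score >= REVIEW_THRESHOLD else "ok"
    print(f"{score:>2}  {status:<6} {name}")
```

Even a crude ranking like this gives security teams an ordered backlog: the highest-scoring use cases get controls first.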

Addressing Security Risks in AI-Powered Systems

More must be done to mitigate the security risks associated with AI agent-based systems. Your best bet is a multifaceted approach that goes beyond traditional cybersecurity measures in favor of strategies designed to address the unique challenges posed by these advanced technologies.

  1. Implement Robust AI Governance: Establish clear guidelines that detail how AI can be used within the organization. To make this easily digestible for all employees, provide clear details and examples of permissible use cases, and even name permissible AI tools. Next, set specific boundaries for data access and ensure that all AI systems comply with data security and privacy regulations. Naturally, not every employee will adhere to these guidelines; in fact, you can count on many who will not. For this reason, conduct regular audits to identify unauthorized AI usage that could lead to security breaches, then take whatever preventative measures are needed (see the audit sketch after this list).
  2. Enhance Employee Awareness: Never underestimate the power of employee education, which is essential at a time when innovation is far ahead of training. Put in place an educational program that delves into the risks of AI systems, including training sessions that give people the tools they need to recognize red flags, such as suspicious AI-generated outputs or unusual system behaviors. In a world of AI-enabled threats, empowering employees to act as the first line of defense is essential.
  3. Invest in Advanced Security Solutions: Traditional security tools may not be sufficient to protect against AI-enabled attacks. Organizations should consider adopting advanced solutions, such as endpoint protection platforms, behavioral analytics, and real-time threat detection tools, to bolster their defenses.
  4. Focus on Preemptive Defense: A preemptive approach that leverages tools such as Automated Moving Target Defense (AMTD) can help organizations stay ahead of attackers. By anticipating potential threats and implementing measures to address them before they occur, companies can reduce their vulnerability to AI-enabled exploits. This proactive stance is particularly important given the speed and adaptability of modern cyber threats.
  5. Monitor AI Systems Continuously: Continuous monitoring of AI systems is essential to detect unusual activity or signs of compromise, as it can help ensure that AI agents operate as intended and do not begin to pose a risk to a company’s broader security environment.
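As a concrete starting point for the audits in step 1 and the monitoring in step 5, the sketch below counts outbound requests to known GenAI endpoints per user from a proxy log. The CSV format and domain list are assumptions for illustration; in practice you would adapt the parsing to whatever your proxy or DNS gateway actually emits and keep the domain list current.

```python
import csv
from collections import Counter

# Domains associated with public GenAI services. A hypothetical, incomplete
# list for illustration; maintain your own from threat-intel feeds.
GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "api.anthropic.com", "gemini.google.com",
}

def audit_proxy_log(path: str) -> Counter:
    """Count GenAI requests per user, assuming CSV rows of
    (timestamp, user, destination_host)."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for timestamp, user, host in csv.reader(f):
            if host.strip().lower() in GENAI_DOMAINS:
                hits[user] += 1
    return hits

if __name__ == "__main__":
    for user, count in audit_proxy_log("proxy.csv").most_common():
        # Flag anyone using GenAI services not on the approved-tools list.
        print(f"{user}: {count} GenAI requests")
```

Even this crude count surfaces shadow AI quickly; the harder, ongoing work is deciding which of those uses are sanctioned and bringing the rest under the governance described above.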

Preparing for the Future of AI-Driven Threats

It’s reported that the global generative AI market could grow from $28.9 billion in 2024 to more than $54 billion in 2025. With this growth will come groundbreaking innovations, and with them new tactics from cybercriminals. It’s incumbent on organizations to recognize that the risks of AI adoption extend beyond technological vulnerabilities to include human factors, such as inadequate training and governance. A holistic approach to security, encompassing people, processes, and technology, is essential to safeguarding against these threats.

By taking proactive steps to address the risks associated with AI agent-based systems, organizations can harness the benefits of these technologies while minimizing their exposure to emerging threats. The path forward lies in balancing innovation with vigilance, ensuring that security remains a priority as we embrace the future of artificial intelligence.

About the Author

Brad LaPorte is the CMO of Morphisec. He is a seasoned cybersecurity expert and former military officer specializing in cybersecurity and military intelligence for the United States military and allied forces. During a distinguished career at Gartner as a top-rated research analyst, Brad was instrumental in establishing key industry categories such as Attack Surface Management (ASM), Extended Detection & Response (XDR), Digital Risk Protection (DRP), and the foundational elements of Continuous Threat Exposure Management (CTEM). His forward-thinking approach led to the inception of Secureworks’ MDR service and the EDR product Red Cloak, both industry firsts. At IBM, he spearheaded the creation of the Endpoint Security Portfolio, as well as MDR, Vulnerability Management, Threat Intelligence, and Managed SIEM offerings, further cementing his reputation for cybersecurity solutions years ahead of their time.

Brad can be reached online at https://www.linkedin.com/in/brad-laporte/ and at our company website https://www.morphisec.com/


