AI security risks: Separating hype from reality
The security community, along with its collaborators in fields such as risk, audit and governance, is actively grappling with the implications of generative artificial intelligence (AI). A recent ISACA survey on AI found that a majority of organizations do not provide training on the authorized use of generative AI, and that only three out of 10 have established policies governing the use of AI technology. AI undeniably poses substantive risks to enterprises, including security and privacy risks, but it is important to distinguish the threats that are most serious from those that are drawing more attention than they deserve. That distinction helps enterprise leaders decide which measures to implement so their organizations can navigate this rapidly evolving landscape responsibly.
As someone who worked in the healthcare industry when the prospect of moving to cloud platforms caused significant apprehension, I can draw parallels to the current concerns surrounding AI adoption in the corporate world. Looking back, many of the anxieties about the cloud turned out to be overstated. At a high level, the concerns centered on security, data privacy, compliance, access and legal issues. Today, cloud platforms are a common fixture across the corporate landscape, having demonstrated their effectiveness and security. Just as the shift from traditional on-premises security to cloud-based security required security professionals to adapt their expertise, the rapid adoption of generative AI presents a prime opportunity for them to pivot again. By investing in AI training and giving teams the necessary tools, security professionals can harness AI to enhance their capabilities and address the associated risks effectively.
As with the cloud, there are two deployment models for generative AI: public (such as ChatGPT) and private. The main distinction lies in accessibility and control. Public generative AI is available to a wide range of users and offers limited customization, while private generative AI is tailored to specific organizational needs, allowing a higher degree of control over usage, data handling and behavior. That difference makes private generative AI well suited to unique business or industry-specific requirements, while a platform like ChatGPT is designed for general public use. The security risks companies face are heightened on public generative AI platforms, which create more potential for the compromise of sensitive information; this has led several companies to ban or restrict widely recognized platforms like ChatGPT. In contrast, well-resourced organizations can invest in proprietary AI platforms, affording them greater control over data storage and protection. For many companies, a pragmatic middle ground is to turn to third-party services, which are expected to become increasingly prevalent.
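To make the public-versus-private distinction concrete, here is a minimal sketch of the same prompt taking two different paths: out to a public API, or to a model hosted inside the enterprise perimeter. The internal endpoint URL and model name are illustrative assumptions, not a prescribed setup; the public call uses OpenAI's standard chat completions API.

```python
import os
import requests

PROMPT = "Summarize our incident-response metrics for the quarter."

def query_public_ai(prompt: str) -> str:
    """Public generative AI: the prompt leaves the corporate network and
    is processed on infrastructure the enterprise does not control."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def query_private_ai(prompt: str) -> str:
    """Private generative AI: a hypothetical self-hosted model served
    inside the enterprise network, so prompts and outputs stay under
    organizational control."""
    resp = requests.post(
        "https://llm.internal.example.com/v1/chat/completions",  # assumed internal endpoint
        json={
            "model": "in-house-llm",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The code paths are deliberately near-identical; the governance difference is entirely in where the endpoint lives and who controls the data it sees.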
To drive responsible use of generative AI, the security, risk and IT functions each have an important part to play. IT teams are responsible for restricting access to specific generative AI models. The risk function identifies, assesses and mitigates the risks associated with generative AI to ensure responsible and ethical use, and must clearly define the organization's risk appetite and tolerance. Security teams must ensure that data used to train and fine-tune generative AI models is handled with strict privacy and security measures, which includes anonymizing data, encrypting sensitive information and complying with data protection regulations. Security teams also work in tandem with legal and compliance teams to mitigate the ethical and legal risks of generative AI, such as intellectual property infringement and the generation of deceptive content, and they monitor the use of generative AI for malicious purposes such as creating deepfakes, spam or other harmful content.
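The anonymization point above can be illustrated with a small sketch: redacting obvious personally identifiable information from text before it is used in a prompt or added to a fine-tuning set. The regex patterns here are simplistic assumptions for illustration only; a production pipeline would rely on a vetted PII-detection service rather than a handful of regular expressions.

```python
import re

# Deliberately simple patterns for illustration; real PII detection needs
# far broader coverage (names, addresses, record numbers, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before the text is
    sent to a generative AI model or included in training data."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    print(redact_pii(sample))
    # -> Reach Jane at [EMAIL] or [PHONE]; SSN [SSN].
```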
As much as the inevitable hype cycle inflates some of the concerns around AI, there is no doubt that significant risks remain, and some legitimate questions do not yet have sufficient answers. For example, what can be done about the advance of deepfake technology? Instances of AI convincingly emulating an individual's voice and speech patterns, potentially weaponized in deepfake calls, are deeply worrisome, particularly in the absence of well-established countermeasures. There are also evolving concerns about AI in creative fields such as art and music, where it affects how people can protect their work and sustain their livelihoods. From a workforce standpoint, there are open questions about which roles AI could replace, but there is no doubt that some areas, such as cyber threat hunting, can be enhanced by AI augmenting human capabilities.
Whether you're attending a security conference, scrolling LinkedIn or bantering with industry colleagues, there is no escaping the intense discussion of AI's impact on the security community. That conversation was amplified throughout 2023 as ChatGPT and other generative AI platforms gained mindshare both inside and outside the enterprise. There is a lot of noise out there, so it is especially important for security professionals to focus on the real risks AI presents rather than on every headline. By making informed choices, such as minimizing reliance on public generative AI tools, investing in ongoing AI training and credentialing for personnel, and routinely updating organizational policies to address bias and fairness, ethical use, malicious use and the most pertinent threats, enterprises can navigate the complex AI risk landscape with confidence and purpose.