The end of digital transformation, the rise of AI transformation
The modern technology landscape is advancing quickly, and many enterprises are still trying to catch up. With the growing prominence of generative AI, organizational data faces new risks.
Here, we talk with Steve Tait, Chief Technology Officer at Skyhigh Security, about the key areas of risk that generative AI creates.
Security magazine: Tell us about your background and career.
Tait: I began my career as an engineer in the emerging mobile data space in the late 90s, and over time progressed through the traditional path of technical team management and leadership roles. One of my key early positions was Head of Engineering at Capita IT, where I worked on a range of projects in financial services and government systems before ultimately serving as Director of Software & Application Services for Capita’s Travel and Events business unit.
Later in my career, I joined BAE Systems as an engineering director, leading its global engineering workforce focused on the development of cyber and intelligence products, along with complex system integration programs for government and enterprise systems. In 2018, I was hired as VP of Engineering at CALYX, where I focused on critical software solutions for the pharmaceutical industry before taking on my first Chief Technology Officer role at Snow Software.
Today, I serve as Chief Technology Officer at Skyhigh Security, bringing my decades-long experience building, transforming, and leading highly scaled and distributed engineering teams, with a focus on delivering mission-critical security software to major enterprises, governments, and organizations across the globe.
Security magazine: What are the key areas of generative AI that create risk?
Tait: Generative AI presents numerous risks to the enterprise: hallucinations, where the AI generates incorrect or misleading outputs; poisoning of its training data, which could compromise the AI’s reliability; and malicious prompt injections designed to manipulate the system. Intellectual property (IP) risks also arise when data is shared with public AI services, creating potential breaches of business confidentiality.
However, in my opinion, the most pressing risk for enterprises lies in data protection within emerging AI-based productivity tools. There are two specific areas to look out for: copilots and AI-based low-code app development tools.
Security magazine: Why are these key areas risky?
Tait: It’s all about access to data. Copilots by their nature have access to an enormous amount of corporate data, a scope far broader than what an individual user can reach. While basic protections exist to prevent unauthorized data access, there is no guarantee that the AI tool won’t inadvertently link unrelated datasets. This significantly increases the potential for sensitive data exfiltration, whether deliberate or accidental.
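One common mitigation for this scope mismatch is retrieval-time ACL filtering: every document a copilot pulls into its context is re-checked against the requesting user's own permissions, so the assistant can never surface data the user could not open directly. Below is a minimal sketch of the idea; Document, acl_allows, and build_context are hypothetical names for illustration, not any vendor's actual API.

```python
# Minimal sketch of retrieval-time ACL filtering for a copilot, using a
# hypothetical in-memory document store. Not a real copilot API.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set[str] = field(default_factory=set)

def acl_allows(doc: Document, user_id: str) -> bool:
    """True only if this user could open the document directly themselves."""
    return user_id in doc.allowed_users

def build_context(candidates: list[Document], user_id: str) -> list[Document]:
    """Drop every retrieved document the requesting user is not entitled to,
    so the copilot's effective scope equals the user's own scope rather than
    the broad scope of the service account that indexed the data."""
    return [doc for doc in candidates if acl_allows(doc, user_id)]

# Example: retrieval found both documents, but alice's context gets only hers.
docs = [
    Document("d1", "Q3 board deck", allowed_users={"alice"}),
    Document("d2", "HR salary file", allowed_users={"hr_team"}),
]
print([d.doc_id for d in build_context(docs, "alice")])  # ['d1']
```

The key design choice is that the check runs at query time against the end user's identity, not the identity of the indexing service.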
On the other hand, AI-supported development tools will cause an explosion in the already growing output of “citizen developers,” non-technical individuals who build applications. These developers often lack formal training in secure development practices, such as scope management and the principle of least privilege. Consequently, applications can be built and distributed with minimal oversight.
While systems like Active Directory can manage some coarse-grained privileges, discrepancies arise when application-level access differs from user-level permissions. This kind of horizontal privilege escalation can grant access to unauthorized data, making breaches extremely difficult to detect.
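A minimal sketch of that discrepancy, with hypothetical names throughout: the app runs under a broad service account that can read every record, so unless the app re-checks each record against the caller's identity, any user can read any other user's data.

```python
# Sketch of the horizontal-privilege gap: the app's service account can read
# everything, so the per-record ownership check is what enforces least
# privilege. Record, fetch_as_service_account, etc. are illustrative names.
from dataclasses import dataclass

@dataclass
class Record:
    record_id: int
    owner_id: str
    payload: str

# Pretend data store that the app's service account can read in full.
_STORE = [
    Record(1, "alice", "alice's salary data"),
    Record(2, "bob", "bob's salary data"),
]

def fetch_as_service_account(record_id: int) -> Record:
    """Runs with the app's broad privileges, not the caller's."""
    return next(r for r in _STORE if r.record_id == record_id)

def get_record_unsafe(record_id: int, caller_id: str) -> Record:
    # Horizontal privilege escalation: any caller can read any record.
    return fetch_as_service_account(record_id)

def get_record_safe(record_id: int, caller_id: str) -> Record:
    # Least privilege: re-check the record against the caller's identity.
    record = fetch_as_service_account(record_id)
    if record.owner_id != caller_id:
        raise PermissionError(f"{caller_id} may not read record {record_id}")
    return record
```

With this data, get_record_unsafe(2, "alice") happily returns Bob's record, while get_record_safe raises PermissionError; citizen-developer apps routinely ship the first variant without anyone noticing.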
Security magazine: What are some best practices for organizations to keep data safe while incorporating AI?
Tait: Organizations need to invest significant time in understanding their level of data exposure. Where does your data reside? Web, cloud, email, private apps? Who has access to it — and should they?
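As one concrete starting point for that exposure inventory, here is a minimal sketch that audits a single common location, Amazon S3, for publicly readable buckets via boto3. It assumes configured AWS credentials; a real audit would also inspect bucket policies, object ACLs, Block Public Access settings, and every other place data resides.

```python
# Minimal sketch: flag S3 buckets whose ACL grants access to all users.
# Assumes AWS credentials are configured for boto3.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets() -> list[str]:
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        # Any grant to the global AllUsers/AuthenticatedUsers groups means
        # the bucket is readable far beyond the people who should have it.
        if any(g["Grantee"].get("URI") in PUBLIC_GRANTEES for g in acl["Grants"]):
            public.append(bucket["Name"])
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"Publicly readable bucket: {name}")
```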
They must also identify and train citizen developers, in addition to checking their applications for possible vulnerabilities.
When it comes to copilots, organizations must continuously review the outputs; don’t just put data in and forget about it. Additionally, don’t sleep on cybersecurity tools and Data Loss Prevention (DLP) techniques. Many of these tools are already available and can safeguard your data, prevent threats, and ensure compliance, all while enabling organizations to embrace AI and innovation.
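To show the idea behind DLP on AI outputs, here is a minimal sketch that scans a copilot response for simple sensitive-data patterns and redacts matches before delivery. The patterns are deliberately crude illustrations; production DLP engines combine many detectors with context and confidence scoring.

```python
# Minimal DLP-style sketch: scan copilot output for sensitive patterns and
# redact matches before the response reaches the user. Patterns are
# illustrative only.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the names of every sensitive pattern found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before delivery."""
    for name, pat in SENSITIVE_PATTERNS.items():
        text = pat.sub(f"[REDACTED:{name}]", text)
    return text

reply = "Sure - the test record uses SSN 123-45-6789."
if scan_output(reply):
    reply = redact(reply)
print(reply)  # Sure - the test record uses SSN [REDACTED:ssn].
```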
Security magazine: Anything else you’d like to add?
Tait: “Digital transformation” is over. In today’s technologically advanced landscape, few, if any, businesses would claim they are not already digitally transformed. This shift began two decades ago with the automation of manual tasks and reached its peak during the COVID-19 pandemic. Now, the fundamental challenge facing businesses is how to incorporate generative AI while keeping critical data secure.
AI usage will likely be a dominant priority for 2025 and beyond. This task is made complex not only by the emergence of new risks tied to large language models (LLMs), such as data poisoning and malicious prompt injection, but also by expanding threat vectors from shadow AI, private AI, and sanctioned applications. For instance, corporate copilots, which are deeply integrated with business assets and facilitate extensive “interrogation” of data, amplify these risks. Simultaneously, citizen developers — enabled by LLM-powered tools — create applications without necessarily understanding the principles of secure development, increasing the likelihood of vulnerabilities. As businesses integrate generative AI, securing critical data will remain paramount.