Now Is Not the Time to Cut Back on Security Teams


Generative artificial intelligence (AI) is revolutionising the way businesses operate. The widespread adoption and integration of models such as OpenAI’s ChatGPT and Google’s Gemini into everyday organisational processes have driven seismic growth in the global market, which is expected to reach $1.3 trillion by 2032.

The rapid advancement of AI models has created a fiercely competitive environment in which companies are channelling unprecedented resources into AI development. However, the intense pressure to keep up and innovate is overshadowing an equally critical priority: AI safety.

Security is the backbone of every company

Despite the immense potential of, and excitement over, generative AI, its adoption is far from universal. A recent study by CIO found that 58% of organisations have not adopted AI due to cybersecurity concerns. As AI technologies evolve, so do the types of cyber-attack capable of disrupting businesses. Yet many companies are scaling back their security teams: the very units tasked with protecting sensitive data.

Mass layoffs within information security departments have become alarmingly common. Demand for cybersecurity professionals has fallen by 32%, and even large corporations, such as ASDA, have cut their internal security teams.

These cuts come at a time when AI-linked data breaches are a growing risk. ChatGPT, for example, has been manipulated into generating Windows 10 and 11 product keys, leading to significant security breaches. User prompts can also expose sensitive business information, which may then be stored without encryption. Studies reveal that 24.6% of employees have entered confidential documents, and 5.4% payment card details, into prompts for generative AI models.

Such mismanagement of AI can damage an organisation’s trust and credibility and expose it to legal liability and regulatory fines. In the UK alone, cybersecurity breaches have cost businesses over £44bn.

Technology holds the key to next-level security

Governance policies, compliance measures, and education programmes are critical for companies looking to combat generative AI’s potential security threats. However, organisations must also invest in privacy-enhancing technologies (PETs) to strengthen their defences.

Most companies handle sensitive, financially valuable information, so the potential cost of a cybersecurity breach is staggering. PETs can act as a powerful complement to existing security measures.

PETs enable secure data exchanges among organisations while preserving confidentiality and compliance. For instance, Fully Homomorphic Encryption (FHE) allows computations to be performed on encrypted data without ever decrypting it. This means data can remain confidential throughout AI processing, even during complex computations. Other tools, such as Data Loss Prevention (DLP), monitor and control the movement of sensitive information, helping to ensure it is not leaked, shared, or lost, as the sketches below illustrate.
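To make “computing on encrypted data” concrete, here is a minimal sketch in Python. It uses the classic Paillier scheme rather than FHE itself: Paillier supports only addition on ciphertexts, whereas FHE generalises the same principle to arbitrary computation. The parameters are demo-sized and the whole sketch is illustrative, not a secure implementation.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Demo-sized primes only; real deployments use 2048-bit+ moduli.
p, q = 2003, 2011
n = p * q
n_sq = n * n
g = n + 1                            # standard simple generator choice
lam = math.lcm(p - 1, q - 1)         # Carmichael function of n

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)    # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:           # blinding factor must be invertible
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so an untrusted party can compute a sum without ever seeing the data.
c1, c2 = encrypt(123), encrypt(456)
c_sum = (c1 * c2) % n_sq                 # no decryption involved
assert decrypt(c_sum) == 579
```

A DLP control, by contrast, sits in the data path and inspects what is about to leave the organisation. The sketch below is a hypothetical illustration of that idea: a pre-submission filter that redacts card-like numbers from a prompt before it reaches an external AI service. Production DLP systems are, of course, far more sophisticated.

```python
import re

# Hypothetical pre-submission filter: redact card-like digit sequences
# (13-16 digits, optionally separated by spaces or hyphens) from a prompt.
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(prompt: str) -> str:
    return CARD_LIKE.sub("[REDACTED]", prompt)

print(redact("Draft a refund email for card 4111 1111 1111 1111."))
# -> "Draft a refund email for card [REDACTED]."
```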

Whilst no solution can guarantee complete security, especially given AI’s constant evolution, the integration of PETs represents a positive step towards protecting sensitive data. The future of cybersecurity in the AI era lies in organisations combining robust internal security measures with these advanced technologies.

The case for security teams

The challenge when deploying generative AI is ensuring the confidentiality of sensitive data whilst leveraging AI’s capabilities. To tackle this, organisations need to take a multidisciplinary approach, employing and retaining key security personnel and using innovative PETs. This ensures businesses can enjoy the benefits of AI whilst keeping valuable data secure and private.

Organisations should view CISOs and their teams not as an expense but as an investment. The cost of a dedicated security team is minimal compared with the reputational and financial damage a cyberattack can cause. In an ever-changing technological landscape, CISOs and their supporting teams have never been more critical to business success. Now is the time for organisations to focus on safeguarding their users’ data rather than on AI growth at all costs.

About the Author

Dr Nick New, CEO and Co-Founder of Optalysys. With a PhD in Optical Pattern Recognition from Cambridge, Nick has a strong foundation in optical technology. Before Optalysys, he led Cambridge Correlators, shaping their technical development and international growth. At Optalysys, Nick is pioneering advancements in silicon photonics and FHE.

Nick can be reached at our company website https://optalysys.com/


