ChatGPT For Enterprises Is Here – But CEOs First Want Data Protections


Amidst the rise of generative AI, business leaders must navigate the delicate balance of adoption, security, and trust.

By Apu Pavithran, CEO and Founder, Hexnode

At the end of August, OpenAI released ChatGPT for Enterprises. The much-hyped version focuses on “enterprise-grade security,” advanced data analysis capabilities, and customization options. But, it’s unlikely to change how businesses view the tool. Despite a solid majority (60%) of US executives expecting generative AI (GenAI) to have an enormous long-term impact, they’re still a year or two away from implementing their first solution.

Regardless of self-proclaimed “enterprise” solutions, business leaders first want to understand how the technology works, evaluate their internal capabilities and data security, and invest accordingly.

Let’s explore how businesses can navigate the delicate balance of adoption, security, and trust in the era of GenAI.

Why Business Leaders Aren’t Yet Sure of GenAI

So, how does this technology work, and why does it warrant concern for businesses? GenAI models undergo extensive training on vast image and text datasets from various origins. Users start the process with an initial prompt to the platform which serves as a guide for generating content. Given that these models are still in their learning phases, however, the way data is utilized on the backend remains uncertain.

It’s important to recognize that any shared information effectively becomes part of the platform’s training data, influencing future outputs. While ChatGPT assures consumers of not training their software with personal or corporate data, OpenAI faces multiple high-profile data leaks. For example, Samsung faced a proprietary data leak with ChatGPT. Meanwhile, others criticized the way it learns from content in a manner that potentially breaches copyright.

Enterprises are right to approach this technology with a healthy dose of caution. Consider that higher adoption of GenAI and subsequent integration with third-party platforms demand additional assessment from cybersecurity teams. As a result, their focus is no longer confined to internal measures but to also scrutinizing the security of third-party software and its affiliates. Additionally, another emerging threat involves injection attacks targeting customer support chatbots, which could potentially grant unauthorized access to enterprise systems. If unaddressed, the potential threat vector with this technology is considerable.

Interestingly, 45% of organizations believe that if they fail to implement the right risk management tools, GenAI could potentially erode trust within their organization. Therefore, before onboarding this technology, business leaders are doing their homework to ensure a safe and responsible adoption. This meticulous approach to adoption seeks to strike a balance between the remarkable potential of GenAI and the imperative of safeguarding trust.

The Duality of GenAI

There’s great promise and profound risk for enterprises venturing into the realm of GenAI. To navigate the associated risks and challenges, organizations must forge forward-thinking policies that protect employees and data.

For instance, companies will likely need to address GenAI-specific risks by revising policies regarding business email communication, data sharing with third parties, or the utilization of established third-party code projects.

Chief Information Security Officers (CISOs) should also consider running awareness campaigns to educate users about the inherent risks associated with this technology. Establishing a comprehensive rulebook that delineates who can utilize what and what should remain confidential will help.

While international policymakers are actively formulating strategies to foster a responsible AI ecosystem, organizations must align their efforts with government-endorsed approaches to fend off potential threats. It’s important to remember that GenAI risks extend beyond cybersecurity – they encompass privacy and data protection risks, regulatory compliance, legal exposure, and AI ethics. This means that CISOs must stay vigilant, not only regarding current risks but also those on the horizon.

Redefining The Security Roadmap

A big concern for enterprises is that employees dabble in GenAI away from the watchful eye of IT. In effect, it’s now a major vector of Shadow IT. Research found that 7 out of 10 folks who’ve jumped on the ChatGPT bandwagon aren’t telling their supervisors. While more than two-thirds of employees continue to engage in non-enterprise applications, CISOs need to question why such applications are gaining prominence and attain a clear picture of these users.

Alarmingly, studies show that 4% of employees have placed sensitive corporate data into language models. Cybersecurity leaders can work on this by deploying protection tools to ensure safer transit of data, ensuring that sensitive information remains shielded from unauthorized access or exposure.

Finally, there are tools like unified endpoint management (UEM) that can restrict the transfer of sensitive data across unapproved devices or applications by defining accessibility. Admins can authorize device access to such applications based on the user’s role. Endpoint management solutions, when integrated with identity and access management (IAM), will flag admins if confidential data is shared. In the unfortunate event of a device with ChatGPT access being misplaced or stolen, UEM solutions can remotely erase data from the device, effectively shielding sensitive information from falling into the wrong hands.

GenAI is understandably gaining ground in the modern workplace. Corporations are taking these tools for a spin, developers are cozying up to them, and employees are experimenting with them. At the end of the day, CISOs must create a security-conscious environment without stifling the productivity of their workforce. It’s therefore vital to find the perfect harmony between innovation and safety. End of article.

About the Author

Apu Pavithran is the founder and CEO of Hexnode. Recognized in the IT management community as a consultant, speaker, and thought leader, Apu has been a strong advocate for IT governance and Information security management. He’s passionate about entrepreneurship and spends significant time working with startups and empowering young entrepreneurs. You can find more about Apu on his LinkedIn and his company’s website, Hexnode.



Source link