6 best practices to develop a corporate use policy for generative AI
While there’s an open letter calling for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for six months, the reality is that the genie is already out of the bottle. Here are ways to get a better grasp of what these systems are capable of, and to use that understanding to construct an effective corporate use policy for your organization.
Generative AI is the headline-grabbing form of AI that uses unsupervised and semi-supervised algorithms to create new content from existing material such as text, audio, video, images, and code. Use cases for this branch of AI are exploding: organizations are using it to serve customers better, get more value from existing enterprise data, and improve operational efficiency, among many other applications.
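To appreciate how low the barrier to adoption is, consider how little code an employee needs to start generating content with a commercial model. The following is a minimal sketch using the OpenAI Python SDK; the model name and prompt are illustrative assumptions, and the client expects an OPENAI_API_KEY environment variable.

```python
# Minimal sketch of a generative AI call via the OpenAI Python SDK.
# The model name and prompt are illustrative assumptions; the client
# reads credentials from the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your vendor provides
    messages=[
        {"role": "user",
         "content": "Draft a two-sentence product description for our new widget."}
    ],
)

# The generated text comes back as ordinary string content.
print(response.choices[0].message.content)
```

A few lines like these are all it takes for generated text to end up in customer-facing material, which is exactly why a use policy matters.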
But just like other emerging technologies, it doesn’t come without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe the technology has the potential to be a security risk, 73% are concerned it could be biased, and 59% believe its outputs are inaccurate. In addition, legal concerns need to be considered, especially around whether externally used, generative AI-created content is factual and accurate, whether it infringes copyright, or whether it comes from a competitor.
As an example, and a reality check, ChatGPT itself tells us that “my responses are generated based on patterns and associations learned from a large dataset of text, and I do not have the ability to verify the accuracy or credibility of every source referenced in the dataset.”
The legal risks alone are extensive. According to the non-profit Tech Policy Press, they include risks revolving around contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, IP, and validation.
In fact, it’s likely your organization already has a large number of employees experimenting with generative AI, and as this activity moves from experimentation to real-world deployment, it’s important to be proactive before unintended consequences happen. One lightweight control worth putting in place early is sketched below.
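As a hypothetical illustration of that kind of proactive control, the sketch below routes every internal generative AI call through a single wrapper that logs usage and holds externally bound content for human sign-off. Every name here (reviewed_generate, the audit log path) is an assumption for illustration, not an established pattern from the article.

```python
# Hypothetical guardrail sketch: route all generative AI calls through one
# wrapper that logs usage and withholds external-facing content until a
# human reviewer approves it. Function names and the log path are assumptions.
import json
import time


def audit_log(entry: dict, path: str = "genai_audit.jsonl") -> None:
    """Append a usage record so the organization can see who generated what."""
    entry["timestamp"] = time.time()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def reviewed_generate(generate_fn, prompt: str, user: str, external: bool) -> dict:
    """Call the model, log the interaction, and gate external use on human review."""
    draft = generate_fn(prompt)
    audit_log({"user": user, "prompt": prompt, "external": external})
    if external:
        # Policy: AI-generated content bound for customers or the public
        # must be verified by a person before release.
        return {"status": "pending_review", "draft": draft}
    return {"status": "approved", "draft": draft}
```

A wrapper like this doesn’t eliminate the risks described above, but it gives security and legal teams an audit trail and a review checkpoint while the formal policy takes shape.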