ChatGPT, the rise of generative AI
Over the last few months, the business and technology worlds alike have been abuzz about ChatGPT, and more than a few leaders are wondering what this AI advancement means for their organizations. Let's explore ChatGPT, generative AI in general, how leaders might expect the generative AI story to change over the coming months, and how businesses can stay prepared for what's new now and what may come next.
What is ChatGPT?
- ChatGPT is a product of OpenAI. It’s only one example of generative AI.
- GPT stands for generative pre-trained transformer. A transformer is a type of AI deep learning model that was first introduced by Google in a research paper in 2017. Five years later, transformer architecture has evolved to create powerful models such as ChatGPT.
- ChatGPT has significantly increased the number of tokens it can accept (4,096 tokens vs. 2,049 in GPT-3), which effectively allows the model to "remember" more about the current conversation and inform subsequent responses with context from previous question-answer pairs (see the sketch after this list). Every time the maximum number of tokens is reached, the conversation resets without that context, reminiscent of a conversation with Dory from Pixar's Finding Nemo.
- ChatGPT was trained on a much larger dataset than its predecessors, with far more parameters. ChatGPT was trained with 175 billion parameters; for comparison, GPT-2 had 1.5 billion (2019), Google's LaMDA 137 billion (2021), and Google's BERT 0.3 billion (2018). These attributes make it possible for users to inquire about a broad range of information.
- ChatGPT’s conversational interface is a distinguishing way of accessing its knowledge. This interface, paired with the larger token limit and an expansive knowledge base built on many more parameters, helps ChatGPT seem quite human-like.
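To make the token limit above concrete, here is a minimal sketch of how a chat client might keep prior question-answer pairs in the prompt until a token budget is exhausted. The helper names (count_tokens, build_prompt) and the word-based token estimate are illustrative assumptions, not how ChatGPT is actually implemented; a real system would use a proper tokenizer and also reserve room for the reply.

```python
# Illustrative sketch only: keep as much recent conversation history as fits
# under a fixed token budget, dropping the oldest turns first.

def count_tokens(text: str) -> int:
    # Hypothetical stand-in for a real tokenizer; a rough word-based estimate.
    return len(text.split())

MAX_TOKENS = 4096  # the ChatGPT-era context window mentioned above

def build_prompt(history: list, new_message: str) -> list:
    """Return the prompt: as many recent turns as fit, plus the new message."""
    kept = []
    budget = MAX_TOKENS - count_tokens(new_message)
    for turn in reversed(history):   # walk from newest to oldest
        cost = count_tokens(turn)
        if cost > budget:
            break                    # older context is effectively "forgotten"
        kept.insert(0, turn)
        budget -= cost
    return kept + [new_message]
```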
ChatGPT is certainly impressive, and its conversational interface has made it more accessible and understandable than its predecessors. Meanwhile, however, many other labs have been developing their own generative AI models. Examples are emerging from Microsoft, Amazon Web Services, Google, IBM, and more, plus from partnerships among players. The frequency of new generative AI releases, the scope of their training data, the number of parameters they are trained on, and the tokens they can take in will continue to increase. There will be more developments in the generative AI space for the foreseeable future, and they’ll become available rapidly. It was roughly 15 months from GPT-2 (February 2019) to GPT-3 (May 2020), about 2.5 years to ChatGPT (November 2022), and only 4 months from there to GPT-4 (March 2023).
How ChatGPT and generative AI fit with conversational AI
Text-based generative AI can be considered a key component in a broader context of conversational AI. Business applications for conversational AI have, for several years already, included help desks and service desks. A natural language processing (NLP) interpretation layer underpins all conversational AI, as you must first understand a request before responding. Enterprise applications of conversational AI today leverage responses from either a set of curated answers or results generated from searching a named information resource. The AI might use a repository of frequently asked questions (producing a pre-defined response) or an enterprise system of record (producing a cited response) as its knowledge base.
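As a rough illustration of the curated-answer pattern described above, the sketch below matches a request against a small repository of frequently asked questions and returns a pre-defined response along with its source; the repository contents and names (faq_repository, answer_request) are hypothetical.

```python
# Illustrative sketch of a curated-answer conversational layer: match the
# request against a FAQ repository and return the answer with its source.
from typing import Optional, Tuple

faq_repository = {
    "reset my password": ("Use the self-service reset portal on the intranet.",
                          "IT knowledge base, article 42"),
    "expense policy": ("Expenses over $50 require manager approval.",
                       "Finance policy handbook, section 7"),
}

def answer_request(user_text: str) -> Optional[Tuple[str, str]]:
    """Return (answer, source) when a curated entry matches the request."""
    text = user_text.lower()
    for topic, (answer, source) in faq_repository.items():
        if topic in text:
            return answer, source
    return None  # no curated match: escalate or search a system of record

print(answer_request("How do I reset my password?"))
```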
When generative AI is introduced into conversational applications, it is impossible today to provide answers that include the source of the information. The nature of a large language model’s generative capability is to create a novel response by compiling and restructuring information from a body of information. This becomes problematic for enterprise applications, where it is often imperative to cite the information source to validate a response and allow further clarification.
Another key challenge of generative AI today is its obliviousness to the truth. It is not a “liar,” because that would imply an awareness of fact vs. fiction. It is simply unaware of truthfulness, as it is optimized to predict the most likely response based on the context of the current conversation, the prompt provided, and the data set it was trained on. In its current form, generative AI will obligingly produce information as prompted, which means your question may lead the model to produce false information. Any rules or restrictions on responses today are built in as an additive “safety” layer outside of the model itself.
For now, ChatGPT is finding most of its applications in creative settings. But one day soon, generative AI like ChatGPT will be able to draw responses from a curated knowledge base (like an enterprise system of record); once that happens and these current challenges are addressed, more organizations will be able to apply generative AI to a variety of strategic and competitive initiatives.
Leaders can start preparing today for this eventuality, which could come in a matter of months, if recent developments are any indication of how fast this story will continue to move: in November of 2022, ChatGPT was only accessible via a web-based interface. By March of 2023, ChatGPT’s maker OpenAI announced the availability of GPT-3.5 Turbo through an application programming interface (API) via which developers can integrate ChatGPT into their applications. The API’s availability doesn’t resolve ChatGPT’s inability to cite sources in its responses, but it indicates how rapidly generative AI capabilities are advancing. Enterprise leaders should be thinking about how advances in generative AI today could relate to their business models and processes tomorrow.
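For illustration, the snippet below shows roughly what such an integration looked like with the OpenAI Python client available at the time (openai.ChatCompletion.create with the gpt-3.5-turbo model); the client library has since evolved, so treat this as a sketch rather than a current reference, and note that the API key and prompt contents are placeholders.

```python
# Sketch of calling the ChatGPT API (gpt-3.5-turbo) from an application,
# using the pre-1.0 OpenAI Python client interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a help desk assistant."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)

# The generated reply; note it carries no citation of its information source.
print(response["choices"][0]["message"]["content"])
```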
What it takes to be ready
Organizations that have already gained some experience with generative AI are in a better position than their peers to apply it one day soon. The next impressive development in generative AI is likely less than six months away. How can organizations find or maintain an edge? The principles of preparing for the great “what’s next?” remain the same, whether the technology in question is generative AI or something else.
It’s hard to achieve a deep, experiential understanding of new technology without experimentation. Leaders should define a process for evaluating these AI technology developments early, as well as an infrastructure and environment to support experimentation.
They should respond to innovations in an agile way: starting small and learning by doing. They should keep track of innovation in the marketplace and look for opportunities to refresh their business and competitive strategies as AI advances become available to them.
They should seed a small cross-functional team to monitor these advancements and experiment accordingly. That team should be educated about the algorithms, data sources, and training methods used for a given AI application, as these are critical considerations for enterprise adoption. If they haven’t already, they should develop a modular and adaptable AI governance framework to evaluate and sustain solutions, specifically including generative capabilities, such as the high-level outline below:
[Figure: high-level AI governance framework outline, Protiviti]
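As one hypothetical illustration of what a modular, adaptable evaluation checklist might look like in practice, the sketch below captures criteria discussed in this article (data sources, training methods, source citation, safety layers); the specific fields and names are assumptions for illustration, not Protiviti's framework.

```python
# Hypothetical governance checklist for evaluating a generative AI solution.
from dataclasses import dataclass, field

@dataclass
class GenerativeAIReview:
    use_case: str
    data_sources_documented: bool = False   # is the training/source data known?
    training_method_reviewed: bool = False  # how was the model trained or tuned?
    can_cite_sources: bool = False          # can responses be traced to a system of record?
    safety_layer_in_place: bool = False     # additive guardrails outside the model
    open_risks: list = field(default_factory=list)

    def ready_for_adoption(self) -> bool:
        """All criteria satisfied and no open risks remaining."""
        return all([
            self.data_sources_documented,
            self.training_method_reviewed,
            self.can_cite_sources,
            self.safety_layer_in_place,
        ]) and not self.open_risks
```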
Leaders need not wonder what ChatGPT, other generative AI, and other revolutionary technologies might mean for their business and competitive strategy. By remaining vigilant to new possibilities, leaders can create the environment and infrastructure that support the identification of new technology opportunities and prepare to embrace the technology as it matures for enterprise adoption.
Learn more about Protiviti’s Artificial Intelligence Services.
Connect with the Author
Christine Livingston
Managing Director, Technology Consulting