How machines learned to chat


Chatbots have blazed an evolutionary path similar to that of self-driving cars. Borrowing the level-based benchmarking scale used for driverless vehicles, they’ve advanced from what we might call Level 0 (simple call-and-response programs designed a half-century ago) to Level 5 (sophisticated AI-driven engines that can increasingly perform human-like tasks).

That’s like going from rotary phones to the iPhone, notes Robb Wilson, co-author of “Age of Invisible Machines” and CEO and co-founder of OneReach.ai, which makes a conversational AI platform for enterprises. 

“All software will have a conversational AI in front of it, and it will simply find a bot with the skills you need when you need them,” Wilson says. “The bot will know what you want and simply do it.”

Chatbots, like self-driving vehicles, are not yet fully autonomous. But each day they edge a little closer. The following scale is by no means official, but it offers a guide to where chatbots started and where they’re likely to end up.

At Level 0 sit the earliest chatbot predecessors, which are still in use today. They generate scripted responses based on pre-programmed rules, relying on pattern-matching to mimic conversation, and they cannot learn from a conversation or adapt without being reprogrammed.

MIT computer scientist Joseph Weizenbaum created the first such chatbot in 1966. He named it ELIZA (after Eliza Doolittle, the street-peddler protagonist who becomes the well-spoken toast of London society in George Bernard Shaw’s “Pygmalion”). Weizenbaum programmed ELIZA to communicate like a Rogerian psychotherapist, responding to user prompts with questions based on keywords. If you told ELIZA you were unhappy, it would respond “Why are you unhappy?” 

Such bots are built around decision trees, have small vocabularies, and may not understand the same question posed in different ways (“Where is my package?” vs. “When is my order arriving?”). Rules-based bots cannot improve their performance over time without further coding. But because they’re relatively inexpensive to create and use, ELIZA’s descendants remain in wide use today, letting users find information more easily than using search tools or combing through FAQs.
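To make that concrete, here is a minimal sketch of a rules-based bot in Python; the keyword patterns and canned replies are invented for illustration and are not taken from ELIZA’s actual script.

```python
# A minimal sketch of a Level 0, rules-based bot in the spirit of ELIZA.
# The patterns and replies below are illustrative, not ELIZA's real script.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bwhere is my (package|order)\b", re.IGNORECASE),
     "Let me look up the status of your {0}."),
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello. How can I help you today?"),
]
FALLBACK = "I'm sorry, I don't understand. Could you rephrase that?"

def respond(message: str) -> str:
    # Return the first canned reply whose pattern matches; nothing is learned.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am unhappy"))                 # -> "Why are you unhappy?"
print(respond("Where is my package?"))         # -> package-status reply
print(respond("When is my order arriving?"))   # -> fallback: same question, different words
```

Because every behavior has to be hand-coded this way, handling a new phrasing means writing another rule, which is exactly why these bots cannot improve without reprogramming.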

Level 1 chatbots employ natural language processing (NLP), a branch of AI designed to understand human speech and respond in kind. They’re considered the precursor to today’s consumer voice assistants (e.g., Siri, Alexa, and Google Assistant). 

The first widely used NLP-based chatbot was SmarterChild, made accessible on AOL Instant Messenger, MSN Messenger, and Yahoo Messenger in the early 2000s. SmarterChild could engage in human-like conversations and retrieve information from the internet. (At the height of its popularity, more than 30 million people used SmarterChild to ask about news headlines, weather reports, and stock quotes.)

Today’s NLP-based bots, fed billions of examples of language, can generate human-like text responses on the fly, identify synonyms, and understand similar questions phrased in multiple ways. 
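As a rough sketch of that last point, the example below maps differently worded questions to the same intent. Production NLP systems rely on trained language models for this; a simple bag-of-words similarity stands in here, and the intents and example phrasings are made up.

```python
# Rough sketch: mapping different phrasings of a question to the same intent.
# Real NLP bots use trained models; bag-of-words cosine similarity stands in here.
from collections import Counter
import math

INTENT_EXAMPLES = {
    "order_status": ["where is my package", "when is my order arriving",
                     "track my delivery"],
    "store_hours": ["what time do you open", "when do you close",
                    "what are your opening hours"],
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(question: str) -> str:
    words = Counter(question.lower().rstrip("?").split())
    best_intent, best_score = "unknown", 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = cosine(words, Counter(example.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(classify("Where is my package?"))        # -> order_status
print(classify("Is my delivery on the way?"))  # -> order_status, despite new wording
```

The point is the contrast with the pattern-matching bot above: wording that never appears verbatim in the examples can still land on the right intent.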

Gartner projects that by 2027, 1 in 4 organizations will rely on chatbots as their primary customer support channel.

The emergence of Siri in 2010 ushered in a new era of conversational assistants. Built into phones and smart speakers, these bots quickly evolved into intelligent assistants that can schedule meetings or play games.

Still, this breed of bot is considered “weak” or “narrow” AI: these assistants are limited by the length and complexity of verbal interactions, struggle to discern intent, can’t learn from conversations, and can only perform simple tasks.

“Their ability to chat is getting better, but speech recognition can still be problematic because of the various incarnations of language, colloquialisms, and geographical differences in pronunciation,” notes Robby Garner, CEO of the Institute of Mimetic Sciences, and an award-winning creator of NLP conversational systems. “We’re still a long way from artificial general intelligence.” 

Even so, Gartner has predicted that conversational AI bots will save companies $80 billion annually in customer support costs by 2026.

These bots, built on large language models (LLMs) and showcased by several new generative AI platforms (ChatGPT, Bing Chat, Google Bard), can perform a remarkable range of human-like tasks. They can create (or generate) poetry, music, and art. They can write software code or solve complex mathematical equations.

The downsides of LLMs are also well documented. They can suffer from “hallucinations,” fabricating “facts” and producing wild inaccuracies. And because these bots are trained on internet data, they’re prone to the same biases, inaccuracies, and falsehoods that exist online.

Despite these concerns, 72% of the Fortune 500 plan to adopt generative AI to improve their productivity, according to Harris Poll.

Small language models (SLMs) require far less training data and far less complexity than their larger cousins. That means they will use less energy and be less prone to hallucinations. They’ll be more limited but more targeted in what they can do: they may be trained on company or industry data, for instance, and deployed to perform a single task, such as identifying images or generating personalized marketing content.
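As a small illustration of the single-task idea, the snippet below loads a compact, publicly available model that does exactly one thing: classify the sentiment of a short text. It assumes the Hugging Face transformers library is installed; sentiment is chosen only because a small public checkpoint exists for it, and a company’s own SLM would follow the same pattern with its own data and task.

```python
# Sketch of a narrow, single-task deployment: a small distilled model that
# only classifies sentiment. Assumes the Hugging Face "transformers" library
# and network access to download the public checkpoint on first run.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # compact, task-specific model
)

print(classifier("The new support bot resolved my issue in two minutes."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```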

Only a handful of SLMs have been deployed so far, mostly for writing code and retrieving data. A group of academic computer scientists has organized the BabyLM Challenge to help create more functional SLMs.

Such SLMs would be a key way “to improve performance and accuracy, with fewer headaches around the resources needed to run them,” says Juhasz.

The ultimate goal for chatbots, as with self-driving cars, is for them to operate autonomously—without anyone behind the wheel. But, as with cars today, there will be a human in the loop for the foreseeable future.  

There’s a lot of economic upside riding on it. The World Economic Forum predicts that more than 40% of common business tasks will be automated by 2027. Chatbots will transform from curiosities to coworkers, understanding our jobs and delivering the right information or performing the right task at the right time. 

These intelligent digital workers (IDWs) will combine conversational bots’ ease of use with the skills of specialized machine learning models, predicts Wilson.

For example, you’ll tell your IDW bot: “Arrange my trip to Chicago.” It will book your flight (knowing you prefer aisle to window), schedule your Uber (or Lyft), and contact a fellow accommodations bot to book your room (with loyalty points) at your preferred hotel.

“We’re at that post-BlackBerry, pre-iPhone moment where all the technology is there, but we don’t yet have an example of a great conversational AI,” says Wilson. “No one has put it together into a nice beautiful package like the iPhone. But that day is coming.”

This article was originally published on The Works.


