Hume's new EVI 3 model lets you customize AI voices – how to try it

Hume AI is launching EVI 3, the third iteration of its Empathic Voice Interface (EVI) model, which can interact with users in a huge variety of humanlike voices.
Like ChatGPT’s voice mode, EVI 3 comes with an assortment of preprogrammed AI voices. These are listed by personality and character descriptions, including “Old Knocks Comedian,” “Seasoned Life Coach,” “Wise Wizard,” and “Dungeon Master,” as well as the company’s namesake, the 18th-century philosopher David Hume.
Crucially, the model also comes with a feature that allows users to customize their own AI voices from scratch. And rather than having to adjust a long list of specific attributes, as you might when building a Bitmoji or a video game character, you can simply describe the characteristics of your desired voice, using natural language, and the model will do the rest.
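To make the idea concrete, here's a rough sketch of what describe-a-voice customization could look like in code. Hume had not published a public EVI 3 API at the time of writing, so the endpoint URL, auth header, request fields, and response shape below are all hypothetical, for illustration only.

```python
# Hypothetical sketch only: EVI 3's public API had not shipped at the time
# of writing, so the endpoint, header, and field names are assumptions.
import os

import requests

API_KEY = os.environ["HUME_API_KEY"]  # assumes a key issued by Hume

# Describe the voice in natural language instead of tuning individual
# attributes; the model is meant to infer accent, tone, and pacing.
payload = {
    "description": (
        "A world-weary but witty working-class British accent, "
        "lyrical and full of energy"
    )
}

resp = requests.post(
    "https://api.hume.ai/v0/evi/voices",  # hypothetical endpoint
    headers={"X-Hume-Api-Key": API_KEY},  # hypothetical auth header
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created voice:", resp.json().get("voice_id"))  # hypothetical field
```

The point is the interface, not the specifics: a single free-text description stands in for the long attribute checklist you'd otherwise fill out when building a Bitmoji or a video game character.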
The launch reflects a broader effort among AI companies to build more personable and engaging models by training them to exhibit distinct “personalities.” Anthropic’s Claude was trained to be thoughtful and open-minded, for example, while xAI’s Grok is supposed to be edgier, with a sense of humor.
Hume describes itself on its website as working “to ensure that artificial intelligence is built to serve human goals and emotional well-being.” That mission statement is reminiscent of those of the biggest AI developers (OpenAI, for example, aims “to ensure that artificial general intelligence…benefits all of humanity”). But whereas those larger players are chiefly oriented toward building ever bigger and more powerful models, Hume seems primarily focused on fine-tuning the believability of its own: making them communicate verbally in a way that not only sounds but feels real, down to the little pauses between words and the occasional “umm” peppered into sentences.
Also: What is AI? Everything to know about artificial intelligence
The results are impressive. My first time demoing the model, I asked it to generate a character that spoke in a world-weary but witty working-class British accent — à la Michael Caine — and who was a staunch Flat-Earther. When the voice was ready, I asked it why it thought the government and scientists were lying about the shape of the Earth, and it immediately launched into a passionate tirade about why the real logical fallacy was believing an official narrative when all of the direct evidence from one’s senses pointed to the opposite story being true (i.e., the Earth is a flat disc). The voice was lyrical and full of energy, as if we were speaking at some Olde English pub.
Also: AI voice generators: What they can do and how they work
Past, present, and future
In a company blog post published Thursday, Hume wrote that the launch of EVI 3 marks the next step in the company’s mission to “achieve a voice AI experience that can be fully personalized” by the end of this year. “We believe this is an essential step toward voice being the primary way people want to interact with AI.”
In 1950, the mathematician Alan Turing proposed his famous test for assessing machine intelligence. The “Imitation Game,” as he called it (now known as the Turing Test), envisioned a human interrogator questioning both another human and a machine, each hidden behind a partition. If the interrogator couldn’t tell which responses came from the human and which came from the machine, the machine had passed the test and could be considered genuinely intelligent.
Seventy-five years later, we have AI tools that can not only write, but actually speak in a way that seems convincingly human.
Many of the latest voice-equipped AI models have none of the mechanical monotone or emotional vacancy of earlier automated voices, like the ones that greet you when you call your bank. They instead exhibit a broad range of tenors and personalities, the product of what has effectively become a subfield of AI research in its own right, sparked by competition among tech companies to build more personable and engaging software.
The question of how the average person will interact with AI in the future has become a growing preoccupation across Silicon Valley in recent years, as companies search for viable successors to chatbots like ChatGPT.
OpenAI recently announced a plan to buy io, a company founded by former Apple executive Jony Ive (the designer of the iPhone), with long-term plans to build hardware centered on AI. Humane pursued a similar goal with its AI Pin before that product flopped.
Hume is banking on the idea that the future of AI will belong to models that can speak with users in humanlike voices.
Comparing EVI 3 to leading AI models
When developing EVI 3, Hume compared its performance to some of the most powerful AI voice assistant models currently available, including GPT-4o and Gemini Live, across a few key benchmarks.
Also: What is Gemini? Everything you should know about Google’s new AI model
According to the company blog post, EVI 3 outperformed its competitors in “emotion/style modulation,” or adjusting its emotional tone over the course of a conversation. It also outperformed GPT-4o in “emotion understanding,” the ability to recognize and interpret the emotional tenor of users’ voices. Finally, early testing showed that EVI 3 has lower latency than both GPT-4o and Gemini Live, though it was outscored there by the chatbot from AI company Sesame.
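For context on the latency figure, the quantity usually measured is time-to-first-audio: how long after a request is sent the first audio chunk comes back. Below is a minimal, vendor-agnostic sketch with a stubbed client; FakeVoiceClient and stream_speech are placeholders, not any vendor's actual API.

```python
# Vendor-agnostic sketch of measuring time-to-first-audio. FakeVoiceClient
# and stream_speech are placeholders, not Hume's, OpenAI's, or Google's API.
import time
from typing import Iterator

class FakeVoiceClient:
    """Stand-in client that streams dummy audio after a simulated delay."""

    def stream_speech(self, text: str) -> Iterator[bytes]:
        time.sleep(0.3)        # simulate model/network delay
        yield b"\x00" * 1024   # first audio chunk
        yield b"\x00" * 1024   # subsequent chunks

def first_chunk_latency(client: FakeVoiceClient, text: str) -> float:
    """Seconds from issuing a request to receiving the first audio chunk."""
    start = time.perf_counter()
    for _chunk in client.stream_speech(text):
        return time.perf_counter() - start
    raise RuntimeError("no audio received")

print(f"time to first audio: {first_chunk_latency(FakeVoiceClient(), 'Hi'):.3f}s")
```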
How to access EVI 3
You can try EVI 3 today through a demo and Hume’s iOS app. Hume hasn’t announced pricing for the model just yet. An API is slated for release in the coming weeks.
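When the API does arrive, a chat session will presumably look something like the sketch below. Hume hasn't published the EVI 3 endpoint or message schema yet, so the URL, query parameter, and JSON message fields here are assumptions for illustration.

```python
# Hypothetical sketch of a future EVI 3 chat session. The endpoint URL,
# query parameter, and message fields are assumptions; Hume had not
# published the EVI 3 API at the time of writing.
import asyncio
import json
import os

import websockets  # pip install websockets

API_KEY = os.environ["HUME_API_KEY"]

async def chat() -> None:
    url = f"wss://api.hume.ai/v0/evi/chat?api_key={API_KEY}"  # hypothetical
    async with websockets.connect(url) as ws:
        # Send one text turn; a real client would stream microphone audio.
        await ws.send(json.dumps({"type": "user_input", "text": "Hello!"}))
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "assistant_message":  # hypothetical type
                print(msg.get("text"))
                break

asyncio.run(chat())
```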
The model currently specializes in English, but according to the company blog post, it will become proficient in other major languages, including French and Spanish, as training continues following its general release.