You can now talk with ChatGPT's Advanced Voice Mode on the web


If you rely on ChatGPT for your everyday workflow, you are likely used to keeping a tab with the chatbot open on your desktop at all times. Now, right from that browser tab, you'll be able to access OpenAI's Advanced Voice Mode — and you'll want to.

Also: Google’s Gemini Advanced gets a very useful ChatGPT feature – but how does it compare?

On Tuesday, OpenAI announced — via an X post — that Advanced Voice Mode is beginning to roll out on the web, extending the voice assistant’s availability beyond the desktop and mobile apps. This rollout makes Advanced Voice Mode the most accessible it has ever been, as it removes the barrier of downloading an app to get started. 

Advanced Voice Mode is OpenAI's AI-powered voice assistant, which can be interrupted mid-response, hold multi-turn conversations, and respond to user emotions, making for a much more intuitive and helpful conversation experience. It addresses a common shortcoming of voice assistants: struggling to understand what users actually mean.

Although it sounds too good to be true, in my testing, Advanced Voice Mode has been adept at carrying lengthy conversations and understanding what I mean even when my thoughts are not linear. Lighthearted use cases include chatting with ChatGPT about your day, playing a trivia game, or just talking about yourself; it also handles all the practical tasks you'd expect of a regular voice assistant.

Also: Microsoft offers $4 million in AI and cloud bug bounties – how to qualify

Unfortunately, despite the wider availability, users still need a ChatGPT Plus subscription, which costs $20 per month. If you are a ChatGPT superuser, the upgrade may be worth it, as it comes with other perks: access to all of the latest OpenAI models, including o1-preview, five times more messages with GPT-4o, image generation, and more.

Users still won't be able to access Voice Mode's multimodal capabilities, such as assisting with content on a user's screen or using the phone camera as context for a response; OpenAI has yet to share a release date for those features.
