What is Microsoft's Copilot Labs, and how does it compare to Google Labs?
Generative AI is a work in progress, and many of the newer AI features companies are building still require further development before being released to the public. As a result, companies often release these features to smaller testing groups first, and Microsoft is now doing the same.
On Tuesday, Microsoft unveiled several updates to Copilot, including new features such as Copilot Voice, Think Deeper, and Copilot Vision. However, not all of these features are rolling out to the public yet. Instead, the experimental features will live in a new home: Copilot Labs.
Also: I tested the new Copilot Voice, Microsoft’s AI voice assistant. You can, too – for free
Copilot Labs works the same way as Google Labs, placing experimental features still in development in one place for people to easily access, try, and give feedback that can then be incorporated into further product development.
Upon launch, Copilot Labs will be home to two experimental features: Copilot Vision and Think Deeper. Both features will expand on what Copilot does to add new levels of assistance for users.
Copilot Vision
Copilot Vision is a new feature that allows Copilot within Microsoft Edge to see what is on your screen and offer voice assistance in real time, with the added context taking both text and visual elements into account.
Also: Every new Microsoft Copilot feature and AI upgrade coming soon to your Windows PC
I had the opportunity to demo the feature at the NYC Microsoft Copilot and Windows Event and saw that the feature has many real-life applications.
Essentially, the feature blends the capabilities of Copilot Voice with the context of information on your screen to provide in-depth assistance.
In the demo, the user asked Copilot Vision for help picking outfit inspiration from Pinterest. Copilot Vision suggested which outfit he should consider from all the options on the page and offered encouragement when the user said he didn't think he had what it took to pull the outfit off.
To put users at ease about the potential risks of having AI look at their screens, Microsoft said Copilot Vision sessions are opt-in, won't be stored or used for training, and won't work on all websites or on paywalled content. Rather than processing site content behind the scenes, Copilot Vision reads and interprets it alongside you during the session.
Think Deeper
With the Think Deeper feature, Copilot can take longer to respond and, as a result, work through more complex questions. Microsoft said Copilot can deliver step-by-step answers to difficult questions using the feature.
Also: Gemini Live is finally available for all Android phones – how to access it for free
Because the feature is still in its early stages, it can only be accessed through Copilot Labs. However, if you are interested in a tool with similar capabilities, you can try OpenAI's o1 model in ChatGPT, though it will cost you $20 per month for a ChatGPT Plus subscription.
How to access
The major caveat is that Copilot Labs is only rolling out to Copilot Pro users. The Copilot Pro subscription costs $20 per month and comes with other perks, such as priority access to the latest models even at peak times; Copilot in Word, Excel (in preview), PowerPoint, and Outlook; and more.
Google Labs, by contrast, is entirely free, so if you want to try some experimental AI features but don't want to commit to a subscription, that platform may be the better alternative.