I tried Google's XR glasses and they already beat my Meta Ray-Bans in 3 ways

Google unveiled a slew of new AI tools and features at I/O, dropping the term Gemini 95 times and AI 92 times. However, the best announcement of the entire show wasn’t an AI feature; rather, the title went to one of the two hardware products announced — the Android XR glasses.
Also: I’m an AI expert, and these 8 announcements at Google I/O impressed me the most
For the first time, Google gave the public a look at its long-awaited smart glasses, which pack Gemini’s assistance, in-lens displays, speakers, cameras, and mics into the form factor of traditional eyeglasses. I had the opportunity to wear them for five minutes, during which I ran through a demo of using them to get visual Gemini assistance, take photos, and get navigation directions.
As a Meta Ray-Bans user, I couldn’t help but notice the similarities and differences between the two smart glasses — and the features I now wish my Meta pair had.
1. In-lens displays
The biggest difference between the Android XR glasses and the Meta Ray-Bans is the inclusion of an in-lens display. The Android XR glasses have a display that is useful in any instance involving text, such as when you get a notification, translate audio in real time, chat with Gemini, or navigate the streets of your city.
The Meta Ray-Bans have no display at all, and although other smart glasses, such as Halliday's, do, the interaction involves glancing up at an optical module mounted on the frame, which makes for a less natural experience. That module is also limited in what it can show, since it isn't a vivid, full-color display. The ability to see elements beyond text adds another dimension to the experience.
Also: I’ve tested the Meta Ray-Bans for months, and these 5 features still amaze me
For example, my favorite part of the demo was using the smart glasses to take a photo. After clicking the button on top of the lens, I was able to take a photo in the same way I do with the Meta Ray-Bans. However, the difference was that after taking the picture, I could see the results on the lens in color and in pretty sharp detail.
Although being able to see the image wasn't particularly helpful, it gave me a glimpse of what it might feel like to have a layered, always-on display integrated into everyday eyewear, and of the possibilities that would open up.
2. Gemini assistance
Google has continually improved its Gemini Assistant by integrating its most advanced Gemini models, making it an increasingly capable and reliable AI assistant. While the “best” AI assistant ultimately depends on personal preference and use case, in my experience testing and comparing different models over the years, I’ve found Gemini to outperform Meta AI, the assistant currently used in Meta’s Ray-Ban smart glasses.
Also: Your Google Gemini assistant is getting 8 useful features – here’s the update log
My preference is based on several factors, including Gemini’s more advanced tools, such as Deep Research, advanced code generation, and more nuanced conversational abilities, which are areas where Gemini currently holds an advantage over Meta AI. Another notable difference is in content safety.
For example, Gemini has stricter guardrails around generating sensitive content, such as images of political figures, whereas Meta AI is looser. It’s still unclear how many of Gemini’s features will carry over to the smart glasses, but if the full experience is implemented, I think it would give the Android smart glasses a competitive edge.
3. Lightweight form factor
Although they don't look very different from the Meta Ray-Bans in the Wayfarer style, Google's take on XR glasses felt noticeably lighter than Meta's. As soon as I put them on, I was surprised by how much lighter they were than I expected. A true comfort test would require wearing them for an entire day, and the glasses could well gain weight by the time they reach production, but for now the lightweight build seems like a major win.
Also: The best smart glasses unveiled at I/O 2025 weren’t made by Google
If the glasses can maintain their current lightweight design, it will be much easier to take full advantage of the AI assistance they offer in daily life. You wouldn’t be sacrificing comfort, especially around the bridge of the nose and behind the ears, to wear them for extended periods. Ultimately, these glasses act as a bridge between AI assistance and the physical world. That connection only works if you’re willing and able to wear them consistently.