Meta explores neural control and AI beats bot detectors
Welcome to ZDNET’s Innovation Index, which identifies the most innovative developments in tech from the past week and ranks the top four, based on votes from our panel of editors and experts. Our mission is to help you identify the trends that will have the biggest impact on the future.
Meta leads this week with Orion, its new AR glasses. Unveiled as a prototype at Meta Connect, they impressed ZDNET editor Kerry Wan for realizing an AR experience more fully than the Vision Pro has thus far. Rather than “capturing and reimaging what’s in front of you,” as Wan puts it, Orion uses holograms to visualize incoming messages and other notifications, keeping the wearer socially aware instead of trapped in a headset. What stands out most, however, is the promise of an accompanying neural interface that interprets finger-gesture commands.
Also: In a surprise twist, Meta is suddenly crushing Apple in the innovation battle
Meanwhile, in spot #2, Swiss researchers successfully trained an AI model to complete reCAPTCHA tests — you know, those image quizzes meant to distinguish humans from bots — with 100% accuracy. Point, bots. While no one seems too concerned at the moment, the development makes reCAPTCHA look a little obsolete as a browser security measure. Verification tests will have to get harder, or discreet behavior tracking on devices will become more important in stopping malicious activity. Neither option feels great for the user experience or data privacy in the long run.
Coming in third is Meta, again — the company also upgraded its existing Ray-Bans with a feature that “remembers” things you look at and saves the information for later. The glasses aim to provide sleek, natural-feeling AI that includes an increasingly popular live translation capability, accessibility perks for those with impaired vision, and the ability to remember where you parked (so you don’t have to). The upgrades make the case for everyday AI wearables becoming more popular — though that seamlessness also means the specs are always watching and listening.
Closing out the week is OpenAI’s Sam Altman, who published a breathless essay claiming “superintelligence” is just “a few thousand days” away, loosely referencing artificial general intelligence (AGI). ZDNET Contributor Tiernan Ray was quickly on the case, citing several academic concerns to the contrary, plus a few critics who find the comments manipulative.
Also: I held the world’s thinnest foldable phone, and it made my iPhone 16 Pro Max feel outdated
But why all the fuss about Altman’s optimism, you ask? His remarks come at a critical time for the AI hype cycle; some, like ZDNET staff writer Taylor Clemons, think “the AI bubble is about to burst.” By popularizing an endlessly positive view of AI’s power to heal the world (setting aside the open questions around social and environmental impact, bias, and scalability), Altman risks pushing that skepticism to its tipping point.