6 AI features Google thinks will sell you on its latest Pixel phones (including the Fold)
At last year’s Made by Google event, Google sprinkled artificial intelligence (AI) throughout its product offerings. So naturally, at this year’s event, the company upped the ante on the Pixel 9 phones, unveiling AI features that can help with calls, photo and video editing, and more.
Also: Everything announced at Made by Google 2024
The features use generative AI to address everyday pain points you may encounter when using your phone, and as a result, they have the potential to elevate your smartphone experience.
Here are the six new features, ranked from most useful to least useful.
1. Pixel Screenshots
You may screenshot something with the intention of remembering it later, only for it to get lost among the thousands of photos in your library. Going forward, you'll be able to ask for information about a screenshot and have Pixel Screenshots pull it up for you.
Pixel Screenshots uses AI to process screenshots to help you find them later using simple text prompts. If you screenshot something from a webpage, Pixel Screenshots can also recall the site, ensuring you never lose track of an item you are interested in buying.
Also: These three AI features sold me on the Google Pixel 9 Pro – and they’re very clever
The new feature is similar to Microsoft's Recall, which Microsoft halted due to privacy and security issues. Pixel Screenshots addresses Recall's faults by only ingesting screenshots that you take (as opposed to automatically capturing screenshots in the background) and by letting you toggle it on or off.
This feature is an on-device Pixel exclusive, so if you want to try it, it may be time to switch to a Pixel device.
2. Call Summary
Every day, people collaborate with colleagues, make appointments, settle disputes, and more over the phone. If that sounds like you, you might benefit from having a detailed overview of your calls. That’s where the new Call Summary feature comes in.
The feature can be turned on within the call screen UI and, once activated, will provide you with a detailed AI-powered summary of the call’s key points.
Also: Google Meet will take notes for you now, thanks to AI
To address the privacy concerns that come with an AI feature listening in on calls, Call Summary will announce itself on the call so that all parties know the conversation is being processed. The summarization itself runs on-device.
This feature will be available in conjunction with Call Screen, the AI-powered feature Google unveiled in 2023 that filters out spam and robocalls by declining suspected spam calls for you.
3. Gemini by default
Gemini is replacing Google Assistant as the default voice assistant on Pixel phones, letting you access a smarter, more helpful assistant just by pressing and holding the power button.
Gemini is aware of your Google apps, meaning it can check your calendar to see if you are free, find party details from your email, and more.
Gemini also brings a more natural conversational experience. You can talk to the Gemini voice assistant as you would to a person, interrupting it or pausing mid-sentence, without compromising its understanding.
Starting today, you can also overlay Gemini on top of your apps to get answers about what is on your screen. For example, Google says users can ask questions about a YouTube video they are watching, or generate an image from the overlay and drop it into Gmail.
Also: Gemini to replace Google Assistant as Android’s default – but you still have options
Gemini Live, first announced at Google I/O in May 2024, is now available to Gemini Advanced subscribers on Android phones. Those who buy the Pixel 9 Pro will get one year of Gemini Advanced for free, including access to Gemini Live and deeper AI integrations across the Google Workspace suite.
For now, the Gemini Live experience is limited to advanced voice conversations, letting you hold long exchanges, ask for advice, or get answers about complex topics. The multimodal capabilities demoed at Google I/O, which let you use your camera with Live, are still not available.
4. Auto Frame in Magic Editor
Sometimes, you have the perfect photo opportunity, but factors like space restrictions, limited mobility, and shutter speed make it difficult to frame the shot well and can ruin the capture. To address this, Google is introducing Auto Frame in Magic Editor.
Also: I’m a diehard Pixel user, but I’m considering a change for two reasons
The feature can automatically reframe your photo, suggest the best crop, and even expand your photo in post-production. Auto Frame will work alongside the suite of new photo editing features to ensure ideal results.
5. Add Me
Ever find yourself in a situation where no one is available to take a photo of your group, so you take one for the team by getting behind the camera? Thanks to this feature, you no longer have to forfeit your spot in the picture — theoretically.
Also: I tested the Google Pixel 9 Pro’s ‘Add Me’ feature and found it crazy clever
Add Me merges a photo of the group with a photo of the photographer alone to create a new image of everyone together. You can then combine this with the Best Take feature, released last year, to select your favorite expressions for each person from a series of similar photos.
This feature is toward the bottom of the list — not because it isn’t a neat AI application, but because its everyday applications remain a bit fuzzy.
6. Reimagine
Like the feature above, Reimagine is meant to breathe more creativity into photos by using AI to add new elements.
Also: The best AI image generators of 2024: Tested and reviewed
If you take a photo and want to edit something with AI, tap the area you want to change and type what you want to see. Example prompts include "add sunset" or "make the grass greener."