Google’s Pixel 9 lineup is powered by cutting-edge hardware like the Tensor G4 processor and tons of RAM that should help keep your phone feeling fast and fresh for years to come. But all that hardware is also designed to power brand new AI experiences.
“Android is reimagining your phone with Gemini,” wrote Sameer Samat, Google’s president of the Android Ecosystem, in a blog post published on Tuesday. “With Gemini deeply integrated into Android, we’re rebuilding the operating system with AI at the core. And redefining what phones can do.”
Here are the big new AI features coming with the new Pixel devices.
Gemini, Google’s AI-powered chatbot, will be the default assistant on the new Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold phones. To access it, simply hold down your phone’s power button and start talking or typing in your question.
A big new change is that you can now bring up Gemini on top of any app you're using and ask questions about what's on your screen, such as finding specific information about a YouTube video you're watching. You'll also be able to generate images directly from this overlay and drag and drop them into the underlying app, as well as upload a photo into the overlay and ask Gemini questions about it.
If you buy the pricier Pixel 9 Pro (starting at $999), Google is bundling in one free year of the Google One AI Premium Plan, which typically runs $19.99 a month and includes 2TB of cloud storage as well as Gemini Advanced, which lets you use Gemini directly in Google products like Gmail and Docs to help you summarize text and conversations.

Crucially, Gemini Advanced also includes access to Gemini Live, which Google describes as a new "conversational experience" meant to make speaking with Gemini more intuitive (don't worry, I'm not the only one having a hard time keeping track of all the things Google brands "Gemini"). You can use Gemini Live to have natural conversations with Gemini about anything on your mind, and Google says it can help with complex questions and job interviews. You'll be able to choose between a variety of voices that sounded stunningly lifelike in demos Google showed Engadget earlier this month.
OpenAI recently released a similar feature, Advanced Voice Mode, to paying ChatGPT customers: a voice assistant that can talk, sing, laugh and allegedly understand emotion. When asked whether making Gemini Live sound as human-like as possible was one of Google's goals, Sissie Hsiao, the company's vice president and general manager of Gemini Experiences, told Engadget that Google was "not here to flex the technology. We're here to build a super helpful assistant."
Google is using AI to make both taking and editing pictures dramatically better on the Pixel 9 phones, something the company has focused on for years now. A new feature called Add Me, which will be released in preview with the new devices, lets you take a group photo, then take a separate picture of the photographer and seamlessly add them to the main shot — handy if you don't have anyone around to take a picture of your entire group.
Meanwhile, Magic Editor, the built-in, AI-powered editing tool on Android, can now suggest the best crops and even expand existing images by filling in details with generative AI to get more of the scene. Finally, a new “reimagine” feature will let you add elements like fall leaves or make grass greener — punching up your images, yes, but blurring the line between which of your memories are real and which are not.
You can already search anything that you see on your phone by simply circling it, but now, AI will intelligently clip whatever you’ve circled and let you instantly share it in a text message or an email. Handy.
If you can't figure out how to sort through the tons of pictures of receipts, tickets and screenshots from social media littering your phone's photo gallery, use AI to help. A brand new app called Pixel Screenshots, available on the new Pixel devices at launch, will go through your photo library (once you give it permission), pick out screenshots and identify what's in each one. You can also snap pictures of real-world signs (a poster for a music festival you want to attend, say) and ask the app relevant questions, like when tickets for the festival go on sale.
A new feature called Call Notes will automatically save a private summary of each phone call, so you can refer back to a transcript later to quickly look up important information like an appointment time, address or phone number. Google notes that the feature runs fully on-device, which means nothing is sent to Google's servers for processing. And everyone on the call will be notified if you've activated Call Notes.
We've been able to use AI to generate images for a long time now, but Google is finally building the feature right into Android with Pixel Studio, a dedicated new image-generation app for Pixel 9 devices. The app uses both an on-device model powered by the new Tensor G4 processor and Google's Imagen 3 model in the cloud. You can share any images you create in the app directly through messaging or email.
A similar Apple feature, Image Playground, is coming to newer iPhones with iOS 18 in September.
Google will use AI to create custom weather reports for your specific location right at the top of a new Weather app so you "don't have to scroll through a bunch of numbers to get a sense of the day's weather," according to the company's blog post.