Ever since OpenAI released ChatGPT at the tail end of 2022, the tech industry has been chasing artificial intelligence. For the past few years, tech companies have been ramping up AI development — and you could argue that everyone has been trying to catch OpenAI. Still, it feels like no one has figured out the "killer app" for AI just yet. The year 2024 can be characterized by the tech industry putting AI absolutely everywhere it could, trying to figure out what would stick.
The new year will mark a significant shift, and the way consumers interact with AI in 2025 is poised to change. Currently, you have to go out of your way to use it, loading up the ChatGPT website or calling upon the Gemini assistant to converse with an AI helper. Next year, that likely won't be the case. AI will be baked into the operating systems and software you already use, and it will become capable enough to start doing things entirely on your behalf.
The end of this year has brought a few glimpses of what's to come in 2025. Project Astra is being tested by people outside of Google, Android XR is the first operating system built with Gemini at the core, and Apple Intelligence is now available with ChatGPT integration.
We don't have to guess what AI will bring in 2025 — tech companies and their executives have already laid out many of their plans. There are still questions to answer, though. Which companies can actually deliver on their promises, how will they change how we use mobile devices and wearables, and what privacy implications will these shifts have?
Up until now, artificial intelligence has powered many individual features and apps. Some have been wildly successful — OpenAI reported ChatGPT had 200 million weekly users as of August 2024 — and others less so. Gemini, for example, was downloaded about 780,000 times in September 2024, per Statista. The number for ChatGPT? That would be 4.2 million downloads in the same month, despite the ChatGPT app having been available on iOS and Android for more than a year.
Meanwhile, hardware devices created with AI at their center have definitively failed, and the Rabbit R1 and Humane AI Pin are two key examples from this year. Specifically, the Rabbit R1 was a tiny AI device that touted a "Large Action Model" that could take actions on your behalf. Asking the Rabbit R1 to play a song would require it to use Spotify for you, and ordering food would have Rabbit literally use a DoorDash client on a remote server for you.
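To make the concept concrete: stripped of the marketing, a "Large Action Model" amounts to translating a natural-language request into an action carried out through a service client. Here's a toy sketch in Python — the intent keywords and handlers are invented for illustration and bear no relation to Rabbit's actual implementation:

```python
# Toy sketch of an "action model" in the spirit of Rabbit's LAM.
# A real system would use an ML model for intent detection and drive
# actual Spotify/DoorDash clients on a remote server.

def classify_intent(request: str) -> str:
    """Naive keyword-based intent detection (a stand-in for a real model)."""
    lowered = request.lower()
    if "play" in lowered:
        return "music"
    if "order" in lowered:
        return "food"
    return "unknown"

def handle_request(request: str) -> str:
    """Route the request to the (hypothetical) client that can act on it."""
    intent = classify_intent(request)
    if intent == "music":
        return f"music-client: playing track for '{request}'"
    if intent == "food":
        return f"food-client: placing order for '{request}'"
    return "Sorry, I can't handle that yet."

print(handle_request("Play my road trip song"))
```

The hard part, of course, is not the routing but making the action layer reliable across thousands of apps — which is exactly where the R1 fell short.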
The reason these devices failed is that, well, people already have great phones, which are more powerful and better suited to AI tasks than specialized hardware. In 2025, those mainstream devices will take on the job the failed AI products promised to do. The Rabbit R1 and Humane AI Pin both aimed to use AI to complete tasks for you, and soon, your smartphones, tablets, and wearables will do the same.
Rabbit's failed idea regarding the LAM is what Google and others are trying to do right with agentic AI models.
"Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision," said Google CEO Sundar Pichai in a blog post announcing Gemini 2.0. "With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant."
What does that look like? It depends on the company. Google is building Project Astra, a multimodal AI helper that I tried at Google I/O 2024. It can take in your surroundings, use external sources to process queries, and respond with multimodal output. For example, you might ask a question aloud, and Project Astra could use the camera or a search engine to gather context. It can then respond with written text, a generated image, or spoken word — or a combination of the above.
Though Project Astra is only being tested through Google's "trusted tester program," the company has shown off the software on phones, glasses, and headsets. It also announced Android XR, a brand-new operating system for headsets and wearables with Gemini at the core. It's safe to say that Project Astra will arrive on hardware next year, and that could include Pixel phones and an unreleased Samsung headset.
Separately, Google has Project Mariner in the works, a research prototype that will literally browse Chrome for you. Meta AI already has multimodal support on Ray-Ban Meta glasses, and Apple has Visual Intelligence in iOS 18.2. Last but not least, OpenAI has GPT-4o, the multimodal model behind ChatGPT, an assistant much like Project Astra.
There's something all these features have in common: they will become deeply integrated with smartphone, tablet, and wearable operating systems. Once it's ready, Project Astra will all but certainly debut on Pixel hardware in 2025. Similarly, GPT-4o is already available on iOS 18.2, since Apple Intelligence features ChatGPT integration.
ChatGPT is available system-wide in iOS 18.2, since you can invoke it anywhere with Siri. That's just like how Android users can set Gemini as their default assistant. Additionally, Apple Intelligence's Writing Tools are available anywhere you can access the keyboard. Samsung will follow suit with One UI 7.
So, two things are going to happen with AI in 2025. On one hand, AI services like Project Astra, GPT-4o, and Visual Intelligence will use multimodal processing and agentic actions to control your mobile devices for you. On the other, smaller AI features will be ingrained into the operating system, so you won't have to open individual apps to access services like Gemini or ChatGPT.
Demis Hassabis, the CEO of Google DeepMind, told The Verge in an interview: "We really see 2025 as the true start of the agent-based era."
One of the key unknowns for AI in 2025 is whether progress is destined to stall. Some experts believe that AI progress was achieved so quickly in 2023 and 2024 that it is bound to plateau in 2025. Top executives have, on occasion, acknowledged a potential slowdown. However, they're united against the idea of a "wall" or that we've reached the limits of AI progress for now.
"When you start out quickly scaling up, you can throw more compute, and you can make a lot of progress, but you definitely are going to need deeper breakthroughs as we go to the next stage," Pichai said at the Dealbook Summit earlier this month. "So you can perceive it as there's a wall, or there's some small barriers."
In one of his classic cryptic posts on X (formerly Twitter), OpenAI CEO Sam Altman expressed a similar sentiment, saying, "There is no wall."
Regardless of whether there's a slowdown, companies like Apple, OpenAI, Anthropic, and Google will all be vying to build the fastest and most efficient AI models. While companies such as Google have vastly improved their models, OpenAI's head start and ChatGPT's brand recognition remain significant hurdles for competitors. Even if a rival's models are better than OpenAI's, users might not care enough to switch.
They'll also be navigating uncharted legal waters as governments and society decide what's fair game as AI training material. The New York Times is just one entity currently suing OpenAI for what it alleges is a breach of copyright law.
Many of these cases are currently ongoing, and we can expect some to come to a head in 2025. Additionally, it's only a matter of time before more governments and regulators try to institute safeguards and protections around AI development and use.
We'll end our preview of AI in 2025 with a look at what privacy efforts and safeguards will be put in place next year. Unfortunately, I think the lines between what's public and what's private may blur when it comes to AI.
Apple Intelligence is commonly held up as the industry standard for AI privacy — the company developed custom Private Cloud Compute servers that run a hardened OS, and Apple dares hackers to try to crack it. Anyone who succeeds can earn a reward of up to $1 million.
The problem is that not every Apple Intelligence request is handled at the same level of security. Some tasks run on-device using the Neural Engine in Apple silicon chips, while others are outsourced to Apple's Private Cloud Compute servers. You won't know whether a given task is running on-device or in the cloud — that's up to Apple, and you have to trust it.
Then, there's ChatGPT integration, which Apple views separately from Apple Intelligence — there are some privacy protections for users, but their requests are still shared with OpenAI, even if they aren't stored. If you link your ChatGPT account with iOS 18, something that's required to use ChatGPT Plus and Pro, you agree to OpenAI's privacy policies, not Apple's.
In 2025, I think there will be so many AI features baked into our essential devices that it'll be impossible to keep track of the privacy policies and safeguards behind each one. It'll be akin to agreeing to the "terms and conditions" before setting up a new device. We'll have to trust that AI companies are acting in our best interests — and we'll only find out whether they're living up to their promises when they inevitably fall short.