Apple Intelligence Update Adds On-Device AI Translation, Visual Recognition, and Developer Access

During Apple’s annual WWDC 2025 conference live stream Monday, the company outlined the next phase of Apple Intelligence, adding real-time translation, expanded visual recognition tools, and on-device AI for developers. Apple says the tools work across its entire ecosystem, including iPhone, iPad, Mac, Apple Watch, and Vision Pro, and will launch with the fall software updates.

This marks a shift from talking up consumer-facing AI features to focusing on deeper, system-level integration. Apple Intelligence will power Live Translation in calls and messages, identify (and interact with) content on screen, and push further into the Image Playground and Genmoji apps. Apple will also give developers access to its on-device foundation models, making it easier to build private, AI-enhanced experiences across apps without sending data to the cloud. This echoes Apple’s long-standing stance on privacy and on-device hardware, which contrasts with more cloud-based AI providers like OpenAI and Google.

“Last year, we took the first steps on a journey to bring users intelligence that’s helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we’re integrating features in even more places across each of our operating systems,” said Craig Federighi, Apple’s senior vice president of Software Engineering, during the keynote. “We’re also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can’t wait to see what developers create.”
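Apple didn’t show code during the keynote segment quoted above, but the developer access it describes comes through a session-based Swift API to the on-device model. The sketch below is a minimal, hedged example of what such a call could look like: the FoundationModels import, LanguageModelSession, and SystemLanguageModel names follow the framework Apple announced, but exact signatures (and the illustrative summarize function) should be treated as assumptions until checked against the official documentation.

```swift
import Foundation
import FoundationModels

// A minimal sketch of calling Apple's on-device foundation model.
// Type and method names follow Apple's announced Foundation Models
// framework; treat exact signatures as assumptions pending the docs.
func summarize(_ text: String) async throws -> String {
    // Make sure Apple Intelligence is enabled and the model is ready
    // on this device before trying to use it.
    guard case .available = SystemLanguageModel.default.availability else {
        throw CocoaError(.featureUnsupported)
    }

    // A session keeps context across turns, like a lightweight chat thread.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one concise sentence."
    )

    // Inference runs entirely on device, so no text leaves the hardware.
    let response = try await session.respond(to: text)
    return response.content
}
```

Because the model lives on the device, a call like this would also work with no network connection at all, which is exactly the offline availability Federighi highlights above.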

On-Device Live Translation

A new Apple Intelligence feature enables real-time translation in Messages, letting users chat across languages without leaving the app. (Image: Apple)

Apple promises that Apple Intelligence will come to eight more languages by the end of the year, too: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (Traditional), and Vietnamese.

This will no doubt help the Live Translation feature, which will be integrated into the Messages, FaceTime, and Phone apps and, like the rest of Apple Intelligence, powered by models that run on device. Your conversations are yours, the company promises; even Apple can’t read them.

Messages will translate your text on the fly, letting you communicate with speakers of other languages in real time. In FaceTime, translated live captions let you read along as your conversation partner speaks, while the Phone app will speak the translation aloud throughout the conversation.

Genmoji and Image Playground Updates

Image Playground, powered by Apple Intelligence, lets users generate styled visuals on-device, including oil painting effects, using prompts and preset suggestions. (Image: Apple)

If you’re into messing around with AI image generation, Genmoji and Image Playground improvements should be right up your alley.

In addition to creating your own Genmoji from text descriptions, you can now mix emoji together and combine them with descriptions. You’ll also be able to change expressions and details like hairstyles in Image Playground and Genmoji when you base your creations on family and friends.

Image Playground gets deeper ChatGPT integration, letting you create images in new styles like oil painting or vector art with a tap or a text description. This, of course, isn’t processed on device: Apple sends the images and descriptions to ChatGPT, which returns the result to you. You’ll have to opt in before that happens, too.

Visual Intelligence Sees Your Screen

Finally, you can have Apple Intelligence interact with the things you see on your screen. This new feature lets you take a screenshot and ask ChatGPT questions about what’s in it. In true corporate style, the demo showed someone screenshotting a jacket to find where to buy it, then highlighting a lamp in the shot to shop for that as well. Whether this function will do more than find you places to buy things remains to be seen.

One interesting use case, though, is the ability to take a screenshot of an event poster and have Visual Intelligence add it to your calendar. That sounds pretty great, to be honest. Add that concert to my calendar, Apple!

Workout Buddy Brings Apple Intelligence to Apple Watch

A user runs down a flight of outdoor stairs, tracking fitness progress with Apple Watch’s Move ring feature and Workout Buddy. (Image: Apple)

Fitness fan? Workout Buddy on Apple Watch will take your fitness and workout data and generate “personalized, motivational insights” during your session. Apple wrote:

“To offer meaningful inspiration in real time, Workout Buddy analyzes data from a user’s current workout along with their fitness history, based on data like heart rate, pace, distance, Activity rings, personal fitness milestones, and more. A new text-to-speech model then translates insights into a dynamic generative voice built using voice data from Fitness+ trainers, so it has the right energy, style, and tone for a workout. Workout Buddy processes this data privately and securely with Apple Intelligence.”

Workout Buddy needs an Apple Watch, Bluetooth headphones, and an Apple Intelligence-capable iPhone nearby.
