It is worth noting that Apple has adopted on-device models that run directly on its products, rather than an artificial intelligence (AI) service that connects to a large language model (LLM) in the cloud, as Microsoft (MS) and Google do.
Apple introduced new products such as its first mixed reality (MR) headset, the ‘Vision Pro’, and its latest chip, the ‘M2 Ultra’, in the keynote at the Worldwide Developers Conference 2023 (WWDC 2023) on the 5th (local time). At the same time, it introduced more than 10 AI technologies embedded in its products.
The highlight of the day’s presentation was the Vision Pro, but AI also featured prominently under a different name. Apple didn’t mention the word ‘AI’ even once, instead using the more academic terms ‘transformer’ and ‘machine learning’ (ML).
In particular, rather than talking about specific AI models, training data, or future improvements, Apple simply referred to the AI features built into its products.
“We integrate AI capabilities into our products, but people don’t think of it as AI,” said Apple CEO Tim Cook.
Apple has a different approach to AI than Google, Microsoft, or OpenAI. The strategy is to permeate AI throughout the entire product without putting AI technology at the forefront.
That’s because Apple is a hardware-centered company built around the iPhone and its ecosystem. Analysts note that it prioritizes sales of iOS-based devices rather than revamping search like Google or improving productivity software like Microsoft.
OpenAI’s ChatGPT may have reached more than 100 million users within two months of its launch, but Apple’s rationale is that it offers AI features that a billion iPhone owners use every day.
Accordingly, the event focused not on cloud-based AI that builds large models with supercomputers and massive datasets, but on on-device AI that runs directly on Apple hardware.

Based on this, Apple showcased improved autocorrect in iOS 17, an upgraded Dictation function, Live Voicemail, Personalized Volume for AirPods, an improved Smart Stack in watchOS, a new iPad lock screen that animates Live Photos, prompt suggestions in the new Journal app, and the ability to create 3D avatars for video calls on the Vision Pro.
AI functions like these run directly on Apple devices such as the iPhone, whereas models like ChatGPT require hundreds of expensive GPUs working together. By running models on the phone itself, Apple not only needs to move less data but also sidesteps data privacy concerns.
Apple also emphasized that it adds new AI circuitry and GPU capability to its chips every year, and that controlling the entire architecture lets it adapt easily to changes and new technologies. All of this on-device AI processing is feasible thanks to a part of Apple Silicon called the Neural Engine, designed specifically to accelerate ML applications.
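For developers, the public face of this hardware is the Core ML framework. As a rough illustration (not Apple’s internal code), the Swift sketch below shows how an app might load a compiled Core ML model and ask the framework to prefer the Neural Engine; the model name ‘WordPredictor’ is a hypothetical placeholder.

import CoreML
import Foundation

// A rough sketch, not Apple's internal code: load a compiled Core ML model and
// ask the framework to prefer the Neural Engine over the GPU for inference.
// "WordPredictor.mlmodelc" is a hypothetical model bundled with the app.
func loadOnDeviceModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // available from iOS 16 / macOS 13
    guard let url = Bundle.main.url(forResource: "WordPredictor", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}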

New to iOS 17, autocorrect leverages a transformer language model for word prediction, offering to complete a word or an entire sentence when you press the spacebar. The transformer model runs on the device itself, protecting your privacy as you type.
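As a purely conceptual sketch of that prediction step: a transformer-based keyboard ultimately ranks candidate words and picks the most likely one. The Swift snippet below illustrates the greedy choice with placeholder words and scores, not Apple’s actual model or vocabulary.

// Conceptual only: after a transformer scores candidate next words, autocorrect's
// final step is to pick the highest-scoring one. The words and scores below are
// placeholders, not Apple's model or vocabulary.
func predictNextWord(from scores: [String: Double]) -> String? {
    scores.max(by: { $0.value < $1.value })?.key   // greedy choice of the most likely word
}

let candidates = ["morning": 0.46, "moment": 0.31, "monitor": 0.08]
print(predictNextWord(from: candidates) ?? "")     // prints "morning"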
Dictation lets users tap the small microphone icon on the iPhone keyboard and start speaking, turning speech into text. In iOS 17, Dictation uses a new transformer-based speech recognition model that runs on the Neural Engine.
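Apple does not expose the keyboard’s dictation model directly, but its public Speech framework lets third-party apps request the same kind of on-device recognition. A minimal Swift sketch, assuming speech-recognition permission has already been granted:

import Speech

// Illustrative use of Apple's public Speech framework (not the keyboard's internal
// dictation model): request a transcription that never leaves the device.
// Assumes SFSpeechRecognizer.requestAuthorization has already been granted.
func transcribeOnDevice(audioFileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
    request.requiresOnDeviceRecognition = true   // keep audio and text on the device

    recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}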
With the Live Voicemail feature in the iPhone’s native Phone app, when a caller can’t reach the recipient and starts leaving a voicemail, the caller’s words are transcribed into text on the recipient’s screen in real time, so the recipient can decide whether to answer the call. The feature is powered by the Neural Engine, runs entirely on the device, and the information is not shared with Apple.
Apple also introduced a new app called Journal, an interactive personal journal of text and images that can be locked and encrypted on the iPhone. The Journal app in iOS 17 automatically pulls recent photos, workouts, and other activities from the user’s phone, and lets users edit the content and add new text and multimedia to build a digital journal.

In addition, the Live Photos feature uses advanced ML models to synthesize additional frames, and a new PDF feature identifies form fields to fill in with information such as names, addresses, and emails from Contacts. The Personalized Volume feature for AirPods uses ML to learn environmental conditions and listening preferences over time, automatically adjusting the volume to the user’s liking. A new Apple Watch widget feature called Smart Stack uses ML to surface relevant information when it is needed.

Apple also said that the moving image of the user’s eyes on the front of the Vision Pro headset is part of a special 3D avatar created by scanning the face. After a quick enrollment process using the Vision Pro’s front sensors, the system uses an advanced encoder-decoder neural network to create a 3D ‘digital persona’.

Finally, Apple unveiled the ‘M2 Ultra’, an Apple Silicon chip with a 24-core CPU, up to 76 GPU cores, and a 32-core Neural Engine that performs 31.6 trillion operations per second, 40% faster than its predecessor, the M1 Ultra.
The M2 Ultra also supports 192GB of unified memory, 50% more than the M1 Ultra. This allows it to train large ML workloads, such as large transformer models, that even the most powerful discrete GPU cannot handle in a single system for lack of memory.
More RAM means larger and more capable AI models can fit in memory. The new ‘Mac Studio’ and ‘Mac Pro’, in desktop and tower form factors, can thus be used as machines for AI training.
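A rough, assumption-laden estimate shows why the memory capacity matters: at 16-bit precision each parameter takes two bytes, so the weights alone of a model with roughly 96 billion parameters could fit in 192GB, ignoring activations, gradients, and optimizer state.

// Back-of-envelope estimate (an assumption, not an Apple figure): how many
// 16-bit parameters fit in 192GB of unified memory, counting weights only and
// ignoring activations, gradients, and optimizer state.
let unifiedMemoryBytes = 192.0 * 1_000_000_000   // 192GB on the M2 Ultra
let bytesPerParameter = 2.0                      // fp16 / bf16 weight
let maxParameters = unifiedMemoryBytes / bytesPerParameter
print(maxParameters / 1_000_000_000)             // roughly 96 billion parameters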
This has major implications for big tech companies such as Google, Meta, MS, and IBM, which have jumped into the lightweight large language model business. Meta, in particular, appears to be carving out a niche with its lightweight open-source approach.
Analysts suggest that if lightweight models can be brought to mobile devices the way Apple proposes, the generative AI industry and the mobile telecommunications industry could also change rapidly.
In addition, some are watching to see whether Apple will adopt generative AI. iOS, its powerful mobile OS, is expected to be a key factor in reshaping the industry.
Meanwhile, Apple CEO Tim Cook said in an interview with ABC News that day, “I personally use ChatGPT, and I’m looking forward to its unique application, and Apple is looking closely at it.”
Reporter Park Chan cpark@aitimes.com