Apple’s recent research papers point to a significant push into artificial intelligence (AI). The company is developing on-device AI capabilities, including a method for creating animatable avatars from short video clips and a novel approach to running large language models on iPhones and iPads.
One of the papers, titled “LLM in a flash,” explores how to run large language models efficiently on devices with limited memory. This breakthrough could allow complex AI applications to run smoothly on iPhones and iPads, and could enable a generative AI-powered Siri to run entirely on-device, improving its ability to assist with tasks, generate text, and process natural language.
Another research paper introduces “HUGS” (Human Gaussian Splats), a method for creating fully animatable avatars from short video clips captured on an iPhone. HUGS is a neural rendering framework that needs only a few seconds of video to train a detailed avatar, which users can then animate freely.
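To make the idea concrete: Gaussian splatting represents a scene (or, in HUGS, a person) as a cloud of 3D Gaussians, each carrying a position, size, color, and opacity. The sketch below is a simplified illustration of that data structure, not Apple’s implementation; the `Splat` class, its isotropic scale, and the `density` function are assumptions for demonstration (the real method uses anisotropic Gaussians and learns pose-dependent deformation).

```python
# Simplified illustration of a 3D Gaussian "splat" -- the building block
# of Gaussian-splatting renderers like HUGS. All names here are assumed
# for the sketch; the actual framework is far more involved.
import math
from dataclasses import dataclass

@dataclass
class Splat:
    center: tuple    # (x, y, z) position of the Gaussian in space
    scale: float     # isotropic standard deviation (real splats are anisotropic)
    color: tuple     # (r, g, b) radiance contributed by this splat
    opacity: float   # alpha-blending weight in [0, 1]

    def density(self, point):
        """Gaussian falloff of this splat's influence at `point`.

        A renderer blends splat colors weighted by values like this one,
        so each splat contributes most near its center and fades smoothly.
        """
        d2 = sum((p - c) ** 2 for p, c in zip(point, self.center))
        return self.opacity * math.exp(-d2 / (2 * self.scale ** 2))
```

Training then amounts to optimizing the parameters of thousands of such splats so that their blended projections match the input video frames, which is why only a few seconds of footage can suffice.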
These advancements have implications for both the iPhone and the Vision Pro. Apple has been rumored to be working on its own AI chatbot, reportedly called “Apple GPT.” The research shows the company making progress on running large language models on smaller devices such as the iPhone by storing model parameters in flash memory and loading them into DRAM only as needed. This could enable sophisticated on-device generative AI tools and potentially lead to a generative AI-powered Siri.
The efficient inference strategy described in “LLM in a flash” could not only improve Siri but also make generative AI tools more broadly accessible, advancing mobile technology and improving performance across applications on everyday devices.
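The core intuition behind the flash-memory approach can be sketched in a few lines: keep the full weight file in storage and pull in only the parameters a computation actually touches, rather than loading the whole model into RAM. The sketch below uses memory-mapping as a stand-in for that idea; the file layout, the `LazyWeights` class, and its methods are assumptions for illustration, not Apple’s technique (which adds optimizations such as windowing and row-column bundling).

```python
# Conceptual sketch of on-demand weight loading, assuming a flat binary
# file of 32-bit floats laid out row-major. Names and layout are invented
# for this example; Apple's paper describes a more sophisticated scheme.
import mmap
import struct

class LazyWeights:
    """Memory-map a weight matrix so the OS pages rows in from storage
    only when they are read, keeping resident RAM usage small."""

    def __init__(self, path, rows, cols):
        self.rows, self.cols = rows, cols
        self._f = open(path, "rb")
        # Nothing is copied into RAM here; pages load on first access.
        self._mm = mmap.mmap(self._f.fileno(), 0, access=mmap.ACCESS_READ)

    def row(self, i):
        """Fetch a single weight row, e.g. for one active neuron."""
        start = i * self.cols * 4          # 4 bytes per float32
        raw = self._mm[start:start + self.cols * 4]
        return struct.unpack(f"{self.cols}f", raw)

    def close(self):
        self._mm.close()
        self._f.close()
```

Reading one row touches only a few pages of the file, which mirrors the paper’s observation that sparse activations mean most parameters are never needed for a given token.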
HUGS, in particular, is a notable breakthrough. It can create animatable digital avatars from just a few seconds of video, outperforming competing methods in both rendering speed and training time. This technology could bring a new level of personalization and realism to iPhone users across social media, gaming, education, and augmented reality (AR) applications.
HUGS could also have implications for Apple’s Vision Pro, showcased during the company’s last Worldwide Developers Conference. By leveraging HUGS, Vision Pro users could create highly realistic avatars that move fluidly, enhancing the Digital Persona experience.
The speed of HUGS enables real-time rendering, which is crucial for a seamless AR experience. It has the potential to enhance social, gaming, and professional applications with realistic, user-controlled avatars.
While Apple typically avoids buzzwords like “AI,” preferring the term “machine learning,” these research papers suggest a deeper involvement in new AI technologies. However, Apple has not publicly confirmed that generative AI will ship in its products, nor has it officially acknowledged work on the rumored Apple GPT.
In conclusion, Apple’s research papers showcase the company’s efforts in developing on-device AI technology, particularly in the areas of running large language models and creating animatable avatars. These advancements have the potential to improve Siri, make generative AI tools more accessible, and deliver a new level of personalization and realism to iPhone users.