Motorola, widely recognized for its smartphones, is extending its reach beyond hardware into artificial intelligence (AI). Alongside its well-known Edge series, the company is working on Large Action Models (LAMs), a type of AI designed to run on smartphones and perform tasks within apps. This marks an important step in Motorola’s strategy: a move toward AI that lets users automate everyday actions with less effort.
Understanding AI LAMs
A Large Action Model (LAM) is an AI system built specifically to execute tasks within apps, in contrast to Large Language Models (LLMs), which are typically used to generate text. While an LLM responds to queries and produces content, a LAM is action-oriented: it interacts with the apps on a device to carry out user commands.
For example, instead of simply providing information, a LAM could interact with a mobile app to order a coffee or book an Uber ride on the user’s behalf. This mirrors the direction of AI tools such as Google Gemini and Apple’s anticipated AI features, which also aim to perform tasks within applications. However, a fully functional, action-oriented AI that handles these kinds of tasks end to end has yet to reach mainstream use.
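To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the AppAction class, the lam_plan and lam_execute functions, and the Starbucks and Uber action names are invented for this example and do not reflect any published Motorola interface. It only shows the conceptual gap: an LLM returns text about a task, while a LAM turns the same command into executable in-app steps.

```python
# Hypothetical sketch only: none of these classes or app "APIs" are real
# Motorola interfaces; they illustrate the LLM-vs-LAM distinction.
from dataclasses import dataclass, field


@dataclass
class AppAction:
    """One concrete step inside a mobile app (app name, action, parameters)."""
    app: str
    action: str
    params: dict = field(default_factory=dict)


def llm_respond(prompt: str) -> str:
    # A language model only produces text about the task.
    return f"To get coffee, open a coffee app and place an order for: {prompt}"


def lam_plan(command: str) -> list[AppAction]:
    # A LAM instead maps the command to executable in-app steps.
    # A real system would parse the command with a model; this is hard-coded.
    if "coffee" in command.lower():
        return [
            AppAction("Starbucks", "open"),
            AppAction("Starbucks", "add_to_cart", {"item": "latte"}),
            AppAction("Starbucks", "checkout"),
        ]
    if "uber" in command.lower():
        return [
            AppAction("Uber", "open"),
            AppAction("Uber", "request_ride", {"destination": "work"}),
        ]
    return []


def lam_execute(actions: list[AppAction]) -> None:
    # Stand-in for driving the phone's UI or an app's automation hooks.
    for step in actions:
        print(f"[{step.app}] {step.action} {step.params}")


if __name__ == "__main__":
    print(llm_respond("order me a coffee"))      # an LLM: text only
    lam_execute(lam_plan("order me a coffee"))   # a LAM: actions in an app
```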
AI-Powered Tasks on Your Smartphone
Motorola recently showcased a concept video illustrating how it envisions AI LAMs working in real life. In the video, a user instructs an AI model to order coffee from Starbucks and call an Uber, all without manual input. The AI navigates the necessary apps, completes the tasks, and gives the user verbal feedback along the way.
Though this video is not an official product announcement, it offers a glimpse into the potential capabilities of LAMs. The AI model demonstrated in the video learns user preferences over time, suggesting that future versions might be able to remember frequent orders or commonly used locations, further streamlining the user experience.
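As a rough illustration of what “remembering frequent orders” could amount to, the sketch below keeps a simple frequency count of past orders. The PreferenceMemory class is invented for this example, assuming only that a LAM stores completed actions somewhere; it implies nothing about Motorola’s actual design.

```python
# Hypothetical sketch: a trivial preference memory of the kind a LAM might
# use to recall frequent orders. Not based on any disclosed Motorola design.
from collections import Counter


class PreferenceMemory:
    def __init__(self) -> None:
        self._orders: Counter[str] = Counter()

    def record(self, item: str) -> None:
        # Remember each completed order.
        self._orders[item] += 1

    def usual(self) -> str | None:
        # Return the most frequent past order, if any.
        if not self._orders:
            return None
        return self._orders.most_common(1)[0][0]


memory = PreferenceMemory()
for item in ["latte", "latte", "espresso"]:
    memory.record(item)

# A command like "order my usual" could now resolve without asking the user.
print(memory.usual())  # latte
```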