Meta has announced Muse Spark, a revised and improved AI that the company describes as a step toward "superintelligence." This "radically multimodal reasoning model" goes far beyond a chatbot, and it will soon be in your glasses and your social feeds. It is available now in the Meta AI app, with plans to bring it to smart glasses via an update in the next few weeks.
Instead of a one-size-fits-all approach, Muse Spark has three levels of "thinking," and users will be able to control how deep the intelligence goes.

- Quick Mode: For fast queries and everyday chats.
- Thinking Mode: Designed for solving more complex problems, so if you need help with math, science, or logic, this is the mode to choose.
- At its highest level, Muse Spark engages multiple AI agents that work in parallel and collaborate to complete complex, multi-step tasks.
Meta says Muse Spark's performance matches or exceeds that of its Llama 4 Maverick model while using less computing power. In theory, that means higher-level reasoning without excessive server usage.
While Muse Spark will be accessible in a variety of places, its ground-up integration of visual content looks to be made for smart glasses. Here are some of the ways Ray-Ban Meta and Oakley Meta users will be able to make use of the new AI.
One of the main improvements of Muse Spark over Meta's previous models is that the new AI can integrate visual information across different devices. So, theoretically, you could point your glasses at a mess of wires and electronic boxes and ask, "How do I hook up this home theater system?" Or get step-by-step coaching to assemble a piece of IKEA furniture without opening the booklet. The AI will read the instructions and make sure you're not doing anything wrong.
Muse Spark will have health-related reasoning capabilities
Meta says its Meta Superintelligence Lab collaborated with more than 1,000 physicians to develop the AI's health reasoning capabilities. Users will be able to do things like generate an interactive display that unlocks nutritional information about a food, or track which muscles activate during a workout.
But how will it actually perform?
All of the above is "in theory." Artificial intelligence has not always lived up to its hype, even when it has been promoted to huge audiences. It's one thing to perform well on lab benchmarks, but how does the technology hold up in the real world, where lighting is erratic, Wi-Fi is slow, and furniture instructions can be spotty? That's an extremely complex, real challenge.
Although I haven't tested the technology deeply, I ran a quick experiment by turning on Thinking Mode and sending Meta AI the picture below of a random assortment of audio gear:
Credit: Stephen Johnson
It not only correctly identified everything in the picture, but also gave me a few different options for possible ways to tie it all together, and told me (correctly) which cords I needed. So I'm looking forward to having it on my glasses. If you want to test it yourself, Muse Spark is already running at meta.ai and in the Meta AI app, with the smart glasses firmware and social media integration expected to arrive soon.
