As powerful as LLMs may be, they all share one weakness: hallucinations. For reasons we don't fully understand, AI models have a habit of making things up. A response can be full of well-cited sources and relevant information; then, suddenly, the AI produces a false claim, or mistakes a sarcastic forum comment for fact. (That's how you end up with Google's AI Overview recommending you add glue to your pizza.) Some LLMs may hallucinate less than others, but none is immune. That's why whenever you use a chatbot, you'll see some kind of warning on screen letting you know the AI can make mistakes.
Apple Intelligence, Apple's AI platform, is no exception. When the company first launched its AI, it touted notification summaries as a benefit. But once the feature started incorrectly summarizing news alerts, Apple had to quickly backtrack: in one case, Apple Intelligence paraphrased a BBC headline to claim that UnitedHealthcare shooting suspect Luigi Mangione had killed himself. The company later reinstated the feature with some additional guardrails, such as rendering news summaries in italics.
Apple Intelligence can create new words
I stumbled across a post on the r/iOS subreddit on Thursday that adds an interesting wrinkle to the AI hallucination discussion. The post asks, "Anyone else get fake words in their AI summaries?" and attaches a screenshot of a notification summary for the Weather app. The summary warns of light rain lasting for hours, but one of its words, despite sounding real, is entirely made up.

The poster hasn't shared the original notification text, so we can't know what wording Apple Intelligence was working from. What we do know is that the poster has seen the same fake word three times, and that they're not alone. Beyond the jokes at the expense of the Weather app the OP uses, several comments on the post confirm that other people have seen Apple Intelligence coin fake words in their notification summaries. One commenter reported a made-up word in a Weather summary and "tranqued" in a Mail summary; another spotted a misspelled variant of "strictly" on two separate occasions.
I can't find any other examples of this phenomenon online, and since I don't use notification summaries on my iPhone, I haven't seen the issue myself. I can't say for sure how widespread the problem is, or whether it's limited to a certain version of iOS, a specific device, or particular apps. One commenter has a theory, though: they believe that when the on-device model Apple Intelligence relies on can't shorten the original phrasing on its own, it coins a portmanteau to compensate. In their words, the AI "YOLOs" a "vibes-word," like "imbixant." They say this happens most often with Weather app summaries.
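To make the commenter's theory concrete, here's a toy Python sketch. It is entirely hypothetical and assumes nothing about Apple's actual tokenizer or model; it just illustrates how generation at the subword level, rather than the whole-word level, can stitch pieces of two real words into a plausible-sounding nonword:

```python
# Toy illustration (NOT Apple's actual model): language models emit
# subword tokens, not whole words. If a model picks the opening chunk
# of one word and the closing chunk of another, the output is a
# portmanteau that looks pronounceable but isn't a real word.

def subwords(word, size=4):
    """Split a word into crude fixed-size chunks, mimicking subword tokens."""
    return [word[i:i + size] for i in range(0, len(word), size)]

def blend(word_a, word_b):
    """Join the first chunk of one word to the last chunk of another."""
    return subwords(word_a)[0] + subwords(word_b)[-1]

# Two real words blended at a chunk boundary yield a convincing fake.
print(blend("imminent", "ambient"))  # -> "immient"
```

Real tokenizers split words by learned frequency rather than fixed chunk sizes, but the failure mode is the same: a high-probability prefix followed by a high-probability suffix from a different word produces exactly the kind of "vibes-word" the commenter describes.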
Does Apple Intelligence make up words in your summaries?
Again, I can't say whether this affects a large number of Apple users or only a small subset. The fact that I've found just one post about it, with a couple of commenters sharing similar experiences, leads me to believe it's the latter, but I'd love to hear otherwise. If you use Apple Intelligence's notification summaries, let me know if you've spotted any fabricated words on your end. I may need to turn the feature on myself just to keep track.
