{"id":114244,"date":"2026-05-01T21:58:40","date_gmt":"2026-05-01T21:58:40","guid":{"rendered":"https:\/\/christiancorner.us\/index.php\/2026\/05\/01\/is-apple-intelligence-making-up-words-now\/"},"modified":"2026-05-01T21:59:56","modified_gmt":"2026-05-01T21:59:56","slug":"is-apple-intelligence-making-up-words-now","status":"publish","type":"post","link":"https:\/\/christiancorner.us\/index.php\/2026\/05\/01\/is-apple-intelligence-making-up-words-now\/","title":{"rendered":"Is Apple Intelligence Making Up Words Now?"},"content":{"rendered":"<p>\n<\/p>\n<div id=\"\">\n<hr class=\"custom-gradient-background my-6 h-(6px) max-w-(75px) border-0\"\/>\n<p>As powerful as LLMs may be, they all have one shared weakness: hallucinations. For reasons beyond our understanding, AI models have a habit of making things up without completely thinking. A response with well-cited sources and relevant information can be accurate; Then, suddenly, the AI \u200b\u200bproduces a false claim, or mistakenly interprets a sarcastic forum comment as fact. (That&#8217;s how you end up with Google&#8217;s AI overview recommending you add glue to your pizza.) Some LLMs may hallucinate less than others, but no one is immune. That&#8217;s why whenever you use a chatbot, you&#8217;ll see some kind of warning on the screen letting you know that the AI \u200b\u200bcan make mistakes.  <\/p>\n<p>Apple Intelligence, Apple&#8217;s AI platform, is no exception here. When the company first launched its AI, it included notification summaries as a &#8220;benefit.&#8221; However, once the feature started incorrectly summarizing news alerts, Apple had to quickly backtrack \u2014 such as in one case, when Apple Intelligence paraphrased a BBC headline to read that United Healthcare shooting suspect Luigi Mangione had killed himself in prison. 
The company later reinstated the feature, but added some additional guardrails, such as putting news summaries in italics.<\/p>\n<h2 id=\"apple-intelligence-might-be-making-up-new-words\">Apple Intelligence might be making up new words<\/h2>\n<p>I stumbled across <a rel=\"noopener\" target=\"_blank\" href=\"https:\/\/www.reddit.com\/r\/ios\/comments\/1szvea3\/anyone_else_get_fake_words_in_their_ai_summaries\/\" title=\"open in a new window\">this post<\/a> on the r\/iOS subreddit on Thursday, which adds an interesting wrinkle to the AI hallucination discussion. The post asks, &#8220;Anyone else get fake words in their AI summaries?&#8221; with an attached screenshot showing a notification summary from the Acme Weather app. The summary&#8217;s first sentence reads: &#8220;Light rain latent for hours.&#8221; Ahh, latent rain. At least it&#8217;s only for a few hours. Wait: <em>latent<\/em>?<\/p>\n<p>Despite sounding like it could be a real word, it was, in fact, made up on the spot. The poster hasn&#8217;t shared exactly what the original notification said, so we can&#8217;t know what words Apple Intelligence was working from here. What we do know is that the poster says they saw the word three times, and they&#8217;re not alone. Aside from jokes poking fun at the weather app the OP uses, comments on the post confirm that other people have seen Apple Intelligence invent fake words in their notification summaries. One commenter said they had seen &#8220;flaming&#8221; in one summary and &#8220;tranqued&#8221; in a Mail summary; another shared that they had seen a misspelled, made-up version of a real word on two separate occasions. 
<\/p>\n<p>I can&#8217;t find any other examples of this phenomenon on the internet, and I don&#8217;t personally use notification summaries on my iPhone, so I haven&#8217;t seen the issue myself. I can&#8217;t say for sure how widespread the problem is, or whether it varies by iOS version, by device, or even from one app to another. 
However, one of the commenters has a theory: they believe that when the on-device AI model Apple Intelligence relies on can&#8217;t shorten a word in the original phrase, it invents a portmanteau to compensate. In their words, the AI &#8220;YOLOs&#8221; a &#8220;vibes-word,&#8221; like &#8220;imbixant.&#8221; They say this happens to them most often with weather app summaries.<\/p>\n<h2 id=\"does-apple-intelligence-make-up-words-in-your-summaries\">Does Apple Intelligence make up words in your summaries?<\/h2>\n<p>Again, I can&#8217;t say whether this affects a large number of Apple users or only a small slice of them. The fact that I&#8217;ve only been able to find one post about it, with two commenters sharing similar experiences, leads me to believe it&#8217;s the latter, but I&#8217;d love to hear from anyone who has run into it. If you use Apple Intelligence&#8217;s notification summaries, let me know if you&#8217;ve spotted any made-up words on your end. I may need to turn the feature on myself just to keep track.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>As powerful as LLMs may be, they all share one weakness: hallucinations. For reasons beyond our understanding, AI models have a habit of simply making things up. 
A response can be accurate, with well-cited sources and relevant information; then, suddenly, the AI produces a false claim, or mistakenly interprets a sarcastic forum<\/p>\n","protected":false},"author":1,"featured_media":114246,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[57],"tags":[2145,935,2055,2902],"class_list":{"0":"post-114244","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-bible-verse","8":"tag-apple","9":"tag-intelligence","10":"tag-making","11":"tag-words"},"_links":{"self":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/114244","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/comments?post=114244"}],"version-history":[{"count":1,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/114244\/revisions"}],"predecessor-version":[{"id":114247,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/114244\/revisions\/114247"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media\/114246"}],"wp:attachment":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media?parent=114244"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/categories?post=114244"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/tags?post=114244"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","tem
plated":true}]}}