{"id":47430,"date":"2026-04-07T21:11:12","date_gmt":"2026-04-07T21:11:12","guid":{"rendered":"https:\/\/christiancorner.us\/index.php\/2026\/04\/07\/i-tried-googles-new-on-device-ai-transcription-app-for-iphone-and-it-was-surprisingly-accurate\/"},"modified":"2026-04-07T21:11:33","modified_gmt":"2026-04-07T21:11:33","slug":"i-tried-googles-new-on-device-ai-transcription-app-for-iphone-and-it-was-surprisingly-accurate","status":"publish","type":"post","link":"https:\/\/christiancorner.us\/index.php\/2026\/04\/07\/i-tried-googles-new-on-device-ai-transcription-app-for-iphone-and-it-was-surprisingly-accurate\/","title":{"rendered":"I tried Google&#8217;s new on-device AI transcription app for iPhone, and it was surprisingly accurate"},"content":{"rendered":"<p>\n<\/p>\n<div id=\"\">\n<hr class=\"custom-gradient-background my-6 h-(6px) max-w-(75px) border-0\"\/>\n<p>Google is back with another AI service \u2013 this time, an offline dictation program using its &#8220;Gemma&#8221; architecture. But instead of including it in the Gemini app or as a Gemini function, the company has decided to roll it out in a dedicated iPhone app with the <em>very<\/em> catchy name of &#8220;<a rel=\"noopener\" target=\"_blank\" href=\"https:\/\/apps.apple.com\/us\/app\/google-ai-edge-eloquent\/id6756505519\" title=\"open in a new window\">Google AI Edge Eloquent<\/a>.&#8221; <\/p>\n<p>I decided to try the app on release day, but the privacy policy nearly stopped me. Google says your location, contacts, identifiers, device diagnostics, contact information, user content, usage data and &#8220;other&#8221; data can be linked to you, while purchases and other diagnostics can be collected but not linked to you. That&#8217;s a lot of data, especially for an app that advertises that &#8220;audio, confidential conversations and personal data never leave your device,&#8221; and I&#8217;m not sure I would have been eager to download the app otherwise. 
But, as the saying goes, if a service is free, <em>you<\/em> are the product. I&#8217;ve reached out to Google for clarification here, and will update this story if I hear back. <\/p>\n<h2 id=\"how-to-try-googles-new-ai-transcription-app\">How to try Google&#8217;s new AI transcription app<\/h2>\n<p>Once you&#8217;ve downloaded the app, setup is easy \u2013 you record a sample phrase that the app asks you to say, then choose between two modes: &#8220;On-Device Mode&#8221;, which is completely offline and stores your conversations only on your device; or &#8220;Advanced Text Polishing,&#8221; which keeps your <em>audio<\/em> on your device, but uses Gemini to &#8220;polish&#8221; your text, which requires sending the text to the cloud (and presumably this is where all the above privacy policy data is going). You won&#8217;t need Gemini for basic edits to your transcript, though \u2013 by design, the app removes &#8220;filler&#8221; words like &#8220;um.&#8221; Keep in mind that the app opens in \u201cAdvanced Text Polishing\u201d mode by default \u2013 at least, that&#8217;s how it works on my end. But a simple tap of the toggle in the top-right corner of the main screen switches you into \u201cOn-Device Mode.\u201d   <\/p>\n<p>I had some trouble getting the app up and running: every time I tried to test it, it claimed it couldn&#8217;t hear me at all. But after unpairing my AirPods from my iPhone, the app started working. To test the app, I played the intro of <a rel=\"noopener\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=1YviIsCbT9Q\" title=\"open in a new window\">this Audio University YouTube video<\/a>, which is entirely dialogue-based. Once the app was working, it immediately began transcribing the video with almost perfect accuracy. I could watch the app enter the wrong words, then go back and correct them based on the context the following words provided. 
Once the recording was finished, the transcript was almost identical to the video&#8217;s own transcript, except for a few quirks: it mistranscribed &#8220;If this is our first time meeting&#8221; as &#8220;This is our first time meeting,&#8221; and recorded the same sentence twice. But other than that, it&#8217;s a perfectly usable transcript of the beginning of the video. <\/p>\n<p>From here, you have many options\u2014especially if you invite Gemini to help. If you want to correct any text that was incorrectly &#8220;polished&#8221; by the AI, you can tap the pencil icon on the transcript to edit it manually. Above this, you can see &#8220;usage statistics,&#8221; which include the number of words spoken, words spoken per minute, and the number of edits made by the AI. If you switch to Gemini, you&#8217;ll have access to additional AI editing tools, including &#8220;Key Points,&#8221; &#8220;Formal,&#8221; &#8220;Short,&#8221; and &#8220;Long.&#8221; When you&#8217;re satisfied with the transcription, you can tap the Copy button to move the text to your clipboard and paste it elsewhere. In the &#8220;History&#8221; tab, you can view your previous transcriptions and return to edit them (manually or with AI). In the \u201cDictionary\u201d tab, you can add obscure words that you use frequently but the AI doesn&#8217;t catch, further improving the accuracy of your recordings.  <\/p>\n<p>In my brief testing, the app works well, and I appreciate the option to keep it entirely on-device. If it proves faster or more accurate, I&#8217;d definitely consider using it over iOS&#8217;s built-in transcription, especially since there are some more robust features here &#8211; assuming that &#8220;on-device&#8221; actually <em>does<\/em> mean keeping my data out of Google&#8217;s hands. 
<\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Google is back with another AI service \u2013 this time, an offline dictation program using its 
&#8220;Gemma&#8221; architecture. But instead of including it in the Gemini app or as a Gemini function, the company has decided to roll it out in a dedicated iPhone app with the very catchy name of &#8220;Google AI Edge Eloquent.&#8221; I decided to<\/p>\n","protected":false},"author":1,"featured_media":47433,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[57],"tags":[6415,210,6012,2653,17612,17616,17615],"class_list":{"0":"post-47430","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-bible-verse","8":"tag-accurate","9":"tag-app","10":"tag-googles","11":"tag-iphone","12":"tag-ondevice","13":"tag-surprisingly","14":"tag-transcription-2"},"_links":{"self":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/47430","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/comments?post=47430"}],"version-history":[{"count":1,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/47430\/revisions"}],"predecessor-version":[{"id":47435,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/47430\/revisions\/47435"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media\/47433"}],"wp:attachment":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media?parent=47430"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/categories?post=47430"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ch
ristiancorner.us\/index.php\/wp-json\/wp\/v2\/tags?post=47430"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}