{"id":88217,"date":"2026-04-22T12:02:34","date_gmt":"2026-04-22T12:02:34","guid":{"rendered":"https:\/\/christiancorner.us\/index.php\/2026\/04\/22\/googles-latest-tensor-processors-take-the-eco-friendly-route-but-what-about-the-cost\/"},"modified":"2026-04-22T12:04:10","modified_gmt":"2026-04-22T12:04:10","slug":"googles-latest-tensor-processors-take-the-eco-friendly-route-but-what-about-the-cost","status":"publish","type":"post","link":"https:\/\/christiancorner.us\/index.php\/2026\/04\/22\/googles-latest-tensor-processors-take-the-eco-friendly-route-but-what-about-the-cost\/","title":{"rendered":"Google&#8217;s latest Tensor processors take the eco-friendly route, but what about the cost?"},"content":{"rendered":"<p>\n<\/p>\n<div data-content-wrapper=\"true\">\n<div class=\"e_f\">\n<div class=\"e_7s\" style=\"max-width:1974px\"><picture class=\"e_Vg\" style=\"padding-top:56.23%;aspect-ratio:1974 \/ 1110\"><source sizes=\"(min-width: 64rem) 51.25rem, 80vw\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen.jpg.webp 1974w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-64w-36h.jpg.webp 64w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-1000w-562h.jpg.webp 1000w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-1920w-1080h.jpg.webp 1920w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-1536w-864h.jpg.webp 1536w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-675w-380h.jpg.webp 675w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-300w-170h.jpg.webp 300w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-1280w-720h.jpg.webp 1280w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-840w-472h.jpg.webp 
840w\" type=\"image\/webp\"\/><\/picture>\n<div class=\"e_nv e_8s\">\n<p>C. Scott Brown\/Android Authority<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div data-container-type=\"content\" class=\"e_Ui e_e e_P\">\n<p>TL;DR<\/p>\n<ul>\n<li>Google has announced the eighth generation Tensor Processing Unit (TPU) for its data centers.<\/li>\n<li>The new category of TPU is divided based on use, with separate units for training and inference.<\/li>\n<li>Google says this reduces the energy required for the actual end use, which should benefit the environment.<\/li>\n<\/ul>\n<\/div>\n<div class=\"e_e e_P\">\n<p>At its Google Cloud Next event last year, Google announced the Ironwood class of tensor processing units (TPUs) that power its data centers. Designed with the AI \u200b\u200bera in mind, these TPUs focus largely on the ability of AI to make inferences or predictions based on what it has been trained on (essentially what chatbots do), but without actually knowing the answer in advance. This year, it has made further advances in TPU hardware and is now splitting the computation to perform training and inference separately.<\/p>\n<\/div>\n<div class=\"e_e e_P\">\n<p>At Cloud Next 2026, Google announced its eighth generation of TPUs with different architectures for different purposes. The newly introduced TPUs include the TPU 8T, which will be used for training AI models, and the TPU 8i, which will be specialized for inference-related duties.<\/p>\n<\/div>\n<div class=\"e_e e_P\">\n<p>Google says the split is done to address the different power and computing requirements of the two processes. This approach will help its data centers reduce energy consumption, thereby reducing operating costs and reducing the negative impacts of AI on the environment. This means your use of Gemini to keep data centers cool may soon (hopefully!) 
consume a lot less water.<\/p>\n<\/div>\n<div class=\"e_e e_Dk e_Ck\" data-container-type=\"content\">\n<div class=\"e_e e_P\">\n<p><strong>Don&#8217;t want to miss the best of <em>Android Authority<\/em>?<\/strong><\/p>\n<\/div>\n<div class=\"e_e e_am\"><picture class=\"e_em e_fm e_Vg\" style=\"padding-top:31.51%;aspect-ratio:676 \/ 213\"><source sizes=\"9.375rem\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_light@2x.png.webp 676w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_light@2x-64w-20h.png.webp 64w\" type=\"image\/webp\"\/><img class=\"e_Wg\" decoding=\"async\" loading=\"lazy\" sizes=\"9.375rem\" title=\"Google Preferred Source Badge Lite@2x\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_light@2x.png 676w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_light@2x-64w-20h.png 64w\" alt=\"Google Preferred Source Badge Lite@2x\" src=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_light@2x.png\"\/><\/picture><picture class=\"e_em e_Vg\" style=\"padding-top:31.51%;aspect-ratio:676 \/ 213\"><source sizes=\"9.375rem\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_dark@2x.png.webp 676w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_dark@2x-64w-20h.png.webp 64w\" type=\"image\/webp\"\/><img class=\"e_Wg\" decoding=\"async\" loading=\"lazy\" sizes=\"9.375rem\" title=\"Google Preferred Source Badge Dark@2x\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_dark@2x.png 676w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_dark@2x-64w-20h.png 64w\" alt=\"Google Preferred Source Badge 
Dark@2x\" src=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2025\/09\/google_preferred_source_badge_dark@2x.png\"\/><\/picture><\/div>\n<\/div>\n<div class=\"e_e e_P\">\n<p>Training neural networks involves high-bandwidth memory and large arrays of processing units because it requires updating billions of parameters every second. Training involves a process called &#8220;<a rel=\"noopener\" target=\"_blank\" href=\"https:\/\/www.geeksforgeeks.org\/machine-learning\/backpropagation-in-neural-network\/\">backward propagation of errors<\/a>,&#8221; which involves countless feedback loops that test and optimize the neural network on the training set until it starts remembering accurate data. It&#8217;s basically like testing a person until they tell you the correct answer.<\/p>\n<\/div>\n<div class=\"e_f\">\n<div class=\"e_7s\" style=\"max-width:2560px\"><picture class=\"e_Vg\" style=\"padding-top:31.41%;aspect-ratio:2560 \/ 804\"><source sizes=\"(min-width: 64rem) 51.25rem, 80vw\" srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-scaled.jpg.webp 2560w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-64w-20h.jpg.webp 64w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1000w-314h.jpg.webp 1000w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1920w-603h.jpg.webp 1920w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1536w-483h.jpg.webp 1536w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-675w-212h.jpg.webp 675w\" type=\"image\/webp\"\/><img class=\"e_Wg\" decoding=\"async\" loading=\"lazy\" sizes=\"(min-width: 64rem) 51.25rem, 80vw\" title=\"Google Tensor TPU 8th Generation\" 
srcset=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-scaled.jpg 2560w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-64w-20h.jpg 64w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1000w-314h.jpg 1000w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1920w-603h.jpg 1920w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-1536w-483h.jpg 1536w, https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-675w-212h.jpg 675w\" alt=\"Google Tensor TPU 8th Generation\" src=\"https:\/\/www.androidauthority.com\/wp-content\/uploads\/2026\/04\/google-tensor-8th-gen-differences-scaled.jpg\"\/><\/picture>\n<div class=\"e_nv e_8s\">\n<p>C. Scott Brown\/Android Authority<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"e_e e_P\">\n<p>Meanwhile, inference is less intensive and can be processed on less capable hardware with much lower memory consumption. Therefore, using the same hardware for training and inference makes the actual cost much higher, resulting in higher effective costs for inference-related tasks.<\/p>\n<\/div>\n<div class=\"e_e e_P\">\n<p>Google has previously introduced TPU v5e (where the &#8220;e&#8221; stands for efficiency) for very small-scale operations. The latest TPU 8i appears to be a massive optimization based on the previous hardware. Amazon is also trying to achieve a similar effect with AWS Inferentia.<\/p>\n<\/div>\n<div class=\"e_e e_P\">\n<p>While Google has pointed to the environmental benefits of using a dedicated logic TPU, we haven&#8217;t seen any promises about reducing costs. 
It remains to be seen whether Google will pass any of those savings on to consumers or keep them for itself and its corporate clients.<\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>C. Scott Brown\/Android Authority TL;DR Google has announced the eighth generation Tensor Processing Unit (TPU) for its data centers. The new category of TPU is divided based on use, with separate units for training and inference. Google says this reduces the energy required for the actual end use, which should benefit the environment. At its<\/p>\n","protected":false},"author":1,"featured_media":88222,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[58],"tags":[811,24357,6012,2621,22125,3707,24356],"class_list":{"0":"post-88217","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-devotionals","8":"tag-cost","9":"tag-ecofriendly","10":"tag-googles","11":"tag-latest","12":"tag-processors","13":"tag-route","14":"tag-tensor"},"_links":{"self":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/88217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/comments?post=88217"}],"version-history":[{"count":1,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/posts\/88217\/revisions"}],"predecessor-version":[{"id":88223,"href":"https:\/\/christianco
rner.us\/index.php\/wp-json\/wp\/v2\/posts\/88217\/revisions\/88223"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media\/88222"}],"wp:attachment":[{"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/media?parent=88217"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/categories?post=88217"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/christiancorner.us\/index.php\/wp-json\/wp\/v2\/tags?post=88217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}