The AI sphere is abuzz lately with news and rumors that the latest Google TPU, Ironwood, is powering the Gemini 3 model, which outpaces OpenAI on many metrics, including intelligence and performance. Now, The Information, echoed by Bloomberg and CNBC, is reporting that Alphabet’s Google is preparing to make its TPUs available beyond Google Cloud, with Meta Platforms as the lead design win. Google is also rumored to be pitching the Ironwood Pod to other hyperscalers and large enterprises as an Nvidia alternative. Consequently, Alphabet is up nearly 50% over the last month, while Nvidia is down over 7%. But another shoe is about to drop. (Note that this story contains my analyst speculation.)

The Ironwood TPU Pod. GOOGLE
My Take
Historically, TPUs have been confined to Google’s own fleet and exposed only as a managed service on Google Cloud. That barrier has largely limited TPUs to Google’s own workloads. But now, Google is rumored to be actively pitching on‑prem or colocation TPU deployments inside customers’ own facilities, including banks, HFT shops, and large cloud customers.
Ironwood, which scales to 9,216 TPUs per Pod connected over optically switched fiber networks, is Google’s first TPU designed explicitly for the “age of inference.”
Internal Google Cloud forecasts cited in press reports suggest Google management believes broader TPU adoption could capture on the order of 10% of Nvidia’s current data‑center revenue run‑rate over time. If the strategy rolls out, that would imply tens of billions of dollars in prospective annual TPU revenue.
The Other Shoe
The current wet-concrete negotiations described in the press are between Google and Meta; there is no public indication that Amazon Web Services (AWS) or Microsoft Azure is close to offering native TPU instances in its own cloud. However, I did see one breadcrumb on the floor of SuperComputing (SC25) last week in St. Louis.
Cirrascale, a high-performance specialized cloud service provider, had the Google Cloud logo on its booth. While the staff was clear that nothing further could be shared without an NDA, I have to wonder whether Cirrascale had planned to announce a deal with Google Cloud before a slight snag in timing occurred. Booth property had already been ordered and delivered, and the image below suggests a deal is in place. Which raises the question: who else has such an arrangement, and when might the news go live?

Cirrascale showed the Google Cloud Logo at SC25. THE AUTHOR
Strategic Context
Meta’s interest is framed alongside other marquee external users. For example, Apple reportedly used large TPUv4/v5p clusters on Google Cloud to train Apple Intelligence models. Commentaries on the “AI chip wars” argue that multi‑cloud AI customers want a second source to Nvidia that is available at meaningful scale. TPU externalization, plus Meta‑scale reference deployments, is seen as Google’s attempt to become that second source, potentially shifting some AI capex away from Nvidia, and perhaps AMD, toward Alphabet if Google’s silicon and software stack prove competitive.
I would note that Google cannot currently match the Nvidia AI ecosystem, nor its pervasive software stack. So, if my hunch is accurate, this should be seen as an inevitable evolution in the AI ecosystem and supply chain, driven more by supply/demand imbalances than by any competitive deficiency in Nvidia GPUs.
Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including Baya Systems, BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Flex, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, SiMa.ai, Synopsys, Tenstorrent, Ventana Microsystems, and scores of investors.