Google's TPU Steps Into the Spotlight: Can It Challenge NVIDIA, and Which Supply Chains Stand to Benefit?
With the debut of Gemini 3, the TPU from $Alphabet-A (GOOGL.US)$ has moved from behind the scenes to center stage.
Google, with its full-stack AI deployment ranging from foundational hardware to upper-layer applications, has built a strong competitive moat, forming a complete ecosystem loop that covers "chip (TPU) – network (OCS) – model (Gemini) – application."
Against the backdrop of concerns over an “AI bubble” driving divergent trends in the U.S. tech sector, Google's stock price has surged significantly, with gains of nearly 13% over the past month.
![Google's full-stack AI ecosystem: chips (TPU) – network (OCS) – models (Gemini) – applications](https://nnqimage.futunn.com/sns_client_feed/900080/20251201/web-1764595819106-dFrez6AoSz.png/big?area=1&is_public=true&imageMogr2/ignore-error/1/format/webp)
As Google rises strongly, the market has begun debating whether NVIDIA's GPUs will be impacted. A previous article, "Google TPU vs NVIDIA GPU: Who Will Dominate the Future of AI Computing? Which Industrial Chains Stand to Benefit?", explained to fellow investors that TPUs and GPUs are not in a simple substitution relationship but reflect a specialized division of labor within the AI computing market; the chip industry is not a zero-sum game with only one winner.
Standing on the threshold of 2026, Google's AI strategy remains a key indicator of industry trends, and its value chain undoubtedly represents an investment goldmine worthy of attention. The following analysis will focus on the competitive advantages of Google’s TPU while outlining core 'gold-mining' opportunities within the value chain.
Why has Google’s TPU become the new favorite among large model companies?
To understand why Google can disrupt the computational power market, one must recognize a fundamental fact: NVIDIA's dominance in 'single-chip performance' and 'peak compute power per rack' has remained unshaken.
Its latest Blackwell architecture (especially the B200/GB200) is specifically designed for trillion-parameter models, excelling in training, inference, and energy efficiency. A GB200 NVL72 rack can achieve a peak compute power of 1.4 EFLOPS—undoubtedly setting the industry benchmark for performance.
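The headline rack figure can be sanity-checked with simple arithmetic (illustrative only; the roughly 20 PFLOPS of dense FP4 per Blackwell GPU is an assumption for this sketch, not a figure from the article):

```python
# Illustrative sanity check of the GB200 NVL72 rack figure.
# Assumption (not from the article): ~20 PFLOPS dense FP4 per Blackwell GPU.
gpus_per_rack = 72                  # GB200 NVL72 integrates 72 Blackwell GPUs
pflops_per_gpu = 20                 # assumed dense FP4 throughput per GPU, in PFLOPS
rack_eflops = gpus_per_rack * pflops_per_gpu / 1000  # 1 EFLOPS = 1000 PFLOPS
print(f"{rack_eflops:.2f} EFLOPS")  # 1.44 EFLOPS, consistent with the ~1.4 quoted
```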
However, Google’s approach does not involve direct competition with NVIDIA in its areas of strength. Instead, it has chosen a differentiated path: rather than pursuing the most powerful single card, it focuses on building a system-level computational platform centered around scale, efficiency, cost, and stability. The TPU is not merely a replacement for GPUs but represents a systematic breakthrough.
The release of Ironwood (Google's seventh-generation TPU) is not just a single-chip innovation; it is a complete, system-level solution aimed at extreme scalability. Google simultaneously unveiled the racks, network interconnects, and cooling systems built around the chip, demonstrating its full-stack capability to transform cutting-edge computational power into large-scale, high-efficiency productivity.
According to the latest article by semiconductor research firm SemiAnalysis, the reason customers are switching is straightforward: in the AI arms race, performance is the entry ticket, but TCO (Total Cost of Ownership) determines survival.
Data from SemiAnalysis's models show that Google's TPUv7 delivers a decisive cost-efficiency advantage over NVIDIA. From Google's internal perspective, the total cost of ownership (TCO) for TPUv7 servers is approximately 44% lower than that of NVIDIA GB200 servers. Even after accounting for Google's and Broadcom's profit margins, Anthropic's TCO for using TPUs via GCP remains about 30% lower than purchasing GB200.

Source: SemiAnalysis
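The two quoted percentages also imply how much the Google/Broadcom margin layer adds on top of internal cost. A minimal illustration, normalizing GB200 server TCO to an index of 100 (index values, not actual dollar figures):

```python
# Index GB200 server TCO at 100 and apply the percentage gaps quoted
# from the SemiAnalysis model (illustrative index values, not dollars).
gb200 = 100.0
tpu_internal = gb200 * (1 - 0.44)   # Google's internal TPUv7 TCO: ~44% lower
tpu_via_gcp = gb200 * (1 - 0.30)    # Anthropic's TCO renting TPUs on GCP: ~30% lower
margin_wedge = tpu_via_gcp / tpu_internal - 1  # uplift from Google/Broadcom margins
print(tpu_internal, tpu_via_gcp, round(margin_wedge, 2))  # 56.0 70.0 0.25
```

In other words, the implied margin layer raises the customer's cost by roughly a quarter over Google's internal cost, yet the GCP price still undercuts GB200 by a wide gap.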
Notably, Google's true ace in the hole is its unparalleled optical interconnect technology. Unlike NVIDIA, which relies on costly NVLink and InfiniBand/Ethernet switches, Google employs its proprietary Optical Circuit Switch (OCS) and a 3D Torus topology to construct an inter-chip interconnect network called ICI.
Given Google TPUs' comprehensive advantages in cost, scalability, and cluster efficiency, large model companies are rethinking their computational architecture. This is not merely a matter of cost reduction but a fundamental shift in underlying logic based on a comprehensive evaluation of scale, cost, and risk, representing a rational decision aimed at optimal efficiency.
What investment opportunities exist within Google’s supply chain?
As demand for AI computing power continues to grow, Google has raised its 2025 capital expenditure forecast to $91 billion-$93 billion and expects to further increase investments in 2026, with a focus on expanding TPU clusters and advancing the construction of global data centers.
An end-to-end innovation system spanning chips, clusters, and applications is rapidly taking shape. Google’s ecosystem is not only redefining cloud service providers’ (CSPs) data center architectures but also emerging as a core driver propelling growth in cutting-edge industries such as high-speed optical modules, MEMS, and thin-film lithium niobate.
We have compiled a list of companies within Google’s supply chain for reference by fellow investors:
![Companies within Google's supply chain](https://nnqimage.futunn.com/sns_client_feed/900080/20251201/web-1764595820215-0WrJfw7IiQ.png/big?area=1&is_public=true&imageMogr2/ignore-error/1/format/webp)
1. Chip Segment - The Core Computing Brain
First, within Google's technological framework, $Broadcom (AVGO.US)$ plays an irreplaceable role as a core pillar. Its three key technologies (high-speed SerDes, switching ASICs, and the optical switching chips that support the Jupiter network) jointly form the physical foundation of the TPU hyperscale cluster, analogous to the cluster's "blood vessels," "nervous system," and "main highways." Without this series of foundational chips, Google's TPU clusters and optical networks could not achieve their current scale and performance. As long as Google continues to advance its dedicated accelerator strategy, Broadcom will remain an indispensable core supplier.
In terms of packaging, $Taiwan Semiconductor (TSM.US)$, $Amkor Technology (AMKR.US)$ and $ASE Technology (ASX.US)$ form an essential "iron triangle." TPU v7's heavy reliance on advanced 3nm/2nm processes, HBM stacking, and high-density chiplet packaging establishes a clear division of labor: Taiwan Semiconductor defines the upper limit of computational power, while Amkor and ASE, through cutting-edge packaging technologies, become key enablers of high-bandwidth implementation. With the market expecting Google's TPU to become the globally dominant custom ASIC by 2026, the technical synergy among these three companies has become a cornerstone of Google's iterative advances in computing power.
Moreover, Google relies on chip-design software from $Cadence Design Systems (CDNS.US)$ and $Synopsys (SNPS.US)$ when designing its chips, while its Axion CPU is developed on the $Arm Holdings (ARM.US)$ architecture.
2. Connectivity Technology - The Highway for Data Transmission
The bottleneck in AI computing often lies in data transmission speed, making this the fastest-evolving area of technology upgrades. This connectivity layer does not handle computation itself but addresses the signal integrity issues arising from high-speed data movement between chips and servers. Key players include:
$Astera Labs (ALAB.US)$ : A market leader in PCIe and CXL Retimers, solving high-speed interconnection issues between chips within AI servers.
$Credo Technology (CRDO.US)$ : Specializing in SerDes technology and AEC (Active Electrical Cable) chips, addressing short-distance external interconnections for servers.
$Marvell Technology (MRVL.US)$ : A giant in optical communication DSPs and switch chips.
$Rambus (RMBS.US)$ : Providing high-speed memory interface IPs (HBM/DDR interfaces) and CXL solutions.
$SiTime (SITM.US)$ : MEMS clock components to ensure transmission frequency synchronization.
3. Memory and Storage Sector
With the rise of Google's seventh-generation TPU, demand in the HBM market is expected to continue growing.
According to reports by South Korea’s Chosun Ilbo and other media outlets, Samsung Electronics ( $CSOP Samsung Electronics Daily (2x) Leveraged Product (07747.HK)$ ) and SK Hynix ( $CSOP SK Hynix Daily (2x) Leveraged Product (07709.HK)$ ) have become key participants in Google’s TPU supply chain. Among them, SK Hynix is likely to be the preferred supplier of HBM3E 8-layer chips for Google’s seventh-generation TPU and will exclusively provide HBM3E 12-layer chips for the enhanced version (TPU 7e), enabling it to achieve higher energy efficiency.
Moreover, analysts at Mizuho believe that Micron Technology, the U.S.-based memory giant, will also be one of the biggest beneficiaries of Google's accelerating AI computing cluster buildout. After all, both Google's massive TPU clusters and the large numbers of NVIDIA AI GPUs it purchases rely on HBM tightly integrated with the AI chips. Additionally, as Google accelerates the construction and expansion of AI data centers, there will be significant purchases of high-performance DDR5 server-grade memory and enterprise-grade high-performance SSDs. $Micron Technology (MU.US)$, positioned simultaneously in all three areas (HBM, server DRAM including DDR5/LPDDR5X, and high-end data center SSDs), is one of the most direct beneficiaries of the "AI memory + storage stack."
Additionally, $Western Digital (WDC.US)$ and $Seagate Technology (STX.US)$ provide mass-capacity cold storage.
4. Optical Communication and Physical Connection Segment
At the Hot Chips 2025 conference, Google gave a detailed introduction to its new-generation TPU chip, Ironwood. The new Ironwood super node consists of 9,216 Ironwood chips interconnected via OCS optical switching to achieve rack-level connectivity, further reinforcing the critical role of OCS in Google's network infrastructure.
Optical Circuit Switching (OCS) is a technology that enables direct switching of optical signals between fiber ports without the need for optoelectronic/electro-optical conversion. Google has incorporated OCS optical switching technology into TPU interconnections, creating TPU super nodes composed of multiple cabinets, supporting a 3D Torus topology architecture.
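The wraparound property of a 3D Torus is what keeps hop counts low at scale: every chip has exactly six direct neighbors, with no special edge cases. A minimal sketch of the addressing (dimensions here are illustrative, not Google's actual pod geometry):

```python
# Neighbors of a chip at (x, y, z) in a 3D torus: step +/-1 along each axis,
# with modular wraparound so "edge" chips connect back to the opposite face.
def torus_neighbors(coord, dims):
    neighbors = []
    for axis in range(3):
        for step in (1, -1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]
            neighbors.append(tuple(n))
    return neighbors

# A corner chip in a 4x4x4 torus still has six neighbors thanks to wraparound:
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
# [(1, 0, 0), (3, 0, 0), (0, 1, 0), (0, 3, 0), (0, 0, 1), (0, 0, 3)]
```

Because no chip sits on an edge, traffic can always take the shorter way around each ring, and the OCS layer can re-patch the fiber topology around failed cubes without electrical switching.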
Founder Securities stated that, according to Cignal AI forecasts, the total addressable market for OCS optical switching will exceed $1.6 billion by 2029. As Google's large models progress, the OCS industry chain stands to benefit from the continued advancement of Google's AI initiatives.
Among them, optical components include $Lumentum (LITE.US)$ and $Coherent (COHR.US)$. Lumentum is one of the biggest winners of Google's AI boom, primarily because it specializes in the high-performance network foundation deeply integrated with Google's TPU clusters: the indispensable optical interconnects, namely OCS (optical circuit switches) plus high-speed optical components. As the number of TPUs grows by an order of magnitude, its shipments scale up in step.
Optical module manufacturing and assembly includes $Fabrinet (FN.US)$ and $FIT HON TENG (06088.HK)$. Fabrinet is the "king of contract manufacturing" for optical communication modules, assembling high-precision modules for Lumentum's and Coherent's optical divisions. Foxconn Interconnect Technology has become a core Google supplier in data center hardware, mainly providing key components such as optical communication modules, co-packaged optics (CPO) solutions, and high-speed connectors.
Optical fibers, cables, and connectors include $Amphenol (APH.US)$, $Corning (GLW.US)$, $YOFC (06869.HK)$, $TIME INTERCON (01729.HK)$, and $Luna Innovations (LUNA.US)$.
5. Network Hardware and System Equipment
$Arista Networks (ANET.US)$ : The leading supplier of core switches for Google's data centers.
$Ciena (CIEN.US)$ : Long-distance optical transmission systems (DCI), responsible for connecting different data centers.
6. Assembly and PCB Processes
Responsible for assembling all components into server racks.
$Celestica (CLS.US)$ : Specializes in the assembly of switches and AI servers.
$TTM Technologies (TTMI.US)$ : High-layer-count PCBs designed for servers and networking equipment.
7. Power, Thermal Management, and Infrastructure
$Vicor (VICR.US)$ : Vertical power delivery modules, specifically addressing the last-mile high-density power delivery challenges of AI chips (such as TPUs).
$Vertiv Holdings (VRT.US)$ : Thermal management (liquid cooling/air cooling) and power protection systems.
$nVent Electric (NVT.US)$ : Liquid cooling racks and connectivity solutions.
$Parker Hannifin (PH.US)$ : Quick-connect fittings and fluid components for liquid cooling systems.
$TeraWulf (WULF.US)$ / $Cipher Digital (CIFR.US)$ : Hosting providers that offer power and site facilities.
Summary
Overall, the rise of Google's TPU is not a zero-sum game of 'who replaces whom,' but rather a significant 'expansion' in the global AI computing infrastructure. In other words, this does not mark the end of the GPU era; instead, it signals the start of a new cycle of computing power investment.
Morgan Stanley noted that the role of Google's TPU AI computing clusters is undergoing a strategic shift: evolving from infrastructure serving internal needs into an "AI strategic asset" available for external use amid a global shortage of computing resources. The firm emphasized that the accelerated growth of Google Cloud's business and its expansion into the AI chip market is not only expected to drive a revaluation of Google itself, but will also propel its entire ecosystem of AI partners into a more imaginative phase of valuation.
Risk Disclaimer: The above content only represents the author's view. It does not represent any position or investment advice of Futu. Futu makes no representation or warranty.