Amazon Releases Latest AI Chip to Compete with NVIDIA and Google!
Across the ocean in Silicon Valley, an intensifying "shadow war" over AI computational dominance continues to escalate.
Recently, Google's new-generation model Gemini 3, trained on its self-developed TPU chips, surpassed GPT-5, which was trained on NVIDIA GPUs, on key performance metrics. Soon after, news spread through the market that Meta is also considering a large-scale introduction of Google TPUs to replace some of the NVIDIA GPUs it currently uses. These developments have rippled through the capital markets.
Since November, the stock performance of the two AI giants has diverged sharply: $NVIDIA (NVDA.US)$ has declined by 12%, while $Alphabet-C (GOOG.US)$ has risen by 12% against the trend.

However, regardless of whether GPUs or TPUs ultimately prevail, the competition in computing chips is essentially a system-level battle, one that places ever higher demands on data transmission efficiency. For the hardware supply chains often referred to as 'water carriers,' AI development is shifting comprehensively from 'single-card performance competition' to 'system-level competition.' In this framework, optical communication has moved from a supporting role to a critical bottleneck that determines the upper limit of computational power.
Why has optical communication become essential for everyone?
Regardless of whether the chip competition is ultimately dominated by GPUs, TPUs, or other custom ASICs, the driving force behind it is the relentless pursuit of ultimate computing power and data processing efficiency.
This trend is not a negative factor for the hardware supply chains often called 'water carriers,' especially for providers of high-end interconnect solutions and infrastructure. Instead, it has become a long-term, structural, and relatively deterministic tailwind. Regardless of the computational architecture adopted, large-scale AI clusters will inevitably require the same common elements: higher-bandwidth interconnects, denser high-speed cabling, and more capable cooling.
As a result, the drive for overall data transmission efficiency ensures that sectors such as optical modules, high-speed connectors, and specialized cooling solutions will continue to benefit from both rising volumes and rising unit prices (a dual effect known as a 'Davis Double Play').
Taking Google TPU as an example, optical circuit switching (OCS) serves as the architectural cornerstone for achieving ultra-large-scale, highly utilized single clusters and acts as the physical carrier of its software-defined infrastructure.
Specifically, the basic unit of a Google TPU pod is a 4x4x4 cube comprising 64 TPUs. Connections within the 4x4x4 cube are made over copper cables. In contrast, connections external to the cube (including wraparound connections back to the opposite face of the cube and connections to adjacent 4x4x4 cubes) use optical transceivers and OCS (optical circuit switches).

The diagram shows a 3D toroidal network: the TPU at coordinates (2,3,4) (on the Z+ plane) uses 800G optical transceivers routed via OCS, with a wraparound connection back to the TPU at (2,3,1) (on the Z- plane). Source: SemiAnalysis.
Under normal circumstances, a computing unit composed of 64 TPUs requires 96 optical modules, a configuration ratio of 1:1.5. In contrast, NVIDIA's intra-cabinet interconnections typically do not rely on optical modules at all. Google's deep integration of optical communication into its scale-up architecture has not only opened new market space for optical modules but, more importantly, preserved deployment and upgrade flexibility through its continued use of pluggable designs.
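The 1:1.5 ratio follows directly from the cube geometry. A minimal sketch of that arithmetic, assuming (as described above) that every link leaving the cube terminates in one optical transceiver while internal links stay on copper:

```python
def optical_modules_per_cube(n: int = 4) -> tuple[int, float]:
    """Count external (optical) link endpoints for an n*n*n TPU sub-cube
    in a 3D torus.

    Links inside the cube use copper; every link that leaves the cube
    (to an adjacent cube, or as a torus wraparound back to the opposite
    face) crosses one of the cube's 6 faces and ends in a transceiver.
    """
    tpus = n ** 3                      # 4*4*4 = 64 TPUs
    faces = 6                          # a cube has 6 faces
    links_per_face = n ** 2            # one outgoing link per face TPU
    modules = faces * links_per_face   # 6 * 16 = 96 optical transceivers
    return modules, modules / tpus     # modules per TPU

modules, ratio = optical_modules_per_cube(4)
print(modules, ratio)  # 96 modules for 64 TPUs -> a 1:1.5 ratio
```

The function name and decomposition here are illustrative only; the source figures are the 64 TPUs, 96 modules, and 1:1.5 ratio.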
For NVIDIA GPUs, optical communication is the sole lifeline for sustaining performance growth and overcoming the power and density limits of single-rack systems. Although TPU v7 currently leads, NVIDIA plans to mass-produce Rubin shortly after v7's release, aiming to retake the lead.
GF Securities stated that growing scale-out bandwidth is driving optical module deployments: each Rubin GPU will be equipped with two CX9 NIC chips, doubling scale-out bandwidth compared with Blackwell, and the firm further estimates that each Rubin Ultra GPU will adopt four CX9 chips, doubling it again. The firm therefore expects the GPU-to-1.6T-optical-module ratio for Rubin to rise from 1:2.5 to 1:5.
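Under the brokerage's implied assumption that module count scales linearly with scale-out NIC bandwidth, the ratio arithmetic can be sketched as follows. The baseline of 2.5 modules per GPU and the Rubin Ultra figure are extrapolations from the report's ratios, not published numbers:

```python
# Assumed baseline: the pre-Rubin 1:2.5 ratio, i.e. 2.5 x 1.6T optical
# modules per GPU at 1x scale-out bandwidth.
BASELINE_MODULES_PER_GPU = 2.5

def modules_per_gpu(bandwidth_multiple: float) -> float:
    """1.6T optical modules per GPU at a given scale-out bandwidth
    multiple, assuming modules scale linearly with bandwidth."""
    return BASELINE_MODULES_PER_GPU * bandwidth_multiple

print(modules_per_gpu(2))  # Rubin (2 CX9 NICs, 2x bandwidth): 5.0 -> 1:5
print(modules_per_gpu(4))  # Rubin Ultra (4 CX9, extrapolated): 10.0
```

The takeaway is the direction, not the exact figures: each doubling of scale-out bandwidth doubles the optical module attach rate per GPU.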
Overall, the surge in AI-driven computing demand is pushing data center internal link rates from 400G to 800G and 1.6T, creating explosive growth opportunities for high-speed optical modules.
What opportunities are there in the Hong Kong and U.S. stock markets?
As competition in AI computing power between Google TPU and NVIDIA GPU intensifies, market focus has shifted from mere "computing chips" to "data transmission." Once a single chip is fast enough, enabling high-speed interconnection among thousands of chips becomes the decisive factor. This is the underlying logic behind the explosive growth of the optical communications industry.
Niu Niu has summarized the opportunities within the optical communications industry chain for fellow investors' reference:

1. Optical Chips: The Jewel in the Crown
This segment represents the highest technological barriers and most substantial profit margins. In AI data centers, electrical signals must be converted into optical signals for high-speed transmission, which relies on various DSPs (digital signal processors) and laser driver chips.
Leading Players: $Broadcom (AVGO.US)$ and $Marvell Technology (MRVL.US)$. Broadcom is the absolute leader, almost monopolizing the high-end PAM4 DSP market and serving as an indispensable supplier for NVIDIA and Google data centers.
As one of the largest suppliers of ASICs for hyperscale computing centers, Broadcom’s stock price has risen 65% year-to-date. CNBC noted that Broadcom is closely linked with Google through its ASIC business, assisting in the design and manufacturing of Google's Tensor Processing Units (TPUs). These chips are at the core of Google's internal AI infrastructure and are considered strong competitors to NVIDIA GPUs in AI workloads.
Marvell's core business lies in the "brain" of optical modules: the DSP. All high-speed optical modules rely on DSPs for error correction, compensation, and noise reduction, which determine how far optical signals can travel, how low the bit error rate is, and how much power is consumed. Industry demand for 800G and 1.6T continues to rise: inference is moving toward long-context, video, and multimodal applications, bandwidth and storage pressures are mounting, and interconnects between data centers (ZR/ZR+) are also being upgraded. In the short term, although the market is discussing new approaches such as LPO and CPO, these remain limited by reach, yield, and system modification costs, so the DSP remains the dominant solution. This means that the hotter AI becomes, the more demand there is for Marvell's DSPs.
Most notably, Marvell also announced today that it will acquire semiconductor startup Celestial AI for $3.25 billion. The core of this transaction is Celestial AI's photonic fabric technology, which uses optical signals instead of electrical signals to connect AI chips and memory chips.
Rising star in high-speed interconnects: $Astera Labs (ALAB.US)$, which focuses on PCIe and CXL connectivity solutions, addressing the transmission bottleneck of the "last centimeter" between chips.
Astera Labs, a semiconductor company specializing in data center interconnect solutions, is gradually becoming a key player in the AI infrastructure ecosystem. The company's core technology focuses on interconnect technologies for high-performance computing and data centers, aiming to optimize data transmission efficiency within data centers and address the "memory wall" problem in high-performance computing.
ALAB shares plummeted overnight, primarily on news that Amazon AWS's self-developed chip Trainium 4 may shift to NVLink technology. The market had widely expected AWS to stick with the UALink interconnect, an ecosystem in which ALAB is a key enabler with products around uSwitch, PCIe retimers, and CXL fabric. Because ALAB has not invested in NVLink switching technology, concerns have arisen that if NVLink becomes mainstream, the room for the UALink ecosystem may shrink, potentially weighing on ALAB's future performance.
The Invisible Blood Vessels Behind the AI Compute Explosion: $Credo Technology (CRDO.US)$ is a company providing high-speed connectivity solutions, dominating the field of AI-driven high-speed data connections.
If NVIDIA is considered the "heart" in the AI chip domain, then Credo serves as the "blood vessels" connecting these hearts.
Credo Technology plays a pivotal role in high-speed data connectivity for AI data centers. The company offers a range of products, including optical components and data networking chips, but its Active Electrical Cables (AEC) business is currently the most closely watched segment.
Credo's success is largely attributed to its absolute leadership in the AEC market. AEC, a copper-cable-based connection technology invented by Credo, is used to connect AI servers with network switches and represents a critical component for high-speed data transmission within AI data centers. Compared to traditional fiber optics, AEC is considered more reliable and consumes less power; compared to traditional passive copper cables, it supports longer transmission distances.
Furthermore, the transition from 800G to 1.6T is underway, alongside advances in technological pathways such as Co-Packaged Optics (CPO), Linear-drive Pluggable Optics (LPO), and silicon photonics, presenting companies like Credo, which possess core SerDes and DSP technologies, with ongoing opportunities for product iteration and market share gains.
In the race for AI compute power, high-speed connectivity technologies (SerDes + optical interconnects) have become a key competitive arena. Leveraging its technological expertise and strategic positioning in SerDes and optical interconnects (such as AEC), Credo is poised to emerge as a core player in this field.
In addition, $MACOM Technology Solutions (MTSI.US)$ and $Semtech (SMTC.US)$ are the two leading analog chip companies, offering TIAs (transimpedance amplifiers) and laser driver chips. Both are pushing LPO technology forward. MACOM has exceptionally strong semiconductor processes, while Semtech stands out with its Tri-Edge analog CDR technology, which is well suited to low-power scenarios.
$POET Technologies (POET.US)$ specializes in the design and development of high-speed optical modules, optical engines, and light-source products, serving the artificial intelligence systems market and hyperscale data centers. The company develops photonic integrated solutions based on its patented platform, the POET Optical Interposer™, which enables seamless integration of electronic and photonic devices on a single chip through advanced wafer-level semiconductor manufacturing processes.
2. Optical Transmission Modules and Components: A Classic 'Selling Picks and Shovels' Business
Chips need to be packaged into modules to be usable. With 800G modules becoming standard and 1.6T modules on the verge of mass production, demand in this sector is showing exponential growth.
Established giants: $Lumentum (LITE.US)$ and $Coherent (COHR.US)$. They provide core optical components such as lasers, forming the backbone of the supply chain.
Lumentum is one of the biggest winners of Google's AI boom, primarily because it specializes in the "high-performance network foundation system" that is deeply integrated with Google's TPU AI computing clusters — specifically, the indispensable optical interconnects, namely OCS (optical circuit switches) + high-speed optical components. As the number of TPUs increases by an order of magnitude, its shipments grow exponentially.
Coherent is one of the world’s largest suppliers of optical modules. They offer high-speed modules (such as 400G, 800G, and future 1.6T modules) for data center interconnections. In AI clusters built by companies like NVIDIA, Coherent's high-speed optical interconnect products serve as the crucial 'arteries' enabling efficient data transmission between GPUs.
Connector giant: $Amphenol (APH.US)$. Though low-profile, its high-density connectors are the invisible champions behind AI server racks.
Amphenol is a global leader in the manufacturing of connectors and interconnect systems, with business segments covering communication solutions, interconnect and sensor systems, and harsh environment solutions. The company provides customers with high-speed, highly reliable connection solutions.
On August 4, 2025, Amphenol announced that it would acquire CommScope's broadband connectivity and cable business unit (CCS) for $10.5 billion in an all-cash deal, including debt, its largest transaction to date. The acquisition is expected to make Amphenol one of the leading suppliers of global communication infrastructure.
Overall, in the current AI computing power race, optical interconnects and high-speed copper cables are key bottlenecks. Amphenol’s 112G/224G high-speed connection solutions serve as the indispensable "arteries" for building AI clusters, such as NVIDIA’s GB200 system.
Established optical communications company: $Applied Optoelectronics (AAOI.US)$, which is awaiting significant optical module growth next year.
Applied Optoelectronics is a manufacturer specializing in optical communication products, primarily serving the data center and fiber optic communication markets. As a U.S.-based optical module supplier, the company has leveraged years of localized production capacity, highly automated production lines, and technological advantages to gradually emerge as a core supplier of 400G and 800G optical modules for cloud computing giants. With the continuous rise in AI computing demand and large-scale expansion of global cloud vendor data centers, the company is poised for a new wave of substantial business growth.
First Shanghai Securities noted that the company's 400G single-mode optical module has passed AWS certification and has already begun substantive shipments, with a significant increase in shipment volumes expected by Q4 2025. The 800G single-mode optical module is also nearing the final stages of customer validation, with bulk orders at the ten-thousand-unit level anticipated for final testing and certification. The firm believes that the company's collaboration with AWS has made substantive progress and that its optical module business is likely to grow significantly in 2026.
Leading AI optical communications company: $CIG (06166.HK)$
Cambridge Technology focuses on optical modules as its core business, specializing in high-speed optical communication and data center solutions. Currently, the company is aggressively advancing the scaled delivery of 800G optical modules while actively developing cutting-edge technologies such as 1.6T optical modules, CPO (Co-Packaged Optics), and LPO (Linear Drive Pluggable Optics). Its primary customers include North American cloud service providers and global telecommunications equipment leaders.
In terms of revenue structure, the optical module business dominates, with 800G products as the core growth engine. The company expects to deliver 600,000 units of 800G optical modules to Cisco in 2025 and has successfully entered Meta's supply chain, with deliveries expected to begin in February 2026. Additionally, through its production bases in Malaysia and Mexico, Cambridge Technology actively navigates the international trade environment, with overseas revenue accounting for up to 94% of the total, integrating deeply into the global high-end optical module supply chain.
3. Switches and Network Equipment: The Highways of Data
When tens of thousands of GPUs or TPUs are interconnected, high-performance switches are required to manage traffic flow.
The invisible manufacturing giant behind the switches: $Celestica (CLS.US)$ is a top-tier global electronics manufacturing service provider. In the wave of optical communications and AI, it primarily enters the market through its HPS division. Unlike Cisco and Arista, which design network architectures and software systems, Celestica is responsible for manufacturing high-performance switches and servers either for these major companies or directly for cloud giants.
As AI clusters demand transmission speeds of 400G or even 800G, the hardware design of switches becomes extremely complex (heat dissipation, signal integrity). Celestica has mastered the assembly and manufacturing processes for these advanced switches.
The King of Ethernet: $Arista Networks (ANET.US)$ is the preferred choice for AI Ethernet switches, competing with and complementing NVIDIA's InfiniBand solutions while being deeply integrated with the major cloud giants.
Traditional Giants: $Cisco (CSCO.US)$. Although slower in its transformation, it still dominates the enterprise market.
4. Critical Infrastructure: Fiber Optics, Packaging and Testing, and Manufacturing
This segment falls under 'infrastructure' and 'contract manufacturing.' While it may not be as explosive as chips, it benefits from stable performance.
Manufacturing (Contract): $Fabrinet (FN.US)$, known as the 'TSMC' of the optical communications field, provides precision manufacturing for major companies like Lumentum, with performance highly correlated to industry conditions. $Tower Semiconductor (TSEM.US)$ holds a competitive advantage in silicon photonics manufacturing processes.
Testing and Measurement: $Keysight Technologies (KEYS.US)$ supplies testing equipment for all high-speed optical modules and chips, playing an indispensable 'essential tools' role during the R&D phase.
Fiber Optics: $Corning (GLW.US)$ provides the physical transmission medium; $YOFC (06869.HK)$ is a leading global provider of optical fiber preforms and optical fibers.
Optical transmission equipment: $Ciena (CIEN.US)$ specializes in long-distance optical transmission, addressing the interconnection needs between data centers (DCI).
Fiber optic communication: $Lumen Technologies (LUMN.US)$ is one of the few companies in the United States with a nationwide high-capacity fiber optic backbone network, owning both terrestrial and submarine long-haul fiber networks. Globally, it has approximately 450,000 route miles of fiber optic cable, connected to metropolitan fiber networks, serving over 60 countries and regions. Lumen revealed that due to the booming demand for artificial intelligence, fiber optic networks have become increasingly important and scarce in AI data processing.
High-density fiber optic connectors: CMB Securities previously issued a research report stating that demand for AI computing power is robust and that $TIME INTERCON (01729.HK)$, with its excellent position in the MPO optical communication and AI server sectors, is poised for high-quality growth. The MPO business is Time Interconnect's core profit source, with industry-leading technology; its 16-core MPO products have achieved bulk shipments and secured a place in the overseas cloud supply chain, making the company a key supplier within the Google ecosystem. MPO demand remains strong (fiber optic revenue grew 43% year-on-year in 2024 and 35% year-on-year in H1 2025), and its profitability far exceeds the company average. Given ongoing major-client expansion, overseas capacity construction, and technological upgrades, the bank expects the MPO business to maintain rapid growth in the coming years.
Overall, driven by both AI and digitalization, optical communication represents not only a technological revolution but also a global race within a billion-dollar market.
Risk Disclaimer: The above content only represents the author's view. It does not represent any position or investment advice of Futu. Futu makes no representation or warranty.