Nvidia’s silicon photonics switches bring better power efficiency to AI data centers

Nvidia typically relies on partnerships where appropriate, and the new switch design was developed in collaboration with multiple vendors, each covering different aspects of the silicon photonics work, including the lasers, packaging, and other elements. Hundreds of patents came out of the effort. Nvidia will license the resulting innovations to its partners and customers, with the goal of scaling this model.
Nvidia’s partner ecosystem includes TSMC, which provides advanced chip fabrication and 3D chip stacking to integrate silicon photonics into Nvidia’s hardware. Coherent, Eoptolink, Fabrinet, and Innolight are involved in the development, manufacturing, and supply of the transceivers. Additional partners include Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries, and TFC Communication.
AI has transformed the way data centers are being designed. During his keynote at GTC, CEO Jensen Huang talked about the data center being the “new unit of compute,” which refers to the entire data center having to act like one massive server. That has driven compute from being primarily CPU-based to being GPU-centric. Now the network needs to evolve to ensure data is fed to the GPUs at a speed they can process it. The new co-packaged switches remove external parts, which have historically added a small amount of overhead to networking. Before AI, this overhead was negligible, but with AI, any slowness in the network leads to dollars being wasted.
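As a rough illustration of that economics argument, the sketch below estimates the daily cost of GPUs sitting idle while they wait on the network. The cluster size, GPU hourly cost, and stall percentage are hypothetical numbers chosen only for illustration, not figures from Nvidia.

```python
# Back-of-envelope estimate of money lost to network-induced GPU idle time.
# All inputs are hypothetical and exist only to illustrate the scale of the problem.

def idle_cost_per_day(num_gpus: int, cost_per_gpu_hour: float, stall_fraction: float) -> float:
    """Dollars per day spent on GPUs that sit idle waiting on the network."""
    gpu_hours_per_day = num_gpus * 24
    return gpu_hours_per_day * cost_per_gpu_hour * stall_fraction

if __name__ == "__main__":
    # Hypothetical cluster: 16,384 GPUs at $2 per GPU-hour, stalled 5% of the time on network I/O.
    daily_waste = idle_cost_per_day(num_gpus=16_384, cost_per_gpu_hour=2.0, stall_fraction=0.05)
    print(f"Estimated daily cost of network stalls: ${daily_waste:,.0f}")
    # Even a few percent of stall time works out to tens of thousands of dollars a day,
    # which is why shaving overhead out of the switch path matters at AI scale.
```

Under those assumed numbers, a 5% network stall rate burns roughly $39,000 a day in idle GPU capacity, which is the kind of math driving the move to co-packaged optics.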