Google unveils next-generation AI chip Trillium

Other Trillium features include dataflow processors that accelerate models relying on embeddings, such as recommendation models, and support for more high-bandwidth memory (HBM) to accommodate larger models with more weights and larger key-value caches.
More slices
Further, Trillium comes with Google's multislice technology, which the company first introduced in preview when it unveiled TPU v5e in August last year.
Multislice technology, according to the company, allows enterprise users to easily scale AI models beyond the boundaries of physical TPU pods — up to tens of thousands of Cloud TPU v5e or TPU v4 chips.
Before the release of this technology, training jobs using TPUs were limited to a single slice of TPU chips, capping the largest jobs at the maximum slice size of 3,072 chips for TPU v4.
“With Multislice, developers can scale workloads up to tens of thousands of chips over inter-chip interconnect (ICI) within a single pod, or across multiple pods over a data center network,” Vahdat explained last year in a blog post co-written with his colleague Mark Lohmeyer.
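The scaling idea Vahdat describes builds on JAX's device-sharding model. The sketch below is illustrative only, assuming a stock JAX install: it shards a batch across whatever devices are visible (TPU chips on a pod slice, a single CPU on a laptop) and runs one jit-compiled function over the shards. It demonstrates the single-slice sharding primitive, not Google's Multislice API itself.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D mesh over all visible devices and name its axis "data".
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))
batch_sharding = NamedSharding(mesh, PartitionSpec("data"))

# Place an 8-element batch so it is split along the "data" axis.
batch = jax.device_put(jnp.arange(8.0), batch_sharding)

@jax.jit
def scale(x):
    # Compiled once by XLA; each device computes its own shard.
    return x * 2.0

print(scale(batch).sum())  # 56.0
```

With Multislice, the same sharded program spans multiple pods, with JAX and the runtime routing traffic over ICI within a pod and the data-center network between pods.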
Open source support
Trillium will support open source libraries such as JAX, PyTorch/XLA, and Keras 3, Vahdat said. “Support for JAX and XLA means that declarative model description written for any previous generation of TPUs maps directly to the new hardware and network capabilities of Trillium TPUs,” he wrote, adding that Google has partnered with Hugging Face on Optimum-TPU for streamlined model training and serving.
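The portability claim can be seen in a minimal JAX sketch. The model below is hypothetical (names and shapes are illustrative): the function contains no hardware-specific code, so XLA can compile the same declarative description for CPU, GPU, or any TPU generation, including Trillium.

```python
import jax
import jax.numpy as jnp

@jax.jit
def predict(params, x):
    # A declarative description: pure array math, no device details.
    w, b = params
    return jnp.tanh(x @ w + b)

params = (jnp.ones((3, 2)), jnp.zeros((2,)))  # toy weights and bias
x = jnp.ones((4, 3))                          # toy batch of 4 inputs
print(predict(params, x).shape)  # (4, 2)
```

The same `predict` function retargets to new hardware simply by running it where those devices are visible; no model code changes are required.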