Nvidia and Google Cloud collaborate to accelerate AI
Axion is based on Arm's Neoverse V2, a data-center-oriented chip design built on the Armv9 architecture. Arm doesn't make chips; it makes designs, and licensees then take those designs and do their own customizations, adding to the basic configuration they get from Arm. Some build smartphone chips (Apple, Qualcomm), and others build server chips (Ampere).
Google declined to comment on clock speeds, pricing, or core counts, but it did claim that Axion processors will deliver instances with up to 30% better performance than the fastest general-purpose Arm-based instances available in the cloud today, and up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances.
Axion is built on Titanium, a system of Google's own purpose-built custom silicon microcontrollers and tiered scale-out offloads. Titanium offloads operations like networking and security so that Axion processors can focus on computing the workload itself, much as a SuperNIC offloads networking traffic from the CPU.
Virtual machines based on Axion processors will be available in preview in the coming months, according to Google.
AI software services updated
In February, Google introduced Gemma, a suite of open models built from the same research and technology behind Google's Gemini generative AI service. Now, teams from Google and Nvidia have worked together to accelerate Gemma's performance with Nvidia's TensorRT-LLM, an open-source library for optimizing LLM inference.
Google Cloud has also made it easier to deploy Nvidia's NeMo framework for building custom generative AI applications across its platform, via Google Kubernetes Engine (GKE) and the Google Cloud HPC Toolkit. This lets developers jumpstart the development of generative AI models and rapidly deploy turnkey AI products.