Intel shifts to multiarchitecture model
There was a time when Intel was all-x86, all the time, everywhere.
Not anymore.
Last week Intel held its annual Architecture Day with previews of multiple major upcoming architectures beyond x86. For once, it’s not hyperbole when the company says these are some of the “biggest shifts in a generation.”
And it’s not just new architectures or more and faster cores; it’s new designs and whole new ways of doing things. Instead of simply packing more cores onto a smaller die, Intel is switching to a hybrid architecture that adds low-power cores, similar to what Arm-based chip makers have been doing for years on mobile devices.
Intel’s announcements covered both client and server, but we’ll stick with the server news here. Sapphire Rapids is the codename for Intel’s next generation of Xeon Scalable processors and the first to feature the company’s Performance Core microarchitecture.
The Performance Core is a forthcoming microarchitecture that emphasizes low latency and single-threaded performance. A smarter branch predictor improves the flow of code through the instruction pipeline, and eight decoders enable greater parallelism in instruction processing. A wider back end adds ports for more, and faster, parallel execution.
Sapphire Rapids will also offer larger private and shared caches, increased core counts, and support for DDR5 memory, PCI Express Gen 5, the next generation of Optane memory, Compute Express Link (CXL) 1.1, and on-package High Bandwidth Memory (HBM).
Sapphire Rapids will add several new technologies not used in previous generations of the Xeon Scalable processor, such as Intel Accelerator Interfacing Architecture (AIA) to improve signaling to accelerators and devices; Intel Advanced Matrix Extensions (AMX), a workload acceleration engine specifically for tensor processing used in deep learning algorithms; and Intel Data Streaming Accelerator (DSA), which is meant to offload common data movement tasks from the CPU.
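AMX works on small on-chip matrix “tiles” and multiply-accumulates them, which is the core operation in deep-learning workloads. As a rough illustration only (this is plain Python showing the tiling pattern, not Intel’s API or instruction set, and the tile size here is arbitrary), a tile-blocked matrix multiply looks like this:

```python
def blocked_matmul(a, b, tile=2):
    """Multiply matrices a (n x k) and b (k x m) tile by tile.

    Illustrative only: hardware like AMX performs the per-tile
    multiply-accumulate step in a single instruction; here each
    tile is accumulated with ordinary Python loops.
    """
    n, k, m = len(a), len(b), len(b[0])
    c = [[0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                # Accumulate one tile of C from one tile of A and
                # one tile of B (the AMX-style tile-matmul step).
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for p in range(p0, min(p0 + tile, k)):
                            c[i][j] += a[i][p] * b[p][j]
    return c
```

The payoff of this blocking in hardware is that each small tile stays in fast on-chip registers while it is reused, rather than being refetched from memory for every multiply.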
Introducing the IPU
Intel also announced a trio of new Infrastructure Processing Units (IPUs) designed around data movement, specifically for cloud and communications services. The IPUs combine Intel Xeon-D processor cores, Agilex FPGAs, and Intel Ethernet technologies. All are meant to reduce network overhead and increase throughput.
IPUs are also designed to separate the cloud infrastructure from tenant or guest software, so guests can fully control the CPU with their software, while service providers maintain control of the infrastructure and root-of-trust.
The first of the three is Oak Springs Canyon, which features Intel Xeon-D cores, an Agilex FPGA, and dual 100G Ethernet network interfaces. It supports Open vSwitch and enables offloading network virtualization and storage functions, such as NVMe over Fabrics and RoCE v2, to reduce CPU overhead.
Second is the Intel N6000 Acceleration Development Platform, codenamed Arrow Creek, a 100G SmartNIC designed for use with Intel Xeon-based servers. It features an Intel Agilex FPGA and Intel Ethernet 800 Series controller for high-performance 100G network acceleration. Arrow Creek is geared toward Communication Service Providers (CoSPs).
Finally, there is a new ASIC-based IPU, codenamed Mount Evans, the first of its type from Intel. Intel says it designed Mount Evans in cooperation with a top cloud service provider. Mount Evans is built around Intel’s packet-processing engine, instantiated in an ASIC. The ASIC supports many use cases, such as vSwitch offload, firewalls, and virtual routing, and emulates NVMe devices at very high IOPS rates by extending the Optane NVMe controller.
Mount Evans features up to 16 Arm Neoverse N1 cores, with a dedicated compute cache and up to three memory channels. The ASIC can support up to four host Xeons, with 200Gbps of full-duplex bandwidth between them.
This is only the beginning of the news out of Architecture Day. More will come.
Copyright © 2021 IDG Communications, Inc.