Intel announces 144-core Xeon processor
Intel has announced a new processor with 144 cores designed for simple data-center tasks in a power-efficient manner.
Called Sierra Forest, the Xeon processor is part of the Intel E-Core (Efficiency Core) lineup that forgoes advanced features such as AVX-512 that require more powerful cores. AVX-512 is Intel Advanced Vector Extensions 512, “a set of new instructions that can accelerate performance for workloads and usages such as scientific simulations, financial analytics, artificial intelligence (AI)/deep learning, 3D modeling and analysis, image and audio/video processing, cryptography and data compression,” according to Intel.
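As a rough illustration of the kind of vector code that relies on AVX-512 (and that E-Core parts such as Sierra Forest therefore forgo), here is a minimal sketch using the standard `_mm512_*` intrinsics from `<immintrin.h>`. The function name and data layout are illustrative, not Intel's example.

```cpp
#include <immintrin.h>  // AVX-512 intrinsics; needs -mavx512f and AVX-512-capable hardware

// Adds two float arrays 16 elements at a time using 512-bit registers.
// Assumes n is a multiple of 16 and the pointers are valid; purely illustrative.
void add_avx512(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);                 // load 16 floats from a
        __m512 vb = _mm512_loadu_ps(b + i);                 // load 16 floats from b
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));   // store the 16 sums
    }
}
```

On cores without AVX-512, the same loop would fall back to narrower vectors or scalar code, which is the trade-off the E-Core line accepts in exchange for power efficiency.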
Sierra Forest signals a shift for Intel that splits its data-center product line into two branches, the E-Core and the P-Core (Performance Core), which is the traditional Xeon data-center design that uses high-performance cores.
Sierra Forest’s 144 cores reflect Intel’s belief that x86 CPU revenue will track core counts more closely than socket counts in the coming years, said Sandra Rivera, executive vice president and general manager of Intel’s data center and AI group, speaking at a briefing for data-center and AI investors. She said Intel sees a market opportunity of more than $110 billion for its data-center and AI silicon business by 2027.
In a way, Sierra Forest is not unlike what Ampere is doing with its Altra processors and AMD is doing with its Bergamo line, with lots of small, efficient cores for simpler workloads. Like Ampere, Intel is targeting the cloud where lots of virtual machines perform non-intensive tasks like running containers.
Intel plans to release Sierra Forest in the first half of 2024.
Intel also announced Sierra Forest’s successor, Clearwater Forest. It didn’t go into detail beyond a 2025 release timeframe and the fact that the chip will be built on the 18A process. This will be the first Xeon on the 18A process, which is roughly equivalent to 1.8 nanometers, and it indicates that Intel is on track to deliver on the roadmap set down by CEO Pat Gelsinger in 2021.
Emerald Rapids and Granite Rapids Xeons on schedule
Intel’s newest Xeon, Sapphire Rapids, was released in January and already has Q4 2023 set as the release date for its successor, Emerald Rapids. It will offer faster performance, better power efficiency, and more cores than Sapphire Rapids, and will be socket-compatible with it. That means faster validation by OEM partners making servers, since they can use the current socket.
After that comes Granite Rapids in 2024. During the briefing, Rivera demoed a dual-socket server running a pre-release version of Granite Rapids, with an incredible 1.5 TB/s of DDR5 memory bandwidth. For perspective, Nvidia’s Grace CPU superchip has 960 GB/s and AMD’s Genoa generation of Epyc processor has a theoretical peak of 920 GB/s.
The demo featured for the first time a new type of memory Intel developed with SK Hynix called DDR5-8800 Multiplexer Combined Rank (MCR) DRAM. This memory is bandwidth-optimized and is much faster than traditional DRAM. MCR starts at 8000 megatransfers (MT) per second, well above the 6400 MT/s of DDR5 and 3200 MT/s of DDR4.
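For a rough sense of how those MT/s figures relate to the TB/s numbers above: each DDR5 transfer moves 64 bits (8 bytes) per channel, so per-channel bandwidth is the transfer rate times 8 bytes. The sketch below works that arithmetic through; the 12-channels-per-socket figure is an assumption for illustration, not a number Intel gave at the briefing, and the result is a theoretical peak rather than the measured 1.5 TB/s from the demo.

```cpp
#include <cstdio>

int main() {
    // One DDR5 transfer moves 64 bits = 8 bytes per channel.
    const double bytes_per_transfer = 8.0;

    const double mcr_transfers_per_sec = 8800e6;  // DDR5-8800 MCR DIMM
    const double per_channel_gbs =
        mcr_transfers_per_sec * bytes_per_transfer / 1e9;  // ~70.4 GB/s per channel

    // Assumed for illustration: 12 memory channels per socket, 2 sockets.
    const int channels_per_socket = 12;
    const int sockets = 2;
    const double system_peak_tbs =
        per_channel_gbs * channels_per_socket * sockets / 1e3;  // ~1.69 TB/s theoretical

    std::printf("per channel: %.1f GB/s, system peak: %.2f TB/s\n",
                per_channel_gbs, system_peak_tbs);
    return 0;
}
```

Measured bandwidth always lands below the theoretical peak, which is consistent with the 1.5 TB/s shown in the demo.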
Intel also discussed non-x86 parts, like FPGAs, GPUs, and purpose-built accelerators. Intel said it would launch 15 new FPGAs in 2023, the most ever in a single year. It did not go into detail on how the FPGAs would be positioned in the marketplace.
Is Intel competing with CUDA?
One of Nvidia’s key advantages has been CUDA, its GPU programming language, which allows developers to program directly to the GPU rather than through libraries. AMD and Intel have had no alternative up to now, but it sounds like Intel is working on one.
At the briefing, Greg Lavender, Intel’s Chief Technology Officer and general manager of the software and advanced technology group, set down his software vision for the company. “One of my priorities is to drive a holistic and end-to-end systems-level approach to AI software at Intel. We have the accelerated heterogeneous hardware ready today to meet customer needs. The key to unlocking that value in the hardware is driving scale through software,” he said.
To achieve “the democratization of AI,” Intel is developing an open AI software ecosystem, he said, contributing software optimizations upstream to AI and machine-learning frameworks such as PyTorch and TensorFlow to promote programmability, portability, and ecosystem adoption.
In May 2022, Intel released an open-source toolkit called SYCLomatic to help developers more easily migrate their code from CUDA to its Data Parallel C++ for Intel platforms. Lavender said the tool is typically able to migrate 90% of CUDA source code automatically to C++ source code, leaving very little for programmers to tune manually.
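To give a feel for what such a migration targets, here is a minimal sketch of a vector-add kernel expressed in SYCL/Data Parallel C++, the style of code a CUDA kernel maps onto. This is not SYCLomatic's actual output; the names and structure are illustrative.

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), out(n, 0.0f);

    sycl::queue q;  // submits work to a default-selected device (GPU, CPU, ...)

    {
        // Buffers make the host vectors visible to the device.
        sycl::buffer<float> buf_a(a), buf_b(b), buf_out(out);

        // Roughly the SYCL counterpart of a one-dimensional CUDA kernel launch:
        // each work-item handles one element, as one CUDA thread would.
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc_a(buf_a, h, sycl::read_only);
            sycl::accessor acc_b(buf_b, h, sycl::read_only);
            sycl::accessor acc_out(buf_out, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                acc_out[i] = acc_a[i] + acc_b[i];
            });
        });
    }  // buffer destruction copies results back to 'out'

    return 0;
}
```

The appeal of this model for Intel is that the same source can target its GPUs, CPUs, and FPGAs, rather than being tied to one vendor's hardware the way CUDA code is.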
Copyright © 2023 IDG Communications, Inc.