Intel: Under attack, fighting back on many fronts


At first glance, Intel doesn’t look like a company under siege. In its last fiscal year, it recorded $77.8 billion in sales and $20 billion in profit. Its market capitalization is $220 billion as of mid-September 2021.

And yet it is. When you’re the leader, all of your competition is gunning for you. Intel is wrestling with a loss of leadership in manufacturing and process nodes, losing share to a resurgent AMD, fending off an unrelenting Nvidia in the battle for AI dominance, still smarting from the Atom processor’s spectacular failure against Arm in the mobile market, and working under its third CEO in three years.

But Intel revels in the competition. “Our success in so many markets makes us targets for lots of companies,” said Lisa Spelman, corporate vice president and general manager of the Xeon and memory group. “So it’s not a surprise that we have competitors that want a piece of that.” 

“I wouldn’t go as far as to say Intel is ‘under siege,’ but it certainly has been under attack from many angles, including internal disarray. I believe the latter is over, as new CEO Pat Gelsinger is rapidly fixing the internal issues,” said Glenn O’Donnell, a vice president and research director with Forrester Research.

The other factors will be harder to fix.

It’s certainly safe to say its competition is fiercer than ever, as AMD has more momentum, Nvidia GPUs are on a tear, the Arm architecture is now a prominent alternative, and myriad other chips like communications interfaces, microcontrollers, and DSPs are all battlegrounds.

Intel got a badly needed shot in the arm with the February return of Gelsinger as its CEO. Gelsinger was long lauded as one of the most important engineers in the company’s history and was considered a leading candidate for the top spot when he was pushed out in 2009.

Gelsinger has hit the ground running hard since his return, making a series of moves to repair Intel’s lost luster.

“I think he’s making the right moves. And, you know, he’s the new guy. He has license to do bold moves, and they have been bold. He’s still got a long road ahead of him, but I think he’s off on the right foot,” said Shane Rau, research vice president for computing and semiconductors at IDC.

Some of the projects Gelsinger is championing actually began under his predecessor, Bob Swan, who didn’t quite get the credit he deserved given that he was the company’s former CFO – a money guy, not an engineer. “Bob helped us through a lot of challenging times,” said Spelman.

Since leaving Intel back in 2009, Gelsinger has gained broad experience that is serving him well now as he corrects the course of the company. He had a stint as COO of EMC and took over as CEO of VMware, tripling its revenues and expanding it well beyond its core hypervisor technology. That experience better prepared him for his new job than if he’d stayed at Intel, O’Donnell said. “It gave him a perspective, not just of how to lead a major tech company, but also gave him a perspective outside of Intel,” O’Donnell said, which will help Gelsinger home in on what needs to be done. “So I think it was very valuable.”

IDM 2.0 shakes up the manufacturing side

In 2008, AMD split into two companies, one that developed chips and one that made them, called GlobalFoundries. Struggling to compete with Intel, the company could no longer afford to maintain its fabrication facilities in New York and Dresden, Germany, and sold the fabs to two investment funds owned by the Abu Dhabi government. The notion of a chip company being “fabless,” where it just designed the chips but subcontracted out the manufacturing to a third party, was gaining popularity, and Nvidia was proving the notion could work.

With Intel falling behind on manufacturing, Wall Street and other analysts began to whisper that perhaps Intel should do the same. Instead, Intel initiated a strategy called integrated device manufacturing 2.0, or IDM 2.0. For starters, IDM 2.0 involves a $20 billion investment in two new chip fabs at its facility in Chandler, Arizona. Beyond that, IDM 2.0 consists of three components:

  • Intel’s global, internal factory network. Intel has fabs in the U.S., Europe, Central America, and Asia, and with IDM 2.0 it remains committed to making its own chips rather than going fabless.
  • Expanded use of third-party foundry capacity. Intel will work with other foundries, including Taiwanese giant TSMC, to farm out the manufacture of some of its chips.
  • Building a world-class foundry business called Intel Foundry Services. Intel plans to make chips for other companies, even some it competes with. This is new for Intel, and it has already lined up two customers, both of which are competitors: Amazon Web Services and Qualcomm. AWS makes its own Arm-based server processors called Graviton, while Qualcomm competes in the 5G chip space.

Spelman said Intel had done manufacturing for other parties before, “however it’s been more opportunistic. With IDM 2.0, it’s more strategic where we talk with customers about products and solutions and opportunities to build solutions.”

Rau says IDM 2.0 also encompasses additional technology factors, like leadership software and the foundry business; the foundry business, he feels, will probably be the most challenging component of the IDM 2.0 strategy.

“There is a hunger for leadership process, and manufacturing, across the industry,” he said. “The connection has to be between the Intel foundry company and its customers, not Intel itself. And that means bringing different skills, different IP portfolios, different designs to bear. So the foundry can enable their customers to be successful.”

Spelman says IDM 2.0 isn’t just a business alliance between chipmakers; it has benefits for end customers, too. “For a customer that is providing an infrastructure as a service, they may be able to drive some unique IP or a unique customization or simply just a different data flow that works best with their environment,” she said. That could mean shaving time off latencies, improving the density of the compute they can provide, or improving its sustainability – all of which can benefit the end user.

Many more architectures

Up until the last decade, Intel was exclusively an x86 show as far as compute was concerned. It didn’t use x86 in things like communications and networking chips, but everywhere else it did (the doomed Itanium processor notwithstanding). Atom, its attempt at a mobile processor, was essentially a whittled-down x86 core. Its first two attempts at GPUs, Larrabee and Xeon Phi, were x86-based. They all failed.

Then the tide turned. Intel acquired FPGA vendor Altera in 2015, bought AI chip makers Nervana and Habana Labs as well as networking specialist Barefoot Networks, and lured away a top AMD GPU architect to finally build a proper GPU.

The multiarchitecture approach is not unique to Intel. AMD is in the process of acquiring FPGA maker Xilinx, Nvidia has purchased networking specialist Mellanox and is trying to buy Arm Holdings, and Marvell has gone from making hard-drive controllers to Arm server processors and smart networking adapters. No one is offering a single architecture any more.

“That is a religious argument in a company like Intel, but as I like to say, Gelsinger has to be willing to eat his young, so to speak, to steer Intel into a different direction. That doesn’t mean they move away from x86, but they have to look at x86 plus,” said O’Donnell.

More recently, Intel added a new RISC architecture to the family. In June it struck a deal with RISC-V designer SiFive to create a new development platform, called Horse Creek, that will feature SiFive’s new Performance P550 cores. Intel has also signed up SiFive as a customer of Intel Foundry Services. That same month, Bloomberg reported that Intel had tried to buy SiFive for $2 billion. However, Intel remains tight-lipped on the subject and declined to comment beyond confirming the alliance with SiFive.

O’Donnell said that while the mobile-computing ship has sailed, there is still a big opportunity for Intel in edge computing with a RISC-based chip. “One of the biggest criticisms of x86 is the power consumption,” he said. “But IoT devices and edge-computing devices are going to be massive; that’s a huge growth market. And power consumption matters there. And it’s currently anybody’s market to win.”

Introducing the IPU

Intel has also introduced a new line of smart networking controllers, beyond the SmartNICs it already offered, that it calls infrastructure-processing units (IPUs). The idea behind the SmartNIC is to offload the work of routing network traffic from the CPU, freeing it to do its job of processing data. Unlike traditional dumb NICs, SmartNICs have some form of programmability to perform tasks, such as packet processing, that those controllers cannot handle.

IPUs are SmartNICs taken to the next step. Up until now, SmartNICs have been an on-prem play, but IPUs are specifically designed around data movement for the cloud and communications services. IPUs are designed to separate the cloud infrastructure from tenant or guest software, so guests can fully control the CPU with their software, while service providers maintain control of the infrastructure and root-of-trust.

Two of the IPUs feature Intel Xeon-D and Agilex FPGA cores to do the processing. The third, codenamed Mount Evans, is a first of its type from Intel. Designed in cooperation with top cloud service partners, Mount Evans is based on Intel’s packet-processing engine and contains up to 16 Arm Neoverse N1 cores, with a dedicated compute cache and up to three memory channels. The ASIC can support up to four host Xeons, with 200Gb/s of full duplex bandwidth between them.

XPU and oneAPI ties them all together

Each processor architecture has strengths and weaknesses, and each is best suited to specific use cases. Intel’s XPU project, announced last year, seeks to offer a unified programming model for all types of processor architectures and to match every application to its optimal architecture. XPU means you can have x86 CPUs, FPGAs, AI and machine-learning processors, and GPUs all mixed into your network, and the app is compiled for the processor best suited to the job.

That is done through the oneAPI project, which goes hand-in-hand with XPU. XPU is the silicon part, while oneAPI is the software that ties it all together. oneAPI is a heterogeneous programming model with code written in common languages such as C, C++, Fortran, and Python, and standards such as MPI and OpenMP.

The oneAPI Base Toolkit includes compilers, performance libraries, and analysis and debug tools for general-purpose computing, HPC, and AI. It also provides a compatibility tool that aids in migrating code written in Nvidia’s CUDA to Data Parallel C++ (DPC++), the C++-based language Intel uses to program its GPUs and other accelerators.
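To make the model concrete, here is a minimal, hypothetical sketch of what oneAPI-style code looks like in DPC++, Intel’s SYCL-based extension of C++. It is an illustration, not Intel sample code: the vector-addition kernel and every name in it are invented, but the overall pattern (a queue bound to whatever device the runtime finds, buffers that wrap host data, and a kernel submitted with parallel_for) is the heart of the programming model.

    // Illustrative DPC++/SYCL sketch (not Intel sample code): add two vectors
    // on whatever device the oneAPI runtime picks at run time.
    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      sycl::queue q;  // default selector: a GPU or other accelerator if present, else the CPU
      std::cout << "Running on: "
                << q.get_device().get_info<sycl::info::device::name>() << "\n";

      constexpr size_t N = 1024;
      std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);
      {
        // Buffers hand host data to the runtime, which manages any device copies.
        sycl::buffer<float, 1> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler &h) {
          auto A = bufA.get_access<sycl::access::mode::read>(h);
          auto B = bufB.get_access<sycl::access::mode::read>(h);
          auto C = bufC.get_access<sycl::access::mode::write>(h);
          // The same kernel source can be built for CPU, GPU, or FPGA back ends.
          h.parallel_for<class vec_add>(sycl::range<1>(N),
                                        [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
      }  // buffers go out of scope here, so results copy back to the host vectors

      std::cout << "c[0] = " << c[0] << "\n";  // expect 3
      return 0;
    }

Built with Intel’s dpcpp compiler, the same source can be retargeted at different accelerators without rewriting the kernel, which is the portability argument behind XPU.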

“This is a branded version of a trend that’s taking hold across semiconductor manufacturers,” said Rau. “That is, you can no longer be just CPU-centric, or GPU-centric. You just can’t have one major chip in your portfolio. And to succeed, you have to have multiple kinds of processing and interconnects to connect the multiple kinds of processors.”

Rau said the jury is still out on oneAPI because it’s still in development, something Spelman confirmed.

And she says never say never on the possibility of adding more architectures. “I wouldn’t ever say we’re done there. This industry moves too fast, it’s too dynamic. So you never know where we might end up investing next,” she said.
