Computing that’s purpose-built for a more energy-efficient, AI-driven future
In parts one and two of this AI blog series, we explored the strategic considerations and networking needs for a successful AI implementation. In this blog I focus on data center infrastructure with a look at the computing power that brings it all to life.
Just as humans use patterns as mental shortcuts for solving complex problems, AI is about recognizing patterns to distill actionable insights. Now think about how this applies to the data center, where patterns have developed over decades. There are cycles where software is used to solve problems, then hardware innovations enable new software to focus on the next problem. The pendulum swings back and forth repeatedly, with each swing representing a disruptive technology that redefines how developers and data center infrastructure and operations teams get work done.
AI is clearly the latest pendulum swing and disruptive technology that requires advancements in both hardware and software. GPUs are all the rage today thanks to the public debut of ChatGPT, but GPUs have been around for a long time. I was a GPU user back in the 1990s because these powerful chips enabled me to play 3D games that required fast processing to calculate things like where all those polygons should be in space, updating the visuals with each frame.
In technical terms, GPUs can process many parallel floating-point operations faster than standard CPUs, and in large part that is their superpower. It’s worth noting that many AI workloads can be optimized to run on a high-performance CPU. But unlike CPUs, GPUs are free from the responsibility of making all the other subsystems within a server work with each other. Software developers and data scientists can leverage software like CUDA and its development tools to harness the power of GPUs and use all that parallel processing capability to solve some of the world’s most complex problems.
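To make that parallelism concrete, here is a minimal sketch that runs the same floating-point workload on a CPU and then on a GPU. It assumes a machine with an NVIDIA GPU and PyTorch built with CUDA support; the toolchain is my choice for illustration, and any GPU-accelerated library would show the same effect:

```python
# Illustrative only: the same floating-point workload on CPU vs. GPU.
# Assumes PyTorch with CUDA support on a machine with an NVIDIA GPU.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time an n x n matrix multiply (~2*n^3 floating-point ops) on a device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup work before timing starts
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```

The GPU wins here not because its clock is faster, but because the multiply decomposes into thousands of independent floating-point operations it can execute in parallel.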
A new way to look at your AI needs
Unlike a single, homogeneous infrastructure use case like virtualization, there are multiple patterns within AI that come with different infrastructure needs in the data center. Organizations can think about AI use cases in terms of three main buckets (a brief code sketch follows the list):
- Build the model, for large foundational training.
- Optimize the model, for fine-tuning a pre-trained model with specific data sets.
- Use the model, for inferencing insights from new data.
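Here is a minimal sketch of the three buckets using the Hugging Face transformers library; the toolchain and the model name are assumptions of mine for illustration, not something this post prescribes:

```python
# Illustrative sketch of the three AI buckets using Hugging Face transformers
# (an assumed toolchain; "gpt2" is an example model, not a recommendation).
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# 1. Build the model: train a brand-new model from scratch on a large corpus.
#    This is the bucket that demands many GPUs across many servers.
config = AutoConfig.from_pretrained("gpt2")           # architecture definition only
new_model = AutoModelForCausalLM.from_config(config)  # randomly initialized weights

# 2. Optimize the model: start from pre-trained weights and fine-tune on your
#    own data set -- often feasible in a single multi-GPU box.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
# ...a fine-tuning loop over your domain-specific data would go here...

# 3. Use the model: inference insights from new data with trained weights.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("The data center of the future", return_tensors="pt")
outputs = base_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```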
The least demanding workloads are optimizing and using the model, because most of that work can be done in a single box with multiple GPUs. The most intensive, disruptive, and expensive workload is building the model. In general, if you’re looking to train these models at scale, you need an environment that can support many GPUs across many servers, networked together so that individual GPUs behave as a single processing unit and solve highly complex problems faster.
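Here is a minimal sketch of that many-GPUs-as-one pattern using PyTorch’s DistributedDataParallel with NCCL as the GPU-to-GPU communication backend; the framework, model, and launch command are illustrative assumptions on my part:

```python
# Illustrative sketch: many GPUs across many servers acting as one unit.
# Assumes PyTorch; launched on each server with something like
#   torchrun --nnodes=4 --nproc_per_node=8 train.py
# so that rank and world-size environment variables are set everywhere.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL handles GPU-to-GPU communication; this is exactly where the
    # network between servers becomes critical for training performance.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a real model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = ddp_model(x).sum()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across every GPU over the network
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Every backward pass synchronizes gradients across all GPUs on all servers, which is why training at scale stresses the data center network as much as the compute itself.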
This makes the network critical for training use cases and introduces all kinds of challenges to data center infrastructure and operations, especially if the underlying facility was not built for AI from inception. And most organizations today are not looking to build new data centers.
Therefore, organizations building out their AI data center strategies will have to answer important questions like:
- What AI use cases do you need to support, and, based on the business outcomes you need to deliver, where do they fall among the build the model, optimize the model, and use the model buckets?
- Where does the data you need reside, and where is the best location to run these use cases to optimize outcomes and minimize costs?
- Do you need to deliver more power? Are your facilities able to cool these types of workloads with existing methods, or do you require new methods like water cooling?
- Finally, what is the impact on your organization’s sustainability goals?
The power of Cisco Compute solutions for AI
As the general manager and senior vice president for Cisco’s compute business, I’m happy to say that Cisco UCS servers are designed for demanding use cases like AI fine-tuning and inferencing, VDI, and many others. With its future-ready, highly modular architecture, Cisco UCS empowers our customers with a blend of high-performance CPUs, optional GPU acceleration, and software-defined automation. This translates to efficient resource allocation for diverse workloads and streamlined management through Cisco Intersight. You can say that with UCS, you get the muscle to power your creativity and the brains to optimize its use for groundbreaking AI use cases.
But Cisco is one player in a wide ecosystem. Technology and solution partners have long been key to our success, and this is certainly no different in our strategy for AI. That strategy revolves around driving maximum customer value by harnessing the full long-term potential of each partnership, which enables us to combine the best of compute and networking with the best tools in AI.
This is the case in our strategic partnerships with NVIDIA, Intel, AMD, Red Hat, and others. One key deliverable has been the steady stream of Cisco Validated Designs (CVDs) that provide pre-configured solution blueprints that simplify integrating AI workloads into existing IT infrastructure. CVDs eliminate the need for our customers to build their AI infrastructure from scratch. This translates to faster deployment times and reduced risks associated with complex infrastructure configurations and deployments.
Another key pillar of our AI computing strategy is offering customers a diversity of solution options, including standalone blade and rack-based servers, converged infrastructure, and hyperconverged infrastructure (HCI). These options enable customers to address a variety of use cases and deployment domains throughout their hybrid multicloud environments, from centralized data centers to edge endpoints. Here are just a couple of examples:
- Converged infrastructures with partners like NetApp and Pure Storage offer a strong foundation for the full lifecycle of AI development, from training AI models to day-to-day operation of AI workloads in production environments. For highly demanding AI use cases like scientific research or complex financial simulations, our converged infrastructures can be customized and upgraded to provide the scalability and flexibility needed to handle these computationally intensive workloads efficiently.
- We also offer an HCI option through our strategic partnership with Nutanix that is well-suited for hybrid and multi-cloud environments through the cloud-native designs of Nutanix solutions. This allows our customers to seamlessly extend their AI workloads across on-premises infrastructure and public cloud resources, for optimal performance and cost efficiency. This solution is also ideal for edge deployments, where real-time data processing is crucial.
AI infrastructure with sustainability in mind
Cisco’s engineering teams are focused on embedding energy management, software and hardware sustainability, and business model transformation into everything we do. Together with energy optimization, these innovations have the potential to help more customers accelerate their sustainability goals.
Working in tandem with engineering teams across Cisco, Denise Lee leads Cisco’s Engineering Sustainability Office with a mission to deliver more sustainable products and solutions to our customers and partners. With electricity usage from data centers, AI, and the cryptocurrency sector potentially doubling by 2026, according to a recent International Energy Agency report, we are at a pivotal moment where AI, data centers, and energy efficiency must come together. AI data center ecosystems must be designed with sustainability in mind. Denise outlined the systems design thinking that highlights the opportunities for data center energy efficiency across performance, cooling, and power in her recent blog, Reimagine Your Data Center for Responsible AI Deployments.
Recognition for Cisco’s efforts has already begun. Cisco’s UCS X-Series received the Sustainable Product of the Year award from the SEAL Awards and an Energy Star rating from the U.S. Environmental Protection Agency. And Cisco continues to focus on critical features across our portfolio, aligning on product sustainability requirements to address the demands on data centers in the years ahead.
Look ahead to Cisco Live
We are just a couple of months away from Cisco Live US, our premier customer event and showcase for the many different and exciting innovations from Cisco and our technology and solution partners. We will be sharing many exciting Cisco Compute solutions for AI and other use cases. Our Sustainability Zone will feature a virtual tour through a modernized Cisco data center where you can learn about Cisco compute technologies and their sustainability benefits. I’ll share more details in my next blog closer to the event.
Read more about Cisco’s AI strategy with the other blogs in this three-part series on AI for Networking: