Organizations Turn to Open-Source Software to Improve HPC and AI Applications
High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations using HPC reported they are running AI and machine learning workloads or plan to implement them soon. The trend is global, and it is contributing to worldwide HPC spending that is poised to exceed $59.65 billion in 2025, according to Grand View Research.
At the same time, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are asked to perform time-consuming manual configurations and reconfigurations of servers, storage, and networking as they move nodes between clusters to meet the resource demands of different workloads. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources.
The answer? For many organizations, it’s a greater reliance on open-source software.
Reaping the Benefits of Open-Source Software & Communities
Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.
Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open-source and publicly available workload management software such as Slurm and Kubernetes, along with the libraries, frameworks, operators, services, platforms, and applications that run on top of them.
The Omnia software stack was created to simplify and speed the process of deploying and managing environments for mixed workloads, including simulation, high-throughput computing, machine learning, deep learning, and data analytics, by abstracting away the manual steps that slow provisioning and lead to configuration errors.
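For readers less familiar with the workload managers Omnia deploys, here is a minimal Python sketch of submitting a batch job to a Slurm-managed cluster. It assumes the `sbatch` command-line tool is available on the node where the script runs; the job script contents, resource requests, and the `train.py` name are purely illustrative.

```python
# Minimal sketch: submit a batch job to a Slurm-managed cluster of the kind
# Omnia can provision. Assumes the `sbatch` CLI is installed and on PATH;
# the resource requests and training script below are illustrative only.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#SBATCH --job-name=demo-train
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:10:00
srun python train.py   # hypothetical workload
"""

def submit_job(script_text: str) -> str:
    """Write the job script to a temp file, submit it with sbatch,
    and return Slurm's confirmation line (e.g. 'Submitted batch job 12345')."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(
        ["sbatch", path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_job(JOB_SCRIPT))
```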
Members of the open-source software community contribute code and documentation updates in response to feature requests and bug reports. They also host open forums for conversations about feature ideas and potential implementation approaches. As the open-source project grows and expands, so does its technical governance committee, with representation from top contributors and stakeholders.
“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort… We’re paving not just the path for ASU but the path for advanced computing.”
ASU teams also use Open OnDemand, an open-source HPC portal that lets users log in to an HPC cluster either through a traditional Secure Shell Protocol (SSH) terminal or through Open OnDemand's web-based interface. Once connected, they can upload and download files; create, edit, submit, and monitor jobs; run applications; and more from a web browser, in a cloud-like experience with no client software to install and configure.
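For comparison with the browser-based portal, the sketch below shows the traditional SSH route in Python: connecting to a cluster login node and listing the user's queued jobs. It assumes the third-party `paramiko` library is installed; the hostname, username, and key path are placeholders.

```python
# Minimal sketch of the traditional SSH workflow that Open OnDemand's web
# interface complements: connect to a login node and list queued jobs.
# Requires `pip install paramiko`; host, user, and key path are placeholders.
import os
import paramiko

def list_my_jobs(host: str, user: str, key_path: str) -> str:
    """Open an SSH session to the cluster login node and return the
    output of `squeue` filtered to the given user."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user,
                   key_filename=os.path.expanduser(key_path))
    try:
        _, stdout, _ = client.exec_command(f"squeue -u {user}")
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(list_my_jobs("login.cluster.example.edu", "researcher",
                       "~/.ssh/id_ed25519"))
```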
Some Hot New Features of Open-Source Software for HPC
Here is a sampling of some of the latest features in open-source software available to HPC application developers.
- Dynamically change a user’s environment by adding directories to, or removing them from, the PATH environment variable. This makes it easier to run specific software in specific folders without permanently updating the PATH environment variable and rebooting. It’s especially useful when third-party applications point to conflicting versions of the same libraries or objects. (A minimal sketch of this technique follows this list.)
- Choice of host operating system (OS) provisioned on bare metal. The speed and accuracy of applications are inherently affected by the host OS installed on the compute node. Provisioning bare-metal nodes with different operating systems in the lab lets teams choose the OS that is working optimally at any given time and is best suited to a particular HPC application.
- Provide low-cost block storage that natively uses Network File System (NFS). This adds flexible scalability and is ideal for persistent, long-term storage.
- Telemetry and visualization on Red Hat Enterprise Linux. Users of Red Hat Enterprise Linux can take advantage of telemetry and visualization features to view power consumption, temperatures, and other operational metrics.
- BOSS RAID controller support. Redundant array of independent disks (RAID) arrays use multiple drives to split the I/O load and are often preferred by HPC developers.
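To illustrate the PATH-manipulation item above, here is a minimal Python sketch that prepends a tool's directory to PATH for a single child process only, so conflicting versions installed elsewhere on the system are not picked up. The directory and command names are hypothetical.

```python
# Minimal sketch for the PATH item above: prepend a specific tool directory
# to PATH for one child process, leaving the user's shell environment
# untouched. The directory and command names are hypothetical.
import os
import subprocess

def run_with_path(extra_dir: str, command: list[str]) -> subprocess.CompletedProcess:
    """Run `command` with `extra_dir` prepended to PATH so the binary
    located there is found before any conflicting copy."""
    env = os.environ.copy()
    env["PATH"] = extra_dir + os.pathsep + env.get("PATH", "")
    return subprocess.run(command, env=env, check=True)

if __name__ == "__main__":
    # Prefer the solver installed under /opt/solver-2.1/bin, even if an
    # older version appears earlier on the default PATH.
    run_with_path("/opt/solver-2.1/bin", ["solver", "--version"])
```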
The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.
For more information on the Omnia community, which includes Dell, Intel, university research environments, and many others, or to contribute to the project, visit the Omnia GitHub repository.
***
Intel® Technologies Move Analytics Forward
Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.
Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.