Beyond AI: Building toward artificial consciousness – Part 2
Beyond the hype surrounding artificial intelligence (AI) in the enterprise lies the next step—artificial consciousness. The first piece in this practical AI innovation series outlined the requirements for this technology and delved deeply into compute power, the core capability necessary to enable artificial consciousness. This piece looks at the control and storage technologies and requirements that are not only necessary for enterprise AI deployment but also essential to achieving the state of artificial consciousness.
Controlling unprecedented compute power
While artificial consciousness is impossible without a dramatic rise in compute capacity, that is only part of the challenge. Organizations must harness that compute power with the proper control plane nodes—the familiar backbone of the high-availability server clusters necessary to deliver that power. This is essential for managing and orchestrating complex computing environments efficiently.
Utilizing standard 2U servers outfitted with a robust set of specifications ensures the reliability and performance needed for critical operations. Specifically, each server in this cluster must be equipped with at least 256GB of DDR5 memory and a 750GB NVMe PCIe Gen5 drive for rapid data processing and storage. Additionally, the control plane must include a suitable DPU for enhanced network and security functions, along with a controller powerful enough to provide advanced management capabilities. To effectively support a range of essential services, including Base Command Manager nodes, SLURM head nodes, and Kubernetes control plane nodes, a minimum of seven nodes per tenant is recommended.
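As a rough illustration of what that baseline looks like per tenant, the Python sketch below models the per-node minimums and the seven-node recommendation described above. The role names, dataclass fields, and validation rule are illustrative assumptions for this sketch, not a vendor specification.

```python
# Illustrative sketch only: a minimal model of the control-plane sizing
# described above. Role names and the validation rule are assumptions.
from dataclasses import dataclass

@dataclass
class ControlPlaneNode:
    role: str              # e.g. "bcm", "slurm-head", "k8s-control"
    memory_gb: int = 256   # DDR5, per-node minimum cited above
    nvme_gb: int = 750     # NVMe PCIe Gen5 drive, per-node minimum cited above
    has_dpu: bool = True   # DPU for network and security offload

def validate_tenant(nodes: list[ControlPlaneNode], minimum: int = 7) -> None:
    """Check a tenant's control plane against the minimums described above."""
    if len(nodes) < minimum:
        raise ValueError(f"need at least {minimum} control-plane nodes, got {len(nodes)}")
    for n in nodes:
        if n.memory_gb < 256 or n.nvme_gb < 750 or not n.has_dpu:
            raise ValueError(f"node with role '{n.role}' is under-specified")

# Example tenant: 3x Base Command Manager, 2x SLURM head, 2x Kubernetes control plane
tenant = (
    [ControlPlaneNode("bcm") for _ in range(3)]
    + [ControlPlaneNode("slurm-head") for _ in range(2)]
    + [ControlPlaneNode("k8s-control") for _ in range(2)]
)
validate_tenant(tenant)  # passes: seven nodes, each meeting the per-node minimums
```

How the seven nodes are split across Base Command Manager, SLURM, and Kubernetes roles will vary by deployment; the split above is just one plausible example.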
This configuration ensures a resilient and scalable infrastructure, capable not only of meeting the computational demands of real-time processing and decision-making but also of adapting to evolving environments and more complex tasks. Simply put, enterprises must deploy advanced memory technology along with state-of-the-art interconnect technology to prevent bottlenecks and keep scaling compute for AI workloads.
Storing an exponential increase in data
Finally, alongside the compute fabric is a storage system architecture meticulously engineered to cater to the rigorous demands of high-performance computing environments. This architecture integrates a strategic assembly of server types across 10 racks to ensure peak performance and scalability. Each rack is composed of four Vector servers, eight compute servers, and 12 storage servers, all configured to create a robust and adaptable infrastructure capable of efficiently managing and processing a wide array of data-intensive operations. The storage servers feature a single-socket configuration, 64GB of memory, a 960GB NVMe boot drive, and 61.44TB of NVMe SSD storage, designed for exceptional speed and reliability. The compute servers use the same base platform in a two-socket configuration with 30.72TB of NVMe SSD storage, emphasizing processing power. Completing the storage architecture are the Vector servers with 128GB of memory, a 960GB boot drive, and 30.72TB of NVMe SSD storage, making them well suited for complex vector computations.
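For a sense of scale, the short sketch below tallies the raw NVMe capacity implied by the rack layout and per-server drive figures quoted above. It counts raw drive capacity only; usable capacity after RAID or erasure coding and filesystem overhead will be lower.

```python
# Back-of-the-envelope sketch: raw NVMe capacity implied by the rack layout
# described above (10 racks; per-server NVMe SSD figures as quoted).
RACKS = 10
SERVERS_PER_RACK = {
    "storage": (12, 61.44),  # (count per rack, NVMe SSD TB per server)
    "compute": (8, 30.72),
    "vector":  (4, 30.72),
}

per_rack_tb = sum(count * tb for count, tb in SERVERS_PER_RACK.values())
total_tb = per_rack_tb * RACKS

print(f"Raw NVMe per rack:  {per_rack_tb:,.2f} TB")
print(f"Raw NVMe, 10 racks: {total_tb:,.2f} TB (~{total_tb / 1000:.1f} PB)")
```

Under those assumptions, each rack carries roughly 1,105.92TB of raw NVMe and the full 10-rack deployment roughly 11PB, before any redundancy or filesystem overhead is applied.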
Hardware must lead so the algorithms can follow
Achieving artificial consciousness-capable systems will require the most advanced algorithms the industry has ever seen, but organizations must get the hardware right before they can even think about the needed AI software. That means not only scaling compute for the unprecedented AI workloads required to achieve artificial consciousness, but also controlling that power and quickly accessing and storing the exponential increase in data feeding the AI system. Once the AI hardware framework is in place, an enterprise is ready to continue its journey toward implementing the advanced software stacks and strategic services that will operationalize this transformational technology.