Beyond AI: Building toward artificial consciousness – Part 2
Beyond the hype surrounding artificial intelligence (AI) in the enterprise lies the next step: artificial consciousness. The first piece in this practical AI innovation series outlined the requirements for this technology and delved deeply into compute power, the core capability needed to enable artificial consciousness. This piece looks at the control and storage technologies and requirements that are not only necessary for enterprise AI deployment but also essential to achieving the state of artificial consciousness.
Controlling unprecedented compute power
While artificial consciousness is impossible without a dramatic rise in compute capacity, that is only part of the challenge. Organizations must harness that compute power with the proper control plane nodes—the familiar backbone of the high availability server clusters necessary to deliver that power. This is essential for managing and orchestrating complex computing environments efficiently.
Utilizing standard 2U servers outfitted with a robust set of specifications ensures the reliability and performance needed for critical operations. Each server in this cluster must be equipped with at least 256GB of DDR5 memory and a 750GB NVMe PCIe Gen5 drive for rapid data processing and storage. Additionally, the control plane must include the proper DPU for enhanced network and security functions, along with a controller powerful enough to provide advanced management capabilities. To effectively support a range of essential services, including Base Command Manager Nodes, SLURM Head Nodes, and Kubernetes Control Plane Nodes, a minimum of seven nodes per tenant is recommended.
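As a rough illustration only, the per-tenant minimums described above can be captured as a simple validation check. The class, field names, and sample inventory below are hypothetical; this is a sketch of the stated requirements, not a reference implementation.

```python
# Illustrative sketch: the control-plane minimums from the text expressed
# as a validation check. All names and the sample inventory are hypothetical.
from dataclasses import dataclass

MIN_MEMORY_GB = 256        # at least 256GB of DDR5 per node
MIN_NVME_GB = 750          # at least a 750GB NVMe PCIe Gen5 drive per node
MIN_NODES_PER_TENANT = 7   # covers Base Command Manager, SLURM head, and
                           # Kubernetes control plane roles per tenant

@dataclass
class ControlPlaneNode:
    memory_gb: int
    nvme_gb: int
    has_dpu: bool

def validate_tenant(nodes: list[ControlPlaneNode]) -> list[str]:
    """Return a list of problems with a tenant's control-plane layout."""
    problems = []
    if len(nodes) < MIN_NODES_PER_TENANT:
        problems.append(f"only {len(nodes)} nodes; need {MIN_NODES_PER_TENANT}")
    for i, n in enumerate(nodes):
        if n.memory_gb < MIN_MEMORY_GB:
            problems.append(f"node {i}: {n.memory_gb}GB memory < {MIN_MEMORY_GB}GB")
        if n.nvme_gb < MIN_NVME_GB:
            problems.append(f"node {i}: {n.nvme_gb}GB NVMe < {MIN_NVME_GB}GB")
        if not n.has_dpu:
            problems.append(f"node {i}: missing DPU")
    return problems

# Example: a seven-node tenant that meets the stated minimums.
tenant = [ControlPlaneNode(memory_gb=256, nvme_gb=960, has_dpu=True) for _ in range(7)]
print(validate_tenant(tenant) or "tenant meets control-plane minimums")
```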
This configuration ensures a resilient and scalable infrastructure, capable not only of meeting the computational demands of real-time processing and decision-making but also of providing the flexibility to adapt to evolving environments and more complex tasks. Simply put, enterprises must deploy advanced memory technology along with state-of-the-art interconnect technology to prevent bottlenecks and keep scaling the compute for AI workloads.
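To make the bottleneck point concrete, a back-of-the-envelope check like the one below can flag when a node's off-node traffic outgrows its interconnect link. Every figure in the example is an assumption chosen for illustration, not a measurement of any particular system.

```python
# Rough, illustrative check of whether interconnect bandwidth keeps up with
# compute as nodes scale. All numbers below are assumptions for the sketch.
def is_interconnect_bound(node_tflops: float,
                          bytes_per_flop: float,
                          link_gbps: float) -> bool:
    """True if the traffic a node generates exceeds its interconnect link."""
    required_gbs = node_tflops * 1e12 * bytes_per_flop / 1e9  # GB/s needed
    available_gbs = link_gbps / 8                             # Gb/s -> GB/s
    return required_gbs > available_gbs

# Example: a node sustaining 1,000 TFLOPS that must move 0.0001 bytes per
# FLOP off-node needs ~100 GB/s; a 400 Gb/s link (~50 GB/s) would bottleneck.
print(is_interconnect_bound(node_tflops=1000, bytes_per_flop=1e-4, link_gbps=400))
```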
Storing an exponential increase in data
Alongside the compute fabric sits a storage system architecture engineered for the rigorous demands of high-performance computing environments. This architecture integrates a strategic assembly of server types across 10 racks to ensure peak performance and scalability. Each rack is composed of four Vector servers, eight compute servers, and 12 storage servers, all configured to create a robust and adaptable infrastructure capable of efficiently managing and processing a wide array of data-intensive operations. The storage servers feature a single-socket configuration, 64GB of memory, a 960GB NVMe boot drive, and 61.44TB of NVMe SSD storage, designed for exceptional speed and reliability. The compute servers use the same base platform in a two-socket configuration with 30.72TB of NVMe SSD storage, emphasizing processing power. Completing the storage architecture are the Vector servers, with 128GB of memory, a 960GB boot drive, and 30.72TB of NVMe SSD storage, making them well suited for complex vector computations.
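For a sense of scale, the rack layout described above can be tallied in a few lines. The sketch below simply multiplies out the per-server NVMe figures quoted in the text; the totals are illustrative, not a capacity guarantee.

```python
# Back-of-the-envelope tally of the rack layout described above.
# Counts and per-server NVMe capacities are taken from the text.
RACKS = 10

# server type -> (servers per rack, NVMe TB per server)
PER_RACK = {
    "vector":  (4, 30.72),   # 128GB memory, 960GB boot drive
    "compute": (8, 30.72),   # two-socket, processing-oriented
    "storage": (12, 61.44),  # single-socket, 64GB memory
}

rack_nvme_tb = sum(count * tb for count, tb in PER_RACK.values())
fleet_nvme_tb = rack_nvme_tb * RACKS
print(f"NVMe per rack:  {rack_nvme_tb:,.2f} TB")
print(f"NVMe for fleet: {fleet_nvme_tb:,.2f} TB across {RACKS} racks")
```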
Hardware must lead so the algorithms can follow
Achieving artificial consciousness-capable systems will require the most advanced algorithms the industry has ever seen, but organizations must get the hardware right before they can even think about the AI software. That means not only scaling compute for the unprecedented AI workloads needed to achieve artificial consciousness but also controlling that power and being able to quickly access and store the exponentially growing data that feeds the AI system. Once the AI hardware framework is in place, an enterprise is ready to continue its journey toward implementing the advanced software stacks and strategic services that will operationalize this transformational technology.