Cornelis Networks offers alternative to Infiniband or Ethernet for HPC and AI networks

The architecture incorporates several key technical differentiators designed specifically for scale-out parallel computing environments. Credit-based flow control ensures lossless data transmission, while dynamic fine-grained adaptive routing optimizes path selection in real-time. Enhanced congestion control mechanisms are designed to maintain consistent performance under heavy loads, which is a critical requirement for AI training workloads that can involve thousands of endpoints.
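The credit-based mechanism described above can be sketched in a few lines. This is an illustrative simulation of the general technique, not Omni-Path's actual implementation: the sender may transmit only while it holds credits granted by the receiver, and the receiver returns a credit each time it frees a buffer, so no packet is ever sent without a guaranteed landing spot — which is what makes the link lossless.

```python
from collections import deque

class CreditLink:
    """Toy model of credit-based flow control (names are illustrative)."""

    def __init__(self, receiver_buffers: int):
        self.credits = receiver_buffers  # initial credits = free buffer slots
        self.rx_queue = deque()          # receiver's buffer
        self.delivered = []

    def send(self, packet) -> bool:
        """Transmit only if a credit (a free receiver buffer) is available."""
        if self.credits == 0:
            return False                 # back-pressure: sender must wait, not drop
        self.credits -= 1
        self.rx_queue.append(packet)
        return True

    def receiver_drain(self):
        """Receiver consumes a packet, freeing a buffer and returning a credit."""
        packet = self.rx_queue.popleft()
        self.delivered.append(packet)
        self.credits += 1
        return packet

link = CreditLink(receiver_buffers=2)
sent = [link.send(p) for p in ("a", "b", "c")]  # third send is refused
link.receiver_drain()                           # frees one buffer, returns a credit
sent.append(link.send("c"))                     # retry now succeeds
```

The key property is that back-pressure replaces packet loss: when buffers fill, the sender stalls rather than dropping and retransmitting.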
Performance metrics and benchmarking
Cornelis positions the CN5000’s advantages in specific technical metrics that address known bottlenecks in AI and HPC workloads. The company claims 2X higher message rates and 35% lower latency compared to other 400Gbps solutions.
“What’s different about the Cornelis architecture is that with the same bandwidth, you can achieve double the message rates,” Spelman explained. “To me, that’s the way that the architectural correctness for the workloads shows up.”
For AI workloads specifically, the company highlights 6X faster collective communication performance compared to remote direct memory access (RDMA) over converged Ethernet (RoCE) implementations. Collective operations like all-reduce functions represent critical bottlenecks in distributed training, where thousands of nodes must synchronize gradient updates efficiently.
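To make the all-reduce bottleneck concrete, here is a minimal sketch of what the operation computes: every node contributes a gradient vector and every node ends up holding the element-wise sum. This naive version simply accumulates all contributions; production fabrics use ring or tree algorithms to spread that traffic across links, which is exactly where interconnect performance shows up.

```python
def all_reduce_sum(node_grads):
    """Return what each node holds after an all-reduce (element-wise sum)."""
    total = [0.0] * len(node_grads[0])
    for grad in node_grads:              # one gradient contribution per node
        for i, g in enumerate(grad):
            total[i] += g
    # every node receives an identical copy of the reduced result
    return [list(total) for _ in node_grads]

# three simulated training nodes, each with a two-element gradient
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
result = all_reduce_sum(grads)  # each node now holds [9.0, 12.0]
```

Because every node must both send its gradient and receive the reduced result before the next training step can begin, the slowest link in the fabric gates the whole cluster — which is why collective performance is a headline metric.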
The architecture’s congestion management becomes particularly relevant in AI training scenarios, where synchronized communication patterns can overwhelm traditional networking approaches. Omni-Path’s credit-based flow control and adaptive routing aim to maintain consistent performance even under these demanding conditions.
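The intuition behind fine-grained adaptive routing can be sketched as follows. This is an illustrative model of the general idea, not Omni-Path's actual algorithm: rather than pinning a flow to one fixed path, each packet is steered to whichever available path currently carries the least queued traffic, so a hot spot is routed around in real time.

```python
def adaptive_route(packet_sizes, path_loads):
    """Assign each packet to the currently least-loaded path; return final loads."""
    loads = list(path_loads)
    for size in packet_sizes:
        best = min(range(len(loads)), key=lambda i: loads[i])  # pick lightest path
        loads[best] += size
    return loads

# four equal packets arrive while one of three paths is already congested
final = adaptive_route([10, 10, 10, 10], [50, 0, 0])
# the hot path (load 50) is avoided; new traffic balances onto the idle paths
```

Static routing would keep hashing some flows onto the congested path; per-packet adaptive selection is what lets the fabric absorb the bursty, synchronized traffic patterns AI training produces.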
“With the exact same compute installed and just a swap of the network from another 400 gig to CN5000, you see application performance that improves by 30%,” Spelman said. “Normally to improve by 30% on an application’s performance, you would need a new CPU generation.”