DriveNets extends AI networking fabric with multi-site capabilities for distributed GPU clusters

“We use the same physical architecture as anyone with top of rack and then leaf and spine switch,” Dudy Cohen, vice president of product marketing at DriveNets, told Network World. “But what happens between our top of rack, which is the switch that connects NICs (network interface cards) into the servers, and the rest of the network is not based on Clos Ethernet architecture, rather on a very specific cell-based protocol. [It’s] the same protocol, by the way, that is used in the backplane of the chassis.”
Cohen explained that any data packet that comes into an ingress switch from the NIC is cut into evenly sized cells, sprayed across the entire fabric and then reassembled on the other side. This approach distinguishes DriveNets from other solutions that might require specialized components such as Nvidia BlueField DPUs (data processing units) at the endpoints.
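DriveNets has not published the cell format itself, but the mechanics Cohen describes can be sketched in a few lines. In the toy Python below, the cell size, header fields, and reassembly check are all illustrative assumptions, not the actual protocol:

```python
# Minimal sketch of cell-based spraying and reassembly. Illustrative only:
# the cell size, header fields, and reassembly logic are assumptions,
# not DriveNets' wire format.
from dataclasses import dataclass

CELL_SIZE = 256  # bytes of payload per cell (assumed value)

@dataclass
class Cell:
    packet_id: int   # identifies the original packet
    seq: int         # position of this cell within the packet
    total: int       # total number of cells in the packet
    payload: bytes

def slice_into_cells(packet_id: int, packet: bytes) -> list[Cell]:
    """Cut a packet into evenly sized cells at the ingress switch.
    (In practice the last cell would be padded to the fixed size.)"""
    chunks = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    return [Cell(packet_id, i, len(chunks), c) for i, c in enumerate(chunks)]

def spray(cells: list[Cell], num_links: int) -> list[list[Cell]]:
    """Spray cells round-robin across every fabric link -- no flow hashing,
    so each link carries an equal share regardless of the traffic mix."""
    links: list[list[Cell]] = [[] for _ in range(num_links)]
    for i, cell in enumerate(cells):
        links[i % num_links].append(cell)
    return links

def reassemble(received: list[Cell]) -> bytes:
    """Reorder cells by sequence number at the egress switch and
    reconstruct the original packet."""
    ordered = sorted(received, key=lambda c: c.seq)
    assert len(ordered) == ordered[0].total, "missing cells"
    return b"".join(c.payload for c in ordered)

packet = bytes(1500)                           # a typical Ethernet-sized payload
links = spray(slice_into_cells(1, packet), num_links=4)
arrived = [c for link in links for c in link]  # cells may arrive interleaved
assert reassemble(arrived) == packet
```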
“The fabric links between the top of rack and the spine are perfectly load balanced,” he said. “We do not use any hashing mechanism… and this is why we can contain all the congestion avoidance within the fabric and do not need any external assistance.”
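To see why skipping the hash matters, consider a toy comparison: flow-based hashing (as in ECMP) pins each flow to one link, so a few large flows that hash to the same link can congest it, while per-cell spraying spreads every flow across all links by construction. The traffic mix and link count below are invented for illustration:

```python
# Toy comparison of flow hashing vs. cell spraying (traffic mix and link
# count are invented; real ECMP implementations are more sophisticated).
import random
import zlib

NUM_LINKS = 4
random.seed(0)
# A mix of mice (small) and elephant (large) flows, sizes in cells.
flows = [(f"flow-{i}", random.choice([100, 10_000])) for i in range(8)]

# Flow hashing: each flow is pinned to one link, so elephants that
# hash to the same link pile up there.
hashed = [0] * NUM_LINKS
for name, size in flows:
    hashed[zlib.crc32(name.encode()) % NUM_LINKS] += size

# Cell spraying: every flow's cells are spread over all links, so load
# stays even no matter how skewed the flow sizes are.
sprayed = [0] * NUM_LINKS
for _, size in flows:
    for link in range(NUM_LINKS):
        sprayed[link] += size // NUM_LINKS

print("per-link load, flow hashing: ", hashed)
print("per-link load, cell spraying:", sprayed)
```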
Multi-site implementation for distributed GPU clusters
The multi-site capability allows organizations to overcome power constraints in a single data center by spreading GPU clusters across locations.
This isn’t designed as a backup or failover mechanism. Lasser-Raab emphasized that it is a single cluster running across two locations up to 80 kilometers apart, which allows each site to draw from a different power grid.
The physical implementation typically uses high-bandwidth connections between sites. Cohen explained that there is either dark fiber or some DWDM (Dense Wavelength Division Multiplexing) fiber-optic connectivity between the sites. The connections are typically bundles of four 800 Gigabit Ethernet links, which act as a single 3.2 terabit-per-second connection.
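As a quick sanity check on that figure, four 800G links do aggregate to 3.2 Tbps (link count and speed from the article; the variable names are ours):

```python
# Aggregate capacity of the inter-site bundle described above.
NUM_LINKS = 4
LINK_SPEED_GBPS = 800
print(f"{NUM_LINKS} x {LINK_SPEED_GBPS} GbE = "
      f"{NUM_LINKS * LINK_SPEED_GBPS / 1000} Tbps")
# -> 4 x 800 GbE = 3.2 Tbps
```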