DriveNets extends AI networking fabric with multi-site capabilities for distributed GPU clusters

“We use the same physical architecture as anyone with top of rack and then leaf and spine switch,” Dudy Cohen, vice president of product marketing at DriveNets, told Network World. “But what happens between our top of rack, which is the switch that connects NICs (network interface cards) into the servers, and the rest of the network is not based on Clos Ethernet architecture, rather on a very specific cell-based protocol. [It’s] the same protocol, by the way, that is used in the backplane of the chassis.”
Cohen explained that any data packet that comes into an ingress switch from the NIC is cut into evenly sized cells, sprayed across the entire fabric and then reassembled on the other side. This approach distinguishes DriveNets from other solutions that might require specialized components such as Nvidia BlueField DPUs (data processing units) at the endpoints.
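Cohen’s description maps to a simple spray-and-reassemble pattern. The sketch below is an illustrative Python model of that idea, not DriveNets’ actual wire format: the 256-byte cell size, the per-cell sequence header and the round-robin spraying are assumptions made for the example.

```python
from dataclasses import dataclass

CELL_SIZE = 256  # bytes per cell -- an assumed value for illustration

@dataclass
class Cell:
    packet_id: int
    seq: int        # position of this cell within its packet
    total: int      # total number of cells in the packet
    payload: bytes

def spray(packet_id: int, packet: bytes, num_links: int) -> list[list[Cell]]:
    """Ingress side: cut a packet into evenly sized cells and spread them across all fabric links."""
    chunks = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    per_link: list[list[Cell]] = [[] for _ in range(num_links)]
    for seq, chunk in enumerate(chunks):
        # Round-robin spraying keeps every fabric link equally loaded,
        # regardless of flow sizes -- no per-flow hashing is involved.
        per_link[seq % num_links].append(Cell(packet_id, seq, len(chunks), chunk))
    return per_link

def reassemble(cells: list[Cell]) -> bytes:
    """Egress side: restore cell order and rebuild the original packet."""
    ordered = sorted(cells, key=lambda c: c.seq)
    assert len(ordered) == ordered[0].total, "missing cells"
    return b"".join(c.payload for c in ordered)

if __name__ == "__main__":
    pkt = bytes(range(256)) * 9                      # a 2,304-byte packet
    links = spray(packet_id=1, packet=pkt, num_links=4)
    print([len(cells) for cells in links])           # cells per link: [3, 2, 2, 2]
    received = [c for link in links for c in link]   # cells can arrive via any link
    assert reassemble(received) == pkt
```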
“The fabric links between the top of rack and the spine are perfectly load balanced,” he said. “We do not use any hashing mechanism… and this is why we can contain all the congestion avoidance within the fabric and do not need any external assistance.”
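To see why avoiding a hashing mechanism matters, consider a toy comparison (again an illustration, not DriveNets code): per-flow ECMP-style hashing pins each flow, however large, to a single fabric link, while cell spraying splits every flow’s bytes evenly across all links. The flow sizes below are made up for the example.

```python
NUM_LINKS = 4
# Hypothetical traffic mix: one large "elephant" flow and three small flows (bytes).
flows = {"flow-a": 8_000, "flow-b": 1_000, "flow-c": 1_000, "flow-d": 1_000}

# Per-flow ECMP hashing: every byte of a flow follows the link picked by hashing its ID.
ecmp_load = [0] * NUM_LINKS
for flow_id, size in flows.items():
    ecmp_load[hash(flow_id) % NUM_LINKS] += size

# Cell spraying: each flow's bytes are split evenly over all links.
spray_load = [sum(flows.values()) // NUM_LINKS] * NUM_LINKS

print("ECMP link load: ", ecmp_load)    # skewed: the elephant flow lands on one link
print("Spray link load:", spray_load)   # [2750, 2750, 2750, 2750]
```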
Multi-site implementation for distributed GPU clusters
The multi-site capability allows organizations to overcome power constraints in a single data center by spreading GPU clusters across locations.
This isn’t designed as a backup or failover mechanism. Lasser-Raab emphasized that it is a single cluster running across two locations up to 80 kilometers apart, which allows each site to connect to a different power grid.
The physical implementation typically uses high-bandwidth connections between sites. Cohen explained that the sites are linked by either dark fiber or DWDM (Dense Wavelength Division Multiplexing) fiber-optic connectivity. Typically the connections are bundles of four 800 Gigabit Ethernet links, acting as a single 3.2-terabit-per-second connection.
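As a quick sanity check on those figures, four 800GbE links do aggregate to 3.2 Tbps. The short calculation below also estimates the one-way propagation delay at the 80-kilometer maximum, using a standard approximation for light in single-mode fiber (roughly two-thirds of c); that latency figure is not something DriveNets quoted.

```python
LINKS_PER_BUNDLE = 4
LINK_RATE_GBPS = 800        # 800 Gigabit Ethernet per link
SITE_DISTANCE_KM = 80       # maximum separation cited
FIBER_KM_PER_MS = 200       # ~2/3 of c in single-mode fiber (standard approximation)

aggregate_tbps = LINKS_PER_BUNDLE * LINK_RATE_GBPS / 1000
one_way_delay_ms = SITE_DISTANCE_KM / FIBER_KM_PER_MS

print(f"Aggregate bundle capacity: {aggregate_tbps} Tbps")                            # 3.2 Tbps
print(f"One-way propagation delay at {SITE_DISTANCE_KM} km: ~{one_way_delay_ms} ms")  # ~0.4 ms
```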