Akamai, Neural Magic team to bolster AI at the network edge
“Additionally, Akamai’s hyper distributed edge network will make the Neural Magic solution available in remote edge locations as the platform expands, empowering more companies to scale AI-based workloads more widely across the globe,” Iyer said.
A case for deep learning at the edge?
The combination of technologies could solve a dilemma that AI poses: whether it’s worth it to put computationally intensive AI at the edge—in this case, Akamai’s own network of edge devices. Generally, network experts feel that it doesn’t make sense to invest in substantial infrastructure at the edge if it’s only going to be used part of the time.
Delivering AI models efficiently at the edge also “is a bigger challenge than most people realize,” said John O’Hara, senior vice president of engineering and COO at Neural Magic, in a press statement. “Specialized or expensive hardware and associated power and delivery requirements are not always available or feasible, leaving organizations to effectively miss out on leveraging the benefits of running AI inference at the edge.”
Using a less expensive processor to do this type of AI work, when it’s needed, may be easier for a company to justify.
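One common way to make inference cheap enough to justify on commodity CPUs is weight quantization, the class of optimization Neural Magic's CPU-inference work belongs to. The sketch below is not Neural Magic's or Akamai's actual stack; it is a minimal, hypothetical illustration using PyTorch's dynamic quantization, with a toy model standing in for an edge-deployed network.

```python
import torch
import torch.nn as nn

# Toy model standing in for an edge-deployed network (hypothetical, for illustration).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Dynamic quantization rewrites the Linear layers to use int8 weights,
# shrinking the model and speeding up inference on ordinary CPUs.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run inference on CPU with the quantized model.
x = torch.randn(1, 128)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # torch.Size([1, 10])
```

The quantized model produces outputs of the same shape as the original while doing its matrix multiplies in int8, which is the kind of trade-off that makes part-time AI workloads at the edge easier to justify on less expensive hardware.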
AI made easier?
The partnership may serve to foster innovation around edge-AI inference across a host of industries, Iyer said.
“Fundamentally, our partnership with Neural Magic is focused solely on making inference more efficient,” he explained. “There will always be cases where organizations still need a GPU if they are training AI models or their AI workload requires a larger amount of compute/memory requirements; however, CPUs have a role to play as well.”