Akamai, Neural Magic team to bolster AI at the network edge
“Additionally, Akamai’s hyper-distributed edge network will make the Neural Magic solution available in remote edge locations as the platform expands, empowering more companies to scale AI-based workloads more widely across the globe,” Iyer said.
A case for deep learning at the edge?
The combination of technologies could resolve a dilemma that AI poses: whether it is worth putting computationally intensive AI at the edge, in this case on Akamai’s own network of edge devices. Network experts generally feel that it doesn’t make sense to invest in substantial infrastructure at the edge if it will only be used part of the time.
Delivering AI models efficiently at the edge also “is a bigger challenge than most people realize,” said John O’Hara, senior vice president of engineering and COO at Neural Magic, in a press statement. “Specialized or expensive hardware and associated power and delivery requirements are not always available or feasible, leaving organizations to effectively miss out on leveraging the benefits of running AI inference at the edge.”
Using a less expensive processor, such as a CPU, to do this type of AI work only when it’s needed may be easier for a company to justify.
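To make that concrete, here is a minimal sketch of CPU-only inference using ONNX Runtime. The library, the model file, and the input shape are assumptions chosen for illustration; the article does not describe Neural Magic’s actual software interface.

```python
# Minimal sketch: running inference on a CPU-only edge node with ONNX Runtime.
# "model.onnx" and the 1x3x224x224 input shape are illustrative placeholders.
import numpy as np
import onnxruntime as ort

# Restrict execution to the CPU provider; no GPU or specialized accelerator required.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Build a dummy input that matches the model's first declared input.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run the model; outputs come back as a list of NumPy arrays.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```

Neural Magic is known for model-optimization techniques such as pruning and quantization, which aim to make this kind of CPU-side inference fast enough to be practical.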
AI made easier?
The partnership could foster innovation around edge AI inference across a host of industries, Iyer said.
“Fundamentally, our partnership with Neural Magic is focused solely on making inference more efficient,” he explained. “There will always be cases where organizations still need a GPU if they are training AI models or their AI workload requires a larger amount of compute/memory; however, CPUs have a role to play as well.”
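As a rough illustration of that division of labor, the sketch below (again using ONNX Runtime as a stand-in, since the article does not name a specific runtime) prefers a GPU when one is present and falls back to the CPU when it isn’t, which is the position many edge locations are in.

```python
# Hypothetical sketch: prefer a GPU if the node has one, otherwise fall back to the CPU.
# Library choice and "model.onnx" are assumptions, not details from the article.
import onnxruntime as ort

# ONNX Runtime tries execution providers from left to right, so list the GPU first.
preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running inference on:", session.get_providers())
```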