AI workloads to transform enterprise networks

Some enterprises are already prepared to handle AI traffic, says Derek Ashmore, application transformation principal at Asperitas Consulting, because they've already begun moving away from inflexible, hard-to-maintain legacy networks. The shift to modern cloud networking has been underway for a while, he adds, and was kicked into overdrive during the COVID pandemic. “Even without COVID, that move would have happened, it just would have happened at a slower rate,” Ashmore says.

That’s a good thing, since it sets up enterprises for the challenges coming with generative AI.

Consider multi-modal AI applications, which process text, images, audio and video: queries and responses can be very large. Google’s latest Gemini 2.5 model, for example, has a context window of a million tokens, with two million coming soon.

Two million tokens is around 1.5 million words. For reference, all the Harry Potter books put together contain around one million words. Big context windows allow for longer, more complicated conversations — or for AI coding assistants to examine larger portions of a code base. Plus, the AI’s answers are dynamically generated, so in most instances there’s no way to cache requests.
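The token-to-word arithmetic above can be sketched quickly. This is a rough rule-of-thumb estimate only: the 0.75 words-per-token ratio is an assumed average for English text (actual ratios vary by tokenizer and content), not a figure from Google or any model vendor.

```python
# Assumption: English prose averages roughly 0.75 words per token.
# This is a common rule of thumb, not an exact tokenizer constant.
WORDS_PER_TOKEN = 0.75

def approx_words(token_budget: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(token_budget * WORDS_PER_TOKEN)

# Gemini 2.5's current and upcoming context window sizes from the text:
print(approx_words(1_000_000))  # 750000 words in a 1M-token window
print(approx_words(2_000_000))  # 1500000 words in a 2M-token window
```

Under that assumption, a two-million-token window works out to the article's 1.5 million words, several times the length of the entire Harry Potter series.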

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways.

Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says.


