3 things network pros need to tell developers about why the network matters
Of the 47 enterprises I chatted with in December, guess how many were NOT users of hybrid cloud. Zero.
Guess how many ever used another cloud model. Zero.
Guess how many believe they will “move everything to the cloud”. Zero.
OK, you may not have read this sort of thing often, or at all, but I think it demonstrates just how important hybrid cloud is and how little we really know about it. That's a problem, because it's never good when something critical is poorly understood, but it could be an opportunity for network professionals looking to re-engage with their company's IT planning process.
Our thesis on network professional engagement in application planning is simple. Developers understand functionality and hosting requirements. IT operations people understand hosting and cost management. What network professionals understand is the workflows that bind all this into an experience. By focusing planning discussions on workflows, a network professional creates profound value for the enterprise, and nowhere is that more evident than in the hybrid cloud.
If you draw out the structure of a hybrid-cloud application, putting the user on the left, you’d first draw a bidirectional arrow to a circle labeled “Cloud”, then another from that circle to another circle (on the right) labeled “Data Center”. That’s the general layout of a hybrid cloud application. The user (who might be a worker, a partner, or a customer/prospect) interacts with the cloud via a well-designed GUI. The cloud portion of the application turns this interaction into a transaction, and that goes to the data center. Something there (an application, a database) generates a result, which is then returned via the cloud GUI to the user.
Don’t get hung up on network-think here; remember that the goal is to think about the workflows the interactions create. Application design and componentization should be subordinate to workflows and interactions. The first point you want to make in a design meeting is that the best, most cost-effective designs will be the ones that limit back-and-forth interactions, either from user to cloud or from cloud to data center. Those are the two points in the diagram that need to be addressed first.
Step 1: Minimize user-to-cloud traffic.
User-to-cloud interactions can multiply costs, complicate network connections, and eat quality of experience (QoE). Keeping the number of interactions as low as possible without compromising QoE is a starting point, but the real challenge is maximizing cloud benefits without risking massive cost overruns.
The value of the cloud lies in its ability to scale under load and to replace failed components quickly, a capability that often comes along with scalability. Scalability usually matters most for the application components that connect with the user and process those workflows. If you want scalability, you probably need some form of load balancing to divide the work, but you also need to think about the problem of state.
State is developer-speak for “where you are in a multi-step dialog”. Users will almost always have a multi-step interaction with the cloud, so it’s critical to know which step you’re in to process a message correctly. Many developers automatically think of handling each dialog step in a separate, small component (a microservice) dedicated to that step. That will multiply the number of components, and with them the costs and the complexity of the cloud network. The alternative is state control, where either the user’s app or a cloud database maintains the dialog state. That means a single microservice can handle multiple steps, perhaps the whole dialog, and multiple users can share instances of each microservice. All of that reduces the number of microservices and connections, which reduces cost and latency.
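To make the state-control idea concrete, here is a minimal sketch of a single stateless handler serving every step of a multi-step dialog. The dialog steps, the `SESSION_STORE` dict (standing in for a cloud database or a state token held by the user's app), and all function names are illustrative assumptions, not anything prescribed by a particular platform.

```python
# One shared, stateless handler replaces a per-step microservice: the
# dialog position lives in an external store, not in the component.
SESSION_STORE = {}  # session_id -> {"step": int, "data": dict}

DIALOG_STEPS = ["collect_item", "collect_quantity", "confirm"]

def handle_message(session_id, payload):
    """Handle any step of the dialog by looking up where the session is."""
    session = SESSION_STORE.setdefault(session_id, {"step": 0, "data": {}})
    step_name = DIALOG_STEPS[session["step"]]
    session["data"][step_name] = payload   # record this step's input
    session["step"] += 1                   # advance the dialog
    if session["step"] == len(DIALOG_STEPS):
        result = {"status": "complete", "order": session["data"]}
        del SESSION_STORE[session_id]      # dialog finished, state discarded
        return result
    return {"status": "next", "expecting": DIALOG_STEPS[session["step"]]}
```

Because the handler holds no state of its own, any instance can process any message, which is what lets multiple users share a small pool of instances.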
The best way to start a discussion on this issue is to ask developers to map out how the workflows connect within the cloud. This process will quickly uncover issues with the number of microservices and connections, and open the question of how the application design could be optimized to address both problems. Often developers will see the problem quickly and understand how to fix it. And they’ll remember who pointed it out in the first place!
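The mapping exercise above can even be done quantitatively: represent each workflow as a pair of components and count components and connections. This is a sketch under my own assumptions (the pair representation and the component names are illustrative), not a method the article prescribes.

```python
# Represent cloud workflows as (source, destination) component pairs,
# then tally components and connections to flag over-componentized designs.
def workflow_stats(workflows):
    """workflows: list of (source, destination) component-name pairs."""
    components = {c for pair in workflows for c in pair}
    return {"components": len(components), "connections": len(workflows)}
```

Tracking these two numbers across design iterations gives the team a simple way to see whether a proposed change raises or lowers cost and complexity.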
Step 2: Optimize use of MPLS and SD-WAN.
Now look at the data center side. There are a lot of options for the relationship between cloud and data center, and most of them are bad. For example, having a cloud component “read” a database that’s hosted in the data center is going to create a lot of traffic that you’ll pay for, and a lot of per-access delay that will blow your QoE out of the water. Instead, you want to send the data center a single message that calls for all the database work and processing needed, and have it send back only the result.
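The cost of a chatty cloud-to-data-center pattern versus a single batched request can be sketched with simple arithmetic. The 40 ms round-trip figure and the function names here are illustrative assumptions; the point is only that per-record latency grows linearly while batched latency does not.

```python
WAN_ROUND_TRIP_MS = 40  # assumed cloud-to-data-center round trip

def chatty_lookup(record_ids):
    """One round trip per record: latency grows with every access."""
    latency = 0
    results = []
    for rid in record_ids:
        latency += WAN_ROUND_TRIP_MS       # each fetch crosses the WAN
        results.append({"id": rid})        # stand-in for a record fetch
    return results, latency

def batched_lookup(record_ids):
    """One message carries the whole request; one round trip total."""
    latency = WAN_ROUND_TRIP_MS
    results = [{"id": rid} for rid in record_ids]
    return results, latency
```

For ten records, the chatty pattern accumulates ten round trips of delay (and ten billable transfers) where the batched pattern pays for one.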
Most hybrid applications use the data center first for an “inquiry” to find a specific record or set of records (like products that match the user’s interest), and then for an “update” that changes the status of one of the records (like a purchase). A great use for a cloud database is to hold onto the inquiry results as the user browses through options, eliminating the need to keep going back to the data center for another record; each such round trip would incur traffic charges from the cloud provider, load the network connection to the cloud, and add to the accumulated delay. When an update is made, only that change is sent to the data center.
One question emerging from the data-center workflows is the role of the company VPN. Most enterprises rely on MPLS VPNs, sometimes augmented or even replaced by SD-WAN VPNs. A connection to the data center could be made via the VPN or directly from the Internet. In the former case, it would be possible to extend the VPN to the cloud (incurring an extra cost), or to drop cloud traffic at one or more of the remote site locations, to be hauled back to the data center; this is usually an option where there are multiple geographic zones of cloud hosting. The best answer can be determined by mapping out the workflows and evaluating each option for cost and its contribution to latency.
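Exploring each connectivity option for cost and latency can be reduced to a small comparison, sketched below. The option names, dollar figures, and latency budget are entirely illustrative assumptions used to show the exercise, not real pricing.

```python
# Rank cloud-to-data-center path options: keep those that meet the
# latency budget, then order the survivors from cheapest to priciest.
def rank_paths(options, latency_budget_ms):
    """options: name -> (monthly_cost, added_latency_ms)."""
    viable = [(name, cost, lat) for name, (cost, lat) in options.items()
              if lat <= latency_budget_ms]
    return sorted(viable, key=lambda o: o[1])  # sort by monthly cost
```

Running this over the team's own workflow map replaces opinion with a ranked list that everyone in the planning meeting can argue about productively.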
Step 3: Hone scalability and componentization.
The final step is defining the cloud workflows that will link the user interactions and the data center interactions, and this is where it’s important to watch for “excess scalability” and “excessive componentization”. The databases in the data center will typically have a specific maximum transaction rate and specific limits to scalability. Most well-designed hybrid-cloud applications are highly scalable on the user side and less scalable as they move toward the data-center connection. You can identify excess scalability by looking at workflows between the cloud components that connect with users and those that connect with the data center.
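One way to spot excess scalability is to compare the aggregate request rate a fully scaled cloud tier can generate against the data center's fixed maximum transaction rate, as in this sketch. All figures and names are illustrative assumptions.

```python
# If the scaled-out cloud tier can offer more transactions per second
# than the data center can absorb, the extra instances are pure cost.
def excess_scalability(instances, per_instance_tps, datacenter_max_tps):
    """Return the offered-to-absorbable ratio; above 1.0 means excess."""
    offered = instances * per_instance_tps
    return offered / datacenter_max_tps
```

A ratio of 2.0, for example, means half of the cloud tier's peak capacity can never be used, because the data-center side caps throughput regardless.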
A network professional’s role in application planning is cemented by workflows, because workflows underpin every aspect of every application. Every workflow is a cash flow from enterprise budgets to cloud providers, software and hardware vendors, and network providers. Every workflow adds latency, adds complexity, and adds to development and maintenance time and cost. Inside the internet and the cloud, connectivity is implicit, and that has led IT professionals to ignore workflows and their consequences because they “just work”. Because network connections carry workflows, they tie networks and network professionals to applications, the cloud, information technology and, most importantly, formal IT planning. Grab a bundle of workflows, and get ready to take your seat.
Copyright © 2023 IDG Communications, Inc.