The CIO’s Hybrid Cloud Journey Incorporates Extensive Systems Thinking
Heading down the path of systems thinking for the hybrid cloud is the equivalent of taking the road less traveled in the storage industry. It is much more common to hear vendor noise about direct cloud integration features, such as a mechanism to move data from a storage array to public cloud services, or the ability to run separate instances of the core vendor software inside public cloud environments. This stems from a narrow way of thinking centered on the storage array itself. While there is value in those capabilities, practitioners need to consider a broader vision.
When my Infinidat colleagues and I talk to CIOs and other senior leaders at large enterprise organizations, we speak much more about the bigger picture of all the different aspects of the enterprise environment. The CIO needs that environment to be as simple as possible, especially if the desired end state is a low investment in traditional data centers, which is the direction the IT pendulum continues to swing.
Applying systems thinking to the hybrid cloud is changing the way CIOs and IT teams approach their cloud journey. Systems thinking takes into consideration the end-to-end environment and the operational realities associated with it. The environment comprises several components, each contributing different value, that together support an overall cloud transformation. Storage is a critical part of the overall corporate cloud strategy.
Savvy IT leaders have come to realize the benefits of both the public cloud and the private cloud, culminating in hybrid cloud implementations. Escalating public cloud costs will likely reinforce hybrid approaches to storage and swing the pendulum back toward private cloud in the future. Today, beyond serving as a transitional path, the main reasons for using a private cloud are control and cybersecurity.
Being able to create a system that can accommodate both of those elements at the right scale for a large enterprise environment is not an easy task. And it goes far beyond the individual, array-level services that are baked into point solutions within a typical storage environment.
What exactly is hybrid cloud?
Hybrid cloud is simply a world where you have workloads running in at least one public cloud component plus a data center-based component. The latter could be a traditionally owned data center or a co-location facility; in either case, the customer, not a vendor, is responsible for control of the physical infrastructure.
To support that deployment scenario, you need workload mobility. You need the ability to quickly provision and manage the underlying infrastructure. You need visibility into the entire stack. Those are the biggest rocks among many factors that determine hybrid cloud success.
For typical enterprises, using larger building blocks on the infrastructure side makes the journey to hybrid cloud easier. There are fewer potential points of failure, fewer “moving pieces,” and greater simplification of the existing hybrid or physical infrastructure, whether it is deployed in a data center or in a co-location type of environment. This deployment model can also dramatically reduce overall storage estate CAPEX and OPEX.
But what happens when the building blocks for storage are small, under a petabyte or so each? There is inherently more orchestration overhead, and an increasing dependence on an extra “glue” layer to put all these smaller pieces together.
Working with bigger pieces (petabytes) from the beginning can eliminate a significant amount of that complexity in a hybrid cloud. It’s a question of how much investment a CIO wants to put into different pieces of “glue” between different systems vs. getting larger building blocks conducive to a systems thinking approach.
The right places in the stack
A number of storage array vendors highlight the ability to snapshot data to public clouds, and there is value in this capability, but it’s less valuable than it might appear when you’re thinking at a systems level. That is because large enterprises will most likely want backup software with routine, specific schedules across their entire infrastructure and coordination with their application stacks. IT managers are not going to want an array to move data when the application doesn’t know about it.
A common problem is that many storage array vendors focus on solving this within their piece of the stack. Yet, in fact, the right answer most likely lives at the backup software layer, somewhere higher in the stack than the individual arrays. It’s about upleveling the overall thought process to systems thinking: deciding what SLAs you want to achieve across your on-prem and public cloud environments. The right backup software can integrate with the underlying infrastructure pieces to deliver on them.
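To make that concrete, here is a minimal sketch in Python of what orchestrating data protection above the individual arrays can look like. Everything in it is hypothetical: the AppStack, Array, and CloudTarget interfaces and the SLA fields are illustrative assumptions, not any vendor’s API. What it shows is the ordering that matters: the application is quiesced before the array takes its snapshot, and the copy to the public cloud is driven by a system-level SLA rather than an array-level schedule.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class SLA:
    """Hypothetical, system-level protection targets defined once for the whole estate."""
    snapshot_label: str
    copy_to_cloud: bool          # replicate the snapshot to a public cloud target?
    retention_days: int


class AppStack(Protocol):
    def quiesce(self) -> None: ...   # flush and freeze I/O so the copy is consistent
    def resume(self) -> None: ...


class Array(Protocol):
    def snapshot(self, label: str) -> str: ...   # returns a snapshot id


class CloudTarget(Protocol):
    def copy_snapshot(self, snapshot_id: str, retention_days: int) -> None: ...


def protect(app: AppStack, array: Array, cloud: CloudTarget, sla: SLA) -> str:
    """Application-consistent protection, driven from above the individual array."""
    app.quiesce()                    # the application knows the copy is happening
    try:
        snap_id = array.snapshot(sla.snapshot_label)
    finally:
        app.resume()                 # never leave the application frozen
    if sla.copy_to_cloud:
        cloud.copy_snapshot(snap_id, sla.retention_days)
    return snap_id
```

The design point is simply that the quiesce/snapshot/copy sequence is coordinated at a layer that sees both the application and the infrastructure, which an individual array cannot do on its own.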
Hybrid cloud needs to be thought of holistically, not as a “spec checkbox” type activity. And you need to think about where the right places are in this stack to provide the functionality.
Paying twice for the same storage
Solutions that involve deploying another vendor’s software on top of storage you already have to pay for from the hyperscaler mean paying twice for the same storage, and that makes little sense in the long term.
Sure, it may be an okay transitional solution. Or, if you’re deeply baked into the vendor’s APIs or way of doing things, then maybe it’s a reasonable accommodation. But the end state is almost never going to be a situation where the CIO is signing off on checks to two different vendors for the same bits of data. It simply doesn’t make sense.
Thinking at the systems level
Tactical issues get resolved when you apply systems thinking to enterprise storage. Keep in mind:
- Consider where data resiliency needs to be orchestrated, and whether that belongs within individual arrays or is better positioned as part of an overall backup or broader data protection strategy
- Beware of just running the same storage software in the public cloud because it’s a transitional solution at best
- Cost management is critical
On the last point, take a good look at the true economic profile your organization is getting on-premises. You can get cloud-like business models and OPEX-style consumption from vendors such as Infinidat, lowering costs compared to traditional storage infrastructure.
Almost all storage decisions are fundamentally economic decisions, whether it’s a direct price per GB cost, the overall operational costs, or cost avoidance/opportunity costs. It all comes back to costs at some level, but an important part of that is questioning the assumptions of the existing architectures.
If you’re coming from a world where you have 50 mid-range arrays, and you have the potential to reduce the number of moving pieces in that infrastructure, the consolidation and simplification alone could translate into significant cost benefits: OPEX, CAPEX, and operational manpower. And that’s before you even start talking about moving data outside of more traditional data center environments.
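As a purely illustrative back-of-the-envelope calculation (every figure below is a hypothetical assumption, not Infinidat pricing or a benchmark), the consolidation math might look something like this:

```python
# Hypothetical comparison: 50 mid-range arrays vs. 3 consolidated systems.
# Every figure here is an assumption for illustration only.

def annual_cost(units: int, support_per_unit: float,
                admin_hours_per_unit: float, hourly_rate: float,
                power_cooling_per_unit: float) -> float:
    """Rough annual OPEX: support contracts + admin time + power/cooling."""
    return units * (support_per_unit
                    + admin_hours_per_unit * hourly_rate
                    + power_cooling_per_unit)

current = annual_cost(units=50, support_per_unit=30_000,
                      admin_hours_per_unit=200, hourly_rate=90,
                      power_cooling_per_unit=8_000)

consolidated = annual_cost(units=3, support_per_unit=120_000,
                           admin_hours_per_unit=400, hourly_rate=90,
                           power_cooling_per_unit=25_000)

print(f"50 arrays:   ${current:,.0f}/year")
print(f"3 systems:   ${consolidated:,.0f}/year")
print(f"difference:  ${current - consolidated:,.0f}/year before any public cloud spend")
```

The specific numbers are beside the point; what matters is that support contracts, administrative time, and power all scale with the number of systems you operate.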
Leveraging technologies, such as Infinidat’s enterprise storage solutions, makes it more straightforward to simplify and consolidate on the on-prem side of the hybrid cloud environment, potentially allowing for incremental investment in the public cloud side, if that’s the direction for your particular enterprise.
How much are you spending to maintain the incumbent solutions? Consider both the standard maintenance or support subscription fees, which add up quite significantly, and the staff time and productivity required to support 50 arrays when you could be supporting three systems, or one. Look holistically at the real costs, not just what you’re paying the vendors. What are the opportunity costs of maintaining a more complex traditional infrastructure?
On the public cloud side, leverage cloud cost management tools. More than a billion dollars of VC money has gone into that space, yet many companies are not taking full advantage of it, particularly enterprises that are early in their cloud transformation. The cost management aspect and the automation around it, and the degree of work you can put into them for real, meaningful financial results, are not always the highest priority when you’re just getting started. The challenge with not baking this in from the beginning is that it’s harder to graft in later, once processes become entrenched.
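As one small example of baking cost visibility in early, the sketch below assumes AWS and the Cost Explorer API via boto3; the tag key and reporting window are illustrative assumptions. It pulls a month of spend grouped by a cost-allocation tag, so untagged or runaway workloads show up in a routine report rather than on the invoice.

```python
# A minimal cost-visibility sketch, assuming AWS Cost Explorer via boto3.
# The tag key ("cost-center") and the reporting window are illustrative assumptions.
import boto3


def monthly_cost_by_tag(start: str, end: str, tag_key: str = "cost-center") -> dict:
    """Return {tag value: unblended cost in USD} for the given period."""
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},   # e.g. "2024-11-01" to "2024-12-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": tag_key}],
    )
    costs: dict = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            label = group["Keys"][0]   # e.g. "cost-center$platform"; empty value means untagged
            costs[label] = costs.get(label, 0.0) + float(
                group["Metrics"]["UnblendedCost"]["Amount"]
            )
    return costs


if __name__ == "__main__":
    for owner, usd in sorted(monthly_cost_by_tag("2024-11-01", "2024-12-01").items(),
                             key=lambda kv: -kv[1]):
        print(f"{owner:30s} ${usd:,.2f}")
```

Run as a scheduled report from day one, even something this simple keeps cost ownership visible before processes become entrenched.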
For more information, visit Infinidat here.