IT Issues that Need Cleaning Up: Herzog’s Dirty Dozen
At the supermarket, the “dirty dozen” is the list of produce considered the most contaminated by pesticide residue. Similarly, if you work in an enterprise or for a service provider, the dirty dozen is a list of issues that tarnish IT infrastructure, “contaminating” current enterprise-grade implementations and exposing the dangers and risks that come with them. They are the challenges, gaps, misconceptions, and problems that keep CxOs, storage administrators, and other IT leaders up at night, worried that they don’t know what they don’t know.
Before you can clean up the issues, you need to know what they are. Here’s the list of Herzog’s Dirty Dozen:
- Insufficient level of cyber resilience
- Disconnect between cybersecurity and enterprise storage
- Storage proliferation
- Failure to utilize flexible consumption models
- High total cost of ownership
- Slow data recovery after a cyberattack
- Slow application and workload performance
- Reliance on an outdated architecture
- Lag in making storage more green
- Misunderstanding about hybrid cloud
- Lack of autonomous automation
- Underperforming storage vendor IT support
These issues paint a picture of the challenges facing enterprises and service providers, whether the C-suite is aware of them yet or not. Any one of these factors can taint an otherwise high-performing IT operation. Inertia can set in, and other problems pull focus away from these ongoing matters, which need continual attention to stay in check. You need sound strategies for each of the Dirty Dozen to build a more cost-effective and efficient data center infrastructure environment.
1. Insufficient level of cyber resilience
Cyber resilience is one of the most important elements of an enterprise’s IT strategy today, but too many enterprises lack the level needed to be sufficiently safeguarded against cyberattacks, especially ransomware and malware. Characteristics of a strong cyber resilience solution include immutable snapshots, air gapping, a fenced forensic environment, and rapid recovery time (minutes, not hours or days). A cyber resilience solution is deemed effective when it provides guaranteed availability and fully scaled data restoration for business continuity.
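To make those characteristics concrete, here is a minimal sketch, in Python, of the kind of policy check an operations team might run against its snapshot inventory. The `Snapshot` structure, field names, and thresholds are hypothetical and are not tied to any particular vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record describing one snapshot in the estate.
@dataclass
class Snapshot:
    volume: str
    created: datetime
    immutable: bool      # cannot be altered or deleted before expiry
    air_gapped: bool     # has a logically separated (air-gapped) copy

def audit_cyber_resilience(snapshots, max_age=timedelta(hours=1)):
    """Flag volumes whose most recent snapshot breaks the resilience policy."""
    findings = []
    now = datetime.utcnow()

    # Find the newest snapshot per volume.
    latest = {}
    for snap in snapshots:
        if snap.volume not in latest or snap.created > latest[snap.volume].created:
            latest[snap.volume] = snap

    for volume, snap in latest.items():
        if not snap.immutable:
            findings.append(f"{volume}: latest snapshot is not immutable")
        if not snap.air_gapped:
            findings.append(f"{volume}: latest snapshot has no air-gapped copy")
        if now - snap.created > max_age:
            findings.append(f"{volume}: latest snapshot is older than {max_age}")
    return findings
```

A check like this only audits policy; the actual immutability, air gapping, and fenced forensic validation have to be enforced by the storage platform itself.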
2. Disconnect between cybersecurity and enterprise storage
IT leaders and CISOs need to think of storage as part of their overall enterprise cybersecurity strategy. An end-to-end approach is needed to stay ahead of cybersecurity threats, which entails evaluating the relationship between cybersecurity, storage, and cyber resilience. Both primary storage and secondary storage need to be protected. According to security analysts, it takes an average of 287 days to identify and contain a data breach. The stakes continue to rise amid daunting threats.
3. Storage proliferation
An organization knows it has experienced storage proliferation when it has too many storage arrays. It may have amassed 10, 12, 15, 20, 40 or more storage arrays, implemented over time as the need for enterprise storage capacity increased. With the explosion of data in recent years, this is no surprise, but this high number of storage systems inevitably causes inefficiency, excessive costs, overly complex data centers, poor storage management, negative environmental impact, and waste. Storage consolidation is the strategy to solve this issue.
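A back-of-the-envelope model shows why consolidation pays off. Every figure in the sketch below is an illustrative assumption, not a measured or vendor-published number; the point is the shape of the comparison, not the specific values.

```python
# Illustrative consolidation model; all figures are assumptions for the example.
def footprint(array_count, usable_tb_per_array, watts_per_array, rack_units_per_array):
    return {
        "arrays": array_count,
        "usable_tb": array_count * usable_tb_per_array,
        "power_kw": array_count * watts_per_array / 1000,
        "rack_units": array_count * rack_units_per_array,
    }

# Many mid-size arrays accumulated over time...
before = footprint(array_count=24, usable_tb_per_array=150,
                   watts_per_array=2500, rack_units_per_array=6)
# ...versus the same total capacity consolidated onto a few larger platforms.
after = footprint(array_count=3, usable_tb_per_array=1200,
                  watts_per_array=7000, rack_units_per_array=36)

for key in ("arrays", "usable_tb", "power_kw", "rack_units"):
    print(f"{key:>10}: {before[key]:>8} -> {after[key]:>8}")
```

Fewer, larger systems also mean fewer management interfaces, fewer support contracts, and fewer boxes to refresh and recycle, which is where much of the operational saving comes from.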
4. Failure to utilize flexible consumption models
An enterprise may be accustomed to consuming storage in a specific way, whether a private cloud or a public cloud, a capital expense or an operational expense. It becomes an “either/or” that locks the enterprise in. However, flexible consumption models have arisen to give customers more options. An enterprise can now mix CAPEX and OPEX, reaping the financial benefits of paying only for the storage capacity it actually uses. Or it can get high-end storage features and functionality with a pay-as-you-go approach in a cloud-like consumption model. You should explore the different flexible consumption models available for your organization.
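To illustrate why the mix matters, here is a hedged sketch comparing an up-front purchase sized for projected peak capacity against a pay-as-you-go model billed on consumed capacity. The prices, capacities, and growth curve are invented for illustration only.

```python
# Illustrative-only comparison of buying for peak vs. paying for consumption.
# All prices, capacities, and growth rates are assumptions for the example.
purchase_price_per_tb = 300        # one-time CAPEX, sized for projected peak
payg_price_per_tb_month = 12       # OPEX rate on capacity actually consumed
projected_peak_tb = 1000
months = 36
start_tb, monthly_growth_tb = 400, 15   # actual usage ramps up gradually

capex_total = projected_peak_tb * purchase_price_per_tb
opex_total = sum(
    min(start_tb + m * monthly_growth_tb, projected_peak_tb) * payg_price_per_tb_month
    for m in range(months)
)

print(f"Buy-for-peak CAPEX over {months} months: ${capex_total:,.0f}")
print(f"Pay-as-you-go OPEX over {months} months: ${opex_total:,.0f}")
```

Which model wins depends entirely on how quickly consumption actually grows, which is exactly why the flexibility to blend the two is valuable.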
5. High total cost of ownership
IT costs are soaring, and IT leaders continue to scramble to find ways to lower costs. Storage is one of those areas that the IT team can look at for substantial cost savings without sacrificing availability, reliability, and application performance. Taking a more strategic approach to your storage infrastructure will make a difference to the bottom line. Enterprises can shift to automated and autonomous storage platforms for primary and secondary storage, as well as consolidate many storage arrays down to a few Infinidat platforms. Shifting to storage-as-a-service is also an opportunity to lower costs, while increasing capabilities.
6. Slow data recovery after a cyberattack
The world has seen all too often what happens when a ransomware attack hits a company and holds its data for ransom or corrupts it in some way. Recovering the data is typically too slow, but that has been accepted as “just the way it is.” However, near-instantaneous recovery from a cyberattack is now possible. Data on Infinidat’s InfiniBox and InfiniBox SSA II can be recovered in one minute or less from immutable snapshots, providing an organization with a trusted and accurate recovery of its data.
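The workflow behind such a recovery can be scripted as a runbook. The sketch below is purely illustrative: the `storage_client` object and its methods are invented for this example and do not correspond to any real vendor SDK; it simply shows the general shape of restoring a volume from its most recent validated, immutable snapshot.

```python
# Hypothetical recovery runbook sketch; `storage_client` and its methods are
# invented for illustration and are not a real vendor API.
def recover_volume(storage_client, volume_name):
    """Restore a volume from its newest immutable snapshot that passed validation."""
    snapshots = storage_client.list_snapshots(volume_name)
    candidates = [
        s for s in snapshots
        if s["immutable"] and s["validated_in_forensic_env"]
    ]
    if not candidates:
        raise RuntimeError(f"No validated immutable snapshot found for {volume_name}")
    newest = max(candidates, key=lambda s: s["created"])
    storage_client.restore(volume_name, snapshot_id=newest["id"])
    return newest["id"]
```

The key step is restoring only from copies that are both immutable and already validated in a fenced forensic environment, so the recovered data can be trusted.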
7. Slow application and workload performance
IT leaders naturally hear complaints from executives and employees when applications are running slowly. The breakneck pace of business today exposes performance issues in the data infrastructure. Servers matter to real-world application performance, but storage is just as critical. For highly transactional block workloads, you must focus your efforts on application latency. Latency is the number one determinant of real-world transactional performance.
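A quick back-of-the-envelope calculation shows why latency, rather than raw throughput, governs transactional performance: each transaction typically issues several dependent I/Os, so storage latency is multiplied into every transaction. The I/O counts and latencies below are assumptions chosen only to illustrate the relationship.

```python
# Illustrative only: how storage latency bounds transaction rate when each
# transaction issues several dependent (serial) I/Os.
ios_per_transaction = 10   # assumed dependent reads/writes per transaction
cpu_time_ms = 1.0          # assumed non-I/O work per transaction

for latency_ms in (2.0, 0.5, 0.1):
    txn_time_ms = cpu_time_ms + ios_per_transaction * latency_ms
    print(f"{latency_ms:.1f} ms per I/O -> "
          f"{1000 / txn_time_ms:,.0f} transactions/s per thread")
```

Cutting per-I/O latency from 2 ms to 0.1 ms in this toy model raises per-thread throughput by roughly an order of magnitude, even though nothing else changed.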
8. Reliance on an outdated architecture
If your business is relying on a dual-redundancy storage architecture, then you’re using an outdated architecture that does not deliver the level of reliability and availability needed in today’s digitized world. Best practice in storage has already moved on to triple-redundancy architectures, even though incumbents are reluctant to acknowledge it because they want to keep their installed base on dual redundancy. But if you want high reliability, 100% availability, and durability, triple redundancy is the way to go for enterprises and service providers.
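To see why the extra redundancy matters, consider a simple independent-failure calculation. The component availability figure below is an assumption for illustration, and real systems have correlated failure modes that this ignores, but the trend is the point: each additional redundant component multiplies the unavailability by another small factor.

```python
# Illustrative availability math assuming independent component failures.
component_availability = 0.999   # assumed uptime of one controller/component

downtime_dual = (1 - component_availability) ** 2     # both of 2 fail at once
downtime_triple = (1 - component_availability) ** 3   # all 3 fail at once

print(f"Dual redundancy:   {(1 - downtime_dual) * 100:.6f}% available")
print(f"Triple redundancy: {(1 - downtime_triple) * 100:.7f}% available")
```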
9. Lag in making storage more green
Too many storage arrays mean more hardware that eventually has to be recycled. This inefficiency is the opposite of a green initiative that aims to reduce waste, recycle more efficiently, and lessen the impact on the environment. Too many storage arrays also consume more energy and more floor space, and drive up both OPEX and CAPEX. Storage consolidation, however, aligns with going green: less energy, less space, less waste, and less to recycle, all while reducing costs.
10. Misunderstanding about hybrid cloud
Too many IT leaders assume that their company needs to go completely “public cloud” and cannot have any private cloud, because they think private cloud is no longer relevant – which is untrue. Forward-thinking CIOs and IT decision-makers are realizing the value of a hybrid cloud approach that combines public cloud and private cloud. It does not need to be all or nothing. The private cloud for storage has evolved to have cloud-like characteristics in deployment, management, and consumption. Mixing public and private cloud environments gives an enterprise more control, more flexibility, more security, and more cost-effectiveness.
11. Lack of autonomous automation
Without autonomous automation, storage and the enterprise infrastructure overall are more complex and more costly. Automation alone is no longer enough. Incorporating the autonomous aspect into automation takes it to the next level, paving the way for data center simplification and AIOps. You can clean up the complexity with this functionality, which enables a set-it-and-forget-it approach. Storage essentially becomes self-learning.
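As a rough illustration of what “set it and forget it” can mean in practice, here is a minimal sketch of a feedback loop that promotes hot data to a faster tier based on observed access counts. The tiering policy, thresholds, and data structures are invented for this example and do not represent any specific product’s behavior.

```python
# Toy sketch of an autonomous tiering loop; policy and structures are
# invented for illustration, not taken from any real product.
from collections import Counter

def rebalance(access_counts: Counter, fast_tier: set, fast_tier_slots: int) -> set:
    """Return the new set of extents that should live on the fast tier."""
    hottest = {extent for extent, _ in access_counts.most_common(fast_tier_slots)}
    for extent in hottest - fast_tier:
        print(f"promote {extent} to fast tier")
    for extent in fast_tier - hottest:
        print(f"demote {extent} to capacity tier")
    return hottest

# One example cycle; in practice the loop runs continuously on fresh telemetry
# with no administrator intervention.
counts = Counter({"ext-17": 900, "ext-03": 420, "ext-88": 15, "ext-41": 7})
fast = rebalance(counts, fast_tier={"ext-88"}, fast_tier_slots=2)
```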
12. Underperforming storage vendor IT support
Most storage vendors cannot deliver white glove service. They don’t value their customers enough to assign a technical advisor from the start of the relationship at no additional cost. Instead, they bump their customers to a first line of customer support that doesn’t have the knowledge and experience of a Level 3 (L3) engineer. Enterprises and service providers should look for white glove service because, when hands-on support is needed, you want to be sure your supplier will be there for you. Even better, with white glove service, your vendor will be proactive and fix a problem before you are even impacted by it.
To download a PDF of Herzog’s Dirty Dozen, click here.