What are data centers, and how are they changing?
A data center is a physical facility that enterprises use to house their business-critical applications and information. As they evolve, it’s important to think long-term about how to maintain their reliability and security.
What is a data center?
Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements. These can be broken down into three categories:
- Compute: The memory and processing power to run the applications, generally provided by high-end servers
- Storage: Important enterprise data is generally housed in a data center, on media ranging from tape to solid-state drives, with multiple backups
- Networking: Interconnections between data center components and to the outside world, including routers, switches, application-delivery controllers, and more
These are the components that IT needs to store and manage the most critical systems that are vital to the continuous operations of a company. Because of this, the reliability, efficiency, security and constant evolution of data centers are typically a top priority. Both software and hardware security measures are a must.
In addition to technical equipment, data centers also require a significant amount of facilities infrastructure to keep the hardware and software up and running. This includes power subsystems, uninterruptable power supplies (UPS), ventilation and cooling systems, backup generators and cabling to connect to external network operators.
Data-center architecture
Any company of significant size will likely have multiple data centers, possibly in multiple regions. This gives the organization flexibility in how it backs up its information and protects against natural and man-made disasters such as floods, storms and terrorist threats. How the data center is architected can require some difficult decisions because there are almost unlimited options. Some of the key considerations are:
- Does the business require mirrored data centers?
- How much geographic diversity is required?
- What is the necessary time to recover in the case of an outage?
- How much room is required for expansion?
- Should you lease a private data center or use a co-location/managed service?
- What are the bandwidth and power requirements?
- Is there a preferred provider?
- What kind of physical security is required?
Answers to these questions can help determine how many data centers to build and where. For example, a financial services firm in Manhattan likely requires continuous operations, as any outage could cost millions. The company would likely decide to build two data centers in close proximity, such as one in New Jersey and one in Connecticut, that mirror each other. One of them could then be shut down entirely with no hit to operations because the company could run off the other.
However, a small professional-services firm may not need instant access to information and can have a primary data center in its offices, backing up the information to an alternate site across the country on a nightly basis. In the event of an outage, it would start a process to recover the information but would not have the same urgency as a business that relies on real-time data for competitive advantage.
While data centers are often associated with enterprises and web-scale cloud providers, in fact any company can have a data center. For some SMBs, the data center could be a room located in their office space.
Industry standards
To help IT leaders understand what type of infrastructure to deploy, in 2005 the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standards for data centers, which defined four discrete tiers with design and implementation guidelines. A Tier 1 data center is basically a modified server room, while a Tier 4 data center has the highest levels of system reliability and security.
Data centers are currently undergoing a significant transition, and the data center of tomorrow will look very different from the one most organizations are familiar with today.
Businesses are becoming increasingly dynamic and distributed, which means the technology that powers data centers needs to be agile and scalable. As server virtualization has increased in popularity, the amount of traffic moving laterally across the data center (East-West) has dwarfed traditional client-server traffic, which moves in and out (North-South). This presents challenges for data-center managers and more are on the horizon.
Here are the key technologies that will evolve data centers from being static and rigid environments that can hold back companies’ business goals into fluid, agile facilities capable of meeting the demands of a digital enterprise.
Edge computing and micro data centers
Edge computing is an increasingly popular paradigm in which much of the computational work that would traditionally have happened in a centralized data center happens closer to the edge of the network where data is gathered. That means less delay for applications that need near-real-time action, and a reduction in the amount of data bandwidth needed.
Micro data centers are compact units that can gather, process, analyze and store data physically close to the devices that collect it, and placing them on-site makes edge computing possible. Micro data centers are deployed in support of a number of applications, including 5G networks, Internet of Things rollouts, and content delivery networks.
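The latency benefit of processing data at the edge can be made concrete with a back-of-the-envelope calculation. The sketch below uses invented, purely illustrative round-trip and processing times, not measurements from any real deployment:

```python
# Illustrative comparison of response time for a device that needs a
# near-real-time decision. All numbers are assumptions for the sketch,
# not measurements.

CENTRAL_DC_RTT_MS = 80.0   # assumed round trip to a distant central data center
EDGE_DC_RTT_MS = 5.0       # assumed round trip to an on-site micro data center
PROCESSING_MS = 10.0       # assumed compute time, identical in both locations

def response_time(rtt_ms: float, processing_ms: float) -> float:
    """Total time from sending data to receiving a decision back."""
    return rtt_ms + processing_ms

central = response_time(CENTRAL_DC_RTT_MS, PROCESSING_MS)
edge = response_time(EDGE_DC_RTT_MS, PROCESSING_MS)
print(f"central: {central} ms, edge: {edge} ms")
```

With these assumed figures the edge path responds several times faster, and the raw sensor data never has to cross the wide-area network, which is where the bandwidth savings come from.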
There are a number of vendors in the micro data-center space, some with background in adjacent areas like infrastructure as a service (IaaS) or colocation services. Micro data centers are often (but not always) sold as pre-assembled appliances, and “micro” covers a fairly wide range of sizes. They can range from a single 19-inch rack to a 40-foot shipping container, and administration may be taken care of by the vendor or outsourced to a managed service provider (MSP).
The role of cloud
Historically, businesses had a choice of building their own data center or using a hosting vendor or an MSP. Going either of those routes shifted ownership and the economics of running a data center, but the long lead times required to deploy and manage the necessary technology still remained. The rise of IaaS from cloud providers like Amazon Web Services and Microsoft Azure has given businesses an option where they can provision a virtual data center in the cloud with just a few mouse clicks. In 2019, for the first time, enterprises spent more annually on cloud infrastructure services than they did on physical data-center hardware, and more than half of all servers sold went into cloud providers' data centers.
Nevertheless, the local on-prem data center isn't going away any time soon. In a 2020 survey from the Uptime Institute, 58% of respondents said that most of their workloads remained in corporate data centers, and they cited a lack of visibility into public clouds and responsibility for uptime as reasons to resist the switch.
Many organizations are getting the best of both worlds by using a hybrid-cloud approach, in which some workloads are offloaded to a public cloud while others that need more hands-on control or security still run in the local data center. According to the Flexera 2020 State of the Cloud Report, 87% of surveyed organizations have a hybrid-cloud strategy.
Software-defined networking (SDN)
A digital business can only be as agile as its least agile component, and that’s often the network. By separating the network control plane that decides how best to route traffic from the data plane that forwards packets from one point to another, networks can be made more efficient and more flexible. They can be readily optimized via software to adjust to changing network loads.
This architecture is known as software-defined networking (SDN) and can be applied to data centers. Via network controllers that provision and manage data center hardware, data centers can be configured much more quickly, often using plain-language commands that eliminate time-consuming, error-prone manual configuration.
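The control-plane/data-plane split at the heart of SDN can be sketched in a few lines. The class names and rule format below are invented for illustration and do not reflect any real controller's API:

```python
# Minimal sketch of the SDN idea: a central controller (control plane)
# computes forwarding rules, while simple switches (data plane) only
# match traffic against the rules they were given.

class Switch:
    """Data plane: forwards packets using rules installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                # destination -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        # None means no matching rule; a real switch would punt to the controller.
        return self.flow_table.get(dst)

class Controller:
    """Control plane: holds the network-wide view and decides routes."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def set_route(self, switch_name, dst, port):
        # Policy is decided centrally, then pushed down to the device.
        self.switches[switch_name].install_rule(dst, port)

ctrl = Controller()
sw = Switch("tor-1")
ctrl.register(sw)
ctrl.set_route("tor-1", "10.0.0.5", port=3)
print(sw.forward("10.0.0.5"))
```

The point of the split is that rerouting the whole fabric becomes a software change at the controller rather than a box-by-box reconfiguration.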
Hyperconverged infrastructure (HCI)
One of the operational challenges of data centers is having to cobble together the right mixture of servers, storage and networking devices to support demanding applications. Then, once the infrastructure is deployed, IT operations needs to figure out how to scale up quickly without disrupting the application. HCI simplifies that by providing easy-to-deploy appliances based on commodity hardware that provide processing power, storage and networking all in a single piece of hardware. The architecture can scale out by adding more nodes.
HCI can deliver a number of advantages to traditional data centers, including scalability, cloud integration, and easier configuration and administration.
Containers, microservices, and service meshes
Application development is often slowed down by the length of time it takes to provision the infrastructure it runs on. This can significantly hamper an organization’s ability to move to a DevOps model. Containers are a method of virtualizing an entire runtime environment that allows developers to run applications and their dependencies in a self-contained system. Containers are very lightweight and can be created and destroyed quickly so they are ideal to test how applications run under certain conditions.
Containerized applications are often broken into individual microservices, each encapsulating a small, discrete chunk of functionality, which interact with one another to form a complete application. The job of coordinating those individual containers falls to an architectural form known as a service mesh, and while the service mesh does a lot of work to abstract complexity away from developers, it needs its own care and maintenance. Service-mesh automation and management should be integrated into comprehensive data-center networking-management systems, especially as container deployments become more numerous, complex and strategic.
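A toy sketch can show how small microservices compose into an application through a mesh. Each "service" below is a plain function; in a real deployment each would run in its own container, with the mesh implemented as sidecar proxies. All names and the retry behavior are invented for illustration:

```python
# Toy sketch of microservices coordinated by a service-mesh stand-in
# that handles service discovery and simple retries.

def inventory_service(item):
    stock = {"widget": 3, "gadget": 0}
    return {"item": item, "in_stock": stock.get(item, 0) > 0}

def pricing_service(item):
    prices = {"widget": 9.99, "gadget": 14.99}
    return {"item": item, "price": prices.get(item)}

class Mesh:
    """Stand-in for a service mesh: a registry plus retry-on-failure calls."""
    def __init__(self):
        self.registry = {}

    def register(self, name, handler):
        self.registry[name] = handler

    def call(self, name, *args, retries=2):
        for attempt in range(retries + 1):
            try:
                return self.registry[name](*args)
            except Exception:
                if attempt == retries:
                    raise   # give up after the final retry

mesh = Mesh()
mesh.register("inventory", inventory_service)
mesh.register("pricing", pricing_service)

# A storefront composes the two microservices through the mesh.
order = {**mesh.call("inventory", "widget"), **mesh.call("pricing", "widget")}
print(order)
```

Because the services only know the mesh, either one can be replaced or scaled out without the other noticing, which is the property that makes the pattern attractive.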
Microsegmentation
Traditional data centers place all the security technology at the core, so traffic moving in and out passes through security tools that protect the business. The rise of horizontal traffic within data centers means that traffic bypasses firewalls, intrusion-prevention systems and other security systems, enabling malware to spread very quickly. Microsegmentation is a method of creating many segments within a data center where groups of resources can be isolated from one another, so that if a breach happens, the damage is contained within a segment. Microsegmentation is typically done in software, making it very agile.
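Because microsegmentation is done in software, the core of it is just a deny-by-default policy evaluated on every east-west flow. The segment names, host names, and rule format below are made up for illustration:

```python
# Sketch of microsegmentation as a software policy: each workload belongs
# to a segment, and east-west traffic is allowed only when an explicit
# rule permits that segment-to-segment flow.

SEGMENT_OF = {
    "web-01": "web",
    "app-01": "app",
    "db-01": "database",
}

# Only these segment-to-segment flows are permitted; everything else is denied.
ALLOWED_FLOWS = {
    ("web", "app"),
    ("app", "database"),
}

def is_allowed(src_host, dst_host):
    """Deny by default; allow only explicitly listed segment pairs."""
    flow = (SEGMENT_OF[src_host], SEGMENT_OF[dst_host])
    return flow in ALLOWED_FLOWS

print(is_allowed("web-01", "app-01"))   # permitted tier-to-tier flow
print(is_allowed("web-01", "db-01"))    # web may not reach the database directly
```

A compromised web server in this model cannot open a connection to the database tier, so malware that lands there has far less room to spread laterally.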
Non-volatile memory express (NVMe)
Everything is faster in a world that is becoming increasingly digitized, and that means data needs to move faster in and out of data-center storage. Traditional storage protocols such as the small computer system interface (SCSI) and Advanced Technology Attachment (ATA) have been around for decades and are reaching their limits. NVMe is a storage protocol designed to accelerate the transfer of information between systems and solid-state drives, greatly improving data-transfer rates.
And NVMe isn’t just limited to connecting to solid-state memory chips: NVMe over Fabrics (NVMe-oF) allows the creation of super-fast storage networks with latencies that rival direct attached storage.
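One concrete reason NVMe outpaces the legacy interfaces is command queuing. The figures below are the commonly cited specification maximums (AHCI/SATA: one queue of 32 commands; NVMe: up to 65,535 I/O queues of 65,536 commands each), used here for back-of-the-envelope comparison:

```python
# Back-of-the-envelope comparison of how many commands each interface
# can keep in flight at once, using commonly cited spec maximums.

ahci_outstanding = 1 * 32               # AHCI/SATA: 1 queue, 32 commands
nvme_outstanding = 65_535 * 65_536      # NVMe: up to 65,535 queues x 65,536 commands

print(f"AHCI/SATA outstanding commands: {ahci_outstanding}")
print(f"NVMe outstanding commands:      {nvme_outstanding:,}")
```

That enormous queue depth is what lets NVMe keep modern multi-core hosts and flash media busy in parallel instead of serializing requests through a single shallow queue.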
GPU computing
Central processing units (CPUs) have powered data-center infrastructure for decades, but Moore's Law is running up against physical limitations. Also, new workloads such as analytics, machine learning and IoT are driving the need for a new type of compute model that exceeds what CPUs can do. Graphics processing units (GPUs), once only used for games, operate fundamentally differently, as they are able to process many threads in parallel.
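The "many threads in parallel" execution model can be illustrated, loosely, on a CPU. The sketch below uses Python threads purely as an analogy for data parallelism; a real GPU runs thousands of such lanes in hardware, applying the same small kernel to different data elements:

```python
# CPU-based analogy of the GPU execution model: instead of one worker
# stepping through elements serially, many workers apply the same
# operation (a "kernel") to different data elements concurrently.
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    # The same tiny kernel, applied independently to each element.
    return x * 2.0

data = [1.0, 2.0, 3.0, 4.0]

# Serial version: one element at a time.
serial = [scale(x) for x in data]

# Data-parallel version: the kernel mapped across elements concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(scale, data))

assert serial == parallel
```

Workloads like neural-network training fit this shape well, which is why they map onto GPUs so much better than onto a handful of general-purpose CPU cores.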
As a result, GPUs are finding a place in the modern data center, which is increasingly tasked with taking on AI and neural-networking. This will result in a number of shifts in how data centers are architected, from how they’re connected to the network to how they’re cooled.
Data centers have always been critical to the success of businesses of almost all sizes, and that won’t change. However, the number of ways to deploy a data center and the enabling technologies are undergoing a radical shift. Technologies that accelerate that shift are the ones that will be needed in the future.
Copyright © 2020 IDG Communications, Inc.