5 ways to boost server efficiency
For AMD servers in particular, efficiency improves sharply as server work capacity increases. Upgrading from a low-end server that handles two million SSJs to a high-end server that can do more than eight million can double server efficiency. For Intel servers, there are still efficiency benefits, though they are less dramatic, Uptime says.
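To make the arithmetic concrete, here is a minimal sketch (in Python, with invented throughput and power figures) of how a fourfold jump in work capacity can translate into a twofold efficiency gain when power draw only doubles:

```python
# Hypothetical illustration of the capacity-vs-efficiency point above.
# Server efficiency is modeled here as SSJ throughput per watt; the
# throughput and power numbers are invented for illustration only.

def efficiency(ssj_ops_per_sec: float, power_watts: float) -> float:
    """Work delivered per watt of power consumed."""
    return ssj_ops_per_sec / power_watts

low_end = efficiency(2_000_000, 400)   # low-end box: ~2M SSJs at 400 W
high_end = efficiency(8_000_000, 800)  # high-end box: ~8M SSJs at 800 W

print(f"Efficiency gain: {high_end / low_end:.1f}x")  # prints 2.0x
```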
Increase server cores to improve efficiency
Another way to improve efficiency dramatically is to increase the number of processor cores. In the case of 2021 AMD servers, efficiency triples as the number of cores increases from eight to 64, Uptime found. For 2021 Intel machines, the increase was smaller but still significant.
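The same throughput-per-watt arithmetic shows how a core-count increase can triple efficiency. The figures below are invented, chosen only so the ratios come out roughly as Uptime describes for 2021 AMD servers:

```python
# Hypothetical core-count scaling: cores -> (ssj_ops/sec, watts).
# Numbers are made up to illustrate a ~3x efficiency gain from 8 to 64 cores.
configs = {
    8:  (1.0e6, 250),
    16: (2.0e6, 320),
    32: (3.6e6, 360),
    64: (6.0e6, 500),
}

base = configs[8][0] / configs[8][1]  # baseline efficiency at 8 cores
for cores, (ops, watts) in configs.items():
    print(f"{cores:2d} cores: {ops / watts / base:.1f}x baseline efficiency")
```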
It’s important to note that not all workloads are capable of using all available cores, says Dietrich. “Some workloads will work most efficiently on, say, a 12-core processor,” he says. So it’s important to match the processor’s capabilities to the needs of the applications running on the server in order to gain the most efficiency.
In some cases, hypervisors and virtual machines can be used to maximize usage, he says, but not all applications lend themselves to these environments.
IT power management is often overlooked
Power-management features of servers can also improve energy efficiency, boosting server efficiency by at least 10%, according to Uptime’s research. These features work by scaling CPU voltage and frequency up or down with demand and by moving unused cores into a low-power idle state. Many organizations don’t use them, however, out of concern over performance or latency.
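On Linux servers, these controls are typically exposed through the kernel’s cpufreq governors and idle states. The sketch below (Python, run as root, assuming a host with cpufreq support) shows the sysfs mechanism behind the voltage and frequency scaling described above; it is an illustration of the general technique, not a vendor-specific tool:

```python
# Minimal sketch: inspect and set Linux CPU frequency-scaling governors
# via sysfs. Assumes a Linux host with the cpufreq driver loaded;
# writing a governor requires root privileges.
from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def current_governors() -> dict[str, str]:
    """Return each CPU's active frequency-scaling governor."""
    return {
        p.parent.parent.name: p.read_text().strip()
        for p in CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor")
    }

def set_governor(governor: str) -> None:
    """Apply one governor (e.g. 'powersave' or 'performance') to all CPUs."""
    for p in CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor"):
        p.write_text(governor)

if __name__ == "__main__":
    print(current_governors())  # e.g. {'cpu0': 'performance', ...}
    set_governor("powersave")   # let the kernel scale frequency/voltage down
```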
According to the Uptime Institute report, power management can increase latency by 20 to 80 microseconds, which is unacceptable for some types of workloads, such as financial trading. “And there are some applications where you might decide not to use it because it will cause performance or response time problems,” Dietrich says. But there are other applications where delays won’t have a business impact.
“The biggest mistake is that some operators are risk averse,” Dietrich says. “They think that if they’re going to save a couple of hundred bucks a server on their energy bill but are risking breaking their SLA which will cost them a million dollars, they’re not going to turn [power management] on.”
Dietrich recommends that when companies buy new servers and run their performance tests, they check whether power management adversely affects their applications. “If it doesn’t bother them, then you can use power management,” he says. “You can implement a set of power-management functions that will let you save energy and still provide response time and performance that your customers want.”
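A simple version of that acceptance test might look like the sketch below: measure response times with power management off and then on, and adopt it only if tail latency stays within an agreed budget. The send_request callable and the 0.1 ms budget are hypothetical placeholders, not values from Uptime:

```python
# Minimal sketch of a power-management acceptance test: compare p99
# latency with the feature off (baseline) and on (managed). The request
# callable and SLA budget are placeholders for illustration.
import statistics
import time

def measure_latencies(send_request, samples: int = 1000) -> list[float]:
    """Time a representative request repeatedly; returns milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        send_request()
        results.append((time.perf_counter() - start) * 1000)
    return results

def p99(latencies: list[float]) -> float:
    """99th-percentile latency."""
    return statistics.quantiles(latencies, n=100)[98]

def within_sla(baseline: list[float], managed: list[float],
               budget_ms: float = 0.1) -> bool:
    """Accept power management only if p99 grows by less than the budget."""
    return p99(managed) - p99(baseline) < budget_ms
```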
Andy Lawrence, executive director of research at Uptime, noted in a blog post that the efficiency benefits of IT power management are well established and understood, yet few operators use it. “IT power management has long been overlooked as a means of improving data center efficiency,” Lawrence wrote. “Uptime Intelligence’s data shows that in most cases, concerns about IT performance are far outweighed by the reduction in energy use. Managers from both IT and facilities will benefit from analyzing the data, applying it to their use cases and, unless there are significant technical and performance issues, using power management as a default.”
How Uptime measured server efficiency
Uptime analyzed the efficiency of 429 server platforms using The Green Grid’s Server Efficiency Rating Tool (SERT) database. The Green Grid is a consortium whose goal is to create tools, provide technical expertise, and advocate for energy and resource efficiency in data center environments.
The SERT suite is an industry standard for measuring server efficiency; server efficiency requirements under the EU’s Ecodesign Directive and the US Energy Star program both specify that servers report SERT’s overall efficiency metric.
Uptime analyzed AMD and Intel server data from the SERT database, noting that different processor types have advantages and disadvantages depending on the workload. Uptime focused on servers that use AMD EPYC or Intel Xeon processors, and analyzed server generations from 2017, 2019, and 2021.
The institute put the servers through their paces with a simulated enterprise online transaction-processing application that stresses processors and memory: SERT’s server-side Java (SSJ) worklet. Uptime says SSJ was chosen in part because its data is available for eight levels of server utilization (12.5%, 25%, 37.5%, 50%, 62.5%, 75%, 87.5% and 100%) rather than just four, which allows for a more granular analysis.
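The value of those eight points is that efficiency can be charted across the whole load curve rather than at a few coarse steps. The sketch below shows the shape of such an analysis with invented throughput and power readings; it does not reproduce SERT’s actual scoring formula:

```python
# Hypothetical per-load-level efficiency analysis over the eight SSJ
# utilization points. Throughput (ssj_ops/sec) and power (watts) values
# are invented; SERT's real scoring formula is not reproduced here.
levels = [12.5, 25.0, 37.5, 50.0, 62.5, 75.0, 87.5, 100.0]

measurements = [(1.0e6, 180), (2.0e6, 220), (3.0e6, 260), (4.0e6, 300),
                (5.0e6, 340), (6.0e6, 380), (7.0e6, 430), (8.0e6, 480)]

for level, (ops, watts) in zip(levels, measurements):
    print(f"{level:5.1f}% load: {ops / watts:8,.0f} ssj_ops per watt")
```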