- "기밀 VM의 빈틈을 메운다" 마이크로소프트의 오픈소스 파라바이저 '오픈HCL'란?
- The best early Black Friday AirPods deals: Shop early deals
- The 19 best Black Friday headphone deals 2024: Early sales live now
- I tested the iPad Mini 7 for a week, and its the ultraportable tablet to beat at $100 off
- The best Black Friday deals 2024: Early sales live now
Harvesting the Benefits of Cloud-Native Hyperconvergence
The logical progression from the virtualization of servers and storage in virtual SANs (vSANs) was hyperconvergence. By abstracting the three elements of storage, compute, and networking, hyperconvergence promised data centers near-limitless infrastructure control. That ideal was in keeping with the aims of hyperscale operators that needed to grow to meet increased demand and had to modernize their infrastructure to stay agile. Hyperconverged infrastructure (HCI) offered elasticity and scalability on a per-use basis for multiple clients, each of whom could deploy multiple applications and services.
There are clear caveats in the HCI world: limitless control is all well and good, but infrastructure details such as a lack of local storage or slow networking hardware restricting I/O will always define the hard limits of what is possible. Furthermore, some restrictions imposed by HCI vendors limit the choice of hypervisor or constrain hardware to approved kit. Worries about vendor lock-in surround the black-box nature of HCI-in-a-box appliances, too.
The elephant in the room for hyperconverged infrastructure is undoubtedly cloud. It’s something of a cliché to note how quickly technology develops, but cloud-native technologies like Kubernetes are showing their capabilities and future potential in the cloud, in the data center, and at the edge. HCI was presented first and foremost as a data center technology; at the time, it was clearly the sole remit of very large organizations with their own facilities. Those facilities are effectively closed loops, with limits created by physical resources.
Today, cloud facilities are available from hyperscalers at attractive prices to a much broader market. The market for HCI solutions is forecast to grow significantly over the next few years, with year-on-year growth of just under 30%. Vendors are selling cheap(er) appliances and lower license tiers to try to mop up the midmarket, and hyperconvergence technologies are beginning to work with hybrid and multi-cloud topologies. The latter trend is demand-led: if an IT team wants to consolidate its stack for efficiency and easier management, that consolidation must be all-encompassing, taking in local hardware, containers, multiple clouds, and edge installations. That ability also implies inherent elasticity and, by extension, a degree of future-proofing baked in.
The cloud-native technologies around containers are well beyond flash-in-the-pan status. The CNCF (Cloud Native Computing Foundation) Annual Survey for 2021 shows that containers and Kubernetes have gone mainstream: 96% of organizations are either using or evaluating Kubernetes, and 93% of respondents are currently using, or planning to use, containers in production. Portable, scalable, and platform-agnostic, containers are the natural next step in the evolution of virtualization, and CI/CD workflows increasingly have microservices at their core.
So, what of hyperconvergence in these evolving computing environments? How can HCI solutions handle modern cloud-native workloads alongside full-blown virtual machines (VMs) across a distributed infrastructure? It can be done with “traditional” hyperconvergence, but the solution will be proprietary and will incur steep costs.
Last year, SUSE launched Harvester, a 100% free-to-use, open-source, modern hyperconverged infrastructure solution built on a foundation of cloud-native technologies including Kubernetes, Longhorn, and KubeVirt. Built on top of Kubernetes, Harvester bridges the gap between traditional HCI software and the modern cloud-native ecosystem. It unifies your VMs with cloud-native workloads and gives organizations a single point of creation, monitoring, and control for an entire compute-storage-network stack. Since containers can run almost anywhere, from ARM SoC boards up to supercomputing clusters, Harvester suits organizations with workloads spread across data centers, public clouds, and edge locations. Its small footprint makes it a good fit for edge scenarios, and when you combine it with SUSE Rancher, you can centrally manage all your VMs and container workloads across all your edge locations.
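Because Harvester builds on KubeVirt, each VM is represented as a Kubernetes custom resource on the same API server that schedules container workloads, which is what makes that single point of monitoring possible. As a minimal sketch of the idea (not taken from the article), the Python snippet below uses the official kubernetes client to list KubeVirt VMs and ordinary pods side by side; the kubeconfig context and the "default" namespace are illustrative assumptions.
```python
# Minimal sketch: list KubeVirt VirtualMachines and regular pods from the same
# Kubernetes API, as a Harvester-style cluster exposes both.
# Assumptions: the `kubernetes` Python client is installed and the current
# kubeconfig points at a cluster with the KubeVirt CRDs (as Harvester provides).
from kubernetes import client, config

config.load_kube_config()   # use the current kubeconfig context
namespace = "default"       # illustrative namespace

# VMs are custom resources in the kubevirt.io API group.
custom = client.CustomObjectsApi()
vms = custom.list_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace=namespace, plural="virtualmachines",
)
for vm in vms.get("items", []):
    status = vm.get("status", {}).get("printableStatus", "unknown")
    print(f"VM  {vm['metadata']['name']}: {status}")

# Container workloads live on the very same API server.
for pod in client.CoreV1Api().list_namespaced_pod(namespace).items:
    print(f"Pod {pod.metadata.name}: {pod.status.phase}")
```
The point is not the specific calls but that VMs and containers share one declarative control plane, which is what lets a tool like Rancher manage both across a fleet of clusters.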
VMs, containers, and HCI are critical technologies for extending IT services to new locations. Harvester shows how organizations can unify them and deploy HCI without proprietary, closed solutions, using enterprise-grade open-source software that slots right into a modern cloud-native CI/CD pipeline.
To learn more about Harvester, we’ve provided the comprehensive report for you here.
Vishal Ghariwala is the Chief Technology Officer for the APJ and Greater China regions for SUSE, a global leader in true open source solutions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also has a global charter with the SUSE Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.
Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.
Vishal has over 20 years of experience in the Software industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.
Vishal is here on LinkedIn: https://www.linkedin.com/in/vishalghariwala/