What is a virtual machine, and why is it so useful?


Many of today’s cutting-edge technologies, such as cloud computing, edge computing and microservices, owe their start to the concept of the virtual machine: separating operating systems and software instances from the underlying physical computer.

What is a virtual machine?

A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. With VMs, one or more guest machines can run on a single physical host computer.

Each VM has its own operating system, and functions separately from other VMs, even if they are located on the same physical host. VMs generally run on servers, but they can also be run on desktop systems, or even embedded platforms. Multiple VMs can share resources from a physical host, including CPU cycles, network bandwidth and memory.

VMs trace their origins to the early days of computing in the 1960s, when time sharing for mainframe users was used to separate software from a physical host system. A virtual machine was defined in the early 1970s as “an efficient, isolated duplicate of a real machine.”

VMs as we know them today have gained steam over the past 20 years as companies adopted server virtualization to use the compute power of their physical servers more efficiently, reducing the number of physical servers and saving space in the data center. Because apps with different OS requirements could run on a single physical host, separate server hardware was not required for each one.

How do VMs work?

In general, there are two types of VMs: process VMs, which virtualize a single process, and system VMs, which offer a full separation of the operating system and applications from the physical computer. Examples of process VMs include the Java Virtual Machine, the .NET Framework and the Parrot virtual machine.

System VMs rely on hypervisors as a go-between that gives software access to the hardware resources. The hypervisor emulates the computer’s CPU, memory, hard disk, network and other hardware resources, creating a pool of resources that can be allocated to the individual VMs according to their specific requirements. The hypervisor can support multiple virtual hardware platforms that are isolated from each other, enabling VMs to run Linux and Windows Server OSes on the same physical host.
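The pooling-and-allocation idea can be sketched in a few lines of Python. This is a toy model for illustration only; real hypervisors schedule actual hardware, and the class and figures below are invented for the example:

```python
# Toy model of a hypervisor's resource pool (illustrative only; real
# hypervisors schedule physical hardware, not Python objects).

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus        # physical CPU cores still unallocated
        self.free_mem = memory_gb    # physical RAM still unallocated (GB)
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Carve a slice out of the pool for a new VM, if it fits."""
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            raise RuntimeError(f"not enough free resources for {name}")
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.vms[name] = {"cpus": cpus, "mem": memory_gb}

# A 16-core, 64 GB host shared by a Linux guest and a Windows Server guest.
host = Hypervisor(cpus=16, memory_gb=64)
host.create_vm("linux-web", cpus=4, memory_gb=8)
host.create_vm("win-db", cpus=8, memory_gb=32)
print(host.free_cpus, host.free_mem)  # 4 cores and 24 GB remain in the pool
```

Each guest sees only its own slice, while the hypervisor tracks what remains available on the host.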

Big names in the hypervisor space include VMware (ESX/ESXi), Intel/Linux Foundation (Xen), Oracle (Oracle VM Server for SPARC and Oracle VM Server for x86) and Microsoft (Hyper-V).

Desktop computer systems can also utilize virtual machines. An example here would be a Mac user running a virtual Windows instance on their physical Mac hardware.

What are the two types of hypervisors?

The hypervisor manages resources and allocates them to VMs. It also schedules and adjusts how resources are distributed based on how the hypervisor and VMs have been configured, and it can reallocate resources as demands fluctuate. Most hypervisors fall into one of two categories:

  • Type 1. A bare-metal hypervisor runs directly on the physical host machine and has direct access to its hardware. Type 1 hypervisors typically run on servers and are considered more efficient and better-performing than Type 2 hypervisors, making them well suited to server, desktop and application virtualization. Examples of Type 1 hypervisors include Microsoft Hyper-V and VMware ESXi.
  • Type 2. Sometimes called a hosted hypervisor, a Type 2 hypervisor is installed on top of the host machine’s OS, which manages calls to the hardware resources. Type 2 hypervisors are generally deployed on end-user systems for specific use cases. For example, a developer might use a Type 2 hypervisor to create a specific environment for building an application, or a data analyst might use it to test an application in an isolated environment. Examples include VMware Workstation and Oracle VirtualBox.

What are the advantages of virtual machines?

Because the software is separate from the physical host computer, users can run multiple OS instances on a single piece of hardware, saving a company time, management costs and physical space. Another advantage is that VMs can support legacy apps, reducing or eliminating the need and cost of migrating an older app to an updated or different operating system.

In addition, developers use VMs to test apps in a safe, sandboxed environment. Developers who want to see whether their applications will work on a new OS can use VMs to test their software instead of buying the new hardware and OS ahead of time. For example, Microsoft recently updated its free Windows evaluation VMs, which let developers download a VM with Windows 11 to try the OS without updating a primary computer.

This sandboxing can also help isolate malware that infects a given VM instance. Because software running inside a VM cannot tamper with the host computer, malicious software is contained and can do far less damage.

What are the downsides of virtual machines?

Virtual machines do have a few disadvantages. Running multiple VMs on one physical host can result in unstable performance, especially if the infrastructure requirements for a particular application are not met. A VM is also often less efficient than a physical computer running the same workload directly.
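One common way to reason about this risk is a back-of-the-envelope CPU overcommit ratio. The figures below are invented for illustration, and the ~1:1 rule of thumb is an assumption, not a vendor guideline:

```python
# Rough CPU overcommit calculation (illustrative numbers only).
physical_cores = 16   # cores on the host
vcpus_per_vm = 4      # virtual CPUs promised to each VM
num_vms = 10          # VMs packed onto the host

total_vcpus = vcpus_per_vm * num_vms            # 40 vCPUs promised
overcommit_ratio = total_vcpus / physical_cores
print(f"overcommit ratio: {overcommit_ratio:.1f}:1")  # prints 2.5:1

# Ratios well above ~1:1 mean VMs compete for the same physical cores,
# so latency-sensitive applications may see unstable performance.
```

Whether a given ratio is acceptable depends on the workload; idle-heavy VMs tolerate far more overcommitment than CPU-bound ones.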

And if the physical server crashes, all of the applications running on it go down. Most IT shops therefore strike a balance between physical and virtual systems.

What are some other forms of virtualization?

The success of VMs in server virtualization led to applying virtualization to other areas, including storage, networking and desktops. Chances are, if a type of hardware is being used in the data center, the concept of virtualizing it is being explored (for example, application delivery controllers).

In network virtualization, companies have explored network-as-a-service options and network functions virtualization (NFV), which uses commodity servers to replace specialized network appliances and enables more flexible and scalable services. This differs a bit from software-defined networking, which separates the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources. A third technology, virtual network functions (VNFs), consists of software-based services that can run in an NFV environment, including processes such as routing, firewalling, load balancing, WAN acceleration and encryption.

Verizon, for example, uses NFV to power its Virtual Network Services, which enable customers to spin up new services and capabilities on demand. Services include virtual applications, routing, software-defined WANs, WAN optimization and even Session Border Controller as a Service (SBCaaS) to centrally manage and securely deploy IP-based real-time services, such as VoIP and unified communications.

VMs and containers

The growth of VMs has led to further development of technologies such as containers, which take the concept a step further and are gaining appeal among web application developers. In a container setting, a single application, along with its dependencies, can be virtualized. With much less overhead than a VM, a container includes only binaries, libraries and applications.

While some think the development of containers may kill the virtual machine, VMs offer enough capabilities and benefits to keep the technology moving forward. For example, VMs remain useful when running multiple applications together or when running legacy applications on older operating systems.

In addition, some feel that containers are less secure than VM hypervisors, because all the containers on a host share a single OS, while VMs isolate both the application and the OS.

Gary Chen, research manager of IDC’s Software-Defined Compute division, said VM software remains a foundational technology even as customers explore cloud architectures and containers. “The virtual machine software market has been remarkably resilient and will continue to grow positively over the next five years, despite being highly mature and approaching saturation,” Chen writes in IDC’s Worldwide Virtual Machine Software Forecast, 2019-2022.

VMs, 5G and edge computing

VMs are seen as a part of new technologies such as 5G and edge computing. For example, virtual desktop infrastructure (VDI) vendors such as Microsoft, VMware and Citrix are looking at ways to extend their VDI systems to employees who now work at home as part of a post-COVID hybrid model.

“With VDI, you need extremely low latency because you are sending your keystrokes and mouse movements to basically a remote desktop,” says Mahadev Satyanarayanan, a professor of computer science at Carnegie Mellon University. In 2009, Satyanarayanan wrote about how virtual machine-based cloudlets could be used to provide better processing capabilities to mobile devices on the edge of the Internet, which led to the development of edge computing.

In the 5G wireless space, network slicing uses software-defined networking and NFV technologies to run network functions in VMs on virtualized servers, providing services that once ran only on proprietary hardware.

Like many other technologies in use today, these emerging innovations would not have been developed had it not been for the original VM concepts introduced decades ago.

Keith Shaw is a freelance digital journalist who has written about the IT world for more than 20 years.


Copyright © 2022 IDG Communications, Inc.


