Most of the major cloud providers favor virtual machines (VMs) for the security and versatility they offer over bare metal servers. They rely on VMs to manage their internal-use infrastructure and use containers running on VMs to support the majority of their service offerings. For platform engineers assessing whether to run containerized workloads on VMs or bare metal, the major cloud providers offer a solid case study to follow.
These findings come from a report by analyst firm ReveCom, based on documentation reviews and interviews. ReveCom estimates that the large majority, likely in the 80–95% range, of the providers' internal containerized production workloads run on virtualized infrastructure rather than bare metal, and that container performance on VMs is equal or near-equal to bare metal across the majority of use cases.
Bare metal may still be used for highly specific, performance-intensive use cases. But thanks to VMs' flexibility, scalability, and comparable performance, the vast majority of container usage today runs on top of them.
And if VMs are more than “just good enough” for internal use at the vast majority of hyperscalers, that should serve as a stamp of approval for organizations running their own private clouds, which are usually a hybrid mix of provider clouds and on-premises infrastructure.
You might say, “Okay, we've been using VMs for years. We don't want to start reinventing the wheel,” ReveCom analyst Bruce Gain said. “Additionally, performance of containers on VMs is on par with, or marginally less than, containers running on bare metal.”
While bare metal offers a marginal raw performance advantage, virtualization (especially with modern, optimized hypervisors like AWS Nitro and Firecracker) provides superior security, isolation, and operational scalability. AWS runs the vast majority of its containerized workloads on virtualized hosts. Bare metal is used selectively for specialized performance or hardware access, but multi‑tenancy, isolation, and fleet management make virtualization the default, according to ReveCom.
Like the other hyperscalers, AWS has sought to eliminate the historical performance and latency drawbacks of virtualization. The Nitro System achieves this by offloading host management functions onto dedicated, custom hardware, allowing the use of a lightweight hypervisor and delivering performance nearly indistinguishable from a bare metal server, according to AWS documentation.
Firecracker, built upon this foundation, provides the necessary speed and isolation to power high-density, transient services like AWS Lambda and AWS Fargate. Consequently, virtualization is the default state for the AWS control plane and customer runtimes. AWS’ use of bare metal deployment is calculated and highly selective, reserved only for specialized tasks such as testing custom silicon, supporting specialized hardware features, or fulfilling specific regulatory and licensing mandates.
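Firecracker microVMs are defined by a small JSON machine spec, which is part of what makes them fast to launch and cheap to run at high density. The sketch below is a minimal, hypothetical example of the config-file format Firecracker accepts; the kernel and rootfs paths and the resource sizes are placeholders, not values from AWS's production setup:

```json
{
  "boot-source": {
    "kernel_image_path": "./vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "./rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

A spec this small, passed to the Firecracker binary via its `--config-file` flag, boots an isolated microVM in a fraction of a second, which is the property that makes transient, per-invocation virtualization practical for services like Lambda.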
Similarly, Microsoft’s Kubernetes deployments, which support services like Microsoft Teams and Microsoft 365, primarily run on VMs. VMs provide full elasticity and simplified lifecycle management, making it simple to create clusters with diverse machine sizes, including the very small footprints that many customers start with, and to scale them dynamically as needs evolve, according to Sean McKenna, partner, product management, Azure Kubernetes Service.
“Day-2 operations, like cluster upgrades, are also straightforward. We simply provision new VMs on the target Kubernetes version and tear down the old ones. The key advantage of VMs is their simplicity and scalability,” McKenna said. “However, there are specific scenarios where using bare metal is advantageous, such as high-performance computing or specialized AI workloads. With that in mind, some services adopt a hybrid model to optimize for specific workload requirements.”
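The upgrade pattern McKenna describes, stand up new VMs already running the target Kubernetes version, then cordon and tear down the old ones, is a blue-green node replacement. The following is an illustrative Python model of that flow, not Azure's implementation; the `Node` class, node names, and version strings are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    version: str
    cordoned: bool = False

def upgrade_pool(nodes: list[Node], target: str) -> list[Node]:
    """Blue-green node replacement: add new-version nodes first,
    then cordon and remove the old ones so pods reschedule onto new VMs."""
    # 1. Provision replacement VMs already running the target version.
    new_nodes = [Node(f"{n.name}-new", target) for n in nodes]
    pool = nodes + new_nodes  # surge: old and new nodes coexist briefly
    # 2. Cordon old nodes so no new pods land on them.
    for n in nodes:
        n.cordoned = True
    # 3. Drain and delete the old VMs, leaving only target-version nodes.
    return [n for n in pool if n.version == target]

pool = [Node("vm-0", "1.29.7"), Node("vm-1", "1.29.7")]
upgraded = upgrade_pool(pool, "1.30.3")
print([n.version for n in upgraded])  # every surviving node runs the target version
```

The key point of the pattern is that no VM is ever upgraded in place: nodes are disposable, so "upgrade" reduces to create-and-replace, which is exactly the simplicity advantage McKenna attributes to VMs.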
Google’s internal infrastructure operates on a dual strategy, where the use of VMs represents the majority approach, though its foundational legacy system, Borg, remains a notable exception.
The core Borg infrastructure, which manages much of Google’s internal compute environment, is bare metal. This system was designed and created over a decade ago, is considered "grandfathered in," and runs its containers on bare metal almost exclusively. The company does not plan to change this architecture to virtualize it or run it on VMs. Consequently, the Borg setup represents the exception to the company’s general rule for internal use.
However, Google leverages virtualization for the vast majority of other applications and services. The majority of the company's clusters utilize VMs, with containers primarily running on VMs for anything outside of Google Cloud Platform (GCP). Major applications such as Gmail and YouTube rely on this VM-based infrastructure. For Google, virtualization largely outweighs any advantages that bare metal might offer in the vast majority of cases, reflecting its standard approach for running internal services. This strategy also allows for a hybrid route internally, where VMs can be used for sensitive environments or legacy systems.
DigitalOcean also relies primarily on VMs as the foundation for its internal platform, with most internal workloads running as containers on top of those VMs. Its internal applications, including the Gradient GenAI platform, run on App Platform, which runs on DOKS (DigitalOcean Kubernetes), which in turn sits on DigitalOcean’s VM layer.
“We use VMs for all of our internal applications…all of our internal use cases, we do not prefer to use bare metal. It’s so much easier to just use and maintain,” Archana Kamath, senior director, IaaS, said. Kamath added, “The VM-based architecture also aligns with what customers commonly use, since VMs offer familiar cloud constructs and predictable operational behavior.”
While DigitalOcean offers bare metal for customers who need maximum flexibility, the company notes that bare-metal environments require deep kernel, networking, and security expertise, and are more difficult to recover and operate. By contrast, VMs provide fast node recovery, built-in multitenancy, and an easier operational model for managing Kubernetes and containerized workloads. Both DigitalOcean and many of its customers ultimately “prefer Kubernetes on VMs and containers on VMs,” Kamath said, reflecting a model where VMs remain the stable and efficient substrate for modern cloud applications.
Internal or external usage
Imagine how large an AWS data center would have to be if every Kubernetes cluster were hosted on bare metal. Not only would that be untenable, but it would also be so expensive that even Amazon's bottom line would suffer. Instead, the vast majority of AWS-hosted Kubernetes clusters are deployed via VMs. The same holds for Azure. And since these providers operate at massive global scale, both internal and external Kubernetes services depend on the flexibility and efficiency of virtualization.
According to this report from SMTX, "VM-based Kubernetes can effectively meet the requirements of the majority of containerized applications in production environments."
In other words, bare metal is rarely needed.
When is running Kubernetes on VMs beneficial?
Not every situation is the same, and what works for one business might not work for another. Even so, the cloud giants' experience shows where running Kubernetes on VMs pays off.
On the future of cloud infrastructure, DigitalOcean's Archana Kamath said: "People are familiar with cloud, people are familiar with VMs... You want that simplification... We feel VMs are where... futuristically, it'll land, especially as you called out, it's getting to a place where [performance has caught up]."
In the end
The choice to run Kubernetes on VMs or on bare metal depends on many factors: specific requirements, use case, and priorities such as security, performance, cost, and portability. For DevOps and platform engineering teams, VMs are the dominant way to run containers today, both on-premises and in the cloud. For the cloud giants that host millions of services around the globe, VMs not only make sense; in most cases, they are the only practical choice.








