The rise in popularity of running containers in virtual machines (VMs) managed by internal developer platforms (IDPs) is a direct response to cloud-native complexity. The industry has realized that bare metal often lacks the isolation and flexibility that enterprise-scale container orchestration requires.
This shift is driven by a fundamental technical reality of containers on bare metal: the shared kernel architecture. In a bare-metal environment, all containers on a host share the same underlying OS kernel, creating a significant blast radius in which a kernel panic or security vulnerability triggered by one container can jeopardize the entire physical host.
Running containers within VMs solves this by adding a second level of isolation. Because each VM carries its own guest OS and kernel, each workload gets a discrete security boundary: a breach in one container is contained by the hypervisor rather than spreading to the rest of the host.
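As a concrete sketch of this pattern, assuming a Kubernetes cluster where the Kata Containers runtime has been installed and registered under the handler name "kata" (an assumption, not something described in this article), a RuntimeClass can route selected pods into lightweight VMs so each pod gets its own guest kernel:

```yaml
# Illustrative only: assumes the Kata Containers runtime is installed
# on the nodes and registered with the handler name "kata".
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app        # hypothetical workload name
spec:
  runtimeClassName: kata    # this pod runs inside its own lightweight VM
  containers:
    - name: app
      image: nginx:1.27
```

Pods without the `runtimeClassName` field continue to run on the default shared-kernel runtime, so the VM boundary can be applied selectively to the workloads that need it.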
This architecture also turns resource management from a static allocation problem into a fluid exercise. Unlike rigid bare-metal hardware, VMs allow for "hot-adding" resources and dynamic right-sizing, which effectively eliminates the common bare-metal pitfall of stranded capacity.
Performance parity between the two architectures, containers on bare metal versus containers in VMs, has largely been achieved. With virtualized workloads now matching or even exceeding bare-metal throughput in AI and RAN benchmarks, organizations are reconsidering their physical footprints.
However, simply "lifting and shifting" bare-metal workloads onto VMs is only a partial victory. Without a proper platform engineering strategy, organizations still struggle to deliver the operational efficiency that modern cloud environments require. Improving operational efficiency and resource utilization becomes that much more critical as infrastructure costs continue to rise.
A proper IDP solves this by adding a universal, policy-driven management layer. Infrastructure scalability then becomes detached from headcount, allowing a single platform team to govern thousands of environments in ways that were previously impossible. The VM layer is what makes these efficiencies achievable at scale, whether the aim is faster provisioning, improved utilization, or better policy compliance.
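One small illustration of what "policy-driven" can mean in practice, assuming Kubernetes as the orchestration layer (the namespace and limits below are hypothetical, not drawn from this article): a ResourceQuota applied to each tenant namespace lets a small platform team enforce capacity limits across many environments from a single declarative definition, rather than policing servers by hand.

```yaml
# Illustrative tenant policy: caps what one namespace may consume,
# no matter how many teams the platform governs. Values are examples.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: team-a         # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
    pods: "200"
```

Stamping the same quota template across every tenant namespace is the kind of repeatable governance that scales with automation rather than headcount.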
The performance reality
The debate about bare metal and virtualization is often driven by outdated, decade-old assumptions about their relative performance. Recent benchmarks demonstrate that modern VMs have achieved near-parity with bare metal.
Specifically, Broadcom’s MLPerf Inference 5.1 results showed that virtualized AI workloads (using vSphere with NVIDIA B200 and H200 GPUs) delivered between 98% and 102% of the performance of bare metal. Similarly, for latency-sensitive Radio Access Network (RAN) workloads, vSphere matched bare metal with latency consistently below 10 microseconds.
While a niche set of workloads, such as high-frequency trading, may still demand raw hardware access, the "virtualization tax" is effectively gone.
The impact of the remaining virtualization overhead is dwarfed by the efficiency, security, isolation, and productivity gains organizations can realize. These gains come from a consistent platform strategy: comprehensive automation and orchestration that enable self-service and observability.
The precision problem: Containers on bare metal vs. on VMs
Running containers directly on bare metal is often pitched as "cutting out the middleman." In reality, it creates significant infrastructure management challenges: deploying containers directly on bare metal means losing the hardware abstraction layer. This creates three specific friction points:
1. Kernel dependencies: Containers on bare metal share the host OS kernel directly. A configuration error on the host affects all containers, creating a massive "blast radius." VMs provide a hard isolation boundary that contains these failures.
2. Resource rigidity: Bare-metal servers are static. If a cluster needs less capacity, you cannot easily reclaim that physical server for a different tenant. VMs allow for dynamic resizing and bin-packing, maximizing hardware utilization.
3. Security: Relying solely on container namespaces for isolation on bare metal carries higher security risk in multi-tenant environments than the mature security boundary of a hypervisor.
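The bin-packing advantage in point 2 can be made concrete with a small sketch. This is an illustrative first-fit-decreasing packer (not any particular scheduler's actual algorithm): given VM-sized workloads and identically sized hosts, it shows how dynamic placement reclaims capacity that a static, one-workload-per-server bare-metal layout would strand.

```python
def pack_first_fit_decreasing(workloads, host_capacity):
    """Place workload sizes (e.g., vCPUs) onto as few hosts as the heuristic finds.

    First-fit-decreasing: sort workloads largest-first, then put each one
    on the first host with enough remaining capacity, provisioning a new
    host only when nothing fits. Returns the number of hosts used.
    """
    hosts = []  # remaining free capacity per host
    for size in sorted(workloads, reverse=True):
        if size > host_capacity:
            raise ValueError(f"workload of size {size} exceeds host capacity")
        for i, free in enumerate(hosts):
            if size <= free:
                hosts[i] = free - size
                break
        else:
            hosts.append(host_capacity - size)  # provision a new host
    return len(hosts)

# Eight workloads on 16-vCPU hosts: a static one-per-server layout would
# dedicate eight machines; bin-packing VMs onto hosts needs only three.
workloads = [8, 6, 6, 5, 4, 4, 3, 2]
print(pack_first_fit_decreasing(workloads, host_capacity=16))  # prints 3
```

The same consolidation logic is what hypervisor schedulers perform continuously as demand shifts, which is why stranded capacity is so much rarer in a virtualized fleet.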
A well-managed, automated virtualization layer between containers and hardware provides the connective tissue needed to fully leverage the advantages of application containers while avoiding messy custom orchestration projects, keeping hardware allocation efficient, secure, and well-performing.

From ‘silo’ to standardization
Without virtualization, infrastructure teams are forced to manage physical servers individually. This leads to waste, over-provisioning, and siloed teams that are difficult to scale. If the full promise of containers is to be realized, there is a glaring need for standardization.
"As our journey matured, we realized that virtualization provided benefits that were much more valuable than the performance losses from the abstraction," says Louis Bailleul, director of architecture and platform engineering at TGS.
"The hard dependency on the host disappeared... making the advantages of bare metal marginal compared to the flexibility of virtualization."
The platform engineering approach
Virtualization provides the "standard unit" of infrastructure that platform engineering teams require. Tara Stella, a principal systems architect, emphasizes that simply moving workloads isn't enough; teams must build a platform that abstracts complexity with VMs.
"I’ve seen so many teams overlook the necessity of VMs for workloads requiring strict isolation, yet they fail to build the automated platforms needed to manage them," Stella said. "That’s not a modern platform; that’s just fragmented infrastructure masquerading as a cloud."
One platform to unite them all.
By standardizing on VMs, organizations gain the flexibility to orchestrate containers more efficiently, improving both security and resource usage. Managing this layer through a platform gives teams a shared, centralized view, ensuring transparency and control while leaving developers with fewer operations-related tasks.
A platform engineering approach embeds containers into the overall DevOps lifecycle, delivering resources to container clusters through VMs that are free from the limitations of statically assigned bare-metal hardware.
This means containers become part of the CI/CD pipeline, observability and monitoring strategy, and the security and compliance framework. Without this integration, containers just become a mosaic of statically assigned resources that are more difficult to manage than traditional infrastructure.
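What "part of the CI/CD pipeline" looks like in the simplest case can be sketched as follows, assuming GitHub Actions as the CI system; the registry URL and image name are placeholders, not values from this article.

```yaml
# Illustrative CI sketch (GitHub Actions syntax). The registry and
# image names are hypothetical placeholders.
name: build-and-publish
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Push image
        run: docker push registry.example.com/app:${{ github.sha }}
```

Tagging each image with the commit SHA ties the container artifact back to source control, which is the hook that the observability and compliance layers of a platform build on.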
VMs, by contrast, add an extra layer of isolation, and with that added security, containerized environments become that much more manageable and flexible.
Meanwhile, the performance of containers running within VMs is now mostly on par with that of applications running directly on bare metal.
When VMs deliver improved flexibility, isolation, and security without a meaningful "virtualization tax," running containers on bare metal becomes unnecessary friction. Running containers on virtual machines, by contrast, provides the foundation for modern platform engineering at scale.
