What Is KubeVirt in OpenShift?
KubeVirt is an extension of Kubernetes that allows you to run virtual machines (VMs) alongside container workloads in the OpenShift environment. It does this by extending the Kubernetes API with custom resources, so VMs are treated as native Kubernetes objects. This integration bridges the gap between traditional VM-based workloads and modern containerized applications, offering a unified platform for running both types of applications.
By running VMs in Kubernetes, KubeVirt takes advantage of Kubernetes’ orchestration capabilities. This includes automated scaling, high availability, and efficient resource management. KubeVirt integrates with existing Kubernetes tools and processes, enabling operations teams to manage VMs using familiar interfaces.
KubeVirt is an open source project founded by Red Hat in collaboration with Arm, NVIDIA, Intel, and the Linux Foundation. You can get it from the official project website.
This is part of a series of articles about Kubernetes networking.
In this article:
- Benefits of Integrating KubeVirt with OpenShift
- KubeVirt Architecture Overview
- Deploying KubeVirt on OpenShift
- Best Practices for Using KubeVirt in OpenShift
Benefits of Integrating KubeVirt with OpenShift
KubeVirt is an extension for Kubernetes that enables running virtual machines (VMs) alongside container workloads in OpenShift. It provides the following benefits for OpenShift users:
- Unified management of workloads: With KubeVirt, administrators can manage both VMs and containers through the same Kubernetes interface. This reduces the complexity of maintaining separate tools for each type of workload.
- Modernization: Organizations can modernize their infrastructure incrementally by running containerized applications alongside legacy VM-based workloads.
- Resource utilization: OpenShift’s Kubernetes-based resource scheduling ensures that VMs and containers share infrastructure efficiently, maximizing hardware utilization.
- Scalability and high availability: KubeVirt leverages Kubernetes’ scaling and failover mechanisms to ensure that VMs benefit from the same resilience and scalability as containers.
- DevOps processes: Development and operations teams can adopt container-native workflows for both VMs and containers, fostering collaboration and speeding up delivery cycles.
- Integration with Kubernetes ecosystem: KubeVirt allows VMs to plug into Kubernetes-native tooling for monitoring, logging, and CI/CD pipelines.
Related content: Read our guide to container security
KubeVirt Architecture Overview
KubeVirt is designed as a service-oriented architecture that leverages Kubernetes to manage virtual machine instances (VMIs) alongside traditional containerized workloads.
Core Components
Custom Resource Definitions (CRDs):
- KubeVirt extends Kubernetes by introducing new resource types to its API, such as `VirtualMachine` and `VirtualMachineInstance` (VMI).
- These CRDs allow users to define and manage VMIs using the same Kubernetes API used for pods and other native workloads.
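To make this concrete, below is a minimal sketch of a `VirtualMachine` manifest, adapted from the style of example used in the KubeVirt documentation; the name `testvm` and the CirrOS demo disk image are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm                  # illustrative name
spec:
  running: false                # create the definition without starting the VM
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi        # enough for the tiny demo image
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo  # demo image published by the KubeVirt project
```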
Controllers:
Specialized controllers manage the lifecycle of the new resource types. For example:
- The `virt-controller` ensures VMIs are scheduled on appropriate hosts and manages their overall state.
- Controllers also handle the high-level logic for stateful VMs (`VirtualMachine`) and replica sets of VMIs (`VirtualMachineInstanceReplicaSet`).
Daemons:
- Node-level daemons, such as the `virt-handler`, operate alongside the Kubernetes `kubelet` to manage VMIs on specific nodes.
- These daemons handle tasks like launching VMIs, configuring virtualized hardware, and monitoring their runtime state to align with desired configurations.
KubeVirt Pods:
All KubeVirt components, including controllers and daemons, are deployed as pods in the Kubernetes cluster. This design ensures they are managed and orchestrated like any other Kubernetes-native workload.
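For example, once KubeVirt is installed (see the deployment section below), its components can be inspected with the same commands used for any other workload:

```shell
# The control-plane components and node daemons all run as regular pods
kubectl get pods -n kubevirt

# virt-handler is deployed as a DaemonSet so it runs on every schedulable node
kubectl get daemonset virt-handler -n kubevirt
```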
Operational Flow
KubeVirt’s operational workflow involves two main steps: VMI scheduling and pod-like behavior.
1. VMI Scheduling
- When a user defines a VMI, it is processed by the Kubernetes API server and stored as an object.
- The `virt-controller` schedules the VMI to a suitable node based on available resources and policies.
- The `virt-handler` on the target node launches the VMI using the node's virtualization capabilities (e.g., KVM).
2. Pod-Like Behavior
- VMIs are managed similarly to pods, with Kubernetes mechanisms like resource quotas, namespaces, and labels applying to both.
- This consistency simplifies operations and integrates VMs into Kubernetes-native workflows.
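As a small sketch of this pod-like behavior, the same label commands and selectors used for pods also apply to VMIs (the `app=web` label and VMI name are illustrative):

```shell
# Label a running VMI just as you would label a pod
kubectl label vmi testvm app=web

# Select VMIs by label, exactly as you would select pods
kubectl get vmi -l app=web
```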
Related content: Read our guide to OpenShift Virtualization
Tips from the Expert
In my experience, here are tips that can help you better implement and optimize KubeVirt in OpenShift:
Leverage node labeling for VM placement:
Use Kubernetes node labels to define VM-specific placement policies. For instance, label nodes based on hardware capabilities (e.g., SSD storage, GPU availability) to ensure that VMs are scheduled on appropriate resources.
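For example, a node with local SSDs can be labeled and a VM pinned to it with a `nodeSelector`; the `disktype=ssd` label, node name, and VM name are illustrative:

```yaml
# First label the node: kubectl label node worker-1 disktype=ssd
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: db-vm
spec:
  running: true
  template:
    spec:
      nodeSelector:
        disktype: ssd           # only schedule onto nodes carrying this label
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi
```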
Enhance VM lifecycle automation using Ansible:
Pair KubeVirt with Ansible for automating VM provisioning, configuration, and decommissioning. This can save time and reduce errors in managing large-scale environments.
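A minimal sketch of what this can look like with the `kubernetes.core.k8s` Ansible module; the playbook layout and file names are assumptions, not a prescribed workflow:

```yaml
# provision-vm.yml - applies a VirtualMachine manifest to the cluster
- name: Provision a KubeVirt VM
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create or update the VM definition
      kubernetes.core.k8s:
        state: present
        src: testvm.yaml        # hypothetical local manifest, e.g. the sketch shown earlier
```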
Use topology-aware scheduling:
Enable topology-aware scheduling for both VMs and containers to ensure optimal performance by colocating interdependent workloads on the same physical host or network topology.
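For instance, a fragment of a VirtualMachine template spec can require colocation with a dependent workload via pod affinity; the `app: cache` label is illustrative:

```yaml
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: cache                         # colocate with pods carrying this label
              topologyKey: kubernetes.io/hostname    # same physical host
```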
Test VM networking latency:
VMs in KubeVirt may have different network performance characteristics than containers. Use tools like iperf to benchmark and fine-tune the network stack for latency-sensitive applications.
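For instance, with `iperf3` installed in two guest VMs (the server address below is a placeholder):

```shell
# In the server VM
iperf3 -s

# From the client VM, measure throughput and retransmits for 30 seconds
iperf3 -c 10.0.2.15 -t 30
```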
Audit API interactions with KubeVirt CRDs:
Regularly audit access and operations on KubeVirt CRDs (e.g., VirtualMachine, VirtualMachineInstance) to detect unauthorized or misconfigured workloads. Integrate audit logs with a SIEM for better security oversight.
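A minimal sketch of a Kubernetes audit policy entry that records operations on these CRDs; wiring up the audit log backend and shipping it to a SIEM depend on your cluster setup:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse      # capture full request and response bodies
    resources:
      - group: kubevirt.io
        resources:
          - virtualmachines
          - virtualmachineinstances
```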
Deploying KubeVirt on OpenShift
To deploy KubeVirt on OpenShift, you must meet specific prerequisites and follow a defined process to set up its components for managing virtual machines (VMs). These instructions are adapted from the KubeVirt documentation.
Prerequisites
Before deploying KubeVirt, ensure the following requirements are met:
- Compatible Kubernetes Cluster: The OpenShift cluster must be based on one of the latest three Kubernetes releases available at the time of the KubeVirt release.
- Privileged Kubernetes API Server: The Kubernetes API server should be configured with `--allow-privileged=true` to enable the execution of KubeVirt's privileged DaemonSet.
- Supported Container Runtimes: KubeVirt supports container runtimes like `containerd` and `cri-o` with virtualization features. While other runtimes may work, these are the recommended options.
- Hardware Virtualization Support: The host machines should support hardware virtualization, validated using the `virt-host-validate` tool (see the example after this list).
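For example, on each candidate host:

```shell
# Check that the host satisfies QEMU/KVM virtualization requirements
virt-host-validate qemu
```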
Installation Steps
1. Install the KubeVirt Operator: The KubeVirt operator manages the lifecycle of KubeVirt components. Use the following commands to install the latest release:
```shell
export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
```
2. Deploy the KubeVirt Custom Resource: The custom resource (CR) triggers the installation of KubeVirt components in the cluster:
```shell
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
```
3. Verify Installation: Wait until all components are available:
```shell
kubectl -n kubevirt wait kv kubevirt --for condition=Available
```
4. Once installed, components like `virt-api`, `virt-controller`, and `virt-handler` will be running in the `kubevirt` namespace.
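At this point you can apply a `VirtualMachine` manifest (such as the sketch shown earlier) and manage it with KubeVirt's `virtctl` client; the file and VM names are illustrative:

```shell
kubectl apply -f testvm.yaml

# Start the VM and attach to its serial console
virtctl start testvm
virtctl console testvm
```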
Additional Considerations
Configure Emulation (Optional)
If hardware virtualization is unavailable, enable software emulation by modifying the KubeVirt CR:
```shell
kubectl edit -n kubevirt kubevirt kubevirt
```
Add the following configuration:
```yaml
spec:
  configuration:
    developerConfiguration:
      useEmulation: true
```
SELinux Support
Ensure that `container-selinux` is installed on nodes with SELinux enabled. The required minimum version is documented in the KubeVirt repository.
AppArmor Profiles
In environments with AppArmor, you may need to adjust profiles to allow the execution of privileged containers used by KubeVirt.
Best Practices for Using KubeVirt in OpenShift
Optimize Resource Allocation
Optimizing resource allocation in KubeVirt involves setting up resource requests and limits, ensuring VMs have the necessary compute and storage to operate efficiently without waste. This optimization balances performance and cost, leveraging Kubernetes’ scheduling capabilities to maximize utilization and minimize redundancy.
Resource allocation can be fine-tuned by analyzing workload characteristics and adjusting configurations to meet peak demands. Administrators should employ monitoring solutions to track resource usage, adjusting allocation policies to reflect actual needs.
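As a sketch, requests and limits live in the VM's `domain.resources` block; the values are illustrative and should be derived from your own workload measurements:

```yaml
spec:
  template:
    spec:
      domain:
        resources:
          requests:
            cpu: "2"            # guaranteed share used for scheduling decisions
            memory: 4Gi
          limits:
            cpu: "4"            # hard ceiling enforced at runtime
            memory: 4Gi
```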
Implement Security Measures
Implementing security measures in a KubeVirt environment is crucial to protect data integrity and system stability. Security practices include isolating workloads, configuring role-based access controls (RBAC), and employing network policies to protect against unauthorized access. Additionally, patch management and security monitoring play roles in safeguarding the environment from vulnerabilities and threats.
Tools and policies should be employed to secure VM and container interactions, ensuring that multi-tenancy environments remain safe and compliant. By integrating security best practices into daily operations, enterprises can protect sensitive data and maintain high standards of availability and integrity.
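Because each VMI runs inside a launcher pod, standard Kubernetes network policies apply to VMs as well. A minimal deny-by-default sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: vm-tenant-a        # illustrative namespace
spec:
  podSelector: {}               # selects all pods, including the virt-launcher pods backing VMs
  policyTypes:
    - Ingress
    - Egress
```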
Monitor Virtual Machine Performance
Monitoring VM performance is vital for maintaining high-quality service delivery. Utilizing tools and dashboards within Kubernetes provides visibility into resource usage, alerting to potential issues such as bottlenecks or failures. Consistent monitoring helps identify patterns and respond to anomalies promptly. Continuously gathering performance data also informs capacity planning.
Performance monitoring tools allow for proactive adjustments and calibrations to meet performance benchmarks. By adopting integrated monitoring solutions, organizations gain insights into system health and responsiveness.
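As a simple starting point, assuming the cluster's metrics pipeline is in place, the `virt-launcher` pods that back each VMI can be inspected directly; the namespace is illustrative:

```shell
# Each running VMI is backed by a virt-launcher pod; check its CPU and memory usage
kubectl top pod -l kubevirt.io=virt-launcher -n my-vms
```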
Backup and Recovery Strategies
Establishing robust backup and recovery strategies for KubeVirt environments guarantees data protection and business continuity. Regular backups of VMs ensure that data remains recoverable in the event of failures or disasters. Efficient recovery plans ensure rapid return to service, reducing downtime and its associated impacts.
The use of automated backup tools and policies minimizes manual effort and enhances reliability in recovery processes. Regular testing of recovery procedures ensures readiness and effectiveness, mitigating risks associated with data loss.
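Where the cluster's storage supports it, KubeVirt can express a point-in-time snapshot declaratively; the API version below may differ by KubeVirt release, and the VM name is illustrative:

```yaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: testvm-snapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: testvm                # the VM to snapshot
```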
Regularly Update and Patch Systems
Regular updates and patches are essential for maintaining the security and efficiency of KubeVirt implementations. By keeping the system and its components updated, organizations protect against vulnerabilities and benefit from the latest features and improvements. Strategically scheduling and testing updates minimizes disruptions while ensuring that the infrastructure remains robust and secure.
Routine patching and updates extend beyond security, encompassing performance and feature enhancements that improve system capabilities and user experience. Employing automated update solutions ensures that environments remain consistent and secure.
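For example, the KubeVirt CR lets you declare how updated infrastructure is rolled out to running VMs; whether `LiveMigrate` is viable depends on your storage and network configuration:

```yaml
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
      - LiveMigrate             # migrate running VMIs onto updated launcher pods
      - Evict                   # fall back to eviction where migration is not possible
```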
Networking and Security with Red Hat OpenShift Virtualization and Calico
Calico provides a streamlined approach to managing containers, virtual machines, and hosts using Red Hat OpenShift Virtualization:
- BGP Peering: Calico’s BGP peering enables seamless networking integration of Kubernetes applications with existing network infrastructure, providing dynamic and scalable routing for Kubernetes clusters.
- VRF Isolation: Calico’s VRF isolation enables multi-tenant network segmentation by creating virtual routing and forwarding instances, ensuring strict traffic separation and enhanced security.
- Tenant and Workload Isolation: With Calico, you can author, preview, stage, and enforce network policies at both the network and application layers, defining which pods, services, or namespaces can communicate with each other using Kubernetes-native constructs like labels and namespaces. This prevents unauthorized access and enhances security across the cluster (see the policy sketch after this list).
- Automatic Namespace Isolation: Calico automatically isolates tenants in separate namespaces thus restricting any unauthorized communication between tenants by default and preventing lateral movement of threats within a cluster.
- Automated Policy Recommendation: Calico's policy recommendation engine recommends policies based on the traffic flow of your workloads, which can be enforced with a single click, no coding necessary. All recommended policies can be modified before enforcement.
- Policy Lifecycle Management: Calico helps preview and stage policies prior to enforcement to secure workloads and understand policies’ impact on the application’s performance and security posture. It also provides immediate feedback on policy rule changes in the production environment before enforcement.
- Dynamic Policy Enforcement: Calico’s Dynamic Policy Segmentation delivers real-time network policy updates within milliseconds, ensuring immediate response to network changes and minimizing potential vulnerabilities.
- Policy As Code: Calico implements network security and observability as code, enabling automated, scalable, and compliant workload management. It uses Kubernetes primitives and declarative models, using the same versioning that teams use for source code. It ensures continuous compliance and security for all components, regardless of deployment, distribution, or container type.
- Observability: The Dynamic Service and Threat Graph, built on flow logs, collects and analyzes information about applications and their communication flows. The resulting map eliminates the guesswork involved in understanding an application's upstream and downstream dependencies and policy gaps.
- Distributed IDS/IPS: Calico's workload-centric IDS/IPS protects against network-based threats by ingesting threat feeds, such as AlienVault by default, as well as custom sources, to pinpoint the source of malicious activity in case of a breach.
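As referenced in the tenant and workload isolation item above, here is a minimal sketch of a Calico `projectcalico.org/v3` policy; the namespace, labels, and port are illustrative:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: vm-tenant-a
spec:
  selector: app == 'db'         # applies to workloads (pods or VMs) labeled app=db
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'web'  # only web workloads may connect
      destination:
        ports:
          - 5432
```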
Next steps:
- Solution brief: OpenShift virtualization