Installation
KubeVirt is a virtualization add-on to Kubernetes and this guide assumes that a Kubernetes cluster is already installed.
If installed on OKD, the web console is extended for management of virtual machines.
A few requirements need to be met before you can begin:

- A Kubernetes cluster or derivative (such as Tectonic) based on Kubernetes 1.10 or greater
- The Kubernetes apiserver must have `--allow-privileged=true` in order to run KubeVirt's privileged DaemonSet
- The `kubectl` client utility
KubeVirt is currently supported on the following container runtimes:

- docker
- crio (with runv)
Other container runtimes, which do not use virtualization features, should work too. However, they are not tested.
Hardware with virtualization support is recommended. You can use virt-host-validate to ensure that your hosts are capable of running virtualization workloads:
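If libvirt's client tools are installed, the check can look like this (a sketch; package names and the exact output vary by distribution):

```shell
# Validate the host's virtualization capabilities for QEMU/KVM.
# virt-host-validate is part of libvirt's client tooling.
virt-host-validate qemu || true

# Minimal fallback: check for the KVM device node directly.
if [ -e /dev/kvm ]; then
  echo "KVM device present"
else
  echo "KVM device missing"
fi
```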
KubeVirt can be installed using the KubeVirt operator, which manages the lifecycle of all the KubeVirt core components. Below is an example of how to install KubeVirt using an official release.
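A sketch of the release-based install, assuming the manifest URLs follow KubeVirt's release naming conventions (pick a real tag from the releases page; `v0.35.0` below is only a placeholder):

```shell
# Version to install; pick a tag from https://github.com/kubevirt/kubevirt/releases
export RELEASE=v0.35.0

# Deploy the KubeVirt operator
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml"

# Create the KubeVirt custom resource; the operator reacts to it and
# deploys the core components
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml"

# Block until all components report ready
kubectl -n kubevirt wait kv kubevirt --for condition=Available
```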
Add the following to the kubevirt.yaml file:
Note: Prior to release v0.20.0 the condition for the `kubectl wait` command was named "Ready" instead of "Available".

Note: Prior to KubeVirt 0.34.2 a ConfigMap called `kubevirt-config` in the install namespace was used to configure KubeVirt. Since 0.34.2 this method is deprecated. The ConfigMap still takes precedence over configuration on the CR if it exists, but it will not receive future updates, and you should migrate any custom configurations to `spec.configuration` on the KubeVirt CR.
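For example, a migrated configuration placed under `spec.configuration` on the KubeVirt CR could look like the following (the particular settings shown are illustrative, not required):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      # Illustrative example: former kubevirt-config keys move under
      # spec.configuration on the CR
      featureGates:
        - LiveMigration
```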
All new components will be deployed under the kubevirt namespace:
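You can verify this with kubectl once the deployment settles (the exact pod list depends on the release):

```shell
# All KubeVirt control-plane pods live in the kubevirt namespace,
# e.g. virt-api, virt-controller, virt-handler and virt-operator
kubectl get pods -n kubevirt
```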
Once privileges are granted, KubeVirt can be deployed as described above.
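On OKD, granting those privileges can look like this (a sketch; the `kubevirt-operator` service account name is an assumption for your install):

```shell
# Allow the operator's service account to run privileged pods
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
```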
No additional steps are required to extend OKD's web console for KubeVirt.
The virtualization extension is automatically enabled when KubeVirt deployment is detected.
Once nodes are restarted with this configuration, KubeVirt can be deployed as described above.
To install the latest developer build, run the following commands:
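A sketch of such an install; the storage bucket path below is an assumption based on where KubeVirt has published nightly builds and may change:

```shell
# Resolve the most recent nightly build tag
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest)

# Deploy the operator and the KubeVirt CR from that build
kubectl apply -f "https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator.yaml"
kubectl apply -f "https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr.yaml"
```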
To find out which commit this build is based on, run:
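Assuming the same nightly bucket layout as above, the commit hash is published next to each build:

```shell
# Resolve the latest build tag, then fetch the commit it was cut from
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest)
curl -L "https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/commit"
```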
Experimental ARM64 developer builds can be installed like this:
You can patch the virt-handler DaemonSet post-deployment to restrict it to a specific subset of nodes with a nodeSelector. For example, to restrict the DaemonSet to only nodes with the "region=primary" label:
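A sketch using `kubectl patch` with a strategic-merge patch:

```shell
# Restrict virt-handler to nodes labeled region=primary
kubectl patch ds/virt-handler -n kubevirt \
  -p '{"spec": {"template": {"spec": {"nodeSelector": {"region": "primary"}}}}}'
```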
If hardware virtualization is not available, software emulation can be enabled by setting `spec.configuration.developerConfiguration.useEmulation` to `true` in the KubeVirt CR as follows:
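A minimal KubeVirt CR fragment enabling emulation (metadata names follow the default install):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      useEmulation: true
```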
The following needs to be added prior to KubeVirt deployment:
You can find KubeVirt in the OKD Service Catalog and install it from there. To do so, please follow the corresponding documentation.
The following needs to be added to all nodes prior to KubeVirt deployment:
KubeVirt releases a developer build daily from the current master branch. You can see when the last build happened by looking at the published build artifacts.
See the developer documentation to understand how to build and deploy KubeVirt from source.
KubeVirt alone does not bring any additional network plugins; it just allows users to utilize them. If you want to attach your VMs to multiple networks (Multus CNI) or have full control over L2 (OVS CNI), you need to deploy the respective network plugins. For more information, refer to the documentation of those plugins.
Note: KubeVirt Ansible installs these plugins by default.