# Hotplug Volumes
KubeVirt now supports hotplugging volumes into a running Virtual Machine Instance (VMI). The volume must be either a block volume or contain a disk image. When a VM that has hotplugged volumes is rebooted, the hotplugged volumes are attached to the restarted VM. If the volumes are persisted, they become part of the VM spec and are no longer considered hotplugged. If they are not persisted, the volumes are reattached as hotplugged volumes.
## Enabling hotplug volume support
Hotplug volume support must be enabled via a feature gate: expand the feature gates field in the KubeVirt CR by adding `HotplugVolumes` to it.
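For example, assuming KubeVirt is installed in the `kubevirt` namespace, the CR could look like this (the `HotplugVolumes` entry is the addition; the surrounding fields follow the standard KubeVirt CR layout):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - HotplugVolumes
```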
## Virtctl support
In order to hotplug a volume, you must first prepare one. This can be done by using a DataVolume (DV). In this example we will use a blank DV to add some extra storage to a running VMI.
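A blank DV along these lines should work; the name `example-hotplug-volume` and the `5Gi` size are illustrative choices:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-hotplug-volume
spec:
  source:
    blank: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
```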
In this example we are using the ReadWriteOnce accessMode and the default FileSystem volume mode. Volume hotplugging supports all combinations of the block volume mode with the ReadWriteMany, ReadWriteOnce, and ReadOnlyMany accessModes, if your storage supports the combination.
### Addvolume
Now let's assume we have started a VMI like the Fedora VMI in the examples, and that the name of the VMI is 'vmi-fedora'. We can add the blank volume above to this running VMI by using the 'addvolume' command available with virtctl.
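Assuming the blank DV created above is named `example-hotplug-volume`:

```bash
$ virtctl addvolume vmi-fedora --volume-name=example-hotplug-volume
```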
This will hotplug the volume into the running VMI, and set the serial of the disk to the volume name. In this example it is set to example-hotplug-volume.
### Serial
You can change the serial of the disk by specifying the --serial parameter, for example:
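```bash
# 1234567890 is an arbitrary serial chosen for this example
$ virtctl addvolume vmi-fedora --volume-name=example-hotplug-volume --serial=1234567890
```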
The serial will be used in the guest, so you can identify the disk inside the guest by its serial. For instance, in Fedora the entries in /dev/disk/by-id will contain the serial.
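With the serial `1234567890` from the example above, the listing might look something like this (hotplugged disks are attached on a SCSI bus, which is reflected in the device id):

```bash
$ ls /dev/disk/by-id
scsi-0QEMU_QEMU_HARDDISK_1234567890
```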
As you can see, the serial is part of the disk name, so you can uniquely identify it.
The format and length of serials are specified in the libvirt documentation.
### Persist
In some cases you want a hotplugged volume to become part of the standard disks after a restart of the VM, for instance if you added some permanent storage to the VM. We also assume that the running VMI has a matching VM that defines its specification. You can call the addvolume command with the --persist flag. This will update the VM domain disks section in addition to updating the VMI domain disks, so that when you restart the VM, the disk is already defined in the VM, and thus in the new VMI.
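Assuming the same volume and VMI names as above:

```bash
$ virtctl addvolume vmi-fedora --volume-name=example-hotplug-volume --persist
```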
In the VM spec this will now show up as a new disk.
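A sketch of the relevant part of the VM spec, using the names from this example (hotplugged disks are typically attached with the scsi bus):

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            - disk:
                bus: scsi
              name: example-hotplug-volume
      volumes:
        - dataVolume:
            name: example-hotplug-volume
          name: example-hotplug-volume
```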
### Removevolume
In addition to hotplugging a volume, you can also unplug it by using the 'removevolume' command available with virtctl.
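Using the example volume name from above:

```bash
$ virtctl removevolume vmi-fedora --volume-name=example-hotplug-volume
```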
NOTE: You can only unplug volumes that were dynamically added with addvolume, or using the API.
## VolumeStatus
VMI objects have a new status.VolumeStatus field. This is an array containing each disk, hotplugged or not. For example, after hotplugging the volume in the addvolume example, the VMI status will contain this:
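An illustrative snippet, with the volume names from this example; the attach pod name and UID are placeholders:

```yaml
status:
  volumeStatus:
    - name: cloudinitdisk
      target: vdb
    - name: containerdisk
      target: vda
    - hotplugVolume:
        attachPodName: hp-volume-7fmz4
        attachPodUID: 62a7f6bf-474c-4e25-8db5-1db9725f0ed2
      message: Successfully attach hotplugged volume example-hotplug-volume to VM
      name: example-hotplug-volume
      phase: Ready
      reason: VolumeReady
      target: sda
```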
vda is the container disk that contains the Fedora OS, and vdb is the cloudinit disk. As you can see, those entries just contain the name and target used when assigning them to the VM. The target is the value passed to QEMU when specifying the disks. The value is unique for the VM and does NOT represent the naming inside the guest; for instance, for a Windows guest OS the target has no meaning. The same is true for hotplugged volumes: the target is just a unique identifier meant for QEMU, and inside the guest the disk can be assigned a different name.
The hotplugVolume status has some extra information that regular volume statuses do not have. The attachPodName is the name of the pod that was used to attach the volume to the node the VMI is running on. If this pod is deleted, it will also stop the VMI, as we cannot guarantee the volume will remain attached to the node. The other fields are similar to conditions and indicate the status of the hotplug process. Once a volume is Ready, it can be used by the VM.
## Live Migration
Currently Live Migration is enabled for any VMI that has volumes hotplugged into it.
NOTE: However, there is a known issue where migration may fail for VMIs with hotplugged block volumes if the target node uses CPU manager with static policy and runc prior to version v1.0.0.