Release notes

Forklift 2.3

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe technical changes, new features and enhancements, and known issues.

Technical changes

This release has the following technical changes:

Setting the VddkInitImage path is part of the procedure for adding a VMware provider

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
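For reference, a Provider CR that carries the VddkInitImage path might look like the following sketch. The spec.settings.vddkInitImage field name, the namespace, and all URLs and image paths are placeholders based on common Forklift conventions; verify them against your installed version:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider          # example name
  namespace: konveyor-forklift    # assumed namespace
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk
  secret:
    name: vsphere-credentials     # Secret holding vCenter credentials
    namespace: konveyor-forklift
  settings:
    vddkInitImage: quay.io/example/vddk:latest   # VddkInitImage path (placeholder)
```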

The StorageProfile resource needs to be updated for a non-provisioner storage class

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
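For a non-provisioner storage class such as NFS, the update might look like the following sketch of a CDI StorageProfile. The StorageProfile name must match the storage class name, and the accessModes and volumeMode values shown are examples to adjust to your storage:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: nfs                  # must match the storage class name
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteMany          # example access mode for an NFS share
    volumeMode: Filesystem   # NFS provides file-based volumes
```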

New features and enhancements

This release has the following features and improvements:

Forklift 2.3 supports warm migration from oVirt

You can use warm migration to migrate VMs from both VMware and oVirt.

The minimal sufficient set of privileges for VMware users is established

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is now established and documented.

Forklift documentation is updated with instructions on using hooks

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

Known issues

This release has the following known issues:

Some warm migrations from oVirt might fail

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set a new cutover time for the VMs whose migration failed in the first run.

Snapshots are not deleted after warm migration

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#22053183)

Warm migration from oVirt fails if a snapshot operation is performed on the source VM

If a user performs a snapshot operation on the source VM while a migration snapshot is scheduled, the migration fails instead of waiting for the user's snapshot operation to finish. (BZ#2057459)

During an oVirt warm migration, the status of the target VM becomes DataVolumeError

During a warm migration from oVirt, the status of the target VM might become DataVolumeError. The cause is a restart of the importer pod. This does not affect the migration. (BZ#2055201)

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Importer pod log is unavailable after warm migration

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

Deleting a migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) To clean up the temporary resources, you must archive the migration plan before deleting it.
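As an illustration, archiving before deletion amounts to setting an archived flag in the Plan CR spec. The spec.archived field name is an assumption for illustration; confirm it against your Forklift version:

```yaml
# Fragment of a Plan CR: marking the plan as archived before deleting it.
# The spec.archived field name is an assumption; verify it for your version.
spec:
  archived: true
```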

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Network, storage, and VMs referenced by name in the Plan CR are not displayed in the web console

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

Forklift 2.2

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe technical changes, new features and enhancements, and known issues.

Technical changes

This release has the following technical changes:

Setting the precopy time interval for warm migration

You can set the time interval between snapshots taken during the precopy stage of warm migration.
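For example, the interval might be set on the ForkliftController CR as sketched below. The controller_precopy_interval field name and the namespace are assumptions based on common Forklift conventions; verify them for your installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: konveyor-forklift        # assumed namespace
spec:
  controller_precopy_interval: 60     # minutes between precopy snapshots (assumed field name)
```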

New features and enhancements

This release has the following features and improvements:

Creating validation rules

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.
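A custom rule might look like the following Rego sketch. The package path and the input attribute are hypothetical; real rules must match the attribute names that the Provider Inventory service collects:

```rego
package io.konveyor.forklift.vmware   # hypothetical package path

# Flag VMs that have USB support enabled.
# input.usbSupport is a hypothetical inventory attribute used for illustration.
has_usb_enabled {
    input.usbSupport.enabled
}

concerns[flag] {
    has_usb_enabled
    flag := {
        "category": "Warning",
        "label": "USB support detected",
        "assessment": "USB devices are not supported by the target environment."
    }
}
```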

Downloading logs by using the web console

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

Duplicating a migration plan by using the web console

You can duplicate a migration plan, including its VMs, mappings, and hooks, by using the web console, in order to edit the copy and run it as a new migration plan.

Archiving a migration plan by using the web console

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

Known issues

This release has the following known issues:

Certain Validation service issues do not block migration

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

The following Validation service assessments do not block migration:

Table 1. Issues that do not block migration

Assessment: The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi, and virtio interface types are currently supported).
Result: The migrated VM will have a virtio disk if the source interface is not recognized.

Assessment: The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139, and virtio interface types are currently supported).
Result: The migrated VM will have a virtio NIC if the source interface is not recognized.

Assessment: The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

Assessment: One or more of the VM’s disks has an illegal or locked status condition.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have USB devices.

Assessment: The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have a watchdog device.

Assessment: The VM’s status is not up or down.
Result: The migration will proceed but it might hang if the VM cannot be powered off.

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Missing resource causes error message in current.log file

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

Importer pod log is unavailable after warm migration

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

Deleting a migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) To clean up the temporary resources, you must archive the migration plan before deleting it.

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Network, storage, and VMs referenced by name in the Plan CR are not displayed in the web console

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR

If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

Forklift 2.1

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe new features and enhancements, known issues, and technical changes.

Technical changes

VDDK image added to HyperConverged custom resource

The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.
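After this change, the image reference lives in the HyperConverged CR, roughly as in the sketch below. The spec.vddkInitImage field name and the image path are assumptions; verify them against your KubeVirt/HCO version:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  vddkInitImage: quay.io/example/vddk:latest   # assumed field name and placeholder image
```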

New features and enhancements

This release adds the following features and improvements.

Cold migration from oVirt

You can perform a cold migration of VMs from oVirt.

Migration hooks

You can create migration hooks to run Ansible playbooks or custom code before or after migration.
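A hook might be defined as a Hook CR along the lines of the following sketch and then referenced from a migration plan. The field names, image, and namespace are assumptions for illustration, and the playbook value is a placeholder for a base64-encoded Ansible playbook:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: prehook-example
  namespace: konveyor-forklift                  # assumed namespace
spec:
  image: quay.io/konveyor/hook-runner           # image that executes the hook (assumed)
  playbook: <base64-encoded Ansible playbook>   # placeholder value
```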

Filtered must-gather data collection

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

SR-IOV network support

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

Known issues

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Disk copy stage does not progress

The disk copy stage of an oVirt VM migration does not progress and the Forklift web console does not display an error message. (BZ#1990596)

The cause of this problem might be one of the following conditions:

  • The storage class does not exist on the target cluster.

  • The VDDK image has not been added to the HyperConverged custom resource.

  • The VM does not have a disk.

  • The VM disk is locked.

  • The VM time zone is not set to UTC.

  • The VM is configured for a USB device.

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

To determine the cause:

  1. Click Workloads → Virtualization in the OKD web console.

  2. Click the Virtual Machines tab.

  3. Select a virtual machine to open the Virtual Machine Overview screen.

  4. Click Status to view the status of the virtual machine.

VM time zone must be UTC with no offset

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

oVirt resource UUID causes a "Provider not found" error

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

You must use the resource name. (BZ#1994037)

Same oVirt resource name in different data centers causes ambiguous reference

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

Snapshots are not deleted after warm migration

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

Forklift 2.0

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

The release notes describe new features and enhancements, known issues, and technical changes.

New features and enhancements

This release adds the following features and improvements.

Warm migration

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

Cancel migration

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

Migration network

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

Validation service

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Known issues

This section describes known issues and mitigations.

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Network map displays a "Destination network not found" error

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)
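Each additional destination network needs a network attachment definition like the following sketch; the name, namespace, and CNI configuration (bridge type and bridge name) are placeholders to adapt to your cluster:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: destination-net          # example name; create one per additional destination network
  namespace: target-namespace    # namespace of the migrated VMs (placeholder)
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "destination-net",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }
```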

Warm migration gets stuck during third precopy

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

You can do one of the following to mitigate this issue:

  • Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

  • Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    $ kubectl patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "720"}}'