Installing and using Forklift 2.1

About Forklift

You can migrate virtual machines from VMware vSphere or Red Hat Virtualization to KubeVirt with Forklift.

Prerequisites

Review the following prerequisites to ensure that your environment is prepared for migration.

Software compatibility guidelines

You must install compatible software versions.

Table 1. Compatible software versions
Forklift   OKD   KubeVirt   VMware vSphere   Red Hat Virtualization
2.1        4.8   4.8.1      6.5 or later     4.3 or later

Storage support and default modes

Forklift uses the following default volume and access modes for supported storage.

If the KubeVirt storage does not support dynamic provisioning, Forklift applies the default settings:

  • Filesystem volume mode

    Filesystem volume mode is slower than Block volume mode.

  • ReadWriteOnce access mode

    ReadWriteOnce access mode does not support live virtual machine migration.

Table 2. Default volume and access modes
Provisioner                              Volume mode   Access mode
kubernetes.io/aws-ebs                    Block         ReadWriteOnce
kubernetes.io/azure-disk                 Block         ReadWriteOnce
kubernetes.io/azure-file                 Filesystem    ReadWriteMany
kubernetes.io/cinder                     Block         ReadWriteOnce
kubernetes.io/gce-pd                     Block         ReadWriteOnce
kubernetes.io/hostpath-provisioner       Filesystem    ReadWriteOnce
manila.csi.openstack.org                 Filesystem    ReadWriteMany
openshift-storage.cephfs.csi.ceph.com    Filesystem    ReadWriteMany
openshift-storage.rbd.csi.ceph.com       Block         ReadWriteOnce
kubernetes.io/rbd                        Block         ReadWriteOnce
kubernetes.io/vsphere-volume             Block         ReadWriteOnce
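
You can check which provisioner backs each storage class in your KubeVirt cluster by listing the storage classes; the PROVISIONER column corresponds to the first column of the table above:

$ oc get storageclass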

Network prerequisites

The following prerequisites apply to all migrations:

  • IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.

  • The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

  • If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
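
A network attachment definition is a Multus custom resource in the target namespace. The following is a minimal sketch that assumes a Linux bridge named br1 exists on the cluster nodes; the resource name, namespace, and CNI configuration are placeholders that you must adapt to your environment:

$ cat << EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <network_attachment_definition>
  namespace: <target_namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "<network_attachment_definition>",
      "type": "bridge",
      "bridge": "br1"
    }
EOF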

Ports

The firewalls must enable traffic over the following ports:

Table 3. Network ports required for migrating from VMware vSphere
Port   Protocol   Source            Destination         Purpose
443    TCP        OpenShift nodes   VMware vCenter      VMware provider inventory; disk transfer authentication
443    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer authentication
902    TCP        OpenShift nodes   VMware ESXi hosts   Disk transfer data copy

Table 4. Network ports required for migrating from Red Hat Virtualization
Port    Protocol   Source            Destination   Purpose
443     TCP        OpenShift nodes   RHV Engine    RHV provider inventory; disk transfer authentication
443     TCP        OpenShift nodes   RHV hosts     Disk transfer authentication
54322   TCP        OpenShift nodes   RHV hosts     Disk transfer data copy
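
Before you start a migration, you can verify that the required ports are reachable from a host on the same network as the OpenShift nodes, for example with netcat, assuming it is installed; the host names below are placeholders:

$ nc -zv <vcenter_host> 443
$ nc -zv <esxi_host> 902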

Source virtual machine prerequisites

The following prerequisites apply to all migrations:

  • ISO/CDROM disks must be unmounted.

  • Each NIC must contain one IPv4 and/or one IPv6 address.

  • The VM name must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

  • The VM name must not duplicate the name of a VM in the KubeVirt environment.

  • The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

Red Hat Virtualization prerequisites

The following prerequisites apply to Red Hat Virtualization migrations:

  • You must have the CA certificate of the Manager.

    You can obtain the CA certificate by navigating to https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
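
    Alternatively, you can download the certificate from the command line, for example with curl; the -k option skips certificate verification because the CA certificate is not yet trusted:

    $ curl -k -o ca.pem 'https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'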

VMware prerequisites

The following prerequisites apply to VMware migrations:

  • You must install VMware Tools on all source virtual machines (VMs).

  • If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks. See the example after this list.

  • You must create a VMware Virtual Disk Development Kit (VDDK) image.

  • You must obtain the SHA-1 fingerprint of the vCenter host.

  • If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
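
If you manage vSphere with the govc CLI, the following commands are one possible way to enable CBT on a VM and on its first disk. The disk key (scsi0:0) is an assumption for illustration, and the setting typically takes effect only after the VM is power-cycled. You can also enable CBT in the vSphere client.

$ govc vm.change -vm <vm_name> -e ctkEnabled=true
$ govc vm.change -vm <vm_name> -e scsi0:0.ctkEnabled=true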

Creating a VDDK image

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. Later, you will add the VDDK image to the HyperConverged custom resource (CR).

Storing the VDDK image in a public registry might violate the VMware license terms.

Procedure
  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to the VMware VDDK download page.

  3. Select the latest VDDK version and click Download.

  4. Save the VDDK archive file in the temporary directory.

  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM registry.access.redhat.com/ubi8/ubi-minimal
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the VDDK image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
  8. Push the VDDK image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your KubeVirt environment.
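
    For example, you can confirm from a host with access to the cluster network that the image can be pulled; log in to the registry first if it requires authentication:

    $ podman pull <registry_route_or_server_path>/vddk:<tag>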

Obtaining the SHA-1 fingerprint of a vCenter host

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

Procedure
  • Run the following command:

    $ openssl s_client \
        -connect <www.example.com>:443 \ (1)
        < /dev/null 2>/dev/null \
        | openssl x509 -fingerprint -noout -in /dev/stdin \
        | cut -d '=' -f 2
    1 Specify the vCenter name.
    Example output
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67

Increasing the NFC service memory of an ESXi host

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration fails because the NFC service memory is limited to 10 parallel connections.

Procedure
  1. Log in to the ESXi host as root.

  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    ...
          <nfcsvc>
             <path>libnfcsvc.so</path>
             <enabled>true</enabled>
             <maxMemory>1000000000</maxMemory>
             <maxStreamMemory>10485760</maxStreamMemory>
          </nfcsvc>
    ...
  3. Restart hostd:

    # /etc/init.d/hostd restart

    You do not need to reboot the host.

Installing Forklift

You can install Forklift by using the OKD web console or the command line interface (CLI).

Installing the Forklift Operator

You can install the Forklift Operator by using the OKD web console or the command line interface (CLI).

Installing the Forklift Operator by using the OKD web console

You can install the Forklift Operator by using the OKD web console.

Prerequisites
  • OKD 4.8 installed.

  • KubeVirt Operator installed.

  • You must be logged in as a user with cluster-admin permissions.

Procedure
  1. In the OKD web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field to search for forklift-operator.

    The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.

  3. Click the Forklift Operator and then click Install.

  4. On the Install Operator page, click Install.

  5. Click Operators → Installed Operators to verify that the Forklift Operator appears in the konveyor-forklift project with the status Succeeded.

  6. Click the Forklift Operator.

  7. Under Provided APIs, locate the ForkliftController, and click Create Instance.

  8. Click Create.

  9. Click Workloads → Pods to verify that the Forklift pods are running.

Obtaining the Forklift web console URL

You can obtain the Forklift web console URL by using the OKD web console.

Prerequisites
  • You must have the KubeVirt Operator installed.

  • You must have the Forklift Operator installed.

  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. Log in to the OKD web console.

  2. Click Networking → Routes.

  3. Select the konveyor-forklift project in the Project: list.

  4. Click the URL for the forklift-ui service to open the login page for the Forklift web console.

Installing the Forklift Operator from the command line interface

You can install the Forklift Operator from the command line interface (CLI).

Prerequisites
  • OKD 4.8 installed.

  • KubeVirt Operator installed.

  • You must be logged in as a user with cluster-admin permissions.

Procedure
  1. Create the konveyor-forklift project:

    $ cat << EOF | oc apply -f -
    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: konveyor-forklift
    EOF
  2. Create an OperatorGroup CR called migration:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: migration
      namespace: konveyor-forklift
    spec:
      targetNamespaces:
        - konveyor-forklift
    EOF
  3. Create a Subscription CR for the Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: forklift-operator
      namespace: konveyor-forklift
    spec:
      channel: development
      installPlanApproval: Automatic
      name: forklift-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: "konveyor-forklift-operator.2.1.0"
    EOF
  4. Create a ForkliftController CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: konveyor-forklift
    spec:
      olm_managed: true
    EOF
  5. Verify that the Forklift pods are running:

    $ oc get pods -n konveyor-forklift
    Example output
    NAME                                  READY  STATUS   RESTARTS  AGE
    forklift-controller-788bdb4c69-mw268  2/2    Running  0         2m
    forklift-operator-6bf45b8d8-qps9v     1/1    Running  0         5m
    forklift-ui-7cdf96d8f6-xnw5n          1/1    Running  0         2m

Obtaining the Forklift web console URL

You can obtain the Forklift web console URL from the command line.

Prerequisites
  • You must have the KubeVirt Operator installed.

  • You must have the Forklift Operator installed.

  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. Obtain the Forklift web console URL:

    $ oc get route virt -n konveyor-forklift \
      -o custom-columns=:.spec.host
    Example output
    https://virt-konveyor-forklift.apps.cluster.openshift.com
  2. Launch a browser and navigate to the Forklift web console.

Migrating virtual machines to KubeVirt

You can migrate virtual machines (VMs) to KubeVirt by using the Forklift web console or the command line interface (CLI).

You must ensure that all migration prerequisites are met.

About cold and warm migration

Forklift supports cold migration from VMware vSphere and Red Hat Virtualization, and warm migration from VMware vSphere only.

Cold migration

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

Warm migration

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

Then the VMs are shut down and the remaining data is copied during the cutover stage.

Precopy stage

The VMs are not shut down during the precopy stage.

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by patching the vm-import-controller-config config map.
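
For example, to set a 30-minute snapshot interval, you can apply the same patch that is shown later in the CLI migration procedure:

$ oc patch configmap/vm-import-controller-config \
  -n openshift-cnv \
  -p '{"data": {"warmImport.intervalMinutes": "30"}}'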

You must enable CBT on the source VMs and the VM disks.

A VM can support up to 28 CBT snapshots. If that limit is exceeded, a warm import retry limit reached error message is displayed. If the VM has preexisting CBT snapshots, it will reach this limit sooner.

The precopy stage runs until either the cutover stage starts or the maximum number of CBT snapshots is reached.

Cutover stage

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

You can start the cutover stage manually in the Forklift console.

You can schedule a cutover time by specifying the value of the cutover parameter in the Migration CR manifest.
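
For example, a Migration CR with a scheduled cutover time looks like the following; the full CR and its parameters are described in the CLI migration procedure later in this guide:

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: konveyor-forklift
spec:
  plan:
    name: <plan>
    namespace: konveyor-forklift
  cutover: 2021-04-04T01:23:45.678+09:00
EOF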

Migrating virtual machines by using the Forklift web console

You can migrate virtual machines to KubeVirt by using the Forklift web console.

Adding providers

You can add providers by using the Forklift web console.

Adding a VMware source provider

You can add a VMware source provider by using the Forklift web console.

Prerequisites
  • vCenter SHA-1 fingerprint.

  • VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

Procedure
  1. Add the VDDK image to the HyperConverged CR:

    $ cat << EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (1)
    EOF
    1 Specify the VDDK image that you created.
  2. In the Forklift web console, click Providers.

  3. Click Add provider.

  4. Select VMware from the Type list.

  5. Fill in the following fields:

    • Name: Name to display in the list of providers

    • Hostname or IP address: vCenter host name or IP address

    • Username: vCenter admin user, for example, administrator@vsphere.local

    • Password: vCenter admin password

    • SHA-1 fingerprint: vCenter SHA-1 fingerprint

  6. Click Add to add and save the provider.

    The source provider appears in the list of providers.

Adding a Red Hat Virtualization source provider

You can add a Red Hat Virtualization source provider by using the Forklift web console.

Prerequisites
  • CA certificate of the Manager.

Procedure
  1. In the Forklift web console, click Providers.

  2. Click Add provider.

  3. Select Red Hat Virtualization from the Type list.

  4. Fill in the following fields:

    • Name: Name to display in the list of providers

    • Hostname or IP address: Manager host name or IP address

    • Username: Manager user

    • Password: Manager password

    • CA certificate: CA certificate of the Manager

  5. Click Add to add and save the provider.

    The source provider appears in the list of providers.

Selecting a migration network for a source provider

You can select a migration network in the Forklift web console for a source provider to reduce risk to the source environment and to improve performance.

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

Prerequisites
  • The migration network must have sufficient throughput (a minimum speed of 10 Gbps) for disk transfer.

  • The migration network must be accessible to the KubeVirt nodes through the default gateway.

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

  • The migration network must have jumbo frames enabled.

Procedure
  1. In the Forklift web console, click Providers.

  2. Click the Red Hat Virtualization or VMware tab.

  3. Click the host number in the Hosts column beside a provider to view a list of hosts.

  4. Select one or more hosts and click Select migration network.

  5. Select a Network.

    You can clear the selection by selecting the default network.

  6. If your source provider is VMware, complete the following fields:

    • ESXi host admin username: Specify the ESXi host admin user, for example, root.

    • ESXi host admin password: Specify the ESXi host admin password.

  7. If your source provider is Red Hat Virtualization, complete the following fields:

    • Username: Specify the Manager user.

    • Password: Specify the Manager password.

  8. Click Save.

  9. Verify that the status of each host is Ready.

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

Adding a KubeVirt provider

You can add a KubeVirt provider to the Forklift web console in addition to the default KubeVirt provider, which is the provider where you installed Forklift.

Procedure
  1. In the Forklift web console, click Providers.

  2. Click Add provider.

  3. Select KubeVirt from the Type list.

  4. Complete the following fields:

    • Cluster name: Specify the cluster name to display in the list of target providers.

    • URL: Specify the API endpoint of the cluster.

    • Service account token: Specify the cluster-admin service account token. See the sketch after this procedure for one way to create a token.

  5. Click Check connection to verify the credentials.

  6. Click Add.

    The provider appears in the list of providers.
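
The following commands are a minimal sketch of one way to create a service account with the cluster-admin role on the target cluster and obtain its token; the service account name and namespace are placeholders:

$ oc create serviceaccount <service_account> -n <namespace>
$ oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:<namespace>:<service_account>
$ oc sa get-token <service_account> -n <namespace>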

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the Forklift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the Forklift web console, click Providers.

  2. Click the KubeVirt tab.

  3. Select a provider and click Select migration network.

  4. Select a network from the list of available networks and click Select.

  5. Click the network number in the Networks column beside the provider to verify that the selected network is the default migration network.

Creating a network mapping

You can create one or more network mappings by using the Forklift web console to map source networks to KubeVirt networks.

Prerequisites
  • Source and target providers added to the web console.

  • If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

Procedure
  1. Click Mappings.

  2. Click the Network tab and then click Create mapping.

  3. Complete the following fields:

    • Name: Enter a name to display in the network mappings list.

    • Source provider: Select a source provider.

    • Target provider: Select a target provider.

    • Source networks: Select a source network.

    • Target namespaces/networks: Select a target network.

  4. Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

  5. If you create an additional network mapping, select the network attachment definition as the target network.

  6. Click Create.

    The network mapping is displayed on the Network mappings screen.

Creating a storage mapping

You can create a storage mapping by using the Forklift web console to map source data stores to KubeVirt storage classes.

Prerequisites
  • Source and target providers added to the web console.

  • Local and shared persistent storage that support VM migration.

Procedure
  1. Click Mappings.

  2. Click the Storage tab and then click Create mapping.

  3. Enter the Name of the storage mapping.

  4. Select a Source provider and a Target provider.

  5. If your source provider is VMware, select a Source datastore and a Target storage class.

  6. If your source provider is Red Hat Virtualization, select a Source storage domain and a Target storage class.

  7. Optional: Click Add to create additional storage mappings or to map multiple source data stores or storage domains to a single storage class.

  8. Click Create.

    The mapping is displayed on the Storage mappings page.

Creating a migration plan

You can create a migration plan by using the Forklift web console.

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

Prerequisites
  • If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

Procedure
  1. In the web console, click Migration plans and then click Create migration plan.

  2. Complete the following fields:

    • Plan name: Enter a migration plan name to display in the migration plan list.

    • Plan description: Optional: Brief description of the migration plan.

    • Source provider: Select a source provider.

    • Target provider: Select a target provider.

    • Target namespace: You can type to search for an existing target namespace or create a new namespace.

    • You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and clicking Select.

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, that network is the default network for all migration plans. Otherwise, the pod network is used.

  3. Click Next.

  4. Select options to filter the list of source VMs and click Next.

  5. Select the VMs to migrate and then click Next.

  6. Select an existing network mapping or create a new network mapping.

    To create a new network mapping:

    • Select a target network for each source network.

    • Optional: Select Save mapping to use again and enter a network mapping name.

  7. Click Next.

  8. Select an existing storage mapping or create a new storage mapping.

    To create a new storage mapping:

    • Select a target storage class for each VMware data store or Red Hat Virtualization storage domain.

    • Optional: Select Save mapping to use again and enter a storage mapping name.

  9. Click Next.

  10. Select a migration type and click Next.

    • Cold migration: The source VMs are stopped while the data is copied.

    • Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata. Warm migration is not supported for Red Hat Virtualization.

  11. Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    1. Click Add hook.

    2. Select the step when the hook will run.

    3. Select a hook definition:

      • Ansible playbook: Browse to the Ansible playbook or paste it into the field.

      • Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        The registry must be accessible to your OKD cluster.

  12. Click Next.

  13. Review your migration plan and click Finish.

    The migration plan is saved in the migration plan list.

  14. Click the Options menu of the migration plan and select View details to verify the migration plan details.

Running a migration plan

You can run a migration plan and view its progress in the Forklift web console.

Prerequisites
  • Valid migration plan.

Procedure
  1. Click Migration plans.

    The Migration plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, and the status of the plan.

  2. Click Start beside a migration plan to start the migration.

    Warm migration only:

    • The precopy stage starts.

    • Click Cutover to complete the migration.

  3. Expand a migration plan to view the migration details.

    The migration details screen displays the migration start and end time, the amount of data copied, and a progress pipeline for each VM being migrated.

  4. Expand a VM to view the migration steps, elapsed time of each step, and its state.

Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Forklift web console.

Procedure
  1. Click Migration plans.

  2. Click the name of a running migration plan to view the migration details.

  3. Select one or more VMs and click Cancel.

  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Migrated and unmigrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

Migrating virtual machines from the command line interface

You can migrate virtual machines (VMs) from the command line (CLI) by creating the following custom resources (CRs):

  • Secret contains the source provider credentials.

  • Provider contains the source provider details.

  • Host contains the VMware host details.

  • NetworkMap maps the source and destination networks.

  • StorageMap maps the source and destination storage.

  • Optional: Hook CR contains custom code that you can run on a VM at a specified phase of the migration.

  • Plan contains a list of VMs to migrate and specifies whether the migration is cold or warm. The Plan references the providers and maps.

  • Migration runs the Plan. If the migration is warm, it specifies the cutover time.

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

The term destination in the API is the same as target in the web console.

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

  • VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

Procedure
  1. VMware only: Add the VDDK image to the HyperConverged CR:

    $ cat << EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (1)
    EOF
    1 Specify the VDDK image that you created.
  2. Create a Secret CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: konveyor-forklift
    type: Opaque
    stringData:
      user: <user> (1)
      password: <password> (2)
      cacert: <RHV_ca_certificate> (3)
      thumbprint: <vcenter_fingerprint> (4)
    EOF
    1 Specify the base64-encoded vCenter admin user or the RHV Manager user.
    2 Specify the base64-encoded password.
    3 RHV only: Specify the base64-encoded CA certificate of the Manager. You can retrieve it at https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    4 VMware only: Specify the vCenter SHA-1 fingerprint.
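    For example, you can produce the base64-encoded values with the base64 utility; the -w0 option, which disables line wrapping, assumes GNU coreutils:

    $ echo -n '<user>' | base64
    $ echo -n '<password>' | base64
    $ base64 -w0 <ca_certificate_file>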
  3. Create a Provider CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <provider>
      namespace: konveyor-forklift
    spec:
      type: <provider_type> (1)
      url: <api_end_point> (2)
      secret:
        name: <secret> (3)
        namespace: konveyor-forklift
    EOF
    1 Allowed values are ovirt and vsphere.
    2 Specify the API endpoint URL, for example, https://<www.example.com>/sdk for vSphere or https://<www.example.com>/ovirt-engine/api/ for RHV.
    3 Specify the name of the provider Secret CR.
  4. VMware only: Create a Host CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: konveyor-forklift
    spec:
      provider:
        namespace: konveyor-forklift
        name: <source_provider> (1)
      id: <source_host_mor> (2)
      ipAddress: <source_network_ip> (3)
    EOF
    1 Specify the name of the VMware Provider CR.
    2 Specify the managed object reference (MOR) of the VMware host.
    3 Specify the IP address of the VMware migration network.
  5. Create a NetworkMap CR manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: konveyor-forklift
    spec:
      map:
        - destination:
            name: <pod>
            namespace: konveyor-forklift
            type: pod (1)
          source: (2)
            id: <source_network_id> (3)
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> (4)
            namespace: <network_attachment_definition_namespace> (5)
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: konveyor-forklift
        destination:
          name: <destination_cluster>
          namespace: konveyor-forklift
    EOF
    1 Allowed values are pod and multus.
    2 You can use either the id or the name parameter to specify the source network.
    3 Specify the VMware network MOR or RHV network UUID.
    4 Specify a network attachment definition for each additional KubeVirt network.
    5 Specify the namespace of the KubeVirt network attachment definition.
  6. Create a StorageMap CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: konveyor-forklift
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> (1)
          source:
            id: <source_datastore> (2)
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: konveyor-forklift
        destination:
          name: <destination_cluster>
          namespace: konveyor-forklift
    EOF
    1 Allowed values are ReadWriteOnce and ReadWriteMany.
    2 Specify the VMware datastore MOR or RHV storage domain UUID, for example, f2737930-b567-451a-9ceb-2887f6207009.
  7. Optional: Create a Hook CR manifest:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: konveyor-forklift
    spec:
      image: quay.io/konveyor/hook-runner (1)
      playbook: | (2)
        LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
        YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
        IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
        cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
        bG9hZAoK
    EOF
    1 You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
    2 Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.
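    If you write your own playbook, you can produce the base64-encoded value with, for example, GNU coreutils:

    $ base64 -w0 playbook.yml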
  8. Create a Plan CR manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> (1)
      namespace: konveyor-forklift
    spec:
      warm: true (2)
      provider:
        source:
          name: <source_provider>
          namespace: konveyor-forklift
        destination:
          name: <destination_cluster>
          namespace: konveyor-forklift
      map:
        network: (3)
          name: <network_map> (4)
          namespace: konveyor-forklift
        storage:
          name: <storage_map> (5)
          namespace: konveyor-forklift
      targetNamespace: konveyor-forklift
      vms: (6)
        - id: <source_vm> (7)
        - name: <source_vm>
          hooks: (8)
            - hook:
                namespace: konveyor-forklift
                name: <hook> (9)
              step: <step> (10)
    EOF
    1 Specify the name of the Plan CR.
    2 VMware only: Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration CR manifest, only the precopy stage will run. Warm migration is not supported for RHV.
    3 You can add multiple network mappings.
    4 Specify the name of the NetworkMap CR.
    5 Specify the name of the StorageMap CR.
    6 You can use either the id or the name parameter to specify the source VMs.
    7 Specify the VMware VM MOR or RHV VM UUID.
    8 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    9 Specify the name of the Hook CR.
    10 Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  9. Optional, for VMware only: To change the time interval between the CBT snapshots for warm migration, patch the vm-import-controller-config config map:

    $ oc patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "<interval>"}}' (1)
    1 Specify the time interval in minutes. The default value is 60.
  10. Create a Migration CR manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration> (1)
      namespace: konveyor-forklift
    spec:
      plan:
        name: <plan> (2)
        namespace: konveyor-forklift
      cutover: <cutover_time> (3)
    EOF
    1 Specify the name of the Migration CR.
    2 Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachineImport CR for each VM that is migrated.
    3 Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.
  11. View the VirtualMachineImport pods to monitor the progress of the migration:

    $ oc get pods -n konveyor-forklift

Canceling a migration

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

Canceling an entire migration
  • Delete the Migration CR:

    $ oc delete migration <migration> -n konveyor-forklift (1)
    1 Specify the name of the Migration CR.
Canceling the migration of individual VMs
  1. Add the individual VMs to the Migration CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: konveyor-forklift
    ...
    spec:
      cancel: (1)
      - id: vm-102 (2)
      - id: vm-203
      - name: rhel8-vm
    EOF
    1 You can specify the canceled VMs by using either the id or the name.
    2 VMware VM managed object reference or RHV VM UUID.
  2. View the VirtualMachineImport pods to monitor the progress of the remaining VMs:

    $ oc get pods -n konveyor-forklift

Upgrading Forklift

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

You must upgrade to the next release without skipping a release, for example, from 2.0 to 2.1 or from 2.1 to 2.2.

See Upgrading installed Operators in the OKD documentation.

Uninstalling Forklift

You can uninstall Forklift by using the OKD web console or the command line interface (CLI).

Uninstalling Forklift by using the OKD web console

You can uninstall Forklift by using the OKD web console to delete the konveyor-forklift project and custom resource definitions (CRDs).

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. Click Home → Projects.

  2. Locate the konveyor-forklift project.

  3. On the right side of the project, select Delete Project from the Options menu.

  4. In the Delete Project pane, enter the project name and click Delete.

  5. Click Administration → CustomResourceDefinitions.

  6. Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

  7. On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu.

Uninstalling Forklift from the command line interface

You can uninstall Forklift from the command line interface (CLI) by deleting the konveyor-forklift project and the forklift.konveyor.io custom resource definitions (CRDs).

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. Delete the project:

    $ oc delete project konveyor-forklift
  2. Delete the CRDs:

    $ oc get crd -o name | grep 'forklift' | xargs oc delete
  3. Delete the OAuthClient:

    $ oc delete oauthclient/forklift-ui

Troubleshooting

This section provides information for troubleshooting common migration issues.

Architecture

This section describes Forklift custom resources, services, and workflows.

Forklift custom resources and services

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

Forklift custom resources
  • Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

  • NetworkMapping CR maps the networks of the source and target providers.

  • StorageMapping CR maps the storage of the source and target providers.

  • Provisioner CR stores the configuration of the storage provisioners, such as supported volume and access modes.

  • Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

  • Migration CR runs a migration plan.

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

Forklift services
  • Provider Inventory service:

    • Connects to the source and target providers.

    • Maintains a local inventory for mappings and plans.

    • Stores VM configurations.

    • Runs the Validation service if a VM configuration change is detected.

  • Validation service checks the suitability of a VM for migration by applying rules.

The Validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

  • User Interface service:

    • Enables you to create and configure Forklift CRs.

    • Displays the status of the CRs and the progress of a migration.

  • Migration Controller service orchestrates migrations.

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller changes the plan status to Completed.

  • Virtual Machine Import Controller, KubeVirt Controller, and Containerized Data Importer (CDI) Controller services handle most technical operations.

High-level migration workflow

The high-level workflow shows the migration process from the point of view of the user.

Figure 1. High-level Forklift migration workflow

The workflow describes the following steps:

  1. You create a source provider, a target provider, a network mapping, and a storage mapping.

  2. You create a migration plan that includes the following resources:

    • Source provider

    • Target provider

    • Network mapping

    • Storage mapping

    • One or more VMs

  3. You run a migration plan by creating a Migration CR that references the migration plan. If a migration is incomplete, you can run a migration plan multiple times until all VMs are migrated.

  4. For each VM in the migration plan, the Migration Controller creates a VirtualMachineImport CR and monitors its status. When all VMs have been migrated, the Migration Controller sets the status of the migration plan to Completed. The power state of a source VM is maintained after migration.

Detailed migration workflow

You can use the detailed migration workflow to troubleshoot a failed migration.

Figure 2. Detailed KubeVirt migration workflow

The workflow describes the following steps:

  1. When you run a migration plan, the Migration Controller creates a VirtualMachineImport custom resource (CR) for each source virtual machine (VM).

  2. The Virtual Machine Import Controller validates the VirtualMachineImport CR and generates a VirtualMachine CR.

  3. The Virtual Machine Import Controller retrieves the VM configuration, including network, storage, and metadata, linked in the VirtualMachineImport CR.

    For each VM disk:

  4. The Virtual Machine Import Controller creates a DataVolume CR as a wrapper for a Persistent Volume Claim (PVC) and annotations.

  5. The Containerized Data Importer (CDI) Controller creates a PVC. The Persistent Volume (PV) is dynamically provisioned by the StorageClass provisioner.

  6. The CDI Controller creates an Importer pod.

  7. For a VMware provider, the Importer pod connects to the VM disk by using the VMware Virtual Disk Development Kit (VDDK) SDK and streams the VM disk to the PV.

    After the VM disks are transferred:

  8. The Virtual Machine Import Controller creates a Conversion pod with the PVCs attached to it.

    The Conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

  9. The Virtual Machine Import Controller creates a VirtualMachineInstance CR.

  10. When the target VM is powered on, the KubeVirt Controller creates a VM pod.

    The VM pod runs QEMU-KVM with the PVCs attached as VM disks.
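
If a migration stalls at one of these steps, the following commands are a minimal troubleshooting sketch; resource and namespace names depend on your migration plan, and the kinds shown are the ones created during import:

$ oc get plan -n konveyor-forklift                     # validation and migration status of the plan
$ oc get virtualmachineimports -n <target_namespace>   # one CR per migrating VM
$ oc get datavolumes -n <target_namespace>             # disk import progress
$ oc get pods -n <target_namespace>                    # importer and conversion pods
$ oc logs <importer_pod> -n <target_namespace>         # disk transfer logs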

Error messages

This section describes error messages and how to resolve them.

warm import retry limit reached

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage. You must delete some of the CBT snapshots from the VM and restart the migration plan.
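
If you manage vSphere with the govc CLI, the following commands are one possible way to list and delete snapshots; the snapshot name is a placeholder, and you can also delete snapshots in the vSphere client:

$ govc snapshot.tree -vm <vm_name>
$ govc snapshot.remove -vm <vm_name> <snapshot_name>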

Using the must-gather tool

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

Prerequisites
  • You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

  • You must have the OKD CLI (oc) installed.

Collecting logs and CR information
  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command:

    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

  3. Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    • Namespace:

      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
        -- NS=<namespace> /usr/bin/targeted
    • Migration plan:

      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
        -- PLAN=<migration_plan> /usr/bin/targeted
    • Virtual machine:

      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
        -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      1 Specify the VM ID as it appears in the Plan CR.