Standard Network Port to NT Port Live Migration

Getting Started with Napatech Link-Virtualization™ Software

Platform: Napatech SmartNIC
Content Type: Getting Started Guide
Capture Software Version: Link-Virtualization™ Software 4.5

This document describes how to perform live migration from a VM with a standard network port to a VM with a Napatech virtual port with HW-offload capabilities.

Before you begin

It is possible to perform live migration from a VM with a standard network port to a VM with a Napatech virtual port, but the migration source VM must initially be started on a Napatech port. This is because the VM must have negotiated the virtio features that correspond to the Napatech hardware-offload capabilities, and these virtio features are negotiated between the VM and the host when the VM starts up.
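One way to verify which virtio features the guest has negotiated is to read the feature bitmap that the Linux virtio bus exposes in sysfs. This is a generic Linux check rather than a Napatech tool, and the device index (virtio0, virtio1, ...) depends on the guest configuration:

    # Inside the guest: print the negotiated virtio feature bits for each virtio device.
    # Compare the bitmap of a VM started on a Napatech port with one started on a standard port.
    for dev in /sys/bus/virtio/devices/virtio*; do
        echo "$dev:"
        cat "$dev/features"
    done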

Procedure

  1. Set up OVS offload on server 1 as described in OVS-DPDK Configuration Examples, without starting the VM. A sketch of the port setup on the Napatech side is shown below.
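    The following is only a minimal sketch; the bridge name br0 and port name vhost-client-0 are assumptions, and the socket path must match the chardev path used when the VM is started in step 2. See OVS-DPDK Configuration Examples for the complete configuration:
    ovs-vsctl add-port br0 vhost-client-0 -- set Interface vhost-client-0 \
    type=dpdkvhostuserclient options:vhost-server-path=/usr/local/var/run/stdvio5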
  2. Start the VM on server 1, which is the migration source:
    taskset -c 8,10,12,14 /usr/libexec/qemu-kvm \
    	-enable-kvm -cpu host -m 4096 -smp 4 \
    	-chardev socket,id=char0,path=/usr/local/var/run/stdvio5,server=on \
    	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    	-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
    	-device virtio-net-pci,packed=on,netdev=mynet1,mac=52:55:00:02:d9:03 \
    	-numa node,memdev=mem -mem-prealloc \
    	-net nic,macaddr=52:54:00:12:34:58 \
    	-net user,hostfwd=tcp::10021-:22 \
    	-nographic \
    	-monitor telnet:127.0.0.1:3333,server,nowait \
    	./centos7_1.img
    
    path=/usr/local/var/run/stdvio5 attaches the VM to the Napatech port.
    Note: -monitor telnet:127.0.0.1:3333,server,nowait exposes the QEMU monitor on a local telnet port, which is used to issue the migration command in steps 6 and 7.
  3. Add a vhost-user-client port on server 2:
    ovs-vsctl add-port br0 vhost-client-1 -- set Interface vhost-client-1 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/myVHostSock
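    Optionally, as a check that is not part of the original procedure, confirm that the port was created with the expected socket path before starting the destination VM:
    ovs-vsctl --columns=name,type,options list Interface vhost-client-1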
  4. Start the VM on server 2, which is the migration destination. For example:
    taskset -c 2,4,6,8 /usr/libexec/qemu-kvm \
    	-enable-kvm -cpu host -m 4G -smp 4 \
    	-object memory-backend-file,id=mem,size=4G,mem-path=/mnt/huge,share=on \
    	-numa node,memdev=mem -mem-prealloc \
    	-chardev socket,id=char0,path=/tmp/myVHostSock,server=on \
    	-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
    	-device virtio-net-pci,packed=on,netdev=mynet1,mac=52:55:00:02:d9:03 \
    	-net user,hostfwd=tcp::10021-:22 \
    	-net nic,macaddr=52:54:00:12:34:59 \
    	-incoming tcp:0:4444 \
    	./centos7_1.img

    path=/tmp/myVHostSock is the socket path of the vhost-user-client port on the standard NIC, matching the vhost-server-path set in step 3.

    This VM does not start running immediately; because of -incoming tcp:0:4444, it waits for the incoming migration on TCP port 4444.

    Note: It is important to have the exact same VM image file on both servers.
    Note: It is important to have the exact same MAC address, mac=52:55:00:02:d9:03, for the OVS virtual ports on both servers.
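    A quick, optional way to confirm that the image files are identical is to compare their checksums on both servers (the file name matches the command lines above):
    sha256sum ./centos7_1.img    # run on server 1 and server 2; the hashes must match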
  5. Use your preferred tool to validate VM connectivity. An example is shown below.
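    For instance, since the QEMU command lines forward host TCP port 10021 to the guest's SSH port, you can log in through that port, or keep a continuous ping towards the VM running while it migrates. The user name and addresses below are placeholders:
    ssh -p 10021 <user>@<server 1 ip>    # log in to the guest through the hostfwd rule
    ping <vm ip>                         # or leave a continuous ping running during the migration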
  6. Start live migration. On server 1, connect to the QEMU monitor using telnet:
    telnet 127.0.0.1 3333
  7. Execute the migration command in the QEMU monitor:
    migrate -d tcp:<server 2 ip>:4444
    Packet drop while migrating is caused by the reprogramming time from stopping the first stream until the flow is redirected to the other physical port; this corresponds to approximately 0.2 s of traffic.
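    To follow the progress, you can issue the standard QEMU monitor command info migrate in the same telnet session; it reports whether the migration is active, completed, or failed (the exact output fields vary with the QEMU version):
    (qemu) info migrate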

    After the migration completes, the VM runs on the standard vhost-user port on server 2 and is ready to be migrated to a Napatech virtual port with HW-offload capabilities.