xref: /cloud-hypervisor/docs/live_migration.md (revision 7d7bfb2034001d4cb15df2ddc56d2d350c8da30f)
# Live Migration

This document gives two examples of how to use the live migration
support in Cloud Hypervisor:

1. local migration - migrating between two VMs running on the same
   machine;
1. nested-vm migration - migrating between two nested VMs whose host VMs
   are running on the same machine.
10
## Local Migration (Suitable for Live Upgrade of VMM)

Launch the source VM (on the host machine):
```bash
$ target/release/cloud-hypervisor \
    --kernel ~/workloads/vmlinux \
    --disk path=~/workloads/focal.raw \
    --cpus boot=1 --memory size=1G,shared=on \
    --cmdline "root=/dev/vda1 console=ttyS0" \
    --serial tty --console off --api-socket=/tmp/api1
```

Launch the destination VM from the same directory (on the host machine):
```bash
$ target/release/cloud-hypervisor --api-socket=/tmp/api2
```

Get the destination VM ready to receive the migration (on the host machine):
```bash
$ target/release/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock
```

Start sending the migration from the source VM (on the host machine):
```bash
$ target/release/ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
```

Once the above commands complete, the source VM should be successfully
migrated to the destination VM. The destination VM is now running, while
the source VM has terminated gracefully.
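
When scripting this handoff, the sender must not connect before the
receiver has created the migration socket. A minimal sketch of one way to
wait for it, assuming the `/tmp/sock` path from the example above (the
`wait_for_socket` helper is hypothetical, not part of `ch-remote`):
```bash
#!/bin/bash

# Poll until a unix socket exists at the given path, giving up after
# roughly 5 seconds. Succeeds only if the socket is present.
wait_for_socket() {
    local path=$1 tries=0
    while [ ! -S "$path" ] && [ "$tries" -lt 50 ]; do
        sleep 0.1
        tries=$((tries + 1))
    done
    [ -S "$path" ]
}

# Example: block until the receiver side has opened /tmp/sock, then send.
# wait_for_socket /tmp/sock && \
#     target/release/ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
```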

## Nested-VM Migration

Launch VM 1 (on the host machine) with an extra virtio-blk device for
exposing a guest image for the nested source VM:
> Note: the example below also attaches an additional virtio-blk device
> with a dummy image for testing purposes (which is optional).
```bash
$ head -c 1M < /dev/urandom > tmp.img # create a dummy image for testing
$ sudo ./target/release/cloud-hypervisor \
        --serial tty --console off \
        --cpus boot=1 --memory size=512M \
        --kernel vmlinux \
        --cmdline "root=/dev/vda1 console=ttyS0" \
        --disk path=focal-1.raw path=focal-nested.raw path=tmp.img \
        --net ip=192.168.101.1
```

Launch VM 2 (on the host machine) with an extra virtio-blk device for
exposing the same guest image for the nested destination VM:
```bash
$ sudo ./target/release/cloud-hypervisor \
        --serial tty --console off \
        --cpus boot=1 --memory size=512M \
        --kernel vmlinux \
        --cmdline "root=/dev/vda1 console=ttyS0" \
        --disk path=focal-2.raw path=focal-nested.raw path=tmp.img \
        --net ip=192.168.102.1
```

Launch the nested source VM (inside the guest OS of VM 1):
```bash
vm-1:~$ sudo ./cloud-hypervisor \
        --serial tty --console off \
        --memory size=128M \
        --kernel vmlinux \
        --cmdline "console=ttyS0 root=/dev/vda1" \
        --disk path=/dev/vdb path=/dev/vdc \
        --api-socket=/tmp/api1 \
        --net ip=192.168.100.1
vm-1:~$ # set up the guest network if needed
vm-1:~$ sudo ip addr add 192.168.101.2/24 dev ens4
vm-1:~$ sudo ip link set up dev ens4
vm-1:~$ sudo ip r add default via 192.168.101.1
```
Optional: run the guest workload below (on the guest OS of the nested source VM),
which performs intensive virtio-blk operations. The console of the nested
source VM should then repeatedly print `"equal"`; our goal is to migrate
this VM and its running workload without interruption.
```bash
#!/bin/bash

# On the guest OS of the nested source VM

input="/dev/vdb"
result=$(md5sum $input)
tmp=$(md5sum $input)

while [[ "$result" == "$tmp" ]]
do
    echo "equal"
    tmp=$(md5sum $input)
done

echo "not equal"
echo "result = $result"
echo "tmp = $tmp"
```

Launch the nested destination VM (inside the guest OS of VM 2):
```bash
vm-2:~$ sudo ./cloud-hypervisor --api-socket=/tmp/api2
vm-2:~$ # set up the guest network with the following commands if needed
vm-2:~$ sudo ip addr add 192.168.102.2/24 dev ens4
vm-2:~$ sudo ip link set up dev ens4
vm-2:~$ sudo ip r add default via 192.168.102.1
vm-2:~$ ping 192.168.101.2 # This should succeed
```
> Note: If the above ping fails, please check the iptables rules on the
> host machine, e.g. whether the policy of the `FORWARD` chain is set
> to `DROP` (which is the default setting configured by Docker).
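
Besides the `FORWARD` chain policy, traffic between the two guests also
requires the host to forward IPv4 packets at all. A quick sanity check on
the host (the sysctl path is standard Linux, not specific to Cloud
Hypervisor):
```bash
# On the host machine: 1 means IPv4 forwarding is enabled; 0 means the
# kernel will not route traffic between the two VMs' network interfaces.
cat /proc/sys/net/ipv4/ip_forward

# If it reads 0, forwarding can be enabled for the current boot with:
# sudo sysctl -w net.ipv4.ip_forward=1
```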

Get the nested destination VM ready to receive the migration (inside
the guest OS of VM 2):
```bash
vm-2:~$ sudo ./ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock2
vm-2:~$ sudo socat TCP-LISTEN:6000,reuseaddr UNIX-CLIENT:/tmp/sock2
```

Start sending the migration from the nested source VM (inside the guest
OS of VM 1):
```bash
vm-1:~$ sudo socat UNIX-LISTEN:/tmp/sock1,reuseaddr TCP:192.168.102.2:6000
vm-1:~$ sudo ./ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/sock1
```
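
The two `socat` commands above bridge the migration's unix sockets over
TCP between the guests. Before starting the transfer, it can help to
confirm from vm-1 that the receiver's TCP relay is reachable; one sketch
uses bash's built-in `/dev/tcp`, assuming the address and port from the
example above:
```bash
#!/bin/bash

# Try to open a TCP connection to the receiver's socat relay, timing out
# after 2 seconds instead of hanging on an unreachable host.
if timeout 2 bash -c 'exec 3<>/dev/tcp/192.168.102.2/6000' 2>/dev/null; then
    echo "relay reachable"
else
    echo "relay not reachable"
fi
```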

Once the above commands complete, the source VM should be successfully
migrated to the destination VM without interrupting the testing guest
workload. The destination VM is now running the testing guest workload,
while the source VM has terminated gracefully.
