# Cloud Hypervisor Hot Plug

Currently Cloud Hypervisor only supports hot plugging of CPU devices.

## Kernel support

For hotplug on Cloud Hypervisor, ACPI GED support is needed. This can either be achieved by turning on `CONFIG_ACPI_REDUCED_HARDWARE_ONLY`
or by using this kernel patch (available in 5.5-rc1 and later): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/acpi/Makefile?id=ac36d37e943635fc072e9d4f47e40a48fbcdb3f0

## CPU Hot Plug

Extra vCPUs can be added to and removed from a running Cloud Hypervisor instance. This is controlled by two mechanisms:

1. Specifying a maximum number of potential vCPUs that is greater than the number of default (boot) vCPUs.
2. Making an HTTP API request to the VMM to ask for the additional vCPUs to be added.

To use CPU hotplug, start the VM with the number of max vCPUs greater than the number of boot vCPUs, e.g.

```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
        --kernel custom-vmlinux.bin \
        --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
        --disk path=focal-server-cloudimg-amd64.raw \
        --cpus boot=4,max=8 \
        --memory size=1024M \
        --net "tap=,mac=,ip=,mask=" \
        --rng \
        --api-socket=/tmp/ch-socket
$ popd
```

Notice the addition of `--api-socket=/tmp/ch-socket` and the `max` parameter on `--cpus boot=4,max=8`.

To ask the VMM to add additional vCPUs, use the resize API:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --cpus 8
```

The extra vCPU threads will be created and advertised to the running kernel.
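`ch-remote` is a thin wrapper around the VMM's HTTP API served on the Unix socket. As a rough sketch (assuming the default `vm.resize` endpoint and the socket path used above), the same request could be issued directly with `curl`:

```shell
# Sketch of the raw HTTP request behind `ch-remote resize --cpus 8`,
# assuming /tmp/ch-socket as used above. The resize endpoint takes a
# JSON body whose "desired_vcpus" field names the new vCPU count.
payload='{"desired_vcpus": 8}'
# Guarded so the snippet is a no-op when no VM is running:
if [ -S /tmp/ch-socket ]; then
    curl --unix-socket /tmp/ch-socket -i \
         -X PUT 'http://localhost/api/v1/vm.resize' \
         -H 'Content-Type: application/json' \
         -d "$payload"
fi
```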
The kernel does not bring up the CPUs immediately; instead the user must "online" them from inside the VM:

```shell
root@ch-guest ~ # lscpu | grep list:
On-line CPU(s) list:   0-3
Off-line CPU(s) list:  4-7
root@ch-guest ~ # echo 1 | tee /sys/devices/system/cpu/cpu[4,5,6,7]/online
1
root@ch-guest ~ # lscpu | grep list:
On-line CPU(s) list:   0-7
```

After a reboot the added CPUs will remain.

Removing CPUs works similarly, by reducing the number in the "desired_vcpus" field of the resize API. The CPUs will be automatically offlined inside the guest, so there is no need to run any commands inside the guest:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --cpus 2
```

As with adding CPUs to the guest, after a reboot the VM will be running with the reduced number of vCPUs.

## Memory Hot Plug

### ACPI method

Extra memory can be added to a running Cloud Hypervisor instance. This is controlled by two mechanisms:

1. Allocating some of the guest physical address space for hotplug memory.
2. Making an HTTP API request to the VMM to ask for a new amount of RAM to be assigned to the VM. When expanding the memory, the new memory will be hotplugged into the running VM; when reducing it, the change will only take effect after the next reboot.

To use memory hotplug, start the VM specifying some amount of RAM in the `hotplug_size` parameter of the memory configuration. Not all the memory specified in this parameter will be available for hotplug, as there are spacing and alignment requirements, so it is recommended to make it larger than the amount of hotplug RAM needed.

Because the ACPI method is the default, there is no need to add the extra option `hotplug_method=acpi`.
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
        --kernel custom-vmlinux.bin \
        --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
        --disk path=focal-server-cloudimg-amd64.raw \
        --cpus boot=4,max=8 \
        --memory size=1024M,hotplug_size=8192M \
        --net "tap=,mac=,ip=,mask=" \
        --rng \
        --api-socket=/tmp/ch-socket
$ popd
```

Before issuing the API request, it is necessary to run the following command inside the VM to make it automatically online the added memory:

```shell
root@ch-guest ~ # echo online | sudo tee /sys/devices/system/memory/auto_online_blocks
```

To ask the VMM to expand the RAM for the VM:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
```

The new memory is now available to use inside the VM:

```shell
free -h
              total        used        free      shared  buff/cache   available
Mem:          3.0Gi        71Mi       2.8Gi       0.0Ki        47Mi       2.8Gi
Swap:          32Mi          0B        32Mi
```

Due to guest OS limitations, it is necessary to ensure that the amount of memory added (the difference between the currently assigned RAM and the desired amount) is a multiple of 128MiB.

The same API can also be used to reduce the desired RAM for a VM, but the change will not be applied until the VM is rebooted.

Memory and CPU resizing can be combined in the same HTTP API request.

### virtio-mem method

Extra memory can be added to and removed from a running Cloud Hypervisor instance. This is controlled by two mechanisms:

1. Allocating some of the guest physical address space for hotplug memory.
2. Making an HTTP API request to the VMM to ask for a new amount of RAM to be assigned to the VM.

To use memory hotplug, start the VM specifying some amount of RAM in the `hotplug_size` parameter along with `hotplug_method=virtio-mem` in the memory configuration.
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
        --kernel custom-vmlinux.bin \
        --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
        --disk path=focal-server-cloudimg-amd64.raw \
        --memory size=1024M,hotplug_size=8192M,hotplug_method=virtio-mem \
        --net "tap=,mac=,ip=,mask=" \
        --api-socket=/tmp/ch-socket
$ popd
```

To ask the VMM to expand the RAM for the VM (the raw API request is in bytes, while `ch-remote` accepts suffixes such as `G`):

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
```

The new memory is now available to use inside the VM:

```shell
free -h
              total        used        free      shared  buff/cache   available
Mem:          3.0Gi        71Mi       2.8Gi       0.0Ki        47Mi       2.8Gi
Swap:          32Mi          0B        32Mi
```

The same API can also be used to reduce the desired RAM for a VM. It is important to note that reducing the RAM size might only partially work, as the guest might be using some of it.

## PCI Device Hot Plug

Extra PCI devices can be added to and removed from a running Cloud Hypervisor instance. This is controlled by making an HTTP API request to the VMM asking for the additional device to be added, or for an existing device to be removed.

To use PCI device hotplug, start the VM with the HTTP server:

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
        --kernel custom-vmlinux.bin \
        --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
        --disk path=focal-server-cloudimg-amd64.raw \
        --cpus boot=4 \
        --memory size=1024M \
        --net "tap=,mac=,ip=,mask=" \
        --api-socket=/tmp/ch-socket
```

Notice the addition of `--api-socket=/tmp/ch-socket`.

### Add VFIO Device

To ask the VMM to add an additional VFIO device, use the `add-device` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-device path=/sys/bus/pci/devices/0000:01:00.0/
```

### Add Disk Device

To ask the VMM to add an additional disk device, use the `add-disk` API.

```shell
./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img
```

### Add Fs Device

To ask the VMM to add an additional fs device, use the `add-fs` API.

```shell
./ch-remote --api-socket=/tmp/ch-socket add-fs tag=myfs,socket=/foo/bar/virtiofs.sock
```

### Add Net Device

To ask the VMM to add an additional network device, use the `add-net` API.

```shell
./ch-remote --api-socket=/tmp/ch-socket add-net tap=chtap0
```

### Add Pmem Device

To ask the VMM to add an additional PMEM device, use the `add-pmem` API.

```shell
./ch-remote --api-socket=/tmp/ch-socket add-pmem file=/foo/bar.cloud.img
```

### Add Vsock Device

To ask the VMM to add an additional vsock device, use the `add-vsock` API.

```shell
./ch-remote --api-socket=/tmp/ch-socket add-vsock cid=3,socket=/foo/bar/vsock.sock
```

### Common Across All PCI Devices

The extra PCI device will be created and advertised to the running kernel. The new device can be found by checking the list of PCI devices:

```shell
root@ch-guest ~ # lspci
00:00.0 Host bridge: Intel Corporation Device 0d57
00:01.0 Unassigned class [ffff]: Red Hat, Inc. Virtio console (rev 01)
00:02.0 Mass storage controller: Red Hat, Inc. Virtio block device (rev 01)
00:03.0 Unassigned class [ffff]: Red Hat, Inc. Virtio RNG (rev 01)
```

After a reboot the added PCI device will remain.

### Remove PCI device

Removing a PCI device works the same way for all kinds of PCI devices. The unique identifier of the device must be provided.
This identifier can be provided by the user when adding the new device; otherwise Cloud Hypervisor will assign one by default.

```shell
./ch-remote --api-socket=/tmp/ch-socket remove-device _disk0
```

As with adding a PCI device to the guest, after a reboot the VM will be running without the removed PCI device.
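As a sketch of the full cycle with a user-chosen identifier (assuming the optional `id=` parameter of the device configuration and the hypothetical name `mydisk`), the same identifier passed at add time can later be handed to `remove-device`:

```shell
# Hypothetical flow: give the disk an explicit identifier when hot
# plugging it, then remove it by that same identifier later, instead
# of looking up a VMM-assigned one such as _disk0.
disk_id=mydisk
# Guarded so the snippet is a no-op when no VM is running:
if [ -x ./ch-remote ] && [ -S /tmp/ch-socket ]; then
    ./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img,id="$disk_id"
    ./ch-remote --api-socket=/tmp/ch-socket remove-device "$disk_id"
fi
```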