# Cloud Hypervisor Hot Plug

Currently Cloud Hypervisor supports hot plugging of CPU devices (x86 only), PCI devices and memory resizing.

## Kernel support

For hotplug on Cloud Hypervisor ACPI GED support is needed. This can either be achieved by turning on `CONFIG_ACPI_REDUCED_HARDWARE_ONLY` or by using this kernel patch (available in 5.5-rc1 and later): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/acpi/Makefile?id=ac36d37e943635fc072e9d4f47e40a48fbcdb3f0

## CPU Hot Plug

Extra vCPUs can be added to and removed from a running `cloud-hypervisor` instance. This is controlled by two mechanisms:

1. Specifying a number of maximum potential vCPUs that is greater than the number of default (boot) vCPUs.
2. Making an HTTP API request to the VMM to ask for the additional vCPUs to be added.

To use CPU hotplug start the VM with the number of max vCPUs greater than the number of boot vCPUs, e.g.

```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel custom-vmlinux.bin \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cpus boot=4,max=8 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --rng \
    --api-socket=/tmp/ch-socket
$ popd
```

Notice the addition of `--api-socket=/tmp/ch-socket` and a `max` parameter on `--cpus boot=4,max=8`.

To ask the VMM to add additional vCPUs, use the resize API:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --cpus 8
```

The extra vCPU threads will be created and advertised to the running kernel.
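`ch-remote` is a thin wrapper over the VMM's HTTP API, so the same resize can be issued directly with `curl` over the UNIX socket. The following is a sketch; the `/api/v1/vm.resize` endpoint path and the `desired_vcpus` field name are assumptions about the HTTP API and may differ between versions:

```shell
# Equivalent of `ch-remote resize --cpus 8`, sent directly over the API socket.
# Endpoint path and JSON field name are assumptions; check your version's API docs.
curl --unix-socket /tmp/ch-socket -i \
     -X PUT 'http://localhost/api/v1/vm.resize' \
     -H 'Content-Type: application/json' \
     -d '{"desired_vcpus": 8}'
```

This requires a `cloud-hypervisor` instance already listening on `/tmp/ch-socket`.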
The kernel does not bring up the CPUs immediately and instead the user must "online" them from inside the VM:

```shell
root@ch-guest ~ # lscpu | grep list:
On-line CPU(s) list:   0-3
Off-line CPU(s) list:  4-7
root@ch-guest ~ # echo 1 | tee /sys/devices/system/cpu/cpu[4,5,6,7]/online
1
root@ch-guest ~ # lscpu | grep list:
On-line CPU(s) list:   0-7
```

After a reboot the added CPUs will remain.

Removing CPUs works similarly by reducing the number in the `desired_vcpus` field of the resize API. The CPUs will be automatically offlined inside the guest so there is no need to run any commands inside the guest:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --cpus 2
```

As per adding CPUs to the guest, after a reboot the VM will be running with the reduced number of vCPUs.

## Memory Hot Plug

### ACPI method

Extra memory can be added to a running `cloud-hypervisor` instance. This is controlled by two mechanisms:

1. Allocating some of the guest physical address space for hotplug memory.
2. Making an HTTP API request to the VMM to ask for a new amount of RAM to be assigned to the VM. In the case of expanding the memory for the VM, the new memory will be hotplugged into the running VM; if reducing the size of the memory, the change will take effect after the next reboot.

To use memory hotplug start the VM specifying some amount of RAM in the `hotplug_size` parameter of the memory configuration. Not all the memory specified in this parameter will be available to hotplug as there are spacing and alignment requirements, so it is recommended to make it larger than the hotplug RAM needed.

Because the ACPI method is the default, there is no need to add the extra option `hotplug_method=acpi`.
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel custom-vmlinux.bin \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cpus boot=4,max=8 \
    --memory size=1024M,hotplug_size=8192M \
    --net "tap=,mac=,ip=,mask=" \
    --rng \
    --api-socket=/tmp/ch-socket
$ popd
```

Before issuing the API request it is necessary to run the following command inside the VM to make it automatically online the added memory:

```shell
root@ch-guest ~ # echo online | sudo tee /sys/devices/system/memory/auto_online_blocks
```

To ask the VMM to expand the RAM for the VM:

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
```

The new memory is now available to use inside the VM:

```shell
free -h
              total        used        free      shared  buff/cache   available
Mem:          3.0Gi        71Mi       2.8Gi       0.0Ki        47Mi       2.8Gi
Swap:          32Mi          0B        32Mi
```

Due to guest OS limitations it is necessary to ensure that the amount of memory added (the difference between the currently assigned RAM and the desired amount) is a multiple of 128MiB.

The same API can also be used to reduce the desired RAM for a VM but the change will not be applied until the VM is rebooted.

Memory and CPU resizing can be combined together into the same HTTP API request.

### virtio-mem method

Extra memory can be added to and removed from a running Cloud Hypervisor instance. This is controlled by two mechanisms:

1. Allocating some of the guest physical address space for hotplug memory.
2. Making an HTTP API request to the VMM to ask for a new amount of RAM to be assigned to the VM.

To use memory hotplug start the VM specifying some amount of RAM in the `hotplug_size` parameter along with `hotplug_method=virtio-mem` in the memory configuration.
```shell
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel custom-vmlinux.bin \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --disk path=focal-server-cloudimg-amd64.raw \
    --memory size=1024M,hotplug_size=8192M,hotplug_method=virtio-mem \
    --net "tap=,mac=,ip=,mask=" \
    --api-socket=/tmp/ch-socket
$ popd
```

To ask the VMM to expand the RAM for the VM (the underlying HTTP API takes the request in bytes, while `ch-remote` accepts human-readable sizes):

```shell
./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
```

The new memory is now available to use inside the VM:

```shell
free -h
              total        used        free      shared  buff/cache   available
Mem:          3.0Gi        71Mi       2.8Gi       0.0Ki        47Mi       2.8Gi
Swap:          32Mi          0B        32Mi
```

The same API can also be used to reduce the desired RAM for a VM. It is important to note that reducing RAM size might only partially work, as the guest might be using some of it.

## PCI Device Hot Plug

Extra PCI devices can be added to and removed from a running `cloud-hypervisor` instance. This is controlled by making an HTTP API request to the VMM to ask for the additional device to be added, or for the existing device to be removed.

Note: On the AArch64 platform, PCI device hotplug can only be achieved using ACPI. Please refer to the [documentation](uefi.md#building-uefi-firmware-for-aarch64) for more information.

To use PCI device hotplug start the VM with the HTTP server enabled.
```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
    --kernel custom-vmlinux.bin \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --api-socket=/tmp/ch-socket
```

Notice the addition of `--api-socket=/tmp/ch-socket`.

### Add VFIO Device

To ask the VMM to add an additional VFIO device, use the `add-device` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-device path=/sys/bus/pci/devices/0000:01:00.0/
```

### Add Disk Device

To ask the VMM to add an additional disk device, use the `add-disk` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img
```

### Add Fs Device

To ask the VMM to add an additional fs device, use the `add-fs` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-fs tag=myfs,socket=/foo/bar/virtiofs.sock
```

### Add Net Device

To ask the VMM to add an additional network device, use the `add-net` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-net tap=chtap0
```

### Add Pmem Device

To ask the VMM to add an additional PMEM device, use the `add-pmem` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-pmem file=/foo/bar.cloud.img
```

### Add Vsock Device

To ask the VMM to add an additional vsock device, use the `add-vsock` API:

```shell
./ch-remote --api-socket=/tmp/ch-socket add-vsock cid=3,socket=/foo/bar/vsock.sock
```

### Common Across All PCI Devices

The extra PCI device will be created and advertised to the running kernel. The new device can be found by checking the list of PCI devices:
```shell
root@ch-guest ~ # lspci
00:00.0 Host bridge: Intel Corporation Device 0d57
00:01.0 Unassigned class [ffff]: Red Hat, Inc. Virtio console (rev 01)
00:02.0 Mass storage controller: Red Hat, Inc. Virtio block device (rev 01)
00:03.0 Unassigned class [ffff]: Red Hat, Inc. Virtio RNG (rev 01)
```

After a reboot the added PCI device will remain.

### Remove PCI device

Removing a PCI device works the same way for all kinds of PCI devices. The unique identifier of the device must be provided. This identifier can be provided by the user when adding the new device, or, by default, Cloud Hypervisor will assign one:

```shell
./ch-remote --api-socket=/tmp/ch-socket remove-device _disk0
```

As per adding a PCI device to the guest, after a reboot the VM will be running without the removed PCI device.
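Choosing the identifier at add time avoids having to discover the auto-assigned one before removal. A sketch assuming an `id` parameter is accepted by `add-disk` (the value `mydisk0` and the paths are hypothetical):

```shell
# Hot plug a disk with an explicit identifier (id=mydisk0 is a hypothetical value),
# then remove it later using that same identifier.
./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img,id=mydisk0
./ch-remote --api-socket=/tmp/ch-socket remove-device mydisk0
```

Both commands require a running `cloud-hypervisor` instance listening on `/tmp/ch-socket`.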