# Device Model

This document describes the device model supported by `cloud-hypervisor`.

## Summary

| Device | Build configurable | Enabled by default | Runtime configurable |
| :----: | :----: | :----: | :----: |
| Serial port | :x: | :x: | :heavy_check_mark: |
| RTC/CMOS | :heavy_check_mark: | :heavy_check_mark: | :x: |
| I/O APIC | :x: | :x: | :heavy_check_mark: |
| i8042 shutdown/reboot | :x: | :x: | :x: |
| ACPI shutdown/reboot | :x: | :heavy_check_mark: | :x: |
| virtio-blk | :x: | :x: | :heavy_check_mark: |
| virtio-console | :x: | :x: | :heavy_check_mark: |
| virtio-iommu | :x: | :x: | :heavy_check_mark: |
| virtio-net | :x: | :x: | :heavy_check_mark: |
| virtio-pmem | :x: | :x: | :heavy_check_mark: |
| virtio-rng | :x: | :x: | :heavy_check_mark: |
| virtio-vsock | :x: | :x: | :heavy_check_mark: |
| vhost-user-blk | :x: | :x: | :heavy_check_mark: |
| vhost-user-fs | :x: | :x: | :heavy_check_mark: |
| vhost-user-net | :x: | :x: | :heavy_check_mark: |
| VFIO | :heavy_check_mark: | :x: | :heavy_check_mark: |

## Legacy devices

### Serial port

Simple emulation of a serial port by reading and writing to specific port I/O
addresses. The serial port can be very useful for gathering early logs from the
operating system booted inside the VM.

For x86_64, the default serial port is an emulated 16550A device. It can be
used as the default console for Linux when booting with the option
`console=ttyS0`. For AArch64, the default serial port is an emulated PL011
UART device, and the corresponding kernel command line option is
`console=ttyAMA0`.

This device is always built-in, and it is disabled by default. It can be
enabled with the `--serial` option, as long as its parameter is not `off`.
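
For illustration, here is a minimal sketch of an x86_64 invocation that routes
the Linux console to this serial port. The kernel image, disk image, and kernel
command line are placeholders and assume a virtio-blk root disk; `--console off`
disables the default virtio-console device described later in this document.

```shell
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=./rootfs.img \
    --cmdline "console=ttyS0 root=/dev/vda1 rw" \
    --serial tty \
    --console off
```

On AArch64 the same idea applies with `console=ttyAMA0`, as noted above.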

### RTC/CMOS

For environments such as Windows or EFI which cannot rely on the KVM clock,
the emulation of this legacy device makes the platform usable.

This device is built-in by default, but it can be compiled out with Rust
features. When compiled in, it is always enabled and cannot be disabled from
the command line.

For AArch64 machines, an ARM PrimeCell Real Time Clock (PL031) is implemented.
This device is built-in by default for the AArch64 platform, it is always
enabled, and it cannot be disabled from the command line.

### I/O APIC

`cloud-hypervisor` supports a so-called split IRQ chip implementation by
implementing support for the [IOAPIC](https://wiki.osdev.org/IOAPIC).
By moving part of the IRQ chip implementation from kernel space to user space,
the IRQ chip emulation does not always run in a fully privileged mode.

This device is always built-in, and it is enabled depending on the presence of
the serial port: if the serial port is disabled, the I/O APIC is disabled as
well, since no other device requires pin-based interrupts (INTx).

### i8042

Simplified PS/2 port emulation that supports only one key, used to trigger a
reboot or a shutdown depending on the ACPI support.

This device is always built-in, but it is disabled by default. Because ACPI is
enabled by default, the handling of reboot/shutdown goes through the dedicated
ACPI device. In case ACPI is disabled, this device is enabled to provide the
VM with basic reboot/shutdown support.

### ARM PrimeCell General Purpose Input/Output (PL061)

Simplified ARM PrimeCell GPIO (PL061) implementation. It only supports key 3,
which is used to trigger a graceful shutdown of the AArch64 guest.

### ACPI device

This is a dedicated device for handling ACPI shutdown and reboot when ACPI is
enabled.

This device is always built-in, and it is enabled by default since the ACPI
feature is enabled by default.

## Virtio devices

For all virtio devices listed below, only the `virtio-pci` transport layer is
supported. Cloud Hypervisor supports multiple PCI segments, and users can
append `,pci_segment=<PCI_segment_number>` to the device flag on the Cloud
Hypervisor command line to assign a device to a specific PCI segment.
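
As a rough sketch of that syntax, the example below places the boot disk on
PCI segment 1. The image paths are placeholders, and the extra PCI segment is
assumed to have been created through the platform configuration beforehand
(e.g. a `--platform num_pci_segments=...` option, which is outside the scope
of this document).

```shell
# Sketch only: the disk is assigned to PCI segment 1 instead of the
# default segment 0; the network device stays on segment 0.
cloud-hypervisor \
    --kernel ./vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --platform num_pci_segments=2 \
    --disk path=./rootfs.img,pci_segment=1 \
    --net tap=tap0
```

Devices without an explicit `pci_segment` parameter stay on the default
segment 0.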

### virtio-block

The `virtio-blk` device exposes a block device to the guest. This device is
usually used to boot the operating system running in the VM.

This device is always built-in, and it is enabled based on the presence of the
flag `--disk`.

### virtio-console

`cloud-hypervisor` exposes a `virtio-console` device to the guest. Although
using this device as the guest console can potentially lose some early boot
messages, it can reduce the guest boot time and provides a complete console
implementation.

This device is always built-in, and it is enabled by default to provide a
guest console. It can be disabled, switching back to the legacy serial port,
by passing `--serial tty --console off` on the command line.

### virtio-iommu

As we want to improve our nested guests support, we added support for exposing
a [paravirtualized IOMMU](iommu.md) device through virtio. This allows for
safer support of nested virtio and directly assigned devices.

This device is always built-in, and it is enabled based on the presence of the
parameter `iommu=on` in any of the virtio or VFIO devices. If at least one of
these devices needs to be connected to the paravirtualized IOMMU, the
`virtio-iommu` device will be created.

### virtio-net

The `virtio-net` device provides network connectivity for the guest, as it
creates a network interface connected to a TAP interface automatically created
by `cloud-hypervisor` on the host.

This device is always built-in, and it is enabled based on the presence of the
flag `--net`.

### virtio-pmem

The `virtio-pmem` implementation emulates a virtual persistent memory device
that `cloud-hypervisor` can, for example, boot from. Booting from a
`virtio-pmem` device allows bypassing the guest page cache and improves the
guest memory footprint.

This device is always built-in, and it is enabled based on the presence of the
flag `--pmem`.

### virtio-rng

A VM does not generate entropy like a real machine would, which is an issue
when workloads running in the guest need random numbers to be generated. The
`virtio-rng` device provides entropy to the guest by relying on the generator
that can be found on the host. By default, the chosen source of entropy is
`/dev/urandom`.

This device is always built-in, and it is always enabled. The `--rng` flag can
be used to change the source of entropy.

### virtio-vsock

In order to communicate more efficiently and securely between host and guest,
we added a hybrid implementation of the [VSOCK](http://man7.org/linux/man-pages/man7/vsock.7.html)
socket address family over virtio.
Credits go to the [Firecracker](https://github.com/firecracker-microvm/firecracker/blob/master/docs/vsock.md)
project, as our implementation is a copy of theirs.

This device is always built-in, and it is enabled based on the presence of the
flag `--vsock`.

## Vhost-user devices

Vhost-user devices are virtio backends running outside of the VMM, each as its
own separate process. They are usually used to bring more flexibility and
increased isolation.

### vhost-user-blk

As part of the general effort to offload paravirtualized I/O to external
processes, we added support for vhost-user-blk backends. This enables
`cloud-hypervisor` users to plug a `vhost-user` based block device (e.g. SPDK)
into the VMM as their virtio block backend.

This device is always built-in, and it is enabled when `vhost_user=true` and
`socket` are provided to the `--disk` parameter.

### vhost-user-fs

`cloud-hypervisor` supports the [virtio-fs](https://virtio-fs.gitlab.io/)
shared file system, allowing for an efficient and reliable way of sharing
a filesystem between the host and the cloud-hypervisor guest.

See our [filesystem sharing](fs.md) documentation for more details on how to
use virtio-fs with cloud-hypervisor.

This device is always built-in, and it is enabled based on the presence of the
flag `--fs`.

### vhost-user-net

As part of the general effort to offload paravirtualized I/O to external
processes, we added support for [vhost-user-net](https://access.redhat.com/solutions/3394851)
backends. This enables `cloud-hypervisor` users to plug a `vhost-user` based
networking device (e.g. DPDK) into the VMM as their virtio network backend.

This device is always built-in, and it is enabled when `vhost_user=true` and
`socket` are provided to the `--net` parameter.

## VFIO

VFIO (Virtual Function I/O) is a kernel framework that exposes direct device
access to userspace. `cloud-hypervisor` uses VFIO to directly assign host
physical devices to its guests.

See our [VFIO documentation](vfio.md) for more details on how to directly
assign host devices to `cloud-hypervisor` guests.

Because VFIO implies `vfio-pci` in the `cloud-hypervisor` context, VFIO
support is built-in whenever the `pci` feature is selected. And since the
`pci` feature is enabled by default, VFIO support is also built-in by default.
When VFIO support is built-in, a physical device can be passed through to the
guest with the `--device` flag, which enables the VFIO code.
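
As a rough sketch, a passthrough invocation looks like the following. The PCI
address `0000:01:00.0` and the image paths are placeholders, and the device is
assumed to already be bound to the `vfio-pci` driver on the host (see the VFIO
documentation for the full host-side procedure).

```shell
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=./rootfs.img \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --device path=/sys/bus/pci/devices/0000:01:00.0/
```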