- [1. What is Cloud Hypervisor?](#1-what-is-cloud-hypervisor)
  - [Objectives](#objectives)
    - [High Level](#high-level)
    - [Architectures](#architectures)
    - [Guest OS](#guest-os)
- [2. Getting Started](#2-getting-started)
  - [Host OS](#host-os)
  - [Use Pre-built Binaries](#use-pre-built-binaries)
  - [Packages](#packages)
  - [Building from Source](#building-from-source)
  - [Booting Linux](#booting-linux)
    - [Firmware Booting](#firmware-booting)
    - [Custom Kernel and Disk Image](#custom-kernel-and-disk-image)
      - [Building your Kernel](#building-your-kernel)
      - [Disk image](#disk-image)
      - [Booting the guest VM](#booting-the-guest-vm)
- [3. Status](#3-status)
  - [Hot Plug](#hot-plug)
  - [Device Model](#device-model)
  - [Roadmap](#roadmap)
- [4. Relationship with _Rust VMM_ Project](#4-relationship-with-rust-vmm-project)
  - [Differences with Firecracker and crosvm](#differences-with-firecracker-and-crosvm)
- [5. Community](#5-community)
  - [Contribute](#contribute)
  - [Slack](#slack)
  - [Mailing list](#mailing-list)
  - [Security issues](#security-issues)

# 1. What is Cloud Hypervisor?

Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) that runs on
top of the [KVM](https://www.kernel.org/doc/Documentation/virtual/kvm/api.txt)
hypervisor and the Microsoft Hypervisor (MSHV).

The project focuses on running modern _Cloud Workloads_ on specific, common
hardware architectures. Here _Cloud Workloads_ refers to those that are run by
customers inside a Cloud Service Provider: modern operating systems with most
I/O handled by paravirtualised devices (e.g. _virtio_), no requirement for
legacy devices, and 64-bit CPUs.

Cloud Hypervisor is implemented in [Rust](https://www.rust-lang.org/) and is
based on the [Rust VMM](https://github.com/rust-vmm) crates.
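Because Cloud Hypervisor runs on top of KVM on Linux hosts, a quick host
sanity check can save time before going further. This is an illustrative
sketch only; the device node and `/proc` paths are the standard Linux ones.

```shell
# Confirm the CPU exposes hardware virtualization (VT-x/AMD-V on x86-64)
$ grep -cE 'vmx|svm' /proc/cpuinfo
# Confirm the KVM device node is present (kvm modules loaded)
$ ls -l /dev/kvm
```

If `/dev/kvm` is missing, loading the `kvm_intel` or `kvm_amd` module, or
enabling virtualization in the host firmware, is the usual fix.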
## Objectives

### High Level

- Runs on KVM or MSHV
- Minimal emulation
- Low latency
- Low memory footprint
- Low complexity
- High performance
- Small attack surface
- 64-bit support only
- CPU, memory, PCI hotplug
- Machine to machine migration

### Architectures

Cloud Hypervisor supports the `x86-64`, `AArch64` and `riscv64`
architectures, with functionality varying across these platforms. The
functionality differences between `x86-64` and `AArch64` are documented
in [#1125](https://github.com/cloud-hypervisor/cloud-hypervisor/issues/1125).
The `riscv64` architecture support is experimental and offers limited
functionality. For more details and instructions, please refer to the [riscv
documentation](docs/riscv.md).

### Guest OS

Cloud Hypervisor supports `64-bit Linux` and Windows 10/Windows Server 2019.

# 2. Getting Started

The following sections describe how to build and run Cloud Hypervisor.

## Prerequisites for AArch64

- AArch64 servers (recommended) or development boards equipped with the GICv3
  interrupt controller.

## Host OS

For the required KVM functionality and adequate performance the recommended
host kernel version is 5.13. The majority of the CI currently tests with
kernel version 5.15.

## Use Pre-built Binaries

The recommended approach to getting started with Cloud Hypervisor is to use a
pre-built binary. Binaries are available for the [latest
release](https://github.com/cloud-hypervisor/cloud-hypervisor/releases/latest).
Use `cloud-hypervisor-static` for `x86-64` or `cloud-hypervisor-static-aarch64`
for the `AArch64` platform.

## Packages

For convenience, packages are also available targeting some popular Linux
distributions. This is thanks to the [Open Build
Service](https://build.opensuse.org).
The [OBS
README](https://github.com/cloud-hypervisor/obs-packaging) explains how to
enable the repository in a supported Linux distribution and install Cloud
Hypervisor and the accompanying packages. Please report any packaging issues
in the [obs-packaging](https://github.com/cloud-hypervisor/obs-packaging)
repository.

## Building from Source

Please see the [instructions for building from source](docs/building.md) if
you do not wish to use the pre-built binaries.

## Booting Linux

Cloud Hypervisor supports direct kernel boot (on x86-64 the kernel must be
built with PVH support or be a bzImage) as well as booting via a firmware:
either [Rust Hypervisor
Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware) or an
edk2 UEFI firmware called `CLOUDHV` / `CLOUDHV_EFI`.

Binary builds of the firmware files are available for the latest release of
[Rust Hypervisor
Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/latest)
and [our edk2
repository](https://github.com/cloud-hypervisor/edk2/releases/latest).

The choice of firmware depends on your guest OS choice; some experimentation
may be required.

### Firmware Booting

Cloud Hypervisor supports booting disk images containing all the components
needed to run cloud workloads, a.k.a. cloud images.

The following sample commands download an Ubuntu cloud image, convert it into
a format that Cloud Hypervisor can use, and fetch a firmware to boot the image
with.
```shell
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.4.2/hypervisor-fw
```

The Ubuntu cloud images do not ship with a default password, so it is
necessary to use a `cloud-init` disk image to customise the image on the first
boot. A basic `cloud-init` image is generated by this
[script](scripts/create-cloud-init.sh). It seeds the image with a default
username/password of `cloud/cloud123`, and the disk image only needs to be
attached on the first boot. The script also assigns a default IP address using
the details in `test_data/cloud-init/ubuntu/local/network-config`, used
together with the `--net "mac=12:34:56:78:90:ab,tap="` option; the interface
matching that MAC address is then configured as per the `network-config`
details.

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

If access to the firmware messages or interaction with the boot loader (e.g.
GRUB) is required, it is necessary to switch to the serial console instead of
`virtio-console`.

```shell
$ ./cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --serial tty \
    --console off
```

### Custom Kernel and Disk Image

#### Building your Kernel

Cloud Hypervisor also supports direct kernel boot. For x86-64, a `vmlinux` ELF
kernel (compiled with PVH support) or a regular bzImage is supported.
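When building your own x86-64 kernel, it can be worth confirming that PVH
entry support is enabled before attempting a direct `vmlinux` boot. A hedged
check, assuming `CONFIG_PVH` (the upstream Linux config symbol for the x86 PVH
entry point) and a configured source tree:

```shell
# From the configured kernel source tree:
$ grep -E '^CONFIG_PVH=' .config    # expect CONFIG_PVH=y for direct vmlinux boot
```

If the option is absent, booting a bzImage instead is the documented
alternative.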
To support development there is a custom branch; however, provided the
required options are enabled, any recent kernel will suffice.

To build the kernel:

```shell
# Clone the Cloud Hypervisor Linux branch
$ git clone --depth 1 https://github.com/cloud-hypervisor/linux.git -b ch-6.12.8 linux-cloud-hypervisor
$ pushd linux-cloud-hypervisor
$ make ch_defconfig
# Do a native build of the x86-64 kernel
$ KCFLAGS="-Wa,-mx86-used-note=no" make bzImage -j `nproc`
# Do a native build of the AArch64 kernel
$ make -j `nproc`
$ popd
```

For x86-64, the `vmlinux` kernel image will then be located at
`linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin`.
For AArch64, the `Image` kernel image will then be located at
`linux-cloud-hypervisor/arch/arm64/boot/Image`.

#### Disk image

For the disk image, the same Ubuntu image as before can be used. This contains
an `ext4` root filesystem.

```shell
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img # x86-64
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-arm64.img # AArch64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw # x86-64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-arm64.img focal-server-cloudimg-arm64.raw # AArch64
```

#### Booting the guest VM

These sample commands boot the disk image using the custom kernel whilst also
supplying the desired kernel command line.
- x86-64

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

- AArch64

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
    --disk path=focal-server-cloudimg-arm64.raw path=/tmp/ubuntu-cloudinit.img \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

If earlier kernel messages are required, the serial console should be used
instead of `virtio-console`.

- x86-64

```shell
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --console off \
    --serial tty \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cmdline "console=ttyS0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

- AArch64

```shell
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
    --console off \
    --serial tty \
    --disk path=focal-server-cloudimg-arm64.raw \
    --cmdline "console=ttyAMA0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

# 3. Status

Cloud Hypervisor is under active development. The following stability
guarantees are currently made:

* The API (including command line options) will not be removed or changed in a
  breaking way without a minimum of 2 major releases notice.
  Where possible, warnings will be given about the use of deprecated
  functionality, and the deprecations will be documented in the release notes.

* Point releases will be made between individual releases where there are
  substantial bug fixes or security issues that need to be fixed. These point
  releases will only include bug fixes.

Currently the following items are **not** guaranteed across updates:

* Snapshot/restore is not supported across different versions
* Live migration is not supported across different versions
* The following features are considered experimental and may change
  substantially between releases: TDX, vfio-user, vDPA.

Further details can be found in the [release documentation](docs/releases.md).

As of 2023-01-03, the following cloud images are supported:

- [Ubuntu Focal](https://cloud-images.ubuntu.com/focal/current/) (focal-server-cloudimg-{amd64,arm64}.img)
- [Ubuntu Jammy](https://cloud-images.ubuntu.com/jammy/current/) (jammy-server-cloudimg-{amd64,arm64}.img)
- [Fedora 36](https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/36/Cloud/) ([Fedora-Cloud-Base-36-1.5.x86_64.raw.xz](https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/36/Cloud/x86_64/images/) / [Fedora-Cloud-Base-36-1.5.aarch64.raw.xz](https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/36/Cloud/aarch64/images/))

Direct kernel boot to userspace should work with a rootfs from most
distributions, although you may need to enable exotic filesystem types in the
reference kernel configuration (e.g. XFS or btrfs).

## Hot Plug

Cloud Hypervisor supports hotplug of CPUs, passthrough devices (VFIO),
`virtio-{net,block,pmem,fs,vsock}` and memory resizing. This
[document](docs/hotplug.md) details how to add devices to a running VM.
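As an illustrative sketch of the hotplug workflow, the `ch-remote` tool drives
resizes over the VM's API socket. The socket path and sizes here are arbitrary
examples; CPU hotplug requires a `max` value in `--cpus`, and memory resizing
requires a `hotplug_size` in `--memory`, both set when the VM is started (see
the hotplug document for details).

```shell
# Assumes the VM was started with an API socket and hotplug headroom, e.g.:
#   --api-socket /tmp/cloud-hypervisor.sock --cpus boot=4,max=8 --memory size=1024M,hotplug_size=8G
$ ./ch-remote --api-socket=/tmp/cloud-hypervisor.sock resize --cpus 6
$ ./ch-remote --api-socket=/tmp/cloud-hypervisor.sock resize --memory 2G
```

After the resize, the guest may still need to online the new CPUs or memory,
depending on its configuration.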
## Device Model

Details of the device model can be found in this
[documentation](docs/device_model.md).

## Roadmap

The project roadmap is tracked through a [GitHub
project](https://github.com/orgs/cloud-hypervisor/projects/6).

# 4. Relationship with _Rust VMM_ Project

To satisfy the design goal of a high-performance, security-focused hypervisor,
the decision was made to use the [Rust](https://www.rust-lang.org/)
programming language. The language's strong focus on memory and thread safety
makes it an ideal candidate for implementing VMMs.

Instead of implementing the VMM components from scratch, Cloud Hypervisor
imports the [Rust VMM](https://github.com/rust-vmm) crates, sharing code and
architecture with other VMMs such as Amazon's
[Firecracker](https://firecracker-microvm.github.io/) and Google's
[crosvm](https://chromium.googlesource.com/chromiumos/platform/crosvm/).

Cloud Hypervisor embraces the _Rust VMM_ project's goal of sharing and
re-using as many virtualization crates as possible.

## Differences with Firecracker and crosvm

A large part of the Cloud Hypervisor code is based on either the Firecracker
or the crosvm project's implementations. Both of these are VMMs written in
Rust with a focus on safety and security, like Cloud Hypervisor.

The goal of the Cloud Hypervisor project differs from the aforementioned
projects in that it aims to be a general purpose VMM for _Cloud Workloads_,
not one limited to container/serverless or client workloads.

The Cloud Hypervisor community thanks the communities of both the Firecracker
and crosvm projects for their excellent work.

# 5. Community

The Cloud Hypervisor project follows the governance and community guidelines
described in the [Community](https://github.com/cloud-hypervisor/community)
repository.
## Contribute

The project strongly believes in building a global, diverse and collaborative
community around the Cloud Hypervisor project. Anyone who is interested in
[contributing](CONTRIBUTING.md) to the project is welcome to participate.

Contributing to an open source project like Cloud Hypervisor covers a lot more
than just sending code. Testing, documentation, pull request reviews, bug
reports, feature requests, project improvement suggestions, etc., are all
equally welcome means of contribution. See the [CONTRIBUTING](CONTRIBUTING.md)
document for more details.

## Slack

Get an [invite to our Slack channel](https://join.slack.com/t/cloud-hypervisor/shared_invite/enQtNjY3MTE3MDkwNDQ4LWQ1MTA1ZDVmODkwMWQ1MTRhYzk4ZGNlN2UwNTI3ZmFlODU0OTcwOWZjMTkwZDExYWE3YjFmNzgzY2FmNDAyMjI),
[join us on Slack](https://cloud-hypervisor.slack.com/), and [participate in
our community activities](https://cloud-hypervisor.slack.com/archives/C04R5DUQVBN).

## Mailing list

Please report bugs using the [GitHub issue
tracker](https://github.com/cloud-hypervisor/cloud-hypervisor/issues); for
broader community discussions you may use our [mailing
list](https://lists.cloudhypervisor.org/g/dev/).

## Security issues

Please contact the maintainers listed in the MAINTAINERS.md file with security
issues.