- [1. What is Cloud Hypervisor?](#1-what-is-cloud-hypervisor)
  - [Objectives](#objectives)
    - [High Level](#high-level)
    - [Architectures](#architectures)
    - [Guest OS](#guest-os)
- [2. Getting Started](#2-getting-started)
  - [Host OS](#host-os)
  - [Use Pre-built Binaries](#use-pre-built-binaries)
  - [Packages](#packages)
  - [Building from Source](#building-from-source)
  - [Booting Linux](#booting-linux)
    - [Firmware Booting](#firmware-booting)
    - [Custom Kernel and Disk Image](#custom-kernel-and-disk-image)
      - [Building your Kernel](#building-your-kernel)
      - [Disk image](#disk-image)
      - [Booting the guest VM](#booting-the-guest-vm)
- [3. Status](#3-status)
  - [Hot Plug](#hot-plug)
  - [Device Model](#device-model)
  - [Roadmap](#roadmap)
- [4. Relationship with _Rust VMM_ Project](#4-relationship-with-rust-vmm-project)
  - [Differences with Firecracker and crosvm](#differences-with-firecracker-and-crosvm)
- [5. Community](#5-community)
  - [Contribute](#contribute)
  - [Slack](#slack)
  - [Mailing list](#mailing-list)
  - [Security issues](#security-issues)

# 1. What is Cloud Hypervisor?

Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) that runs on
top of the [KVM](https://www.kernel.org/doc/Documentation/virtual/kvm/api.txt)
hypervisor and the Microsoft Hypervisor (MSHV).

The project focuses on running modern _Cloud Workloads_ on specific, common
hardware architectures. In this case _Cloud Workloads_ refers to those that are
run by customers inside a Cloud Service Provider. This means modern operating
systems with most I/O handled by paravirtualised devices (e.g. _virtio_), no
requirement for legacy devices, and 64-bit CPUs.

Cloud Hypervisor is implemented in [Rust](https://www.rust-lang.org/) and is
based on the [Rust VMM](https://github.com/rust-vmm) crates.

## Objectives

### High Level

- Runs on KVM or MSHV
- Minimal emulation
- Low latency
- Low memory footprint
- Low complexity
- High performance
- Small attack surface
- 64-bit support only
- CPU, memory, PCI hotplug
- Machine to machine migration

### Architectures

Cloud Hypervisor supports the `x86-64` and `AArch64` architectures. There are
minor differences in functionality between the two architectures
(see [#1125](https://github.com/cloud-hypervisor/cloud-hypervisor/issues/1125)).

### Guest OS

Cloud Hypervisor supports `64-bit Linux` and Windows 10/Windows Server 2019.

# 2. Getting Started

The following sections describe how to build and run Cloud Hypervisor.

## Prerequisites for AArch64

- AArch64 servers (recommended) or development boards equipped with the GICv3
  interrupt controller.

## Host OS

For required KVM functionality and adequate performance, the recommended host
kernel version is 5.13. The majority of the CI currently tests with kernel
version 5.15.

## Use Pre-built Binaries

The recommended approach to getting started with Cloud Hypervisor is by using a
pre-built binary. Binaries are available for the [latest
release](https://github.com/cloud-hypervisor/cloud-hypervisor/releases/latest).
Use `cloud-hypervisor-static` for `x86-64` or `cloud-hypervisor-static-aarch64`
for the `AArch64` platform.
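
For example, on an `x86-64` host the latest static binary can be downloaded and
made executable as shown below. This is a minimal sketch: the asset name comes
from the release page linked above and GitHub's `releases/latest/download`
redirect is assumed; substitute `cloud-hypervisor-static-aarch64` on `AArch64`.

```shell
# Fetch the latest x86-64 static binary from the GitHub release assets
$ wget https://github.com/cloud-hypervisor/cloud-hypervisor/releases/latest/download/cloud-hypervisor-static
# Make it executable and check that it runs
$ chmod +x cloud-hypervisor-static
$ ./cloud-hypervisor-static --version
```
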
## Packages

For convenience, packages are also available targeting some popular Linux
distributions, thanks to the [Open Build Service](https://build.opensuse.org).
The [OBS README](https://github.com/cloud-hypervisor/obs-packaging) explains
how to enable the repository in a supported Linux distribution and install
Cloud Hypervisor and accompanying packages. Please report any packaging issues
in the [obs-packaging](https://github.com/cloud-hypervisor/obs-packaging)
repository.

## Building from Source

Please see the [instructions for building from source](docs/building.md) if you
do not wish to use the pre-built binaries.

## Booting Linux

Cloud Hypervisor supports direct kernel boot (on x86-64 the kernel must be
built with PVH support) or booting via a firmware (either [Rust Hypervisor
Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware) or an
edk2 UEFI firmware called `CLOUDHV` / `CLOUDHV_EFI`).

Binary builds of the firmware files are available for the latest release of
[Rust Hypervisor
Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/latest)
and [our edk2
repository](https://github.com/cloud-hypervisor/edk2/releases/latest).

The choice of firmware depends on your guest OS; some experimentation may be
required.

### Firmware Booting

Cloud Hypervisor supports booting disk images that contain all the components
needed to run cloud workloads, a.k.a. cloud images.

The following sample commands download an Ubuntu cloud image, convert it into a
format that Cloud Hypervisor can use, and fetch a firmware to boot the image
with.

```shell
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.4.2/hypervisor-fw
```

The Ubuntu cloud images do not ship with a default password, so it is necessary
to use a `cloud-init` disk image to customise the image on the first boot. A
basic `cloud-init` image is generated by this
[script](scripts/create-cloud-init.sh). It seeds the image with a default
username/password of `cloud/cloud123`. It is only necessary to add this disk
image on the first boot. The script also assigns a default IP address using the
details in `test_data/cloud-init/ubuntu/local/network-config`; when the VM is
started with the `--net "mac=12:34:56:78:90:ab,tap="` option, the guest
interface with the matching MAC address is configured as described in
`network-config`.

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

If access to the firmware messages or interaction with the boot loader (e.g.
GRUB) is required, it is necessary to switch to the serial console instead of
`virtio-console`.

```shell
$ ./cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask=" \
    --serial tty \
    --console off
```

### Custom Kernel and Disk Image

#### Building your Kernel

Cloud Hypervisor also supports direct kernel boot. For x86-64, a `vmlinux` ELF
kernel (compiled with PVH support) is needed.
In order to support development there is a custom branch; however, provided the
required options are enabled, any recent kernel will suffice.

To build the kernel:

```shell
# Clone the Cloud Hypervisor Linux branch
$ git clone --depth 1 https://github.com/cloud-hypervisor/linux.git -b ch-6.2 linux-cloud-hypervisor
$ pushd linux-cloud-hypervisor
# Use the x86-64 cloud-hypervisor kernel config to build your kernel for x86-64
$ wget https://raw.githubusercontent.com/cloud-hypervisor/cloud-hypervisor/main/resources/linux-config-x86_64
# Use the AArch64 cloud-hypervisor kernel config to build your kernel for AArch64
$ wget https://raw.githubusercontent.com/cloud-hypervisor/cloud-hypervisor/main/resources/linux-config-aarch64
$ cp linux-config-x86_64 .config # x86-64
$ cp linux-config-aarch64 .config # AArch64
# Do native build of the x86-64 kernel
$ KCFLAGS="-Wa,-mx86-used-note=no" make bzImage -j `nproc`
# Do native build of the AArch64 kernel
$ make -j `nproc`
$ popd
```

For x86-64, the `vmlinux` kernel image will then be located at
`linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin`.
For AArch64, the `Image` kernel image will then be located at
`linux-cloud-hypervisor/arch/arm64/boot/Image`.

#### Disk image

For the disk image, the same Ubuntu image as before can be used. This contains
an `ext4` root filesystem.

```shell
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img # x86-64
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-arm64.img # AArch64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw # x86-64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-arm64.img focal-server-cloudimg-arm64.raw # AArch64
```

#### Booting the guest VM

These sample commands boot the disk image using the custom kernel whilst also
supplying the desired kernel command line.

- x86-64

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

- AArch64

```shell
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
    --disk path=focal-server-cloudimg-arm64.raw --disk path=/tmp/ubuntu-cloudinit.img \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

If earlier kernel messages are required, the serial console should be used
instead of `virtio-console`.
- x86-64

```shell
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
    --console off \
    --serial tty \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cmdline "console=ttyS0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

- AArch64

```shell
$ ./cloud-hypervisor \
    --kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
    --console off \
    --serial tty \
    --disk path=focal-server-cloudimg-arm64.raw \
    --cmdline "console=ttyAMA0 root=/dev/vda1 rw" \
    --cpus boot=4 \
    --memory size=1024M \
    --net "tap=,mac=,ip=,mask="
```

# 3. Status

Cloud Hypervisor is under active development. The following stability
guarantees are currently made:

* The API (including command line options) will not be removed or changed in a
  breaking way without a minimum of two major releases' notice. Where possible,
  warnings will be given about the use of deprecated functionality, and the
  deprecations will be documented in the release notes.

* Point releases will be made between individual releases where there are
  substantial bug fixes or security issues that need to be fixed. These point
  releases will only include bug fixes.

Currently, the following items are **not** guaranteed across updates:

* Snapshot/restore is not supported across different versions
* Live migration is not supported across different versions
* The following features are considered experimental and may change
  substantially between releases: TDX, vfio-user, vDPA.

Further details can be found in the [release documentation](docs/releases.md).

As of 2023-01-03, the following cloud images are supported:

- [Ubuntu Focal](https://cloud-images.ubuntu.com/focal/current/) (focal-server-cloudimg-{amd64,arm64}.img)
- [Ubuntu Jammy](https://cloud-images.ubuntu.com/jammy/current/) (jammy-server-cloudimg-{amd64,arm64}.img)
- [Fedora 36](https://fedora.mirrorservice.org/fedora/linux/releases/36/Cloud/) ([Fedora-Cloud-Base-36-1.5.x86_64.raw.xz](https://fedora.mirrorservice.org/fedora/linux/releases/36/Cloud/x86_64/images/) / [Fedora-Cloud-Base-36-1.5.aarch64.raw.xz](https://fedora.mirrorservice.org/fedora/linux/releases/36/Cloud/aarch64/images/))

Direct kernel boot to userspace should work with a rootfs from most
distributions, although you may need to enable exotic filesystem types in the
reference kernel configuration (e.g. XFS or btrfs).

## Hot Plug

Cloud Hypervisor supports hotplug of CPUs, passthrough devices (VFIO),
`virtio-{net,block,pmem,fs,vsock}` and memory resizing. This
[document](docs/hotplug.md) details how to add devices to a running VM.
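
As a rough illustration of the workflow (the exact syntax and prerequisites are
covered in the document above), the VM is started with an API socket and then
modified at runtime with the `ch-remote` tool that ships alongside Cloud
Hypervisor. The socket path, sizes and disk path below are arbitrary examples;
CPU and memory hotplug require headroom to be reserved at boot (`max=` for
CPUs, `hotplug_size=` for memory).

```shell
# Boot with an API socket and room to grow: up to 8 vCPUs and extra RAM
$ ./cloud-hypervisor --api-socket /tmp/ch.sock \
    --cpus boot=4,max=8 --memory size=1024M,hotplug_size=8192M \
    --kernel ./hypervisor-fw --disk path=focal-server-cloudimg-amd64.raw
# Hotplug four more vCPUs, grow the memory, and attach another virtio-block disk
$ ch-remote --api-socket /tmp/ch.sock resize --cpus 8
$ ch-remote --api-socket /tmp/ch.sock resize --memory 3G
$ ch-remote --api-socket /tmp/ch.sock add-disk path=/path/to/extra-disk.raw
```
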
## Device Model

Details of the device model can be found in this
[documentation](docs/device_model.md).

## Roadmap

The project roadmap is tracked through a [GitHub
project](https://github.com/orgs/cloud-hypervisor/projects/6).

# 4. Relationship with _Rust VMM_ Project

In order to satisfy the design goal of having a high-performance,
security-focused hypervisor, the decision was made to use the
[Rust](https://www.rust-lang.org/) programming language. The language's strong
focus on memory and thread safety makes it an ideal candidate for implementing
VMMs.

Instead of implementing the VMM components from scratch, Cloud Hypervisor
imports the [Rust VMM](https://github.com/rust-vmm) crates and shares code and
architecture with other VMMs such as Amazon's
[Firecracker](https://firecracker-microvm.github.io/) and Google's
[crosvm](https://chromium.googlesource.com/chromiumos/platform/crosvm/).

Cloud Hypervisor embraces the _Rust VMM_ project's goal of sharing and re-using
as many virtualization crates as possible.

## Differences with Firecracker and crosvm

A large part of the Cloud Hypervisor code is based on either the Firecracker or
the crosvm project's implementations. Both of these are VMMs written in Rust
with a focus on safety and security, like Cloud Hypervisor.

The goal of the Cloud Hypervisor project differs from the aforementioned
projects in that it aims to be a general purpose VMM for _Cloud Workloads_ and
not limited to container/serverless or client workloads.

The Cloud Hypervisor community thanks the communities of both the Firecracker
and crosvm projects for their excellent work.

# 5. Community

The Cloud Hypervisor project follows the governance and community guidelines
described in the [Community](https://github.com/cloud-hypervisor/community)
repository.

## Contribute

The project strongly believes in building a global, diverse and collaborative
community around Cloud Hypervisor. Anyone who is interested in
[contributing](CONTRIBUTING.md) to the project is welcome to participate.

Contributing to an open source project like Cloud Hypervisor covers a lot more
than just sending code. Testing, documentation, pull request reviews, bug
reports, feature requests, project improvement suggestions, etc., are all
equally welcome means of contribution. See the [CONTRIBUTING](CONTRIBUTING.md)
document for more details.

## Slack

Get an [invite to our Slack channel](https://join.slack.com/t/cloud-hypervisor/shared_invite/enQtNjY3MTE3MDkwNDQ4LWQ1MTA1ZDVmODkwMWQ1MTRhYzk4ZGNlN2UwNTI3ZmFlODU0OTcwOWZjMTkwZDExYWE3YjFmNzgzY2FmNDAyMjI),
[join us on Slack](https://cloud-hypervisor.slack.com/), and [participate in our
community activities](https://cloud-hypervisor.slack.com/archives/C04R5DUQVBN).

## Mailing list

Please report bugs using the [GitHub issue
tracker](https://github.com/cloud-hypervisor/cloud-hypervisor/issues); for
broader community discussions you may use our [mailing
list](https://lists.cloudhypervisor.org/g/dev/).

## Security issues

Please contact the maintainers listed in the MAINTAINERS.md file with security
issues.