# Cloud Hypervisor VFIO-user HOWTO

VFIO-user is an *experimental* protocol that allows devices to be implemented in another process, with communication over a socket; i.e. VFIO-user is to VFIO as vhost-user is to virtio.

The protocol is documented here: https://github.com/nutanix/libvfio-user/blob/master/docs/vfio-user.rst

The Cloud Hypervisor support for such devices is *experimental*. Not all Cloud Hypervisor functionality is supported; in particular, virtio-mem and iommu are not supported.

## Usage

The `--user-device socket=<path>` parameter creates a vfio-user device when the VM is created, with `<path>` specifying the socket to connect to. The device can also be hotplugged into a running VM with `ch-remote add-user-device socket=<path>` (see the hotplug sketch at the end of this document).

## Example (GPIO device)

There is a simple GPIO device included in the libvfio-user repository: https://github.com/nutanix/libvfio-user#gpio

Run the example from the libvfio-user repository:

```sh
rm -f /tmp/vfio-user.sock
./build/dbg/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
```

Start Cloud Hypervisor; note that `--memory shared=on` is required so that the device backend process can access guest memory:

```sh
target/debug/cloud-hypervisor \
    --memory size=1G,shared=on \
    --disk path=~/images/focal-server-cloudimg-amd64.raw \
    --kernel ~/src/linux/vmlinux \
    --cmdline "root=/dev/vda1 console=hvc0" \
    --user-device socket=/tmp/vfio-user.sock
```

Inside the VM you can test the device with:

```sh
# Export the first GPIO line of the emulated chip
cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
# Read the pin value a few times
for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
```

## Example (NVMe device)

Use SPDK: https://github.com/spdk/spdk

Compile it with vfio-user support: `./configure --with-vfio-user`

Create an NVMe controller in SPDK that listens on a vfio-user socket and is backed by a simple AIO block device.
More details on configuring SPDK block devices can be found in [SPDK bdev](https://spdk.io/doc/bdev.html).
More details on setting up the SPDK NVMe-oF target can be found in [SPDK NVMe-oF tgt](https://spdk.io/doc/nvmf.html).

```sh
sudo scripts/setup.sh
# Create and format a 128M backing file for the AIO block device
rm -f ~/images/test-disk.raw
truncate -s 128M ~/images/test-disk.raw
mkfs.ext4 ~/images/test-disk.raw
# (Re)start the NVMe-oF target
sudo killall ./build/bin/nvmf_tgt
sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
sleep 2
# Create the vfio-user transport and a subsystem exposing the AIO bdev
sudo ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
sudo rm -rf /tmp/nvme-vfio-user
sudo mkdir -p /tmp/nvme-vfio-user
sudo ./scripts/rpc.py bdev_aio_create ~/images/test-disk.raw test 512
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode -a -s test
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode test
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode -t VFIOUSER -a /tmp/nvme-vfio-user -s 0
sudo chown -R $USER:$USER /tmp/nvme-vfio-user
```

Start Cloud Hypervisor, pointing at the `cntrl` socket that SPDK creates inside the listener directory:

```sh
target/debug/cloud-hypervisor \
    --memory size=1G,shared=on \
    --disk path=~/images/focal-server-cloudimg-amd64.raw \
    --kernel ~/src/linux/vmlinux \
    --cmdline "root=/dev/vda1 console=hvc0" \
    --user-device socket=/tmp/nvme-vfio-user/cntrl
```
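
Once the VM has booted, the SPDK controller should show up in the guest as a regular NVMe device. A quick smoke test, assuming the namespace enumerates as `/dev/nvme0n1` (the name can differ if the guest has other NVMe devices):

```sh
dmesg | grep -i nvme      # confirm the controller probed
lsblk                     # the 128M namespace should be listed
# The backing file was formatted ext4 above, so it can be mounted directly
sudo mount /dev/nvme0n1 /mnt
```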
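
As noted in the Usage section, either device can also be hotplugged into a running VM instead of being specified at boot. A minimal sketch, assuming the VM was started with `--api-socket /tmp/cloud-hypervisor.sock` (a path chosen here for illustration):

```sh
# Hotplug the GPIO sample device into the running VM;
# the API socket path is the assumption stated above
ch-remote --api-socket /tmp/cloud-hypervisor.sock add-user-device socket=/tmp/vfio-user.sock
```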
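
The add command prints information about the new device, including its identifier; the device should then be removable like other hotplugged devices with `ch-remote --api-socket /tmp/cloud-hypervisor.sock remove-device <id>`, where `<id>` is that identifier.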