# Cloud Hypervisor VFIO-user HOWTO

VFIO-user is an *experimental* protocol that allows devices to be implemented in another process and communicated with over a socket; i.e. VFIO-user is to VFIO as vhost-user is to virtio.

The protocol is documented here: https://github.com/nutanix/libvfio-user/blob/master/docs/vfio-user.rst

The Cloud Hypervisor support for such devices is *experimental*. Not all Cloud Hypervisor functionality is supported; in particular, virtio-mem and iommu are not supported.

## Usage

The `--user-device socket=<path>` parameter is used to create a vfio-user device when creating the VM, with `socket` specifying the socket to connect to. The device can also be hotplugged with `ch-remote add-user-device socket=<path>`.

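For example, assuming the VM was started with its API socket at `/tmp/ch.sock` (an illustrative path), a device can be hotplugged at runtime with:

```sh
# --api-socket must match the path the cloud-hypervisor process was launched with
ch-remote --api-socket=/tmp/ch.sock add-user-device socket=/tmp/vfio-user.sock
```
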
## Example (GPIO device)

There is a simple GPIO device included in the libvfio-user repository: https://github.com/nutanix/libvfio-user#gpio

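If you have not built the samples yet, a minimal sketch (assuming the repository's standard build, which places debug binaries under `build/dbg`):

```sh
git clone https://github.com/nutanix/libvfio-user
cd libvfio-user
make  # builds the library and the sample devices into build/dbg
```
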
Run the example from the libvfio-user repository:

```sh
# Remove any stale socket, then start the sample device in the background
rm -f /tmp/vfio-user.sock
./build/dbg/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
```

Start Cloud Hypervisor:

```sh
# shared=on is required so the vfio-user backend process can access guest RAM
target/debug/cloud-hypervisor \
    --memory size=1G,shared=on \
    --disk path=~/images/focal-server-cloudimg-amd64.raw \
    --kernel ~/src/linux/vmlinux \
    --cmdline "root=/dev/vda1 console=hvc0" \
    --user-device socket=/tmp/vfio-user.sock
```

Inside the VM you can test the device with:

```sh
# Export the first GPIO line of the emulated chip (the chip base may vary)
cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
# Read the line's value a few times
for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
```

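You can also confirm that the emulated card was enumerated by listing the guest's PCI devices (how the device is labelled depends on the PCI ID database):

```sh
lspci -nn
```
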
## Example (NVMe device)

Use SPDK: https://github.com/spdk/spdk

Compile with `./configure --with-vfio-user`.

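A minimal build sketch (following SPDK's standard build steps; paths are assumptions):

```sh
git clone https://github.com/spdk/spdk
cd spdk
git submodule update --init
sudo ./scripts/pkgdep.sh   # install build dependencies
./configure --with-vfio-user
make
```
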
Create an NVMe controller listening on a vfio-user socket with a simple block device:

```sh
# Set up hugepages and bind devices for SPDK
sudo ./scripts/setup.sh
# Create a small ext4-formatted backing file for the block device
rm -f ~/images/test-disk.raw
truncate ~/images/test-disk.raw -s 128M
mkfs.ext4 ~/images/test-disk.raw
# Restart the NVMe-oF target
sudo killall nvmf_tgt
sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
sleep 2
# Create the vfio-user transport and a subsystem exposing the AIO bdev
sudo ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
sudo rm -rf /tmp/nvme-vfio-user
sudo mkdir -p /tmp/nvme-vfio-user
sudo ./scripts/rpc.py bdev_aio_create ~/images/test-disk.raw test 512
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode -a -s test
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode test
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode -t VFIOUSER -a /tmp/nvme-vfio-user -s 0
# Make the socket directory accessible to the non-root user running the VMM
sudo chown -R $USER:$USER /tmp/nvme-vfio-user
```

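Before starting the VM, you can sanity-check the target configuration (assuming the stock SPDK RPCs):

```sh
# Should show the subsystem with its namespace and the VFIOUSER listener
sudo ./scripts/rpc.py nvmf_get_subsystems
```
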
Start Cloud Hypervisor:

```sh
# shared=on is again required so the SPDK process can access guest RAM
target/debug/cloud-hypervisor \
    --memory size=1G,shared=on \
    --disk path=~/images/focal-server-cloudimg-amd64.raw \
    --kernel ~/src/linux/vmlinux \
    --cmdline "root=/dev/vda1 console=hvc0" \
    --user-device socket=/tmp/nvme-vfio-user/cntrl
```

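Once the guest has booted, the NVMe namespace should appear as a block device (the exact device name is an assumption; check `lsblk`):

```sh
lsblk                         # expect something like /dev/nvme0n1
sudo mount /dev/nvme0n1 /mnt  # the backing file was already formatted as ext4
```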