# Virtual IOMMU
With a virtual IOMMU, the VMM stands between the guest driver and its device
counterpart, validating and translating every address before accessing the
guest memory on behalf of the device.
This protection becomes relevant with nested virtualization, that is, when the
user intends to run multiple VMs from this L1 guest. We can end up with
multiple L2 guests whose memory should be isolated from one another, while
nothing would otherwise prevent the device implementation running in the host
VMM from accessing the entire guest L1 memory. With a virtual IOMMU in place,
such accesses are limited to the addresses the device is authorized to access.
## Why virtio-iommu?
The Cloud Hypervisor project decided to implement the brand new virtio-iommu
device in order to provide a virtual IOMMU to its users, the main reason being
the simplicity of this paravirtualized solution compared to emulating a
physical IOMMU.
## Pre-requisites
As of Linux kernel 5.14, virtio-iommu support is available for both x86-64 and AArch64.
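
The guest kernel must be built with virtio-iommu support. A quick way to check
from inside a running guest, assuming the distribution ships its kernel
configuration under `/boot`:

```bash
# CONFIG_VIRTIO_IOMMU is the upstream Kconfig symbol for the virtio-iommu driver
grep VIRTIO_IOMMU /boot/config-$(uname -r)
# Expected: CONFIG_VIRTIO_IOMMU=y
```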
In order to expose a virtual IOMMU to the guest, Cloud Hypervisor creates a
virtio-iommu device and exposes it through the ACPI IORT table. This can be
simply achieved by attaching at least one device to the virtual IOMMU.
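
From inside the guest, one way to confirm that a topology description was
exposed is to look at the ACPI tables. Depending on the kernel and Cloud
Hypervisor versions, the table describing the virtio-iommu topology may be
named IORT or VIOT, and reading these files usually requires root:

```bash
# List the ACPI tables exposed to the guest
ls /sys/firmware/acpi/tables/
```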
The way to expose a specific device to the guest as sitting behind this IOMMU
is to explicitly tag it from the command line with the option `iommu=on`.

Refer to the command line `--help` to find out which devices support being
attached to the virtual IOMMU.
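
For instance, a quick, informal way to spot which options document an `iommu`
parameter (the exact help layout may vary between releases):

```bash
./cloud-hypervisor --help | grep -i iommu
```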
Below is a simple example exposing the `virtio-blk` device as attached to the
virtual IOMMU:
```bash
./cloud-hypervisor \
    --cpus boot=1 \
    --memory size=512M \
    --disk path=focal-server-cloudimg-amd64.raw,iommu=on \
    --kernel custom-vmlinux \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw"
```
From a guest perspective, it is easy to verify if the device is protected by
the virtual IOMMU by checking the IOMMU groups listed under
`/sys/kernel/iommu_groups`. Under each group, it is possible to find out the
b/d/f of the device(s) part of this group.
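
A sketch of what this looks like for the example above (the group index and
b/d/f depend on the actual PCI topology):

```bash
# A single group is expected, containing only the virtio-blk device
ls /sys/kernel/iommu_groups/
ls /sys/kernel/iommu_groups/0/devices/
# Expected to show a single b/d/f such as 0000:00:03.0
```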
And you can validate the device is the one we expect by running `lspci`:

```bash
00:00.0 Host bridge: Intel Corporation Device 0d57
00:01.0 Unassigned class [ffff]: Red Hat, Inc. Device 1057
00:03.0 Mass storage controller: Red Hat, Inc. Virtio block device
```
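
The group membership can also be cross-checked from the device side, assuming
the virtio-blk device is `0000:00:03.0` as in the output above:

```bash
# The iommu_group symlink points back to the group the device belongs to
readlink /sys/bus/pci/devices/0000:00:03.0/iommu_group
```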
On AArch64, when ACPI is disabled, the virtual IOMMU is supported through the
Flattened Device Tree (FDT). In this case, the guest kernel cannot tell which
device should be IOMMU-attached and which should not: no matter how many
devices you attach to the virtual IOMMU with `iommu=on`, all devices on the
PCI bus end up attached to it.
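
A rough way to observe this from such a guest is to compare the number of
IOMMU groups with the number of PCI devices. The counts will not match exactly
(the virtio-iommu device itself, for instance, does not end up in a group),
but roughly one group per device is expected rather than only the explicitly
tagged ones:

```bash
# Compare the group count with the PCI device count inside the guest
ls /sys/kernel/iommu_groups/ | wc -l
lspci | wc -l
```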
## Faster mappings with huge pages

By default, the guest RAM is mapped with 4k pages, which causes the virtual
IOMMU device to be asked for 4k mappings only. This becomes very expensive when
passing a device through to an L2 guest: the VFIO driver running in L1 will
update the DMAR entries for the specific device, and because VFIO pins the
entire guest memory, every 4k page of the L2 guest RAM needs its own mapping.

If the guest RAM is backed by huge pages instead, the virtual IOMMU can map the
RAM as larger pages, and because the virtual IOMMU device supports it, the
guest issues far fewer mapping requests, which considerably speeds things up.
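
Before starting such a guest, enough huge pages must be reserved on the host to
back its entire RAM. A minimal sketch, assuming 2 MiB pages and the 8 GiB guest
used in the example below:

```bash
# 8 GiB / 2 MiB = 4096 huge pages; adjust to the guest RAM size
echo 4096 | sudo tee /proc/sys/vm/nr_hugepages
# Confirm the reservation
grep HugePages_Total /proc/meminfo
```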
The guest is then started with its RAM backed by huge pages (`hugepages=on`),
and with huge pages reserved inside the guest itself through its kernel command
line:

```bash
./cloud-hypervisor \
    --cpus boot=1 \
    --memory size=8G,hugepages=on \
    --disk path=focal-server-cloudimg-amd64.raw \
    --kernel custom-vmlinux \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw hugepagesz=2M hugepages=2048" \
    --net tap=,mac=,iommu=on
```
In order to get the best performance when running an L2 guest with a device
passed through, both the L1 and the L2 guest RAM must be backed by huge pages.
Here is how to achieve this, assuming the physical device you are passing
through is `0000:00:01.0`:
```bash
./cloud-hypervisor \
    --cpus boot=1 \
    --memory size=8G,hugepages=on \
    --disk path=focal-server-cloudimg-amd64.raw \
    --kernel custom-vmlinux \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw kvm-intel.nested=1 vfio_iommu_type1.allow_…
    --device path=/sys/bus/pci/devices/0000:00:01.0,iommu=on
```
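
Note that the L0 host must have nested virtualization enabled for the L1 guest
to run KVM guests of its own. A quick check on an Intel host (use `kvm_amd` on
AMD):

```bash
cat /sys/module/kvm_intel/parameters/nested
# Expected: Y (or 1 on older kernels)
```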
Once the L1 VM is running, unbind the device from the default driver in the
guest and bind it to `vfio-pci`. In this example the device shows up as
`0000:00:04.0` in the L1 guest, with vendor/device ID `8086:1502`:

```bash
# Unbind from the driver currently owning the device
echo 0000:00:04.0 > /sys/bus/pci/devices/0000:00:04.0/driver/unbind
# Let vfio-pci claim this vendor/device ID and bind the device to it
echo 8086 1502 > /sys/bus/pci/drivers/vfio-pci/new_id
echo 0000:00:04.0 > /sys/bus/pci/drivers/vfio-pci/bind
```
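
To confirm the rebinding worked, the driver link of the device can be inspected
from the L1 guest:

```bash
# Should point at the vfio-pci driver
readlink /sys/bus/pci/devices/0000:00:04.0/driver
```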
Finally, start the L2 guest, backing its RAM with huge pages as well and
passing the VFIO device through:

```bash
./cloud-hypervisor \
    --cpus boot=1 \
    --memory size=4G,hugepages=on \
    --disk path=focal-server-cloudimg-amd64.raw \
    --kernel custom-vmlinux \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --device path=/sys/bus/pci/devices/0000:00:04.0
```
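
Inside the L2 guest, the passed through device should now be visible on the PCI
bus. Assuming the same Intel NIC (`8086:1502`) as above:

```bash
lspci -nn | grep 8086:1502
```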
## IOMMU-attached PCI segments

Entire PCI segments can also be placed behind the virtual IOMMU, which makes it
possible to hotplug devices that must be IOMMU-attached. This is accomplished
through `--platform num_pci_segments=...,iommu_segments=...`, as in the
following example:
```bash
./cloud-hypervisor \
    --api-socket=/tmp/api \
    --cpus boot=1 \
    --memory size=4G,hugepages=on \
    --disk path=focal-server-cloudimg-amd64.raw \
    --kernel custom-vmlinux \
    --cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
    --platform num_pci_segments=2,iommu_segments=1
```
This adds a second PCI segment to the platform and places it behind the IOMMU.
A VFIO device requiring the IOMMU can then be hotplugged onto that segment:
```bash
./ch-remote --api-socket=/tmp/api add-device path=/sys/bus/pci/devices/0000:00:04.0,iommu=on,pci_se…
```
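
Inside the guest, devices on the second segment show up under PCI domain
`0001`. One way to check, assuming `lspci` is available (the `-D` flag prints
the domain):

```bash
lspci -D | grep '^0001:'
```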