Multi-process QEMU

This is the design document for multi-process QEMU. It does not

-------------

VM control point, where VMs can be created, migrated, re-configured, and

A multi-process QEMU

A multi-process QEMU involves separating QEMU services into separate

A QEMU control process would remain, but in multi-process mode, will

provide the user interface to hot-plug devices or live migrate the VM.

A first step in creating a multi-process QEMU is to separate IO services

----------------------

VM. (e.g., via the QEMU command line as *-device foo*)

parent object (such as "pci-device" for a PCI device) and QEMU will
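
As an illustration of how a device type names its parent class, a minimal
QOM type declaration might look like the sketch below; the ``remote-foo-dev``
type is hypothetical and the skeleton omits the class and realize code a real
device needs::

  #include "qemu/osdep.h"
  #include "qemu/module.h"
  #include "hw/pci/pci.h"

  /* Hypothetical PCI device type; only the QOM parenting is shown. */
  #define TYPE_REMOTE_FOO_DEV "remote-foo-dev"

  static const TypeInfo remote_foo_dev_info = {
      .name          = TYPE_REMOTE_FOO_DEV,
      .parent        = TYPE_PCI_DEVICE,        /* the "pci-device" class */
      .instance_size = sizeof(PCIDevice),
      .interfaces    = (InterfaceInfo[]) {
          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
          { },
      },
  };

  static void remote_foo_dev_register_types(void)
  {
      type_register_static(&remote_foo_dev_info);
  }

  type_init(remote_foo_dev_register_types)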
mission-mode IO is performed by the application. The vhost user

the guest virtio device to DMA to or from is not a guest physical

a cache of IOMMU translations: sending translation requests back to
QEMU on cache misses, and in turn receiving flush requests from QEMU
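
For reference, the vhost protocol carries these translation misses and
invalidations in an IOTLB message whose layout follows the Linux UAPI
definition; roughly (from ``linux/vhost_types.h``)::

  struct vhost_iotlb_msg {
      __u64 iova;     /* I/O virtual address being translated */
      __u64 size;     /* length of the mapping */
      __u64 uaddr;    /* user-space address it translates to */
      __u8  perm;     /* VHOST_ACCESS_RO / _WO / _RW */
      __u8  type;     /* VHOST_IOTLB_MISS, _UPDATE, _INVALIDATE, ... */
  };

vhost-user wraps the same structure in its ``VHOST_USER_IOTLB_MSG`` message.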
Much of the vhost model can be re-used by separated device emulation. In

break vhost store acceleration since they are synchronous - guest

#### qemu-io model

``qemu-io`` is a test harness used to test changes to the QEMU block backend

emulation). ``qemu-io`` is not a device emulation application per se, but it
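
For example, an image can be exercised directly from the command line (the
image name here is only an example)::

  qemu-io -f qcow2 -c 'read 0 4k' disk.qcow2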
-------------------------------------------

a tractable task to re-implement both the object model and the many device

  disk-proc -blockdev driver=file,node-name=file0,filename=disk-file0 \
    -blockdev driver=qcow2,node-name=drive0,file=file0

would indicate process *disk-proc* uses a qcow2 emulated disk named

  disk-proc <socket number> <backend list>

  disk-proc -qmp unix:/tmp/disk-mon,server

can be monitored over the UNIX socket path */tmp/disk-mon*.
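
As a hedged example of how such a monitor might be used, the ``qmp-shell``
helper shipped in the QEMU source tree can connect to that socket and issue
QMP commands::

  qmp-shell /tmp/disk-mon
  (QEMU) query-block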
represented as a *-device* of type *pci-proxy-dev*. A socket
sub-option to this option specifies the Unix socket that connects
to the remote process. An *id* sub-option is required, and it should

  qemu-system-x86_64 ... -device pci-proxy-dev,id=lsi0,socket=3

the remote process. It is also used to pass on device-agnostic commands

per-device channels

QEMU has an object model based on sub-classes inherited from the
"object" super-class. The sub-classes that are of interest here are the
"device" and "bus" sub-classes whose child sub-classes make up the

sub-class of the "pci-device" class, and will have the same PCI bus

- Parses the "socket" sub-option and connects to the remote process

- Uses the "id" sub-option to connect to the emulated device on the

Other tasks will be device-specific. For example, PCI device objects

read-only, but certain registers (especially BAR and MSI-related ones)

"pci-device-proxy" class that can serve as the parent of a PCI device
proxy object. This class's parent would be "pci-device" and it would

Generic device operations, such as DMA, will be performed by the remote

DMA operations

DMA operations would be handled much like vhost applications do. One of

must be backed by shared file-backed memory, for example, using
*-object memory-backend-file,share=on* and setting that memory backend
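
A possible invocation (the object id, size, and path below are examples;
recent QEMU versions can also make the backend the machine's main RAM via
the machine's *memory-backend* property)::

  qemu-system-x86_64 ... \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/shm/qemu-mem,share=on \
    -machine memory-backend=mem0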
QEMU will need to create a socket for IOMMU requests from the emulation
process. It will handle those requests with an

device's DMA address space. When an IOMMU memory region is created
within the DMA address space, an IOMMU notifier for unmaps will be added
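
A minimal sketch of how such an unmap notifier could be registered with
QEMU's existing IOMMU notifier API; the callback and function names are
hypothetical::

  #include "qemu/osdep.h"
  #include "exec/memory.h"

  /* Hypothetical: forward the invalidated IOVA range to the emulation
   * process so it can purge the matching entries from its IOTLB cache. */
  static void remote_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *entry)
  {
      /* ... send entry->iova and entry->addr_mask over the IOMMU socket ... */
  }

  static void remote_register_unmap_notifier(IOMMUMemoryRegion *iommu_mr,
                                             Error **errp)
  {
      /* A real device would keep the notifier in its proxy object state. */
      static IOMMUNotifier unmap_notifier;

      iommu_notifier_init(&unmap_notifier, remote_iommu_unmap_notify,
                          IOMMU_NOTIFIER_UNMAP, 0, HWADDR_MAX, 0);
      memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr),
                                            &unmap_notifier, errp);
  }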
device hot-plug via QMP

*-device* command line option does. The remote process may either be one
started at QEMU startup, or be one added by the "add-process" QMP

handle requests from the QEMU process, and route machine-level requests

- address spaces

- RAM

- PCI

handle MMIO requests from QEMU, the PCI physical addresses must be the

- PCI pin interrupts

PCI bus object, and the root PCI object forwards interrupt requests to

- PCI MSI/X interrupts

PCI MSI/X interrupts are implemented in HW as DMA writes to a
CPU-specific PCI address. In QEMU on x86, a KVM APIC object receives
these DMA writes, then calls into the KVM driver to inject the interrupt

the MSI DMA address from QEMU as a message at initialization, then
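
On the QEMU side, one way a forwarded MSI message could be injected is
through the existing in-kernel irqchip helper; a hedged sketch (the function
name is hypothetical)::

  #include "qemu/osdep.h"
  #include "sysemu/kvm.h"
  #include "hw/pci/msi.h"

  /* Hypothetical handler for an MSI write reported by the emulation
   * process: hand the message to the in-kernel irqchip for injection. */
  static void remote_msi_deliver(uint64_t msi_addr, uint32_t msi_data)
  {
      MSIMessage msg = { .address = msi_addr, .data = msi_data };

      kvm_irqchip_send_msi(kvm_state, msg);
  }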
DMA operations

When an emulation object wants to DMA into or out of guest memory, it
first must use ``dma_memory_map()`` to convert the DMA address to a local

will be used to translate the DMA address to a local virtual address the
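
A hedged sketch of this map/copy/unmap pattern, using the DMA helpers as
they appear in recent QEMU trees (the function name is a placeholder)::

  #include "qemu/osdep.h"
  #include "sysemu/dma.h"

  /* Copy 'len' bytes of device data into guest memory at DMA address
   * 'addr' within the device's DMA address space 'as'. */
  static void example_dma_write(AddressSpace *as, dma_addr_t addr,
                                const void *buf, dma_addr_t len)
  {
      dma_addr_t plen = len;
      void *host = dma_memory_map(as, addr, &plen, DMA_DIRECTION_FROM_DEVICE,
                                  MEMTXATTRS_UNSPECIFIED);

      if (!host || plen < len) {
          /* Mapping failed or was truncated; a real device would split
           * the transfer or report an error. */
          if (host) {
              dma_memory_unmap(as, host, plen, DMA_DIRECTION_FROM_DEVICE, 0);
          }
          return;
      }
      memcpy(host, buf, len);
      dma_memory_unmap(as, host, len, DMA_DIRECTION_FROM_DEVICE, len);
  }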
When an IOMMU is in use in QEMU, DMA translation uses IOMMU memory
regions to translate the DMA address to a guest physical address before

- IOTLB cache

DMA address to a guest PA. On a cache miss, a message will be sent back

- IOTLB purge

The IOMMU emulation will also need to act on unmap requests from QEMU.
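
A minimal sketch of how the emulation process might purge its cached
translations when such an unmap arrives; the cache layout is an assumption
made only for illustration, not an existing interface::

  #include <glib.h>
  #include <stdint.h>

  /* Hypothetical IOTLB cache entry kept by the emulation process. */
  typedef struct IOTLBCacheEntry {
      uint64_t iova;
      uint64_t len;
      void    *host;     /* local mapping for the translated range */
  } IOTLBCacheEntry;

  static GHashTable *iotlb_cache;   /* iova -> IOTLBCacheEntry */

  static gboolean iotlb_overlaps(gpointer key, gpointer value, gpointer data)
  {
      const IOTLBCacheEntry *e = value;
      const IOTLBCacheEntry *range = data;

      return e->iova < range->iova + range->len &&
             range->iova < e->iova + e->len;
  }

  /* Drop every cached translation overlapping [iova, iova + len). */
  static void iotlb_purge(uint64_t iova, uint64_t len)
  {
      IOTLBCacheEntry range = { .iova = iova, .len = len };

      g_hash_table_foreach_remove(iotlb_cache, iotlb_overlaps, &range);
  }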
restore - the channel will be passed to ``qemu_loadvm_state()`` to

- guest physical range
- MMIO request structure
- MMIO request queues

- scoreboard

wait queue and sequence number for the per-CPU threads, allowing them to

- device shadow memory

Some MMIO loads do not have device side-effects. These MMIOs can be

side-effects (and can be completed immediately), and which require a
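
One way the shadow image and its side-effect tracking could be laid out is
sketched below; the structure and its granularity are assumptions made
purely for illustration::

  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  /* Hypothetical shadow of a 4 KiB MMIO BAR, with one bit per 32-bit
   * register marking loads that have side-effects and so must be
   * forwarded to the emulation process. */
  typedef struct ShadowRegion {
      uint8_t  regs[4096];
      uint64_t side_effect[4096 / 4 / 64];
  } ShadowRegion;

  /* Returns true if the load was satisfied from the shadow image. */
  static bool load_from_shadow(ShadowRegion *s, unsigned offset, uint32_t *val)
  {
      unsigned reg = offset / 4;

      if (s->side_effect[reg / 64] & (1ULL << (reg % 64))) {
          return false;          /* must be forwarded to the emulation process */
      }
      memcpy(val, &s->regs[offset], sizeof(*val));
      return true;
  }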
- create

``ioctl()`` on its per-VM file descriptor. It will allocate and

- ioctl

- destroy

- read

A read returns any pending MMIO requests from the KVM driver as MMIO

multiple MMIO operations pending. The MMIO requests are moved from the

- write

A write also consists of a set of MMIO requests. They are compared to
the MMIO requests in the sent queue. Matches are removed from the sent

removed, then the number of posted stores in the per-CPU scoreboard is
decremented. When the number is zero, and a non side-effect load was

- ioctl

- poll

to determine if there are MMIO requests waiting to be read. It will

- mmap

memory, changes with no side-effects will be reflected in the shadow,
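
Taken together, the read, write, and poll operations suggest a request loop
in the emulation process along the lines of the sketch below; the request
layout and file-descriptor semantics are assumptions about the proposed
interface, not an existing kernel ABI::

  #include <poll.h>
  #include <stdint.h>
  #include <unistd.h>

  /* Hypothetical wire format of one MMIO request/reply. */
  struct mmio_request {
      uint64_t addr;
      uint64_t value;
      uint32_t size;
      uint32_t is_write;
      uint64_t seq;            /* lets replies be matched to requests */
  };

  static void serve_mmio(int devfd)
  {
      struct pollfd pfd = { .fd = devfd, .events = POLLIN };

      for (;;) {
          struct mmio_request req;

          if (poll(&pfd, 1, -1) <= 0) {
              continue;
          }
          if (read(devfd, &req, sizeof(req)) != (ssize_t)sizeof(req)) {
              continue;
          }
          /* ... emulate the access, filling req.value for reads ... */
          write(devfd, &req, sizeof(req));   /* complete the request */
      }
  }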
Each KVM per-CPU thread can handle MMIO operations on behalf of the guest

- read

Loads with side-effects must be handled synchronously, with the KVM

process reply before re-starting the guest. Loads that do not have
side-effects may be optimized by satisfying them from the shadow image,

- write

the per-CPU scoreboard, in order to implement the PCI ordering

irq file descriptor, a re-sampling file descriptor needs to be sent to

acknowledged by the guest, so they can re-trigger the interrupt if their
device has not de-asserted its interrupt.
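
A hedged sketch of setting up that irq/re-sampling pair on the QEMU side,
using existing event-notifier and KVM irqchip helpers (the function name and
the way *virq* is obtained are assumptions)::

  #include "qemu/osdep.h"
  #include "qemu/event_notifier.h"
  #include "sysemu/kvm.h"

  /* Create the irq and re-sampling eventfds for one INTx pin and attach
   * them to KVM irqchip route 'virq'; the two fds would then be passed
   * to the emulation process over the communication channel. */
  static int remote_intx_setup(EventNotifier *irq, EventNotifier *resample,
                               int virq)
  {
      int ret;

      ret = event_notifier_init(irq, 0);
      if (ret) {
          return ret;
      }
      ret = event_notifier_init(resample, 0);
      if (ret) {
          event_notifier_cleanup(irq);
          return ret;
      }
      return kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, irq, resample, virq);
  }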
using ``event_notifier_init()`` to create the irq and re-sampling

interrupt logic to change the route: de-assigning the existing irq

MSI/X interrupts are sent as DMA transactions to the host. The interrupt

The guest may dynamically update several MSI-related tables in the
device's PCI config space. These include per-MSI interrupt enables and
--------------

---------------------------

--------------------

types separate from the main QEMU process and non-disk emulation

and non-network emulation process, and only that type can access the