68 Another is that the modular nature of QEMU device emulation code provides
69 interface points where the QEMU functions that perform device emulation
74 QEMU device emulation
77 QEMU uses an object-oriented software architecture for device emulation code.
80 code to emulate a device named "foo" is always present in QEMU, but its
81 instantiation code is only run when the device is included in the target
82 VM. (e.g., via the QEMU command line as *-device foo*)
84 The object model is hierarchical, so device emulation code names its
85 parent object (such as "pci-device" for a PCI device) and QEMU will
86 instantiate a parent object before calling the device's instantiation
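
As a rough illustration of this object model, a minimal QOM registration for a hypothetical "foo" PCI device could look like the sketch below; the names, IDs and fields are invented for the example, but ``TypeInfo``, ``type_init()`` and the ``TYPE_PCI_DEVICE`` ("pci-device") parent are QEMU's standard mechanisms::

  #include "qemu/osdep.h"
  #include "hw/pci/pci.h"

  #define TYPE_FOO "foo"

  typedef struct FooState {
      PCIDevice parent_obj;   /* embeds the "pci-device" parent instance */
      uint32_t reg;           /* hypothetical device register */
  } FooState;

  /* runs only when the device is instantiated, e.g. "-device foo" */
  static void foo_realize(PCIDevice *pdev, Error **errp)
  {
  }

  static void foo_class_init(ObjectClass *klass, void *data)
  {
      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

      k->realize = foo_realize;
      k->vendor_id = 0x1234;   /* placeholder IDs for the example */
      k->device_id = 0x0001;
  }

  static const TypeInfo foo_info = {
      .name          = TYPE_FOO,
      .parent        = TYPE_PCI_DEVICE,   /* parent object named here */
      .instance_size = sizeof(FooState),
      .class_init    = foo_class_init,
      .interfaces    = (InterfaceInfo[]) {
          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
          { },
      },
  };

  static void foo_register_types(void)
  {
      /* the class is always present in the QEMU binary... */
      type_register_static(&foo_info);
  }

  /* ...but foo_realize() only runs if the device is added to the VM */
  type_init(foo_register_types)
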
92 In order to separate the device emulation code from the CPU emulation
93 code, the device object code must run in a different process. There are
100 Virtio guest device drivers can be connected to vhost user applications
102 device drivers in the guest and vhost user device objects in QEMU, but
111 As mentioned above, one of the tasks of the vhost device object within
113 information about this device instance. As part of the configuration
128 One of the events that can cause a return to QEMU is when a guest device
130 operation to the corresponding QEMU device object. In the case of a
131 vhost user device, the memory operation would need to be sent over a
144 that triggers the device interrupt in the guest when the eventfd is
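
The eventfd mechanism itself is ordinary POSIX plumbing; a minimal, QEMU-independent sketch of how one process signals such a descriptor and another consumes it::

  #include <stdint.h>
  #include <unistd.h>
  #include <sys/eventfd.h>

  int main(void)
  {
      int fd = eventfd(0, 0);   /* descriptor shared with the peer process */
      uint64_t inc = 1, count;

      /* signalling side (e.g. a backend that has completed a request) */
      write(fd, &inc, sizeof(inc));

      /* consuming side (e.g. KVM's irqfd, or a QEMU fd handler): the read
       * returns the accumulated count and resets it to zero */
      read(fd, &count, sizeof(count));

      close(fd);
      return 0;
  }
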
158 the guest virtio device to DMA to or from is not a guest physical
165 applicability to device separation
168 Much of the vhost model can be re-used by separated device emulation. In
169 particular, the ideas of using a socket between QEMU and the device
175 application works and the needs of separated device emulation. The most
176 basic is that vhost uses custom virtio device drivers which always
177 trigger IO with MMIO stores. A separated device emulation model must
178 work with existing IO device models and guest device drivers. MMIO loads
192 emulation). ``qemu-io`` is not a device emulation application per se, but it
194 QEMU one. This could be useful for disk device emulation, since its
202 while minimizing the changes needed to the device emulation code. The
210 modification. The device emulation objects will also be based on the
211 QEMU code, because for anything but the simplest device, it would not be
212 tractable to re-implement both the object model and the many device
230 configuration might be to put all controllers of the same device class
260 Each remote device emulated in a remote process on the host is
261 represented as a *-device* of type *pci-proxy-dev*. A socket
268 qemu-system-x86_64 ... -device pci-proxy-dev,id=lsi0,socket=3
270 can be used to add a device emulated in a remote process
276 QEMU is not aware of the type of the remote PCI device. It is
277 a pass-through device as far as QEMU is concerned.
286 the remote process. It is also used to pass on device-agnostic commands
289 per-device channels
292 Each remote device communicates with QEMU using a dedicated communication
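
One plausible way to establish such a dedicated channel, assumed here only for illustration, is an ``AF_UNIX`` socket pair whose far end is handed to the remote process::

  #include <sys/socket.h>

  static int make_device_channel(int *qemu_fd, int *remote_fd)
  {
      int fds[2];

      if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
          return -1;
      }
      *qemu_fd   = fds[0];   /* kept by the QEMU proxy object */
      *remote_fd = fds[1];   /* handed to the remote emulation process */
      return 0;
  }
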
296 QEMU device proxy objects
301 "device" and "bus" sub-classes whose child sub-classes make up the
302 device tree of a QEMU emulated system.
304 The proxy object model will use device proxy objects to replace the
305 device emulation code within the QEMU process. These objects will live
308 sub-class of the "pci-device" class, and will have the same PCI bus
318 The proxy device objects are initialized in the same manner as
319 any other QEMU device.
324 - Uses the "id" sub-option to connect to the emulated device on the
342 Other tasks will be device-specific. For example, PCI device objects
343 will initialize the PCI config space in order to make a valid PCI device
349 Most devices are driven by guest device driver accesses to IO addresses
350 or ports. The QEMU device emulation code uses QEMU's memory region
352 functions that QEMU will invoke when the guest accesses the device's
354 device, the VM will exit hardware virtualization mode and return to QEMU,
358 device emulator would perform in its initialization code, but with its
360 they will forward the operation to the device emulation process.
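
The registration step looks the same whether the callbacks emulate the device directly or forward the access; a sketch using QEMU's memory region API, with the ``foo`` names, the ``s`` instance pointer and the 4KB BAR size invented for the example::

  static uint64_t foo_mmio_read(void *opaque, hwaddr addr, unsigned size)
  {
      /* a proxy object would forward the read over the channel and wait
       * for the remote process's reply */
      return 0;
  }

  static void foo_mmio_write(void *opaque, hwaddr addr,
                             uint64_t val, unsigned size)
  {
      /* a proxy object would forward the write over the channel */
  }

  static const MemoryRegionOps foo_mmio_ops = {
      .read       = foo_mmio_read,
      .write      = foo_mmio_write,
      .endianness = DEVICE_LITTLE_ENDIAN,
  };

  /* in the device's (or proxy's) realize function: */
  memory_region_init_io(&s->mmio, OBJECT(s), &foo_mmio_ops, s,
                        "foo-mmio", 0x1000);
  pci_register_bar(&s->parent_obj, 0, PCI_BASE_ADDRESS_SPACE_MEMORY,
                   &s->mmio);
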
366 guest driver. Guest accesses to this space are not handled by the device
375 "pci-device-proxy" class that can serve as the parent of a PCI device
376 proxy object. This class's parent would be "pci-device" and it would
384 A proxy for a device that generates interrupts will need to create a
387 be injected into the guest. For example, a PCI device object may use
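
A hedged sketch of the proxy-side plumbing, assuming an ``EventNotifier`` whose file descriptor is passed to the remote process (the ``FooProxyState`` type and field names are illustrative)::

  static void foo_proxy_intr(void *opaque)
  {
      FooProxyState *s = opaque;

      event_notifier_test_and_clear(&s->intr);
      pci_set_irq(PCI_DEVICE(s), 1);   /* raise INTx toward the guest */
  }

  /* in the proxy's realize function: */
  event_notifier_init(&s->intr, 0);
  qemu_set_fd_handler(event_notifier_get_fd(&s->intr),
                      foo_proxy_intr, NULL, s);
  /* event_notifier_get_fd(&s->intr) is then sent to the remote process,
   * which writes to it whenever the emulated device asserts its interrupt */
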
394 a live migration event. The device proxy does not need to manage the
395 remote device's *vmstate*; that will be handled by the remote process
398 QEMU remote device operation
401 Generic device operations, such as DMA, will be performed by the remote
424 device's DMA address space. When an IOMMU memory region is created
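
For reference, in-process QEMU device code issues DMA with helpers such as ``pci_dma_read()``/``pci_dma_write()``, which operate on the device's DMA address space; the remote process needs an equivalent path. A sketch of the conventional in-process form, with a hypothetical descriptor address::

  uint8_t buf[512];

  /* read a (hypothetical) descriptor from guest memory through the
   * device's DMA address space */
  pci_dma_read(PCI_DEVICE(s), desc_addr, buf, sizeof(buf));
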
429 device hot-plug via QMP
432 A QMP "device\_add" command can add a device emulated by a remote
434 *-device* command line option does. The remote process may either be one
437 forward the new device's JSON description to the corresponding emulation
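
Mirroring the earlier command line example, the QMP form might look like the following; the ``socket`` property and the ``lsi1`` id are illustrative, not a fixed interface::

  { "execute": "device_add",
    "arguments": { "driver": "pci-proxy-dev", "id": "lsi1", "socket": 4 } }
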
446 descriptor to save the remote process's device *vmstate* over. The
453 device emulation in remote process
457 object model; the memory emulation objects; the device emulation objects
458 of the targeted device, and any dependent devices; and the device's
468 device emulation objects. The JSON descriptions sent by the QEMU process
473 Before the device objects are created, the initial address spaces and
490 QEMU process. For a PCI device, a PCI bus will need to be created with
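
Once the bus exists, the remote process can instantiate the requested device through the normal qdev/PCI calls; a sketch, assuming the JSON description asked for QEMU's ``lsi53c895a`` SCSI controller and that ``pci_bus`` is the bus created above::

  PCIDevice *dev;

  dev = pci_new(-1, "lsi53c895a");   /* device type named in the JSON */
  /* object_property_set_*() calls would apply any remaining JSON
   * properties here */
  pci_realize_and_unref(dev, pci_bus, &error_fatal);
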
499 The device emulation objects will use ``memory_region_init_io()`` to
505 same in the QEMU process and the device emulation process. In order to
512 When device emulation wants to inject an interrupt into the VM, the
513 request climbs the device's bus object hierarchy until the point where a
525 process, and have the device proxy object reflect it up the PCI tree
545 device emulation code can access.
577 the process's device state back to QEMU. This method will be reversed on
579 restore the device state.
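
Within the remote process the device state itself can still be described with QEMU's standard ``VMStateDescription`` machinery; a minimal sketch for the hypothetical ``FooState`` used earlier::

  static const VMStateDescription vmstate_foo = {
      .name = "foo",
      .version_id = 1,
      .minimum_version_id = 1,
      .fields = (const VMStateField[]) {
          VMSTATE_PCI_DEVICE(parent_obj, FooState),
          VMSTATE_UINT32(reg, FooState),
          VMSTATE_END_OF_LIST()
      }
  };
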
581 Accelerating device emulation
598 The expanded idea would require a new type of KVM device:
599 *KVM\_DEV\_TYPE\_USER*. This device has two file descriptors: a master
611 device will respond to. It includes the base and length of the range, as
615 A device can have multiple physical address ranges it responds to (e.g.,
616 a PCI device can have multiple BARs), so the structure will also include
617 an enumerated identifier to specify which of the device's ranges is
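
A hypothetical layout for such a range descriptor, purely to make the fields concrete (the design does not fix names or sizes)::

  #include <linux/types.h>

  struct kvm_user_region {
      __u64 base;      /* guest physical base address of the range */
      __u64 length;    /* length of the range in bytes */
      __u32 region_id; /* which of the device's ranges, e.g. a PCI BAR number */
      __u32 flags;     /* e.g. whether loads may be served from shadow memory */
  };
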
671 program. It also tracks the number of posted MMIO stores to the device
673 that a load to a device will not complete until all previous stores to
674 that device have been completed.
676 - device shadow memory
678 Some MMIO loads do not have device side-effects. These MMIOs can be
680 emulation program shares a shadow image of the device's memory image
694 The master descriptor is used by QEMU to configure the new KVM device.
698 KVM\_DEV\_TYPE\_USER device ops
703 ``kvm_init()``. These device ops are called by the KVM driver when QEMU
711 initialize a KVM user device specific data structure, and assign the
718 device type. *KVM\_DEV\_TYPE\_USER* ones will need several commands:
721 be passed to the device emulation program. Only one slave can be created
730 when a guest changes a device's PCI BAR registers.
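
On the kernel side, KVM device types are normally registered through ``struct kvm_device_ops``; a sketch of what registration of the proposed *KVM_DEV_TYPE_USER* type might look like (the ``kvm_user_*`` functions are hypothetical stubs)::

  static struct kvm_device_ops kvm_user_ops = {
      .name   = "kvm-user",
      .create = kvm_user_create,   /* called for KVM_CREATE_DEVICE */
      .ioctl  = kvm_user_ioctl,    /* master commands, e.g. creating the
                                      slave or updating address ranges */
      .mmap   = kvm_user_mmap,     /* mapping shadow memory */
  };

  /* registered during kvm_init(): */
  kvm_register_device_ops(&kvm_user_ops, KVM_DEV_TYPE_USER);
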
752 responds to system calls on the descriptor performed by the device
781 device memory with the KVM driver.
798 image allocated by the KVM driver. As device emulation updates device
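
Assuming the shadow image is exposed through the slave descriptor (an assumption made only for illustration), the emulation program could map and update it roughly as follows::

  #include <string.h>
  #include <sys/mman.h>

  void *shadow = mmap(NULL, shadow_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, slave_fd, 0);

  /* the emulation program keeps the image current; side-effect-free guest
   * loads within the registered ranges can then be satisfied from it */
  memcpy((uint8_t *)shadow + reg_offset, &reg_value, sizeof(reg_value));
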
814 This callback is invoked when the guest performs a load to the device.
819 if there are no outstanding stores to the device by this CPU. PCI memory
821 the same device have been completed.
836 the device's corresponding interrupt to be triggered by the KVM driver.
838 initialization, and are used when the emulation code raises a device
849 device has not de-asserted its interrupt.
863 INTx routing can be changed when the guest programs the APIC the device
875 data contains a vector that is programmed by the guest. A device may have
891 device's PCI config space. These include per-MSI interrupt enables and
892 vector data. Additionally, MSIX tables exist in device memory space, not
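
For comparison, in-process QEMU device code raises these interrupt flavours with helpers from the PCI layer; a sketch, with ``pdev`` and ``vector`` standing in for whatever the device computed::

  if (msix_enabled(pdev)) {
      msix_notify(pdev, vector);   /* MSI-X: table lives in device memory */
  } else if (msi_enabled(pdev)) {
      msi_notify(pdev, vector);    /* MSI: enables and data in config space */
  } else {
      pci_set_irq(pdev, 1);        /* legacy INTx */
  }
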
922 ID, and the third for all other user IDs. Each device instance would
951 host tun/tap device used to provide guest networking.
962 each device emulation process could be provisioned with a separate
963 category. The different device emulation processes would not be able to
968 used to prevent device emulation processes in different classes from