Inter-VM Shared Memory Flat Device
----------------------------------

The ivshmem-flat device is meant to be used on machines that lack a PCI bus,
making them unsuitable for the traditional ivshmem device, which is modeled as
a PCI device. Machines with a Cortex-M MCU are good candidates for the
ivshmem-flat device. Also, since the flat version maps the control and status
registers directly into memory, it requires only a tiny "device driver" to
interact with other VMs, which is useful for some RTOSes, like Zephyr, that
usually run on resource-constrained targets.

Similar to the ivshmem device, the ivshmem-flat device supports both peer
notification via HW interrupts and Inter-VM shared memory. This allows the
device to be used together with the traditional ivshmem, enabling communication
between, for instance, an aarch64 VM (using the traditional ivshmem device and
running Linux) and an arm VM (using the ivshmem-flat device and running Zephyr
instead).

The ivshmem-flat device does not support the use of a ``memdev`` option (see
ivshmem.rst for more details). It relies on the ivshmem server to create and
distribute the proper shared memory file descriptor and the eventfd(s) used to
notify (interrupt) the peers. Therefore, to use this device, an ivshmem server
must always be up and running before the device can be created.

Although ivshmem-flat supports both peer notification (interrupts) and
shared memory, the interrupt mechanism is optional. If no input IRQ is
specified for the device, it is disabled, preventing the VM from notifying or
being notified by other VMs (a warning is displayed to inform the user that the
IRQ mechanism is disabled). The shared memory region is always present.
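The setup above can be sketched as follows. The ivshmem-server invocation
mirrors the one documented in ivshmem.rst; the socket path, shared memory size,
vector count, and machine name are illustrative values, and the exact set of
``ivshmem-flat`` device properties should be checked on your QEMU build (for
example with ``-device ivshmem-flat,help``), as this sketch assumes only the
``chardev`` property:

```shell
# Start the ivshmem server first; peers connect through its UNIX socket.
# (Paths, sizes, and vector count here are illustrative.)
ivshmem-server -p /tmp/ivshmem.pid -S /tmp/ivshmem_socket -l 4M -n 2

# Then create the device, pointing it at the server's socket via a chardev.
qemu-system-arm -M <your-machine> \
    -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem_flat \
    -device ivshmem-flat,chardev=ivshmem_flat
```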
The MMRs (the INTRMASK, INTRSTATUS, IVPOSITION, and DOORBELL registers) have
the same offsets within the MMR region, and the same functions, as defined by
the ivshmem spec, so they work exactly as in the ivshmem PCI device (see
./specs/ivshmem-spec.txt).