Searched hist:"637 b0aa139565cb82a7b9269e62214f87082635c" (Results 1 – 4 of 4) sorted by relevance
/qemu/include/hw/pci/ |
H A D | pci_device.h | 637b0aa139565cb82a7b9269e62214f87082635c Mon Aug 19 13:54:54 UTC 2024 Mattias Nissler <mnissler@rivosinc.com> softmmu: Support concurrent bounce buffers
When DMA memory can't be directly accessed, as is the case when running the device model in a separate process without shareable DMA file descriptors, bounce buffering is used.
It is not uncommon for device models to request mapping of several DMA regions at the same time. Examples include:
* net devices, e.g. when transmitting a packet that is split across several TX descriptors (observed with igb)
* USB host controllers, when handling a packet with multiple data TRBs (observed with xhci)
Previously, QEMU provided only a single bounce buffer per AddressSpace and would fail DMA map requests while that buffer was in use. In turn, this caused DMA failures that ultimately manifest as hardware errors from the guest's perspective.
This change allocates DMA bounce buffers dynamically instead of supporting only a single buffer, so multiple concurrent DMA mappings work correctly even when RAM can't be mmap()-ed.
The total bounce buffer allocation size is limited individually for each AddressSpace. The default limit is 4096 bytes, matching the previous maximum buffer size. A new x-max-bounce-buffer-size parameter is provided to configure the limit for PCI devices.
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240819135455.2957406-1-mnissler@rivosinc.com
Signed-off-by: Peter Xu <peterx@redhat.com>
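To make the mechanism concrete, below is a minimal C sketch of the accounting the commit describes: bounce buffers are allocated dynamically, a running total is charged against a per-AddressSpace cap, and a map request fails only when the cap would be exceeded. The type and field names (AddressSpace, BounceBuffer, bounce_buffer_size, max_bounce_buffer_size) echo the commit description but are illustrative stand-ins, not a verbatim excerpt of physmem.c.

#include <stdint.h>
#include <stdlib.h>
#include <stdatomic.h>

/* Illustrative stand-ins for the QEMU types involved; the real
 * definitions live in include/exec/memory.h and system/physmem.c. */
typedef struct AddressSpace {
    uint64_t max_bounce_buffer_size;     /* per-AS cap, default 4096 */
    _Atomic uint64_t bounce_buffer_size; /* bytes currently reserved */
} AddressSpace;

typedef struct BounceBuffer {
    size_t   len;
    uint64_t addr;      /* guest physical address being bounced */
    uint8_t  buffer[];  /* flexible array member holding the data */
} BounceBuffer;

/*
 * Reserve 'len' bytes against the per-AddressSpace limit and allocate a
 * bounce buffer, or return NULL if the cap would be exceeded.  Several
 * mappings can be live concurrently; each unmap releases its share.
 */
BounceBuffer *bounce_buffer_alloc(AddressSpace *as, uint64_t addr, size_t len)
{
    uint64_t used = atomic_fetch_add(&as->bounce_buffer_size, len);
    if (used + len > as->max_bounce_buffer_size) {
        /* Over the cap: undo the reservation and fail the map request. */
        atomic_fetch_sub(&as->bounce_buffer_size, len);
        return NULL;
    }
    BounceBuffer *bb = malloc(sizeof(*bb) + len);
    if (!bb) {
        atomic_fetch_sub(&as->bounce_buffer_size, len);
        return NULL;
    }
    bb->len = len;
    bb->addr = addr;
    return bb;
}

void bounce_buffer_free(AddressSpace *as, BounceBuffer *bb)
{
    atomic_fetch_sub(&as->bounce_buffer_size, bb->len);
    free(bb);
}

With the limit made configurable, a setup needing more concurrent bounce buffering than the 4096-byte default would presumably pass the new property on the device, e.g. -device igb,x-max-bounce-buffer-size=65536 (igb is just one example; per the commit message the parameter applies to PCI devices generally).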
|
/qemu/system/ |
H A D | physmem.c | 637b0aa139565cb82a7b9269e62214f87082635c Mon Aug 19 13:54:54 UTC 2024 Mattias Nissler <mnissler@rivosinc.com> softmmu: Support concurrent bounce buffers
|
H A D | memory.c | 637b0aa139565cb82a7b9269e62214f87082635c Mon Aug 19 13:54:54 UTC 2024 Mattias Nissler <mnissler@rivosinc.com> softmmu: Support concurrent bounce buffers
|
/qemu/hw/pci/ |
H A D | pci.c | 637b0aa139565cb82a7b9269e62214f87082635c Mon Aug 19 13:54:54 UTC 2024 Mattias Nissler <mnissler@rivosinc.com> softmmu: Support concurrent bounce buffers
|