
.. SPDX-License-Identifier: GPL-2.0

===============
DMA and swiotlb
===============

swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
typically used when a device doing DMA can't directly access the target memory
buffer because of hardware limitations or other requirements. In such a case,
the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
to the limitations. The DMA is done to/from this temporary memory buffer, and
the CPU copies the data between the temporary buffer and the original target
memory buffer. This approach is generically called "bounce buffering", and the
temporary memory buffer is called a "bounce buffer".

Device drivers don't interact directly with swiotlb. Instead, drivers use
the normal DMA map, unmap, and sync APIs when programming a device to do DMA.
These APIs use the device DMA attributes and kernel-wide settings to determine
when bounce buffering is necessary. When it is, the DMA layer allocates, frees,
and syncs the bounce buffers without any involvement from the driver. Because
the CPU must copy data between the original memory buffer and the bounce
buffer, doing bounce buffering is slower than doing DMA directly to the
original memory buffer, and it consumes more CPU resources. So it is used only
when necessary.
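
As an illustration, here is a minimal sketch of a driver doing a single
streaming DMA transfer. The my_device_start_dma() and my_device_wait_for_dma()
helpers are hypothetical stand-ins for device-specific programming; any
swiotlb bounce buffering happens (or doesn't) transparently inside the
dma_map_single() and dma_unmap_single() calls::

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /* Hypothetical device-specific helpers, assumed to exist in the driver. */
    void my_device_start_dma(struct device *dev, dma_addr_t addr, size_t len);
    void my_device_wait_for_dma(struct device *dev);

    static int my_dma_write(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t dma_addr;

            /* May transparently allocate and fill a bounce buffer. */
            dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, dma_addr))
                    return -ENOMEM;

            my_device_start_dma(dev, dma_addr, len);
            my_device_wait_for_dma(dev);

            /* May copy from and free the bounce buffer, per the DMA direction. */
            dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
            return 0;
    }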

Usage Scenarios
---------------
swiotlb was originally created to handle DMA for devices with addressing
limitations. As physical memory sizes grew beyond 4 GiB, some devices could
only provide 32-bit DMA addresses. By allocating bounce buffer memory below
the 4 GiB line, these devices can still do DMA to any location in physical
memory.

More recently, Confidential Computing (CoCo) VMs have the guest VM's memory
encrypted by default, and the memory is not accessible by the host hypervisor
and VMM. For the host to do I/O on behalf of the guest, the I/O must be
directed to guest memory that is unencrypted. CoCo VMs set a kernel-wide option
to force all DMA I/O to use bounce buffers, and the bounce buffer memory is set
up as unencrypted. The host does DMA I/O to/from the bounce buffer memory, and
the Linux kernel DMA layer does "sync" operations to cause the CPU to copy the
data to/from the original target memory buffer. The CPU copying bridges between
the unencrypted and the encrypted memory. This use of bounce buffers allows
device drivers to "just work" in a CoCo VM, with no modifications needed to
handle the memory encryption complexity.

swiotlb is also used to protect "untrusted" devices from seeing unrelated
kernel data. When IOMMU
mappings are set up for a DMA operation to/from a device that is considered
"untrusted", the device should be given access only to the memory containing
the data being transferred. But if that memory occupies only part of an IOMMU
granule, other parts of the granule may contain unrelated kernel data. Since
IOMMU access control is per-granule, the untrusted device can gain access to
that unrelated kernel data. This problem is solved by bounce buffering the DMA
operation, using bounce buffer memory that occupies its own IOMMU granule(s)
and contains no unrelated kernel data.

Core Functionality
------------------
The primary swiotlb APIs are swiotlb_tbl_map_single() and
swiotlb_tbl_unmap_single(). The "map" API allocates a bounce buffer of a
specified size in bytes and returns the physical address of the buffer. The
buffer memory is physically contiguous. The expectation is that the DMA layer
maps the physical memory address to a DMA address, and returns the DMA address
to the driver for programming into the device. If a DMA operation specifies
multiple memory buffer segments, a separate bounce buffer must be allocated for
each segment. swiotlb_tbl_map_single() always does a "sync" operation (i.e., a
CPU copy) to initialize the bounce buffer to match the contents of the original
buffer.

swiotlb_tbl_unmap_single() does the reverse. If the device potentially
updated the bounce buffer memory and DMA_ATTR_SKIP_CPU_SYNC is not set, the
unmap does a "sync" operation to cause a CPU copy of the data from the bounce
buffer back to the original buffer. Then the bounce buffer memory is freed.

swiotlb also provides "sync" APIs that correspond to the dma_sync_*() APIs that
a driver may use when ownership of a buffer transitions between the CPU and the
device. The swiotlb "sync" APIs do a CPU copy of the data between the original
buffer and the bounce buffer, in the direction required by the DMA operation.
Like the dma_sync_*() APIs, they can operate on a subset of the mapped buffer,
in which case only that part of the bounce
buffer is copied to/from the original buffer.
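
The following sketch shows the driver-visible side of those "sync" operations,
using the generic dma_sync_*() APIs; when the mapping was bounce buffered,
these calls are what trigger swiotlb's CPU copies in the direction appropriate
for the transfer. my_consume_data() is a hypothetical helper::

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical helper that reads the received data. */
    void my_consume_data(void *buf, size_t len);

    /* Called after the device signals that it has written into the buffer. */
    static void my_handle_rx(struct device *dev, void *buf,
                             dma_addr_t dma_addr, size_t len)
    {
            /* Give the CPU ownership; copies bounce buffer -> buf if bounced. */
            dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);

            my_consume_data(buf, len);

            /* Return ownership to the device for the next transfer. */
            dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);
    }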

Allocation Sizes and Alignment
------------------------------
swiotlb bounce buffer requests must be satisfied without blocking, because the
DMA map APIs can be called in contexts that are not allowed to
block. Hence the default memory pool for swiotlb allocations must be
pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
allocations must be physically contiguous, the entire default memory pool is
allocated as a single contiguous block.

The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff.
The pool should be large enough to ensure that bounce buffer requests can
always be satisfied, as the non-blocking requirement means requests can't wait
for space to become available. But a large pool potentially wastes memory, as
this pre-allocated memory is not available for other uses in the system. The
tradeoff is particularly acute in CoCo VMs that use bounce buffers for all DMA
I/O. These VMs use a heuristic to set the default pool size to ~6% of memory,
with a max of 1 GiB, which has the potential to be very wasteful of memory.
The heuristic is only approximate, and correctly tuning the
default memory pool size remains an open issue.

A single allocation from swiotlb is limited to IO_TLB_SIZE * IO_TLB_SEGSIZE
bytes, which is 256 KiB with current definitions. When a device's DMA settings
mean bounce buffering might be required, the maximum size of a DMA mapping
must be limited to that 256 KiB. This value is communicated to higher-level
kernel code via dma_max_mapping_size() and swiotlb_max_mapping_size(). If the
higher-level code fails to account for this limit, it may make requests that
are too large, and the resulting bounce buffer allocation will fail.
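
For reference, here is a sketch of the arithmetic, assuming the current
definitions of these constants in <linux/swiotlb.h> (IO_TLB_SHIFT == 11 and
IO_TLB_SEGSIZE == 128); a driver should query the limit rather than hard-code
it::

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/swiotlb.h>

    /*
     * IO_TLB_SIZE == (1 << IO_TLB_SHIFT) == 2 KiB, and IO_TLB_SEGSIZE == 128,
     * so the largest single swiotlb allocation is 2 KiB * 128 == 256 KiB.
     */
    static size_t my_max_transfer(struct device *dev)
    {
            /* Reflects the swiotlb limit when bounce buffering may be used. */
            return dma_max_mapping_size(dev);
    }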

The min_align_mask device DMA attribute describes hardware that requires the
offset of a buffer within some power-of-2 sized block (such as the 4 KiB
blocks used by NVMe devices) to be the same in the DMA address as in the
original buffer address. When a bounce buffer is used and the device's
min_align_mask is non-zero, it may produce an "alignment offset" in the address
of the bounce buffer, so that the offset of the original buffer within the
block is preserved. The alignment offset consumes part of the maximum bounce
buffer size, so the usable size of a mapping may be less than 256 KiB. For
example, for a block device whose max_sectors_kb is derived from the limit
reported by swiotlb, max_sectors_kb will be 256 KiB when min_align_mask is
zero. When min_align_mask is non-zero, swiotlb_max_mapping_size() reduces the
reported limit by the worst-case alignment offset, and max_sectors_kb is
correspondingly smaller.

swiotlb_tbl_map_single() also takes an alloc_align_mask parameter, and it
allocates the bounce buffer space starting on that alignment boundary, i.e., a
physical address with the alloc_align_mask bits set to zero. But the actual
bounce buffer might start at a larger address if min_align_mask is non-zero.
Hence there may be pre-padding space that is allocated prior to the start of
the bounce buffer. Similarly, the end of the bounce buffer is rounded up to an
alloc_align_mask boundary, potentially resulting in post-padding space. Any
pre-padding or post-padding space is not initialized by swiotlb code. The
alloc_align_mask parameter is used primarily by IOMMU code when bounce
buffering for untrusted devices. It is set to the granule size - 1 so that the
bounce buffer is allocated on an IOMMU granule boundary and occupies granules
that are not shared with unrelated kernel memory.
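
As an example of the driver side of min_align_mask, the sketch below shows a
hypothetical probe function for hardware that requires buffer offsets to be
preserved within 4 KiB blocks (similar to the NVMe driver's requirement)::

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/sizes.h>

    static int my_probe(struct device *dev)
    {
            /*
             * Bounce buffers allocated for this device will preserve the
             * original buffer's offset within a 4 KiB block, and
             * dma_max_mapping_size() will account for the worst-case
             * alignment offset.
             */
            dma_set_min_align_mask(dev, SZ_4K - 1);
            return 0;
    }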

Memory Pool Organization
------------------------
Memory used for swiotlb bounce buffers is allocated from overall system memory
as one or more "pools". The default pool is allocated during system boot with a
default size of 64 MiB. The default pool size can be changed with the
"swiotlb=" kernel boot line parameter, and it may also be adjusted
automatically due to other conditions, such as running in a CoCo VM, as
described above. If CONFIG_SWIOTLB_DYNAMIC is enabled, additional pools may be
added later from general system memory (see Dynamic swiotlb below). The default
pool is allocated below the 4 GiB physical address line so it works for devices
that can only address 32-bits of physical memory (unless architecture-specific
code provides the SWIOTLB_ANY flag). In a CoCo VM, the pool memory must be
decrypted before swiotlb is used.

The memory in a pool is divided into "slots" of size IO_TLB_SIZE, which is
2 KiB with current definitions. Slots are grouped into "slot sets" of
IO_TLB_SEGSIZE contiguous slots, and a bounce buffer must fit within a single
slot set, which leads to the maximum bounce buffer size being IO_TLB_SIZE *
IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single slot
set.

The slots in a pool are also grouped into "areas", and a bounce buffer must be
allocated entirely in a single area. Each area has its own spin lock that must
be held to allocate or free slots in that area, which reduces lock contention
when many CPUs are allocating bounce buffers at the same time, as in a large
CoCo VM. The number of areas defaults to the number of CPUs in the system for
maximum parallelism, but because an area can't be smaller than IO_TLB_SEGSIZE
slots, it might be necessary to assign multiple CPUs to the same area. The
number of areas must be a power of 2 because
the code uses shifting and bit masking to do many of the calculations. The
number of areas is rounded up to a power of 2 if necessary to meet this
requirement.
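
A simplified illustration of why the power-of-2 requirement helps (this is not
the exact kernel code): with the number of areas a power of 2, the area in
which a CPU starts searching can be derived with a cheap bit mask instead of a
division::

    #include <linux/smp.h>

    static unsigned int my_start_area(unsigned int nareas)
    {
            /* nareas is assumed to be a power of 2. */
            return raw_smp_processor_id() & (nareas - 1);
    }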

The default pool is allocated with PAGE_SIZE alignment. If an alloc_align_mask
argument to swiotlb_tbl_map_single() specifies a larger alignment, one or more
of the slots at the start of the pool might not be usable for such a request.
This is not a problem currently because alloc_align_mask is based on the IOMMU
granule size, and granules cannot be larger than PAGE_SIZE. But if that were to
change in the future, the initial pool allocation might need to be done with
a larger alignment.

Dynamic swiotlb
---------------
When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
the amount of memory available for allocation as bounce buffers. If a bounce
buffer request fails due to lack of available space, an asynchronous background
task is kicked off to allocate memory from general system memory and turn it
into a swiotlb pool. Creating an additional pool must be done asynchronously
because the memory allocation may block, and as noted above, swiotlb requests
are not allowed to block. Once the background task is kicked off, the bounce
buffer request creates a "transient pool" to avoid returning a "swiotlb full"
error to the caller. The transient pool is sized to the bounce buffer request
that created it and is deleted when the bounce buffer is freed. Memory for this
transient pool comes from the general system memory atomic pool so that
creation does not block. Creating a transient pool has relatively high cost,
particularly in a CoCo VM where the memory must be decrypted, so it is done
only as a stopgap until the background task can add another non-transient pool.

Adding a dynamic pool has limitations. Like with the default pool, the memory
must be physically contiguous, so the size is limited to MAX_PAGE_ORDER pages
(e.g., 4 MiB on a typical x86 system). Due to memory fragmentation, a max size
allocation may not be possible; in the worst case of severe
memory fragmentation, dynamically adding a pool might not succeed at all.
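
As a worked example of that size limit, assuming a typical x86 configuration
with MAX_PAGE_ORDER == 10 and a 4 KiB PAGE_SIZE::

    #include <linux/mm.h>

    static size_t my_max_dynamic_pool_bytes(void)
    {
            /* 4 KiB << 10 == 4 MiB of physically contiguous memory. */
            return (size_t)PAGE_SIZE << MAX_PAGE_ORDER;
    }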

Dynamic swiotlb is most useful in systems with only a
few CPUs. It allows the default swiotlb pool to be smaller so that memory is
not wasted when actual bounce buffering needs turn out to be modest, while
still allowing those needs to be met if they grow.

Data Structure Details
----------------------
swiotlb is managed with four primary data structures: io_tlb_mem, io_tlb_pool,
io_tlb_area, and io_tlb_slot. io_tlb_mem describes a swiotlb memory allocator,
which includes the default memory pool and any dynamic or transient pools
linked to it. Limited statistics on swiotlb usage are kept per memory allocator
and are stored in this data structure.

io_tlb_pool describes a memory pool, either the default pool, a dynamic pool,
or a transient pool. The description includes the start and end addresses of
the memory in the pool, a pointer to an array of io_tlb_area structures, and a
pointer to an array of io_tlb_slot structures that are associated with the pool.

io_tlb_area describes an area. The primary field is the spin lock used to
serialize access to slots in the area. The io_tlb_area array for a pool has an
entry for each area, and is accessed using a 0-based area index derived from
the calling processor ID. Areas exist solely to allow parallel access to
swiotlb from multiple CPUs.

io_tlb_slot describes an individual memory slot in the pool, with size
IO_TLB_SIZE (2 KiB currently). The io_tlb_slot array for a pool is accessed
using a slot index computed from the bounce buffer address relative to the
starting memory address of the pool.

The io_tlb_slot array is designed to meet several requirements. First, the DMA
APIs and the corresponding swiotlb APIs use the bounce buffer address as the
identifier for a bounce buffer. This address is returned by
swiotlb_tbl_map_single(), and then passed as an argument to
swiotlb_tbl_unmap_single() and the swiotlb "sync" APIs. The original
memory buffer address obviously must be passed as an argument to
swiotlb_tbl_map_single(), but it is not passed to the other APIs. Consequently,
swiotlb data structures must save the original memory buffer address so that it
can be used when doing the CPU copy for a later sync or unmap operation. That
address is saved in the io_tlb_slot array.

Second, swiotlb must handle partial syncs of a bounce buffer. In such cases,
the argument to swiotlb_sync_*() is not the address of the start of the bounce
buffer but an address somewhere in the middle of the bounce buffer, and the
address of the start of the bounce buffer isn't known to swiotlb code. But
swiotlb code must be able to calculate the corresponding original memory buffer
address to do the CPU copy dictated by the "sync". So an adjusted original
memory buffer address is populated into the struct io_tlb_slot for each slot
occupied by the bounce buffer.
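
A simplified sketch of the calculation this enables (not the exact kernel
code, and ignoring the min_align_mask handling; the real struct io_tlb_slot is
private to kernel/dma/swiotlb.c, so a hypothetical local model of the data
structures is used here)::

    #include <linux/swiotlb.h>      /* IO_TLB_SHIFT, IO_TLB_SIZE */
    #include <linux/types.h>

    /* Simplified model of swiotlb's internal per-slot bookkeeping. */
    struct sketch_slot {
            phys_addr_t orig_addr;  /* adjusted original address for this slot */
    };

    struct sketch_pool {
            phys_addr_t start;          /* physical start of the pool memory */
            struct sketch_slot *slots;  /* one entry per IO_TLB_SIZE slot */
    };

    /* Map an address inside a bounce buffer back to the original buffer. */
    static phys_addr_t sketch_tlb_to_orig(struct sketch_pool *pool,
                                          phys_addr_t tlb_addr)
    {
            unsigned long index = (tlb_addr - pool->start) >> IO_TLB_SHIFT;
            unsigned long offset = tlb_addr & (IO_TLB_SIZE - 1);

            return pool->slots[index].orig_addr + offset;
    }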

Third, the io_tlb_slot array is used to track available slots. The "list" field
in struct io_tlb_slot records how many contiguous available slots exist,
starting at that slot and extending toward the end of its slot set. The "list"
values are consulted when searching for enough contiguous available slots to
use for a new bounce buffer. They are updated when allocating and freeing
slots. When a pool is created, the
"list" field is initialized to IO_TLB_SEGSIZE down to 1 for the slots in every
slot set.
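
A sketch of that initialization (not the exact kernel code; the "list" values
are shown as a plain array for illustration)::

    #include <linux/swiotlb.h>      /* IO_TLB_SEGSIZE */

    static void sketch_init_free_list(unsigned short *list, unsigned long nslabs)
    {
            unsigned long i;

            /* Within each slot set: IO_TLB_SEGSIZE, IO_TLB_SEGSIZE - 1, ..., 1 */
            for (i = 0; i < nslabs; i++)
                    list[i] = IO_TLB_SEGSIZE - (i & (IO_TLB_SEGSIZE - 1));
    }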

Fourth, the io_tlb_slot array keeps track of any "padding slots" allocated to
meet alignment requirements. When
swiotlb_tbl_map_single() allocates bounce buffer space to meet alloc_align_mask
requirements, it may allocate pre-padding space across zero or more slots. But
swiotlb_tbl_unmap_single() is given only the bounce buffer address, not the
start of the pre-padding space, so the number of padding slots is recorded in
the "pad_slots" field of struct io_tlb_slot so that the padding slots can be
freed along with the slots containing data.
The "pad_slots" value is recorded only in the first non-padding slot allocated
to the bounce buffer.

Restricted pools
----------------
The swiotlb machinery is also used for "restricted pools", which are pools of
memory separate from the default swiotlb pool, and that are dedicated for DMA
use by a particular device. Restricted pools provide a level of DMA memory
protection by confining a device's DMA to a dedicated, pre-configured range of
memory. A restricted pool is typically described in the device tree using the
"restricted-dma-pool" reserved-memory binding. The swiotlb_alloc() and
swiotlb_free() functions
allocate/free slots from/to the restricted pool directly and do not go through
the swiotlb_tbl_map_single()/swiotlb_tbl_unmap_single() path.