with example pseudo-code. For a concise description of the API, see
DMA-API.txt.
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.
             CPU                  CPU                  Bus
           Virtual              Physical             Address
           Address              Address               Space
            Space                Space

          +-------+             +------+             +------+
          |       |             |MMIO  |   Offset    |      |
          |       |  Virtual    |Space |   applied   |      |
        C +-------+ --------> B +------+ ----------> +------+ A
          |       |  mapping    |      |   by host   |      |
+-----+   |       |             |      |   bridge    |      |   +--------+
|     |   |       |             +------+             |      |   |        |
| CPU |   |       |             | RAM  |             |      |   | Device |
|     |   |       |             |      |             |      |   |        |
+-----+   +-------+             +------+             +------+   +--------+
          |       |  Virtual    |Buffer|   Mapping   |      |
        X +-------+ --------> Y +------+ <---------- +------+ Z
          |       |  mapping    | RAM  |   by IOMMU  |      |
          |       |             |      |             |      |
          +-------+             +------+
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces. First of all, you should make sure::

	#include <linux/dma-mapping.h>
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
By default, the kernel assumes that your device can address 32 bits of DMA
address space. For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.
Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.
device struct of your device is embedded in the bus-specific device struct of
your device. For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).
system. If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
1) Use some non-DMA mode for data transfer, if possible.
The standard 64-bit addressing device would do something like this::
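	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}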
If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::
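	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}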
Finally, if your device can only drive the low 24-bits of
address you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}
Here is pseudo-code showing how this might be done::
	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}
- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware
  should guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.
- Network card DMA ring descriptors.
- SCSI adapter mailbox command data structures.
- Device firmware microcode executed out of
  main memory.
	desc->word0 = address;
	wmb();
	desc->word1 = DESC_VALID;
- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.
- Networking buffers transmitted/received by a device.
- Filesystem buffers written/read by a SCSI device.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
which is 32-bit addressable. Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().
like queue heads needing to be aligned on N byte boundaries.
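For reference, a minimal allocate/use/free round trip with the consistent
API might look like this (a sketch; dev and size are assumed to come from
the surrounding driver code)::

	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		goto alloc_failed;

	/* cpu_addr is used by the CPU, dma_handle is programmed into
	 * the device; both refer to the same memory.
	 */

	dma_free_coherent(dev, size, cpu_addr, dma_handle);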
potential platform-specific optimizations of such) is for debugging.
	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}
	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}
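When the DMA activity is finished, the mapping should be released with the
matching unmap call; a sketch, using the same names as above::

	dma_unmap_page(dev, dma_handle, size, direction);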
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.
accessed sg->address and sg->length as shown above.
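A sketch of that loop (dev, sglist, nents and direction are assumed from
the surrounding code; hw_address[] and hw_len[] stand in for however the
device consumes the mapped addresses)::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}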
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.
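For a single mapping, that synchronization is a pair of calls bracketing the
CPU's access; a sketch, with dev, dma_handle, size and direction assumed
from context::

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

	/* The CPU may now safely read the buffer (and write it, if
	 * the mapping direction permits).
	 */

	dma_sync_single_for_device(dev, dma_handle, size, direction);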
	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}
		dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
					cp->rx_len,
					DMA_FROM_DEVICE);

		/* Now it is safe to examine the buffer. */
		hp = (struct my_card_header *) cp->rx_buf;
		if (header_is_ok(hp)) {
			dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
					 DMA_FROM_DEVICE);
			pass_to_upper_layers(cp->rx_buf);
			make_and_setup_new_rx_buf(cp);
		} else {
			/* CPU should not write to
			 * DMA_FROM_DEVICE-mapped area,
			 * so dma_sync_single_for_device() is
			 * not needed here. It would be required
			 * for DMA_BIDIRECTIONAL mapping if
			 * the memory was modified.
			 */
			give_rx_buf_to_card(cp);
		}
dynamic DMA mapping scheme - you always have to store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc() and dma_map_single()
calls in your driver structures and/or in the card registers.
- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0
- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()
- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple-page mapping attempt (see the sketch below). This
  applies to dma_map_page() as well.
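One way to structure that unwinding, as a sketch (addr1/size1, addr2/size2
and the map_error_handling labels are placeholders)::

	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr1, size1, direction);
	if (dma_mapping_error(dev, dma_handle1))
		goto map_error_handling1;

	dma_handle2 = dma_map_single(dev, addr2, size2, direction);
	if (dma_mapping_error(dev, dma_handle2))
		goto map_error_handling2;

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size1, direction);
	map_error_handling1:
		/* reduce DMA usage, retry later or reset the driver */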
Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

And when reading the values back for the unmap, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);
It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
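The state these macros manage lives in the driver's own structures; a
sketch of the corresponding definition, reusing the hypothetical ring
state from the examples above::

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};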
DMA-safe. Drivers and subsystems depend on it. If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory), ARCH_DMA_MINALIGN
must be set so that the memory allocator makes sure that kmalloc'ed
buffers don't share a cache line with others.
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
David Mosberger-Tang <davidm@hpl.hp.com>