Searched refs:alloc_align_mask (Results 1 – 2 of 2) sorted by relevance
1033   unsigned int alloc_align_mask)  in swiotlb_search_pool_area() argument
1059   if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)  in swiotlb_search_pool_area()
1060       alloc_align_mask = PAGE_SIZE - 1;  in swiotlb_search_pool_area()
1067   alloc_align_mask |= (IO_TLB_SIZE - 1);  in swiotlb_search_pool_area()
1068   iotlb_align_mask &= ~alloc_align_mask;  in swiotlb_search_pool_area()
1074   stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));  in swiotlb_search_pool_area()
1089   if ((tlb_addr & alloc_align_mask) ||  in swiotlb_search_pool_area()
1158   unsigned int alloc_align_mask, struct io_tlb_pool **retpool)  in swiotlb_search_area() argument
1172   alloc_align_mask);  in swiotlb_search_area()
1197   size_t alloc_size, unsigned int alloc_align_mask,  in swiotlb_find_slots() argument
[all …]
127  swiotlb_tbl_map_single() also takes an "alloc_align_mask" parameter. This
129  physical address with the alloc_align_mask bits set to zero. But the actual
133  alloc_align_mask boundary, potentially resulting in post-padding space. Any
135  "alloc_align_mask" parameter is used by IOMMU code when mapping for untrusted
183  The default pool is allocated with PAGE_SIZE alignment. If an alloc_align_mask
185  initial slots in each slot set might not meet the alloc_align_mask criterion.
188  Currently, there's no problem because alloc_align_mask is set based on IOMMU
297  meet alloc_align_mask requirements described above. When
298  swiotlb_tbl_map_single() allocates bounce buffer space to meet alloc_align_mask
301  alloc_align_mask value that governed the allocation, and therefore the