Lines Matching defs:to
18 * @page: Pointer to the page to be mapped
28 * The returned virtual address is globally visible and valid up to the
29 * point where it is unmapped via kunmap(). The pointer can be handed to
41 * @page: Pointer to the page which was mapped by kmap()
43 * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
50 * @addr: The address to look up
52 * Returns: The page which is mapped to @addr.
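The matches above (lines 18–52) come from the kernel-doc for kmap(), kunmap() and kmap_to_page(): a persistent mapping that stays valid and globally visible until kunmap(). A minimal userspace sketch of that calling pattern — `struct page`, `kmap()`, `kunmap()` and `kmap_to_page()` are mock stand-ins here, not the kernel implementations:

```c
#include <assert.h>

/* Mock stand-ins: in the kernel these come from <linux/highmem.h>. */
struct page { char data[4096]; };        /* mock page backing store */

static void *kmap(struct page *page)     /* mock: identity mapping */
{
        return page->data;
}

static void kunmap(struct page *page)    /* mock: nothing to tear down */
{
        (void)page;
}

static struct page *kmap_to_page(void *addr)   /* mock reverse lookup */
{
        /* The real helper resolves a kmap virtual address back to its
         * page; in this mock every mapping is the page's own storage,
         * and data is the first member, so the cast is exact. */
        return (struct page *)addr;
}

static struct page demo_page;

/* Usage pattern: map, use the address (valid until kunmap()), unmap. */
static char first_byte_via_kmap(void)
{
        char *vaddr = kmap(&demo_page);
        char c = vaddr[0];

        kunmap(&demo_page);
        return c;
}
```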
57 * kmap_flush_unused - Flush all unused kmap mappings in order to
64 * @page: Pointer to the page to be mapped
71 * management is stack based. The unmap has to be in the reverse order of
82 * Contrary to kmap() mappings the mapping is only valid in the context of
83 * the caller and cannot be handed to other contexts.
93 * disabling migration in order to keep the virtual address stable across
104 * management is stack based. The unmap has to be in the reverse order of
115 * Contrary to kmap() mappings the mapping is only valid in the context of
116 * the caller and cannot be handed to other contexts.
126 * disabling migration in order to keep the virtual address stable across
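The kmap_local_page()/kmap_local_folio() matches (lines 64–126) stress that mapping management is stack based and that unmaps must happen in the reverse order of the maps. A userspace sketch of that discipline — the explicit stack below is a mock of the kernel's per-task mapping state, and the direct addresses are stand-ins:

```c
#include <assert.h>
#include <stddef.h>

struct page { char data[64]; };          /* mock page */

#define KMAP_DEPTH 16
static void *kmap_stack[KMAP_DEPTH];     /* mock per-task kmap stack */
static int kmap_top;

static void *kmap_local_page(struct page *page)
{
        void *vaddr = page->data;        /* mock: direct address */

        assert(kmap_top < KMAP_DEPTH);
        kmap_stack[kmap_top++] = vaddr;  /* push */
        return vaddr;
}

static void kunmap_local(void *vaddr)
{
        assert(kmap_top > 0);
        /* Enforce the reverse-order rule from the kernel-doc. */
        assert(kmap_stack[kmap_top - 1] == vaddr);
        kmap_top--;                      /* pop */
}

/* Correctly nested use of two local mappings. */
static int nested_ok(struct page *a, struct page *b)
{
        char *va = kmap_local_page(a);
        char *vb = kmap_local_page(b);
        int sum = va[0] + vb[0];

        kunmap_local(vb);                /* last mapped, first unmapped */
        kunmap_local(va);
        return sum;
}
```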
136 * @page: Pointer to the page to be mapped
148 * It is used in atomic context when code wants to access the contents of a
151 * can be used in a manner similar to the following::
156 * // Gain access to the contents of that page.
159 * // Do something to the contents of that page.
168 * If you need to map two pages because you want to copy from one page to
169 * another you need to keep the kmap_atomic calls strictly nested, like:
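Lines 136–169 are from the kmap_atomic() kernel-doc, including its rule that a two-page copy must keep the kmap_atomic() calls strictly nested. A sketch of exactly that pattern, with the atomic map/unmap calls mocked (the real ones also disable pagefaults, and preemption on non-PREEMPT_RT kernels):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096
struct page { char data[PAGE_SIZE]; };   /* mock page */

/* Mock stand-ins for the atomic kmap API. */
static void *kmap_atomic(struct page *page) { return page->data; }
static void  kunmap_atomic(void *vaddr)     { (void)vaddr; }

/* The strictly nested two-page pattern described in the kernel-doc:
 * the unmaps happen in the reverse order of the maps. */
static void copy_page_nested(struct page *dst, struct page *src)
{
        char *vfrom = kmap_atomic(src);
        char *vto = kmap_atomic(dst);

        memcpy(vto, vfrom, PAGE_SIZE);

        kunmap_atomic(vto);              /* inner mapping first */
        kunmap_atomic(vfrom);
}
```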
213 * @vma: The VMA the page is to be allocated for.
261 * If we pass in a base or tail page, we can zero up to PAGE_SIZE.
262 * If we pass in a head page, we can zero up to the size of the compound page.
303 static inline void copy_user_highpage(struct page *to, struct page *from,
309 vto = kmap_local_page(to);
310 copy_user_page(vto, vfrom, vaddr, to);
311 kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
320 static inline void copy_highpage(struct page *to, struct page *from)
325 vto = kmap_local_page(to);
327 kmsan_copy_page_meta(to, from);
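The fragments at lines 303–327 show the shape of copy_highpage(): a pair of local mappings, the copy, a KMSAN metadata hook, then reverse-order unmaps. A hedged reconstruction with mocked mappings — the real code uses copy_page() rather than memcpy(), and the KMSAN hook (a no-op here) copies shadow/origin metadata:

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096
struct page { char data[PAGE_SIZE]; };   /* mock page */

/* Mock stand-ins: direct mappings and a no-op KMSAN hook. */
static void *kmap_local_page(struct page *page) { return page->data; }
static void  kunmap_local(void *vaddr)          { (void)vaddr; }
static void  kmsan_copy_page_meta(struct page *to, struct page *from)
{
        (void)to;
        (void)from;                      /* real hook copies KMSAN metadata */
}

/* Shape suggested by the matched fragments: map both pages locally,
 * copy, propagate metadata, unmap in reverse order. */
static void copy_highpage(struct page *to, struct page *from)
{
        char *vfrom = kmap_local_page(from);
        char *vto = kmap_local_page(to);

        memcpy(vto, vfrom, PAGE_SIZE);   /* stand-in for copy_page() */
        kmsan_copy_page_meta(to, from);

        kunmap_local(vto);
        kunmap_local(vfrom);
}
```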
341 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
348 vto = kmap_local_page(to);
351 kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
358 static inline int copy_mc_highpage(struct page *to, struct page *from)
364 vto = kmap_local_page(to);
367 kmsan_copy_page_meta(to, from);
374 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
377 copy_user_highpage(to, from, vaddr, vma);
381 static inline int copy_mc_highpage(struct page *to, struct page *from)
383 copy_highpage(to, from);
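Note that copy_mc_user_highpage() and copy_mc_highpage() each match twice (lines 341–367 and 374–383): the second pair is the fallback branch for kernels without machine-check-aware copying, where the helpers degrade to the plain copies and always report success. A sketch of that fallback shape, with copy_highpage() mocked as a plain memcpy():

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096
struct page { char data[PAGE_SIZE]; };   /* mock page */

static void copy_highpage(struct page *to, struct page *from)
{
        memcpy(to->data, from->data, PAGE_SIZE);   /* mock of the helper */
}

/* Fallback shape suggested by the second pair of definitions: without
 * machine-check support, the _mc variant is a plain copy that always
 * returns 0; with support, a nonzero return reports poisoned data. */
static int copy_mc_highpage(struct page *to, struct page *from)
{
        copy_highpage(to, from);
        return 0;
}
```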
411 static inline void memcpy_from_page(char *to, struct page *page,
417 memcpy(to, from + offset, len);
424 char *to = kmap_local_page(page);
427 memcpy(to + offset, from, len);
429 kunmap_local(to);
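Lines 411–429 show the bodies of memcpy_from_page() and memcpy_to_page(): map the page locally, copy at the given offset, unmap. The same shape in a compilable userspace sketch (mappings mocked; the real memcpy_to_page() additionally calls flush_dcache_page()):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
struct page { char data[PAGE_SIZE]; };   /* mock page */

static void *kmap_local_page(struct page *page) { return page->data; }
static void  kunmap_local(void *vaddr)          { (void)vaddr; }

/* Map, copy out of the page at @offset, unmap. */
static void memcpy_from_page(char *to, struct page *page,
                             size_t offset, size_t len)
{
        char *from = kmap_local_page(page);

        memcpy(to, from + offset, len);
        kunmap_local(from);
}

/* Map, copy into the page at @offset, unmap. */
static void memcpy_to_page(struct page *page, size_t offset,
                           const char *from, size_t len)
{
        char *to = kmap_local_page(page);

        memcpy(to + offset, from, len);
        kunmap_local(to);
}
```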
442 static inline void memcpy_from_folio(char *to, struct folio *folio,
454 memcpy(to, from, chunk);
457 to += chunk;
469 char *to = kmap_local_folio(folio, offset);
475 memcpy(to, from, chunk);
476 kunmap_local(to);
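The memcpy_from_folio()/memcpy_to_folio() fragments (lines 442–476) show a chunked loop: each iteration maps the single page holding the current offset, copies at most up to that page boundary, and advances. A sketch of that loop over a mock folio whose pages are only contiguous individually (the real helper only needs to chunk for highmem folios; this sketch always chunks):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 256                    /* small mock page for the sketch */
#define FOLIO_PAGES 4

/* Mock folio: contiguous page-by-page, so copies must be chunked. */
struct folio { char pages[FOLIO_PAGES][PAGE_SIZE]; };

static char *kmap_local_folio(struct folio *folio, size_t offset)
{
        /* Like the real helper, maps the page containing @offset. */
        return &folio->pages[offset / PAGE_SIZE][offset % PAGE_SIZE];
}

static void kunmap_local(void *vaddr) { (void)vaddr; }

/* Chunked copy suggested by the matched fragments. */
static void memcpy_from_folio(char *to, struct folio *folio,
                              size_t offset, size_t len)
{
        while (len > 0) {
                char *from = kmap_local_folio(folio, offset);
                size_t chunk = PAGE_SIZE - offset % PAGE_SIZE;

                if (chunk > len)
                        chunk = len;
                memcpy(to, from, chunk);
                kunmap_local(from);

                to += chunk;
                offset += chunk;
                len -= chunk;
        }
}
```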
488 * @folio: The folio to zero.
489 * @offset: The byte offset in the folio to start zeroing at.
490 * @kaddr: The address the folio is currently mapped to.
492 * If you have already used kmap_local_folio() to map a folio, written
493 * some data to it and now need to zero the end of the folio (and flush
498 * Return: An address which can be passed to kunmap_local().
525 * folio_fill_tail - Copy some data to a folio and pad with zeroes.
527 * @offset: The offset into @folio at which to start copying.
528 * @from: The data to copy.
529 * @len: How many bytes of data to copy.
532 * When they want to copy data from the inode into the page cache, this
539 char *to = kmap_local_folio(folio, offset);
547 memcpy(to, from, max);
548 kunmap_local(to);
553 to = kmap_local_folio(folio, offset);
557 memcpy(to, from, len);
558 to = folio_zero_tail(folio, offset + len, to + len);
559 kunmap_local(to);
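Lines 525–559 cover folio_fill_tail(): copy some data into the folio, then pad the rest with zeroes (via folio_zero_tail() in the real code). A flattened sketch of those semantics over a mock folio that is contiguous in one mapping, so the zeroing is a plain memset rather than the real page-by-page folio_zero_tail() walk:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define FOLIO_SIZE 1024                  /* mock folio size */
struct folio { char data[FOLIO_SIZE]; };

static char *kmap_local_folio(struct folio *folio, size_t offset)
{
        return folio->data + offset;     /* mock: folio is contiguous here */
}

static void kunmap_local(void *vaddr) { (void)vaddr; }

/* Copy @len bytes at @offset, then zero from there to the end of the
 * folio -- the "pad with zeroes" in the kernel-doc summary. */
static void folio_fill_tail(struct folio *folio, size_t offset,
                            const char *from, size_t len)
{
        char *to = kmap_local_folio(folio, offset);

        memcpy(to, from, len);
        memset(to + len, 0, FOLIO_SIZE - offset - len);  /* zero the tail */
        kunmap_local(to);
}
```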
564 * @to: The destination buffer.
565 * @folio: The folio to copy from.
567 * @len: The maximum number of bytes to copy.
569 * Copy up to @len bytes from this folio. This may be limited by PAGE_SIZE
574 static inline size_t memcpy_from_file_folio(char *to, struct folio *folio,
586 memcpy(to, from, len);
594 * @folio: The folio to write to.
595 * @start1: The first byte to zero.
597 * @start2: The first byte to zero in the second range.
608 * @folio: The folio to write to.
609 * @start: The first byte to zero.
610 * @xend: One more than the last byte to zero.
620 * @folio: The folio to write to.
621 * @start: The first byte to zero.
622 * @length: The number of bytes to zero.
632 * @folio: The folio to release.
633 * @addr: The address previously returned by a call to kmap_local_folio().
635 * It is common, eg in directory handling to kmap a folio. This function
636 * unmaps the folio and drops the refcount that was being held to keep the
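The final matches (lines 632–636) describe folio_release_kmap(): one call that both unmaps the folio and drops the reference that kept it alive. A sketch of that pairing with a mocked refcount — `refcount` here is a plain int stand-in for the real folio reference count dropped by folio_put():

```c
#include <assert.h>

struct folio { int refcount; char data[64]; };   /* mock folio */

static void kunmap_local(void *vaddr) { (void)vaddr; }   /* mock unmap */

static void folio_put(struct folio *folio)       /* mock refcount drop */
{
        folio->refcount--;
}

/* Combined teardown suggested by the kernel-doc: unmap the kmap and
 * drop the folio reference in one call. */
static void folio_release_kmap(struct folio *folio, void *addr)
{
        kunmap_local(addr);
        folio_put(folio);
}
```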