
12 - "graceful fallback": mm components which don't have transparent hugepage
17 - if a hugepage allocation fails because of memory fragmentation,
22 - if some task quits and more hugepages become available (either
27 - it doesn't require memory reservation and in turn it uses hugepages

Most GUP users will only care about the actual physical address of the
page and its temporary pinning, released after the I/O is complete, so
they won't ever notice the fact that the page is huge. But if any driver
is going to mangle the page structure of a tail page (for example by
checking page->mapping or other bits that are relevant for the head page
and not the tail page), it should be updated to check the head page
instead. Taking a reference on any head/tail page would prevent the page
from being split by anyone.
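
A rough illustration of that rule (a minimal sketch, not taken from the
kernel; the helper name is made up) is a driver that jumps to the head
page before inspecting fields that are only meaningful there::

    /* Hypothetical driver helper: never trust tail-page fields directly. */
    static bool my_drv_page_has_mapping(struct page *page)
    {
            struct page *head = compound_head(page); /* no-op for small pages */

            /* ->mapping is only meaningful on the head page of a compound page */
            return head->mapping != NULL;
    }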

If you're not walking pagetables but you run into a physical hugepage that
you can't handle natively in your code, you can split it by calling
split_huge_page(page). This is what the Linux VM does before it tries to
swap out the hugepage, for example. split_huge_page() can fail if the page
is pinned and you must handle this correctly.
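
A minimal sketch of that failure handling (illustrative only; the function
name and error code are placeholders, and split_huge_page() expects the
page to be locked)::

    static int my_make_page_small(struct page *page)
    {
            if (!PageTransHuge(page))
                    return 0;               /* already a regular page */

            lock_page(page);
            if (split_huge_page(page)) {
                    /* split failed, e.g. because the page is pinned */
                    unlock_page(page);
                    return -EBUSY;          /* caller must cope with a huge page */
            }
            unlock_page(page);
            return 0;                       /* now a regular page */
    }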

For example, mremap.c can be made transparent hugepage aware with a
one-liner change::

    diff --git a/mm/mremap.c b/mm/mremap.c
    --- a/mm/mremap.c
    +++ b/mm/mremap.c
    @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
     	pmd = pmd_offset(pud, addr);
    +	split_huge_pmd(vma, pmd, addr);

If pmd_trans_huge() returns true, you have to take the page table lock
(pmd_lock()) and re-run pmd_trans_huge(). Taking the page table lock will
prevent the huge pmd from being converted into a regular pmd from under
you (split_huge_pmd() can run in parallel to the pagetable walk). If the
second pmd_trans_huge() returns false, you should just drop the page table
lock and fall back to the old code as before. Otherwise, you can proceed
to process the huge pmd and the hugepage natively. Once finished, you can
drop the page table lock.
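
The pattern looks roughly like this (a minimal sketch under the
assumptions above; my_process_huge_pmd() and my_process_ptes() are made-up
placeholders, and the mmap_lock is assumed to be held)::

    static void my_walk_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                            unsigned long addr)
    {
            if (pmd_trans_huge(*pmd)) {
                    spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

                    /* re-check: split_huge_pmd() may have run in parallel */
                    if (pmd_trans_huge(*pmd)) {
                            my_process_huge_pmd(vma, pmd, addr);
                            spin_unlock(ptl);
                            return;
                    }
                    spin_unlock(ptl);       /* not huge anymore: fall back */
            }
            my_process_ptes(vma, pmd, addr);        /* regular pte path */
    }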

- get_page()/put_page() and GUP operate on the folio->_refcount.

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of a PMD entry for the whole THP increment/decrement
  folio->_entire_mapcount and folio->_large_mapcount.

  With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
  folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes
  from -1 to 0 or 0 to -1.

- map/unmap of individual pages with PTE entry increment/decrement
  folio->_large_mapcount.

  With CONFIG_PAGE_MAPCOUNT, we also increment/decrement page->_mapcount
  and increment/decrement folio->_nr_pages_mapped when page->_mapcount
  goes from -1 to 0 or 0 to -1, as this counts the number of pages mapped
  by PTE.
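
As a concrete illustration of the rules above (illustrative only, assuming
a 2 MiB anon THP made of 512 base pages and CONFIG_PAGE_MAPCOUNT)::

    /*
     * Mapping the whole THP with a single PMD entry:
     *   folio->_entire_mapcount  += 1
     *   folio->_large_mapcount   += 1
     *   folio->_nr_pages_mapped  += ENTIRELY_MAPPED
     *                               (because _entire_mapcount went -1 -> 0)
     *
     * Mapping one of its pages with a PTE entry:
     *   folio->_large_mapcount   += 1
     *   that page's _mapcount    += 1
     *   folio->_nr_pages_mapped  += 1
     *                               (because that _mapcount went -1 -> 0)
     */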

split_huge_page() internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages()). split_huge_page() fails any
requests to split pinned huge pages: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the
split_huge_page() caller must have a reference to the head page).
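
For example (an illustrative calculation, assuming 4 KiB base pages): a
2 MiB anon THP that is PTE-mapped once in a single process has 512
sub-pages with mapcount 1 each, so split_huge_page() expects a page count
of 512 + 1 = 513; any additional pin, for instance from get_user_pages(),
raises the count above that and the split fails.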

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value. We already
know how many references should be uncharged from the head page.

For the head page, get_page_unless_zero() will succeed and we don't mind.
It's clear where the reference should go after the split: it stays on the
head page.
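
A sketch of such a scanner (illustrative only; the function name is made
up), relying on get_page_unless_zero() failing on tail pages::

    static struct page *my_scan_pfn(unsigned long pfn)
    {
            struct page *page;

            if (!pfn_valid(pfn))
                    return NULL;

            page = pfn_to_page(pfn);

            /*
             * Tail pages of a THP have ->_refcount == 0, so this simply
             * fails for them and the scanner skips the pfn.
             */
            if (!get_page_unless_zero(page))
                    return NULL;

            /* We now hold a reference; drop it with put_page() when done. */
            return page;
    }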

Splitting the page right away is not an option due to locking context in
the place where we can detect partial unmap.

With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on
folio->_nr_pages_mapped.

With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the
average per-page mapcount in a THP: if the average is < 1, an anon THP is
certainly partially mapped. As long as only a single process maps a THP,
this detection is reliable. With long-running child processes, there can
be scenarios where partial mappings of a THP are not detected.
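
An illustrative example of why the average can mislead (numbers made up
for the sketch): a 512-page anon THP with only 256 of its pages still
mapped in a single process has an average per-page mapcount of
256/512 = 0.5 < 1, so the partial mapping is detected. If a parent and a
long-running child both map the THP entirely (1024 page mappings, average
2.0) and the parent then unmaps half of it, the average drops to
768/512 = 1.5, which is still >= 1, so the partial mapping goes
undetected.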