This document describes design principles for Transparent Hugepage (THP)
support and its interaction with other parts of the memory management
system. The design follows these principles:
12 - "graceful fallback": mm components which don't have transparent hugepage
17 - if a hugepage allocation fails because of memory fragmentation,
22 - if some task quits and more hugepages become available (either
23 immediately in the buddy or through the VM), guest physical memory
27 - it doesn't require memory reservation and in turn it uses hugepages
29 to avoid unmovable pages to fragment all the memory but such a tweak

get_user_pages and pin_user_pages, if run on a hugepage, return head or tail
pages as usual. Most GUP users only care about the actual physical address of
the page and its temporary pinning, so they never notice that the page is
huge. But if a driver is going to mangle over the page structure of a tail
page (like for checking page->mapping or other bits that are relevant for the
head page and not the tail page), it should be updated to check the head page
instead.
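
For illustration only, here is a sketch of that kind of fix; my_driver_sees_page()
is a made-up driver hook, and page_folio()/folio_mapping() are used to resolve
and inspect the head page::

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Hypothetical driver hook, shown only to illustrate "check the head page". */
    static bool my_driver_sees_page(struct page *page)
    {
        /* page_folio() always resolves to the head page of a THP. */
        struct folio *folio = page_folio(page);

        /*
         * ->mapping is only meaningful on the head page; tail pages reuse
         * those struct page fields for other purposes, so inspect the
         * folio instead of the tail page.
         */
        return folio_mapping(folio) != NULL;
    }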

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) on the pmd returned by pmd_offset(). Example to
make mremap.c transparent hugepage aware with a one-liner change::

    diff --git a/mm/mremap.c b/mm/mremap.c
    --- a/mm/mremap.c
    +++ b/mm/mremap.c
    @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
             pmd = pmd_offset(pud, addr);
    +        split_huge_pmd(vma, pmd, addr);

To make a pagetable walk huge pmd aware, call pmd_trans_huge() on the pmd
returned by pmd_offset() while holding the mmap_lock, so that khugepaged
cannot create a huge pmd from under you. If pmd_trans_huge() returns false,
just fall back to the old code paths. If it returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge(). Taking the page
table lock prevents the huge pmd from being converted into a regular pmd from
under you (split_huge_pmd() can run in parallel to the pagetable walk). If the
second pmd_trans_huge() returns true, you can process the huge pmd and the
hugepage natively; once finished, drop the page table lock.
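
The protocol above can be sketched as follows (illustrative only, not kernel
code; walk_one_pmd(), handle_huge_pmd() and handle_pte_range() are made-up
names, and the caller is assumed to already hold the mmap_lock in read mode)::

    #include <linux/mm.h>
    #include <linux/spinlock.h>

    static void handle_huge_pmd(pmd_t *pmd) { /* hypothetical native huge-pmd path */ }
    static void handle_pte_range(pmd_t *pmd) { /* hypothetical regular pte path */ }

    static void walk_one_pmd(struct mm_struct *mm, pmd_t *pmd)
    {
        spinlock_t *ptl;

        if (pmd_trans_huge(*pmd)) {
            /* Serialize against a concurrent split_huge_pmd(). */
            ptl = pmd_lock(mm, pmd);
            if (pmd_trans_huge(*pmd)) {
                /* Still huge: operate on the huge pmd natively. */
                handle_huge_pmd(pmd);
                spin_unlock(ptl);
                return;
            }
            /* Lost the race: the pmd was split, fall back to ptes. */
            spin_unlock(ptl);
        }
        handle_pte_range(pmd);
    }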

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

- get_page()/put_page() and GUP operate on the folio->_refcount.

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of a PMD entry for the whole THP increment/decrement
  folio->_entire_mapcount and folio->_large_mapcount.

  We also maintain the two slots for tracking MM owners (MM ID and
  corresponding mapcount), and the current status ("maybe mapped shared" vs.
  "mapped exclusively").

  With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
  folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes
  from -1 to 0 or 0 to -1.

- map/unmap of individual pages with PTE entry increment/decrement
  folio->_large_mapcount.

  We also maintain the two slots for tracking MM owners (MM ID and
  corresponding mapcount), and the current status ("maybe mapped shared" vs.
  "mapped exclusively").

  With CONFIG_PAGE_MAPCOUNT, we also increment/decrement page->_mapcount
  and increment/decrement folio->_nr_pages_mapped when page->_mapcount
  goes from -1 to 0 or 0 to -1, as this counts the number of pages mapped
  by PTE.
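
The counting rules above can be illustrated with a small stand-alone model
(written for this document, not kernel code: plain ints instead of atomics,
the MM-owner slots and shared/exclusive tracking are omitted, and
ENTIRELY_MAPPED is used here as a placeholder for the value defined in
mm/internal.h)::

    #include <stdio.h>

    #define ENTIRELY_MAPPED (1 << 23)   /* placeholder; see mm/internal.h */

    struct folio_model {
        int _entire_mapcount;   /* -1 while the THP has no PMD mapping */
        int _large_mapcount;    /* -1 while the THP has no mapping at all */
        int _nr_pages_mapped;   /* CONFIG_PAGE_MAPCOUNT bookkeeping */
    };

    /* Map the whole THP with one PMD entry. */
    static void model_map_pmd(struct folio_model *f)
    {
        f->_large_mapcount++;
        if (++f->_entire_mapcount == 0)         /* went from -1 to 0 */
            f->_nr_pages_mapped += ENTIRELY_MAPPED;
    }

    /* Map one page of the THP with a PTE entry; *page_mapcount models that
     * page's page->_mapcount under CONFIG_PAGE_MAPCOUNT. */
    static void model_map_pte(struct folio_model *f, int *page_mapcount)
    {
        f->_large_mapcount++;
        if (++(*page_mapcount) == 0)            /* went from -1 to 0 */
            f->_nr_pages_mapped++;
    }

    int main(void)
    {
        struct folio_model f = { -1, -1, 0 };
        int page0_mapcount = -1;

        model_map_pmd(&f);                      /* e.g. the initial anon fault */
        model_map_pte(&f, &page0_mapcount);     /* e.g. one page PTE-mapped elsewhere */
        printf("entire=%d large=%d nr_pages_mapped=%#x\n",
               f._entire_mapcount, f._large_mapcount, f._nr_pages_mapped);
        return 0;
    }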

split_huge_page() fails any request to split a pinned huge page: it expects
the page count to be equal to the sum of the mapcount of all sub-pages plus
one (the split_huge_page() caller must have a reference to the head page).

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.
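
A simplified sketch of that expectation, assuming an anonymous folio that is
not in the swap cache and a caller holding exactly one reference (the real
check lives in mm/huge_memory.c and also accounts for page cache and swap
cache references; thp_has_extra_pins() is a made-up name)::

    #include <linux/mm.h>

    static bool thp_has_extra_pins(struct folio *folio)
    {
        /*
         * One reference from the split caller plus one per tracked mapping
         * is what the split can redistribute; anything on top of that is an
         * extra pin (e.g. from GUP), and the split request will fail.
         */
        return folio_ref_count(folio) != folio_mapcount(folio) + 1;
    }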

We are safe against physical memory scanners too: the only legitimate way a
scanner can get a reference to a page is get_page_unless_zero().

All tail pages have a zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to a tail page up to that point. After the
atomic_add(), we don't care about the ->_refcount value: we already know how
many references should be uncharged from the head page.
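
The pattern such a scanner must follow looks roughly like this (a sketch, not
taken from any particular scanner; scan_one_pfn() is a made-up name)::

    #include <linux/mm.h>

    static void scan_one_pfn(unsigned long pfn)
    {
        struct page *page;

        if (!pfn_valid(pfn))
            return;
        page = pfn_to_page(pfn);

        /*
         * get_page_unless_zero() is the only legitimate way to take a
         * reference here; it fails on pages with a zero ->_refcount,
         * which includes THP tail pages.
         */
        if (!get_page_unless_zero(page))
            return;

        /* ... inspect the page while the reference is held ... */

        put_page(page);
    }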

Unmapping part of a THP (with munmap() or another way) is not going to free
memory immediately. Instead, we detect that a subpage of the THP is not in use
in folio_remove_rmap_*() and queue the THP for splitting if memory pressure
comes. Splitting will free up the unused subpages.

The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via the shrinker
interface.

With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on
folio->_nr_pages_mapped.

With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the average
per-page mapcount in a THP: if the average is < 1, an anon THP is certainly
partially mapped. As long as only a single process maps a THP, this detection
is reliable. With long-running child processes, there can be scenarios where
partial mappings currently cannot be detected, and we might need asynchronous
detection during memory reclaim in the future.
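
That heuristic can be sketched as follows (illustrative only; the kernel's
actual check is part of the rmap code, and anon_thp_certainly_partially_mapped()
is a made-up name)::

    #include <linux/mm.h>

    static bool anon_thp_certainly_partially_mapped(struct folio *folio)
    {
        /*
         * Fewer tracked mappings than pages means the average per-page
         * mapcount is below 1, so at least one page must be unmapped.
         * The converse does not hold: a partially mapped THP can still
         * have folio_mapcount() >= folio_nr_pages().
         */
        return folio_mapcount(folio) < folio_nr_pages(folio);
    }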