Currently THP works for anonymous memory mappings and tmpfs/shmem, but in the future it can expand to other filesystems.

Applications run faster for two reasons. The first factor is taking a single page fault for each 2M virtual region touched by userland, at the cost of requiring larger clear-page and copy-page operations in page faults, which is a potentially negative side effect. The second, longer-lasting and much more important factor will affect all subsequent accesses to the memory for the whole runtime of the application: the TLB miss runs faster and a single TLB entry maps a much larger amount of virtual memory. With virtualization and nested pagetables the full benefit requires both the hypervisor and the guest to use hugepages, but a significant speedup already happens if only one of the two is using hugepages, simply because the TLB miss is going to run faster.

Modern kernels support "multi-size THP" (mTHP), which introduces the
ability to allocate memory in blocks that are bigger than a base page
but smaller than the traditional PMD-size (as described above), in
increments of a power-of-2 number of pages. mTHP can back anonymous
memory (for example 16K, 32K, 64K, etc.). These THPs continue to be
PTE-mapped, but in many cases can still provide similar benefits to
those outlined above: page faults are significantly reduced (by a
factor of e.g. 4, 8, 16, etc.), but latency spikes are much less
prominent because the size of each page isn't as huge as the PMD-sized
variant and there is less memory to clear in each page fault. Some
architectures also employ TLB compression mechanisms to squeeze more
entries in when a set of PTEs is virtually and physically contiguous
and suitably aligned, so that TLB misses occur less often.

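The mTHP sizes available on a given system can be discovered from sysfs; each supported size has its own directory, and which sizes exist varies by architecture and kernel configuration::

    # list the per-size THP control directories present on this system
    ls -d /sys/kernel/mm/transparent_hugepage/hugepages-*kB
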
THP can be enabled system wide or restricted to certain tasks or even
memory ranges inside a task's address space. Unless THP is completely
disabled, there is a ``khugepaged`` daemon that scans memory and
collapses sequences of basic pages into PMD-sized huge pages.

Transparent Hugepage Support maximizes the usefulness of free memory
when compared to the reservation approach of hugetlbfs, by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows paging
and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications can however be further optimized to take advantage of
this feature, as for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k).

This is why it's possible to disable hugepages system-wide and to only have them inside MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions, to eliminate any risk of wasting any precious byte of memory and to only run faster.

Applications that get a lot of benefit from hugepages, and that don't risk losing memory by using them, should use madvise(MADV_HUGEPAGE) on their critical mmapped regions.

Global THP controls
-------------------

Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes), or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources), or enabled
system wide. This can be achieved per supported THP size with one of::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

where <size> is the hugepage size being addressed, the available sizes for which vary by system. For example::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

Alternatively it is possible to specify that a given hugepage size
will inherit the top-level "enabled" value::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

For example::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

The top-level setting (for use with "inherit") can be set by issuing
one of the following commands::

    echo always >/sys/kernel/mm/transparent_hugepage/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/enabled

By default, PMD-sized hugepages have enabled="inherit" and all other hugepage sizes have enabled="never".

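Reading any of these ``enabled`` files back shows all accepted values, with the current selection in brackets; for example (output illustrative)::

    cat /sys/kernel/mm/transparent_hugepage/enabled
    always [madvise] never
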
It's also possible to limit the VM's defrag efforts to generate
anonymous hugepages (used when they are not immediately free) to madvise
regions only, or to never try to defrag memory and simply fall back to
regular pages unless hugepages are immediately available. Clearly, if we
spend CPU time to defrag memory, we would expect to gain even more by the
fact that we use hugepages later instead of regular pages.

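The defrag policy is selected through a companion sysfs file, with one of the following commands; the meaning of each value is described below::

    echo always >/sys/kernel/mm/transparent_hugepage/defrag
    echo defer >/sys/kernel/mm/transparent_hugepage/defrag
    echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
    echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
    echo never >/sys/kernel/mm/transparent_hugepage/defrag
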
always
    means that an application requesting THP will stall on allocation
    failure and directly reclaim pages and compact memory in an effort to
    allocate a THP immediately. This may be desirable for virtual
    machines that benefit heavily from THP use and are willing to delay
    the VM start to utilise them.

defer
    means that an application will wake kswapd in the background to
    reclaim pages and wake kcompactd to compact memory so that THP is
    available in the near future. It's the responsibility of khugepaged
    to then install the THP pages later.

defer+madvise
    will enter direct reclaim and compaction like ``always``, but only
    for regions that have used madvise(MADV_HUGEPAGE); all other regions
    will wake kswapd in the background to reclaim pages and wake
    kcompactd to compact memory so that THP is available in the near
    future.

madvise
    will enter direct reclaim like ``always``, but only for regions that
    have used madvise(MADV_HUGEPAGE). This is the default behaviour.

never
    should be self-explanatory.

By default the kernel tries to use a huge, PMD-mappable zero page on read
page faults to anonymous mappings. It's possible to disable the huge zero
page by writing 0, or enable it back by writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
    echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

Some userspace (such as a test program, or an optimized memory
allocation library) may want to know the size (in bytes) of a
PMD-mappable transparent hugepage::

    cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

khugepaged will be automatically started when one or more hugepage sizes
are enabled (either by directly setting "always" or "madvise",
or by setting "inherit" while the top-level enabled is set to "always"
or "madvise"), and it'll be automatically shut down when the last
hugepage size is disabled (either by directly setting "never", or by
setting "inherit" while the top-level enabled is set to "never").

Khugepaged controls
-------------------

khugepaged currently only searches for opportunities to collapse to
PMD-sized THP and no attempt is made to collapse to other THP sizes.

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should be
worth invoking defrag at least in khugepaged. However it's also possible
to disable defrag in khugepaged by writing 0, or enable it by writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
    echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass::

    /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core)::

    /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt::

    /sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

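For example, a minimal sketch of tuning khugepaged to scan more aggressively; the values are illustrative, not recommendations::

    # scan more pages per pass
    echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
    # do not sleep between passes (uses up to 100% of one core)
    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
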
The khugepaged progress can be seen in the number of pages collapsed.
Note that "collapsed" can mean several things: a PTE mapping being
replaced by a PMD mapping, or all 4K physical pages being replaced by
one 2M hugepage. Each may happen independently, or together, depending on
the type of memory and the failures that occur; this value should
therefore be read as a rough sign of progress::

    /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

``max_ptes_none`` specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value uses additional memory for programs; a lower value yields
less THP performance gain. The value of ``max_ptes_none`` has a
negligible effect on CPU time, so it can be ignored in that respect.

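As a concrete example, assuming a 4K base page and 2M PMD-sized THP (i.e. 512 PTEs per PMD): the default ``max_ptes_none`` of 511 lets khugepaged collapse a region even if only one of the 512 small pages is actually mapped, while 0 requires all of them to be present::

    # default is 511: collapse even if 511 of 512 PTEs are unmapped
    cat /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
    # require every small page to be present before collapsing
    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
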
``max_ptes_swap`` specifies how many pages can be brought in from swap
when collapsing a group of pages into a transparent huge page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. khugepaged might treat pages of a THP as shared if any page of
that THP is shared; exceeding this number blocks the collapse::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared

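The current values of all three limits can be inspected at once; ``grep .`` prints each file name together with its contents::

    grep . /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_*
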
You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter ``transparent_hugepage=always``,
``transparent_hugepage=madvise`` or ``transparent_hugepage=never``
to the kernel command line.

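Whether such a parameter was passed on the current boot can be checked after the fact::

    grep -o 'transparent_hugepage=[a-z]*' /proc/cmdline
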
Hugepage allocation policy in tmpfs is controlled with the ``huge=``
mount option; for example:

always
    Attempt to allocate huge pages every time we need a new page.

``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
with ``huge=never`` will not attempt to break up huge pages at all, it
just stops more from being allocated.

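For example, a sketch of mounting a tmpfs instance with huge pages enabled; the mountpoint is hypothetical::

    # back this tmpfs with huge pages whenever possible
    mount -t tmpfs -o huge=always tmpfs /mnt/huge-tmpfs
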
There's also a sysfs knob to control hugepage allocation policy for the
internal shmem mount: ``/sys/kernel/mm/transparent_hugepage/shmem_enabled``.
This mount is used for SysV SHM, memfds, shared anonymous mmaps, GPU
drivers' DRM objects, and Ashmem.

In addition to the policies listed above, ``shmem_enabled`` allows two
further values:

deny
    For use in emergencies, to force the huge option off from all mounts;

force
    Force the huge option on for all mounts; very useful for testing.

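For example, forcing huge pages on for all shmem mounts while testing::

    echo force >/sys/kernel/mm/transparent_hugepage/shmem_enabled
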
The ``transparent_hugepage/hugepages-<size>kB/enabled`` values and the
tmpfs mount option only affect future behavior. So to make them effective
you need to restart any application that could have been using hugepages.
This also applies to the regions registered in khugepaged.

Currently the below counters only record events relating to PMD-sized
THP. Events relating to other THP sizes are not included.

The number of PMD-sized anonymous transparent huge pages currently used by
the system is available by reading the AnonHugePages field in
``/proc/meminfo``. To identify which applications are using PMD-sized
anonymous transparent huge pages, it is necessary to read
``/proc/PID/smaps`` and count the AnonHugePages fields for each mapping.
(Note that AnonHugePages only applies to traditional PMD-sized THP for
historical reasons and should have been called AnonHugePmdMapped.)

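For instance, a minimal sketch that sums the AnonHugePages fields for one process; ``my_app`` is a hypothetical program name, a single running instance is assumed, and smaps values are in kB::

    # total PMD-sized anonymous THP currently mapped by my_app
    awk '/^AnonHugePages/ {sum += $2} END {print sum " kB"}' \
        /proc/$(pidof my_app)/smaps
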
The number of file transparent huge pages mapped to userspace is available
by reading the ShmemPmdMapped and ShmemHugePages fields in
``/proc/meminfo``. To identify which applications are mapping file
transparent huge pages, it is necessary to read ``/proc/PID/smaps`` and
count the FileHugeMapped fields for each mapping.

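A quick system-wide view of these counters is available directly from ``/proc/meminfo``::

    grep -E 'AnonHugePages|ShmemHugePages|ShmemPmdMapped' /proc/meminfo
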
There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc
    is incremented every time a huge page is successfully allocated to
    handle a page fault.

thp_collapse_alloc
    is incremented by khugepaged when it has found a range of pages to
    collapse into one huge page and has successfully allocated a new huge
    page to store the data.

thp_fault_fallback
    is incremented if a page fault fails to allocate a huge page and
    instead falls back to using small pages.

thp_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using small pages even though the allocation
    was successful.

thp_collapse_alloc_failed
    is incremented if khugepaged found a range of pages that should be
    collapsed into one huge page but failed the allocation.

thp_file_fallback
    is incremented if an attempt to allocate a file huge page fails and
    the allocation instead falls back to using small pages.

thp_file_fallback_charge
    is incremented if a file huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

thp_split_page_failed
    is incremented if the kernel fails to split a huge page. This can
    happen if the page was pinned by somebody.

thp_deferred_split_page
    is incremented when a huge page is put onto the split queue. This
    happens when a huge page is partially unmapped and splitting it would
    free up some memory. Pages on the split queue are going to be split
    under memory pressure.

thp_zero_page_alloc_failed
    is incremented if the kernel fails to allocate a huge zero page and
    falls back to using small pages.

thp_swpout
    is incremented every time a huge page is swapped out in one piece
    without splitting.

thp_swpout_fallback
    is incremented if a huge page has to be split before swapout, usually
    because the kernel failed to allocate some contiguous swap space for
    the huge page.

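All of the ``thp_*`` counters can be sampled together, e.g. before and after a workload run, to see how the rates change::

    grep '^thp_' /proc/vmstat
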
As the system ages, allocating huge pages may be expensive because the
system uses memory compaction to copy data around in order to free a huge
page for use. There are some counters in ``/proc/vmstat`` to help monitor
this overhead.

compact_stall
    is incremented every time a process stalls to run memory compaction
    so that a huge page is free for use.

compact_fail
    is incremented if the system tries to compact memory but fails.

It is possible to establish how long the stalls were by using the
function tracer to record how much time was spent in __alloc_pages(), and
the mm_page_alloc tracepoint to identify which allocations were for huge
pages.

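A rough sketch with tracefs; this assumes tracefs is mounted at /sys/kernel/tracing, and the exact allocator symbol name varies across kernel versions::

    cd /sys/kernel/tracing
    # time spent inside the page allocator
    echo function_graph > current_tracer
    echo __alloc_pages > set_graph_function
    # tag which allocations the events belong to
    echo 1 > events/kmem/mm_page_alloc/enable
    cat trace_pipe
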
To be guaranteed that the kernel will map a THP immediately in any memory
region, the mmap region has to be hugepage naturally aligned.
posix_memalign() can provide that guarantee.

All the usual features belonging to hugetlbfs are preserved and
unaffected; libhugetlbfs will also work fine, as usual.