Lines Matching +full:tlb +full:- +full:split

26 requiring larger clear-page copy-page in page faults which is a
36 1) the TLB miss will run faster (especially with virtualization using
40 2) a single TLB entry will map a much larger amount of virtual
41 memory, in turn reducing the number of TLB misses. With
42 virtualization and nested pagetables the TLB can be mapped at
45 the two is using hugepages simply because the TLB miss is
48 Modern kernels support "multi-size THP" (mTHP), which introduces the
50 but smaller than traditional PMD-size (as described above), in
51 increments of a power-of-2 number of pages. mTHP can back anonymous
53 PTE-mapped, but in many cases can still provide similar benefits to
56 prominent because the size of each page isn't as huge as the PMD-sized
58 architectures also employ TLB compression mechanisms to squeeze more
60 and appropriately aligned. In this case, TLB misses will occur less
66 collapses sequences of basic pages into PMD-sized huge pages.
91 possible to disable hugepages system-wide and to only have them inside
108 -------------------
113 system wide. This can be achieved per-supported-THP-size with one of::
115 echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
116 echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
117 echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
124 echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
127 will inherit the top-level "enabled" value::
129 echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
133 echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
135 The top-level setting (for use with "inherit") can be set by issuing
142 By default, PMD-sized hugepages have enabled="inherit" and all other
190 should be self-explanatory.
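The per-size "enabled" knobs described above can be inspected in one pass. A minimal sketch, assuming the sysfs layout shown above; the helper name and its optional base-directory argument are our own (pass nothing on a live system to read the real tree):

```shell
# Sketch only: print the "enabled" policy for each per-size THP directory.
# The helper name and the base-directory argument are our own additions;
# by default it reads the sysfs tree described above.
list_thp_enabled() {
    base="${1:-/sys/kernel/mm/transparent_hugepage}"
    for d in "$base"/hugepages-*kB; do
        [ -r "$d/enabled" ] || continue
        printf '%s: %s\n' "${d##*/}" "$(cat "$d/enabled")"
    done
}
```

The currently selected value is the one the kernel prints in brackets, e.g. ``always [inherit] madvise never``.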
192 By default the kernel tries to use a huge, PMD-mappable zero page on read
201 PMD-mappable transparent hugepage::
206 and will therefore be split under memory pressure if they are considered
207 "underused". A THP is underused if the number of zero-filled pages in
215 khugepaged will be automatically started when PMD-sized THP is enabled
216 (either of the per-size anon control or the top-level control is set
218 PMD-sized THP is disabled (when both the per-size anon control and the
219 top-level control are "never")
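Whether khugepaged ended up started under the rules above can be checked from userspace. A small sketch, assuming the mainline kernel thread name ``khugepaged``:

```shell
# Sketch: report whether the khugepaged kernel thread currently exists.
# Assumption: the mainline thread name "khugepaged"; prints one of two
# fixed lines either way.
if pgrep -x khugepaged >/dev/null 2>&1; then
    echo "khugepaged: running"
else
    echo "khugepaged: not running"
fi
```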
222 -------------------
226 PMD-sized THP and no attempt is made to collapse to other THP
300 You can change the sysfs boot time default for the top-level "enabled"
306 passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
315 thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never
363 to as "multi-size THP" (mTHP). Huge pages of any size are commonly
372 ------------
396 ``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
408 Force the huge option on for all - very useful for testing;
411 ----------------------
418 '/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'
424 per-size knob is set to 'inherit'.
433 Inherit the top-level "shmem_enabled" value. By default, PMD-sized hugepages
450 transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
458 The number of PMD-sized anonymous transparent huge pages currently used by the
460 To identify what applications are using PMD-sized anonymous transparent huge
463 PMD-sized THP for historical reasons and should have been called
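A per-process total can be derived from the AnonHugePages fields of an smaps file. A minimal sketch; the helper name and the file argument are our own (on a live system you would pass the real /proc/<pid>/smaps path):

```shell
# Sketch: sum all AnonHugePages fields in an smaps-format file.
# The helper name and the file argument are our own additions; normally
# the input would be a process's /proc/<pid>/smaps.
sum_anon_huge() {
    awk '/^AnonHugePages:/ { kb += $2 } END { print kb+0 " kB" }' "$1"
}
```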
522 is incremented every time a huge page is split into base
528 is incremented if kernel fails to split huge
532 is incremented when a huge page is put onto split
534 splitting it would free up some memory. Pages on split queue are
535 going to be split under memory pressure.
538 is incremented when a huge page on the split queue was split
544 is incremented every time a PMD is split into a table of PTEs.
546 munmap() on part of a huge page. It doesn't split the huge page, only
563 is incremented if a huge page has to be split before swapout.
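The global split counters above are exposed as ``thp_*`` events. A sketch that filters the split-related ones out of a vmstat-format file; the helper name and the file argument are our own (on a live system pass /proc/vmstat):

```shell
# Sketch: pull the split-related THP counters out of a vmstat-format file.
# The helper name and the file argument are our own additions; on a live
# system the input would be /proc/vmstat.
show_thp_splits() {
    grep -E '^thp_(split|deferred_split)' "$1"
}
```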
567 In /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats, there are
591 is incremented every time a huge page is swapped in from a non-zswap
605 is incremented every time a huge page is swapped out to a non-zswap
609 is incremented if a huge page has to be split before swapout.
626 split
627 is incremented every time a huge page is successfully split into
632 is incremented if kernel fails to split huge
636 is incremented when a huge page is put onto split queue.
638 it would free up some memory. Pages on split queue are going to
639 be split under memory pressure, if splitting is possible.
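All of the per-size counters above live in one stats directory per supported size. A sketch to dump them all at once; the helper name and the base-directory argument are our own (by default it reads the sysfs tree described above):

```shell
# Sketch: print every counter under each hugepages-<size>kB/stats directory.
# The helper name and the base-directory argument are our own additions;
# by default it walks the sysfs tree described above.
dump_mthp_stats() {
    base="${1:-/sys/kernel/mm/transparent_hugepage}"
    for f in "$base"/hugepages-*kB/stats/*; do
        [ -r "$f" ] || continue
        printf '%s=%s\n' "${f#"$base"/}" "$(cat "$f")"
    done
}
```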