Lines Matching full:will
32 factor will affect all subsequent accesses to the memory for the whole
36 1) the TLB miss will run faster (especially with virtualization using
40 2) a single TLB entry will be mapping a much larger amount of virtual
60 and appropriately aligned. In this case, TLB misses will occur less
127 will inherit the top-level "enabled" value::
144 sizes, the kernel will select the most appropriate enabled size for a
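As a minimal sketch of the per-size control these entries refer to (the
``hugepages-64kB`` directory is only an illustration and assumes that size is
supported by the running kernel)::

	echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
	echo inherit > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled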
165 means that an application requesting THP will stall on
172 means that an application will wake kswapd in the background
178 will enter direct reclaim and compaction like ``always``, but
180 other regions will wake kswapd in the background to reclaim
185 will enter direct reclaim like ``always`` but only for regions
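For illustration, one of the defrag policies described above can be selected
by writing it to the global knob (any of ``always``, ``defer``,
``defer+madvise``, ``madvise`` or ``never``)::

	echo defer+madvise > /sys/kernel/mm/transparent_hugepage/defrag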
205 All THPs at fault and collapse time will be added to _deferred_list,
206 and will therefore be split under memory pressure if they are considered
215 khugepaged will be automatically started when PMD-sized THP is enabled
311 For example, the following will set 16K, 32K, 64K THP to ``always``,
323 ``thp_anon`` is not specified, PMD_ORDER THP will default to ``inherit``.
325 PMD_ORDER THP policy will be overridden. If the policy for PMD_ORDER
326 is not defined within a valid ``thp_anon``, its policy will default to
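A plausible form of the ``thp_anon=`` boot parameter referenced above, setting
16K, 32K and 64K to ``always`` (the remaining sizes and policies are
illustrative; the sizes actually accepted depend on the architecture)::

	thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never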
352 ``thp_shmem`` is not specified, PMD_ORDER hugepage will default to
354 user, the PMD_ORDER hugepage policy will be overridden. If the policy for
355 PMD_ORDER is not defined within a valid ``thp_shmem``, its policy will
367 shmem mount (see below), ordinary tmpfs mounts will make use of all available
384 Only allocate huge page if it will be fully within i_size.
397 ``huge=never`` will not attempt to break up huge pages at all, just stop more
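For illustration, a tmpfs mount using one of the ``huge=`` policies mentioned
above (the mount point is an assumption, not taken from the text)::

	mount -t tmpfs -o huge=within_size tmpfs /mnt/tmpfs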
401 /sys/kernel/mm/transparent_hugepage/shmem_enabled will affect the
423 individually, and will only use the setting of the global knob when the
440 Only allocate <size> huge page if it will be fully within i_size.
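A sketch of the global and per-size shmem controls referred to above (the
``hugepages-64kB`` directory again assumes that size is available on the
running kernel)::

	echo advise > /sys/kernel/mm/transparent_hugepage/shmem_enabled
	echo within_size > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled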
473 frequently will incur overhead.
678 To be guaranteed that the kernel will map a THP immediately in any
687 hugetlbfs other than there will be less overall fragmentation. All
689 unaffected. libhugetlbfs will also work fine as usual.