Lines Matching full:pages

90 admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
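The matched line above describes the default for `admin_reserve_kbytes` as min(3% of free pages, 8MB). A small sketch of that arithmetic, assuming a 4 KiB page size and an illustrative free-page count (neither is read from a live system):

```python
def admin_reserve_kbytes_default(free_pages: int, page_size: int = 4096) -> int:
    """Return min(3% of free memory, 8 MiB), expressed in KiB.

    free_pages and page_size are illustrative inputs, not values
    queried from a running kernel.
    """
    three_percent_kib = free_pages * 3 // 100 * page_size // 1024
    return min(three_percent_kib, 8 * 1024)

# With 1 GiB free (262144 x 4 KiB pages), 3% is ~31 MiB, so the
# 8192 KiB (8 MiB) cap applies:
print(admin_reserve_kbytes_default(262144))  # 8192
```

On small systems the 3% term is the smaller one; for example, with only 1000 free pages the reserve works out to 120 KiB.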
119 huge pages although processes will also directly compact memory as required.
129 Note that compaction has a non-trivial system-wide impact as pages
148 allowed to examine the unevictable lru (mlocked pages) for pages to compact.
151 compaction from moving pages that are unevictable. Default value is 1.
160 and maintain the ability to produce huge pages / higher-order pages.
181 Contains, as a percentage of total available memory that contains free pages
182 and reclaimable pages, the number of pages at which the background kernel
199 Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
216 Contains, as a percentage of total available memory that contains free pages
217 and reclaimable pages, the number of pages at which a process which is
226 When a lazytime inode is constantly having its pages dirtied, the inode with
292 solution for memory pages having (excessive) corrected memory errors.
300 transparent hugepage into raw pages, then migrates only the raw error page.
307 pages without compensation, reducing the capacity of the HugeTLB pool by 1.
314 memory pages. When set to 1, kernel attempts to soft offline the pages
316 the request to soft offline the pages. Its default value is 1.
319 following requests to soft offline pages will not be performed:
321 - Request to soft offline pages from RAS Correctable Errors Collector.
323 - On ARM, the request to soft offline pages from GHES driver.
325 - On PARISC, the request to soft offline pages from Page Deallocation Table.
408 pages for each zone from them. These are shown as an array of protection pages
410 Each zone has an array of protection pages like this::
413 pages free 1355
429 In this example, if normal pages (index=2) are required from this DMA zone and
455 256 means 1/256; the number of protection pages becomes about 0.39% of the
456 total managed pages of higher zones on the node.
458 If you would like to protect more pages, smaller values are effective.
460 disables protection of the pages.
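The matched lines above describe how `lowmem_reserve_ratio` turns into protection pages: the lower zone reserves roughly 1/ratio of the higher zones' managed pages. A minimal sketch of that calculation (the managed-page count is an illustrative assumption):

```python
def protection_pages(higher_zone_managed_pages: int, ratio: int) -> int:
    """Pages a lower zone protects against fallback allocations:
    roughly 1/ratio of the higher zones' managed pages."""
    return higher_zone_managed_pages // ratio

# ratio 256 means 1/256 ~= 0.39% of the higher zones' managed pages:
print(protection_pages(1024000, 256))  # 4000 pages protected
print(round(100 / 256, 2))            # 0.39 (percent)
```

Smaller ratios protect more pages, consistent with the note that smaller values are effective for protecting more memory.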
510 for a few types of pages, like kernel internally allocated data or
511 the swap cache, but works for the majority of user pages.
541 Each lowmem zone gets a number of reserved free pages based
556 A percentage of the total pages in each zone. On Zone reclaim
558 than this percentage of pages in a zone are reclaimable slab pages.
574 This is a percentage of the total pages in each zone. Zone reclaim will
575 only occur if more than this percentage of pages are in a state that
579 against all file-backed unmapped pages including swapcache pages and tmpfs
580 files. Otherwise, only unmapped pages backed by normal files but not tmpfs
591 accidentally operate based on the information in the first couple of pages
629 This parameter controls whether gigantic pages may be allocated from
630 ZONE_MOVABLE. If set to non-zero, gigantic pages can be allocated
637 Note that using gigantic pages from ZONE_MOVABLE makes memory hotremove unreliable.
640 sufficient gigantic pages to service migration requests associated with the
645 Additionally, as multiple gigantic pages may be reserved on a single block,
646 it may appear that gigantic pages are available for migration when in reality
648 two gigantic pages, one reserved and one allocated, and an admin attempts to
670 Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
671 buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
672 per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
673 optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool
674 to the buddy allocator, the vmemmap pages representing that range need to be
675 remapped again and the vmemmap pages discarded earlier need to be reallocated
676 again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
677 never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
678 'nr_overcommit_hugepages'), those overcommitted HugeTLB pages are allocated 'on
681 of allocation or freeing HugeTLB pages between the HugeTLB pool and the buddy
683 pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
684 pool to the buddy allocator since the allocation of vmemmap pages could be
687 Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
689 time from buddy allocator disappears, whereas already optimized HugeTLB pages
691 pages, you can set "nr_hugepages" to 0 first and then disable this. Note that
692 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
693 pages. So, those surplus pages are still optimized until they are no longer
694 in use. You would need to wait for those surplus pages to be released
695 before no optimized pages remain in the system.
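The figures quoted above (7 vmemmap pages freed per 2MB HugeTLB page, 4095 per 1GB page) follow from the size of the vmemmap: one `struct page` per base page, with all but one vmemmap page remapped away. A sketch of that arithmetic, assuming 4 KiB base pages and a 64-byte `struct page` (a common x86-64 size, but an assumption here):

```python
PAGE_SIZE = 4096         # assumed base page size
STRUCT_PAGE_SIZE = 64    # assumed struct page size (typical on x86-64)

def vmemmap_pages_freed(huge_page_size: int) -> int:
    """Vmemmap pages freed when a HugeTLB page of this size is optimized:
    the optimization keeps one vmemmap page and remaps the rest onto it."""
    base_pages = huge_page_size // PAGE_SIZE
    vmemmap_pages = base_pages * STRUCT_PAGE_SIZE // PAGE_SIZE
    return vmemmap_pages - 1

print(vmemmap_pages_freed(2 * 1024 * 1024))     # 7   (2MB HugeTLB page)
print(vmemmap_pages_freed(1024 * 1024 * 1024))  # 4095 (1GB HugeTLB page)
```

This is also why freeing an optimized HugeTLB page back to the buddy allocator must reallocate those vmemmap pages first, which can fail under heavy memory pressure as the text notes.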
725 trims excess pages aggressively. Any value >= 1 acts as the watermark where
872 page-cluster controls the number of pages up to which consecutive pages
879 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
882 The default value is three (eight pages at a time). There may be some
888 that consecutive pages readahead would have brought in.
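The matched lines above describe `page-cluster` as a power-of-two exponent: a value of n swaps in 2**n consecutive pages at once. A trivial sketch of that mapping:

```python
def swap_readahead_pages(page_cluster: int) -> int:
    """Consecutive pages read from swap in a single attempt: 2**page_cluster.
    0 disables swap readahead entirely (one page at a time)."""
    return 2 ** page_cluster

for n in range(4):
    print(n, "->", swap_readahead_pages(n), "pages")
# 0 -> 1, 1 -> 2, 2 -> 4, 3 (the default) -> 8
```

The default of three therefore yields the "eight pages at a time" mentioned above.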
931 This is the fraction of pages in each zone that can be stored to
934 that we do not allow more than 1/8th of pages in each zone to be stored
991 cache and swap-backed pages equally; lower values signify more
1007 file-backed pages is less than the high watermark in a zone.
1081 reclaimed if pages of different mobility are being mixed within pageblocks.
1084 allocations, THP and hugetlbfs pages.
1092 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1109 that the number of free pages kswapd maintains for latency reasons is
1126 2 Zone reclaim writes dirty pages out
1127 4 Zone reclaim swaps pages
1139 allocating off node pages.
1141 Allowing zone reclaim to write out pages stops processes that are
1142 writing large amounts of data from dirtying pages on other nodes. Zone
1143 reclaim will write out dirty pages if a zone fills up and so effectively
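The `zone_reclaim_mode` values matched above are ORed bit flags (bit 1 enables zone reclaim, 2 allows writing out dirty pages, 4 allows swapping). A small decoder sketch; the flag names are illustrative labels, not kernel identifiers:

```python
ZONE_RECLAIM_ON = 1     # zone reclaim enabled
ZONE_RECLAIM_WRITE = 2  # zone reclaim writes dirty pages out
ZONE_RECLAIM_SWAP = 4   # zone reclaim swaps pages

def decode_zone_reclaim_mode(value: int) -> list:
    """Return the human-readable flags set in a zone_reclaim_mode value."""
    flags = []
    if value & ZONE_RECLAIM_ON:
        flags.append("reclaim on")
    if value & ZONE_RECLAIM_WRITE:
        flags.append("write dirty pages")
    if value & ZONE_RECLAIM_SWAP:
        flags.append("swap pages")
    return flags

print(decode_zone_reclaim_mode(3))  # ['reclaim on', 'write dirty pages']
```

A value of 3, for instance, enables zone reclaim and lets it write out dirty pages, which is the behavior described above for keeping writers from dirtying pages on other nodes.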