Lines Matching full:pages
89 admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
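The default above can be sketched as a small calculation. This is an illustrative sketch only: the 4 KiB page size and the free-page counts are assumptions, not values taken from any real system.

```python
# Sketch of the admin_reserve_kbytes default described above:
# min(3% of free pages, 8MB), expressed in kbytes.

PAGE_SIZE_KB = 4  # assumption: 4 KiB base pages

def default_admin_reserve_kbytes(free_pages: int) -> int:
    """min(3% of free pages, 8MB), in kbytes."""
    three_percent_kb = free_pages * 3 // 100 * PAGE_SIZE_KB
    return min(three_percent_kb, 8 * 1024)  # the 8MB cap is 8192 kbytes

# On a small system with 50,000 free pages (~195 MiB), the 3% term wins:
print(default_admin_reserve_kbytes(50_000))     # 6000
# On anything larger, the 8MB cap applies:
print(default_admin_reserve_kbytes(1_000_000))  # 8192
```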
118 huge pages although processes will also directly compact memory as required.
128 Note that compaction has a non-trivial system-wide impact as pages
141 allowed to examine the unevictable lru (mlocked pages) for pages to compact.
144 compaction from moving pages that are unevictable. Default value is 1.
153 and maintain the ability to produce huge pages / higher-order pages.
174 Contains, as a percentage of total available memory that contains free pages
175 and reclaimable pages, the number of pages at which the background kernel
192 Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
209 Contains, as a percentage of total available memory that contains free pages
210 and reclaimable pages, the number of pages at which a process which is
219 When a lazytime inode is constantly having its pages dirtied, the inode with
283 solution for memory pages having (excessive) corrected memory errors.
291 transparent hugepage into raw pages, then migrates only the raw error page.
298 pages without compensation, reducing the capacity of the HugeTLB pool by 1.
305 memory pages. When set to 1, kernel attempts to soft offline the pages
307 the request to soft offline the pages. Its default value is 1.
310 following requests to soft offline pages will not be performed:
312 - Request to soft offline pages from RAS Correctable Errors Collector.
314 - On ARM, the request to soft offline pages from GHES driver.
316 - On PARISC, the request to soft offline pages from Page Deallocation Table.
406 pages for each zone from them. These are shown as an array of protection pages
408 Each zone has an array of protection pages like this::
411 pages free 1355
427 In this example, if normal pages (index=2) are to be allocated from this DMA zone and
453 256 means 1/256. # of protection pages becomes about "0.39%" of total managed
454 pages of higher zones on the node.
456 If you would like to protect more pages, smaller values are effective.
458 disables protection of the pages.
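The ratio arithmetic described above can be sketched as follows; the managed-page count is made up for illustration, and only the 1/ratio relationship is taken from the text.

```python
# Sketch of lowmem_reserve_ratio: a ratio of N protects 1/N of the total
# managed pages of the higher zones; 256 works out to about 0.39%.

def protection_pages(higher_zones_managed: int, ratio: int) -> int:
    # ratio N means 1/N of the higher zones' managed pages is protected
    return higher_zones_managed // ratio

print(round(100 / 256, 2))               # 0.39 (percent)
print(protection_pages(1_000_000, 256))  # 3906 pages protected
# Smaller ratio values protect more pages:
print(protection_pages(1_000_000, 32))   # 31250 pages protected
```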
504 for a few types of pages, like kernel internally allocated data or
505 the swap cache, but works for the majority of user pages.
535 Each lowmem zone gets a number of reserved free pages based
550 A percentage of the total pages in each zone. On Zone reclaim
552 than this percentage of pages in a zone are reclaimable slab pages.
568 This is a percentage of the total pages in each zone. Zone reclaim will
569 only occur if more than this percentage of pages are in a state that
573 against all file-backed unmapped pages including swapcache pages and tmpfs
574 files. Otherwise, only unmapped pages backed by normal files but not tmpfs
585 accidentally operate based on the information in the first couple of pages
637 Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
638 buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
639 per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
640 optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool
641 to the buddy allocator, the vmemmap pages representing that range need to be
642 remapped again and the vmemmap pages discarded earlier need to be reallocated
643 again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
644 never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
645 'nr_overcommit_hugepages'), those overcommitted HugeTLB pages are allocated 'on
648 of allocation or freeing HugeTLB pages between the HugeTLB pool and the buddy
650 pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
651 pool to the buddy allocator since the allocation of vmemmap pages could be
654 Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
656 time from buddy allocator disappears, whereas already optimized HugeTLB pages
658 pages, you can set "nr_hugepages" to 0 first and then disable this. Note that
659 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
660 pages. So, those surplus pages are still optimized until they are no longer
661 in use. You would need to wait for those surplus pages to be released before
662 all optimized pages are gone from the system.
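The "7 pages per 2MB HugeTLB page and 4095 pages per 1GB HugeTLB page" figures above fall out of the vmemmap layout. The sketch below assumes 4 KiB base pages and a 64-byte struct page (typical on x86-64); with other sizes the numbers would differ.

```python
# Where "7 pages per 2MB" and "4095 pages per 1GB" come from, assuming
# 4 KiB base pages and a 64-byte struct page. The optimization keeps one
# vmemmap page per HugeTLB page and frees the rest.

BASE_PAGE = 4096    # bytes per base page (assumption)
STRUCT_PAGE = 64    # bytes per struct page entry in the vmemmap (assumption)

def vmemmap_pages_freed(hugepage_bytes: int) -> int:
    struct_pages = hugepage_bytes // BASE_PAGE   # struct page entries needed
    vmemmap_bytes = struct_pages * STRUCT_PAGE   # total vmemmap size
    vmemmap_pages = vmemmap_bytes // BASE_PAGE   # pages holding the vmemmap
    return vmemmap_pages - 1                     # all but one can be freed

print(vmemmap_pages_freed(2 * 1024**2))  # 7 for a 2MB HugeTLB page
print(vmemmap_pages_freed(1024**3))      # 4095 for a 1GB HugeTLB page
```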
692 trims excess pages aggressively. Any value >= 1 acts as the watermark where
839 page-cluster controls the number of pages up to which consecutive pages
846 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
849 The default value is three (eight pages at a time). There may be some
855 that consecutive pages readahead would have brought in.
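The power-of-two mapping described above is simple enough to state exactly: a page-cluster value of n reads 2**n consecutive pages per swap read.

```python
# page-cluster n means 2**n consecutive pages are read in from swap at once;
# 0 turns clustering off (one page at a time).

def pages_per_swap_read(page_cluster: int) -> int:
    return 2 ** page_cluster

for n in range(4):
    print(n, pages_per_swap_read(n))  # 0->1, 1->2, 2->4, 3->8 (the default)
```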
898 This is the fraction of pages in each zone that can be stored to
901 that we do not allow more than 1/8th of pages in each zone to be stored
958 cache and swap-backed pages equally; lower values signify more
974 file-backed pages is less than the high watermark in a zone.
1039 reclaimed if pages of different mobility are being mixed within pageblocks.
1042 allocations, THP and hugetlbfs pages.
1050 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1067 that the number of free pages kswapd maintains for latency reasons is
1084 2 Zone reclaim writes dirty pages out
1085 4 Zone reclaim swaps pages
1097 allocating off node pages.
1099 Allowing zone reclaim to write out pages stops processes that are
1100 writing large amounts of data from dirtying pages on other nodes. Zone
1101 reclaim will write out dirty pages if a zone fills up and so effectively
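The mode values listed above (2 = zone reclaim writes dirty pages out, 4 = zone reclaim swaps pages, with 1 enabling zone reclaim itself) are ORed together. A minimal decoding sketch, assuming the flag meanings as listed:

```python
# Decoding zone_reclaim_mode as the bitmask of the behaviours listed above.
# 1 = zone reclaim on, 2 = writes dirty pages out, 4 = swaps pages.

FLAGS = {
    1: "zone reclaim on",
    2: "writes dirty pages out",
    4: "swaps pages",
}

def decode_zone_reclaim_mode(value: int) -> list[str]:
    return [name for bit, name in FLAGS.items() if value & bit]

print(decode_zone_reclaim_mode(0))  # [] -- disabled
print(decode_zone_reclaim_mode(3))  # reclaim on + writes dirty pages out
print(decode_zone_reclaim_mode(7))  # all three behaviours enabled
```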