Lines Matching +full:page +full:- +full:based

1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
55 If exclusive loads are enabled, when a page is loaded from zswap,
59 This avoids having two copies of the same page in memory
60 (compressed and uncompressed) after faulting in a page from zswap.
61 The cost is that if the page was never dirtied and needs to be
62 swapped out again, it will be re-compressed.
88 available at the following LWN page:
192 page. While this design limits storage density, it has simple and
202 page. It is a ZBUD derivative so the simplicity and determinism are
210 zsmalloc is a slab-based memory allocator designed to store
225 int "Maximum number of physical pages per-zspage"
231 that a zsmalloc page (zspage) can consist of. The optimal zspage
303 Try running: slabinfo -DA
322 normal kmalloc allocation and makes kmalloc randomly pick one based
336 bool "Page allocator randomization"
339 Randomization of the page allocator improves the average
340 utilization of a direct-mapped memory-side-cache. See section
343 the presence of a memory-side-cache. There are also incidental
344 security benefits as it reduces the predictability of page
347 order of pages is selected based on cache utilization benefits
353 after runtime detection of a direct-mapped memory-side-cache.
364 also breaks ancient binaries (including anything libc5 based).
369 On non-ancient distros (post-2000 ones) N is usually a safe choice.
384 ELF-FDPIC binfmt's brk and stack allocator.
388 userspace. Since that isn't generally a problem on no-MMU systems,
391 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
412 This option is best suited for non-NUMA systems with
428 memory hot-plug systems. This is normal.
432 hot-plug and hot-remove.
502 # Keep arch NUMA mapping infrastructure post-init.
548 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
550 Say Y here if you want all hot-plugged memory blocks to appear in
552 Say N here if you want the default policy to keep all hot-plugged
571 # Heavily threaded applications may benefit from splitting the mm-wide
575 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
576 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
577 # SPARC32 allocates multiple pte tables within a single page, and therefore
578 # a per-page lock leads to problems when multiple tables need to be locked
580 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
623 reliably. The page allocator relies on compaction heavily and
628 linux-mm@kvack.org.
637 # support for free page reporting
639 bool "Free page reporting"
642 Free page reporting allows for the incremental acquisition of
648 # support for page migration
651 bool "Page migration"
659 pages as migration can relocate pages to satisfy a huge page
675 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
685 int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
689 In page allocator, PCP (Per-CPU pageset) is refilled and drained in
690 batches. The batch number is scaled automatically to improve page
712 bool "Enable KSM for page merging"
719 the many instances by a single page with that content, so
772 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
781 long-term mappings means that the space is wasted.
791 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
808 applications by speeding up page faults during memory
851 XXX: For now, swap cluster backing transparent huge page
857 bool "Read-only THP for filesystems (EXPERIMENTAL)"
861 Allow khugepaged to put read-only file-backed pages in THP.
870 # UP and nommu archs use km based percpu allocator
896 subsystems to allocate big physically-contiguous blocks of memory.
944 soft-dirty bit on PTEs. This bit is set when someone writes
945 into a page, just like the regular dirty bit, but unlike the latter
948 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
954 int "Default maximum user stack size for 32-bit processes (MB)"
959 This is the maximum stack size in Megabytes in the VM layout of 32-bit
984 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
989 bool "Enable idle page tracking"
998 See Documentation/admin-guide/mm/idle_page_tracking.rst for
1008 checking, an architecture-agnostic way to find the stack pointer
1040 "device-physical" addresses which is needed for using a DAX
1046 # Helpers to mirror range of the CPU page tables of a process into device page
1078 Enable the definition of PG_arch_x page flags with x > 1. Only
1079 suitable for 64-bit architectures with CONFIG_FLATMEM or
1081 enough room for additional bits in page->flags.
1089 on EXPERT systems. /proc/vmstat will only show page counts
1100 bool "Enable infrastructure for get_user_pages()-related unit tests"
1104 to make ioctl calls that can launch kernel-based unit tests for
1109 the non-_fast variants.
1111 There is also a sub-test that allows running dump_page() on any
1113 range of user-space addresses. These pages are either pinned via
1156 # struct io_mapping based helper. Selected by drivers that need them
1170 not mapped to other processes and other kernel page tables.
1201 handle page faults in userland.
1212 file-backed memory types like shmem and hugetlbfs.
1215 # multi-gen LRU {
1217 bool "Multi-Gen LRU"
1219 # make sure folio->flags has enough spare bits
1223 Documentation/admin-guide/mm/multigen_lru.rst for details.
1229 This option enables the multi-gen LRU by default.
1238 This option has a per-memcg and per-node memory overhead.
1252 Allow per-vma locking during page fault handling.
1255 handling page faults instead of taking mmap_lock.