Lines Matching +full:page +full:- +full:based

1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
72 available at the following LWN page:
160 zsmalloc is a slab-based memory allocator designed to store
175 int "Maximum number of physical pages per-zspage"
181 that a zsmalloc page (zspage) can consist of. The optimal zspage
252 specifically-sized allocations with user-controlled contents
256 user-controlled allocations. This may very slightly increase
258 of extra pages since the bulk of user-controlled allocations
259 are relatively long-lived.
274 Try running: slabinfo -DA
293 normal kmalloc allocation and makes kmalloc randomly pick one based
307 bool "Page allocator randomization"
310 Randomization of the page allocator improves the average
311 utilization of a direct-mapped memory-side-cache. See section
314 the presence of a memory-side-cache. There are also incidental
315 security benefits as it reduces the predictability of page
318 order of pages is selected based on cache utilization benefits
334 also breaks ancient binaries (including anything libc5-based).
339 On non-ancient distros (post-2000 ones) N is usually a safe choice.
354 ELF-FDPIC binfmt's brk and stack allocator.
358 userspace. Since that isn't generally a problem on no-MMU systems,
361 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
382 This option is best suited for non-NUMA systems with
398 memory hot-plug systems. This is normal.
402 hot-plug and hot-remove.
478 # Keep arch NUMA mapping infrastructure post-init.
534 Example kernel usage would be page structs and page tables.
536 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
569 sufficient kernel-capable memory (ZONE_NORMAL) must be
570 available to allocate page structs to describe ZONE_MOVABLE.
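
As a concrete illustration of the hot-plug interaction above, here is a minimal sketch that onlines a memory block straight into ZONE_MOVABLE through the sysfs interface from Documentation/admin-guide/mm/memory-hotplug.rst. The block number "memory32" is a placeholder; real block numbers depend on the system's memory layout, and root is required:

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/memory/memory32/state";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        /* "online_movable" asks the kernel to add the block to ZONE_MOVABLE */
        fputs("online_movable", f);
        return fclose(f) ? 1 : 0;
    }
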
590 # Heavily threaded applications may benefit from splitting the mm-wide
594 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
595 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
596 # SPARC32 allocates multiple pte tables within a single page, and therefore
597 # a per-page lock leads to problems when multiple tables need to be locked
599 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
647 reliably. The page allocator relies on compaction heavily and
652 linux-mm@kvack.org.
661 # support for free page reporting
663 bool "Free page reporting"
665 Free page reporting allows for the incremental acquisition of
671 # support for page migration
674 bool "Page migration"
682 pages as migration can relocate pages to satisfy a huge page
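
One user-visible consumer of CONFIG_MIGRATION is the move_pages(2) syscall. A minimal sketch, assuming a NUMA machine where node 0 exists and the libnuma headers are installed (build with -lnuma):

    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        void *page;
        int node = 0, status = -1;

        if (posix_memalign(&page, pagesz, pagesz))
            return 1;
        *(volatile char *)page = 1;    /* fault the page in first */

        /* pid 0 means "the calling process" */
        if (move_pages(0, 1, &page, &node, &status, MPOL_MF_MOVE))
            perror("move_pages");
        else
            printf("page now resides on node %d\n", status);
        free(page);
        return 0;
    }
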
698 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
708 int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
712 In the page allocator, the PCP (Per-CPU pageset) is refilled and drained in
713 batches. The batch size is scaled automatically to improve page
735 bool "Enable KSM for page merging"
742 the many instances by a single page with that content, so
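
For reference, userspace opts memory into KSM with madvise(2). A minimal sketch, assuming ksmd has been started (echo 1 > /sys/kernel/mm/ksm/run); without CONFIG_KSM the hint simply fails:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0x5a, len);    /* identical content is what KSM deduplicates */
        if (madvise(buf, len, MADV_MERGEABLE))
            perror("madvise(MADV_MERGEABLE)");
        return 0;
    }
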
795 allocator for chunks in 2^N*PAGE_SIZE amounts, which is frequently
804 long-term mappings means that the space is wasted.
814 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
835 applications by speeding up page faults during memory
878 XXX: For now, swap cluster backing transparent huge page
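
The page-fault speedup mentioned above comes from mapping PMD-sized pages on first touch. A minimal sketch of the userspace hint, assuming the common 2 MiB huge page size and a THP policy of at least "madvise" in /sys/kernel/mm/transparent_hugepage/enabled:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2UL << 20;    /* one PMD-sized huge page */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        /* a hint only: actual THP use also depends on alignment and policy */
        if (madvise(p, len, MADV_HUGEPAGE))
            perror("madvise(MADV_HUGEPAGE)");
        p[0] = 1;    /* first fault may now map a huge page */
        return 0;
    }
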
884 bool "Read-only THP for filesystems (EXPERIMENTAL)"
888 Allow khugepaged to put read-only file-backed pages in THP.
895 bool "No per-page mapcount (EXPERIMENTAL)"
897 Do not maintain per-page mapcounts for pages part of larger
901 this information will rely on less-precise per-allocation information
902 instead: for example, using the average per-page mapcount in such
903 a large allocation instead of the per-page mapcount.
933 # UP and nommu archs use the km-based percpu allocator
959 subsystems to allocate big physically-contiguous blocks of memory.
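
Drivers typically reach these contiguous blocks through the DMA API rather than calling CMA directly. A hedged kernel-side sketch, assuming `dev` is the driver's struct device (e.g. &pdev->dev in a platform driver's probe) and CONFIG_DMA_CMA backs the allocation:

    #include <linux/dma-mapping.h>
    #include <linux/sizes.h>

    static void *buf;
    static dma_addr_t buf_dma;

    static int example_alloc(struct device *dev)
    {
        /* with CONFIG_DMA_CMA, large coherent buffers come from the CMA area */
        buf = dma_alloc_coherent(dev, SZ_8M, &buf_dma, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;
        /* ... hand buf_dma to the device; free with dma_free_coherent() ... */
        return 0;
    }
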
998 soft-dirty bit on pte-s. This bit is set when someone writes
999 into a page just like the regular dirty bit, but unlike the latter
1002 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
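
A minimal sketch of the workflow from that document: clear the bits by writing "4" to /proc/self/clear_refs, dirty a page, then read bit 55 (the soft-dirty bit) of its /proc/self/pagemap entry. Error handling is omitted for brevity:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        char *page = aligned_alloc(pagesz, pagesz);
        uint64_t entry;
        int fd;

        page[0] = 1;    /* make sure the page exists */

        fd = open("/proc/self/clear_refs", O_WRONLY);
        write(fd, "4", 1);    /* "4" clears the soft-dirty bits */
        close(fd);

        page[0] = 2;    /* re-dirty the page */

        fd = open("/proc/self/pagemap", O_RDONLY);
        pread(fd, &entry, sizeof(entry),
              ((uintptr_t)page / pagesz) * sizeof(entry));
        close(fd);

        printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
        return 0;
    }
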
1008 int "Default maximum user stack size for 32-bit processes (MB)"
1013 This is the maximum stack size in megabytes in the VM layout of 32-bit
1039 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
1044 bool "Enable idle page tracking"
1053 See Documentation/admin-guide/mm/idle_page_tracking.rst for
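
A minimal sketch of the bitmap interface from that document, assuming root and a placeholder PFN (real PFNs come from /proc/pid/pagemap); each 64-bit word of the bitmap covers 64 pages by PFN:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t pfn = 0x100000;    /* placeholder PFN */
        uint64_t bits = 1ULL << (pfn % 64);
        off_t off = (pfn / 64) * sizeof(uint64_t);
        int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        pwrite(fd, &bits, sizeof(bits), off);    /* mark the page idle */
        /* ... let the workload run ... */
        pread(fd, &bits, sizeof(bits), off);
        printf("still idle: %d\n", (int)((bits >> (pfn % 64)) & 1));
        close(fd);
        return 0;
    }
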
1069 checking, an architecture-agnostic way to find the stack pointer
1101 "device-physical" addresses which is needed for using a DAX
1107 # Helpers to mirror range of the CPU page tables of a process into device page
1146 on EXPERT systems. /proc/vmstat will only show page counts
1157 bool "Enable infrastructure for get_user_pages()-related unit tests"
1161 to make ioctl calls that can launch kernel-based unit tests for
1166 the non-_fast variants.
1168 There is also a sub-test that allows running dump_page() on any
1170 range of user-space addresses. These pages are either pinned via
1203 # struct io_mapping based helper. Selected by drivers that need them
1217 not mapped to other processes and other kernel page tables.
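
This is the CONFIG_SECRETMEM behavior exposed to userspace via memfd_secret(2). A minimal sketch using the raw syscall (glibc may lack a wrapper; if SYS_memfd_secret is missing from your headers, substitute the arch's __NR_memfd_secret, and note that older kernels require the secretmem.enable=1 boot parameter):

    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = syscall(SYS_memfd_secret, 0);
        char *p;

        if (fd < 0)
            return 1;
        if (ftruncate(fd, 4096))
            return 1;
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        strcpy(p, "key material");    /* never enters the kernel direct map */
        return 0;
    }
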
1248 handle page faults in userland.
1259 file-backed memory types like shmem and hugetlbfs.
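
A minimal sketch of the first steps of userfaultfd(2): create the fd, perform the UFFDIO_API handshake, and register an anonymous region for missing-page faults. A real handler would then read events from the fd and resolve them with UFFDIO_COPY:

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg;
        void *area;

        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
            return 1;
        area = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED)
            return 1;
        reg.range.start = (unsigned long)area;
        reg.range.len = 4096;
        reg.mode = UFFDIO_REGISTER_MODE_MISSING;
        if (ioctl(uffd, UFFDIO_REGISTER, &reg))
            return 1;
        /* page faults on `area` are now delivered as events on uffd */
        return 0;
    }
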
1262 # multi-gen LRU {
1264 bool "Multi-Gen LRU"
1266 # make sure folio->flags has enough spare bits
1270 Documentation/admin-guide/mm/multigen_lru.rst for details.
1276 This option enables the multi-gen LRU by default.
1285 This option has a per-memcg and per-node memory overhead.
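
Whatever the Kconfig default, the multi-gen LRU can be toggled at runtime through the sysfs knob described in that document. A minimal sketch, assuming root:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/kernel/mm/lru_gen/enabled", "w");

        if (!f)
            return 1;
        fputs("y", f);    /* "n" disables it again */
        return fclose(f) ? 1 : 0;
    }
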
1299 Allow per-vma locking during page fault handling.
1302 handling page faults instead of taking mmap_lock.
1329 stacks (e.g., x86 CET, arm64 GCS or RISC-V Zicfiss).
1335 bool "Reclaim empty user page table pages"
1340 Try to reclaim empty user page table pages in paths other than munmap
1343 Note: now only empty user PTE page table pages will be reclaimed.