===============================
Documentation for /proc/sys/vm/
===============================

kernel version 2.6.29

Copyright (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

Copyright (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.
Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- defrag_mode
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- enable_soft_offline
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_optimize_vmemmap
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- mem_profiling         (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages         (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode


admin_reserve_kbytes
====================

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB).

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of the programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum the resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.
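
For example, on x86_64 one might inspect the sizes of the recovery programs
and then write the roughly 128MB figure suggested above for overcommit
'never' mode. A rough sketch (the program list is illustrative)::

	# Inspect VSZ and RSS (in KB) of typical recovery tools.
	ps -C sshd,bash,top -o vsz=,rss=,comm=

	# Reserve 128MB, i.e. 131072 kbytes.
	echo 131072 > /proc/sys/vm/admin_reserve_kbytes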


compact_memory
==============

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.
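
For example, to trigger a one-off compaction of all zones::

	echo 1 > /proc/sys/vm/compact_memory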

compaction_proactiveness
========================

This tunable takes a value in the range [0, 100], with a default value of
20. It determines how aggressively compaction is done in the background.
Writing a non-zero value to this tunable will immediately trigger proactive
compaction. Setting it to 0 disables proactive compaction.

Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
to latency spikes in unsuspecting applications. The kernel employs
various heuristics to avoid wasting CPU cycles if it detects that
proactive compaction is not being effective.

Be careful when setting it to extreme values like 100, as that may
cause excessive background compaction activity.

compact_unevictable_allowed
===========================

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory.  Set to 0 to prevent
compaction from moving pages that are unevictable.  The default value is 1.
On CONFIG_PREEMPT_RT the default value is 0, in order to avoid a page fault,
due to compaction, which would block the task from becoming active until the
fault is resolved.

defrag_mode
===========

When set to 1, the page allocator tries harder to avoid fragmentation
and maintain the ability to produce huge pages / higher-order pages.

It is recommended to enable this right after boot, as fragmentation,
once it has occurred, can be long-lasting or even permanent.

dirty_background_bytes
======================

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note:
  dirty_background_bytes is the counterpart of dirty_background_ratio. Only
  one of them may be specified at a time. When one sysctl is written it is
  immediately taken into account to evaluate the dirty memory limits and the
  other appears as 0 when read.
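
For example, setting a 512MB background threshold makes the ratio read back
as 0::

	echo 536870912 > /proc/sys/vm/dirty_background_bytes
	cat /proc/sys/vm/dirty_background_ratio    # now reads 0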


dirty_background_ratio
======================

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.


dirty_bytes
===========

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.


dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out the next time a flusher thread wakes up.


dirty_ratio
===========

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.


dirtytime_expire_seconds
========================

When a lazytime inode is constantly having its pages dirtied, the inode with
an updated timestamp will never get a chance to be written out.  And, if the
only thing that has happened on the file system is a dirtytime inode caused
by an atime update, a worker will be scheduled to make sure that inode
eventually gets pushed out to disk.  This tunable defines when a dirty inode
is old enough to be eligible for writeback by the kernel flusher threads.
It is also used as the interval at which the dirtytime_writeback thread
wakes up.


dirty_writeback_centisecs
=========================

The kernel flusher threads will periodically wake up and write `old` data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.
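
For example, to wake the flusher threads every 15 seconds, or to turn
periodic writeback off entirely::

	echo 1500 > /proc/sys/vm/dirty_writeback_centisecs   # every 15s
	echo 0 > /proc/sys/vm/dirty_writeback_centisecs      # disable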


drop_caches
===========

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache::

	echo 1 > /proc/sys/vm/drop_caches

To free reclaimable slab objects (includes dentries and inodes)::

	echo 2 > /proc/sys/vm/drop_caches

To free slab objects and pagecache::

	echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.
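
For example, to maximize the number of objects that can be dropped::

	sync
	echo 3 > /proc/sys/vm/drop_caches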

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used::

	cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 2) into drop_caches.

enable_soft_offline
===================

Correctable memory errors are very common on servers. Soft-offline is the
kernel's solution for memory pages having (excessive) corrected memory
errors.

For different types of page, soft-offline has different behaviors / costs.

- For a raw error page, soft-offline migrates the in-use page's content to
  a new raw page.

- For a page that is part of a transparent hugepage, soft-offline splits the
  transparent hugepage into raw pages, then migrates only the raw error page.
  As a result, the user is transparently backed by one less hugepage,
  impacting memory access performance.

- For a page that is part of a HugeTLB hugepage, soft-offline first migrates
  the entire HugeTLB hugepage, during which a free hugepage will be consumed
  as the migration target.  Then the original hugepage is dissolved into raw
  pages without compensation, reducing the capacity of the HugeTLB pool by 1.

It is the user's call to choose between reliability (staying away from
fragile physical memory) and the performance / capacity implications in the
transparent hugepage and HugeTLB cases.

For all architectures, enable_soft_offline controls whether to soft offline
memory pages.  When set to 1, the kernel attempts to soft offline the pages
whenever it thinks it is needed.  When set to 0, the kernel returns
EOPNOTSUPP to the request to soft offline the pages.  The default value is 1.

It is worth mentioning that after setting enable_soft_offline to 0, the
following requests to soft offline pages will not be performed:

- Request to soft offline pages from RAS Correctable Errors Collector.

- On ARM, the request to soft offline pages from GHES driver.

- On PARISC, the request to soft offline pages from Page Deallocation Table.

extfrag_threshold
=================

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system. Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation, and
-1 implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.
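
For example, assuming debugfs is mounted at /sys/kernel/debug, the per-zone
fragmentation indexes can be inspected with::

	cat /sys/kernel/debug/extfrag/extfrag_index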


highmem_is_dirtyable
====================

Available only for systems with CONFIG_HIGHMEM enabled (32-bit systems).

This parameter controls whether the high memory is considered for dirty
writers throttling.  This is not the case by default, which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied. As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to non-zero allows more memory to be dirtied and thus
allows writers to write more data which can be flushed to the storage more
effectively. Note this also comes with a risk of premature OOM killer
invocation, because some writers (e.g. direct block device writes) can
only use the low memory and they can fill it up with dirty data without
any throttling.


hugetlb_shm_group
=================

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.


laptop_mode
===========

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.


legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.


lowmem_reserve_ratio
====================

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which *could* use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio` tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap, then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array. You can see it by reading this file::

	% cat /proc/sys/vm/lowmem_reserve_ratio
	256     256     32

But these values are not used directly. The kernel calculates the number of
protection pages for each zone from them, and these are shown as the array
of protection pages in /proc/zoneinfo like the following. (This example is
from an x86-64 box.) Each zone has an array of protection pages like this::

  Node 0, zone      DMA
    pages free     1355
          min      3
          low      3
          high     4
	:
	:
      numa_other   0
          protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    pagesets
      cpu: 0 pcp: 0
          :

These protection values are added to the watermark when judging whether a
zone should be used for page allocation or should be reclaimed.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value were
0, this zone would be used for a normal page request. If the request is for
the DMA zone (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression::

  (i < j):
    zone[i]->protection[j]
    = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
  (i = j):
    (should not be protected. = 0);
  (i > j):
    (not necessary, but reads as 0)

The default values of lowmem_reserve_ratio[i] are

    === ====================================
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
    === ====================================

As the expression above shows, these values are the reciprocal of the ratio:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total managed pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.
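
For example, halving the DMA/DMA32 entries doubles the number of protection
pages for those zones (the whole array is written on one line)::

	echo 128 128 32 > /proc/sys/vm/lowmem_reserve_ratio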


max_map_count
=============

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65530.
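
For example, a mapping-heavy workload such as a malloc debugger might need a
higher limit; the value below is only illustrative::

	echo 262144 > /proc/sys/vm/max_map_count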


mem_profiling
=============

Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y).

1: Enable memory profiling.

0: Disable memory profiling.

Enabling memory profiling introduces a small performance overhead for all
memory allocations.

The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.


memory_failure_early_kill
=========================

Controls how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) is detected in the background by hardware
and cannot be handled by the kernel. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data, it will kill to prevent any data
corruptions from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL
prctl.


memory_failure_recovery
=======================

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.


min_free_kbytes
===============

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.


min_slab_ratio
==============

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  On zone reclaim
(fallback from the local zone occurs), slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.


min_unmapped_ratio
==================

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.


mmap_min_addr
=============

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
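
For example, to protect the first 64k of address space as suggested above::

	echo 65536 > /proc/sys/vm/mmap_min_addr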


mmap_rnd_bits
=============

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.


mmap_rnd_compat_bits
====================

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.


nr_hugepages
============

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst
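
For example, to grow the pool to 128 huge pages (an illustrative size) and
verify the result::

	echo 128 > /proc/sys/vm/nr_hugepages
	grep HugePages_Total /proc/meminfo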


hugetlb_optimize_vmemmap
========================

This knob is not available when the size of 'struct page' (a structure
defined in include/linux/mm_types.h) is not a power of two (an unusual
system config could result in this).

Enable (set to 1) or disable (set to 0) HugeTLB Vmemmap Optimization (HVO).

Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages
from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and
4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages
will not be optimized.  When those optimized HugeTLB pages are freed from
the HugeTLB pool to the buddy allocator, the vmemmap pages representing that
range need to be remapped again and the vmemmap pages discarded earlier need
to be reallocated again.  If your use case is that HugeTLB pages are
allocated 'on the fly' (e.g. never explicitly allocating HugeTLB pages with
'nr_hugepages' but only setting 'nr_overcommit_hugepages', so that the
overcommitted HugeTLB pages are allocated 'on the fly') instead of being
pulled from the HugeTLB pool, you should weigh the benefit of memory savings
against the extra overhead (~2x slower than before) of allocating or freeing
HugeTLB pages between the HugeTLB pool and the buddy allocator.  Another
behavior to note is that if the system is under heavy memory pressure, it
could prevent the user from freeing HugeTLB pages from the HugeTLB pool to
the buddy allocator, since the allocation of vmemmap pages could fail; you
have to retry later if your system encounters this situation.

Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
from the buddy allocator will not be optimized, meaning the extra overhead
at allocation time from the buddy allocator disappears, whereas already
optimized HugeTLB pages will not be affected.  If you want to make sure
there are no optimized HugeTLB pages, you can set "nr_hugepages" to 0 first
and then disable this.  Note that writing 0 to nr_hugepages will make any
"in use" HugeTLB pages become surplus pages.  So, those surplus pages are
still optimized until they are no longer in use.  You would need to wait for
those surplus pages to be released before there are no optimized pages in
the system.


nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_overcommit_hugepages
=======================

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_trim_pages
=============

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.


numa_zonelist_order
===================

This sysctl is only for NUMA and it is deprecated. Anything but
Node order will fail!

'where the memory is allocated from' is controlled by zonelists.

(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for simplicity;
you may be able to read ZONE_DMA as ZONE_DMA32...)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows::

  ZONE_NORMAL -> ZONE_DMA

This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2 node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL::

  (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
  (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order. Type(B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.


oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).


oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.


overcommit_kbytes
=================

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
of them may be specified at a time. Setting one disables the other (which
then appears as 0 when read).


overcommit_memory
=================

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel compares the userspace memory request
size against total memory plus swap and rejects obvious overcommits.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/mm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.
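
For example, to switch to the 'never overcommit' policy and permit
commitments of swap plus 80% of physical RAM (see overcommit_ratio below)::

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 80 > /proc/sys/vm/overcommit_ratio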


overcommit_ratio
================

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.


page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead.
The pages are consecutive not in terms of virtual/physical addresses,
but consecutive in swap space - meaning that they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but also extra faults
and I/O delays for subsequent faults that would have been covered by the
readahead of consecutive pages.
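
For example::

	echo 4 > /proc/sys/vm/page-cluster   # read 2^4 = 16 pages at a time
	echo 0 > /proc/sys/vm/page-cluster   # single pages, readahead disabled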


page_lock_unfairness
====================

This value determines the number of times that the page lock can be
stolen from under a waiter. After the lock has been stolen the number of
times specified in this file (the default is 5), the "fair lock handoff"
semantics will apply, and the waiter will only be awakened if the lock can
be taken.

panic_on_oom
============

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
oom_killer.  Usually, the oom_killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy/cpusets, and those nodes run out of memory, one process may be
killed by the oom-killer. No panic occurs in this case, because other
nodes' memory may be free and the system as a whole may not yet be in a
fatal state.

If this is set to 2, the kernel panics unconditionally even in the
situations mentioned above. Even when OOM happens under a memory cgroup,
the whole system panics.

The default value is 0.

Values 1 and 2 are for failover in clustered environments. Please select
either according to your failover policy.

panic_on_oom=2 combined with kdump gives you a very strong tool to
investigate why the OOM happened, since you can obtain a memory snapshot.


percpu_pagelist_high_fraction
=============================

This is the fraction of pages in each zone that can be stored on
per-cpu page lists. It is an upper boundary that is divided depending
on the number of online CPUs. The minimum value for this is 8, which means
that we do not allow more than 1/8th of the pages in each zone to be stored
on per-cpu page lists. This entry only changes the value of hot per-cpu
page lists. A user can specify a number like 100 to allocate 1/100th of
each zone between per-cpu lists.

The batch value of each per-cpu page list remains the same regardless of
the value of the high fraction, so allocation latencies are unaffected.

The initial value is zero. The kernel uses this value to set the pcp->high
mark based on the low watermark for the zone and the number of local
online CPUs.  If the user writes '0' to this sysctl, it will revert to
this default behavior.


stat_interval
=============

The time interval at which vm statistics are updated.  The default
is 1 second.


stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing, e.g.::

	cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
(At the time of writing, a few stats are known sometimes to be found
negative, with no ill effects: errors and warnings on these stats are
suppressed.)


numa_stat
=========

This interface allows runtime configuration of numa statistics.

When page allocation performance becomes a bottleneck and you can tolerate
some possible tool breakage and decreased numa counter precision, you can
do::

	echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do::

	echo 1 > /proc/sys/vm/numa_stat


swappiness
==========

This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200. At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values indicate cheaper swap IO.

Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO. An optimal value will require
experimentation and will also be workload-dependent.

The default value is 60.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered. For example, if the random IO against the swap device
is on average 2x faster than IO from the filesystem, swappiness should
be 133 (x + 2x = 200, 2x = 133.33).

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.
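
For example, to apply the 2x-faster-swap figure worked out above::

	echo 133 > /proc/sys/vm/swappiness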


unprivileged_userfaultfd
========================

This flag controls the mode in which unprivileged users can use the
userfaultfd system calls. Set this to 0 to restrict unprivileged users
to handling page faults in user mode only. In this case, users without
CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
succeed. Prohibiting use of userfaultfd for handling faults from kernel
mode may make certain vulnerabilities more difficult to exploit.

Set this to 1 to allow unprivileged users to use the userfaultfd system
calls without any restrictions.

The default value is 0.

Another way to control permissions for userfaultfd is to use
/dev/userfaultfd instead of userfaultfd(2). See
Documentation/admin-guide/mm/userfaultfd.rst.


user_reserve_kbytes
===================

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.


vfs_cache_pressure
==================

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects. With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.


watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being fragmented.
It defines the percentage of the high watermark of a zone that will be
reclaimed if pages of different mobility are being mixed within pageblocks.
The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.

To make it sensible with respect to the watermark_scale_factor
parameter, the unit is in fractions of 10,000. The default value of
15,000 means that up to 150% of the high watermark will be reclaimed in the
event of a pageblock being mixed due to fragmentation. The level of reclaim
is determined by the number of fragmentation events that occurred in the
recent past. If this value is smaller than a pageblock, then a pageblock's
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
of 0 will disable the feature.


watermark_scale_factor
======================

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 3000, or 30% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.
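
For example, raising the factor to 100 widens the watermark distances to 1%
of the node's memory, waking kswapd earlier and letting it run longer::

	echo 100 > /proc/sys/vm/watermark_scale_factor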


zone_reclaim_mode
=================

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

The value is a bitmask OR'ed together from:

=	===================================
1	Zone reclaim on
2	Zone reclaim writes dirty pages out
4	Zone reclaim swaps pages
=	===================================

zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.

Consider enabling one or more zone_reclaim mode bits if it's known that the
workload is partitioned such that each partition fits within a NUMA node
and that accessing remote memory would cause a measurable performance
reduction.  The page allocator will then take additional actions before
allocating off-node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up, and so effectively
throttles the process. This may decrease the performance of a single
process, since it cannot use all of system memory to buffer the outgoing
writes anymore, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.