Lines Matching full:will

118 huge pages although processes will also directly compact memory as required.
125 background. Writing a non-zero value to this tunable will immediately
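
These two matches appear to come from the compact_memory and compaction_proactiveness entries; assuming that, a minimal sketch of exercising both tunables:

    echo 1 > /proc/sys/vm/compact_memory              # compact all zones now (needs CONFIG_COMPACTION)
    echo 30 > /proc/sys/vm/compaction_proactiveness   # non-zero write also triggers proactive compaction immediately
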
162 flusher threads will start writeback.
176 flusher threads will start writing out dirty data.
185 will itself start writeback.
193 value lower than this limit will be ignored and the old configuration will be
203 interval will be written out the next time a flusher thread wakes up.
211 generating disk writes will itself start writing out dirty data.
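
The dirty-threshold matches above look like dirty_background_ratio/bytes and dirty_ratio/bytes; under that assumption, a sketch of the two knobs:

    sysctl -w vm.dirty_background_ratio=10   # background flusher threads start writeback at 10% dirty
    sysctl -w vm.dirty_ratio=20              # a process generating writes starts writing out dirty data itself at 20%
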
220 an updated timestamp will never get a chance to be written out. And, if the
222 by an atime update, a worker will be scheduled to make sure that inode
231 The kernel flusher threads will periodically wake up and write `old` data
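
Assuming this match is from dirty_writeback_centisecs, the wakeup period is given in centiseconds:

    echo 500 > /proc/sys/vm/dirty_writeback_centisecs   # wake the flusher threads every 5 seconds
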
241 Writing to this will cause the kernel to drop clean caches, as well as
257 This is a non-destructive operation and will not free any dirty objects.
259 `sync` prior to writing to /proc/sys/vm/drop_caches. This will minimize the
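
These matches are evidently from the drop_caches entry; a sketch of the sequence the documentation recommends:

    sync                                 # write dirty data first so more objects are clean and droppable
    echo 1 > /proc/sys/vm/drop_caches    # free page cache
    echo 2 > /proc/sys/vm/drop_caches    # free reclaimable slab objects (dentries and inodes)
    echo 3 > /proc/sys/vm/drop_caches    # free both
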
296 the entire HugeTLB hugepage, during which a free hugepage will be consumed
310 following requests to soft-offline pages will not be performed:
321 This parameter affects whether the kernel will compact memory or direct
326 implies that the allocation will succeed as long as watermarks are met.
328 The kernel will not compact memory in a zone if the
370 will use the legacy (2.4) layout for all processes.
390 mechanism will also defend that region from allocations which could use
497 still having a valid copy on disk) the kernel will handle the failure
499 no other up-to-date copy of the data, it will kill the affected processes to prevent any data
539 allocations; if you set this to lower than 1024KB, your system will
542 Setting this too high will OOM your machine instantly.
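
If these two matches are from min_free_kbytes, a sketch of inspecting and adjusting it (67584 is only an illustrative value):

    sysctl vm.min_free_kbytes              # read the current reserve
    sysctl -w vm.min_free_kbytes=67584     # keep it well above 1024 and far below total RAM
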
551 (fallback from the local zone occurs) slabs will be reclaimed if more
568 This is a percentage of the total pages in each zone. Zone reclaim will
583 This file indicates the amount of address space which a user process will
587 default this value is set to 0 and no protections will be enforced by the
588 security module. Setting this value to something like 64k will allow the
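
These matches read like the mmap_min_addr entry; assuming so, a sketch of the 64k setting the text mentions:

    sysctl -w vm.mmap_min_addr=65536   # forbid user mappings below 64 KiB to blunt NULL-dereference exploits
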
599 tuning address space randomization. This value will be bounded
613 space randomization. This value will be bounded by the
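
The two randomization matches appear to be mmap_rnd_bits and mmap_rnd_compat_bits; a sketch of inspecting them (values are clamped to per-architecture minimum and maximum bounds):

    sysctl vm.mmap_rnd_bits          # ASLR bits for native mmap layouts
    sysctl vm.mmap_rnd_compat_bits   # ASLR bits for 32-bit compat applications
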
638 buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
639 per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
655 buddy allocator will not be optimized, meaning the extra overhead at allocation
657 will not be affected. If you want to make sure there are no optimized HugeTLB
659 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
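
These matches appear to be from the hugetlb_optimize_vmemmap entry; assuming that, a sketch of clearing optimized pages before turning the optimization off:

    echo 0 > /proc/sys/vm/nr_hugepages               # any "in use" HugeTLB pages become surplus pages
    echo 0 > /proc/sys/vm/hugetlb_optimize_vmemmap   # then disable the optimization for subsequent pages
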
704 Node order will fail!
713 This means that a memory allocation request for GFP_KERNEL will
723 will be used before ZONE_NORMAL exhaustion. This increases the possibility of
740 by the kernel, so "zone" order will be selected.
743 order will be selected.
777 If this is set to zero, the OOM killer will scan through the entire
864 will apply, and the waiter will only be awakened if the lock can be taken.
871 If this is set to 0, the kernel will kill some rogue process,
873 the system will survive.
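
If these two matches are from panic_on_oom, a sketch of the behaviour they describe:

    sysctl -w vm.panic_on_oom=0   # default: the OOM killer terminates a process and the system keeps running
    sysctl -w vm.panic_on_oom=1   # panic on OOM instead
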
911 online CPUs. If the user writes '0' to this sysctl, it will revert to
957 assumes equal IO cost and will thus apply memory pressure to the page
962 be more efficient than swap's random IO. An optimal value will require
963 experimentation and will also be workload-dependent.
973 At 0, the kernel will not initiate swap until the amount of free and
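
Assuming the swappiness matches above, a sketch of tuning it per workload:

    sysctl vm.swappiness           # read the current value (commonly 60)
    sysctl -w vm.swappiness=10     # prefer reclaiming page cache over swapping anonymous memory
    sysctl -w vm.swappiness=0      # swap only once free plus file-backed pages drop below a zone's high watermark
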
1006 If this is reduced to zero, then the user will be allowed to allocate
1008 Any subsequent attempts to execute a command will result in
1020 At the default value of vfs_cache_pressure=100 the kernel will attempt to
1023 to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
1030 directory and inode objects. With vfs_cache_pressure=1000, it will look for
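
These matches clearly belong to vfs_cache_pressure; a sketch of the values the text describes:

    sysctl -w vm.vfs_cache_pressure=100    # default: balanced reclaim of dentry/inode caches versus page cache
    sysctl -w vm.vfs_cache_pressure=1000   # reclaim dentries and inodes far more aggressively
    # 0 tells the kernel never to reclaim dentries/inodes and risks out-of-memory conditions
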
1038 It defines the percentage of the high watermark of a zone that will be
1046 15,000 means that up to 150% of the high watermark will be reclaimed in the
1050 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1051 of 0 will disable the feature.
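
If these matches are from watermark_boost_factor, a short worked example of the units plus how to disable the boost:

    echo 15000 > /proc/sys/vm/watermark_boost_factor   # 15000/10000 = 150% of the high watermark may be reclaimed
    echo 0     > /proc/sys/vm/watermark_boost_factor   # disable watermark boosting
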
1077 zone reclaim occurs. Allocations will be satisfied from other zones / nodes
1096 reduction. The page allocator will take additional actions before
1101 reclaim will write out dirty pages if a zone fills up and so effectively
1105 of other processes running on other nodes will not be affected.
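
The final matches look like the zone_reclaim_mode entry; assuming so, the value is a bitmask ORed together from 1 (reclaim on), 2 (write dirty pages during reclaim) and 4 (swap pages during reclaim):

    echo 0 > /proc/sys/vm/zone_reclaim_mode   # disabled: allocations fall back to other zones/nodes
    echo 1 > /proc/sys/vm/zone_reclaim_mode   # reclaim locally before falling back
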