Lines Matching full:swap
6 * Swap reorganised 29.12.95, Stephen Tweedie
17 #include <linux/swap.h>
50 #include "swap.h"
69 * Some modules use swappable objects and may try to swap them out under
71 * check to see if any swap space is available.
82 static const char Bad_file[] = "Bad swap file entry ";
83 static const char Unused_file[] = "Unused swap file entry ";
84 static const char Bad_offset[] = "Bad swap offset entry ";
85 static const char Unused_offset[] = "Unused swap offset entry ";
145 * if one swap device is on the available plist, so the atomic can
163 /* Reclaim the swap entry anyway if possible */
166 * Reclaim the swap entry if there are no more mappings of the
170 /* Reclaim the swap entry if swap is getting full */
208 * returns the number of pages in the folio that backs the swap entry. If positive,
210 * folio was associated with the swap entry.
244 entry = folio->swap; in __try_to_reclaim_swap()
259 * It's safe to delete the folio from swap cache only if the folio's in __try_to_reclaim_swap()
292 * swapon tells the device that all the old swap contents can be discarded,
293 * to allow the swap device to optimize its wear-levelling.
302 /* Do not discard the swap header page! */ in discard_swap()
350 struct swap_info_struct *sis = swp_swap_info(folio->swap); in swap_folio_sector()
355 offset = swp_offset(folio->swap); in swap_folio_sector()
362 * swap allocation tells the device that a cluster of swap can now be discarded,
363 * to allow the swap device to optimize its wear-levelling.
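Both discard paths end up handing the affected range to the block layer. A minimal sketch of that last step, assuming si->bdev is the swap device and that start_block/nr_blocks have already been converted to 512-byte sectors (the real code derives them from swap page offsets via the swap extents):

    /* Hedged sketch: ask the device to discard a range of sectors. */
    if (blkdev_issue_discard(si->bdev, start_block, nr_blocks, GFP_NOIO))
            pr_info("swap: discard of %llu sectors at %llu failed, ignoring\n",
                    (unsigned long long)nr_blocks,
                    (unsigned long long)start_block);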
605 * If the swap is discardable, prepare to discard the cluster in free_cluster()
858 /* in case no swap cache is reclaimed */ in swap_reclaim_full_clusters()
878 * Try to allocate swap entries with the specified order and try to set a new
895 /* Serialize HDD SWAP allocation for each device. */ in cluster_alloc_swap_entry()
948 * reclaimable (e.g. lazy-freed swap cache) slots. in cluster_alloc_swap_entry()
1176 * Fast path that tries to get swap entries with the specified order from the current
1177 * CPU's swap entry pool (a cluster).
1256 * folio_alloc_swap - allocate swap space for a folio
1257 * @folio: folio we want to move to swap
1260 * Allocate swap space for the folio and add the folio to the
1261 * swap cache.
1264 * Return: Whether the folio was added to the swap cache.
1311 * deadlock in the swap out path. in folio_alloc_swap()
1399 * When we get a swap entry, if there aren't some other ways to
1400 * prevent swapoff, such as the folio in swap cache is locked, RCU
1401 * reader side is locked, etc., the swap entry may become invalid
1402 * because of swapoff. Then, we need to enclose all swap related
1404 * swap functions call get/put_swap_device() by themselves.
1410 * Check whether swap entry is valid in the swap device. If so,
1411 * return pointer to swap_info_struct, and keep the swap entry valid
1412 * via preventing the swap device from being swapoff, until
1431 * changing partly because the specified swap entry may be for another
1432 * swap device which has been swapped off. And in do_swap_page(), after
1433 * the page is read from the swap device, the PTE is verified to be
1434 * unchanged with the page table locked to check whether the swap device
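Taken together, these comments describe the locking pattern around get_swap_device()/put_swap_device(). A minimal sketch of a caller that has no other protection against a concurrent swapoff (illustrative only; the body stands in for whatever the entry is actually used for):

    struct swap_info_struct *si;

    si = get_swap_device(entry);
    if (!si)
            return;         /* raced with swapoff: the entry is stale */

    /*
     * Between these two calls the swap device cannot complete swapoff,
     * so the entry stays valid: look up the swap cache, read the slot, etc.
     */
    put_swap_device(si);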
1523 * Drop the last HAS_CACHE flag of swap entries, the caller has to
1570 * Caller has made sure that the swap device corresponding to entry
1592 * Called after dropping swapcache to decrease refcnt to swap entries.
1627 * This does not give an exact answer when the swap count is continued,
1718 swp_entry_t entry = folio->swap; in folio_swapped()
1743 * hibernation is allocating its own swap pages for the image, in folio_swapcache_freeable()
1745 * the swap from a folio which has already been recorded in the in folio_swapcache_freeable()
1746 * image as a clean swapcache folio, and then reuse its swap for in folio_swapcache_freeable()
1749 * later read back in from swap, now with the wrong data. in folio_swapcache_freeable()
1761 * folio_free_swap() - Free the swap space used for this folio.
1764 * If swap is getting full, or if there are no more mappings of this folio,
1765 * then call folio_free_swap to free its swap space.
1767 * Return: true if we were able to release the swap space.
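A minimal caller-side sketch of that rule, assuming the folio is locked and already in the swap cache; mem_cgroup_swap_full() is used here as one example of a "swap is getting full" test and is not necessarily what every caller checks:

    /* Give the swap slot back early when swap space is under pressure. */
    if (folio_test_swapcache(folio) && mem_cgroup_swap_full(folio))
            folio_free_swap(folio);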
1782 * free_swap_and_cache_nr() - Release references on a range of swap entries and
1787 * For each swap entry in the contiguous range, release a reference. If any swap
1819 * Now go back over the range trying to reclaim the swap cache. This is in free_swap_and_cache_nr()
1821 * the swap once per folio in the common case. If we do in free_swap_and_cache_nr()
1824 * page but will only succeed once the swap slot for every subpage is in free_swap_and_cache_nr()
1831 * Folios are always naturally aligned in swap so in free_swap_and_cache_nr()
1833 * folio was found for the swap entry, so advance by 1 in free_swap_and_cache_nr()
1863 /* This is called for allocating swap entry, not cache */ in get_swap_page_of_type()
1879 * Find the swap type that corresponds to the given device (if any).
1882 * from 0, in which the swap header is expected to be located.
1933 * corresponding to given index in swap_info (swap type).
1947 * Return either the total number of swap pages of the given type, or the number
1979 * No need to decide whether this PTE shares the swap entry with others,
2031 * when reading from swap. This metadata may be indexed by swap entry in unuse_pte()
2351 * swap cache just before we acquired the page lock. The folio in try_to_unuse()
2352 * might even be back in swap cache on another swap area. But in try_to_unuse()
2363 * Let's check again to see if there are still swap entries in the map. in try_to_unuse()
2365 * Under global memory pressure, swap entries can be reinserted in try_to_unuse()
2369 * above fails, that mm is likely to be freeing swap from in try_to_unuse()
2372 * folio_alloc_swap(), temporarily hiding that swap. It's easy in try_to_unuse()
2391 * After a successful try_to_unuse, if no swap is now in use, we know
2481 * A `swap extent' is a simple thing which maps a contiguous range of pages
2482 * onto a contiguous range of disk blocks. An rbtree of swap extents is
2488 * swap files identically.
2490 * Whether the swapdev is an S_ISREG file or an S_ISBLK blockdev, the swap
2500 * For all swap devices we set S_SWAPFILE across the life of the swapon. This
2501 * prevents users from writing to the swap device, which would corrupt memory.
2503 * The amount of disk space which a single swap extent represents varies.
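For reference, a swap extent is a small rbtree node that records this mapping; roughly (see include/linux/swap.h for the authoritative definition):

    struct swap_extent {
            struct rb_node rb_node;   /* keyed by start_page */
            pgoff_t start_page;       /* first swap page offset covered */
            pgoff_t nr_pages;         /* number of pages in this extent */
            sector_t start_block;     /* first disk block backing the extent */
    };

    /*
     * A swap page offset inside an extent maps to disk block
     * start_block + (offset - start_page).
     */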
2561 * low-to-high, while swap ordering is high-to-low in setup_swap_info()
2592 * which allocates swap pages from the highest available priority in _enable_swap_info()
2612 * Finished initializing swap device, now it's safe to reference it. in enable_swap_info()
2651 * Called after the swap device's reference count is dead, so
2662 * Invalidate the percpu swap cluster cache, si->users in flush_percpu_swap_cluster()
2749 /* re-insert swap space back into swap_list */ in SYSCALL_DEFINE1()
2755 * Wait for swap operations protected by get/put_swap_device() in SYSCALL_DEFINE1()
2756 * to complete. Because of synchronize_rcu() here, all swap in SYSCALL_DEFINE1()
2759 * prevent folio_test_swapcache() and the following swap cache in SYSCALL_DEFINE1()
2801 /* Destroy swap account information */ in SYSCALL_DEFINE1()
2848 static void *swap_start(struct seq_file *swap, loff_t *pos) in swap_start() argument
2869 static void *swap_next(struct seq_file *swap, void *v, loff_t *pos) in swap_next() argument
2889 static void swap_stop(struct seq_file *swap, void *v) in swap_stop() argument
2894 static int swap_show(struct seq_file *swap, void *v) in swap_show() argument
2902 seq_puts(swap, "Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority\n"); in swap_show()
2910 len = seq_file_path(swap, file, " \t\n\\"); in swap_show()
2911 seq_printf(swap, "%*s%s\t%lu\t%s%lu\t%s%d\n", in swap_show()
3051 * Find out how many pages are allowed for a single swap device. There
3053 * 1) the number of bits for the swap offset in the swp_entry_t type, and
3054 * 2) the number of bits in the swap pte, as defined by the different
3057 * In order to find the largest possible bit mask, a swap entry with
3058 * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
3059 * decoded to a swp_entry_t again, and finally the swap offset is
3064 * of a swap pte.
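That encode/decode round trip is what generic_max_swapfile_size() performs; a sketch of the idea (architectures may further restrict the result):

    unsigned long generic_max_swapfile_size(void)
    {
            /* Encode the widest possible offset, decode it again, and see
             * how much of it survives the swap pte format. */
            return swp_offset(pte_to_swp_entry(
                        swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
    }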
3088 pr_err("Unable to find swap-space signature\n"); in read_swap_header()
3092 /* swap partition endianness hack... */ in read_swap_header()
3102 /* Check the swap header's sub-version */ in read_swap_header()
3104 pr_warn("Unable to handle swap header version %d\n", in read_swap_header()
3112 pr_warn("Empty swap-file\n"); in read_swap_header()
3116 pr_warn("Truncating oversized swap area, only using %luk out of %luk\n", in read_swap_header()
3130 pr_warn("Swap area shorter than signature indicates\n"); in read_swap_header()
3173 pr_warn("Empty swap-file\n"); in setup_swap_map_and_extents()
3335 * The swap subsystem needs a major overhaul to support this. in SYSCALL_DEFINE2()
3344 * Read the swap header. in SYSCALL_DEFINE2()
3363 /* OK, set up the swap map and apply the bad block list */ in SYSCALL_DEFINE2()
3383 * be above MAX_PAGE_ORDER in case of a large swap file. in SYSCALL_DEFINE2()
3415 * When discard is enabled for swap with no particular in SYSCALL_DEFINE2()
3416 * policy flagged, we set all swap discard flags here in in SYSCALL_DEFINE2()
3426 * perform discards for released swap page-clusters. in SYSCALL_DEFINE2()
3453 * swap device. in SYSCALL_DEFINE2()
3468 pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s\n", in SYSCALL_DEFINE2()
3533 * Verify that nr swap entries are valid and increment their swap map counts.
3538 * - swap-cache reference is requested but there is already one. -> EEXIST
3539 * - swap-cache reference is requested but the entry is not used. -> ENOENT
3540 * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
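The -ENOMEM case is the interesting one: when a swap map count can no longer be held inline, the caller is expected to attach a continuation page and retry. Older kernels spelled this retry loop in swap_duplicate() roughly as below (a sketch; current code batches nr entries and may differ):

    int swap_duplicate(swp_entry_t entry)
    {
            int err = 0;

            /* Retry until the count fits or a continuation cannot be added. */
            while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
                    err = add_swap_count_continuation(entry, GFP_ATOMIC);
            return err;
    }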
3567 * swapin_readahead() doesn't check if a swap entry is valid, so the in __swap_duplicate()
3568 * swap entry could be SWAP_MAP_BAD. Check here with lock held. in __swap_duplicate()
3620 * Help swapoff by noting that swap entry belongs to shmem/tmpfs
3629 * Increase reference count of swap entry by 1.
3645 * @entry: first swap entry from which we allocate nr swap cache entries.
3647 * Called when allocating swap cache for existing swap entries,
3649 * -EEXIST means there is a swap cache.
3674 return swp_swap_info(folio->swap)->swap_file->f_mapping; in swapcache_mapping()
3680 return swap_cache_index(folio->swap); in __folio_swap_cache_index()
3685 * add_swap_count_continuation - called when a swap count is duplicated
3688 * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
3720 * __swap_duplicate(): the swap device may be swapoff in add_swap_count_continuation()
3733 * The higher the swap count, the more likely it is that tasks in add_swap_count_continuation()
3734 * will race to add swap count continuation: we need to avoid in add_swap_count_continuation()
3927 * We've already scheduled a throttle, avoid taking the global swap in __folio_throttle_swaprate()
3952 pr_emerg("Not enough memory for swap heads, swap is disabled\n"); in swapfile_init()