Lines Matching full:migration

3  * Memory Migration functionality - linux/mm/migrate.c
7 * Page migration was first developed in the context of the memory hotplug
8 * project. The main authors of the migration code are:
111 * compaction threads can race against page migration functions in isolate_movable_page()
115 * being (wrongly) re-isolated while it is under migration, in isolate_movable_page()
163 * from where they were once taken off for compaction/migration.
203 * Restore a potential migration pte to a working pte entry
227 /* PMD-mapped THP migration entry */ in remove_migration_pte()
241 * Recheck VMA as permissions can change since migration started in remove_migration_pte()
291 * Get rid of all migration entries and replace them by
308 * Something used the pte of a page under migration. We need to
309 * get to the page and wait until migration is finished.
331 * Once page cache replacement of page migration started, page_count in __migration_entry_wait()
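The hits above (remove_migration_pte(), __migration_entry_wait()) revolve around migration entries: non-present ptes that encode "this page is being migrated" so faulting threads can wait. As an illustration only, here is a toy C model of such a swap-style entry, with a type field and a pfn packed into one word. The bit layout and constant values are invented for the sketch and are not the kernel's actual encoding.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout: top bits hold a swap "type", low bits the pfn.
 * Two hypothetical type values mark read-only vs writable migration
 * entries, loosely mirroring the kernel's SWP_MIGRATION_{READ,WRITE}. */
#define SWP_TYPE_SHIFT      58
#define SWP_MIGRATION_READ  30ULL
#define SWP_MIGRATION_WRITE 31ULL

typedef struct { uint64_t val; } swp_entry_t;

/* Build a migration entry for a page frame being moved. */
static swp_entry_t make_migration_entry(uint64_t pfn, bool writable)
{
    uint64_t type = writable ? SWP_MIGRATION_WRITE : SWP_MIGRATION_READ;
    return (swp_entry_t){ (type << SWP_TYPE_SHIFT) | pfn };
}

/* A fault handler would test this before deciding to wait. */
static bool is_migration_entry(swp_entry_t e)
{
    uint64_t type = e.val >> SWP_TYPE_SHIFT;
    return type == SWP_MIGRATION_READ || type == SWP_MIGRATION_WRITE;
}

/* Recover the page frame so the waiter can find the page under migration. */
static uint64_t migration_entry_to_pfn(swp_entry_t e)
{
    return e.val & ((1ULL << SWP_TYPE_SHIFT) - 1);
}
```

In this model, remove_migration_pte() corresponds to replacing such an entry with a working pte once migration finishes, and __migration_entry_wait() to sleeping until that replacement happens.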
688 * Migration functions
741 * async migration. Release the taken locks in buffer_migrate_lock_buffers()
837 * Migration function for pages with buffers. This function can only be used
886 * migration. Writeout may mean we lose the lock and the in writeout()
888 * At this point we know that the migration attempt cannot in writeout()
903 * Default handling if a filesystem does not provide a migration function.
909 /* Only writeback pages in full synchronous migration */ in fallback_migrate_page()
963 * for page migration. in move_to_new_page()
973 * isolation step. In that case, we shouldn't try migration. in move_to_new_page()
1052 * Only in the case of a full synchronous migration is it in __unmap_and_move()
1074 * of migration. File cache pages are no problem because of page_lock() in __unmap_and_move()
1075 * File Caches may use write_page() or lock_page() in migration, then, in __unmap_and_move()
1122 /* Establish migration ptes */ in __unmap_and_move()
1146 * If migration is successful, decrease refcount of the newpage in __unmap_and_move()
1221 * If migration is successful, releases reference grabbed during in unmap_and_move()
1257 * Counterpart of unmap_and_move() for hugepage migration.
1260 * because there is no race between I/O and migration for hugepage.
1268 * hugepage migration fails without data corruption.
1270 * There is also no race when direct I/O is issued on the page under migration,
1271 * because then pte is replaced with migration swap entry and direct I/O code
1272 * will wait in the page fault for migration to complete.
1287 * This check is necessary because some callers of hugepage migration in unmap_and_move_huge_page()
1290 * kicking migration. in unmap_and_move_huge_page()
1383 * If migration was not successful and there's a freeing callback, use in unmap_and_move_huge_page()
1397 * supplied as the target for the page migration
1401 * as the target of the page migration.
1402 * @put_new_page: The function used to free target pages if migration
1405 * @mode: The migration mode that specifies the constraints for
1406 * page migration, if any.
1407 * @reason: The reason for page migration.
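Lines 1397-1407 quote the kernel-doc for migrate_pages(): the caller supplies an allocation callback for target pages and an optional free callback used when migration fails, and the function reports how many pages could not be moved. The following is a hedged, userspace-only model of that callback contract; the names echo the kernel API but migrate_pages_model(), migrate_one(), alloc_target(), and free_target() are all invented here, and the "dirty pages fail" rule just stands in for real failure modes such as async migration hitting writeback.

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy page and the two caller-supplied callbacks, mirroring the
 * @get_new_page / @put_new_page parameters documented above. */
struct page { int data; int dirty; };

typedef struct page *(*new_page_t)(struct page *old, unsigned long private);
typedef void (*free_page_t)(struct page *newpage, unsigned long private);

/* Stand-in for one migration attempt: pretend dirty pages cannot be
 * migrated (as if in an async mode that refuses to write back). */
static int migrate_one(struct page *old, struct page *new)
{
    if (old->dirty)
        return -1;
    new->data = old->data;  /* "copy" the page contents */
    return 0;
}

/* Returns the number of pages that could NOT be migrated, like the
 * kernel function. Targets of failed attempts are handed back to
 * put_new_page(), matching the documented contract. */
static int migrate_pages_model(struct page **from, size_t n,
                               new_page_t get_new_page,
                               free_page_t put_new_page,
                               unsigned long private)
{
    int nr_failed = 0;
    for (size_t i = 0; i < n; i++) {
        struct page *new = get_new_page(from[i], private);
        if (!new || migrate_one(from[i], new) != 0) {
            if (new && put_new_page)
                put_new_page(new, private);  /* free the unused target */
            nr_failed++;
        }
    }
    return nr_failed;
}

static struct page *alloc_target(struct page *old, unsigned long private)
{
    (void)old; (void)private;
    return calloc(1, sizeof(struct page));
}

static void free_target(struct page *newpage, unsigned long private)
{
    (void)private;
    free(newpage);
}
```

The split into "allocate target" and "free unused target" callbacks is what lets one migrate_pages() core serve many policies (compaction, NUMA balancing, hotplug): each caller decides where replacement pages come from.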
1446 * during migration. in migrate_pages()
1464 * THP migration might be unsupported or the in migrate_pages()
1509 * removed from migration page list and not in migrate_pages()
1564 * clear __GFP_RECLAIM to make the migration callback in alloc_migration_target()
1771 /* The page is successfully queued for migration */ in do_pages_move()
1984 * Returns true if this is a safe migration target node for misplaced NUMA
2038 * migrate_misplaced_transhuge_page() skips page migration's usual in numamigrate_isolate_page()
2040 * has been isolated: a GUP pin, or any other pin, prevents migration. in numamigrate_isolate_page()
2056 * disappearing underneath us during migration. in numamigrate_isolate_page()
2154 /* Prepare a page as a migration target */ in migrate_misplaced_transhuge_page()
2419 * any kind of migration. Side effect is that it "freezes" the in migrate_vma_collect_pmd()
2432 * set up a special migration page table entry now. in migrate_vma_collect_pmd()
2440 /* Setup special migration page table entry */ in migrate_vma_collect_pmd()
2490 * @migrate: migrate struct containing all migration information
2522 * migrate_page_move_mapping(), except that here we allow migration of a
2546 * GUP will fail for those. Yet if there is a pending migration in migrate_vma_check_page()
2547 * a thread might try to wait on the pte migration entry and in migrate_vma_check_page()
2549 * differentiate a regular pin from migration wait. Hence to in migrate_vma_check_page()
2551 * infinite loop (one stopping migration because the other is in migrate_vma_check_page()
2552 * waiting on pte migration entry). We always return true here. in migrate_vma_check_page()
2572 * @migrate: migrate struct containing all migration information
2598 * a deadlock between 2 concurrent migration where each in migrate_vma_prepare()
2679 * migrate_vma_unmap() - replace page mapping with special migration pte entry
2680 * @migrate: migrate struct containing all migration information
2682 * Replace page mapping (CPU page table pte) with a special migration pte entry
2738 * @args: contains the vma, start, and pfns arrays for the migration
2759 * with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
2769 * properly set the destination entry like for regular migration. Note that
2771 * migration was successful for those entries after calling migrate_vma_pages()
2772 * just like for regular migration.
2975 * @migrate: migrate struct containing all migration information
2978 * struct page. This effectively finishes the migration from source page to the
3058 * @migrate: migrate struct containing all migration information
3060 * This replaces the special migration pte entry with either a mapping to the
3061 * new page if migration was successful for that page, or to the original page
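The migrate_vma hits from line 2419 onward describe a three-phase driver flow: collect candidate ptes and mark migratable entries with MIGRATE_PFN_MIGRATE in the src array, let the driver copy pages (clearing the flag on entries whose copy failed), then finalize by restoring either the new or the original mapping. Below is a compact model of that flag protocol; the flag bits, shift, and the collect()/pages()/finalize() helpers are illustrative stand-ins, not the kernel's migrate_vma_setup()/migrate_vma_pages()/migrate_vma_finalize() implementations.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy src-array encoding: low bits carry flags, the rest the pfn.
 * Values are invented for this sketch. */
#define MIGRATE_PFN_VALID   (1ULL << 0)
#define MIGRATE_PFN_MIGRATE (1ULL << 1)
#define MIGRATE_PFN_SHIFT   2

static uint64_t migrate_pfn(uint64_t pfn)
{
    return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
}

/* Phase 1 (cf. migrate_vma_collect_pmd / check_page above): mark the
 * entries that are safe to migrate; pinned pages are skipped. */
static void collect(uint64_t *src, size_t n, const int *pinned)
{
    for (size_t i = 0; i < n; i++)
        if (!pinned[i])
            src[i] |= MIGRATE_PFN_MIGRATE;
}

/* Phase 2: the driver copies data to its destination pages; a failed
 * copy clears MIGRATE_PFN_MIGRATE so finalize restores the original
 * mapping for that entry, as the comments around line 2771 describe. */
static void pages(uint64_t *src, size_t n, const int *copy_failed)
{
    for (size_t i = 0; i < n; i++)
        if ((src[i] & MIGRATE_PFN_MIGRATE) && copy_failed[i])
            src[i] &= ~MIGRATE_PFN_MIGRATE;
}

/* Phase 3: count the entries that actually migrated; for each, the
 * kernel would map the new page, and the original page otherwise. */
static size_t finalize(const uint64_t *src, size_t n)
{
    size_t migrated = 0;
    for (size_t i = 0; i < n; i++)
        if (src[i] & MIGRATE_PFN_MIGRATE)
            migrated++;
    return migrated;
}
```

Keeping success/failure as a per-entry flag in the src array is what lets a partially failed batch complete cleanly: every entry independently ends up mapped to exactly one of the two pages.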