Lines Matching full:pages
51 * 1<<6=64 pages -> 256K chunk when page size is 4K. This gives us
52 * the benefit that all the chunks are aligned to 64 pages, so the
57 * 1<<18=256K pages -> 1G chunk when page size is 4K. This is the
62 * 1<<31=2G pages -> 8T chunk when page size is 4K. This should be
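
The three matched lines above (source lines 51-62) give the shift values that size a dirty-bitmap clear chunk: a chunk covers 1<<shift pages, so its byte size is (1<<shift) times the page size. A minimal sketch of that arithmetic, assuming the 4K page size the comments mention; the constant names below are illustrative, not taken from the source:

/* Chunk size implied by a page-count shift, assuming 4 KiB pages.
 * Shift values mirror the min/default/max quoted above; names are made up. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE_ASSUMED 4096ULL  /* the comments assume a 4K page size */
#define SHIFT_MIN         6        /* 64 pages   -> 256K chunk */
#define SHIFT_DEFAULT     18       /* 256K pages -> 1G chunk   */
#define SHIFT_MAX         31       /* 2G pages   -> 8T chunk   */

static uint64_t chunk_bytes(unsigned shift)
{
    return (1ULL << shift) * PAGE_SIZE_ASSUMED;
}

int main(void)
{
    printf("min:     %llu\n", (unsigned long long)chunk_bytes(SHIFT_MIN));     /* 262144 (256K) */
    printf("default: %llu\n", (unsigned long long)chunk_bytes(SHIFT_DEFAULT)); /* 1073741824 (1G) */
    printf("max:     %llu\n", (unsigned long long)chunk_bytes(SHIFT_MAX));     /* 8796093022208 (8T) */
    return 0;
}
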
79 /* Number of small pages copied (in size of TARGET_PAGE_SIZE) */
148 /* Postcopy priority thread is used to receive postcopy requested pages */
169 * An array of temp host huge pages to be used, one for each postcopy
215 /* A tree of pages that we requested from the source VM */
223 * The mutex helps to maintain the requested pages that we sent to the
237 * finished loading the urgent pages. If that happens, the two threads
239 * wait until all pages are received.
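
The matches from source lines 148-239 enumerate pieces of the destination-side postcopy state: a priority thread that receives urgently requested pages, one temporary host huge page per postcopy channel, a tree of pages already requested from the source VM, a mutex protecting that tree, and a way for one thread to wait until the urgent pages have been received. A hypothetical sketch of how those pieces could hang together, with invented names and plain pthreads rather than whatever primitives the source actually uses:

/* Hypothetical sketch of destination-side postcopy bookkeeping.
 * Field names are illustrative; the real structure and types differ. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct requested_page {            /* one node of the "requested pages" tree  */
    uint64_t guest_addr;           /* guest address we asked the source for   */
    struct requested_page *left, *right;
};

struct postcopy_incoming_state {
    pthread_t priority_thread;     /* receives urgently requested pages       */
    void **tmp_huge_pages;         /* one temp host huge page per channel     */
    size_t nr_channels;

    struct requested_page *page_requested;  /* tree of outstanding requests   */
    pthread_mutex_t page_request_mutex;     /* guards the tree above          */

    /* lets the loading thread wait until all urgent pages have been received */
    pthread_cond_t all_pages_received;
    bool urgent_pages_done;
};
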
303 /* pages already sent at the beginning of the current iteration */
306 /* pages transferred per second */
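
Source lines 303-306 are per-iteration counters: the page count captured at the start of the current iteration and a derived pages-per-second rate. That rate is simply the page delta over the iteration divided by its duration, as in this hedged sketch (the function and parameter names are made up):

/* Illustrative rate computation; the real counters and clock source differ. */
#include <stdint.h>

static double pages_per_second(uint64_t pages_now,
                               uint64_t iteration_initial_pages,
                               double elapsed_seconds)
{
    if (elapsed_seconds <= 0.0) {
        return 0.0;
    }
    /* pages sent during this iteration, divided by the iteration's duration */
    return (double)(pages_now - iteration_initial_pages) / elapsed_seconds;
}
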
445 * This ensures that we can't mix pages from one iteration through
446 * ram pages with pages for the following iteration. We really
448 * dirty pages. For historical reasons, we do that after each
491 * dirty bitmap only once for 1<<10=1K contiguous guest pages
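
The last group (source lines 445-491) is about bitmap handling between iterations: the dirty bitmap apparently gets re-synchronized after each round so that pages from one iteration are not mixed with pages for the next, and source line 491 mentions clearing the dirty bitmap only once per 1<<10 = 1K contiguous guest pages (4M per chunk with a 4K page size). A sketch of that coarse clear-bitmap idea, with invented names and a placeholder standing in for the real clear operation:

/* Sketch of a coarse "clear bitmap": one bit covers CLEAR_CHUNK_PAGES guest
 * pages, so the expensive clear operation runs at most once per chunk.
 * Names and the clear_chunk() hook are illustrative. */
#include <stdint.h>

#define CLEAR_CHUNK_SHIFT 10                        /* 1<<10 = 1K pages per bit */
#define CLEAR_CHUNK_PAGES (1ULL << CLEAR_CHUNK_SHIFT)

struct coarse_clear_bmap {
    unsigned long *bits;   /* bit set => chunk still needs its dirty log cleared */
    uint64_t nr_chunks;
};

/* Placeholder for the real "clear dirty log for this page range" operation. */
static void clear_chunk(uint64_t first_page, uint64_t nr_pages)
{
    (void)first_page;
    (void)nr_pages;
}

/* Before sending a page, clear the whole chunk it belongs to, once. */
static void maybe_clear_for_page(struct coarse_clear_bmap *b, uint64_t page)
{
    uint64_t chunk = page >> CLEAR_CHUNK_SHIFT;
    uint64_t word = chunk / (8 * sizeof(unsigned long));
    unsigned long mask = 1UL << (chunk % (8 * sizeof(unsigned long)));

    if (b->bits[word] & mask) {
        b->bits[word] &= ~mask;
        clear_chunk(chunk << CLEAR_CHUNK_SHIFT, CLEAR_CHUNK_PAGES);
    }
}
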