RDMA reduces the number of data copies by bypassing the host networking stack. In particular, a TCP-based migration, under certain types of memory-bound workloads, may take a more unpredictable amount of time to complete.

RDMA currently comes in two flavors: Ethernet-based (RoCE, or RDMA over Converged Ethernet) as well as Infiniband-based. This implementation of RDMA migration can use either, by way of the OpenFabrics OFED software stack, which abstracts the programming model from the underlying hardware.
Because of the memory pinning it requires, the use of RDMA migration may in fact be harmful to co-located VMs or other overcommitted workloads on the same host.
The 'rdma-pin-all' capability pins all of the VM's memory up-front during the bulk-phase round of the migration and can be enabled for extremely high-performance RDMA hardware using the following command:

$ migrate_set_capability rdma-pin-all on # disabled by default
$ migrate_set_parameter max-bandwidth 40g # or whatever is the MAX of your RDMA device

Next, on the destination machine, start QEMU listening for an RDMA connection:

qemu ..... -incoming rdma:host:port

Finally, perform the actual migration on the source machine:

$ migrate -d rdma:host:port
Using a 40gbps Infiniband link performing a worst-case stress test, the VM's memory was dirtied continuously using:

$ apt-get install stress
$ stress --vm-bytes 7500M --vm 1 --vm-keep

Results:

1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
With pin-all enabled, the entire VM's memory is pinned during the bulk round and does not need to be re-registered during the successive iteration rounds.

The difference between a SEND message and an RDMA message is that SEND messages cause notifications to be posted to the completion queue (CQ) on the receiver side, whereas RDMA messages (used for VM's ram) do not (to behave like an actual DMA).
2. (SEND only) work requests to be posted on both sides of the network before the actual transmission can occur.
The initial connection setup is as follows (migration-rdma.c):

2. Both sides post two RQ work requests
the receiver will check this field and register all requests found in the array of commands located in the data portion of the message. The maximum number of repeats is hard-coded to 4096. This is a conservative limit based on the maximum size of a SEND message.
3. Ready (control-channel is available)
4. QEMU File (for sending non-live device state)
After ram block exchange is completed, we have two protocol-level functions responsible for communicating control-channel commands between the two sides.
5. Verify that the command-type and version received match the ones we expected.
These functions are used when transmitting non-live state, such as device state.
Capabilities (such as rdma-pin-all) are negotiated at connection-setup time, before any infiniband traffic is generated.
Finally: Negotiation happens with the Flags field: If the primary-VM requests a capability that the server does not support, the server will return a zero-bit for that flag and the primary-VM will understand that the capability is unavailable and will thus disable that capability on the primary-VM side.
These functions simply use the protocol described above to deliver bytes without changing the upper-level users of QEMUFile that depend on a bytestream abstraction.
On the receiving side, we need to hold on to the bytes received from the control-channel's SEND messages until the consumer actually asks for them.
Each time we receive a complete "QEMU File" control-channel message, the bytes from that message are copied into a local buffer. When that buffer is drained, the receiver responds with a "Ready" message, asking for a new SEND message to re-fill the buffer.
At the beginning of the migration, (migration-rdma.c), the two sides exchange a description of all RAMBlocks, including their lengths and base addresses; this description possibly includes pre-registered RDMA keys in case dynamic page registration was disabled on the server-side, otherwise not.
Pages are migrated in "chunks" (hard-coded to 1 Megabyte right now).
Error-handling:

An error on the RDMA channel aborts the migration in the same way as when a TCP socket is broken during a non-RDMA based migration.
1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits are not taken into account, but they need to be.
3. Also, some form of balloon-device usage tracking would also be helpful.
4. Use LRU to provide more fine-grained direction of UNREGISTER
   requests for unpinning memory in an overcommitted environment.
5. Expose UNREGISTER support to the user by way of workload-specific hints.