
data copies by bypassing the host networking stack. In particular, a TCP-based
migration, under certain types of memory-bound workloads, may take a more
over Converged Ethernet) as well as InfiniBand-based. This implementation of
of RDMA migration may in fact be harmful to co-located VMs or other
bulk-phase round of the migration and can be enabled for extremely
high-performance RDMA hardware using the following command:

$ migrate_set_capability rdma-pin-all on # disabled by default

On the other hand, this will also significantly speed up the bulk round
First, set the migration speed to match your hardware's capabilities:

$ migrate_set_parameter max-bandwidth 40g # or whatever is the MAX of your RDMA device

qemu ..... -incoming rdma:host:port

$ migrate -d rdma:host:port
Using a 40 Gbps InfiniBand link performing a worst-case stress test,

$ apt-get install stress
$ stress --vm-bytes 7500M --vm 1 --vm-keep
1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
the bulk round and does not need to be re-registered during the successive
as follows (migration-rdma.c):
The maximum number of repeats is hard-coded to 4096. This is a conservative
limit based on the maximum size of a SEND message along with empirical
observations on the maximum future benefit of simultaneous page registrations.
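The batching rule above can be sketched as a ceiling division: requesting registration of N pages takes ceil(N / 4096) SEND messages. This is a minimal illustration, not the actual code; the helper name sends_needed is hypothetical, while the 4096 cap comes from the text.

```c
#include <assert.h>
#include <stdint.h>

/* Upper bound on repeated page registrations carried in a single
 * SEND message, hard-coded to 4096 as described above. */
#define MAX_REGISTRATION_REPEATS 4096

/* Hypothetical helper: how many SEND messages are needed to request
 * registration of 'num_pages' pages, batching up to
 * MAX_REGISTRATION_REPEATS registrations per message. */
static uint64_t sends_needed(uint64_t num_pages)
{
    return (num_pages + MAX_REGISTRATION_REPEATS - 1)
           / MAX_REGISTRATION_REPEATS;
}
```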
3. Ready (control-channel is available)
4. QEMU File (for sending non-live device state)
After ram block exchange is completed, we have two protocol-level
functions, responsible for communicating control-channel commands
5. Verify that the command-type and version received match the ones we expected.
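The verification step above can be sketched as a simple header check on receipt. This is a hedged illustration only: the struct layout and names here are hypothetical, and the real control header in migration-rdma.c may carry additional fields.

```c
#include <stdint.h>

/* Hypothetical on-the-wire control header; the actual layout in
 * migration-rdma.c may differ. */
typedef struct {
    uint32_t type;     /* command type, e.g. Ready, QEMU File, ... */
    uint32_t version;  /* protocol version of the sender */
} ControlHeader;

/* Step 5 above: accept a received message only if both the
 * command-type and the version match what this side expected. */
static int header_matches(const ControlHeader *h,
                          uint32_t expected_type,
                          uint32_t expected_version)
{
    return h->type == expected_type && h->version == expected_version;
}
```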
when transmitting non-live state, such as devices or to send
at connection-setup time before any InfiniBand traffic is generated.
no length field. The maximum size of the 'private data' section
Finally: Negotiation happens with the Flags field: If the primary-VM
will return a zero-bit for that flag and the primary-VM will understand
capability on the primary-VM side.
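The flag negotiation above amounts to a bitwise AND: the destination answers with only the requested bits it supports, so an unsupported capability comes back as a zero bit and the primary-VM disables it locally. A minimal sketch, assuming a single pin-all capability bit; the macro and function names are illustrative, not the actual identifiers.

```c
#include <stdint.h>

/* Hypothetical capability bit exchanged in the private-data Flags
 * field, mirroring the rdma-pin-all option described earlier. */
#define RDMA_CAPABILITY_PIN_ALL (1u << 0)

/* Destination side of the negotiation: keep only the requested
 * capabilities that this side actually supports; every other
 * requested flag is returned as a zero bit. */
static uint32_t negotiate_flags(uint32_t requested, uint32_t supported)
{
    return requested & supported;
}
```

On the primary-VM side, a zero bit in the reply simply means the corresponding capability is cleared before migration proceeds.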
described above to deliver bytes without changing the upper-level
to hold on to the bytes received from control-channel's SEND
Each time we receive a complete "QEMU File" control-channel
asking for a new SEND message to re-fill the buffer.
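The buffering described above can be sketched as a drain-then-refill loop: bytes from each "QEMU File" SEND are held in a local buffer, handed out on demand, and a return of zero signals that the buffer is exhausted and a new SEND must be awaited. The struct and function names here are hypothetical, not the actual ones in migration-rdma.c.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical receive-side buffer holding the payload of one
 * "QEMU File" control-channel SEND message. */
typedef struct {
    unsigned char data[4096];
    size_t len;   /* bytes currently buffered */
    size_t pos;   /* next byte to hand to the caller */
} RecvBuffer;

/* Drain up to 'want' bytes into 'out'. A return of 0 means the
 * buffer is empty and the caller should block until the next SEND
 * message re-fills it. */
static size_t drain(RecvBuffer *b, unsigned char *out, size_t want)
{
    size_t avail = b->len - b->pos;
    size_t n = want < avail ? want : avail;
    memcpy(out, b->data + b->pos, n);
    b->pos += n;
    return n;
}
```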
At the beginning of the migration, (migration-rdma.c),
addresses and possibly includes pre-registered RDMA keys in case dynamic
page registration was disabled on the server-side, otherwise not.
Pages are migrated in "chunks" (hard-coded to 1 Megabyte right now).
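With fixed-size chunks, mapping a RAM block offset to its chunk is a single division, which is what lets the sender decide whether a chunk's registration has already been requested. A minimal sketch: the 1 MB size is from the text, but the helper name is hypothetical.

```c
#include <stdint.h>

/* Chunk granularity from the text: pages are migrated in 1 Megabyte
 * chunks (hard-coded right now). */
#define RDMA_CHUNK_SIZE (1024 * 1024)

/* Hypothetical helper: which chunk of a RAM block a given byte
 * offset falls into. */
static uint64_t chunk_index(uint64_t block_offset)
{
    return block_offset / RDMA_CHUNK_SIZE;
}
```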
Error-handling:
socket is broken during a non-RDMA based migration.
1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
2. Use of the recent /proc/<pid>/pagemap would likely speed up
3. Also, some form of balloon-device usage tracking would also
4. Use LRU to provide more fine-grained direction of UNREGISTER
5. Expose UNREGISTER support to the user by way of workload-specific