#
1b1fdfea |
| 31-Jan-2013 |
Orit Wasserman <owasserm@redhat.com> |
Allow XBZRLE decoding without enabling the capability
Before this fix we couldn't load a guest from an XBZRLE compressed file.
For example: the user activated the XBZRLE capability, then ran migrate -d "exec:gzip -c > vm.gz". The user wouldn't be able to load vm.gz and would get an error.
Signed-off-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
9c339485 |
| 20-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
Protect migration_bitmap_sync() with the ramlist lock
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
|
#
fb3409de |
| 20-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
Unlock ramlist lock also in error case
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
|
#
b823ceaa |
| 10-Dec-2012 |
Juan Quintela <quintela@redhat.com> |
ram: refactor ram_save_block() return value
It could only return 0 if we only found dirty xbzrle pages that hadn't changed (i.e. they were written with the same content). We don't care about that case, it is the same as nothing dirty.
So now the function returns how much it has written, nothing else. Adjust callers.
And we also made ram_save_iterate() return the number of transferred bytes, not the number of transferred pages.
Signed-off-by: Juan Quintela <quintela@redhat.com>
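As a rough, hedged illustration of the adjusted accounting (the helper name and the byte counts below are invented for the example; this is not the QEMU code): the caller now just sums the bytes each block save reports, and the iteration result is a byte count rather than a page count.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative stand-in for the per-block save step: returns the number
     * of bytes written for one block, or 0 when nothing was written. */
    static int save_block_stub(int round)
    {
        static const int bytes_per_round[] = { 4096, 4096, 1500, 0 };
        return bytes_per_round[round];
    }

    int main(void)
    {
        uint64_t total_bytes = 0;
        int round = 0;
        int sent;

        /* The caller simply sums bytes until a pass writes nothing. */
        while ((sent = save_block_stub(round++)) != 0) {
            total_bytes += sent;
        }

        /* The iteration reports transferred bytes, not transferred pages. */
        printf("transferred %llu bytes\n", (unsigned long long)total_bytes);
        return 0;
    }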
|
#
3f7d7b09 |
| 18-Oct-2012 |
Juan Quintela <quintela@redhat.com> |
ram: account the amount of transferred ram better
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
4c8ae0f6 |
| 17-Oct-2012 |
Juan Quintela <quintela@redhat.com> |
ram: optimize migration bitmap walking
Instead of testing each page individually, we search for the next dirty page with a bitmap operation. We have to reorganize the code to move from a "for" loop to a while(dirty) loop.
Signed-off-by: Juan Quintela <quintela@redhat.com>
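A self-contained sketch of the idea in plain C (the bitmap helpers below are simplified stand-ins, not QEMU's bitops): rather than a for loop testing every page, a while loop asks the bitmap for the next dirty page and jumps straight to it, skipping clean words in one step.

    #include <stdio.h>

    #define NBITS 128
    #define BITS_PER_LONG ((int)(8 * sizeof(unsigned long)))

    static void set_dirty(unsigned long *map, int page)
    {
        map[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
    }

    /* Simplified find-next-set-bit: skips whole clean words instead of
     * testing every page individually. */
    static int find_next_dirty(const unsigned long *map, int size, int start)
    {
        int i = start;

        while (i < size) {
            unsigned long word = map[i / BITS_PER_LONG] >> (i % BITS_PER_LONG);
            if (word == 0) {
                /* the rest of this word is clean: jump to the next word */
                i = (i / BITS_PER_LONG + 1) * BITS_PER_LONG;
                continue;
            }
            i += __builtin_ctzl(word);   /* offset of the next set bit */
            return i < size ? i : size;
        }
        return size;                     /* no more dirty pages */
    }

    int main(void)
    {
        unsigned long dirty[4] = { 0 };  /* enough for 128 pages */
        int page;

        set_dirty(dirty, 3);
        set_dirty(dirty, 17);
        set_dirty(dirty, 97);

        /* while(dirty)-style walk: jump from one dirty page to the next. */
        page = find_next_dirty(dirty, NBITS, 0);
        while (page < NBITS) {
            printf("send dirty page %d\n", page);
            page = find_next_dirty(dirty, NBITS, page + 1);
        }
        return 0;
    }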
|
#
ece79318 |
| 17-Oct-2012 |
Juan Quintela <quintela@redhat.com> |
ram: Use memory_region_test_and_clear_dirty
This avoids having to do two walks over the dirty bitmap, one reading the dirty bits and another clearing them.
Signed-off-by: Juan Quintela <quintela@redhat.com>
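A minimal sketch of why a combined test-and-clear helps (plain C with made-up names, not the QEMU memory API): one pass both reads and clears each dirty flag, so no second walk is needed just to reset the bits.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES 8

    /* Combined operation: report whether the page was dirty, and clear it.
     * One walk over the dirty state replaces a read pass plus a clear pass. */
    static bool test_and_clear_dirty(unsigned char *dirty, int page)
    {
        bool was_dirty = dirty[page];
        dirty[page] = 0;
        return was_dirty;
    }

    int main(void)
    {
        unsigned char dirty[NPAGES] = { 0, 1, 0, 1, 1, 0, 0, 1 };

        for (int page = 0; page < NPAGES; page++) {
            if (test_and_clear_dirty(dirty, page)) {
                printf("page %d was dirty, now clean\n", page);
            }
        }
        return 0;
    }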
|
#
5f718a15 |
| 17-Oct-2012 |
Juan Quintela <quintela@redhat.com> |
ram: Add last_sent_block
This is the last block from which we have sent data.
Signed-off-by: Orit Wasserman <owasserm@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
b23a9a5c |
| 17-Oct-2012 |
Juan Quintela <quintela@redhat.com> |
ram: rename last_block to last_seen_block
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
e4ed1541 |
| 21-Sep-2012 |
Juan Quintela <quintela@redhat.com> |
savevm: New save live migration method: pending
Right now the code does (simplified for clarity):

    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

The problem here is that qemu_savevm_state_iterate() returns 1 when it knows that the remaining memory to send takes less than the max downtime.
But this means that we could end up spending 2x max_downtime: one downtime in qemu_savevm_state_iterate(), and the other in qemu_savevm_state_complete().
Changed code to:
    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

So what we do is: at the current network speed, we calculate the maximum number of bytes we can send: max_size.
Then we ask every save_live section how much it has pending. If that is less than max_size, we move to the completion phase; otherwise we do another iteration.
This makes things much simpler, because now individual sections don't have to calculate the bandwidth (it was impossible to do right from there).
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
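A worked example of the max_size computation described above (the figures and the bandwidth formula are assumptions for illustration; only the pending_size vs. max_size comparison mirrors the code quoted here): max_size is how many bytes fit into the allowed downtime at the current rate, and the pending amount decides between iterating again and completing.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed example figures, not measured values. */
        uint64_t bandwidth_bytes_per_ms = 125000;      /* ~1 Gbit/s */
        uint64_t max_downtime_ms = 300;                /* allowed downtime */
        uint64_t pending_bytes = 50ULL * 1024 * 1024;  /* dirty RAM left */

        /* Bytes we could push during the allowed downtime at this rate. */
        uint64_t max_size = bandwidth_bytes_per_ms * max_downtime_ms;

        if (pending_bytes >= max_size) {
            /* Too much left: keep iterating, do not stop the guest yet. */
            printf("pending %" PRIu64 " >= max %" PRIu64 ": iterate again\n",
                   pending_bytes, max_size);
        } else {
            /* Small enough: stop the guest and finish within max_downtime. */
            printf("pending %" PRIu64 " < max %" PRIu64 ": complete migration\n",
                   pending_bytes, max_size);
        }
        return 0;
    }

With these example numbers max_size is 37.5 MB, so 50 MB of pending RAM means another iteration rather than stopping the guest.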
|
#
b2a8658e |
| 17-Aug-2011 |
Umesh Deshpande <udeshpan@redhat.com> |
protect the ramlist with a separate mutex
Add the new mutex that protects shared state between ram_save_live and the iothread. If the iothread mutex has to be taken together with the ramlist mutex, the iothread shall always be _outside_.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Umesh Deshpande <udeshpan@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
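A minimal pthread sketch of the stated lock ordering (the mutex names are stand-ins, not the actual QEMU symbols): whenever both locks are needed, the iothread lock is taken first, i.e. it always stays outside the ramlist lock. Keeping a single global order is what prevents the two paths from deadlocking against each other.

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-ins for the two locks discussed above. */
    static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ramlist_lock  = PTHREAD_MUTEX_INITIALIZER;

    /* Touching the RAM list alone only needs the ramlist lock. */
    static void walk_ram_list(void)
    {
        pthread_mutex_lock(&ramlist_lock);
        printf("walking the RAM list\n");
        pthread_mutex_unlock(&ramlist_lock);
    }

    /* When both are needed, always take the iothread lock first (outside)
     * and release in the reverse order. */
    static void sync_under_both_locks(void)
    {
        pthread_mutex_lock(&iothread_lock);
        pthread_mutex_lock(&ramlist_lock);
        printf("syncing with both locks held\n");
        pthread_mutex_unlock(&ramlist_lock);
        pthread_mutex_unlock(&iothread_lock);
    }

    int main(void)
    {
        walk_ram_list();
        sync_under_both_locks();
        return 0;
    }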
|
#
f798b07f |
| 18-Aug-2011 |
Umesh Deshpande <udeshpan@redhat.com> |
add a version number to ram_list
This will be used to detect if last_block might have become invalid across different calls to ram_save_live.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Umesh Deshpande <udeshpan@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
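A small sketch of how such a version number can be used (generic C with invented names; the real fields live in QEMU's ram_list and migration code): the saver remembers the version it last saw and drops its cached last_block whenever the list was modified in between.

    #include <stdio.h>

    /* Stand-in for the RAM block list plus its modification counter. */
    struct ram_list_sketch {
        unsigned version;          /* bumped on every add/remove of a block */
        const char *blocks[4];
    };

    /* Saver-side state cached across calls. */
    static unsigned last_version;
    static int last_block = -1;    /* index of the block we stopped at */

    static void list_changed(struct ram_list_sketch *l)
    {
        l->version++;              /* any mutation invalidates cached state */
    }

    static void save_pass(struct ram_list_sketch *l)
    {
        if (last_version != l->version) {
            /* The list changed since the previous pass: last_block may point
             * at a removed or moved entry, so restart from the beginning. */
            last_block = -1;
            last_version = l->version;
        }
        last_block = (last_block + 1) % 4;
        printf("saving from block '%s'\n", l->blocks[last_block]);
    }

    int main(void)
    {
        struct ram_list_sketch l = {
            0, { "pc.ram", "vga.vram", "pc.bios", "pc.rom" }
        };

        save_pass(&l);      /* starts at the first block */
        save_pass(&l);      /* continues from where it stopped */
        list_changed(&l);   /* e.g. a block was added or removed */
        save_pass(&l);      /* detects the version bump and restarts */
        return 0;
    }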
|
#
abb26d63 |
| 14-Nov-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
exec: sort the memory from biggest to smallest
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
a3161038 |
| 14-Nov-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
exec: change RAM list to a TAILQ
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
0d6d3c87 |
| 14-Nov-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
exec: change ramlist from MRU order to a 1-item cache
Most of the time, only 2 items will be active (from/to for a string operation, or code/data). But TCG guests likely won't have gigabytes of memory, so this actually goes down to 1 item.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
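A simplified sketch of the lookup pattern (generic C, not the actual exec.c code; block names and sizes are invented): the block that satisfied the previous lookup is kept in a one-entry cache and checked first; only on a miss is the list scanned and the cache updated.

    #include <stdio.h>
    #include <stdint.h>

    struct block {
        const char *name;
        uintptr_t offset;     /* start of the block in the address space */
        uintptr_t length;
    };

    static struct block blocks[] = {
        { "pc.ram",   0x0000000, 0x8000000 },
        { "vga.vram", 0x8000000, 0x1000000 },
        { "pc.bios",  0x9000000, 0x0020000 },
    };

    /* One-item cache: the block that satisfied the previous lookup. */
    static struct block *cached_block;

    static struct block *find_block(uintptr_t addr)
    {
        /* Fast path: most lookups hit the same block as the previous one. */
        if (cached_block && addr - cached_block->offset < cached_block->length) {
            return cached_block;
        }

        /* Slow path: scan the list and remember the match for next time. */
        for (unsigned i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++) {
            if (addr - blocks[i].offset < blocks[i].length) {
                cached_block = &blocks[i];
                return cached_block;
            }
        }
        return NULL;
    }

    int main(void)
    {
        printf("%s\n", find_block(0x100)->name);      /* miss, then cached */
        printf("%s\n", find_block(0x200)->name);      /* hit: same block */
        printf("%s\n", find_block(0x8000010)->name);  /* miss: cache updated */
        return 0;
    }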
|
#
244eaa75 |
| 12-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
migration: fix migration_bitmap leak
Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
|
#
9c17d615 |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
softmmu: move include files to include/sysemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
1de7afc9 |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
misc: move include files to include/qemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
caf71f86 |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
migration: move include files to include/migration/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
83c9089e |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
monitor: move include files to include/monitor/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
022c62cb |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
exec: move include files to include/exec/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
7fa22f2b |
| 24-Oct-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
net: do not include net.h everywhere
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
077805fa |
| 25-Sep-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
janitor: do not rely on indirect inclusions of or from qemu-char.h
Various header files rely on qemu-char.h including qemu-config.h or main-loop.h, but they really do not need qemu-char.h at all (particularly interesting is the case of the block layer!). Clean this up, and also add missing inclusions of qemu-char.h itself.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
6f991980 |
| 17-Dec-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
Merge commit '1dd3a74d2ee2d873cde0b390b536e45420b3fe05' into HEAD
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
a2cb15b0 |
| 12-Dec-2012 |
Michael S. Tsirkin <mst@redhat.com> |
pci: update all users to look in pci/
update all users so we can remove the makefile hack.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|