8597af76 | 06-Dec-2024 |
Peter Xu <peterx@redhat.com> |
migration/block: Rewrite disk activation
This patch proposes a flag to maintain disk activation status globally. It mostly rewrites disk activation management for QEMU, including COLO and the QMP command xen_save_devices_state.
Backgrounds
===========
We have two problems on disk activations, one resolved, one not.
Problem 1: disk activation recovery (for switchover interruptions)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When migration is either cancelled or failed during switchover, especially after the disks have been inactivated, QEMU needs to remember to re-activate the disks before the VM starts again.
It used to be done separately in two paths: one in qmp_migrate_cancel(), the other one in the failure path of migration_completion().
It used to be fixed in different commits, scattered all over the place in QEMU. These are the relevant changes I found; I'm not sure the list is complete:
- In 2016, commit fe904ea824 ("migration: regain control of images when migration fails to complete")
- In 2017, commit 1d2acc3162 ("migration: re-active images while migration been canceled after inactive them")
- In 2023, commit 6dab4c93ec ("migration: Attempt disk reactivation in more failure scenarios")
Now that we have a slightly better picture, maybe we can unify the reactivation into a single path.
One side benefit of doing so is that we can move the disk operation outside the QMP command "migrate_cancel". It's possible that in the future we may want to make "migrate_cancel" OOB-compatible, which requires that the command doesn't need the BQL in the first place. This change already does that and makes the migrate_cancel command lightweight.
Problem 2: disk invalidation on top of invalidated disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is an unresolved bug in current QEMU; see the link in "Resolves:" at the end. It turns out that besides the source switchover phase (problem 1 above), QEMU also needs to remember the block activation status on the destination.
Consider two migrations in a row where the VM stays paused throughout. In that scenario, the disks are still not activated even after the 1st migration completes. When the 2nd round starts, if QEMU doesn't know the status of the disks, it will try to inactivate them again.
Here the issue is that the block layer API bdrv_inactivate_all() will crash QEMU if invoked on already-inactive disks for the 2nd migration. For details, see the bug link at the end.
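To make the hazard concrete, here is a minimal, self-contained sketch of the guarded inactivation this patch moves towards; the flag and helper names are assumptions made for this example, not QEMU's actual identifiers:

    /* Illustrative sketch only; names are hypothetical, not QEMU code. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool disks_active = true;    /* assumed global activation status */

    /* Stand-in for bdrv_inactivate_all(), which must not be called again
     * once the disks are already inactive (that is the crash above). */
    static int do_inactivate_all(void)
    {
        printf("inactivating disks\n");
        return 0;
    }

    static int migration_disks_inactivate(void)
    {
        int ret;

        if (!disks_active) {
            /* 2nd migration with the VM still paused: nothing to do, and
             * crucially we do not call into the block layer again. */
            return 0;
        }
        ret = do_inactivate_all();
        if (ret == 0) {
            disks_active = false;
        }
        return ret;
    }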
Implementation
==============
This patch proposes to maintain disk activation with a global flag, so we know:
- If we inactivated the disks for migration but the migration got cancelled or failed, QEMU will know it should reactivate the disks.
- On the incoming side, if the disks were never activated but another migration is then triggered, QEMU should be able to tell that inactivation is not needed for the 2nd migration.
We used to have disk_inactive, but it only solved the 1st issue, not the 2nd. Also, it was handled in completely separate paths, so it's extremely hard to follow how the flag changes, how long it stays valid, and when the disks will be reactivated.
Convert the existing disk_inactive flag into that global flag (also inverting its name), and maintain the disk activation status for the whole lifecycle of QEMU, including the incoming QEMU.
Put both error cases of source migration (failure, cancellation) together into migration_iteration_finish(), which is invoked in either scenario, so in that respect QEMU should behave the same as before. With the disk activation status maintained globally, we not only clean up quite a few ad-hoc paths that tried to track it (e.g. in the postcopy code), but also fix the crash of problem 2 in one shot.
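As a rough, self-contained illustration of that unified error path (again with hypothetical names and a simplified status enum; the real QEMU signatures differ):

    /* Self-contained sketch; hypothetical names, not QEMU's actual code. */
    #include <stdbool.h>

    typedef enum {
        MIG_COMPLETED,
        MIG_FAILED,
        MIG_CANCELLED,
    } MigStatus;

    /* Assumed global flag: true when this QEMU owns (has activated) the disks. */
    static bool disks_active = true;

    static void disks_reactivate(void)
    {
        if (!disks_active) {
            /* the single place that re-takes disk ownership */
            disks_active = true;
        }
    }

    /* Both the failure path and the cancel path funnel through here, so
     * the reactivation no longer has to be duplicated at each call site. */
    static void migration_iteration_finish(MigStatus status)
    {
        if (status == MIG_FAILED || status == MIG_CANCELLED) {
            disks_reactivate();
        }
        /* ... other end-of-migration cleanup ... */
    }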
For a freshly started QEMU, the flag is initialized to TRUE, showing that QEMU owns the disks by default.
For an incoming migrated QEMU, the flag will be initialized to FALSE once and for all, showing that the destination QEMU doesn't own the disks until switchover. That is guaranteed by the "once" variable.
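A minimal sketch of that initialization, once more with hypothetical names (the actual variable and helper in the patch may differ):

    /* Illustrative sketch only; not the actual QEMU implementation. */
    #include <stdbool.h>

    /* A freshly started QEMU owns its disks, so the flag starts as true. */
    static bool disks_active = true;

    /* Called when QEMU is set up as a migration target.  The "once" guard
     * ensures only the very first incoming setup flips the flag, so a
     * repeated incoming setup cannot reset it behind our back. */
    static void incoming_disks_init(void)
    {
        static bool once;

        if (!once) {
            disks_active = false;   /* dest doesn't own disks until switchover */
            once = true;
        }
    }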
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2395
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-7-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
87d67fad | 17-Jun-2024 |
Fabiano Rosas <farosas@suse.de> |
monitor: Stop removing non-duplicated fds
monitor_fdsets_cleanup() currently has three responsibilities:
1- Remove the fds that have been marked for removal (->removed=true) by qmp_remove_fd(). This is overly complicated, but ok.
2- Remove any file descriptors that have been passed into QEMU and never duplicated[1,2]. A file descriptor without duplicates indicates that no part of QEMU has made use of it. This is problematic because the current implementation does it only if the guest is not running and the monitor is closed.
3- Remove/free fdsets that have become empty due to the above removals. This is ok. (The whole flow is sketched below.)
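A much-simplified, self-contained model of the three responsibilities above; the structures and the cleanup gate here are illustrative only and do not mirror QEMU's monitor code (which also consults the runstate):

    /* Simplified illustrative model; not QEMU's actual data structures. */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct fd_entry {
        int fd;
        bool removed;            /* marked by the remove-fd command (1) */
        bool duplicated;         /* some consumer dup'ed it and owns it (2) */
        struct fd_entry *next;
    };

    struct fdset {
        struct fd_entry *fds;
        struct fdset *next;
    };

    static void fdsets_cleanup(struct fdset **sets, bool no_monitor_left)
    {
        struct fdset **sp = sets;

        while (*sp) {
            struct fdset *set = *sp;
            struct fd_entry **ep = &set->fds;

            while (*ep) {
                struct fd_entry *e = *ep;

                /* (1) honour explicit removals; (2) reap never-duplicated
                 * fds, but only under the "monitor closed" condition that
                 * this commit gets rid of. */
                if (e->removed || (no_monitor_left && !e->duplicated)) {
                    *ep = e->next;
                    close(e->fd);
                    free(e);
                } else {
                    ep = &e->next;
                }
            }

            /* (3) drop fdsets that have become empty. */
            if (!set->fds) {
                *sp = set->next;
                free(set);
            } else {
                sp = &set->next;
            }
        }
    }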
The scenario described in (2) is starting to show some cracks now that we're trying to consume fds from the migration code:
- Doing cleanup every time the last monitor connection closes works to reap unused fds, but also has the side effect of forcing the management layer to pass the file descriptors again in case of a disconnect/re-connect, if that happened to be the only monitor connection.
Another side effect is that removing an fd with qmp_remove_fd() is effectively delayed until the last monitor connection closes.
The usage of mon_refcount is also problematic because it's racy.
- Checking runstate_is_running() skips the cleanup unless the VM is running and avoids premature cleanup of the fds, but also has the side effect of blocking the legitimate removal of an fd via qmp_remove_fd() if the VM happens to be in another state.
This affects qmp_remove_fd() and qmp_query_fdsets() in particular because requesting a removal at a bad time (guest stopped) might cause an fd to never be removed, or to be removed at a much later point in time, causing the query command to continue showing the supposedly removed fd/fdset.
Note that file descriptors that *have* been duplicated are owned by the code that uses them and will be removed after qemu_close() is called. Therefore we've decided that the best course of action to avoid the undesired side-effects is to stop managing non-duplicated file descriptors.
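As a stand-alone analogy for this ownership rule (plain dup()/close() stands in for QEMU's fdset duplication and qemu_close(); nothing here is QEMU API):

    /* Stand-alone POSIX analogy; not QEMU code. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* The fd the management layer "passed in" (here simply opened). */
        int passed_in = open("/dev/null", O_WRONLY);

        /* A consumer (think: migration code) duplicates it; from now on
         * the duplicate belongs to that consumer... */
        int duplicate = dup(passed_in);

        /* ...and the consumer closes it when done, which is the role
         * qemu_close() plays for duplicated fdset members inside QEMU. */
        close(duplicate);

        /* The original, never-duplicated fd stays around until it is
         * explicitly removed (the remove-fd case), which is exactly the
         * kind of fd this commit stops cleaning up automatically. */
        close(passed_in);
        return 0;
    }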
1- efb87c1697 ("monitor: Clean up fd sets on monitor disconnect")
2- ebe52b592d ("monitor: Prevent removing fd from set during init")
Reviewed-by: Peter Xu <peterx@redhat.com>
  [fix logic mistake: s/fdset_free/fdset_free_if_empty]
Signed-off-by: Fabiano Rosas <farosas@suse.de>