=====================
VFIO device migration
=====================

Migration of a virtual machine involves saving the state of each device that
the guest is running on the source host and restoring this saved state on the
destination host. This document details how saving and restoring of VFIO
devices is done in QEMU.

Migration of VFIO devices consists of two phases: the optional pre-copy phase,
and the stop-and-copy phase. The pre-copy phase is iterative and makes it
possible to accommodate VFIO devices that have a large amount of data to
transfer. The iterative pre-copy phase allows the guest to keep running while
the VFIO device state is transferred to the destination, which helps to reduce
the total downtime of the VM. VFIO devices opt in to pre-copy support by
reporting the VFIO_MIGRATION_PRE_COPY flag in the
VFIO_DEVICE_FEATURE_MIGRATION ioctl.
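
The snippet below is a minimal sketch of how userspace could probe these flags
with the VFIO_DEVICE_FEATURE ioctl. It assumes an already opened VFIO device
file descriptor (``device_fd``) and the definitions from
linux-headers/linux/vfio.h, omits most error handling, and is illustrative
rather than a copy of QEMU's own probing code::

   #include <stdlib.h>
   #include <sys/ioctl.h>
   #include <linux/vfio.h>

   /*
    * Returns a negative value if the migration uAPI is not supported,
    * otherwise 1 if pre-copy is supported and 0 if it is not.
    */
   static int probe_migration_flags(int device_fd)
   {
       size_t argsz = sizeof(struct vfio_device_feature) +
                      sizeof(struct vfio_device_feature_migration);
       struct vfio_device_feature *feature = calloc(1, argsz);
       struct vfio_device_feature_migration *mig = (void *)feature->data;
       int ret;

       feature->argsz = argsz;
       feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;

       ret = ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
       if (ret) {
           free(feature);
           return -1; /* migration uAPI not supported by this device */
       }

       ret = !!(mig->flags & VFIO_MIGRATION_PRE_COPY);
       free(feature);
       return ret;
   }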

When pre-copy is supported, it's possible to further reduce downtime by
enabling the "switchover-ack" migration capability.
The VFIO migration uAPI defines "initial bytes" as part of its pre-copy data
stream and recommends that the initial bytes are sent and loaded on the
destination before stopping the source VM. Enabling this migration capability
guarantees exactly that, and can thus potentially reduce downtime even further.
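
For example, assuming a QMP connection to each side, the capability could be
enabled with something along these lines (it must be set on both the source
and the destination, and "switchover-ack" requires the "return-path"
capability)::

   {"execute": "migrate-set-capabilities",
    "arguments": {"capabilities": [
        {"capability": "return-path", "state": true},
        {"capability": "switchover-ack", "state": true}]}}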

To support migration of multiple devices that might do P2P transactions between
themselves, VFIO migration uAPI defines an intermediate P2P quiescent state.
While in the P2P quiescent state, P2P DMA transactions cannot be initiated by
the device, but the device can respond to incoming ones. Additionally, all
outstanding P2P transactions are guaranteed to have been completed by the time
the device enters this state.

All the devices that support P2P migration are first transitioned to the P2P
quiescent state and only then are they stopped or started. This keeps migration
safe P2P-wise, even though starting and stopping the devices is not done
atomically for all the devices together.

Thus, migration of multiple VFIO devices is allowed only if all of them support
P2P migration. Migration of a single VFIO device is allowed regardless of P2P
migration support.

A detailed description of the UAPI for VFIO device migration can be found in
the comment for the ``vfio_device_mig_state`` structure in the header file
linux-headers/linux/vfio.h.
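
For quick reference, the device states defined by that UAPI are roughly the
following (abridged; the kernel header remains the authoritative source)::

   enum vfio_device_mig_state {
       VFIO_DEVICE_STATE_ERROR = 0,
       VFIO_DEVICE_STATE_STOP = 1,
       VFIO_DEVICE_STATE_RUNNING = 2,
       VFIO_DEVICE_STATE_STOP_COPY = 3,
       VFIO_DEVICE_STATE_RESUMING = 4,
       VFIO_DEVICE_STATE_RUNNING_P2P = 5,
       VFIO_DEVICE_STATE_PRE_COPY = 6,
       VFIO_DEVICE_STATE_PRE_COPY_P2P = 7,
   };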

VFIO implements the device hooks for the iterative approach as follows (a
sketch of how they are wired together is shown after the list):

* A ``save_setup`` function that sets up migration on the source.

* A ``load_setup`` function that sets the VFIO device on the destination in
  _RESUMING state.

* A ``state_pending_estimate`` function that reports an estimate of the
  remaining pre-copy data that the vendor driver has yet to save for the VFIO
  device.

* A ``state_pending_exact`` function that reads pending_bytes from the vendor
  driver, which indicates the amount of data that the vendor driver has yet to
  save for the VFIO device.

* An ``is_active_iterate`` function that indicates ``save_live_iterate`` is
  active only when the VFIO device is in pre-copy states.

* A ``save_live_iterate`` function that reads the VFIO device's data from the
  vendor driver during the iterative pre-copy phase.

* A ``switchover_ack_needed`` function that checks if the VFIO device uses the
  "switchover-ack" migration capability when this capability is enabled.

* A ``save_state`` function to save the device config space if it is present.

* A ``save_live_complete_precopy`` function that sets the VFIO device in
  _STOP_COPY state and iteratively copies the data for the VFIO device until
  the vendor driver indicates that no data remains.

* A ``load_state`` function that loads the config section and the data
  sections that are generated by the save functions above.

* A ``load_state_buffer`` function that loads the device state and the device
  config that arrived via multifd channels.
  It's used only in multifd mode.

* ``cleanup`` functions for both save and load that perform any migration
  related cleanup.
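
Schematically, these hooks are plugged into QEMU's generic migration code
through a ``SaveVMHandlers`` table. The sketch below is illustrative only: the
function names are hypothetical placeholders, the field names are assumed to
match ``SaveVMHandlers`` in migration/register.h, and the actual table used by
QEMU lives in hw/vfio/migration.c::

   static const SaveVMHandlers vfio_savevm_handlers_sketch = {
       .save_setup = vfio_save_setup,
       .save_cleanup = vfio_save_cleanup,
       .state_pending_estimate = vfio_state_pending_estimate,
       .state_pending_exact = vfio_state_pending_exact,
       .is_active_iterate = vfio_is_active_iterate,
       .save_live_iterate = vfio_save_iterate,
       .save_live_complete_precopy = vfio_save_complete_precopy,
       .save_state = vfio_save_state,
       .load_setup = vfio_load_setup,
       .load_cleanup = vfio_load_cleanup,
       .load_state = vfio_load_state,
       .load_state_buffer = vfio_load_state_buffer,
       .switchover_ack_needed = vfio_switchover_ack_needed,
   };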

The VFIO migration code uses a VM state change handler to change the VFIO
device state when the VM state changes from running to not-running, and
vice versa.

Similarly, a migration state change handler is used to trigger a transition of
the VFIO device state when certain changes of the migration state occur. For
example, the VFIO device state is transitioned back to _RUNNING in case a
migration failed or was canceled.
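
As a rough illustration, a VM state change handler can be registered with
``qemu_add_vm_change_state_handler()``. The callback below is a hypothetical
sketch assuming the usual QEMU includes: ``vfio_set_running_sketch()`` is a
made-up helper, and the exact handler signature may differ between QEMU
versions::

   /* Called whenever the VM transitions between running and not-running. */
   static void vfio_vm_state_change_sketch(void *opaque, bool running,
                                           RunState state)
   {
       VFIODevice *vbasedev = opaque;

       /* e.g. move to _RUNNING(_P2P) when running, _STOP(_P2P) otherwise. */
       vfio_set_running_sketch(vbasedev, running);
   }

   /* Registered once per device when migration support is set up. */
   static void vfio_register_vm_state_handler_sketch(VFIODevice *vbasedev)
   {
       qemu_add_vm_change_state_handler(vfio_vm_state_change_sketch, vbasedev);
   }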

System memory dirty pages tracking
----------------------------------

The ``log_global_start`` and ``log_global_stop`` memory listener callbacks
inform the VFIO dirty tracking module to start and stop dirty page tracking. A
``log_sync`` memory listener callback queries the dirty page bitmap from the
dirty tracking module and marks system memory pages which were DMA-ed by the
VFIO device as dirty. The dirty page bitmap is queried per container.
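
Schematically, these callbacks are hooked into a ``MemoryListener``; the
function names below are placeholders rather than the actual QEMU symbols::

   static MemoryListener vfio_dirty_tracking_listener_sketch = {
       .name = "vfio-dirty-tracking",
       .log_global_start = vfio_listener_log_global_start,
       .log_global_stop = vfio_listener_log_global_stop,
       .log_sync = vfio_listener_log_sync,
   };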

Currently there are two ways dirty page tracking can be done:
(1) Device dirty tracking:
In this method the device is responsible for logging and reporting its DMAs.
This method can be used only if the device is capable of tracking its DMAs.
Discovering device capability, starting and stopping dirty tracking, and
syncing the dirty bitmaps from the device are done using the DMA logging uAPI.
More info about the uAPI can be found in the comments of the
``vfio_device_feature_dma_logging_control`` and
``vfio_device_feature_dma_logging_report`` structures in the header file
linux-headers/linux/vfio.h.
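
As a rough illustration, starting device dirty tracking over a single IOVA
range could look like the sketch below. It assumes the control and range
layouts from linux-headers/linux/vfio.h and the same includes as the earlier
probing sketch, and it omits error handling::

   /* Illustrative only: start device DMA logging on one IOVA range. */
   static int start_dma_logging(int device_fd, __u64 iova, __u64 length)
   {
       size_t argsz = sizeof(struct vfio_device_feature) +
                      sizeof(struct vfio_device_feature_dma_logging_control);
       struct vfio_device_feature *feature = calloc(1, argsz);
       struct vfio_device_feature_dma_logging_control *control =
           (void *)feature->data;
       struct vfio_device_feature_dma_logging_range range = {
           .iova = iova,
           .length = length,
       };
       int ret;

       feature->argsz = argsz;
       feature->flags = VFIO_DEVICE_FEATURE_SET |
                        VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
       control->page_size = 4096;
       control->num_ranges = 1;
       control->ranges = (__u64)(unsigned long)&range;

       ret = ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
       free(feature);
       return ret;
   }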

(2) VFIO IOMMU module:
In this method dirty tracking is done by the IOMMU. However, there is currently
no IOMMU support for dirty page tracking. For this reason, all pages are
perpetually marked dirty, unless the device driver pins pages through external
APIs in which case only those pinned pages are perpetually marked dirty.

If neither of the above methods is supported, all pages are perpetually marked
dirty by QEMU.

By default, dirty pages are tracked during the pre-copy as well as the
stop-and-copy phase. So, a page marked as dirty will be copied to the
destination in both phases. Copying dirty pages in the pre-copy phase helps
QEMU predict whether it can achieve its downtime tolerances: if QEMU keeps
finding dirty pages continuously during the pre-copy phase, it can expect to
keep finding dirty pages in the stop-and-copy phase as well and estimate the
downtime accordingly.

QEMU also provides a per-device opt-out option ``pre-copy-dirty-page-tracking``
which disables querying the dirty bitmap during the pre-copy phase. If it is
set to off, all dirty pages will be copied to the destination in the
stop-and-copy phase only.

System memory dirty pages tracking when vIOMMU is enabled
---------------------------------------------------------

With vIOMMU, an IO virtual address range can get unmapped while in the pre-copy
phase of migration. In that case, the unmap ioctl returns any dirty pages in
that range and QEMU reports the corresponding guest physical pages as dirty.
During the stop-and-copy phase, an IOMMU notifier is used to get a callback for
mapped pages and then the dirty page bitmap is fetched from the VFIO IOMMU
module for those mapped ranges. If device dirty tracking is enabled with
vIOMMU, live migration will be blocked.

Flow of state changes during live migration
===========================================

Below is the state change flow during live migration for a VFIO device that
supports both pre-copy and P2P migration. The flow for devices that don't
support them is similar, except that the relevant pre-copy and P2P states are
skipped.
The values in the parentheses represent the VM state, the migration state, and
the VFIO device state, respectively.

Live migration save path
------------------------

::

                           QEMU normal running state
                           (RUNNING, _NONE, _RUNNING)
                                      |
                     migrate_init spawns migration_thread
            Migration thread then calls each device's .save_setup()
                          (RUNNING, _SETUP, _PRE_COPY)
                                      |
                         (RUNNING, _ACTIVE, _PRE_COPY)
  If device is active, get pending_bytes by .state_pending_{estimate,exact}()
       If total pending_bytes >= threshold_size, call .save_live_iterate()
                Data of VFIO device for pre-copy phase is copied
      Iterate till total pending bytes converge and are less than threshold
                                      |
       On migration completion, the vCPUs and the VFIO device are stopped
              The VFIO device is first put in P2P quiescent state
                    (FINISH_MIGRATE, _ACTIVE, _PRE_COPY_P2P)
                                      |
                Then the VFIO device is put in _STOP_COPY state
                     (FINISH_MIGRATE, _ACTIVE, _STOP_COPY)
         .save_live_complete_precopy() is called for each active device
      For the VFIO device, iterate in .save_live_complete_precopy() until
                               pending data is 0
                                      |
                     (POSTMIGRATE, _COMPLETED, _STOP_COPY)
            Migration thread schedules cleanup bottom half and exits
                                      |
                           .save_cleanup() is called
                        (POSTMIGRATE, _COMPLETED, _STOP)

Live migration resume path
--------------------------

::

             Incoming migration calls .load_setup() for each device
                          (RESTORE_VM, _ACTIVE, _STOP)
                                      |
     For each device, .load_state() is called for that device section data
                 transmitted via the main migration channel.
     For data transmitted via multifd channels .load_state_buffer() is called
                                   instead.
                        (RESTORE_VM, _ACTIVE, _RESUMING)
                                      |
  At the end, .load_cleanup() is called for each device and vCPUs are started
              The VFIO device is first put in P2P quiescent state
                        (RUNNING, _ACTIVE, _RUNNING_P2P)
                                      |
                           (RUNNING, _NONE, _RUNNING)

Postcopy
========

Postcopy migration is currently not supported for VFIO devices.