/qemu/docs/devel/migration/ |
compatibility.rst |
  7: When we do migration, we have two QEMU processes: the source and the
  18: Let's start with a practical example, we start with:
  36: I am going to list the number of combinations that we can have. Let's
  50: These are the easiest ones, we will not talk more about them in this
  53: Now we start with the more interesting cases. Consider the case where
  54: we have the same QEMU version on both sides (qemu-5.2) but we are using
  72: because we have the limitation that qemu-5.1 doesn't know pc-5.2. So
  77: This migration is known as newer to older. We need to make sure
  78: when we are developing 5.2 we need to take care not to break
  79: migration to qemu-5.1. Notice that we can't make updates to
  [all …]
|
main.rst |
  10: Restoring a guest is just the opposite operation: we need to load the
  16: be relaxed a bit, but for now we can consider that configuration has
  19: Once we are able to save/restore a guest, a new functionality is
  122: (which we do to varying degrees in the existing code). Check that offsets
  170: We are declaring the state with name "pckbd". The ``version_id`` is
  171: 3, and there are 4 uint8_t fields in the KBDState structure. We
  182: For devices that are ``qdev`` based, we can register the device in the class
  232: When we migrate a device, we save/load the state as a series
  233: of fields. Sometimes, due to bugs or new functionality, we need to
  254: On the receiving side, if we found a subsection for a device that we
  [all …]
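The "pckbd" fragment above (lines 170-171) describes declaring device state as a table of named fields for migration. As a rough illustration of that idea only, here is a minimal standalone C sketch of such a field table built with offsetof; it deliberately does not use QEMU's real VMStateDescription/VMSTATE_UINT8 machinery, and the field names are assumed for the example rather than taken from the real KBDState:

    /*
     * Minimal standalone sketch of the field-table idea: a device's
     * savable state is described as named fields plus their offsets in
     * the device struct.  NOT QEMU's real VMState API; the struct and
     * helper names here are hypothetical.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct KBDState {
        uint8_t write_cmd;   /* field names are illustrative only */
        uint8_t status;
        uint8_t mode;
        uint8_t pending;
    } KBDState;

    typedef struct FieldDesc {
        const char *name;
        size_t offset;
        size_t size;
    } FieldDesc;

    static const FieldDesc kbd_fields[] = {
        { "write_cmd", offsetof(KBDState, write_cmd), sizeof(uint8_t) },
        { "status",    offsetof(KBDState, status),    sizeof(uint8_t) },
        { "mode",      offsetof(KBDState, mode),      sizeof(uint8_t) },
        { "pending",   offsetof(KBDState, pending),   sizeof(uint8_t) },
    };

    /* "Save" the described fields by walking the table. */
    static void save_state(const void *dev, const FieldDesc *f, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            const uint8_t *p = (const uint8_t *)dev + f[i].offset;
            printf("%s: %u\n", f[i].name, (unsigned)*p);
        }
    }

    int main(void)
    {
        KBDState s = { 0x60, 0x14, 0x03, 0x00 };
        save_state(&s, kbd_fields, 4);   /* 4 uint8_t fields, as in the doc */
        return 0;
    }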
|
/qemu/docs/system/ |
introduction.rst |
  82: For a non-x86 system where we emulate a broad range of machine types,
  88: command line to launch VMs, we do want to highlight that there are a
  152: In the following example we first define a ``virt`` machine which is a
  153: general purpose platform for running Aarch64 guests. We enable
  154: virtualisation so we can use KVM inside the emulated guest. As the
  155: ``virt`` machine comes with some built-in pflash devices we give them
  156: names so we can override the defaults later.
  164: We then define the 4 vCPUs using the ``max`` option which gives us all
  165: the Arm features QEMU is capable of emulating. We enable a more
  167: algorithm. We explicitly specify TCG acceleration even though QEMU
  [all …]
|
/qemu/target/hexagon/ |
README |
  2: processor (DSP). We also support Hexagon Vector eXtensions (HVX). HVX
  12: We presented an overview of the project at the 2019 KVM Forum.
  38: We start with scripts that generate a bunch of include files. This
  109: cases this is necessary for correct execution. We can also override for
  113: The gen_tcg.h file has any overrides. For example, we could write
  118: C semantics are specified only with macros, we can override the default with
  125: In gen_tcg.h, we use the shortcode
  129: There are also cases where we brute force the TCG code generation.
  134: won't fit in a TCGv or TCGv_i64, so we use TCGv_ptr variables to pass the
  158: Notice that we also generate a variable named <operand>_off for each operand of
  [all …]
|
fma_emu.c |
  163: * On the add/sub, we need to be able to shift out lots of bits, but need a
  199: /* Keep around shifted out bits... we might need those later */   [in accum_sub()]
  252: /* Keep around shifted out bits... we might need those later */   [in accum_add()]
  327: * We want DF_MANTBITS bits of mantissa plus the leading one.   [in accum_round_float64()]
  328: * That means that we want DF_MANTBITS+1 bits, or 0x000000000000FF_FFFF   [in accum_round_float64()]
  329: * So we need to normalize right while the high word is non-zero and   [in accum_round_float64()]
  338: * We want to normalize left until we have a leading one in bit 24   [in accum_round_float64()]
  339: * Theoretically, we only need to shift a maximum of one to the left if we   [in accum_round_float64()]
  340: * shifted out lots of bits from B, or if we had no shift / 1 shift sticky   [in accum_round_float64()]
  347: * OK, now we might need to denormalize because of potential underflow.   [in accum_round_float64()]
  [all …]
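The comments above revolve around keeping the bits shifted out of the mantissa (the "sticky" information) so that rounding can still honour them. Below is a standalone sketch of that one idea, with assumed types and names and none of the real 128-bit accumulator handling:

    /*
     * Sketch of "keep around shifted out bits": when shifting a mantissa
     * right during normalization, any non-zero bits that fall off the
     * bottom are OR-ed into a sticky flag so rounding can still see them.
     * Illustration only; not the actual accum_* helpers.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t mant;   /* mantissa bits */
        int exp;         /* exponent */
        bool sticky;     /* OR of everything shifted out so far */
    } Accum;

    static void accum_shift_right(Accum *a, unsigned amt)
    {
        if (amt == 0) {
            return;
        }
        if (amt >= 64) {
            a->sticky |= (a->mant != 0);
            a->mant = 0;
        } else {
            a->sticky |= (a->mant & ((UINT64_C(1) << amt) - 1)) != 0;
            a->mant >>= amt;
        }
        a->exp += amt;
    }

    int main(void)
    {
        Accum a = { 0x00ffffffffffffffULL, 0, false };
        /* Normalize right until the leading one sits at bit 52. */
        while (a.mant >> 53) {
            accum_shift_right(&a, 1);
        }
        printf("mant=%#llx exp=%d sticky=%d\n",
               (unsigned long long)a.mant, a.exp, (int)a.sticky);
        return 0;
    }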
|
/qemu/tests/tcg/ |
Makefile.target |
  5: # These are complicated by the fact we want to build them for guest
  6: # systems. This requires knowing what guests we are building and which
  7: # ones we have cross-compilers for or docker images with
  14: # We only include the host build system for SRC_PATH and we don't
  15: # bother with the common rules.mk. We expect the following:
  19: # BUILD_STATIC - are we building static binaries
  27: # We also accept SPEED=slow to enable slower running tests
  29: # We also expect to be in the tests build dir for the FOO-(linux-user|softmmu).
  65: # to work around the pipe squashing the status we only pipe the result if
  66: # we know it failed and then force failure at the end.
  [all …]
|
/qemu/docs/devel/ |
s390-dasd-ipl.rst |
  34: the real operating system is loaded into memory and we are ready to hand
  49: should contain the needed flags for the operating system we have loaded. The
  50: psw's instruction address will point to the location in memory where we want
  68: In theory we should merely have to do the following to IPL/boot a guest
  79: When we start a channel program we pass the channel subsystem parameters via an
  95: it from the disk. So we need to be able to handle this case.
  100: Since we are forced to live with prefetch we cannot use the very simple IPL
  101: procedure we defined in the preceding section. So we compensate by doing the
  112: to read the very next record which will be IPL2. But since we are not reading
  113: both IPL1 and IPL2 as part of the same channel program we must manually set
  [all …]
|
tcg-icount.rst |
  38: until the next timer will expire. We store this budget as part of a
  44: In the case of icount, before the flag is checked we subtract the
  46: would cause the instruction budget to go negative we exit the main
  49: was due to expire will expire exactly when we exit the main run loop.
  54: While we can adjust the instruction budget for known events like timer
  55: expiry we cannot do the same for MMIO. Every load/store we execute
  56: might potentially trigger an I/O event, at which point we will need an
  59: To deal with this case, when an I/O access is made we:
  70: MMIO isn't the only type of operation for which we might need a
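The icount fragments describe an instruction budget that is charged before each translated block and forces an exit from the main loop when it would go negative. Here is a hedged, standalone sketch of just that check, with made-up names (budget, run_block) rather than QEMU's actual ones:

    /*
     * Sketch of the icount "instruction budget" idea: before executing a
     * translated block we check whether its instruction count fits in
     * the budget; if not, we leave the execution loop so the pending
     * timer can fire at exactly the right instruction count.
     */
    #include <stdint.h>
    #include <stdio.h>

    struct vcpu {
        int32_t budget;          /* instructions left until next timer */
    };

    /* Pretend to execute a block of 'insns' guest instructions. */
    static void run_block(struct vcpu *cpu, int insns)
    {
        cpu->budget -= insns;
        printf("executed %d insns, budget now %d\n", insns, cpu->budget);
    }

    int main(void)
    {
        struct vcpu cpu = { .budget = 100 };
        int block_insns[] = { 40, 35, 30, 20 };

        for (unsigned i = 0; i < 4; i++) {
            if (cpu.budget - block_insns[i] < 0) {
                /* Budget exhausted: exit the main loop, let the timer
                 * expire, recompute a new budget, then resume. */
                printf("budget exhausted before block of %d insns\n",
                       block_insns[i]);
                break;
            }
            run_block(&cpu, block_insns[i]);
        }
        return 0;
    }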
|
/qemu/migration/ |
migration-stats.h |
  32: * based on MigrationStats. We change to Stat64 any counter that
  38: * Number of bytes that were dirty last time that we synced with
  39: * the guest memory. We use that to calculate the downtime. As
  40: * the remaining dirty amounts to what we know that is still dirty
  42: * since we synchronized bitmaps.
  50: * Number of times we have synchronized guest bitmaps.
  76: * Number of postcopy page faults that we have handled during
  93: * Maximum amount of data we can send in a cycle.
  118: * This is called when we know we start a new transfer cycle.
  134: * Returns how many bytes have we transferred since the beginning of
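Lines 38-40 say the dirty-byte count from the last bitmap sync feeds the downtime calculation, and line 93 mentions a per-cycle data budget. The sketch below shows one plausible way those quantities combine (remaining dirty bytes versus bandwidth times allowed downtime); the formula and names are assumptions for illustration, not the actual migration code:

    /*
     * Assumed downtime estimate: bytes still dirty after the last bitmap
     * sync, divided by the measured bandwidth, approximate how long the
     * guest would be paused if stopped now.
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool can_converge(uint64_t dirty_bytes_last_sync,
                             uint64_t bandwidth_bytes_per_ms,
                             uint64_t downtime_limit_ms)
    {
        /* Maximum data we could push during the allowed downtime. */
        uint64_t threshold = bandwidth_bytes_per_ms * downtime_limit_ms;
        return dirty_bytes_last_sync <= threshold;
    }

    int main(void)
    {
        uint64_t dirty = 512ULL * 1024 * 1024;   /* 512 MiB still dirty */
        uint64_t bw = 1200ULL * 1024;            /* ~1.2 GiB/s in bytes/ms */
        uint64_t limit = 300;                    /* 300 ms allowed downtime */

        printf("expected downtime: %" PRIu64 " ms, converge: %s\n",
               dirty / bw, can_converge(dirty, bw, limit) ? "yes" : "no");
        return 0;
    }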
|
migration.h |
  63: * big enough and make sure we won't overflow easily.
  75: * This points to the host page we're going to install for this temp page.
  76: * It tells us after we've received the whole page, where we should put it.
  100: * Used to sync thread creations. Note that we can't create threads in
  116: /* Set this when we want the fault thread to quit */
  127: QemuMutex rp_mutex; /* We send replies from multiple threads */
  187: /* The coroutine we should enter (back) after failover */
  215: /* A tree of pages that we requested to the source VM */
  223: * The mutex helps to maintain the requested pages that we sent to the
  285: * used for both the start or recover of a postcopy migration. We'll
  [all …]
|
/qemu/tests/tcg/multiarch/gdbstub/ |
interrupt.py |
  16: Check that, if the thread is resumed, we go back to the same thread when the
  20: # Switch to the thread we're going to be running the test in.
  26: # While there are cleaner ways to do this, we want to minimize the number of
  28: # Ideally, there should be no difference between what we're doing here and
  31: # For this to be safe, we only need the prologue of loop() to not have
  32: # instructions that may have problems with what we're doing here. We don't
  40: # Check whether the thread we're in after the interruption is the same we
|
/qemu/include/user/ |
safe-syscall.h |
  31: * (Errnos are host errnos; we rely on QEMU_ERESTARTSYS not clashing
  59: * for which we need to either return EINTR or arrange for the guest
  72: * signal could arrive just before we make the host syscall inside libc,
  81: * which are only technically blocking (ie which we know in practice won't
  92: * The basic setup is that we make the host syscall via a known
  96: * instruction then we change the PC to point at a "return
  99: * Then in the main.c loop if we see this magic return value we adjust
  104: * (1) signal came in just before we took the host syscall (a race);
  105: * in this case we'll take the guest signal and have another go
  111: * in this case we want to restart the guest syscall also, and so
  [all …]
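The fragments sketch the safe-syscall contract: the host syscall is made from a known code region, and if a guest signal arrives just before the syscall instruction the call is made to return a magic value so the caller can deliver the signal and retry. The standalone sketch below shows only the caller-side retry loop; the pending-signal check is done in plain C here, which still has the race the real assembly implementation exists to close, and ERESTART_MAGIC plus the helper names are invented for the example:

    /*
     * Caller-side handling only.  The real safe_syscall() performs the
     * "is a guest signal pending?" check and the syscall in a tiny
     * assembly region so the signal handler can tell exactly where it
     * interrupted; plain C keeps the race and is for illustration only.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define ERESTART_MAGIC  512   /* stand-in for QEMU_ERESTARTSYS */

    static volatile bool guest_signal_pending;

    /* Pretend "safe" syscall: refuses to block if a guest signal is pending. */
    static long pretend_safe_syscall(long nr)
    {
        if (guest_signal_pending) {
            return -ERESTART_MAGIC;
        }
        /* ... the real host syscall would happen here ... */
        (void)nr;
        return 0;
    }

    static void emulate_guest_syscall(long nr)
    {
        for (;;) {
            long ret = pretend_safe_syscall(nr);
            if (ret == -ERESTART_MAGIC) {
                /* Deliver the guest signal first, then have another go
                 * at the guest syscall, mirroring the restart cases
                 * quoted above. */
                printf("delivering guest signal, restarting syscall %ld\n", nr);
                guest_signal_pending = false;
                continue;
            }
            printf("syscall %ld completed: %ld\n", nr, ret);
            return;
        }
    }

    int main(void)
    {
        guest_signal_pending = true;
        emulate_guest_syscall(42);
        return 0;
    }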
|
/qemu/target/arm/tcg/ |
m_helper.c |
  139: * user-only emulation we don't have the MPU.   [in HELPER()]
  140: * Luckily since we know we are NonSecure unprivileged (and that in   [in HELPER()]
  203: * What kind of stack write are we doing? This affects how exceptions
  276: * By pending the exception at this point we are making   [in v7m_stack_write()]
  280: * later if we have two derived exceptions.   [in v7m_stack_write()]
  281: * The only case when we must not pend the exception but instead   [in v7m_stack_write()]
  282: * throw it away is if we are doing the push of the callee registers   [in v7m_stack_write()]
  283: * and we've already generated a derived exception (this is indicated   [in v7m_stack_write()]
  284: * by the caller passing STACK_IGNFAULTS). Even in this case we will   [in v7m_stack_write()]
  348: * By pending the exception at this point we are making   [in v7m_stack_read()]
  [all …]
|
/qemu/target/hexagon/idef-parser/ |
README.rst |
  10: To better understand the scope of the idef-parser, we'll explore an applicative
  39: passed explicitly as function parameters. Among the passed parameters we will
  51: After the instruction identifier, we have a series of parameters that represent
  55: We will leverage this information to infer several pieces of information:
  93: Then, we are generating the sum tinycode operator between the selected
  100: The result of the addition is now stored in the temporary; we move it into the
  109: instruction semantics in ``semantics_generated.pyinc`` that we need to consider.
  130: we obtain the pseudo code
  146: To finish the above example, after preprocessing ``J2_jumpr`` we obtain
  224: With this initial portion of the grammar we are defining the instruction, its
  [all …]
|
/qemu/docs/ |
rdma.txt |
  162: on the receiver side is registered and pinned, we're
  185: At this point, we define a control channel on top of SEND messages
  226: After ram block exchange is completed, we have two protocol-level
  234: 1. We transmit a READY command to let the sender know that
  235: we are *ready* to receive some data bytes on the control channel.
  236: 2. Before attempting to receive the expected command, we post another
  237: RQ work request to replace the one we just used up.
  240: 5. Verify that the command-type and version received matches the one we expected.
  247: 2. Optionally: if we are expecting a response from the command
  248: (that we have not yet transmitted), let's post an RQ
  [all …]
|
/qemu/tests/qemu-iotests/tests/ |
block-status-cache |
  41: # which we are going to query to provoke a block-status inquiry with
  63: We can provoke a want_zero=false call with `qemu-img map` over NBD with
  64: x-dirty-bitmap=qemu:allocation-depth, so we first run a normal `map`
  90: # We need to run this map twice: On the first call, we probably still
  95: # If we did a want_zero=true call at this point, we would thus get
  97: # we would get fresh block-status information from the driver, which
  101: # Therefore, we need a second want_zero=false map to reproduce:
  109: # subsequent map operation will be served from the cache, and so we can
  116: # whether we get correct information (i.e. the same as we got on our
  133: # we can only use the raw format
|
/qemu/include/qemu/ |
stats64.h |
  131: /* High 32 bits are equal. Read low after high, otherwise we   [in stat64_min()]
  133: * 0x1234,0x8000 and we read it as 0x1234,0x0000). Pairs with   [in stat64_min()]
  142: /* See if we were lucky and a writer raced against us. The   [in stat64_min()]
  143: * barrier is theoretically unnecessary, but if we remove it   [in stat64_min()]
  144: * we may miss being lucky.   [in stat64_min()]
  153: /* If the value changes in any way, we have to take the lock. */   [in stat64_min()]
  171: /* High 32 bits are equal. Read low after high, otherwise we   [in stat64_max()]
  173: * 0x1235,0x0000 and we read it as 0x1235,0x8000). Pairs with   [in stat64_max()]
  182: /* See if we were lucky and a writer raced against us. The   [in stat64_max()]
  183: * barrier is theoretically unnecessary, but if we remove it   [in stat64_max()]
  [all …]
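The stat64_min/stat64_max comments are all about reading a 64-bit value as two 32-bit halves without seeing a torn pair such as 0x1234,0x0000. The real code pairs a cmpxchg with a spinlock fallback; as a simpler illustration of the same problem, here is a standalone seqlock-style sketch (the sequence number goes odd during an update and readers retry), using default sequentially consistent atomics rather than the finer-grained barriers the comments refer to:

    /*
     * 64-bit value kept as two 32-bit halves, as on a 32-bit host.
     * Not the stat64 algorithm itself; a well-known alternative shown
     * for illustration.
     */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        atomic_uint seq;        /* odd while an update is in progress */
        atomic_uint low;
        atomic_uint high;
    } SplitCounter;

    /* Single writer: bump seq to odd, store both halves, bump back to even. */
    static void counter_set(SplitCounter *c, uint64_t val)
    {
        atomic_fetch_add(&c->seq, 1);
        atomic_store(&c->low, (uint32_t)val);
        atomic_store(&c->high, (uint32_t)(val >> 32));
        atomic_fetch_add(&c->seq, 1);
    }

    /* Readers retry until they see a stable, even sequence number. */
    static uint64_t counter_get(SplitCounter *c)
    {
        unsigned s1, s2;
        uint32_t lo, hi;

        do {
            s1 = atomic_load(&c->seq);
            lo = atomic_load(&c->low);
            hi = atomic_load(&c->high);
            s2 = atomic_load(&c->seq);
        } while ((s1 & 1) || s1 != s2);

        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        SplitCounter c;

        atomic_init(&c.seq, 0);
        counter_set(&c, 0x123480000000ULL);
        printf("value: 0x%llx\n", (unsigned long long)counter_get(&c));
        return 0;
    }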
|
/qemu/target/arm/ |
arm-powerctl.c |
  67: /* Initialize the cpu we are turning on */   [in arm_set_cpu_on_async_work()]
  72: /* We check if the started CPU is now at the correct level */   [in arm_set_cpu_on_async_work()]
  114: * if we are booting in AArch64 mode then "entry" needs to be 4 bytes   [in arm_set_cpu_on()]
  120: /* Retrieve the cpu we are powering up */   [in arm_set_cpu_on()]
  150: * For now we don't support booting an AArch64 CPU in AArch32 mode   [in arm_set_cpu_on()]
  151: * TODO: We should add this support later   [in arm_set_cpu_on()]
  161: * If another CPU has powered the target on we are in the state   [in arm_set_cpu_on()]
  173: /* To avoid racing with a CPU we are just kicking off we do the   [in arm_set_cpu_on()]
  186: /* We are good to go */   [in arm_set_cpu_on()]
  195: /* Initialize the cpu we are turning on */   [in arm_set_cpu_on_and_reset_async_work()]
  [all …]
|
/qemu/tests/qtest/migration/ |
framework.h |
  63: * Our goal is to ensure that we run a single full migration
  67: * We can't directly synchronize with the start of a migration
  68: * so we have to apply some tricks monitoring memory that is
  71: * Initially we set the migration bandwidth to an insanely
  76: * so we can't let the entire migration pass run at this speed.
  77: * Our intent is to let it run just long enough that we can
  81: * Before migration starts, we write a 64-bit magic marker
  88: * Finally we go back to the source and read a byte just
  89: * before the marker until we see it flip in value. This
  93: * IOW, we're guaranteed at least a 2nd migration pass
  [all …]
|
/qemu/tests/qemu-iotests/ |
245 |
  75: # Once the VM is shut down we can parse the log and see if qemu-io
  108: # If key has the form "foo.bar" then we need to do
  142: # We can reopen the image passing the same options
  145: # We can also reopen passing a child reference in 'file'
  148: # We cannot change any of these
  184: # We can't reopen the image passing the same options, 'backing' is mandatory
  187: # Everything works if we pass 'backing' using the existing node name
  193: # We can't use a non-existing or empty (non-NULL) node as the backing image
  197: # We can reopen the image just fine if we specify the backing options
  203: # We cannot change any of these options
  [all …]
|
108 |
  106: # XXX: This should be the first free entry in the last L2 table, but we cannot
  135: # Normally, qemu doesn't create empty refblocks, so we just have to do it by
  175: # file end, then we would try to place the reftable in that refblock).
  189: # We want to check whether the size of the image file increases due to
  235: # will leave holes in the file, so we need to fill them up so we can
  238: # length increases even with a chunk size of 512. Then we must have
  246: # really matter which qemu-io calls we do here exactly
  254: # $ofs is random for all we know)
  265: # put before the space it covers. In this test case, we do not mind
  268: # Before we make that space, we have to find out the host offset of
  [all …]
|
/qemu/block/ |
graph-lock.c |
  120: * reader_count >= 1: we don't know if writer read has_writer == 0 or 1,   [in bdrv_graph_wrlock()]
  121: * but we need to wait.   [in bdrv_graph_wrlock()]
  126: * has_writer must be 0 while polling, otherwise we get a deadlock if   [in bdrv_graph_wrlock()]
  135: * We want to only check reader_count() after has_writer = 1 is visible   [in bdrv_graph_wrlock()]
  136: * to other threads. That way no more readers can sneak in after we've   [in bdrv_graph_wrlock()]
  181: /* make sure writer sees reader_count before we check has_writer */   [in bdrv_graph_co_rdlock()]
  186: * has_writer == 1: we don't know if writer read reader_count == 0   [in bdrv_graph_co_rdlock()]
  187: * or > 0, but we need to wait anyways because   [in bdrv_graph_co_rdlock()]
  206: * we will enter this critical section and call aio_wait_kick().   [in bdrv_graph_co_rdlock()]
  210: * Additional check when we use the above lock to synchronize   [in bdrv_graph_co_rdlock()]
  [all …]
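The graph-lock comments insist on one ordering: the writer publishes has_writer before inspecting reader_count, and a reader bumps reader_count before inspecting has_writer, so at least one side always notices the other. A standalone sketch of that handshake follows; it busy-waits with sequentially consistent atomics and assumes a single writer (the real code relies on the BQL for that, and yields coroutines / polls the AioContext instead of spinning):

    /*
     * Dekker-style reader/writer handshake, illustration only.
     * Assumes at most one writer thread at a time.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int reader_count;
    static atomic_bool has_writer;

    static void graph_rdlock(void)
    {
        for (;;) {
            atomic_fetch_add(&reader_count, 1);
            /* Writer must see our reader_count before we check has_writer. */
            if (!atomic_load(&has_writer)) {
                return;                      /* fast path: no writer active */
            }
            /* A writer is active (or about to be): undo and wait. */
            atomic_fetch_sub(&reader_count, 1);
            while (atomic_load(&has_writer)) {
                /* spin; the real code yields the coroutine here */
            }
        }
    }

    static void graph_rdunlock(void)
    {
        atomic_fetch_sub(&reader_count, 1);
    }

    static void graph_wrlock(void)
    {
        atomic_store(&has_writer, true);
        /* Only check reader_count after has_writer is visible to readers. */
        while (atomic_load(&reader_count) > 0) {
            /* spin; the real code polls the AioContext instead */
        }
    }

    static void graph_wrunlock(void)
    {
        atomic_store(&has_writer, false);
    }

    int main(void)
    {
        graph_rdlock();
        printf("reader in critical section\n");
        graph_rdunlock();

        graph_wrlock();
        printf("writer in critical section\n");
        graph_wrunlock();
        return 0;
    }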
|
/qemu/util/ |
lockcnt.c |
  76: * only getting out of the loop if we can have another shot at the   [in qemu_lockcnt_cmpxchg_or_wait()]
  77: * fast path. Once we can, get out to compute the new destination   [in qemu_lockcnt_cmpxchg_or_wait()]
  135: /* If we were woken by another thread, we should also wake one because   [in qemu_lockcnt_inc()]
  136: * we are effectively releasing the lock that was given to us. This is   [in qemu_lockcnt_inc()]
  178: /* At this point we do not know if there are more waiters. Assume   [in qemu_lockcnt_dec_and_lock()]
  186: /* If we were woken by another thread, but we're returning in unlocked   [in qemu_lockcnt_dec_and_lock()]
  187: * state, we should also wake a thread because we are effectively   [in qemu_lockcnt_dec_and_lock()]
  219: /* At this point we do not know if there are more waiters. Assume   [in qemu_lockcnt_dec_if_lock()]
  226: /* If we were woken by another thread, but we're returning in unlocked   [in qemu_lockcnt_dec_if_lock()]
  227: * state, we should also wake a thread because we are effectively   [in qemu_lockcnt_dec_if_lock()]
  [all …]
|
/qemu/docs/sphinx/ |
hxtool.py |
  34: # We parse hx files with a state machine which may be in one of two
  52: # empty we ignore the directive -- these are used only to add
  55: # Return the heading text. We strip out any trailing ':' for
  68: # Return the heading text. We strip out any trailing ':' for
  100: # We build up lines of rST in this ViewList, which we will
  142: # We had some rST fragments before the first
  143: # DEFHEADING. We don't have a section to put
  156: # Not a directive: put in output if we are in rST fragment
  162: # We don't have multiple sections, so just parse the rst
  163: # fragments into a dummy node so we can return the children.
  [all …]
|
/qemu/accel/tcg/ |
cpu-exec.c |
  137: /* Print every 2s max if the guest is late. We limit the number   [in init_delay_params()]
  176: * We know that the first page matched, and an otherwise valid TB   [in tb_lookup_cmp()]
  178: * therefore we know that generating a new TB from the current PC   [in tb_lookup_cmp()]
  232: /* we should never be trying to look up an INVALID tb */   [in tb_lookup()]
  301: * we would fail to make forward progress in reverse-continue.   [in check_for_breakpoints_slow()]
  313: * If we have an exact pc match, trigger the breakpoint.   [in check_for_breakpoints_slow()]
  371: * the tcg epilogue so that we return into cpu_tb_exec.
  379: * By definition we've just finished a TB, so I/O is OK.   [in HELPER()]
  383: * The next TB, if we chain to it, will clear the flag again.   [in HELPER()]
  443: * until we actually need to modify the TB. The read-only copy,   [in cpu_tb_exec()]
  [all …]
|