Lines Matching full:we

137     /* Print every 2s max if the guest is late. We limit the number in init_delay_params()
176 * We know that the first page matched, and an otherwise valid TB in tb_lookup_cmp()
178 * therefore we know that generating a new TB from the current PC in tb_lookup_cmp()
232 /* we should never be trying to look up an INVALID tb */ in tb_lookup()
301 * we would fail to make forward progress in reverse-continue. in check_for_breakpoints_slow()
313 * If we have an exact pc match, trigger the breakpoint. in check_for_breakpoints_slow()
371 * the tcg epilogue so that we return into cpu_tb_exec.
379 * By definition we've just finished a TB, so I/O is OK. in HELPER()
383 * The next TB, if we chain to it, will clear the flag again. in HELPER()
443 * until we actually need to modify the TB. The read-only copy, in cpu_tb_exec()
446 * If we insist on touching both the RX and the RW pages, we in cpu_tb_exec()
455 /* We didn't start executing this TB (eg because the instruction in cpu_tb_exec()
456 * counter hit zero); we must restore the guest PC to the address in cpu_tb_exec()
480 * If gdb single-step, and we haven't raised another exception, in cpu_tb_exec()
524 * and we need to release any page locks held. In system mode we in cpu_exec_longjmp_cleanup()
525 * have one tcg_ctx per thread, so we know it was this cpu doing in cpu_exec_longjmp_cleanup()
530 * support such a thing. We'd have to properly register unwind info in cpu_exec_longjmp_cleanup()
568 * We only arrive in cpu_exec_step_atomic after beginning execution in cpu_exec_step_atomic()
569 * of an insn that includes an atomic operation we can't handle. in cpu_exec_step_atomic()
590 * As we start the exclusive region before codegen we must still in cpu_exec_step_atomic()
591 * be in the region if we longjump out of either the codegen or in cpu_exec_step_atomic()
602 * Get the rx view of the structure, from which we find the in tb_set_jmp_target()
712 * If user mode only, we simulate a fake exception which will be in cpu_handle_exception()
766 * If we have requested custom cflags with CF_NOIRQ we should in cpu_handle_interrupt()
768 * by the next TB we execute under normal cflags. in cpu_handle_interrupt()
774 /* Clear the interrupt flag now since we're processing in cpu_handle_interrupt()
818 * True when it is, and we should restart on a new TB, in cpu_handle_interrupt()
851 /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */ in cpu_handle_interrupt()
855 /* Finally, check if we need to exit to the main loop. */ in cpu_handle_interrupt()
901 * If the next tb has more instructions than we have left to in cpu_loop_exec_tb()
902 * execute we need to ensure we find/generate a TB with exactly in cpu_loop_exec_tb()
920 /* if an exception is pending, we execute it here */ in cpu_exec_loop()
957 * We add the TB in the virtual pc hash table in cpu_exec_loop()
968 * We don't take care of direct jumps when address mapping in cpu_exec_loop()
977 /* See if we can patch the calling TB. */ in cpu_exec_loop()
1020 * what we have to do is sleep until it is 0. As for the in cpu_exec()
1021 * advance/delay we gain here, we try to fix it next time. in cpu_exec()