History log of /qemu/target/i386/tcg/seg_helper.c (Results 1 – 25 of 47)
# ad441b8b 14-Aug-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386: implement TSS trap bit

Now that we can do so after the error code has been pushed, raising
the #DB exception for task-switch traps is trivial.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
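
A minimal sketch of the idea, assuming a hypothetical local tss_trap that holds
bit 0 of the new TSS's debug-trap word (offset 0x64 in a 32-bit TSS); the
committed code may differ in detail:

    /* After switch_tss_ra() has pushed the error code on the new stack,
       honouring the TSS T bit is a simple conditional #DB. */
    if (tss_trap) {
        env->dr[6] |= DR6_BT;                 /* task-switch debug trap */
        raise_exception_ra(env, EXCP01_DB, retaddr);
    }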


# c7c33283 10-Jul-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386: move push of error code to switch_tss_ra

Move it there so that it can be done before the TSS trap bit is
processed.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 84307cd6 24-Apr-2025 Philippe Mathieu-Daudé <philmd@linaro.org>

include: Remove 'exec/exec-all.h'

"exec/exec-all.h" is now fully empty, let's remove it.

Mechanical change running:

$ sed -i '/exec\/exec-all.h/d' $(git grep -wl exec/exec-all.h)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250424202412.91612-14-philmd@linaro.org>


# fe1a3ace 24-Apr-2025 Philippe Mathieu-Daudé <philmd@linaro.org>

accel/tcg: Extract probe API out of 'exec/exec-all.h'

Declare probe methods in "accel/tcg/probe.h" to emphasize
they are specific to TCG accelerator.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250424202412.91612-13-philmd@linaro.org>


# 42fa9665 01-Apr-2025 Philippe Mathieu-Daudé <philmd@linaro.org>

exec: Restrict 'cpu_ldst.h' to accel/tcg/

Mechanical change using:

$ sed -i -e 's,exec/cpu_ldst,accel/tcg/cpu-ldst,' \
$(git grep -l exec/cpu_ldst.h)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


# 8480f7c7 01-Apr-2025 Philippe Mathieu-Daudé <philmd@linaro.org>

target/i386: Restrict SoftMMU mmu_index() to TCG

Move x86_cpu_mmu_index() to tcg-cpu.c, convert
CPUClass::mmu_index() to TCGCPUOps::mmu_index().

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250401080938.32278-10-philmd@linaro.org>


# 611c34a7 01-Apr-2025 Philippe Mathieu-Daudé <philmd@linaro.org>

target/i386: Restrict cpu_mmu_index_kernel() to TCG

Move cpu_mmu_index_kernel() to seg_helper.c.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250401080938.32278-9-philmd@linaro.org>


# 8fa11a4d 06-Nov-2024 Alexander Graf <graf@amazon.com>

target/i386: Fix legacy page table walk

Commit b56617bbcb4 ("target/i386: Walk NPT in guest real mode") added
logic to run the page table walker even in real mode if we are in NPT
mode. That function then determined whether real mode or paging is
active based on whether the pg_mode variable was 0.

Unfortunately pg_mode is 0 in two situations:

1) Paging is disabled (real mode)
2) Paging is in 2-level paging mode (32bit without PAE)

That means the walker now assumed that 2-level paging mode was real
mode, breaking NetBSD as well as Windows XP.

To fix that, this patch adds a new PG flag to pg_mode which indicates
whether paging is active at all and uses that to determine whether we
are in real mode or not.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2654
Fixes: b56617bbcb4 ("target/i386: Walk NPT in guest real mode")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Link: https://lore.kernel.org/r/20241106154329.67218-1-graf@amazon.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
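
A sketch of the disambiguation, assuming the new flag is spelled PG_MODE_PG and
that cr0 is whichever CR0 governs the walk (the guest's for an NPT walk); the
exact patch may wire it up slightly differently:

    /* when pg_mode is assembled from the control registers */
    if (cr0 & CR0_PG_MASK) {
        pg_mode |= PG_MODE_PG;
    }

    /* in the walker: real mode is "paging off", not "pg_mode == 0",
       because 2-level 32-bit paging also has no other pg_mode bits set */
    if (!(pg_mode & PG_MODE_PG)) {
        /* identity-map the address instead of walking page tables */
    }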


# e136648c 18-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: Use DPL-level accesses for interrupts and call gates

Stack accesses should be explicit and use the privilege level of the
target stack. This ensures that SMAP is not applied when the target
stack is in ring 3.

This fixes a bug wherein i386/tcg assumed that an interrupt return, or a
far call using the CALL or JMP instruction, was always going from kernel
or user mode to kernel mode when using a call gate. This assumption is
violated if the call gate has a DPL that is greater than 0.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
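
The gist, as a sketch: the MMU index for the pushes is derived from the
destination privilege level rather than the current CPL. The helper name
x86_mmu_index_pl() and the StackAccess structure (described further down this
log) are assumptions about the surrounding code, not a quote of the patch:

    StackAccess sa;

    sa.env = env;
    sa.ra = retaddr;
    /* the frame is built on the target stack, so permission checks,
       including SMAP, apply at the DPL of that stack */
    sa.mmu_index = x86_mmu_index_pl(env, dpl);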


# ded1db48 19-Aug-2024 Richard Henderson <richard.henderson@linaro.org>

target/i386: Fix tss access size in switch_tss_ra

The two limit_max variables represent size - 1, just like the
encoding in the GDT, thus the 'old' access was off by one.
Access the minimal size of the new tss: the complete tss contains
the iopb, which may be a larger block than the access api expects,
and irrelevant because the iopb is not accessed during the
switch itself.

Fixes: 8b131065080a ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
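
A sketch of the corrected sizing, assuming the X86Access API's
access_prepare_mmu(); old_tss_limit_max and tss_limit_max are the size-1 values
already computed in switch_tss_ra(), hence the +1 on both byte counts:

    /* old TSS: limit_max is size - 1, so the access spans limit_max + 1 bytes */
    access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max + 1,
                       MMU_DATA_STORE, mmu_index, retaddr);

    /* new TSS: only the fixed part is touched during the switch; the iopb
       that may follow it is not accessed here, so it is not probed either */
    access_prepare_mmu(&new, env, tss_base, tss_limit_max + 1,
                       MMU_DATA_LOAD, mmu_index, retaddr);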


# bde8adb8 23-Jul-2024 Peter Maydell <peter.maydell@linaro.org>

target/i386: Remove dead assignment to ss in do_interrupt64()

Coverity points out that in do_interrupt64() in the "to inner
privilege" codepath we set "ss = 0", but because we also set
"new_stack = 1" there, later in the function we will always override
that value of ss with "ss = 0 | dpl".

Remove the unnecessary initialization of ss, which allows us to
reduce the scope of the variable to only where it is used. Borrow a
comment from helper_lcall_protected() that explains what "0 | dpl"
means here.

Resolves: Coverity CID 1527395
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723162525.1585743-1-peter.maydell@linaro.org
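
For reference, a sketch of the surviving assignment: in 64-bit mode the
inner-privilege path loads SS with a NULL selector whose RPL is the new
privilege level, which is what "0 | dpl" spells out:

    if (new_stack) {
        /* SS = NULL selector with RPL = new CPL */
        uint32_t ss = 0 | dpl;
        cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0, dpl << DESC_DPL_SHIFT);
    }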


# 6a079f2e 19-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: save current task state before loading new one

This is how the steps are ordered in the manual. EFLAGS.NT is
overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 8b131065 18-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: use X86Access for TSS access

This takes care of probing the vaddr range in advance, and is also faster
because it avoids repeated TLB lookups. It also matches the Intel manual
better, as it says "Checks that the current (old) TSS, new TSS, and all
segment descriptors used in the task switch are paged into system memory";
note however that it's not clear how the processor checks for segment
descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
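
The shape of the change, as a sketch: both TSS ranges are probed once with
access_prepare_mmu() (see the sizing sketch further up this log), after which
individual fields go through the X86Access handles with no further TLB lookups
or faults. Offsets follow the 32-bit TSS layout; next_eip and old_eflags are
assumed locals of the surrounding function:

    /* load from the new TSS through the probed mapping */
    uint32_t new_eip    = access_ldl(&new, tss_base + 0x20);
    uint32_t new_eflags = access_ldl(&new, tss_base + 0x24);

    /* save the outgoing state into the old TSS the same way */
    access_stl(&old, env->tr.base + 0x20, next_eip);
    access_stl(&old, env->tr.base + 0x24, old_eflags);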


# 05d41bbc 19-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: check for correct busy state before switching to a new task

This step is listed in the Intel manual: "Checks that the new task is available
(call, jump, exception, or interrupt) or busy (IRET return)".

The AMD manual lists the same operation under the "Preventing recursion"
paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor
checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
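
A sketch of the check, with DESC_TSS_BUSY_MASK assumed as the busy bit in the
descriptor's second word (e2) and source one of SWITCH_TSS_{JMP,CALL,IRET};
the committed code may fold the two failure paths differently:

    bool busy = (e2 & DESC_TSS_BUSY_MASK) != 0;

    if (source == SWITCH_TSS_IRET) {
        if (!busy) {
            /* returning to a task that is not marked busy: #TS(selector) */
            raise_exception_err_ra(env, EXCP0A_TSS,
                                   tss_selector & 0xfffc, retaddr);
        }
    } else if (busy) {
        /* entering a task that is already busy: #GP(selector) */
        raise_exception_err_ra(env, EXCP0D_GPF,
                               tss_selector & 0xfffc, retaddr);
    }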


# 8053862a 18-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: Compute MMU index once

Add the MMU index to the StackAccess struct, so that it can be cached
or (in the next patch) computed from information that is not in
CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 059368bc 17-Jun-2024 Richard Henderson <richard.henderson@linaro.org>

target/i386/tcg: Reorg push/pop within seg_helper.c

Interrupts and call gates should use accesses with the DPL as
the privilege level. While computing the applicable MMU index
is easy, the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and adjusting both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data). This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure
to contain the stack parameters, env, unwind return address, and rewrite
the macros into functions. The struct provides an easy home for the MMU
index as well.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
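
A sketch of the resulting shape (close to, but not necessarily identical with,
the committed code); cpu_stw_mmuidx_ra() is the generic per-mmu-index store
helper from the cpu_ldst API:

    typedef struct StackAccess {
        CPUX86State *env;
        uintptr_t ra;           /* unwind return address */
        target_ulong ss_base;
        target_ulong sp;
        target_ulong sp_mask;
        int mmu_index;          /* added by "Compute MMU index once" above */
    } StackAccess;

    static void pushw(StackAccess *sa, uint16_t val)
    {
        sa->sp -= 2;
        cpu_stw_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->mmu_index, sa->ra);
    }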


# 312ef324 18-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: use PUSHL/PUSHW for error code

Do not pre-decrement esp, let the macros subtract the appropriate
operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 0bd385e7 11-Jun-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386/tcg: Allow IRET from user mode to user mode with SMAP

This fixes a bug wherein i386/tcg assumed an interrupt return using
the IRET instruction was always returning from kernel mode to either
kernel mode or user mode. This assumption is violated when IRET is used
as a clever way to restore thread state, as for example in the dotnet
runtime. There, IRET returns from user mode to user mode.

This bug is that stack accesses from IRET and RETF, as well as accesses
to the parameters in a call gate, are normal data accesses using the
current CPL. This manifested itself as a page fault in the guest Linux
kernel due to SMAP preventing the access.

This bug appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
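
Sketched here in terms of the pop of the return EIP; the real change covers all
of the IRET/RETF stack accesses and the call-gate parameter copy, and the
"before" line is illustrative rather than a quote of the old macro:

    /* before (illustrative): a kernel-mode access, so with CR4.SMAP set a
       CPL-3 IRET faulted on its own user-mode stack */
    new_eip = cpu_ldl_kernel_ra(env, ssp + (sp & sp_mask), retaddr);

    /* after: an ordinary data access performed at the current CPL */
    new_eip = cpu_ldl_data_ra(env, ssp + (sp & sp_mask), retaddr);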


# a7cf4949 17-Jun-2024 Richard Henderson <richard.henderson@linaro.org>

target/i386/tcg: Remove SEG_ADDL

This truncation is now handled by MMU_*32_IDX. The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g. big real mode or vm86 mode), which did
not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
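
For reference, the removed macro was essentially the following; the explicit
cast is what the 32-bit MMU indexes now make redundant:

    /* removed: the 32-bit wraparound is now a property of the MMU index */
    #define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))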


# ae541c0e 25-May-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386: convert non-grouped, helper-based 2-byte opcodes

These have very simple generators and no need for complex group
decoding. Apart from LAR/LSL which are simplified to use
gen_op_deposit_reg_v and movcond, the code is generally lifted
from translate.c into the generators.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 69cb498c 29-May-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386: fix pushed value of EFLAGS.RF

When preparing an exception stack frame for a fault exception, the value
pushed for RF is 1. Take that into account. The same should be true
of interrupts for repeated string instructions, but the situation there
is complicated.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
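
A sketch of the intent; set_rf is a hypothetical flag standing in for however
the fault/trap distinction reaches the code that builds the frame:

    uint32_t eflags = cpu_compute_eflags(env);

    if (set_rf) {
        eflags |= RF_MASK;      /* the pushed image has RF = 1 for faults */
    }
    /* eflags is then pushed as part of the exception frame */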


# abdcc5c8 16-May-2024 Paolo Bonzini <pbonzini@redhat.com>

target/i386: set CC_OP in helpers if they want CC_OP_EFLAGS

Mark cc_op as clean and do not spill it at the end of the translation block.
Technically this is a tiny bit less efficient, but:

* it results in translations that are a tiny bit smaller

* for most of these instructions, it is not unlikely that they are close to
the end of the basic block, in which case cc_op would not be overwritten

* anyway the cost is probably dwarfed by that of computing flags.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
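
A sketch of the new helper-side pattern; eflags here is whatever flags value
the helper has just computed, and the exact masking in the tree may differ:

    /* record both the flags and the fact that cc_op is now CC_OP_EFLAGS,
       so the translator no longer needs to spill cc_op after the call */
    env->cc_src = eflags & (CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
    env->cc_op = CC_OP_EFLAGS;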


# 2455e9cf 25-Oct-2023 Paolo Bonzini <pbonzini@redhat.com>

target/i386: clean up cpu_cc_compute_all

cpu_cc_compute_all() has an argument that is always equal to CC_OP for historical
reasons (dating back to commit a7812ae4123, "TCG variable type checking.", 2008-11-17,
which added the argument to helper_cc_compute_all). It does not make sense for the
argument to have any other value, so remove it and clean up some lines that are not
too long anymore.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
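
The call-site change, in brief (CC_OP is the long-standing macro for
env->cc_op):

    /* before: the second argument always duplicated env->cc_op */
    eflags = cpu_cc_compute_all(env, CC_OP);

    /* after: the helper reads env->cc_op itself */
    eflags = cpu_cc_compute_all(env);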


# c35b2fb1 11-Oct-2023 Paolo Bonzini <pbonzini@redhat.com>

target/i386: fix shadowed variable pasto

Commit a908985971a ("target/i386/seg_helper: introduce tss_set_busy",
2023-09-26) failed to use the tss_selector argument of the new function,
which was therefore unused.

This shows up as a #GP fault when booting old versions of 32-bit
Linux.

Fixes: a908985971a ("target/i386/seg_helper: introduce tss_set_busy", 2023-09-26)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20231011135350.438492-1-pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
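
A sketch of the corrected helper, close to but not necessarily identical with
the in-tree code; the pasto was that the body ignored its tss_selector
parameter:

    static void tss_set_busy(CPUX86State *env, int tss_selector,
                             bool value, uintptr_t retaddr)
    {
        target_ulong ptr = env->gdt.base + (tss_selector & ~7);
        uint32_t e2 = cpu_ldl_kernel_ra(env, ptr + 4, retaddr);

        if (value) {
            e2 |= DESC_TSS_BUSY_MASK;
        } else {
            e2 &= ~DESC_TSS_BUSY_MASK;
        }
        cpu_stl_kernel_ra(env, ptr + 4, e2, retaddr);
    }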


# 49958057 25-Sep-2023 Paolo Bonzini <pbonzini@redhat.com>

target/i386/seg_helper: remove shadowed variable

Return the width of the new task directly from switch_tss_ra.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

