History log of /kvm-unit-tests/lib/x86/processor.h (Results 51 – 75 of 123)
Revision Date Author Comments
# 537d39df 22-Mar-2022 Maxim Levitsky <mlevitsk@redhat.com>

svm: add tests for LBR virtualization

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220322205613.250925-7-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# c47292f4 27-Jan-2022 Jim Mattson <jmattson@google.com>

x86: Define wrtsc(tsc) as wrmsr(MSR_IA32_TSC, tsc)

Remove some inline assembly code duplication and opportunistically
replace the magic constant, "0x10," with "MSR_IA32_TSC."

Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20220127215548.2016946-3-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
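
For reference, a minimal sketch of the shape such a wrapper usually takes
(illustrative only; it assumes the suite's wrmsr() helper and the
MSR_IA32_TSC define from msr.h):

    static inline void wrtsc(u64 tsc)
    {
            wrmsr(MSR_IA32_TSC, tsc);
    }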


# 7a14c1d9 15-Oct-2021 Jim Mattson <jmattson@google.com>

x86: Fix operand size for lldt

The lldt instruction takes an r/m16 operand.

Fixes: 7d36db351752 ("Initial commit from qemu-kvm.git kvm/test/")
Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20211015195530.301237-2-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
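
A sketch of the corrected wrapper, assuming the fix narrows the operand
to a 16-bit selector as the subject suggests:

    /* lldt takes an r/m16 operand, so pass a 16-bit selector */
    static inline void lldt(u16 val)
    {
            asm volatile ("lldt %0" : : "rm"(val));
    }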


# 7dfe5473 09-Sep-2021 Sean Christopherson <seanjc@google.com>

lib: Drop x86/processor.h's barrier() in favor of compiler.h version

Drop x86's duplicate version of barrier() in favor of the generic #define
provided by linux/compiler.h. Include compiler.h in the all-encompassing
libcflat.h to pick up barrier() and other future goodies, e.g. new
attributes defines.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210909183207.2228273-2-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
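
The generic definition referred to here is the usual compiler-only
barrier; a minimal sketch of what linux/compiler.h provides:

    /* Forbids the compiler from caching or reordering memory accesses
     * across this point; emits no machine instructions. */
    #define barrier() asm volatile("" : : : "memory")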


# f4a8b68c 13-Aug-2021 Lara Lazier <laramglazier@gmail.com>

x86: Added LA57 support to is_canonical

When LA57 is enabled, the function is_canonical needs to check
that the address is correctly sign-extended from bit 57 (instead
of bit 48) to bit 63.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210813111833.42377-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
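
One way such a check can be written (a sketch only; it assumes the
address width is derived from CR4.LA57, which may differ from the
actual implementation in processor.h):

    static inline bool is_canonical(u64 addr)
    {
            /* 57-bit linear addresses with LA57, 48-bit otherwise */
            int va_width = (read_cr4() & X86_CR4_LA57) ? 57 : 48;
            int shift = 64 - va_width;

            /* canonical <=> sign-extending the low bits reproduces addr */
            return (u64)((s64)(addr << shift) >> shift) == addr;
    }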


# 520e2789 12-Aug-2021 Babu Moger <Babu.Moger@amd.com>

nSVM: Fix NPT reserved bits test hang

The SVM reserved bits test hangs in an infinite loop. The test uses
the 'rdtsc' instruction to generate random reserved bits, and it
hangs while generating the valid reserved bits.

The AMD64 Architecture Programmer's Manual Volume 2: System
Programming says that, when using the TSC to measure elapsed time,
programmers must be aware that for some implementations the rate at
which the TSC is incremented varies based on the processor power
management state (P-state). For other implementations, the TSC
increment rate is fixed and is not subject to power-management
related changes in processor frequency.

On an AMD gen3 machine, the rdtsc value is a P-state multiplier.
Here are the rdtsc values from 10 successive reads.
0 rdtsc = 0x1ec92919b9710
1 rdtsc = 0x1ec92919c01f0
2 rdtsc = 0x1ec92919c0f70
3 rdtsc = 0x1ec92919c18d0
4 rdtsc = 0x1ec92919c2060
5 rdtsc = 0x1ec92919c28d0
6 rdtsc = 0x1ec92919c30b0
7 rdtsc = 0x1ec92919c5660
8 rdtsc = 0x1ec92919c6150
9 rdtsc = 0x1ec92919c7c80

The test uses the lower nibble and right shifts to generate a
valid reserved bit. It loops forever because the lower nibble is
always zero.

Fix the issue by using the rdrand instruction when it is available,
or by skipping the test if valid reserved bits cannot be generated.

Signed-off-by: Babu Moger <Babu.Moger@amd.com>
Message-Id: <162880842856.21995.11223675477768032640.stgit@bmoger-ubuntu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
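
The general shape of an RDRAND-based replacement (a sketch; the helper
name and exact fallback are assumptions, not the commit's code):

    /* RDRAND sets CF=1 when it returns a valid random number. */
    static inline bool rdrand(u64 *value)
    {
            unsigned char ok;

            asm volatile ("rdrand %0\n\t"
                          "setc %1"
                          : "=r"(*value), "=qm"(ok));
            return ok;
    }

    /* If CPUID.01H:ECX bit 30 (RDRAND) is clear, skip the test instead. */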


# db35bb32 26-Jul-2021 Krish Sadhukhan <krish.sadhukhan@oracle.com>

Test: x86: Add a #define for the RF bit in EFLAGS register

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210726180226.253738-3-krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
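
RF is bit 16 of RFLAGS, so the define presumably looks like the sketch
below (macro name assumed to follow the existing X86_EFLAGS_* style):

    #define X86_EFLAGS_RF (1u << 16)  /* Resume Flag: suppresses instruction-breakpoint #DB */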


# 7f8f7356 26-Jul-2021 Krish Sadhukhan <krish.sadhukhan@oracle.com>

Test: x86: Move setter/getter for Debug registers to common library

The setter/getter functions for the DR0..DR3 registers exist in debug.c
test and hence they can not be re-used by other tests. Therefore, move
them to the common library.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210726180226.253738-2-krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
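
A sketch of what the common-library accessors can look like for one of
the registers (the series covers DR0..DR3; exact names assumed):

    static inline void write_dr0(void *value)
    {
            asm volatile ("mov %0, %%dr0" : : "r"(value));
    }

    static inline void *read_dr0(void)
    {
            void *value;

            asm volatile ("mov %%dr0, %0" : "=r"(value));
            return value;
    }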


# f6972bd6 22-Jul-2021 Lara Lazier <laramglazier@gmail.com>

nSVM: Added test for VGIF feature

When VGIF is enabled, STGI executed in guest mode sets bit 9 of
the int_ctl field (offset 60h) of the VMCB, while CLGI clears it.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210722131718.11667-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
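
A sketch of the kind of check the test can make (the bit position comes
from the message above; the define name, vmcb layout and report() usage
are assumptions about the suite):

    #define V_GIF_MASK (1 << 9)  /* virtual GIF value in int_ctl */

    /* after the L2 guest executes STGI: */
    report(vmcb->control.int_ctl & V_GIF_MASK, "STGI set int_ctl bit 9");
    /* ... and after it executes CLGI: */
    report(!(vmcb->control.int_ctl & V_GIF_MASK), "CLGI cleared int_ctl bit 9");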


# b52bf046 22-Jun-2021 Sean Christopherson <seanjc@google.com>

x86: Add GBPAGES CPUID macro, clean up CPUID comments

Add a GBPAGES CPUID macro for a future NPT test and reorganize the
entries to be explicitly Basic vs. Extended, with a hint that Basic leafs
come from Intel and Extended leafs come from AMD. Organizing by Intel
vs. AMD is at best misleading, e.g. if both support a feature, and at
worst flat out wrong, e.g. AMD defined NX and LM (not sure about RDPRU,
but avoiding such questions is the whole point of organizing by type).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622210047.3691840-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
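
The feature in question is CPUID Fn8000_0001 EDX bit 26 (1GB pages); a
sketch of the macro in the file's CPUID-feature style (exact encoding
assumed):

    /* Extended leaf, originally defined by AMD */
    #define X86_FEATURE_GBPAGES (CPUID(0x80000001, 0, EDX, 26))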


# c986dbe8 09-Jun-2021 Nadav Amit <nadav.amit@gmail.com>

x86/vmx: skip error-code delivery tests for #CP

Old Intel CPUs, which do not support control protection exception, do
not expect an error code for #CP, while new ones expect an error-code.

Intel SDM does not say that the delivery of an error-code for #CP is
conditional on anything, not even CPU support of CET. So it appears that
the correct testing is just to skip the error-code delivery test for
the #CP exception.

Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
Message-Id: <20210609182945.36849-9-nadav.amit@gmail.com>


# 22abdd39 09-Jun-2021 Nadav Amit <nadav.amit@gmail.com>

x86/hypercall: enable the test on non-KVM environment

KVM knows to emulate both vmcall and vmmcall regardless of the
actual architecture. Native hardware does not behave this way. Based
on the availability of the test device, detect whether the test is
running in a non-KVM environment, and if so, issue vmcall or vmmcall
according to the actual architecture.

Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
Message-Id: <20210609182945.36849-5-nadav.amit@gmail.com>
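
A sketch of the vendor-dependent selection the message describes (the
vendor-check helper and register convention are assumptions):

    static inline void arch_hypercall(unsigned long nr)
    {
            if (is_intel())  /* assumed vendor-check helper */
                    asm volatile ("vmcall" : : "a"(nr) : "memory");
            else             /* AMD */
                    asm volatile ("vmmcall" : : "a"(nr) : "memory");
    }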


# c865f654 09-Jun-2021 Cornelia Huck <cohuck@redhat.com>

x86: unify header guards

Standardize header guards to _ASMX86_HEADER_H_, _X86_HEADER_H_,
and X86_HEADER_H.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210609143712.60933-8-cohuck@redhat.com>
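
For this particular header the standardized guard presumably looks like
(illustrative):

    #ifndef _X86_PROCESSOR_H_
    #define _X86_PROCESSOR_H_
    /* ... declarations ... */
    #endif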


# 88f0bb17 22-Apr-2021 Sean Christopherson <seanjc@google.com>

x86: msr: Test that always-canonical MSRs #GP on non-canonical value

Verify that WRMSR takes a #GP when writing a non-canonical value to a
MSR that always takes a 64-bit address. Specifically, AMD doesn't
enforce a canonical address for the SYSENTER MSRs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210422030504.3488253-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
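
One common way to express such a check in this suite's style is a
fault-catching wrapper; the sketch below assumes helpers such as
ASM_TRY() and exception_vector() from lib/x86/desc.h:

    static inline int wrmsr_checking(u32 index, u64 val)
    {
            u32 lo = val, hi = val >> 32;

            asm volatile (ASM_TRY("1f")
                          "wrmsr\n\t"
                          "1:"
                          : : "a"(lo), "d"(hi), "c"(index) : "memory");
            return exception_vector();  /* vector of the caught fault, if any */
    }

    /* e.g. expect wrmsr_checking(<sysenter MSR>, <non-canonical value>) == GP_VECTOR */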


# 142ff635 22-Apr-2021 Sean Christopherson <seanjc@google.com>

x86: msr: Verify 64-bit only MSRs fault on 32-bit hosts

Assert that 64-bit only MSRs take a #GP when read or written on 32-bit
hosts, as opposed to simply skipping the MSRs on 32-bit builds. Force
"-cpu max" so that CPUID can be used to check for 64-bit support.

Technically, the unit test could/should be even more aggressive and
require KVM to inject faults if the vCPU model doesn't support 64-bit
mode. But, there are no plans to go to that level of emulation in KVM,
and practically speaking there isn't much benefit as allowing a 32-bit
vCPU to access the MSRs on a 64-bit host is a benign virtualization hole.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210422030504.3488253-13-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 739f7de6 18-Feb-2021 Paolo Bonzini <pbonzini@redhat.com>

x86: clean up EFER definitions

The X86_EFER_LMA definition is wrong (it defines EFER.LME instead),
while X86_IA32_EFER is unused.

There are also two useless WRMSRs that try to set EFER_LMA (really
EFER.LME) in x86/pks.c and x86/pku.c. These are wrong too, not just
because the bit definition is incorrect but also because EFER.LME
must be set before CR0.PG. But they are useless because the two
tests are 64-bit only.

Clean them all up.

Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
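
For context, the two bits involved (bit positions are architectural;
macro names assumed to follow the file's style):

    #define X86_EFER_LME (1ull << 8)   /* Long Mode Enable (set before CR0.PG) */
    #define X86_EFER_LMA (1ull << 10)  /* Long Mode Active (status bit) */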


# 79e53994 06-May-2020 Yang Weijiang <weijiang.yang@intel.com>

x86: Add test cases for user-mode CET validation

This unit test is intended to exercise KVM's user-mode CET support;
it has been tested on a new Intel platform. Two CET features are
covered: Shadow Stack Protection (SHSTK) and Indirect-Branch
Tracking (IBT).

In the SHSTK test, if the function return address on the normal
stack is tampered with so that it no longer matches the one on the
shadow stack, #CP (Control Protection Exception) is generated on
function return. This feature is supported by the processor itself;
no compiler/linker option is required.

However, to enable IBT, we need to add -fcf-protection=full to the
compiler options; this makes the compiler insert endbr64 at the
very beginning of each jmp/call target, given that the binary is
for x86_64.

To get PASS results, the following conditions must be met:
1) The processor supports the CET feature.
2) The kernel is patched with the latest CET kernel patches.
3) The KVM and QEMU are patched with the latest CET patches.
4) Use CET-enabled gcc to compile the test app.

v2:
- Removed extra dependency on test framework for user/kernel mode switch.
- Directly set #CP handler instead of through TSS.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20200506082110.25441-12-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
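
The v2 note about setting the #CP handler directly presumably amounts
to something like the sketch below (vector 21 is architectural; the
registration helper and struct names are assumptions about the suite):

    static int cp_count;

    static void handle_cp(struct ex_regs *regs)
    {
            cp_count++;  /* the test can assert this after the tampered return */
    }

    /* in main(), instead of routing #CP through a task gate/TSS: */
    handle_exception(CP_VECTOR, handle_cp);  /* CP_VECTOR == 21 */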


# fdae6092 05-Nov-2020 Chenyi Qiang <chenyi.qiang@intel.com>

x86: Add tests for PKS

This unit-test is intended to test the KVM support for Protection Keys
for Supervisor Pages (PKS). If CR4.PKS is set in long mode, supervisor
pkeys are checked in addition to normal paging protections and Access or
Write can be disabled via a MSR update without TLB flushes when
permissions change.

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20201105081805.5674-9-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
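
The register being updated is IA32_PKRS (MSR 0x6E1); a sketch of the
kind of update the message describes (macro name assumed):

    #define MSR_IA32_PKRS 0x6e1

    /* Each pkey k has an AD control at bit 2*k and a WD control at
     * bit 2*k + 1; changing them via WRMSR needs no TLB flush. */
    wrmsr(MSR_IA32_PKRS, new_pkrs_value);  /* illustrative */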


# 1c320e18 13-Oct-2020 Yadong Qi <yadong.qi@intel.com>

x86: vmx: Add test for SIPI signal processing

The test verifies the following functionality:
A SIPI signal received when the CPU is in VMX non-root mode:
    if ACTIVITY_STATE == WAIT_SIPI
        VMExit with (reason == 4)
    else
        SIPI signal is ignored

The test cases depend on IA32_VMX_MISC bit 8: if this bit is 1,
the test cases are executed; otherwise they are skipped.

Signed-off-by: Yadong Qi <yadong.qi@intel.com>
Message-Id: <20201013052845.249113-1-yadong.qi@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 74a66858 29-Oct-2020 Jim Mattson <jmattson@google.com>

x86: vmx: Add test for L2 change of CR4.OSXSAVE

If L1 allows L2 to modify CR4.OSXSAVE, then L0 kvm recalculates the
guest's CPUID.01H:ECX.OSXSAVE bit when the L2 guest changes
CR4.OSXSAVE via MOV-to-CR4. Verify that kvm also recalculates this
CPUID bit when loading L1's CR4 from the "host CR4" field of the
VMCS12.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Message-Id: <20201029171024.486256-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 7820ac52 21-Sep-2020 Krish Sadhukhan <krish.sadhukhan@oracle.com>


According to section "Checks on Guest Segment Registers" in Intel SDM vol 3C,
the following checks are performed on the Guest Segment Registers on vmentry
of nested guests:

Selector fields:
  — TR. The TI flag (bit 2) must be 0.
  — LDTR. If LDTR is usable, the TI flag (bit 2) must be 0.
  — SS. If the guest will not be virtual-8086 and the "unrestricted
    guest" VM-execution control is 0, the RPL (bits 1:0) must equal
    the RPL of the selector field for CS.

Base-address fields:
  — CS, SS, DS, ES, FS, GS. If the guest will be virtual-8086, the
    address must be the selector field shifted left 4 bits (multiplied
    by 16).
  — The following checks are performed on processors that support Intel
    64 architecture:
      TR, FS, GS. The address must be canonical.
      LDTR. If LDTR is usable, the address must be canonical.
      CS. Bits 63:32 of the address must be zero.
      SS, DS, ES. If the register is usable, bits 63:32 of the
        address must be zero.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20200921081027.23047-3-krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# 63f684f3 03-Jul-2020 Sean Christopherson <sean.j.christopherson@intel.com>

x86: access: Add test for illegal toggling of CR4.LA57 in 64-bit mode

Add a test to verify that KVM correctly injects a #GP if the guest
attempts to toggle CR4.LA57 while 64-bit mode is active. Use two
versions of the toggling, one to toggle only LA57 and a second to toggle
PSE in addition to LA57. KVM doesn't intercept LA57, i.e. toggling only
LA57 effectively tests the CPU, not KVM. Use PSE as the whipping boy as
it will not trigger a #GP on its own, is universally available, is
ignored in 64-bit mode, and most importantly is trapped by KVM.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200703021903.5683-1-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


# b49a1a6d 08-May-2020 Jim Mattson <jmattson@google.com>

x86: VMX: Add a VMX-preemption timer expiration test

When the VMX-preemption timer is activated, code executing in VMX
non-root operation should never be able to record a TSC value beyond
the deadline imposed by adding the scaled VMX-preemption timer value
to the first TSC value observed by the guest after VM-entry.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Message-Id: <20200508203938.88508-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
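
In other words, the bound being checked is (the scale comes from
IA32_VMX_MISC[4:0]; variable names here are illustrative):

    u64 deadline = tsc_at_vmentry +
                   ((u64)preemption_timer_value << vmx_misc_scale);

    /* every TSC value observed in VMX non-root operation must be <= deadline */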


# 33a6576c 09-Apr-2020 Krish Sadhukhan <krish.sadhukhan@oracle.com>

kvm-unit-tests: SVM: Add #defines for CR0.CD and CR0.NW

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20200409205035.16830-3-krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
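
CD and NW are CR0 bits 30 and 29 respectively, so the defines
presumably look like (names assumed to follow the X86_CR0_* style):

    #define X86_CR0_CD (1ul << 30)  /* Cache Disable */
    #define X86_CR0_NW (1ul << 29)  /* Not Write-through */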


# 2b934609 04-Mar-2020 Xiaoyao Li <xiaoyao.li@intel.com>

x86: Move definition of some exception vectors into processor.h

Both processor.h and desc.h hold some definitions of exception
vectors; put them together in processor.h.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
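
For reference, the kind of definitions being consolidated (vector
numbers are architectural; macro names as commonly used in the suite):

    #define DE_VECTOR  0   /* #DE: divide error */
    #define DB_VECTOR  1   /* #DB: debug */
    #define UD_VECTOR  6   /* #UD: invalid opcode */
    #define GP_VECTOR 13   /* #GP: general protection */
    #define PF_VECTOR 14   /* #PF: page fault */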
