History log of /kvm-unit-tests/lib/x86/ (Results 1 – 25 of 414)
Revision Date Author Comments
70445405 05-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/msr: Add a testcase to verify SPEC_CTRL exists (or not) as expected

Verify that SPEC_CTRL can be read when it should exist, that all reads
and writes #GP when it does not exist, and that various bits can be set
when they're supported.

Opportunistically define more AMD mitigation features.

Cc: Chao Gao <chao.gao@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20250605192643.533502-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

5cd94b1b 05-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/msr: Treat PRED_CMD as supported if CPU has SBPB

The PRED_CMD MSR also exists if the CPU supports SBPB.

Link: https://lore.kernel.org/r/20250605192643.533502-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

cebc6ef7 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Move SEV MSR definitions to msr.h

Move the SEV MSR definitions to msr.h so that they're available for non-EFI
builds. There is nothing EFI specific about the architectural definitions.

Opportunistically massage the names to align with existing style.

No functional change intended.

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Link: https://lore.kernel.org/r/20250610195415.115404-15-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

38147316 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location

Use X86_PROPERTY_SEV_C_BIT instead of open coding equivalent functionality,
and delete the overly-verbose CPUID_FN_ENCRYPT_MEM_CAPAB macro.

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Link: https://lore.kernel.org/r/20250610195415.115404-13-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

b643ae62 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F

Define proper X86_FEATURE_* flags for CPUID 0x8000001F, and use them
instead of open coding equivalent checks in amd_sev_{,es_}enabled().

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Link: https://lore.kernel.org/r/20250610195415.115404-12-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

5d80d64d 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/sev: Use VC_VECTOR from processor.h

Use VC_VECTOR (defined in processor.h along with all other known vectors)
and drop the one-off SEV_ES_VC_HANDLER_VECTOR macro.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-10-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

215e67c1 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information

Use the recently introduced X86_PROPERTY_PMU_* macros to get PMU
information instead of open coding equivalent functionality.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Sandipan Das <sandipan.das@amd.com>
Link: https://lore.kernel.org/r/20250610195415.115404-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

92dc5f7a 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/pmu: Mark Intel architectural event available iff X < CPUID.0xA.EAX[31:24]

Mask the set of available architectural events based on the bit vector
length to avoid marking reserved/undefined events as available. Per the
SDM:

EAX Bits 31-24: Length of EBX bit vector to enumerate architectural
performance monitoring events. Architectural event x is
supported if EBX[x]=0 && EAX[31:24]>x.

Suggested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
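
As a rough illustration of the SDM rule quoted above, the availability
check can be sketched like this (helper name and signature are
assumptions, not the actual kvm-unit-tests code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the SDM rule: architectural event x is available iff x is
 * within the EBX bit-vector length from CPUID.0xA.EAX[31:24] AND the
 * corresponding "event unavailable" bit EBX[x] is clear. */
static bool arch_event_is_available(uint32_t eax, uint32_t ebx, unsigned int x)
{
	unsigned int vec_len = (eax >> 24) & 0xff;	/* EAX bits 31:24 */

	return x < vec_len && !(ebx & (1u << x));
}
```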

6c9e1907 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86/pmu: Mark all arch events as available on AMD, and rename fields

Mark all arch events as available on AMD, as AMD PMUs don't provide the
"not available" CPUID field, and the number of GP counters has nothing to
do with which architectural events are available/supported.

Rename gp_counter_mask_length to arch_event_mask_length, and
pmu_gp_counter_is_available() to pmu_arch_event_is_available(), to
reflect what the field and helper actually track.

Cc: Dapeng Mi <dapeng1.mi@linux.intel.com>
Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
Tested-by: Sandipan Das <sandipan.das@amd.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

25e295a5 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES

Add a definition for X86_PROPERTY_INTEL_PT_NR_RANGES, and use it instead
of open coding equivalent logic in the LA57 testcase that verifies the
canonical address behavior of PT MSRs.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

587db1e8 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}

Use X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} to implement get_supported_xcr0().

Opportunistically rename the helper and move it to processor.h.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
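
A minimal sketch of gluing the two property halves together
(illustrative signature; the real helper reads CPUID.0xD.0x0 directly):

```c
#include <stdint.h>

/* CPUID.0xD.0x0 reports the supported XCR0 bits split across EAX (low
 * 32 bits) and EDX (high 32 bits); the LO/HI property pair simply
 * recombines them into a 64-bit mask. */
static uint64_t supported_xcr0(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}
```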

9a3266bf 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()

Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() instead of open coding a
*very* rough equivalent. Default to a maximum virtual address width of
48 bits instead of 64 bits to better match real x86 CPUs (and Intel and
AMD architectures).

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
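
The canonicality rule the message refers to can be sketched as a
sign-extension check (illustrative code, assuming a configurable
virtual-address width with 48 bits as the default):

```c
#include <stdbool.h>
#include <stdint.h>

/* An address is canonical iff bits [63:width-1] are all identical,
 * i.e. the value is unchanged by sign-extension from bit width-1. */
static bool is_canonical(uint64_t vaddr, unsigned int va_width)
{
	int64_t sext = (int64_t)(vaddr << (64 - va_width)) >> (64 - va_width);

	return (uint64_t)sext == vaddr;
}
```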

77ea6ad1 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Add X86_PROPERTY_* framework to retrieve CPUID values

Introduce X86_PROPERTY_* to allow retrieving values/properties from CPUID
leafs, e.g. MAXPHYADDR from CPUID.0x80000008. Use the same core code as
X86_FEATURE_*, the primary difference is that properties are multi-bit
values, whereas features enumerate a single bit.

Add this_cpu_has_p() to allow querying whether or not a property exists
based on the maximum leaf associated with the property, e.g. MAXPHYADDR
doesn't exist if the max leaf for 0x8000_xxxx is less than 0x8000_0008.

Use the new property infrastructure in cpuid_maxphyaddr() to prove that
the code works as intended. Future patches will convert additional code.

Note, the code, nomenclature, changelog, etc. are all stolen from KVM
selftests.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
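
A rough sketch of the property encoding described above, assuming a
structure with leaf/register/bit-range fields (the names are
illustrative, not the actual macros):

```c
#include <stdint.h>

/* Unlike a single-bit X86_FEATURE_*, a property names a multi-bit
 * field within one CPUID output register. */
struct x86_cpu_property {
	uint32_t function;	/* CPUID leaf, e.g. 0x80000008 */
	uint8_t reg;		/* output register: 0=EAX .. 3=EDX */
	uint8_t lo_bit, hi_bit;	/* inclusive bit range of the value */
};

/* Extract the property's value from the (already read) CPUID register. */
static uint32_t property_value(uint32_t cpuid_reg, struct x86_cpu_property p)
{
	unsigned int width = p.hi_bit - p.lo_bit + 1;
	uint32_t mask = width == 32 ? ~0u : (1u << width) - 1;

	return (cpuid_reg >> p.lo_bit) & mask;
}
```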

361f623c 10-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Encode X86_FEATURE_* definitions using a structure

Encode X86_FEATURE_* macros using a new "struct x86_cpu_feature" instead
of manually packing the values into a u64. Using a structure eliminates
open code shifts and masks, and is largely self-documenting.

Opportunistically replace single tabs with single spaces after #define
for relevant code; the existing code uses a mix of both, and a single
space is far more common.

Note, the code and naming scheme are stolen from KVM selftests.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250610195415.115404-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

9fb56fa1 05-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Expand the suite of bitops to cover all set/clear operations

Provide atomic and non-atomic APIs for clearing and setting bits, along
with "test" versions to return the original value. Don't bother with
"change" APIs, as they are highly unlikely to be needed.

Opportunistically move the existing definitions to bitops.h so that common
code can access the helpers.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250605192226.532654-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
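
The non-atomic flavors described above might look roughly like this
(illustrative sketch, following the usual "__" kernel naming
convention; the atomic variants would use LOCK-prefixed instructions
instead):

```c
#include <stdbool.h>

#define BITS_PER_LONG	(8 * sizeof(long))

/* Non-atomic set: plain read-modify-write of the containing word. */
static inline void __set_bit(unsigned long nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1ul << (nr % BITS_PER_LONG);
}

/* Non-atomic test-and-clear: returns the original bit value. */
static inline bool __test_and_clear_bit(unsigned long nr, unsigned long *addr)
{
	unsigned long *word = &addr[nr / BITS_PER_LONG];
	unsigned long mask = 1ul << (nr % BITS_PER_LONG);
	bool old = *word & mask;

	*word &= ~mask;
	return old;
}
```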

486a097c 04-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Cache availability of forced emulation during setup_idt()

Cache whether or not forced emulation is available during setup_idt()
so that tests can force emulation (or not) in contexts where taking an
exception of any kind will fail.

Link: https://lore.kernel.org/r/20250604183623.283300-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

63f9b987 04-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Load IDT on BSP as part of setup_idt()

Load the IDT on the BSP as part of setup_idt(), to guarantee that the IDT
is loaded when setup_idt() runs (currently, the EFI boot path loads the
IDT _after_ setup_idt()). This will allow probing for forced emulation,
which requires being able to handle a #UD, during setup_idt().

Link: https://lore.kernel.org/r/20250604183623.283300-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3fcc8517 04-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Drop protection against setup_idt() being called multiple times

Now that setup_idt() is called exactly once for 32-bit, 64-bit, and EFI,
drop the "do once" protection.

Long, long ago, setup_idt() was called by individual tests, and so the
"do once" protection made a lot more sense. Now that (most) core setup
has been moved to the BSP's boot path, playing nice with calling
setup_idt() multiple times doesn't make any sense.

Link: https://lore.kernel.org/r/20250604183623.283300-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

58888707 04-Jun-2025 Sean Christopherson <seanjc@google.com>

x86: Call setup_idt() from start{32,64}(), not from smp_init()

Call setup_idt() from the (non-EFI) 32-bit and 64-bit BSP boot paths so
that setup_idt() is called exactly once for all flavors. To be able to
handle #VC, EFI calls setup_idt() early on, before smp_init().

This will allow moving the call to load_idt() into setup_idt(), without
creating weirdness, which in turn will allow taking faults in setup_idt(),
e.g. to probe for forced emulation support.

Link: https://lore.kernel.org/r/20250604183623.283300-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

d427851a 04-Mar-2025 Sean Christopherson <seanjc@google.com>

x86: apic: Move helpers for querying APIC state to library code

Expose the helpers to query if an APIC is enabled, and in xAPIC vs. x2APIC
mode, to library code so that the helpers can be used by all tests.

No functional change intended.

Link: https://lore.kernel.org/r/20250304211223.124321-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
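
For context, the queries boil down to testing two architectural bits in
the IA32_APIC_BASE MSR: bit 11 (global enable) and bit 10 (x2APIC
mode). A sketch, with illustrative helper names:

```c
#include <stdbool.h>
#include <stdint.h>

#define APIC_BASE_ENABLE	(1ull << 11)	/* xAPIC global enable */
#define APIC_BASE_EXTD		(1ull << 10)	/* x2APIC mode */

static bool apic_enabled(uint64_t apic_base)
{
	return apic_base & APIC_BASE_ENABLE;
}

/* x2APIC mode requires both the enable bit and the EXTD bit. */
static bool x2apic_enabled(uint64_t apic_base)
{
	return apic_enabled(apic_base) && (apic_base & APIC_BASE_EXTD);
}
```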

36fb9e84 21-Feb-2025 Sean Christopherson <seanjc@google.com>

x86: Include libcflat.h in atomic.h for u64 typedef

Include libcflat.h in x86's atomic.h to pick up the u64 typedef, which is
used to define atomic64_t. The missing include results in build errors if
a test includes atomic.h without (or before) libcflat.h.

lib/x86/atomic.h:162:1: error: unknown type name ‘u64’
162 | u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new);

Link: https://lore.kernel.org/r/20250221204148.2171418-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

0164d759 01-Jul-2024 Binbin Wu <binbin.wu@linux.intel.com>

x86: Add test cases for LAM_{U48,U57}

This unit test covers:
1. CR3 LAM bits toggles.
2. Memory/MMIO access with user mode address containing LAM metadata.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240701073010.91417-5-binbin.wu@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

14520f8e 01-Jul-2024 Robert Hoo <robert.hu@linux.intel.com>

x86: Add test case for LAM_SUP

This unit test covers:
1. CR4.LAM_SUP toggles.
2. Memory & MMIO access with supervisor mode address with LAM metadata.
3. INVLPG memory operand doesn't contain LAM metadata; if the address
is in non-canonical form, the INVLPG is the same as a NOP (no #GP).
4. INVPCID memory operand (descriptor pointer) can contain LAM metadata,
but the address in the descriptor must be canonical.

In x86/unittests.cfg, add 2 test cases/guest conf, with and without LAM.

LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368,
Chapter LINEAR ADDRESS MASKING (LAM)

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240701073010.91417-4-binbin.wu@linux.intel.com
[sean: s/set/get for the helper, smush tests, call it "lam", use "-cpu max"]
Signed-off-by: Sean Christopherson <seanjc@google.com>

0a6b8b7d 01-Jul-2024 Binbin Wu <binbin.wu@linux.intel.com>

x86: Allow setting of CR3 LAM bits if LAM supported

If LINEAR ADDRESS MASKING (LAM) is supported, VM entry allows CR3.LAM_U48
(bit 62) and CR3.LAM_U57 (bit 61) to be set in CR3 field.

Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57
on vmlaunch tests when LAM is supported.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240701073010.91417-3-binbin.wu@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
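
The CR3 bit positions involved are architectural (LAM_U57 is bit 61,
LAM_U48 is bit 62); the validity helper below is only an illustrative
sketch of the changed expectation, not the VMX test code itself:

```c
#include <stdbool.h>
#include <stdint.h>

#define CR3_LAM_U57	(1ull << 61)
#define CR3_LAM_U48	(1ull << 62)

/* VM entry accepts the LAM bits in CR3 iff LAM is supported; with LAM
 * unsupported, setting either bit is expected to fail. */
static bool cr3_lam_bits_allowed(uint64_t cr3, bool lam_supported)
{
	return lam_supported || !(cr3 & (CR3_LAM_U48 | CR3_LAM_U57));
}
```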

8d1acfe4 15-Feb-2025 Xiong Zhang <xiong.y.zhang@intel.com>

x86: pmu: Remove duplicate code in pmu_init()

The pmu_init() helper contains two identical blocks of code; remove the
duplicate code.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Link: https://lore.kernel.org/r/20250215013636.1214612-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
