History log of /linux/arch/x86/kvm/x86.c (Results 226 – 250 of 8691)
Revision  Date  Author  Comments
# c6141ba0 05-Mar-2025 Mark Brown <broonie@kernel.org>

ASoC: Merge up fixes

Merge branch 'for-6.14' of
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into
asoc-6.15 to avoid a bunch of add/add conflicts.


# 5fac6c27 04-Mar-2025 Mark Brown <broonie@kernel.org>

Add STM32MP25 SPI NOR support

Merge series from patrice.chotard@foss.st.com:

This series adds SPI NOR support for STM32MP25 SoCs from STMicroelectronics.

On the STM32MP25 SoC family, an Octo Memory Manager block manages the muxing,
the memory area split, the chip select override and the time constraint
between its 2 Octo SPI children.

Due to these dependencies, this series adds support for:
- Octo Memory Manager driver (not applied for SPI).
- Octo SPI driver.
- yaml schema for Octo Memory Manager and Octo SPI drivers.

The device tree files add Octo Memory Manager and its 2 associated Octo
SPI children in stm32mp251.dtsi and add SPI NOR support on the stm32mp257f-ev1
board.



# cfdaa618 04-Mar-2025 Ingo Molnar <mingo@kernel.org>

Merge branch 'x86/cpu' into x86/asm, to pick up dependent commits

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 1b4c36f9 04-Mar-2025 Ingo Molnar <mingo@kernel.org>

Merge branch 'x86/urgent' into x86/cpu, to pick up dependent commits

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 1fff9f87 03-Mar-2025 Ingo Molnar <mingo@kernel.org>

Merge tag 'v6.14-rc5' into x86/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# cc76847b 03-Mar-2025 Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

Merge tag 'v6.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into gpio/for-next

Linux 6.14-rc5


# ef2f7986 01-Mar-2025 Ingo Molnar <mingo@kernel.org>

Merge branch 'perf/urgent' into perf/core, to pick up dependent patches and fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 209cd6f2 01-Mar-2025 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
"ARM:

- Fix TCR_EL2 configuration to not use the ASID in TTBR1_EL2 and not
mess-up T1SZ/PS by using the HCR_EL2.E2H==0 layout.

- Bring back the VMID allocation to the vcpu_load phase, ensuring
that we only setup VTTBR_EL2 once on VHE. This cures an ugly race
that would lead to running with an unallocated VMID.

RISC-V:

- Fix hart status check in SBI HSM extension

- Fix hart suspend_type usage in SBI HSM extension

- Fix error returned by SBI IPI and TIME extensions for unsupported
function IDs

- Fix suspend_type usage in SBI SUSP extension

- Remove unnecessary vcpu kick after injecting interrupt via IMSIC
guest file

x86:

- Fix an nVMX bug where KVM fails to detect that, after nested
VM-Exit, L1 has a pending IRQ (or NMI).

- To avoid freeing the PIC while vCPUs are still around, which would
cause a NULL pointer access with the previous patch, destroy vCPUs
before any VM-level destruction.

- Handle failures to create vhost_tasks"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
kvm: retry nx_huge_page_recovery_thread creation
vhost: return task creation error instead of NULL
KVM: nVMX: Process events on nested VM-Exit if injectable IRQ or NMI is pending
KVM: x86: Free vCPUs before freeing VM state
riscv: KVM: Remove unnecessary vcpu kick
KVM: arm64: Ensure a VMID is allocated before programming VTTBR_EL2
KVM: arm64: Fix tcr_el2 initialisation in hVHE mode
riscv: KVM: Fix SBI sleep_type use
riscv: KVM: Fix SBI TIME error generation
riscv: KVM: Fix SBI IPI error generation
riscv: KVM: Fix hart suspend_type use
riscv: KVM: Fix hart suspend status check



# 2a289aed 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Always set mp_state to RUNNABLE on wakeup from HLT

When emulating HLT and a wake event is already pending, explicitly mark
the vCPU RUNNABLE (via kvm_set_mp_state()) instead of assuming the vCPU is
already in the appropriate state. Barring a KVM bug, it should be
impossible for the vCPU to be in a non-RUNNABLE state, but there is no
advantage to relying on that to hold true, and ensuring the vCPU is made
RUNNABLE avoids non-deterministic behavior with respect to pv_unhalted.

E.g. if the vCPU is not already RUNNABLE, then depending on when
pv_unhalted is set, KVM could either leave the vCPU in the non-RUNNABLE
state (set before __kvm_emulate_halt()), or transition the vCPU to HALTED
and then RUNNABLE (pv_unhalted set after the kvm_vcpu_has_events() check).

Link: https://lore.kernel.org/r/20250224174156.2362059-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
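As a rough sketch of the intended flow (not the literal diff; the surrounding
logic in __kvm_emulate_halt() and the exact return value are approximated from
the description above):

	/* In __kvm_emulate_halt(), when a wake event is already pending: */
	if (kvm_vcpu_has_events(vcpu)) {
		vcpu->arch.pv.pv_unhalted = false;
		/* Explicitly mark the vCPU RUNNABLE instead of assuming it is. */
		kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
		return 1;	/* resume the guest without halting */
	}
	kvm_set_mp_state(vcpu, state);	/* HALTED or AP_RESET_HOLD */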



# 189ecdb3 27-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs

Snapshot the host's DEBUGCTL after disabling IRQs, as perf can toggle
debugctl bits from IRQ context, e.g. when enabling/disabling events via
smp_call_function_single(). Taking the snapshot (long) before IRQs are
disabled could result in KVM effectively clobbering DEBUGCTL due to using
a stale snapshot.

Cc: stable@vger.kernel.org
Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250227222411.3490595-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
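Sketched out, the fix amounts to taking the snapshot inside the IRQs-disabled
window of the run loop, using the existing get_debugctlmsr() helper and the
host_debugctl snapshot field from the next entry (exact placement approximated):

	/* In vcpu_enter_guest(): */
	local_irq_disable();

	/*
	 * Snapshot DEBUGCTL only after IRQs are disabled; perf can toggle
	 * debugctl bits from IRQ context, e.g. via smp_call_function_single(),
	 * so an earlier snapshot could be stale and would be "restored" over
	 * perf's changes on VM-Exit.
	 */
	vcpu->arch.host_debugctl = get_debugctlmsr();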



# fb71c795 27-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Snapshot the host's DEBUGCTL in common x86

Move KVM's snapshot of DEBUGCTL to kvm_vcpu_arch and take the snapshot in
common x86, so that SVM can also use the snapshot.

Opportunistically change the field to a u64. While bits 63:32 are reserved
on AMD, not mentioned at all in Intel's SDM, and managed as an "unsigned
long" by the kernel, DEBUGCTL is an MSR and therefore a 64-bit value.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: stable@vger.kernel.org
Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250227222411.3490595-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
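The resulting snapshot field looks roughly like this (a sketch, not the exact
declaration or placement):

	struct kvm_vcpu_arch {
		/* ... */

		/*
		 * Host DEBUGCTL snapshot, taken in common x86 code so that both
		 * VMX and SVM can restore it.  A u64 because DEBUGCTL is an MSR
		 * and thus architecturally 64 bits, even though bits 63:32 are
		 * reserved on AMD and unused by Intel.
		 */
		u64 host_debugctl;
	};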



# 0410c612 28-Feb-2025 Lucas De Marchi <lucas.demarchi@intel.com>

Merge drm/drm-next into drm-xe-next

Sync to fix conflicts between drm-xe-next and drm-intel-next.

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>


# c432bdcf 28-Feb-2025 Ulf Hansson <ulf.hansson@linaro.org>

pmdomain: Merge tag 'v6.14-rc4' from Linus into next

Linux 6.14-rc4


# 8918e180 28-Feb-2025 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync to fix conflicts between drm-xe-next and drm-intel-next.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 30667e55 27-Feb-2025 Ingo Molnar <mingo@kernel.org>

Merge branch 'x86/mm' into x86/cpu, to avoid conflicts

We are going to apply a new series that conflicts with pending
work in x86/mm, so merge in x86/mm to avoid it, and also to
refresh the x86/cpu branch with fixes.

Signed-off-by: Ingo Molnar <mingo@kernel.org>



# 71628584 26-Feb-2025 Christian Brauner <brauner@kernel.org>

Merge patch series "prep patches for my mkdir series"

NeilBrown <neilb@suse.de> says:

These two patches are cleanups and dependencies for my mkdir changes and
subsequent directory locking changes.

* patches from https://lore.kernel.org/r/20250226062135.2043651-1-neilb@suse.de: (2 commits)
nfsd: drop fh_update() from S_IFDIR branch of nfsd_create_locked()
nfs/vfs: discard d_exact_alias()

Link: https://lore.kernel.org/r/20250226062135.2043651-1-neilb@suse.de
Signed-off-by: Christian Brauner <brauner@kernel.org>



# b2aba529 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: Drop kvm_arch_sync_events() now that all implementations are nops

Remove kvm_arch_sync_events() now that x86 no longer uses it (no other
arch has ever used it).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-ID: <20250224235542.2562848-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>



# fd217326 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Fold guts of kvm_arch_sync_events() into kvm_arch_pre_destroy_vm()

Fold the guts of kvm_arch_sync_events() into kvm_arch_pre_destroy_vm(), as
the kvmclock and PIT background workers only need to be stopped before
destroying vCPUs (to avoid accessing vCPUs as they are being freed); it's
a-ok for them to be running while the VM is visible on the global vm_list.

Note, the PIT also needs to be stopped before IRQ routing is freed
(because KVM's IRQ routing is garbage and assumes there is always non-NULL
routing).

Opportunistically add comments to explain why KVM stops/frees certain
assets early.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
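After the fold, the pre-destroy hook ends up looking roughly like this (a
sketch; the kvmclock worker and PIT helper names are assumptions based on the
existing kvmclock/PIT code, and the exact contents may differ):

	void kvm_arch_pre_destroy_vm(struct kvm *kvm)
	{
		/*
		 * Stop the kvmclock workers and the PIT before any vCPU is
		 * freed so they can't dereference a dying vCPU.  The PIT must
		 * also be stopped before IRQ routing is freed, as KVM's IRQ
		 * routing code assumes routing is always non-NULL.
		 */
		cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
		cancel_delayed_work_sync(&kvm->arch.kvmclock_sync_work);
		kvm_free_pit(kvm);

		kvm_mmu_pre_destroy_vm(kvm);
	}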



# e4472125 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Unload MMUs during vCPU destruction, not before

When destroying a VM, unload a vCPU's MMUs as part of normal vCPU freeing,
instead of as a separate preparatory action. Unloading MMUs ahead of time
is a holdover from commit 7b53aa565084 ("KVM: Fix vcpu freeing for guest
smp"), which "fixed" a rather egregious flaw where KVM would attempt to
free *all* MMU pages when destroying a vCPU.

At the time, KVM would spin on all MMU pages in a VM when freeing a single
vCPU, and so would hang due to the way KVM pins and zaps root pages
(roots are invalidated but not freed if they are pinned by a vCPU).

static void free_mmu_pages(struct kvm_vcpu *vcpu)
{
	struct kvm_mmu_page *page;

	while (!list_empty(&vcpu->kvm->active_mmu_pages)) {
		page = container_of(vcpu->kvm->active_mmu_pages.next,
				    struct kvm_mmu_page, link);
		kvm_mmu_zap_page(vcpu->kvm, page);
	}
	free_page((unsigned long)vcpu->mmu.pae_root);
}

Now that KVM doesn't try to free all MMU pages when destroying a single
vCPU, there's no need to unpin roots prior to destroying a vCPU.

Note! While KVM mostly destroys all MMUs before calling
kvm_arch_destroy_vm() (see commit f00be0cae4e6 ("KVM: MMU: do not free
active mmu pages in free_mmu_pages()")), unpinning MMU roots during vCPU
destruction will unfortunately trigger remote TLB flushes, i.e. will try
to send requests to all vCPUs.

Happily, thanks to commit 27592ae8dbe4 ("KVM: Move wiping of the kvm->vcpus
array to common code"), that's a non-issue as freed vCPUs are naturally
skipped by xa_for_each_range(), i.e. by kvm_for_each_vcpu(). Prior to that
commit, KVM x86 rather stupidly freed vCPUs one-by-one, and _then_
nullified them, one-by-one. I.e. triggering a VM-wide request would hit a
use-after-free.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
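Conceptually, the separate unload pass in kvm_arch_destroy_vm() goes away and
root teardown happens as part of per-vCPU destruction (a sketch, not the
literal diff):

	void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
	{
		/*
		 * Dropping this vCPU's MMU roots here (instead of in a prior
		 * pass over all vCPUs) can trigger remote TLB flush requests,
		 * but vCPUs that have already been freed are skipped by
		 * kvm_for_each_vcpu(), so the requests are harmless.
		 */
		kvm_mmu_destroy(vcpu);

		/* ... remaining per-vCPU teardown ... */
	}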



# ed09b50b 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Don't load/put vCPU when unloading its MMU during teardown

Don't load (and then put) a vCPU when unloading its MMU during VM
destruction, as nothing in kvm_mmu_unload() accesses vCPU state beyond the
root page/address of each MMU, i.e. can't possibly need to run with the
vCPU loaded.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
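In other words, the load/put bracket around the unload can simply be dropped
(a sketch, assuming the existing kvm_unload_vcpu_mmu() teardown helper):

	static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
	{
		/*
		 * No vcpu_load()/vcpu_put() needed: kvm_mmu_unload() only
		 * touches the root page/address of each MMU, never state that
		 * requires the vCPU to be loaded.
		 */
		kvm_mmu_unload(vcpu);
	}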



# ac3144f9 26-Feb-2025 Ingo Molnar <mingo@kernel.org>

Merge tag 'v6.14-rc4' into x86/fpu, to pick up fixes and refresh the branch

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 17bcd714 24-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Free vCPUs before freeing VM state

Free vCPUs before freeing any VM state, as both SVM and VMX may access
VM state when "freeing" a vCPU that is currently "in" L2, i.e. that needs
to be kicked out of nested guest mode.

Commit 6fcee03df6a1 ("KVM: x86: avoid loading a vCPU after .vm_destroy was
called") partially fixed the issue, but for unknown reasons only moved the
MMU unloading before VM destruction. Complete the change, and free all
vCPU state prior to destroying VM state, as nVMX accesses even more state
than nSVM.

In addition to the AVIC, KVM can hit a use-after-free on MSR filters:

kvm_msr_allowed+0x4c/0xd0
__kvm_set_msr+0x12d/0x1e0
kvm_set_msr+0x19/0x40
load_vmcs12_host_state+0x2d8/0x6e0 [kvm_intel]
nested_vmx_vmexit+0x715/0xbd0 [kvm_intel]
nested_vmx_free_vcpu+0x33/0x50 [kvm_intel]
vmx_free_vcpu+0x54/0xc0 [kvm_intel]
kvm_arch_vcpu_destroy+0x28/0xf0
kvm_vcpu_destroy+0x12/0x50
kvm_arch_destroy_vm+0x12c/0x1c0
kvm_put_kvm+0x263/0x3c0
kvm_vm_release+0x21/0x30

and an upcoming fix to process injectable interrupts on nested VM-Exit
will access the PIC:

BUG: kernel NULL pointer dereference, address: 0000000000000090
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
CPU: 23 UID: 1000 PID: 2658 Comm: kvm-nx-lpage-re
RIP: 0010:kvm_cpu_has_extint+0x2f/0x60 [kvm]
Call Trace:
<TASK>
kvm_cpu_has_injectable_intr+0xe/0x60 [kvm]
nested_vmx_vmexit+0x2d7/0xdf0 [kvm_intel]
nested_vmx_free_vcpu+0x40/0x50 [kvm_intel]
vmx_vcpu_free+0x2d/0x80 [kvm_intel]
kvm_arch_vcpu_destroy+0x2d/0x130 [kvm]
kvm_destroy_vcpus+0x8a/0x100 [kvm]
kvm_arch_destroy_vm+0xa7/0x1d0 [kvm]
kvm_destroy_vm+0x172/0x300 [kvm]
kvm_vcpu_release+0x31/0x50 [kvm]

Inarguably, both nSVM and nVMX need to be fixed, but punt on those
cleanups for the moment. Conceptually, vCPUs should be freed before VM
state. Assets like the I/O APIC and PIC _must_ be allocated before vCPUs
are created, so it stands to reason that they must be freed _after_ vCPUs
are destroyed.

Reported-by: Aaron Lewis <aaronlewis@google.com>
Closes: https://lore.kernel.org/all/20240703175618.2304869-2-aaronlewis@google.com
Cc: Jim Mattson <jmattson@google.com>
Cc: Yan Zhao <yan.y.zhao@intel.com>
Cc: Rick P Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kai Huang <kai.huang@intel.com>
Cc: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
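The resulting teardown order in kvm_arch_destroy_vm() is roughly as follows
(a sketch; unrelated teardown calls are elided and the exact call sites may
differ):

	void kvm_arch_destroy_vm(struct kvm *kvm)
	{
		/*
		 * Free (and kick out of nested guest mode) all vCPUs first:
		 * nSVM may touch the AVIC, and nVMX may touch the MSR filters
		 * and the PIC, while "freeing" a vCPU that is still in L2.
		 */
		kvm_destroy_vcpus(kvm);

		/* Only now is it safe to free VM-level assets the vCPUs use. */
		kvm_pic_destroy(kvm);
		kvm_ioapic_destroy(kvm);

		/* ... remaining VM-level teardown ... */
	}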



# 0b119045 26-Feb-2025 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge tag 'v6.14-rc4' into next

Sync up with the mainline.


# b50cb2b1 01-Oct-2024 Sean Christopherson <seanjc@google.com>

KVM: x86: Use a dedicated flow for queueing re-injected exceptions

Open code the filling of vcpu->arch.exception in kvm_requeue_exception()
instead of bouncing through kvm_multiple_exception(), as re-injection
doesn't actually share that much code with "normal" injection, e.g. the
VM-Exit interception check, payload delivery, and nested exception code
is all bypassed as those flows only apply during initial injection.

When FRED comes along, the special casing will only get worse, as FRED
explicitly tracks nested exceptions and essentially delivers the payload
on the stack frame, i.e. re-injection will need more inputs, and normal
injection will have yet more code that needs to be bypassed when KVM is
re-injecting an exception.

No functional change intended.

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Link: https://lore.kernel.org/r/20241001050110.3643764-2-xin@zytor.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
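The end result is a requeue path that writes vcpu->arch.exception directly,
along these lines (a sketch; the exact signature of kvm_requeue_exception()
and the field handling are approximated):

	void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int vector,
				   bool has_error_code, u32 error_code)
	{
		/*
		 * Re-injection bypasses the VM-Exit interception check, payload
		 * delivery and nested-exception handling that apply only to the
		 * initial injection of an exception.
		 */
		vcpu->arch.exception.injected = true;
		vcpu->arch.exception.pending = false;
		vcpu->arch.exception.vector = vector;
		vcpu->arch.exception.has_error_code = has_error_code;
		vcpu->arch.exception.error_code = error_code;
		vcpu->arch.exception.has_payload = false;

		kvm_make_request(KVM_REQ_EVENT, vcpu);
	}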



# 4fa0efb4 15-Feb-2025 Sean Christopherson <seanjc@google.com>

KVM: x86: Rename and invert async #PF's send_user_only flag to send_always

Rename send_user_only to avoid "user", because KVM's ABI is to not inject
page faults into CPL0, whereas "user" in x86 is specifically CPL3. Invert
the polarity to keep the naming simple and unambiguous. E.g. while KVM
often refers to CPL0 as "kernel", that terminology isn't ubiquitous, and
"send_kernel" could be misconstrued as "send only to kernel".

Link: https://lore.kernel.org/r/20250215010609.1199982-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
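Schematically, the rename and polarity flip look like this (a sketch using the
existing KVM_ASYNC_PF_SEND_ALWAYS ABI flag; the surrounding checks are
approximated):

	/* When the guest writes MSR_KVM_ASYNC_PF_EN: */
	vcpu->arch.apf.send_always = (data & KVM_ASYNC_PF_SEND_ALWAYS);

	/* When deciding whether an async #PF may be delivered: */
	if (!kvm_x86_call(get_cpl)(vcpu) && !vcpu->arch.apf.send_always)
		return false;	/* guest is at CPL0 and didn't opt in */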


