0cc3a351 | 22-Feb-2025 | Sean Christopherson <seanjc@google.com>
lib: Use __ASSEMBLER__ instead of __ASSEMBLY__
Convert all non-x86 #ifdefs from __ASSEMBLY__ to __ASSEMBLER__, and remove all manual __ASSEMBLY__ #defines. __ASSEMBLY__ was inherited blindly from the Linux kernel and must be defined manually, e.g. through build rules or with the aforementioned explicit #defines in assembly code.
__ASSEMBLER__, on the other hand, is automatically defined by the compiler when preprocessing assembly, i.e. it doesn't require manual #defines for the code to function correctly.
Ignore x86, as x86 doesn't actually rely on __ASSEMBLY__ at the moment, and is undergoing a parallel cleanup.
Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Andrew Jones <andrew.jones@linux.dev> Message-ID: <20250222014526.2302653-1-seanjc@google.com> [thuth: Fix three more occurrences in libfdt.h and sbi-tests.h] Signed-off-by: Thomas Huth <thuth@redhat.com>

201b9e8b | 03-Jul-2024 | Andrew Jones <andrew.jones@linux.dev>
Merge branch 'arm/queue' into 'master'
arm/arm64: LPA2 support and fpu/sve s/r test
See merge request kvm-unit-tests/kvm-unit-tests!61

cddb18bc | 12-Apr-2024 | Alexandru Elisei <alexandru.elisei@arm.com>
arm64: Expand SMCCC arguments and return values
PSCI uses the SMC Calling Convention (SMCCC) to communicate with higher-level software. PSCI uses at most 4 arguments and expects only one return value. However, SMCCC has provisions for more arguments (up to 17, depending on the SMCCC version) and up to 10 distinct return values.
We are going to be adding tests that make use of it, so add support for the extended number of arguments and return values.
Also rename the SMCCC functions to generic, non-PSCI names, so they can be used for Realm services.
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Co-developed-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> [Fixed EFI compile error.] Signed-off-by: Andrew Jones <andrew.jones@linux.dev>

e526bc78 | 01-Jul-2023 | Andrew Jones <andrew.jones@linux.dev>
Merge branch 'arm/queue' into 'master'
arm/arm64: EFI support, arm64 backtrace support, PMU test improvements, and more
See merge request kvm-unit-tests/kvm-unit-tests!43

23e17626 | 30-May-2023 | Nikos Nikoleris <nikos.nikoleris@arm.com>
arm64: Add a setup sequence for systems that boot through EFI
This change implements an alternative setup sequence for the system when we are booting through EFI. The memory map is discovered through EFI boot services and devices through ACPI.
This change is based on a change initially proposed by Andrew Jones <drjones@redhat.com>.
Signed-off-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> [Changed __ALIGN to ALIGN as pointed out by Nadav Amit.] Signed-off-by: Andrew Jones <andrew.jones@linux.dev>

07da5bc1 | 15-Mar-2022 | Andrew Jones <drjones@redhat.com>
Merge branch 'arm/queue' into 'master'
arm/queue: configure and run script improvements
See merge request kvm-unit-tests/kvm-unit-tests!27

7a84b7b2 | 11-Mar-2022 | Thomas Huth <thuth@redhat.com>
arm: Fix typos
Correct typos which were discovered with the "codespell" utility.
Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

74ff0e96 | 18-May-2021 | Paolo Bonzini <bonzini@gnu.org>
Merge branch 'arm/queue' into 'master'
arm/arm64: target-efi prep
This series mostly prepares kvm-unit-tests/arm for targeting EFI platforms. The actual EFI support will come in another series, but these patches are good for removing assumptions from our memory maps and about our PSCI conduit, even if we never merge EFI support.
See merge request kvm-unit-tests/kvm-unit-tests!8

bd5bd157 | 06-Apr-2021 | Andrew Jones <drjones@redhat.com>
arm/arm64: psci: Don't assume method is hvc
The method can be smc in addition to hvc, and it will be when running on bare metal. Additionally, we move the invocations to assembly so we don't have to rely on compiler assumptions. We also fix the prototype of psci_invoke: function_id should be an unsigned int, not an unsigned long.
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

5a2a7371 | 06-Apr-2021 | Andrew Jones <drjones@redhat.com>
arm/arm64: setup: Consolidate memory layout assumptions
Keep as much memory layout assumptions as possible in init::start and a single setup function. This prepares us for calling setup() from different start functions which have been linked with different linker scripts. To do this, stacktop is only referenced from init::start, making freemem_start a parameter to setup(). We also split mem_init() into three parts, one that populates the mem regions per the DT, one that populates the mem regions per assumptions, and one that does the mem init. The concept of a primary region is dropped, but we add a sanity check for the absence of memory holes, because we don't know how to deal with them yet.
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

c0edb3d2 | 31-Mar-2021 | Andrew Jones <drjones@redhat.com>
arm/arm64: Move setup_vm into setup
Consolidate our setup calls to reduce the amount we need to do from init::start. Also remove a couple of pointless comments from setup().
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

2da0f98c | 31-Mar-2021 | Andrew Jones <drjones@redhat.com>
arm/arm64: Reorganize cstart assembler
Move secondary_entry helper functions out of .init and into .text, since secondary_entry isn't run at "init" time. Actually, anything that is used after init time should be in .text, as we may not include .init in some build configurations.
Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

f583d924 | 30-Mar-2021 | Paolo Bonzini <bonzini@gnu.org>
Merge branch 'arm/queue' into 'master'
arm/arm64: Fixes, improvements, and prep for target-efi
See merge request kvm-unit-tests/kvm-unit-tests!6

e481f6be | 22-Mar-2021 | Alexandru Elisei <alexandru.elisei@arm.com>
arm/arm64: Remove unnecessary ISB when doing dcache maintenance
The dcache_by_line_op macro executes a DSB to complete the cache maintenance operations. According to ARM DDI 0487G.a, page B2-150:
"In addition, no instruction that appears in program order after the DSB instruction can alter any state of the system or perform any part of its functionality until the DSB completes other than:
- Being fetched from memory and decoded.
- Reading the general-purpose, SIMD and floating-point, Special-purpose, or System registers that are directly or indirectly read without causing side-effects."
Similar definition for ARM in ARM DDI 0406C.d, page A3-150:
"In addition, no instruction that appears in program order after the DSB instruction can execute until the DSB completes."
This means that we don't need the ISB to prevent reordering of the cache maintenance instructions.
We are also not doing icache maintenance, where an ISB would be required for the PE to discard instructions speculated before the invalidation.
In conclusion, the ISB is unnecessary, so remove it.
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

b5f659be | 22-Mar-2021 | Alexandru Elisei <alexandru.elisei@arm.com>
arm/arm64: Remove dcache_line_size global variable
Compute the dcache line size when doing dcache maintenance instead of using a global variable computed in setup(), which allows us to do dcache maintenance at any point in the boot process. This will be useful for running as an EFI app and it also aligns our implementation to that of the Linux kernel. As a result, the dcache_by_line_op assembly has been modified to take a range described by start address and size, instead of start and end addresses.
For consistency, the arm code has been similarly modified.
Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

993c37be | 22-Mar-2021 | Andrew Jones <drjones@redhat.com>
arm/arm64: Zero BSS and stack at startup
So far we've counted on QEMU or kvmtool implicitly zeroing all memory. With our goal of eventually supporting bare-metal targets with target-efi we should explicitly zero any memory we expect to be zeroed ourselves. This obviously includes the BSS, but also the bootcpu's stack, as the bootcpu's thread-info lives in the stack and may get used in early setup to get the cpu index. Note, this means we still assume the bootcpu's cpu index to be zero. That assumption can be removed later.
Cc: Nikos Nikoleris <nikos.nikoleris@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Nikos Nikoleris <nikos.nikoleris@arm.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

410b3bf0 | 31-Jan-2020 | Alexandru Elisei <alexandru.elisei@arm.com>
arm/arm64: Perform dcache clean + invalidate after turning MMU off
When the MMU is off, data accesses are to Device nGnRnE memory on arm64 [1] or to Strongly-Ordered memory on arm [2]. This means that the accesses are non-cacheable.
Perform a dcache clean to PoC so we can read the newer values from the cache after we turn the MMU off, instead of the stale values from memory.
Perform an invalidation so we can access the data written to memory after we turn the MMU back on. This prevents reading back the stale values we cleaned from the cache when we turned the MMU off.
Data caches are PIPT and the VAs are translated using the current translation tables, or an identity mapping (what Arm calls a "flat mapping") when the MMU is off [1, 2]. Do the clean + invalidate when the MMU is off so we don't depend on the current translation tables and we can make sure that the operation applies to the entire physical memory.
The patch was tested by hacking arm/selftest.c:
+#include <alloc_page.h>
+#include <asm/mmu.h>
 int main(int argc, char **argv)
 {
+	int *x = alloc_page();
+
 	report_prefix_push("selftest");

+	*x = 0x42;
+	mmu_disable();
+	report(*x == 0x42, "read back value written with MMU on");
+	*x = 0x50;
+	mmu_enable(current_thread_info()->pgtable);
+	report(*x == 0x50, "read back value written with MMU off");
+
 	if (argc < 2)
 		report_abort("no test specified");
Without the fix, the first report fails, and the test usually hangs before the second report. This is because mmu_enable pushes the LR register on the stack when the MMU is off, which means that the value will be written to memory. However, after asm_mmu_enable, the MMU is enabled, and we read it back from the dcache, thus getting garbage.
With the fix, the two reports pass.
[1] ARM DDI 0487E.a, section D5.2.9
[2] ARM DDI 0406C.d, section B3.2.1
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

3c13c642 | 08-Jan-2020 | Paolo Bonzini <pbonzini@redhat.com>
Merge branch 'arm/queue' of https://github.com/rhdrjones/kvm-unit-tests into HEAD

f567e5ea | 31-Dec-2019 | Alexandru Elisei <alexandru.elisei@arm.com>
arm/arm64: Invalidate TLB before enabling MMU
Let's invalidate the TLB before enabling the MMU, not after, so we don't accidentally use a stale TLB mapping. For arm, we add a TLBIALL operation, which applies only to the PE that executed the instruction [1]. For arm64, we already do that in asm_mmu_enable.
We now find ourselves in a situation where we issue an extra invalidation after asm_mmu_enable returns. Remove this redundant call to tlb_flush_all.
[1] ARM DDI 0406C.d, section B3.10.6
Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

20239feb | 31-Dec-2019 | Alexandru Elisei <alexandru.elisei@arm.com>
lib: arm/arm64: Remove unnecessary dcache maintenance operations
On ARMv7 with multiprocessing extensions (which are mandated by the virtualization extensions [1]), and on ARMv8, translation table walks are coherent [2, 3], which means that no dcache maintenance operations are required when changing the tables. Remove the maintenance operations so that we do only the minimum required to ensure correctness.
Translation table walks are coherent if the memory where the tables themselves reside have the same shareability and cacheability attributes as the translation table walks. For ARMv8, this is already the case, and it is only a matter of removing the cache operations.
However, for ARMv7, translation table walks were being configured as Non-shareable (TTBCR.SH0 = 0b00) and Non-cacheable (TTBCR.{I,O}RGN0 = 0b00). Fix that by marking them as Inner Shareable, Normal memory, Inner and Outer Write-Back Write-Allocate Cacheable.
Because translation table walks are now coherent on arm, replace the TLBIMVAA operation with TLBIMVAAIS in flush_tlb_page, which acts on the Inner Shareable domain instead of being private to the PE.
The functions that update the translation table are called when the MMU is off, or to modify permissions, in the case of the cache test, so break-before-make is not necessary.
[1] ARM DDI 0406C.d, section B1.7
[2] ARM DDI 0406C.d, section B3.3.1
[3] ARM DDI 0487E.a, section D13.2.72
[4] ARM DDI 0487E.a, section K11.5.3
Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

efd13c38 | 28-Nov-2019 | Andrew Jones <drjones@redhat.com>
arm: Enable the VFP
Variable argument macros frequently depend on floating point registers. Indeed we needed to enable the VFP for arm64 since its introduction in order to use printf and the like. Somehow we didn't need to do that for arm32 until recently when compiling with GCC 9.
Tested-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>

62bdc67f | 17-Jan-2018 | Andrew Jones <drjones@redhat.com>
arm/arm64: allow setup_vm to be skipped
Determine whether or not to enable the MMU during setup with an auxinfo flag. This gives unit tests that need to start cpu0 at main() with the MMU off, and no page tables constructed, the option to do so. The physical page allocator is now used as the basis for alloc_ops, allowing both malloc() and page_alloc() to work without a setup_vm() call. The unit test can still call setup_vm() itself later. Secondaries will also start in their entry points with the MMU off. If page tables have already been constructed by another CPU, and are pointed to by e.g. 'pgtable', then the secondary can easily enable the MMU with mmu_enable(pgtable).
Naturally unit tests that start multiple CPUs with the MMU off need to keep track of each CPU's MMU enable status and which set of ops are pointed to by alloc_ops. Also note, spinlocks may not work as expected with the MMU off. IOW, this option gives a unit test plenty of rope to shoot itself with.
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>

9246de4c | 01-Jun-2017 | Andrew Jones <drjones@redhat.com>
arm/arm64: smp: rename secondary_halt to do_idle
Also prepare the newly renamed function to come out of idle and use sev/wfe rather than a busy wait in smp_run().
Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20170601135002.26704-2-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

543ce33c | 25-May-2017 | Andrew Jones <drjones@redhat.com>
lib/arm/smp: introduce smp_run
A common pattern is:
- run a function on all cpus
- signal each cpu's completion with a cpumask
- halt the secondaries when they're complete
- have the primary wait on the cpumask for all to complete
smp_run is a wrapper for that pattern. Also, we were allowing secondaries to go off in the weeds if they returned from their secondary entry function, which can be difficult to debug. A nice side-effect of adding this wrapper is we don't do that anymore, and can even know when a secondary has halted with the new cpu_halted_mask.
Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20170525102849.22754-4-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

4968651e | 13-Jan-2017 | Andrew Jones <drjones@redhat.com>
arm/arm64: enable environ
Give arm/arm64 unit tests access to environment variables. The environment variables are passed to the unit test with '-initrd env-file'.
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>