Revision tags: v6.18-rc3, v6.18-rc2, v6.18-rc1, v6.17, v6.17-rc7, v6.17-rc6, v6.17-rc5, v6.17-rc4

# 13cecc52 | 27-Aug-2025 | Eric Biggers <ebiggers@kernel.org>

lib/crypto: chacha: Consolidate into single module
Consolidate the ChaCha code into a single module (excluding chacha-block-generic.c which remains always built-in for random.c), similar to various other algorithms:
- Each arch now provides a header file lib/crypto/$(SRCARCH)/chacha.h, replacing lib/crypto/$(SRCARCH)/chacha*.c. The header defines chacha_crypt_arch() and hchacha_block_arch(). It is included by lib/crypto/chacha.c, and thus the code gets built into the single libchacha module, with improved inlining in some cases.
- Whether arch-optimized ChaCha is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default.
- Any additional arch-specific translation units for the optimized ChaCha code, such as assembly files, are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.
This removes the last use for the Makefile and Kconfig files in the arm64, mips, powerpc, riscv, and s390 subdirectories of lib/crypto/. So also remove those files and the references to them.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
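
To make the new arrangement concrete, here is a minimal sketch of the shape such a per-arch lib/crypto/$(SRCARCH)/chacha.h can take. It is illustrative only: the static key and the chacha_arch_asm() hook are made-up names, and the generic fallbacks are the existing library functions.

    /* Hypothetical lib/crypto/<arch>/chacha.h, included by lib/crypto/chacha.c. */
    #include <crypto/chacha.h>
    #include <linux/jump_label.h>
    #include <linux/linkage.h>

    /* Enabled from an init hook (omitted here) once CPU features are known. */
    static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_chacha_simd);

    /* Arch-provided assembly routine; the name is illustrative. */
    asmlinkage void chacha_arch_asm(struct chacha_state *state, u8 *dst,
                                    const u8 *src, unsigned int bytes,
                                    int nrounds);

    static void chacha_crypt_arch(struct chacha_state *state, u8 *dst,
                                  const u8 *src, unsigned int bytes, int nrounds)
    {
            if (static_branch_likely(&have_chacha_simd))
                    chacha_arch_asm(state, dst, src, bytes, nrounds);
            else
                    chacha_crypt_generic(state, dst, src, bytes, nrounds);
    }

    static void hchacha_block_arch(const struct chacha_state *state,
                                   u32 out[HCHACHA_OUT_WORDS], int nrounds)
    {
            /* This sketch has no special HChaCha code, so use the generic one. */
            hchacha_block_generic(state, out, nrounds);
    }

Because the header is pulled textually into lib/crypto/chacha.c, the dispatch above can be inlined into the single libchacha module rather than going through exported per-arch symbols.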

# c4b846ff | 27-Aug-2025 | Eric Biggers <ebiggers@kernel.org>

lib/crypto: chacha: Remove unused function chacha_is_arch_optimized()
chacha_is_arch_optimized() is no longer used, so remove it.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

Revision tags: v6.17-rc3, v6.17-rc2, v6.17-rc1, v6.16, v6.16-rc7, v6.16-rc6, v6.16-rc5, v6.16-rc4, v6.16-rc3

# 4a32e5dc | 19-Jun-2025 | Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm: Move arch/arm/lib/crypto/ into lib/crypto/
Move the contents of arch/arm/lib/crypto/ into lib/crypto/arm/.
The new code organization makes a lot more sense for how this code actually works and is developed. In particular, it makes it possible to build each algorithm as a single module, with better inlining and dead code elimination. For a more detailed explanation, see the patchset which did this for the CRC library code: https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org/. Also see the patchset which did this for SHA-512: https://lore.kernel.org/linux-crypto/20250616014019.415791-1-ebiggers@kernel.org/
This is just a preparatory commit, which does the move to get the files into their new location but keeps them building the same way as before. Later commits will make the actual improvements to the way the arch-optimized code is integrated for each algorithm.
Add a gitignore entry for the removed directory arch/arm/lib/crypto/ so that people don't accidentally commit leftover generated files.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Link: https://lore.kernel.org/r/20250619191908.134235-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

Revision tags: v6.16-rc2, v6.16-rc1, v6.15, v6.15-rc7, v6.15-rc6

# bdc2a556 | 05-May-2025 | Eric Biggers <ebiggers@google.com>

crypto: lib/chacha - add array bounds to function prototypes
Add explicit array bounds to the function prototypes for the parameters that didn't already get handled by the conversion to use chacha_state:
- chacha_block_*(): Change 'u8 *out' or 'u8 *stream' to u8 out[CHACHA_BLOCK_SIZE].
- hchacha_block_*(): Change 'u32 *out' or 'u32 *stream' to u32 out[HCHACHA_OUT_WORDS].
- chacha_init(): Change 'const u32 *key' to 'const u32 key[CHACHA_KEY_WORDS]'. Change 'const u8 *iv' to 'const u8 iv[CHACHA_IV_SIZE]'.
No functional changes. This just makes it clear when fixed-size arrays are expected.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
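
Concretely, the change described in the bullets looks roughly like this (prototypes reconstructed from the description above, not copied from the patch):

    /* Before: the expected sizes are only implied by the pointer types. */
    void chacha_block_generic(struct chacha_state *state, u8 *stream, int nrounds);
    void hchacha_block_generic(const struct chacha_state *state, u32 *out, int nrounds);
    void chacha_init(struct chacha_state *state, const u32 *key, const u8 *iv);

    /* After: the fixed-size arrays are spelled out in the prototypes. */
    void chacha_block_generic(struct chacha_state *state,
                              u8 stream[CHACHA_BLOCK_SIZE], int nrounds);
    void hchacha_block_generic(const struct chacha_state *state,
                               u32 out[HCHACHA_OUT_WORDS], int nrounds);
    void chacha_init(struct chacha_state *state, const u32 key[CHACHA_KEY_WORDS],
                     const u8 iv[CHACHA_IV_SIZE]);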

# 98066f2f | 05-May-2025 | Eric Biggers <ebiggers@google.com>

crypto: lib/chacha - strongly type the ChaCha state
The ChaCha state matrix is 16 32-bit words. Currently it is represented in the code as a raw u32 array, or even just a pointer to u32. This weak typing is error-prone. Instead, introduce struct chacha_state:
        struct chacha_state {
                u32 x[16];
        };
Convert all ChaCha and HChaCha functions to use struct chacha_state. No functional changes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
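
For illustration, the difference for one library entry point (the exact prototype is assumed here, but the pattern is the one described above):

    /* Before: any u32 pointer is accepted, so a wrongly sized or unrelated
     * buffer still compiles. */
    void chacha_crypt_generic(u32 *state, u8 *dst, const u8 *src,
                              unsigned int bytes, int nrounds);

    /* After: the state is a distinct type and mismatches become compile errors. */
    struct chacha_state {
            u32 x[16];
    };

    void chacha_crypt_generic(struct chacha_state *state, u8 *dst, const u8 *src,
                              unsigned int bytes, int nrounds);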

Revision tags: v6.15-rc5

# ef93f156 | 30-Apr-2025 | Herbert Xu <herbert@gondor.apana.org.au>

Revert "crypto: run initcalls for generic implementations earlier"
This reverts commit c4741b23059794bd99beef0f700103b0d983b3fd.
Crypto API self-tests no longer run at registration time and now occ
Revert "crypto: run initcalls for generic implementations earlier"
This reverts commit c4741b23059794bd99beef0f700103b0d983b3fd.
Crypto API self-tests no longer run at registration time and now occur either at late_initcall or upon the first use.
Therefore the premise of the above commit no longer exists. Revert it and subsequent additions of subsys_initcall and arch_initcall.
Note that lib/crypto calls will stay at subsys_initcall (or rather downgraded from arch_initcall) because they may need to occur before Crypto API registration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Revision tags: v6.15-rc4

# 714656a8 | 22-Apr-2025 | Eric Biggers <ebiggers@google.com>

crypto: arm - move library functions to arch/arm/lib/crypto/
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the arm BLAKE2s, ChaCha, and Poly1305 library functions into a new directory arch/arm/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Revision tags: v6.15-rc3

# 8821d269 | 18-Apr-2025 | Eric Biggers <ebiggers@google.com>

crypto: lib/chacha - restore ability to remove modules
Though the module_exit functions are now no-ops, they should still be defined, since otherwise the modules become unremovable.
Fixes: 08820553f33a ("crypto: arm/chacha - remove the redundant skcipher algorithms")
Fixes: 8c28abede16c ("crypto: arm64/chacha - remove the skcipher algorithms")
Fixes: f7915484c020 ("crypto: powerpc/chacha - remove the skcipher algorithms")
Fixes: ceba0eda8313 ("crypto: riscv/chacha - implement library instead of skcipher")
Fixes: 632ab0978f08 ("crypto: x86/chacha - remove the skcipher algorithms")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
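
A minimal sketch of what each affected glue module needs (the function name is illustrative):

    static void __exit chacha_simd_mod_exit(void)
    {
            /* Intentionally empty: nothing to tear down anymore, but without a
             * module_exit() the module cannot be unloaded with rmmod. */
    }
    module_exit(chacha_simd_mod_exit);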

Revision tags: v6.15-rc2, v6.15-rc1

# 08820553 | 05-Apr-2025 | Eric Biggers <ebiggers@google.com>

crypto: arm/chacha - remove the redundant skcipher algorithms
Since crypto/chacha.c now registers chacha20-$(ARCH), xchacha20-$(ARCH), and xchacha12-$(ARCH) skcipher algorithms that use the architecture's ChaCha and HChaCha library functions, individual architectures no longer need to do the same. Therefore, remove the redundant skcipher algorithms and leave just the library functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 4aa6dc90 | 05-Apr-2025 | Eric Biggers <ebiggers@google.com>

crypto: chacha - centralize the skcipher wrappers for arch code
Following the example of the crc32 and crc32c code, make the crypto subsystem register both generic and architecture-optimized chacha20, xchacha20, and xchacha12 skcipher algorithms, all implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when architecture-optimized code is actually being used, add a function chacha_is_arch_optimized() and make each arch implement it. Change each architecture's ChaCha module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case of s390, remove the CPU feature based module autoloading, which is no longer needed since the module just gets pulled in via function linkage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
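
A sketch of the new per-arch hook, assuming an architecture that gates its fast path behind a static key (the key name is illustrative):

    bool chacha_is_arch_optimized(void)
    {
            /* True once CPU-feature detection, run at arch_initcall time,
             * has enabled the optimized code path. */
            return static_key_enabled(&use_neon);
    }
    EXPORT_SYMBOL(chacha_is_arch_optimized);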

Revision tags: v6.14, v6.14-rc7

# ca17aa66 | 16-Mar-2025 | Eric Biggers <ebiggers@google.com>

crypto: lib/chacha - remove unused arch-specific init support
All implementations of chacha_init_arch() just call chacha_init_generic(), so it is pointless. Just delete it, and replace chacha_init() with what was previously chacha_init_generic().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
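
For reference, the surviving chacha_init() amounts to roughly the following (a sketch of the generic initialization; chacha_init_consts() and the word layout come from the existing library header):

    static inline void chacha_init(u32 *state, const u32 *key, const u8 *iv)
    {
            chacha_init_consts(state);      /* words 0..3: the "expand 32-byte k" constants */
            state[4]  = key[0];
            state[5]  = key[1];
            state[6]  = key[2];
            state[7]  = key[3];
            state[8]  = key[4];
            state[9]  = key[5];
            state[10] = key[6];
            state[11] = key[7];
            state[12] = get_unaligned_le32(iv +  0);        /* initial block counter */
            state[13] = get_unaligned_le32(iv +  4);
            state[14] = get_unaligned_le32(iv +  8);
            state[15] = get_unaligned_le32(iv + 12);
    }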
Revision tags: v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10 |

# fd16931a | 13-Dec-2020 | Ard Biesheuvel <ardb@kernel.org>

crypto: arm/chacha-neon - add missing counter increment
Commit 86cd97ec4b943af3 ("crypto: arm/chacha-neon - optimize for non-block size multiples") refactored the chacha block handling in the glue code in a way that may result in the counter increment to be omitted when calling chacha_block_xor_neon() to process a full block. This violates the skcipher API, which requires that the output IV is suitable for handling more input as long as the preceding input has been presented in round multiples of the block size. Also, the same code is exposed via the chacha library interface whose callers may actually rely on this increment to occur even for final blocks that are smaller than the chacha block size.
So increment the counter after calling chacha_block_xor_neon().
Fixes: 86cd97ec4b943af3 ("crypto: arm/chacha-neon - optimize for non-block size multiples")
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
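
The shape of the fix, sketched as a simplified tail-handling helper (the helper name is made up; word 12 of the ChaCha state is the block counter):

    static void chacha_neon_tail(u32 *state, u8 *dst, const u8 *src,
                                 unsigned int bytes, int nrounds)
    {
            u8 buf[CHACHA_BLOCK_SIZE];

            if (!bytes)
                    return;
            if (bytes == CHACHA_BLOCK_SIZE) {
                    chacha_block_xor_neon(state, dst, src, nrounds);
            } else {
                    memcpy(buf, src, bytes);
                    chacha_block_xor_neon(state, buf, buf, nrounds);
                    memcpy(dst, buf, bytes);
            }
            state[12]++;    /* the increment this commit restores */
    }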

Revision tags: v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3

# 86cd97ec | 03-Nov-2020 | Ard Biesheuvel <ardb@kernel.org>

crypto: arm/chacha-neon - optimize for non-block size multiples
The current NEON based ChaCha implementation for ARM is optimized for multiples of 4x the ChaCha block size (64 bytes). This makes sense for block encryption, but given that ChaCha is also often used in the context of networking, it makes sense to consider arbitrary length inputs as well.
For example, WireGuard typically uses 1420 byte packets, and performing ChaCha encryption involves 5 invocations of chacha_4block_xor_neon() and 3 invocations of chacha_block_xor_neon(), where the last one also involves a memcpy() using a buffer on the stack to process the final chunk of 1420 % 64 == 12 bytes.
Let's optimize for this case as well, by letting chacha_4block_xor_neon() deal with any input size between 64 and 256 bytes, using NEON permutation instructions and overlapping loads and stores. This way, the 140 byte tail of a 1420 byte input buffer can simply be processed in one go.
This results in the following performance improvements for 1420 byte blocks, without significant impact on power-of-2 input sizes. (Note that Raspberry Pi is widely used in combination with a 32-bit kernel, even though the core is 64-bit capable)
   Cortex-A8  (BeagleBone)     :  7%
   Cortex-A15 (Calxeda Midway) : 21%
   Cortex-A53 (Raspberry Pi 3) :  3%
   Cortex-A72 (Raspberry Pi 4) : 19%
Cc: Eric Biggers <ebiggers@google.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
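
Sketched from the description (not the verbatim glue code), the driving loop ends up looking roughly like this, with chacha_4block_xor_neon() now taking a byte count between 64 and 256:

    static void chacha_doneon(u32 *state, u8 *dst, const u8 *src,
                              unsigned int bytes, int nrounds)
    {
            u8 buf[CHACHA_BLOCK_SIZE];

            while (bytes > CHACHA_BLOCK_SIZE) {
                    unsigned int l = min(bytes, CHACHA_BLOCK_SIZE * 4U);

                    chacha_4block_xor_neon(state, dst, src, nrounds, l);
                    bytes -= l;
                    src += l;
                    dst += l;
                    state[12] += DIV_ROUND_UP(l, CHACHA_BLOCK_SIZE);
            }
            if (bytes) {
                    /* Short inputs still go through a stack bounce buffer. */
                    memcpy(buf, src, bytes);
                    chacha_block_xor_neon(state, buf, buf, nrounds);
                    memcpy(dst, buf, bytes);
            }
    }

Under this sketch, a 1420-byte packet becomes five 256-byte passes plus a single 140-byte pass through the same NEON routine, so the tail no longer needs the bounce-buffer path.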

Revision tags: v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3

# 706024a5 | 22-Apr-2020 | Jason A. Donenfeld <Jason@zx2c4.com>

crypto: arch/lib - limit simd usage to 4k chunks
The initial Zinc patchset, after some mailing list discussion, contained code to ensure that kernel_fpu_enable would not be kept on for more than a 4k chunk, since it disables preemption. The choice of 4k isn't totally scientific, but it's not a bad guess either, and it's what's used in both the x86 poly1305, blake2s, and nhpoly1305 code already (in the form of PAGE_SIZE, which this commit corrects to be explicitly 4k for the former two).
Ard did some back of the envelope calculations and found that at 5 cycles/byte (overestimate) on a 1ghz processor (pretty slow), 4k means we have a maximum preemption disabling of 20us, which Sebastian confirmed was probably a good limit.
Unfortunately the chunking appears to have been left out of the final patchset that added the glue code. So, this commit adds it back in.
Fixes: 84e03fa39fbe ("crypto: x86/chacha - expose SIMD ChaCha routine as library function")
Fixes: b3aad5bad26a ("crypto: arm64/chacha - expose arm64 ChaCha routine as library function")
Fixes: a44a3430d71b ("crypto: arm/chacha - expose ARM ChaCha routine as library function")
Fixes: d7d7b8535662 ("crypto: x86/poly1305 - wire up faster implementations for kernel")
Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
Fixes: a6b803b3ddc7 ("crypto: arm/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
Fixes: ed0356eda153 ("crypto: blake2s - x86_64 SIMD implementation")
Cc: Eric Biggers <ebiggers@google.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
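
A sketch of the chunking pattern being restored, modeled on the ARM ChaCha glue (the static key, helper names, and the exact fallback are assumptions; SZ_4K is the 4 KiB limit discussed above):

    void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                           unsigned int bytes, int nrounds)
    {
            if (!static_branch_likely(&use_neon) || bytes <= CHACHA_BLOCK_SIZE) {
                    chacha_crypt_generic(state, dst, src, bytes, nrounds);
                    return;
            }

            do {
                    unsigned int todo = min_t(unsigned int, bytes, SZ_4K);

                    /* Preemption is disabled only for one <= 4 KiB chunk. */
                    kernel_neon_begin();
                    chacha_doneon(state, dst, src, todo, nrounds);
                    kernel_neon_end();

                    bytes -= todo;
                    src += todo;
                    dst += todo;
            } while (bytes);
    }
    EXPORT_SYMBOL(chacha_crypt_arch);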

Revision tags: v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7

# 0bc81767 | 17-Jan-2020 | Ard Biesheuvel <ardb@kernel.org>

crypto: arm/chacha - fix build failured when kernel mode NEON is disabled
When the ARM accelerated ChaCha driver is built as part of a configuration that has kernel mode NEON disabled, we expect the compiler to propagate the build time constant expression IS_ENABLED(CONFIG_KERNEL_MODE_NEON) in a way that eliminates all the cross-object references to the actual NEON routines, which allows the chacha-neon-core.o object to be omitted from the build entirely.
Unfortunately, this fails to work as expected in some cases, and we may end up with a build error such as
chacha-glue.c:(.text+0xc0): undefined reference to `chacha_4block_xor_neon'
caused by the fact that chacha_doneon() has not been eliminated from the object code, even though it will never be called in practice.
Let's fix this by adding some IS_ENABLED(CONFIG_KERNEL_MODE_NEON) tests that are not strictly needed from a logical point of view, but should help the compiler infer that the NEON code paths are unreachable in those cases.
Fixes: b36d8c09e710c71f ("crypto: arm/chacha - remove dependency on generic ...")
Reported-by: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
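
The kind of guard being added, sketched as a small helper (the names mirror the ARM glue but are assumptions here):

    static DEFINE_STATIC_KEY_FALSE(use_neon);

    static bool neon_usable(void)
    {
            /* The IS_ENABLED() test is redundant at runtime, since use_neon can
             * only be enabled when kernel mode NEON exists, but it lets the
             * compiler prove the NEON call sites unreachable and drop the
             * references to chacha_*_neon() when CONFIG_KERNEL_MODE_NEON=n. */
            return IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
                   static_branch_likely(&use_neon) && crypto_simd_usable();
    }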

Revision tags: v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1

# 8394bfec | 25-Nov-2019 | Jason A. Donenfeld <Jason@zx2c4.com>

crypto: arch - conditionalize crypto api in arch glue for lib code
For glue code that's used by Zinc, the actual Crypto API functions might not necessarily exist, and don't need to exist either. Before this patch, there are valid build configurations that lead to an unbuildable kernel. This fixes it to conditionalize those symbols on the existence of the proper config entry.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
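
The shape of the change, as a rough sketch: register the skcipher algorithms only when the relevant Crypto API support is reachable from this module. The config symbol and the algorithm array shown here are placeholders rather than the exact ones the patch touches.

    static int __init chacha_simd_mod_init(void)
    {
            if (!IS_REACHABLE(CONFIG_CRYPTO_SKCIPHER))
                    return 0;       /* library-only build: skip the Crypto API */

            return crypto_register_skciphers(arm_algs, ARRAY_SIZE(arm_algs));
    }

    static void __exit chacha_simd_mod_fini(void)
    {
            if (IS_REACHABLE(CONFIG_CRYPTO_SKCIPHER))
                    crypto_unregister_skciphers(arm_algs, ARRAY_SIZE(arm_algs));
    }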

Revision tags: v5.4, v5.4-rc8, v5.4-rc7

# a44a3430 | 08-Nov-2019 | Ard Biesheuvel <ardb@kernel.org>

crypto: arm/chacha - expose ARM ChaCha routine as library function
Expose the accelerated NEON ChaCha routine directly as a symbol export so that users of the ChaCha library API can use it directly.
Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
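
A sketch of the arrangement described: the NEON path sits behind a static key that is flipped once at init time, and the library entry point is exported directly. The function names and the hwcap test follow the ARM glue but are assumptions here.

    static DEFINE_STATIC_KEY_FALSE(use_neon);

    void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
    {
            if (!static_branch_likely(&use_neon) || !crypto_simd_usable()) {
                    hchacha_block_arm(state, stream, nrounds);      /* scalar path */
            } else {
                    kernel_neon_begin();
                    hchacha_block_neon(state, stream, nrounds);
                    kernel_neon_end();
            }
    }
    EXPORT_SYMBOL(hchacha_block_arch);

    static int __init chacha_simd_mod_init(void)
    {
            /* Flip the static key only if the CPU actually has NEON. */
            if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON))
                    static_branch_enable(&use_neon);
            return 0;
    }
    module_init(chacha_simd_mod_init);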

# b36d8c09 | 08-Nov-2019 | Ard Biesheuvel <ardb@kernel.org>

crypto: arm/chacha - remove dependency on generic ChaCha driver
Instead of falling back to the generic ChaCha skcipher driver for non-SIMD cases, use a fast scalar implementation for ARM authored by Eric Biggers. This removes the module dependency on chacha-generic altogether, which also simplifies things when we expose the ChaCha library interface from this module.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>