Searched full:optimized (Results 1 – 25 of 570) sorted by relevance
/linux-5.10/Documentation/trace/ |
D | kprobes.rst |
   193: instruction (the "optimized region") lies entirely within one function.
   198: jump into the optimized region. Specifically:
   203: optimized region -- Kprobes checks the exception tables to verify this);
   204: - there is no near jump to the optimized region (other than to the first
   207: - For each instruction in the optimized region, Kprobes verifies that
   219: - the instructions from the optimized region
   229: - Other instructions in the optimized region are probed.
   236: If the kprobe can be optimized, Kprobes enqueues the kprobe to an
   238: it. If the to-be-optimized probepoint is hit before being optimized,
   249: optimized region [3]_. As you know, synchronize_rcu() can ensure
   [all …]
|
/linux-5.10/arch/arm/crypto/ |
D | Kconfig |
   18: using optimized ARM assembler.
   28: using optimized ARM NEON assembly, when NEON instructions are
   55: using optimized ARM assembler and NEON, when available.
   63: using optimized ARM assembler and NEON, when available.
   70: Use optimized AES assembler routines for ARM platforms.
|
/linux-5.10/Documentation/devicetree/bindings/opp/ |
D | ti-omap5-opp-supply.txt |
   26: "ti,omap5-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
   28: "ti,omap5-core-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
   33: optimized efuse configuration. Each item consists of the following:
   35: efuse_offset: efuse offset from reg where the optimized voltage is stored.
|
/linux-5.10/arch/ia64/lib/ |
D | io.c |
   9: * This needs to be optimized.
   24: * This needs to be optimized.
   39: * This needs to be optimized.
|
D | clear_page.S |
   56: // Optimized for Itanium
   62: // Optimized for McKinley
|
/linux-5.10/drivers/opp/ |
D | ti-opp-supply.c |
   25: * struct ti_opp_supply_optimum_voltage_table - optimized voltage table
   27: * @optimized_uv: Optimized voltage from efuse
   36: * @vdd_table: Optimized voltage mapping table
   64: * _store_optimized_voltages() - store optimized voltages
   68: * Picks up efuse based optimized voltages for VDD unique per device and
   153: * Some older samples might not have optimized efuse in _store_optimized_voltages()
   188: * Return: if a match is found, return optimized voltage, else return
   211: dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n", in _get_optimal_vdd_voltage()
   401: /* If we need optimized voltage */ in ti_opp_supply_probe()
|
/linux-5.10/arch/arm/kernel/ |
D | io.c |
   43: * This needs to be optimized.
   59: * This needs to be optimized.
   75: * This needs to be optimized.
|
/linux-5.10/drivers/video/fbdev/aty/ |
D | atyfb.h |
   228: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_le32()
   241: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le32()
   255: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le16()
   267: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_8()
   279: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_8()
|
/linux-5.10/Documentation/locking/ |
D | percpu-rw-semaphore.rst |
   6: optimized for locking for reading.
   26: The idea of using RCU for optimized rw-lock was introduced by
|
D | rt-mutex.rst |
   40: RT-mutexes are optimized for fastpath operations and have no internal
   42: without waiters. The optimized fastpath operations require cmpxchg
|
/linux-5.10/arch/x86/kernel/kprobes/ |
D | opt.c |
   45: /* This function only handles jump-optimized kprobe */ in __recover_optprobed_insn()
   57: * If the kprobe can be optimized, original bytes which can be in __recover_optprobed_insn()
   169: /* Optimized kprobe call back function: called from optinsn */
   348: /* Check optimized_kprobe can actually be optimized. */
   363: /* Check the addr is within the optimized instructions. */
   371: /* Free optimized instruction slot */
   553: /* This kprobe is really able to run optimized path. */ in setup_detour_execution()
|
/linux-5.10/arch/x86/include/asm/ |
D | qspinlock_paravirt.h |
   8: * and restored. So an optimized version of __pv_queued_spin_unlock() is
   19: * Optimized assembly version of __raw_callee_save___pv_queued_spin_unlock
|
/linux-5.10/Documentation/devicetree/bindings/memory-controllers/ |
D | atmel,ebi.txt |
   67: - atmel,smc-tdf-mode: "normal" or "optimized". When set to
   68: "optimized" the data float time is optimized
|
/linux-5.10/drivers/staging/media/atomisp/pci/isp/kernels/tdf/tdf_1.0/ |
D | ia_css_tdf_types.h |
   34: s32 thres_flat_table[64]; /** Final optimized strength table of NR for flat region. */
   35: s32 thres_detail_table[64]; /** Final optimized strength table of NR for detail region. */
|
/linux-5.10/include/linux/ |
D | omap-gpmc.h |
   34: * gpmc_omap_onenand_set_timings - set optimized sync timings.
   40: * Sets optimized timings for the @cs region based on @freq and @latency.
|
/linux-5.10/arch/sparc/lib/ |
D | strlen.S |
   2: /* strlen.S: Sparc optimized strlen code
   3: * Hand optimized from GNU libc's strlen
|
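The sparc `strlen.S` hit above is a hand-optimized strlen. A common trick behind such implementations is scanning one machine word at a time and testing all bytes for zero at once with the classic `(w - 0x01010101) & ~w & 0x80808080` mask. The sketch below is a portable C illustration of that idea under our own naming; it is not the kernel's or glibc's actual code.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Word-at-a-time strlen sketch (illustrative, not the SPARC assembly).
 * The expression (w - 0x01010101) & ~w & 0x80808080 is nonzero exactly
 * when some byte of the 32-bit word w is zero. */
static size_t strlen_wordwise(const char *s)
{
    const char *p = s;

    /* Advance byte-by-byte until p is 4-byte aligned. */
    while (((uintptr_t)p & 3) != 0) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }

    /* Scan one 32-bit word per iteration. */
    for (;;) {
        uint32_t w;
        memcpy(&w, p, sizeof(w));            /* avoids strict-aliasing issues */
        if ((w - 0x01010101u) & ~w & 0x80808080u)
            break;                           /* some byte in this word is zero */
        p += 4;
    }

    /* Locate the exact terminating byte within the word. */
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}
```

Note the aligned word read may touch padding bytes past the terminator, which is why callers in such code keep buffers word-padded.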
D | M7memset.S |
   2: * M7memset.S: SPARC M7 optimized memset.
   8: * M7memset.S: M7 optimized memset.
   100: * (can create a more optimized version later.)
   114: * (can create a more optimized version later.)
|
/linux-5.10/arch/x86/crypto/ |
D | twofish_glue.c |
   2: * Glue Code for assembler optimized version of TWOFISH
   98: MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
|
/linux-5.10/kernel/ |
D | kprobes.c |
   410: * This must be called from arch-dep optimized caller.
   426: /* Free optimized instructions and optimized_kprobe */
   478: * Return an optimized kprobe whose optimizing code replaces
   667: /* Optimize kprobe if p is ready to be optimized */
   677: /* kprobes with post_handler can not be optimized */ in optimize_kprobe()
   683: /* Check there is no other kprobes at the optimized instructions */ in optimize_kprobe()
   687: /* Check if it is already optimized. */ in optimize_kprobe()
   697: /* On unoptimizing/optimizing_list, op must have OPTIMIZED flag */ in optimize_kprobe()
   713: /* Unoptimize a kprobe if p is optimized */
   719: return; /* This is not an optprobe nor optimized */ in unoptimize_kprobe()
   [all …]
|
/linux-5.10/arch/s390/include/asm/ |
D | checksum.h |
   8: * Martin Schwidefsky (heavily optimized CKSM version)
   55: * This is a version of ip_compute_csum() optimized for IP headers,
|
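The `checksum.h` hit above concerns `ip_compute_csum()`, the 16-bit ones'-complement Internet checksum (RFC 1071) that s390 accelerates in hardware. As a reference point, here is the plain portable algorithm; this is a sketch of the standard RFC 1071 procedure, not the kernel's s390 implementation, and the function name is ours.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* 16-bit ones'-complement Internet checksum (RFC 1071), big-endian word
 * order. Illustrative reference version, not kernel code. */
static uint16_t ip_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    /* Sum consecutive 16-bit big-endian words. */
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                      /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;

    /* Fold the carries back into the low 16 bits. */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;             /* ones' complement of the sum */
}
```

Verifying a received IPv4 header amounts to running the same sum over the header with its checksum field included and checking that the result is zero.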
/linux-5.10/arch/arc/lib/ |
D | memset-archs.S |
   10: * The memset implementation below is optimized to use prefetchw and prealloc
   12: * If you want to implement optimized memset for other possible L1 data cache
|
/linux-5.10/arch/m68k/include/asm/ |
D | delay.h |
   72: * the const factor (4295 = 2**32 / 1000000) can be optimized out when
   88: * first constant multiplications gets optimized away if the delay is
|
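The m68k `delay.h` hit relies on the constant 4295 ≈ 2^32 / 1000000 so that division by a million folds into a multiply and a shift at compile time. The sketch below demonstrates that reciprocal-multiplication trick in portable C; the function name is ours, not the kernel's, and this shows the general technique rather than delay.h's exact loop calibration.

```c
#include <assert.h>
#include <stdint.h>

/* Division by 10^6 via a fixed-point reciprocal:
 *   x / 1000000  ~=  (x * (2^32 / 1000000)) >> 32  =  (x * 4295) >> 32
 * The multiply-and-shift replaces a hardware divide; the approximation
 * can overestimate by one near exact multiples of 10^6. */
static uint32_t div_1000000_approx(uint32_t x)
{
    return (uint32_t)(((uint64_t)x * 4295u) >> 32);
}
```

Note that 2^32 / 1000000 truncates to 4294; 4295 is the rounded factor the comment quotes.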
/linux-5.10/arch/sparc/crypto/ |
D | crc32c_glue.c |
   2: /* Glue code for CRC32C optimized for sparc64 crypto opcodes.
   161: pr_info("Using sparc64 crc32c opcode optimized CRC32C implementation\n"); in crc32c_sparc64_mod_init()
|
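The sparc64 glue code above dispatches CRC32C to a dedicated CPU opcode. For reference, this is the bit-at-a-time software version of the same checksum, using the Castagnoli polynomial in its reflected form 0x82F63B78. It is the textbook algorithm, not the kernel's implementation, and follows the standard convention (initial and final inversion, seed 0).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* CRC-32C (Castagnoli), reflected polynomial 0x82F63B78, one bit per
 * iteration. Real implementations use tables or CPU instructions. */
static uint32_t crc32c(uint32_t crc, const uint8_t *data, size_t len)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}
```

The standard check value for any CRC-32C implementation is crc32c(0, "123456789", 9) == 0xE3069283, which makes a convenient self-test.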
/linux-5.10/tools/testing/kunit/ |
D | .gitignore |
   2: # Byte-compiled / optimized / DLL files
|
/linux-5.10/crypto/ |
D | Kconfig |
   478: SSE2 optimized implementation of the hash function used by the
   486: AVX2 optimized implementation of the hash function used by the
   669: optimized for 64bit platforms and can produce digests of any size
   687: optimized for 8-32bit platforms and can produce digests of any size
   774: tristate "Poly1305 authenticator algorithm (MIPS optimized)"
   1060: Tiger is a hash function optimized for 64-bit processors while
   1421: optimized using SPARC64 crypto opcodes.
   1432: algorithm that is optimized for x86-64 processors. Two versions of
   1451: an algorithm optimized for 64-bit processors with good performance
   1498: SSSE3, AVX2, and AVX-512VL optimized implementations of the ChaCha20,
   [all …]
|