/linux/Documentation/arch/powerpc/kvm-nested.rst
    12: hypervisor has implemented them. The terms L0, L1, and L2 are used to
    14: that would normally be called the "host" or "hypervisor". L1 is a
    17: and controlled by L1 acting as a hypervisor.
    22: Linux/KVM has had support for Nesting as an L0 or L1 since 2018
    31: The L1 code was added::
    39: call made by the L1 to tell the L0 to start an L2 vCPU with the given
    42: the L1 by the L0. The full L2 vCPU state is always transferred from
    43: and to L1 when the L2 is run. The L0 doesn't keep any state on the L2
    44: vCPU (except in the short sequence in the L0 on L1 -> L2 entry and L2
    45: -> L1 exit) [...]

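The key design point in lines 42-45 is that L0 is stateless about L2: the complete vCPU state crosses the L1/L0 boundary on every run. A minimal C sketch of the L1 side under that contract; the struct layout and the h_run_l2_vcpu() wrapper are illustrative assumptions, not the real hcall interface:

    #include <stdint.h>

    /* Illustrative only: stands in for whatever hcall the L1 uses to
     * ask the L0 to run an L2 vCPU (all names here are assumptions). */
    struct l2_vcpu_state {
        uint64_t gprs[32];
        uint64_t pc, msr, lr, ctr;
        /* ... all remaining architected vCPU state ... */
    };

    extern long h_run_l2_vcpu(uint64_t l2_id, struct l2_vcpu_state *state);

    static long run_l2_once(uint64_t l2_id, struct l2_vcpu_state *s)
    {
        /* L1 passes the full state in; L0 runs the L2 until it exits,
         * writes the full state back, and keeps nothing, so L1 stays
         * the sole owner of L2 vCPU state between runs. */
        return h_run_l2_vcpu(l2_id, s);
    }
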
/linux/Documentation/virt/kvm/x86/running-nested-guests.rst
    19: | L1 (Guest Hypervisor) |
    33: - L1 – level-1 guest; a VM running on L0; also called the "guest
    36: - L2 – level-2 guest; a VM running on L1, this is the "nested guest"
    45: metal, running the LPAR hypervisor), L1 (host hypervisor), L2
    49: L1, and L2) for all architectures; and will largely focus on
   148: able to start an L1 guest with::
   175: 2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
   179: 3. Now the KVM module can be loaded in the L1 (guest hypervisor)::
   187: Migrating an L1 guest, with a *live* nested guest in it, to another
   191: On AMD systems, once an L1 guest [...]

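Step zero for everything in this file is verifying that the L0 kernel actually exposes nesting to L1. A small sketch, assuming the conventional module-parameter path for Intel (kvm_amd publishes the same parameter under its own directory):

    #include <stdio.h>

    /* Check whether nested virtualization is enabled on an Intel L0
     * host; on AMD read kvm_amd instead of kvm_intel. */
    int main(void)
    {
        FILE *f = fopen("/sys/module/kvm_intel/parameters/nested", "r");
        int c;

        if (!f) {
            fprintf(stderr, "kvm_intel not loaded?\n");
            return 1;
        }
        c = fgetc(f);   /* 'Y' or '1' means nesting is available to L1 */
        fclose(f);
        printf("nested: %c\n", c);
        return (c == 'Y' || c == '1') ? 0 : 1;
    }
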
/linux/arch/arc/kernel/entry-compact.S
   152: ; if an L2 IRQ interrupted an L1 ISR, disable preemption
   154: ; This is to avoid a potential L1-L2-L1 scenario
   155: ;  -L1 IRQ taken
   156: ;  -L2 interrupts L1 (before L1 ISR could run)
   160: ;  But both L1 and L2 re-enabled, so another L1 can be taken
   161: ;  while prev L1 is still unserviced
   165: ; L2 interrupting L1 implies [...]

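Rendered as C, the rule at line 152 is a one-line check on return from the L2 handler. This is an illustration of the scenario only; the real guard lives in ARC assembly, and the names below are invented:

    /* C-only illustration of the comment above. */
    static int l1_isr_active;   /* set on L1 IRQ entry, cleared on its exit */
    static int preempt_blocked; /* stands in for the kernel's preempt count */

    static void l2_irq_return(void)
    {
        /* If this L2 IRQ interrupted an in-flight L1 ISR, keep
         * preemption off until that L1 ISR completes, so a fresh L1
         * IRQ cannot be taken while the previous one is still
         * unserviced. */
        if (l1_isr_active)
            preempt_blocked = 1;
    }
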
/linux/arch/arm/mm/proc-xsc3.S
    42: * The cache line size of the L1 I, L1 D and unified L2 cache.
    47: * The size of the L1 D cache.
    63: * This macro cleans and invalidates the entire L1 D cache.
    69: 1: mcr p15, 0, \rd, c7, c14, 2   @ clean/invalidate L1 D line
   116:    mcr p15, 0, ip, c7, c7, 0     @ invalidate L1 caches and BTB
   176:    mcrne p15, 0, ip, c7, c5, 0   @ invalidate L1 I cache and BTB
   200:    mcrne p15, 0, r0, c7, c5, 1   @ invalidate L1 I line
   201:    mcr p15, 0, r0, c7, c14, 1    @ clean/invalidate L1 D line
   233: 1: mcr p15, 0, r0, c7, c10, 1    @ clean L1 [...]

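The cp15 encodings quoted above can be issued from C as well; a sketch of two of them as GCC inline assembly (ARM/XScale3, privileged execution assumed, encodings taken from lines 200-201):

    #include <stdint.h>

    /* c7, c14, 1: clean and invalidate L1 D line by virtual address. */
    static inline void l1_dcache_clean_inv_line(const void *addr)
    {
        asm volatile("mcr p15, 0, %0, c7, c14, 1" : : "r"(addr) : "memory");
    }

    /* c7, c5, 1: invalidate L1 I line, as at line 200 of the file. */
    static inline void l1_icache_inv_line(const void *addr)
    {
        asm volatile("mcr p15, 0, %0, c7, c5, 1" : : "r"(addr) : "memory");
    }
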
/linux/arch/powerpc/perf/power8-pmu.c
    46: * | | *- L1/L2/L3 cache_sel |
    80: *   else if cache_sel[1]: # L1 event
   133: CACHE_EVENT_ATTR(L1-dcache-load-misses, PM_LD_MISS_L1);
   134: CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
   136: CACHE_EVENT_ATTR(L1-dcache-prefetches, PM_L1_PREF);
   137: CACHE_EVENT_ATTR(L1-dcache-store-misses, PM_ST_MISS_L1);
   138: CACHE_EVENT_ATTR(L1-icache-load-misses, PM_L1_ICACHE_MISS);
   139: CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
   140: CACHE_EVENT_ATTR(L1-icache-prefetches, PM_IC_PREF_WRITE);

/linux/arch/powerpc/perf/power9-pmu.c
    30: * | | *- L1/L2/L3 cache_sel |
   177: CACHE_EVENT_ATTR(L1-dcache-load-misses, PM_LD_MISS_L1_FIN);
   178: CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
   179: CACHE_EVENT_ATTR(L1-dcache-prefetches, PM_L1_PREF);
   180: CACHE_EVENT_ATTR(L1-dcache-store-misses, PM_ST_MISS_L1);
   181: CACHE_EVENT_ATTR(L1-icache-load-misses, PM_L1_ICACHE_MISS);
   182: CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
   183: CACHE_EVENT_ATTR(L1-icache-prefetches, PM_IC_PREF_WRITE);

/linux/arch/powerpc/perf/power10-pmu.c
    29: * | | | *- L1/L2/L3 cache_sel | |*-radix_scope_qual
   133: CACHE_EVENT_ATTR(L1-dcache-load-misses, PM_LD_MISS_L1);
   134: CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
   135: CACHE_EVENT_ATTR(L1-dcache-prefetches, PM_LD_PREFETCH_CACHE_LINE_MISS);
   136: CACHE_EVENT_ATTR(L1-dcache-store-misses, PM_ST_MISS_L1);
   137: CACHE_EVENT_ATTR(L1-icache-load-misses, PM_L1_ICACHE_MISS);
   138: CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
   139: CACHE_EVENT_ATTR(L1-icache-prefetches, PM_IC_PREF_REQ);

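All three generations map the same generic L1 cache event names onto machine-specific PMU events, which is what lets a portable perf_event_open() caller count them without knowing the PM_* encodings. A minimal sketch using the standard perf syscall and ioctls, with error handling trimmed:

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        struct perf_event_attr attr;
        long long misses = 0;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HW_CACHE;
        /* Generic "L1-dcache-load-misses"; the PMU driver translates
         * it, e.g. to PM_LD_MISS_L1 on POWER8/POWER10. */
        attr.config = PERF_COUNT_HW_CACHE_L1D |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
            return 1;

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... workload to measure ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        read(fd, &misses, sizeof(misses));
        printf("L1-dcache-load-misses: %lld\n", misses);
        close(fd);
        return 0;
    }
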
/linux/security/apparmor/include/perms.h
   168: * TODO: optimize the walk, currently does subwalk of L2 for each P in L1
   186: #define xcheck_ns_labels(L1, L2, FN, args...) \
   189:     fn_for_each((L1), __p1, FN(__p1, (L2), args)); \
   193: #define xcheck_labels_profiles(L1, L2, FN, args...) \
   194:     xcheck_ns_labels((L1), (L2), xcheck_ns_profile_label, (FN), args)
   196: #define xcheck_labels(L1, L2, P, FN1, FN2) \
   197:     xcheck(fn_for_each((L1), (P), (FN1)), fn_for_each((L2), (P), (FN2)))

/linux/security/apparmor/include/label.h
   234: #define fn_for_each2_XXX(L1, L2, P, FN, ...) \
   238:     label_for_each ## __VA_ARGS__(i, (L1), (L2), (P)) { \
   244: #define fn_for_each_in_merge(L1, L2, P, FN) \
   245:     fn_for_each2_XXX((L1), (L2), P, FN, _in_merge)
   246: #define fn_for_each_not_in_set(L1, L2, P, FN) \
   247:     fn_for_each2_XXX((L1), (L2), P, FN, _not_in_set)

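The TODO at perms.h line 168 states the cost model of these macros: every profile P in L1 triggers a full subwalk of L2. Stripped of the macro layers, the shape is roughly the following; the types and callback are stand-ins, not AppArmor's real ones:

    /* Illustrative shape only; the real code builds this walk out of
     * the fn_for_each macro layers and accumulates per-pair errors. */
    struct profile;
    struct label { int n; struct profile **vec; };

    static int xcheck_labels_sketch(struct label *l1, struct label *l2,
                                    int (*fn)(struct profile *,
                                              struct profile *))
    {
        int i, j, err = 0;

        for (i = 0; i < l1->n; i++)        /* each P in L1 ...        */
            for (j = 0; j < l2->n; j++)    /* ... subwalks all of L2  */
                err = err ? err : fn(l1->vec[i], l2->vec[j]);
        return err;
    }
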
/linux/arch/hexagon/lib/memset.S
   159: if (r2==#0) jump:nt .L1
   186: if (p1) jump .L1
   197: if (p0.new) jump:nt .L1
   208: if (p0.new) jump:nt .L1
   284: .L1:

/linux/Documentation/locking/lockdep-design.rst
    22: dependency can be understood as lock order, where L1 -> L2 suggests that
    23: a task is attempting to acquire L2 while holding L1. From lockdep's
    24: perspective, the two locks (L1 and L2) are not necessarily related; that
   145: <L1> -> <L2>
   146: <L2> -> <L1>
   521: L1 -> L2
   523: , which means lockdep has seen L1 held before L2 held in the same context at runtime.
   524: And in deadlock detection, we care whether we could get blocked on L2 with L1 held,
   525: IOW, whether there is a locker L3 that L1 blocks L3 and L2 gets blocked by L3. So
   526: we only care about 1) what L1 blocks [...]

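The pair of dependencies <L1> -> <L2> and <L2> -> <L1> quoted at lines 145-146 is exactly what an ABBA deadlock leaves behind; a minimal userspace reproduction of the pattern, with pthread mutexes standing in for kernel locks:

    #include <pthread.h>

    static pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

    /* Records the dependency L1 -> L2: L2 acquired while L1 is held. */
    static void *task_a(void *unused)
    {
        pthread_mutex_lock(&L1);
        pthread_mutex_lock(&L2);   /* may block forever against task_b */
        pthread_mutex_unlock(&L2);
        pthread_mutex_unlock(&L1);
        return unused;
    }

    /* Records the reverse dependency L2 -> L1, closing the cycle. */
    static void *task_b(void *unused)
    {
        pthread_mutex_lock(&L2);
        pthread_mutex_lock(&L1);
        pthread_mutex_unlock(&L1);
        pthread_mutex_unlock(&L2);
        return unused;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, 0, task_a, 0);
        pthread_create(&b, 0, task_b, 0);
        pthread_join(a, 0);   /* with unlucky timing, never returns */
        pthread_join(b, 0);
        return 0;
    }
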
/linux/Documentation/locking/rt-mutex-design.rst
    47: grab lock L1 (owned by C)
   139: Mutexes: L1, L2, L3, L4
   141: A owns: L1
   142: B blocked on L1
   152: E->L4->D->L3->C->L2->B->L1->A
   159: F->L5->B->L1->A
   168: +->B->L1->A
   180: G->L2->B->L1->A
   188: G-+ +->B->L1->A
   230: L1, L [...]

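Chains like F->L5->B->L1->A above trace priority inheritance from a blocked task through successive lock owners. Userspace can opt into the same protocol on a futex-backed mutex with PTHREAD_PRIO_INHERIT:

    #include <pthread.h>
    #include <stdio.h>

    /* Create a mutex whose owner is boosted the way the rt-mutex PI
     * chains above describe. */
    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) {
            fprintf(stderr, "PTHREAD_PRIO_INHERIT unsupported\n");
            return 1;
        }
        pthread_mutex_init(&lock, &attr);

        /* A low-priority owner of `lock` is now boosted to the highest
         * priority among the tasks blocked on it, bounding inversion. */
        pthread_mutex_lock(&lock);
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }
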
/linux/Documentation/translations/it_IT/locking/lockdep-design.rst
    21: can be read as their order; for example, L1 -> L2 suggests
    22: that a task tries to acquire L2 while already holding L1. From the point of
    23: view of lockdep, the two locks (L1 and L2) are not necessarily related: that
   143: <L1> -> <L2>
   144: <L2> -> <L1>
   531: L1 -> L2
   533: This means that lockdep has seen L1 acquired before L2 in the same
   535: we care whether we can end up blocked by L2 while L1 is held.
   537: by L1 and an L2 that gets blocked by L3. So, we are interested in (1) what
   538: L1 blocks [...]

/linux/arch/m68k/fpsp040/setox.S
   104: | 3.1 R := X + N*L1, where L1 := single-precision(-log2/64).
   105: | 3.2 R := R + N*L2, L2 := extended-precision(-log2/64 - L1).
   106: | Notes: a) The way L1 and L2 are chosen ensures L1+L2 approximate
   108: |        b) N*L1 is exact because N is no longer than 22 bits and
   109: |           L1 is no longer than 24 bits.
   110: |        c) The calculation X+N*L1 is also exact due to cancellation.
   111: |           Thus, R is practically X+N(L1+L2) to full 64 bits.
   241: | 3.1 R := X + N*L1, where [...]

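The L1/L2 split at steps 3.1-3.2 is the classic two-constant (Cody-Waite) argument reduction. A small C sketch of the same idea, deriving L1 by rounding -log2/64 to 24 bits just as the comment describes; double here stands in for the 68k's extended precision:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* L1: -log2/64 rounded to single precision (<= 24 mantissa
         * bits); L2: the residual, so L1 + L2 ~ -log2/64. */
        double l1 = (float)(-M_LN2 / 64.0);
        double l2 = -M_LN2 / 64.0 - l1;

        double x = 1.2345;
        double n = nearbyint(x * 64.0 / M_LN2);  /* x ~ n*log2/64 + r */
        /* x + n*l1 is exact (n fits in ~22 bits, l1 in 24), so r keeps
         * nearly full precision despite the cancellation. */
        double r = (x + n * l1) + n * l2;

        /* e^x = 2^(n/64) * e^r; compare against libm's exp(x). */
        printf("%.17g vs %.17g\n", exp2(n / 64.0) * exp(r), exp(x));
        return 0;
    }
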
/linux/Documentation/devicetree/bindings/media/st-rc.txt
    10: - rx-mode: can be "infrared" or "uhf". This property specifies the L1
    13: - tx-mode: should be "infrared". This property specifies the L1

/linux/arch/alpha/boot/main.c
    53: * code has the L1 page table identity-map itself in the second PTE
    54: * in the L1 page table. Thus the L1-page is virtually addressable
    59: #define L1 ((unsigned long *) 0x200802000)
    71: pcb_va->ptbr = L1[1] >> 32;   [in pal_init()]

/linux/arch/alpha/boot/bootp.c
    59: * code has the L1 page table identity-map itself in the second PTE
    60: * in the L1 page table. Thus the L1-page is virtually addressable
    65: #define L1 ((unsigned long *) 0x200802000)
    77: pcb_va->ptbr = L1[1] >> 32;   [in pal_init()]

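Both copies of pal_init() recover the L1 table's physical frame straight from its own self-map entry (L1[1] >> 32, so the PFN lives in the PTE's top 32 bits). A hedged sketch of how such a self-map gets built; mk_pte_sketch() and its flag bits are invented for illustration, not Alpha's real PTE format:

    #include <stdint.h>

    /* Illustrative PTE builder: frame number in the high 32 bits
     * (matching the L1[1] >> 32 above), validity bits low. */
    static uint64_t mk_pte_sketch(uint64_t pfn)
    {
        return (pfn << 32) | 0x1101;
    }

    static void build_self_map(uint64_t *l1, uint64_t l1_pfn)
    {
        /* PTE 1 maps the L1 page itself, so once paging is on the L1
         * table stays reachable through one fixed virtual address
         * (0x200802000 in the boot code above). */
        l1[1] = mk_pte_sketch(l1_pfn);
    }
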
/linux/Documentation/translations/zh_CN/arch/arm64/memory.txt
    90: | | +---------------------> [38:30] L1 index
   105: | +-------------------------------> [47:42] L1 index

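Both arrows decode a field of the virtual address into an L1 table index: 9 bits at [38:30] in the first split, 6 bits at [47:42] in the second (the 4KB/64KB granule naming below follows the usual arm64 splits and is an assumption here). In C the extraction is a shift and a mask:

    #include <stdint.h>

    /* 9 bits at [38:30] -> mask 0x1ff; 6 bits at [47:42] -> mask 0x3f. */
    static inline unsigned l1_index_4k(uint64_t va)
    {
        return (va >> 30) & 0x1ff;
    }

    static inline unsigned l1_index_64k(uint64_t va)
    {
        return (va >> 42) & 0x3f;
    }
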
/linux/arch/riscv/lib/tishift.S
    10: beqz a2, .L1
    21: .L1:

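The `beqz a2, .L1` at line 10 short-circuits a zero shift count: a double-word shift needs a cross term like `lo >> (64 - n)`, which is undefined for n == 0, so the zero case must exit early. A C sketch of the pattern; the shift direction here is illustrative (tishift.S provides the 128-bit shift helpers the compiler calls):

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;

    /* 128-bit left shift for 0 <= n < 64, mirroring the early exit the
     * assembly takes with `beqz a2, .L1` when the count is zero. */
    static u128 shl128(u128 v, unsigned n)
    {
        u128 r;

        if (n == 0)             /* avoid the undefined lo >> 64 below */
            return v;

        r.hi = (v.hi << n) | (v.lo >> (64 - n));
        r.lo = v.lo << n;
        return r;
    }
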
/linux/Documentation/driver-api/edac.rst
   155: - CPU caches (L1 and L2)
   165: For example, a cache could be composed of L1, L2 and L3 levels of cache.
   166: Each CPU core would have its own L1 cache, while sharing L2 and maybe L3
   174: cpu/cpu0/..  <L1 and L2 block directory>
   175:   /L1-cache/ce_count
   179: cpu/cpu1/..  <L1 and L2 block directory>
   180:   /L1-cache/ce_count
   186: the L1 and L2 directories would be "edac_device_block's"

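Counters like the ce_count files at lines 175/180 are plain-text sysfs attributes, so monitoring them is ordinary file I/O. A sketch; the path prefix below is an assumption, since the excerpt only shows the tree relative to the edac_device root:

    #include <stdio.h>

    /* Assumed prefix; adjust for where the edac_device is mounted. */
    #define CE_PATH "/sys/devices/system/edac/cpu/cpu0/L1-cache/ce_count"

    int main(void)
    {
        unsigned long ce = 0;
        FILE *f = fopen(CE_PATH, "r");

        if (!f)
            return 1;               /* EDAC device not present */
        if (fscanf(f, "%lu", &ce) == 1)
            printf("cpu0 L1 correctable errors: %lu\n", ce);
        fclose(f);
        return 0;
    }
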
/linux/tools/perf/Documentation/perf-c2c.txt
   224: L1Hit  - store accesses that hit L1
   225: L1Miss - store accesses that missed L1
   228: Core Load Hit - FB, L1, L2
   229:   - count of load hits in FB (Fill Buffer), L1 and L2 cache
   250: Store Refs - L1 Hit, L1 Miss, N/A
   251:   - % of store accesses that hit L1, missed L1, and N/A (not available) memory

/linux/arch/arm/mach-omap2/sram242x.S
    39: str r3, [r2]      @ go to L1-freq operation
    42: mov r9, #0x1      @ set up for L1 voltage call
   101: orr r5, r5, r9    @ build value for L0/L1-volt operation.
   105: str r5, [r4]      @ Force transition to L1
   196: orr r8, r8, r9    @ build value for L0/L1-volt operation.
   200: str r8, [r10]     @ Force transition to L1

/linux/arch/arm/mach-omap2/sram243x.S
    39: str r3, [r2]      @ go to L1-freq operation
    42: mov r9, #0x1      @ set up for L1 voltage call
   101: orr r5, r5, r9    @ build value for L0/L1-volt operation.
   105: str r5, [r4]      @ Force transition to L1
   196: orr r8, r8, r9    @ build value for L0/L1-volt operation.
   200: str r8, [r10]     @ Force transition to L1

/linux/lib/test_dynamic_debug.c
    92: enum cat_level_names { L0 = 22, L1, L2, L3, L4, L5, L6, L7 };
    94: "L0", "L1", "L2", "L3", "L4", "L5", "L6", "L7");
   133: prdbg(L1);   [in do_levels()]

/linux/Documentation/translations/zh_TW/arch/arm64/memory.txt
    94: | | +---------------------> [38:30] L1 index
   109: | +-------------------------------> [47:42] L1 index