
Searched refs:we (Results 1 – 25 of 1421) sorted by relevance


/linux/Documentation/driver-api/thermal/
cpu-idle-cooling.rst 25 because of the OPP density, we can only choose an OPP with a power
35 If we can remove the static and the dynamic leakage for a specific
38 injection period, we can mitigate the temperature by modulating the
47 At a specific OPP, we can assume that injecting idle cycle on all CPUs
49 idle state target residency, we lead to dropping the static and the
132 - It is less than or equal to the latency we tolerate when the
134 user experience, reactivity vs performance trade off we want. This
137 - It is greater than the idle state's target residency we want to go
138 for thermal mitigation, otherwise we end up consuming more energy.
143 When we reach the thermal trip point, we have to sustain a specified
[all …]
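The cpu-idle-cooling fragments above trade idle time for power at a fixed OPP: past the idle state's target residency, static and dynamic leakage drop out for the idle portion of each injection period. A minimal sketch of that duty-cycle arithmetic, assuming power is roughly zero while idle (the helper name is hypothetical, not kernel code):

```python
def idle_duty_cycle(p_opp_mw, p_target_mw):
    """Fraction of each idle-injection period the CPUs must spend idle
    so that average power falls from p_opp_mw to p_target_mw.

    Illustrative sketch: assumes power is ~0 while idle, i.e. static and
    dynamic leakage are removed once past the idle state's target residency.
    """
    if not 0 < p_target_mw <= p_opp_mw:
        raise ValueError("need 0 < target power <= OPP power")
    return 1.0 - p_target_mw / p_opp_mw

# Halving the power at a given OPP needs a 50% idle duty cycle:
print(idle_duty_cycle(2000, 1000))  # 0.5
```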
/linux/Documentation/filesystems/
directory-locking.rst 10 When taking the i_rwsem on multiple non-directory objects, we
22 * lock the directory we are accessing (shared)
26 * lock the directory we are accessing (exclusive)
74 operations on directory trees, but we obviously do not have the full
75 picture of those - especially for network filesystems. What we have
77 Trees grow as we do operations; memory pressure prunes them. Normally
78 that's not a problem, but there is a nasty twist - what should we do
83 possibility that directory we see in one place gets moved by the server
84 to another and we run into it when we do a lookup.
86 For a lot of reasons we want to have the same directory present in dcache
[all …]
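The directory-locking fragments above concern taking i_rwsem on several objects without deadlocking. A toy sketch of the general technique behind that: impose a single global order on lock acquisition so two tasks can never hold the pair in opposite orders (illustrative Python, not the kernel's actual ordering rule):

```python
import threading

# Deadlock avoidance by global ordering: whenever two locks must both be
# held, always acquire them in one fixed order (here: by object id), so
# no two tasks can wait on each other's lock. Purely illustrative.
def lock_pair(a: threading.Lock, b: threading.Lock):
    first, second = sorted((a, b), key=id)
    first.acquire()
    second.acquire()
    return first, second  # in the order they were taken
```

Calling lock_pair(a, b) and lock_pair(b, a) acquires the two locks in the same order, which is the whole point.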
propagate_umount.txt 3 Umount propagation starts with a set of mounts we are already going to
4 take out. Ideally, we would like to add all downstream cognates to
39 is in the set, it will be resolved. However, we rely upon umount_tree()
51 We are given a closed set U and we want to find all mounts that have
64 subtrees of U, in which case we'd end up examining the same candidates
70 Note that if we run into a candidate we'd already seen, it must've been
73 if we find a child already added to the set, we know that everything
93 keep walking Propagation(p) from q until we find something
96 would get rid of that problem, but we need a sane implementation of
99 skip_them() being "repeat the forward-and-up part until we get NULL
[all …]
path-lookup.txt 49 the path given by the name's starting point (which we know in advance -- eg.
55 A parent, of course, must be a directory, and we must have appropriate
79 In order to lookup a dcache (parent, name) tuple, we take a hash on the tuple
81 in that bucket is then walked, and we do a full comparison of each entry
148 However, when inserting object 2 onto a new list, we end up with this:
161 Because we didn't wait for a grace period, there may be a concurrent lookup
182 As explained above, we would like to do path walking without taking locks or
188 than reloading from the dentry later on (otherwise we'd have interesting things
192 no non-atomic stores to shared data), and to recheck the seqcount when we are
194 Avoiding destructive or changing operations means we can easily unwind from
[all …]
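The path-lookup fragments describe hashing the (parent, name) tuple to pick a bucket and then walking that bucket's chain with a full comparison of each entry. A toy model of that lookup (class and method names are hypothetical, not the dcache's):

```python
# Toy model of a (parent, name) hash lookup: hash the tuple to a bucket,
# then walk the chain doing a full comparison of each entry.
class DentryCache:
    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, parent, name):
        return hash((parent, name)) % len(self.buckets)

    def insert(self, parent, name, value):
        self.buckets[self._bucket(parent, name)].append((parent, name, value))

    def lookup(self, parent, name):
        # Walk the hash chain; hashes can collide, so compare fully.
        for p, n, v in self.buckets[self._bucket(parent, name)]:
            if p == parent and n == name:
                return v
        return None
```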
idmappings.rst 23 on, we will always prefix ids with ``u`` or ``k`` to make it clear whether
24 we're talking about an id in the upper or lower idmapset.
42 that make it easier to understand how we can translate between idmappings. For
43 example, we know that the inverse idmapping is an order isomorphism as well::
49 Given that we are dealing with order isomorphisms plus the fact that we're
50 dealing with subsets we can embed idmappings into each other, i.e. we can
51 sensibly translate between different idmappings. For example, assume we've been
61 Because we're dealing with order isomorphic subsets it is meaningful to ask
64 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using
69 If we were given the same task for the following three idmappings::
[all …]
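The idmappings fragments mention mapping ``k11000`` up to ``u1000`` and then mapping ``u1000`` down through another idmapping. A toy Python model of such order-isomorphic range mappings (the class is hypothetical; the range values mirror the fragments' example):

```python
# Toy model of an idmapping as an order-isomorphic range mapping written
# u<first_upper>:k<first_lower>:r<count>, e.g. u0:k10000:r10000.
class IdMapping:
    def __init__(self, first_upper, first_lower, count):
        self.first_upper, self.first_lower, self.count = \
            first_upper, first_lower, count

    def map_up(self, kid):   # lower ("k") -> upper ("u") idmapset
        if not self.first_lower <= kid < self.first_lower + self.count:
            raise ValueError("kid not covered by this idmapping")
        return self.first_upper + (kid - self.first_lower)

    def map_down(self, uid):  # upper ("u") -> lower ("k") idmapset
        if not self.first_upper <= uid < self.first_upper + self.count:
            raise ValueError("uid not covered by this idmapping")
        return self.first_lower + (uid - self.first_upper)

# Translating between two idmappings via the shared upper idmapset:
m1 = IdMapping(0, 10000, 10000)   # u0:k10000:r10000
m2 = IdMapping(0, 20000, 10000)   # u0:k20000:r10000
u = m1.map_up(11000)              # k11000 maps up to u1000
k = m2.map_down(u)                # u1000 maps down to k21000
```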
/linux/Documentation/arch/x86/
entry_64.rst 58 so. If we mess that up even slightly, we crash.
60 So when we have a secondary entry, already in kernel mode, we *must
61 not* use SWAPGS blindly - nor must we forget doing a SWAPGS when it's
87 If we are at an interrupt or user-trap/gate-alike boundary then we can
89 whether SWAPGS was already done: if we see that we are a secondary
90 entry interrupting kernel mode execution, then we know that the GS
91 base has already been switched. If it says that we interrupted
92 user-space execution then we must do the SWAPGS.
94 But if we are in an NMI/MCE/DEBUG/whatever super-atomic entry context,
96 stack but before we executed SWAPGS, then the only safe way to check
[all …]
/linux/Documentation/dev-tools/kunit/
run_wrapper.rst 10 As long as we can build the kernel, we can run KUnit.
44 kunit_tool. This is useful if we have several different groups of
45 tests we want to run independently, or if we want to use pre-defined
64 If we want to run a specific set of tests (rather than those listed
65 in the KUnit ``defconfig``), we can provide Kconfig options in the
90 set in the kernel ``.config`` before running the tests. It warns if we
96 This means that we can use other tools
104 If we want to make manual changes to the KUnit build process, we
106 When running kunit_tool, from a ``.kunitconfig``, we can generate a
113 To build a KUnit kernel from the current ``.config``, we can use the
[all …]
/linux/Documentation/filesystems/xfs/
xfs-delayed-logging-design.rst 16 transaction reservations are structured and accounted, and then move into how we
18 reservations bounds. At this point we need to explain how relogging works. With
113 individual modification is atomic, the chain is *not atomic*. If we crash half
140 complete, we can explicitly tag a transaction as synchronous. This will trigger
145 throughput to the IO latency limitations of the underlying storage. Instead, we
161 available to write the modification into the journal before we start making
164 log in the worst case. This means that if we are modifying a btree in the
165 transaction, we have to reserve enough space to record a full leaf-to-root split
166 of the btree. As such, the reservations are quite complex because we have to
173 again. Then we might have to update reverse mappings, which modifies yet
[all …]
xfs-self-describing-metadata.rst 32 However, if we scale the filesystem up to 1PB, we now have 10x as much metadata
44 magic number in the metadata block, we have no other way of identifying what it
57 Hence we need to record more information into the metadata to allow us to
59 of analysis. We can't protect against every possible type of error, but we can
66 hence parse and verify the metadata object. IF we can't independently identify
72 magic numbers. Hence we can change the on-disk format of all these objects to
76 self identifying and we can do much more expansive automated verification of the
80 integrity checking. We cannot trust the metadata if we cannot verify that it has
81 not been changed as a result of external influences. Hence we need some form of
83 block. If we can verify the block contains the metadata it was intended to
[all …]
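The self-describing-metadata fragments argue each block should carry enough information to identify and verify itself independently. A toy sketch of that idea using a magic number, the block's own location, an owner, and a checksum (the field layout and encoding here are invented for illustration; real XFS uses different on-disk structures):

```python
import zlib

# A self-describing block: magic + own block number + owner + payload,
# protected by a CRC so tampering or misplacement can be detected.
def make_block(magic, blkno, owner, payload: bytes):
    body = f"{magic}:{blkno}:{owner}:".encode() + payload
    return body + zlib.crc32(body).to_bytes(4, "little")

def verify_block(block, expected_magic, expected_blkno):
    body, crc = block[:-4], int.from_bytes(block[-4:], "little")
    if zlib.crc32(body) != crc:
        return False          # changed as a result of external influences
    magic, blkno, _owner, _payload = body.split(b":", 3)
    # Independently identify the object: right type, right location?
    return magic.decode() == expected_magic and int(blkno) == expected_blkno
```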
/linux/tools/lib/perf/Documentation/
libperf-counting.txt 73 Once the setup is complete we start by defining specific events using the `struct perf_event_attr`.
97 In this case we will monitor current process, so we create threads map with single pid (0):
110 Now we create libperf's event list, which will serve as holder for the events we want:
121 We create libperf's events for the attributes we defined earlier and add them to the list:
156 so we need to enable the whole list explicitly (both events).
158 From this moment events are counting and we can do our workload.
160 When we are done we disable the events list.
171 Now we need to get the counts from events, following code iterates through the
/linux/Documentation/gpu/amdgpu/display/
index.rst 22 DC case, we maintain a tree to centralize code from different parts. The shared
23 repository has integration tests with our Internal Linux CI farm, and we run a
28 When we upstream a new feature or some patches, we pack them in a patchset with
40 * Finally, developers wait a few days for community feedback before we merge
43 It is good to stress that the test phase is something that we take extremely
44 seriously, and we never merge anything that fails our validation. Follows an
62 In terms of test setup for CI and manual tests, we usually use:
65 #. In terms of userspace, we only use fully updated open-source components
67 #. Regarding IGT, we use the latest code from the upstream.
68 Most of the manual tests are conducted in GNOME but we also use KDE.
dcn-overview.rst 8 (DCN) works, we need to start with an overview of the hardware pipeline. Below
10 generic diagram, and we have variations per ASIC.
14 Based on this diagram, we can pass through each block and briefly describe
60 setup or ignored accordingly with userspace demands. For example, if we
79 From DCHUB to MPC, we have a representation called dc_plane; from MPC to OPTC,
80 we have dc_stream, and the output (DIO) is handled by dc_link. Keep in mind
102 a one-to-one mapping of the link encoder to PHY, but we can configure the DCN
125 depth format), bit-depth reduction/dithering would kick in. In OPP, we would
127 Eventually, we output data in integer format at DIO.
133 overloaded with multiple meanings, so it is important to define what we mean
[all …]
/linux/Documentation/filesystems/ext4/
orphan.rst 9 would leak. Similarly if we truncate or extend the file, we need not be able
10 to perform the operation in a single journalling transaction. In such case we
17 inode (we overload i_dtime inode field for this). However this filesystem
36 When a filesystem with orphan file feature is writeably mounted, we set
38 be valid orphan entries. In case we see this feature when mounting the
39 filesystem, we read the whole orphan file and process all orphan inodes found
40 there as usual. When cleanly unmounting the filesystem we remove the
/linux/Documentation/hid/
hid-bpf.rst 30 With HID-BPF, we can apply this filtering in the kernel directly so userspace
33 Of course, given that this dead zone is specific to an individual device, we
38 HID-BPF allows the userspace program to load the program itself, ensuring we
39 only load the custom API when we have a user.
49 program has been verified by the user, we can embed the source code into the
62 Instead of using hidraw or creating new sysfs entries or ioctls, we can rely
82 screen we likely need to have a haptic click every 15 degrees. But when
89 What if we want to prevent other users from accessing a specific feature of a
92 With eBPF, we can intercept any HID command emitted to the device and
96 kernel/bpf program because we can intercept any incoming command.
[all …]
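The hid-bpf fragments describe filtering device events in the kernel, e.g. suppressing reports from a device-specific dead zone before userspace sees them. A toy model of such a filter (purely illustrative: real HID-BPF programs are eBPF operating on HID reports, not Python on tuples):

```python
# Toy dead-zone filter: drop events whose (x, y) coordinates fall inside
# a per-device rectangular dead zone; pass everything else through.
def filter_events(events, dead_zone):
    x0, y0, x1, y1 = dead_zone
    return [(x, y) for (x, y) in events
            if not (x0 <= x <= x1 and y0 <= y <= y1)]
```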
/linux/Documentation/scheduler/
schedutil.rst 8 we know this is flawed, but it is the best workable approximation.
14 With PELT we track some metrics across the various scheduler entities, from
16 we use an Exponentially Weighted Moving Average (EWMA), each period (1024us)
35 Using this we track 2 key metrics: 'running' and 'runnable'. 'Running'
50 a big CPU, we allow architectures to scale the time delta with two ratios, one
53 For simple DVFS architectures (where software is in full control) we trivially
60 For more dynamic systems where the hardware is in control of DVFS we use
62 For Intel specifically, we use::
84 of DVFS and CPU type. IOW. we can transfer and compare them between CPUs.
124 migration, time progression) we call out to schedutil to update the hardware
[all …]
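The schedutil fragments mention PELT tracking metrics with an Exponentially Weighted Moving Average over 1024us periods. A simplified floating-point sketch of that per-period decay (real PELT uses fixed-point arithmetic with contributions scaled to 1024; the decay factor y is chosen so history halves about every 32 periods, i.e. y^32 = 0.5):

```python
# Per-period EWMA decay: each 1024us period the accumulated signal is
# decayed by Y and this period's activity is blended in. Y is chosen so
# contributions halve roughly every 32 periods (~32ms): Y**32 == 0.5.
Y = 0.5 ** (1 / 32)

def pelt_update(signal, running_fraction):
    """One 1024us period: decay old history, add this period's activity.
    running_fraction is 1.0 if the entity ran for the whole period."""
    return signal * Y + running_fraction * (1 - Y)

# A task that runs 100% of the time saturates the signal towards 1.0:
sig = 0.0
for _ in range(1000):
    sig = pelt_update(sig, 1.0)
```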
/linux/drivers/scsi/aic7xxx/
aic79xx.seq 85 * If we have completions stalled waiting for the qfreeze
109 * ENSELO is cleared by a SELDO, so we must test for SELDO
169 * Since this status did not consume a FIFO, we have to
170 * be a bit more dilligent in how we check for FIFOs pertaining
178 * count in the SCB. In this case, we allow the routine servicing
183 * we detect case 1, we will properly defer the post of the SCB
222 * bad SCSI status (currently only for underruns), we
223 * queue the SCB for normal completion. Otherwise, we
258 * If we have relatively few commands outstanding, don't
303 * one byte of lun information we support.
[all …]
aic7xxx.seq 52 * After starting the selection hardware, we check for reconnecting targets
54 * bus arbitration. The problem with this is that we must keep track of the
55 * SCB that we've already pulled from the QINFIFO and started the selection
56 * on just in case the reselection wins so that we can retry the selection at
104 * We have at least one queued SCB now and we don't have any
124 * before we completed the DMA operation. If it was,
211 /* The Target ID we were selected at */
239 * Watch ATN closely now as we pull in messages from the
285 * we've got a failed selection and must transition to bus
333 * Reselection has been initiated by a target. Make a note that we've been
[all …]
/linux/Documentation/arch/powerpc/
vmemmap_dedup.rst 14 With 2M PMD level mapping, we require 32 struct pages and a single 64K vmemmap
18 With 1G PUD level mapping, we require 16384 struct pages and a single 64K
19 vmemmap page can contain 1024 struct pages (64K/sizeof(struct page)). Hence we
47 4K vmemmap page contains 64 struct pages(4K/sizeof(struct page)). Hence we
74 With 1G PUD level mapping, we require 262144 struct pages and a single 4K
75 vmemmap page can contain 64 struct pages (4K/sizeof(struct page)). Hence we
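The struct-page counts quoted in the vmemmap_dedup fragments fall out of simple division, assuming sizeof(struct page) == 64 bytes as the text does:

```python
# Reproducing the fragments' arithmetic, with sizeof(struct page) == 64:
SZ_4K, SZ_64K, SZ_2M, SZ_1G = 4 << 10, 64 << 10, 2 << 20, 1 << 30
STRUCT_PAGE_SIZE = 64

def struct_pages_needed(mapping_size, base_page_size):
    # One struct page per base page covered by the mapping.
    return mapping_size // base_page_size

def struct_pages_per_vmemmap_page(base_page_size):
    # How many struct pages fit in one vmemmap page of that size.
    return base_page_size // STRUCT_PAGE_SIZE

# 2M PMD with 64K base pages: 32 struct pages; 64K holds 1024 of them.
# 1G PUD with 64K base pages: 16384; with 4K base pages: 262144.
```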
pci_iov_resource_on_powernv.rst 40 The following section provides a rough description of what we have on P8
52 For DMA, MSIs and inbound PCIe error messages, we have a table (in
57 - For DMA we then provide an entire address space for each PE that can
63 - For MSIs, we have two windows in the address space (one at the top of
91 reserved for MSIs but this is not a problem at this point; we just
93 ignores that however and will forward in that space if we try).
100 Now, this is the "main" window we use in Linux today (excluding
105 Ideally we would like to be able to have individual functions in PEs
116 bits which are not conveyed by PowerBus but we don't use this.
118 * Can be configured to be segmented. When not segmented, we can
[all …]
kasan.txt 39 checks can be delayed until after the MMU is set up, and we can just not
44 linear mapping, using the same high-bits trick we use for the rest of the linear
47 - We'd like to place it near the start of physical memory. In theory we can do
48 this at run-time based on how much physical memory we have, but this requires
51 is hopefully something we can revisit once we get KASLR for Book3S.
53 - Alternatively, we can place the shadow at the _end_ of memory, but this
/linux/Documentation/sound/designs/
jack-injection.rst 10 validate ALSA userspace changes. For example, we change the audio
11 profile switching code in the pulseaudio, and we want to verify if the
13 in this case, we could inject plugin or plugout events to an audio
14 jack or to some audio jacks, we don't need to physically access the
26 To inject events to audio jacks, we need to enable the jack injection
28 change the state by hardware events anymore, we could inject plugin or
30 ``status``, after we finish our test, we need to disable the jack
/linux/Documentation/block/
deadline-iosched.rst 20 service time for a request. As we focus mainly on read latencies, this is
49 When we have to move requests from the io scheduler queue to the block
50 device dispatch queue, we always give a preference to reads. However, we
52 how many times we give preference to reads over writes. When that has been
53 done writes_starved number of times, we dispatch some writes based on the
68 that comes at basically 0 cost we leave that on. We simply disable the
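The deadline-iosched fragments describe preferring reads at dispatch time while bounding write starvation: after reads have been preferred over waiting writes writes_starved times, some writes get dispatched. A simplified toy of that selection logic (not the kernel's implementation):

```python
# Toy dispatch selection: reads win, but after skipping waiting writes
# writes_starved times in a row, a write is dispatched and the
# starvation counter resets.
def dispatch(reads, writes, starved, writes_starved=2):
    """Pop and return the next request plus the updated starvation count."""
    if reads and (not writes or starved < writes_starved):
        return reads.pop(0), starved + (1 if writes else 0)
    if writes:
        return writes.pop(0), 0
    return None, starved

reads, writes, starved, order = ["r1", "r2", "r3"], ["w1", "w2"], 0, []
while reads or writes:
    req, starved = dispatch(reads, writes, starved)
    order.append(req)
# Reads are preferred, with w1 slipping in once writes have starved twice.
```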
/linux/tools/testing/selftests/net/packetdrill/
tcp_close_close-remote-fin-then-close.pkt 2 // Verify behavior for the sequence: remote side sends FIN, then we close().
3 // Since the remote side (client) closes first, we test our LAST_ACK code path.
26 // Then we close.
33 // Verify that we send RST in response to any incoming segments
/linux/Documentation/networking/
fib_trie.rst 37 verify that they actually do match the key we are searching for.
72 fib_find_node(). Inserting a new node means we might have to run the
107 slower than the corresponding fib_hash function, as we have to walk the
124 trie, key segment by key segment, until we find a leaf. check_leaf() does
127 If we find a match, we are done.
129 If we don't find a match, we enter prefix matching mode. The prefix length,
131 and we backtrack upwards through the trie trying to find a longest matching
137 the child index until we find a match or the child index consists of nothing but
140 At this point we backtrack (t->stats.backtrack++) up the trie, continuing to
143 At this point we will repeatedly descend subtries to look for a match, and there
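The fib_trie fragments describe descending the trie to a leaf and then backtracking to find the longest matching prefix. A much-simplified sketch of longest-prefix match over a one-bit-per-level binary trie (the real fib_trie is an LC-trie with multi-bit nodes and explicit backtracking; here the best match so far is simply remembered on the way down):

```python
# Toy longest-prefix match: descend the trie bit by bit, remembering the
# deepest node that carried a route; that remembered match plays the
# role of the backtracking described in the text.
class TrieNode:
    def __init__(self):
        self.children = {}   # "0"/"1" -> TrieNode
        self.route = None    # set if a prefix terminates here

def insert(root, prefix_bits, route):
    node = root
    for b in prefix_bits:
        node = node.children.setdefault(b, TrieNode())
    node.route = route

def lookup(root, key_bits):
    node, best = root, root.route
    for b in key_bits:
        node = node.children.get(b)
        if node is None:
            break            # ran out of matching children
        if node.route is not None:
            best = node.route
    return best
```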
/linux/Documentation/power/
freezing-of-tasks.rst 90 - freezes all tasks (including kernel threads) because we can't freeze
94 - thaws only kernel threads; this is particularly useful if we need to do
96 userspace tasks, or if we want to postpone the thawing of userspace tasks
99 - thaws all tasks (including kernel threads) because we can't thaw userspace
112 IV. Why do we do that?
118 hibernation. At the moment we have no simple means of checkpointing
120 metadata on disks, we cannot bring them back to the state from before the
132 2. Next, to create the hibernation image we need to free a sufficient amount of
133 memory (approximately 50% of available RAM) and we need to do that before
134 devices are deactivated, because we generally need them for swapping out.
[all …]
