
Searched full:cpus (Results 1 – 25 of 2684) sorted by relevance


/linux-6.8/tools/lib/perf/
cpumap.c
21 RC_STRUCT(perf_cpu_map) *cpus = malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus); in perf_cpu_map__alloc()
24 if (ADD_RC_CHK(result, cpus)) { in perf_cpu_map__alloc()
25 cpus->nr = nr_cpus; in perf_cpu_map__alloc()
26 refcount_set(&cpus->refcnt, 1); in perf_cpu_map__alloc()
33 struct perf_cpu_map *cpus = perf_cpu_map__alloc(1); in perf_cpu_map__new_any_cpu() local
35 if (cpus) in perf_cpu_map__new_any_cpu()
36 RC_CHK_ACCESS(cpus)->map[0].cpu = -1; in perf_cpu_map__new_any_cpu()
38 return cpus; in perf_cpu_map__new_any_cpu()
72 struct perf_cpu_map *cpus; in cpu_map__new_sysconf() local
81 …pr_warning("Number of online CPUs (%d) differs from the number configured (%d) the CPU map will on… in cpu_map__new_sysconf()
[all …]
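
The allocation path above backs libperf's public cpumap API. A minimal usage sketch, assuming the libperf headers are installed and the program is linked with -lperf (error handling trimmed); it exercises the constructors and the iteration macro that appear throughout these results:

#include <stdio.h>
#include <perf/cpumap.h>

int main(void)
{
	struct perf_cpu_map *cpus;
	struct perf_cpu cpu;
	int idx;

	/* "Any CPU" map: a single entry whose cpu number is -1. */
	cpus = perf_cpu_map__new_any_cpu();
	if (!cpus)
		return 1;
	perf_cpu_map__put(cpus);

	/* Map of the CPUs currently online, then walk it. */
	cpus = perf_cpu_map__new_online_cpus();
	if (!cpus)
		return 1;

	perf_cpu_map__for_each_cpu(cpu, idx, cpus)
		printf("idx %d -> cpu %d\n", idx, cpu.cpu);

	perf_cpu_map__put(cpus);
	return 0;
}
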
/linux-6.8/tools/testing/selftests/riscv/hwprobe/
which-cpus.c
22 "which-cpus: [-h] [<key=value> [<key=value> ...]]\n\n" in help()
25 " <key=value>, outputs the cpulist for cpus which all match the given set\n" in help()
29 static void print_cpulist(cpu_set_t *cpus) in print_cpulist() argument
33 if (!CPU_COUNT(cpus)) { in print_cpulist()
34 printf("cpus: None\n"); in print_cpulist()
38 printf("cpus:"); in print_cpulist()
39 for (int i = 0, c = 0; i < CPU_COUNT(cpus); i++, c++) { in print_cpulist()
40 if (start != end && !CPU_ISSET(c, cpus)) in print_cpulist()
43 while (!CPU_ISSET(c, cpus)) in print_cpulist()
59 static void do_which_cpus(int argc, char **argv, cpu_set_t *cpus) in do_which_cpus() argument
[all …]
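
The selftest builds its output on the glibc cpu_set_t macros. A standalone sketch of the same pattern, assuming _GNU_SOURCE and glibc: query the calling thread's affinity mask and print every CPU that is set.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t cpus;

	CPU_ZERO(&cpus);
	if (sched_getaffinity(0, sizeof(cpus), &cpus))
		return 1;

	if (!CPU_COUNT(&cpus)) {
		printf("cpus: None\n");
		return 0;
	}

	printf("cpus:");
	for (int c = 0; c < CPU_SETSIZE; c++)
		if (CPU_ISSET(c, &cpus))
			printf(" %d", c);
	printf("\n");
	return 0;
}
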
/linux-6.8/drivers/cpuidle/
coupled.c
3 * coupled.c - helper functions to enter the same idle state on multiple cpus
24 * cpus cannot be independently powered down, either due to
31 * shared between the cpus (L2 cache, interrupt controller, and
33 * be tightly controlled on both cpus.
36 * WFI state until all cpus are ready to enter a coupled state, at
38 * cpus at approximately the same time.
40 * Once all cpus are ready to enter idle, they are woken by an smp
42 * cpus will find work to do, and choose not to enter idle. A
43 * final pass is needed to guarantee that all cpus will call the
46 * ready counter matches the number of online coupled cpus. If any
[all …]
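
The comment describes a rendezvous built around a "ready" counter. Purely as an illustration of that handshake, and not of the kernel's cpuidle_coupled code itself, the pattern can be sketched with threads and C11 atomics (compile with -pthread; NR_CPUS is an arbitrary stand-in for the number of coupled CPUs):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_int ready_count;

static void *cpu_thread(void *arg)
{
	long id = (long)arg;

	/* Announce readiness, then wait until every "CPU" is ready.
	 * The spin loop stands in for the safe shallow idle state
	 * (e.g. WFI) used while waiting in the real code. */
	atomic_fetch_add(&ready_count, 1);
	while (atomic_load(&ready_count) < NR_CPUS)
		;

	/* All threads reach this point at roughly the same time, which
	 * is when the coupled low-power state would be entered. */
	printf("cpu%ld entering coupled state\n", id);
	return NULL;
}

int main(void)
{
	pthread_t t[NR_CPUS];

	for (long i = 0; i < NR_CPUS; i++)
		pthread_create(&t[i], NULL, cpu_thread, (void *)i);
	for (int i = 0; i < NR_CPUS; i++)
		pthread_join(t[i], NULL);
	return 0;
}
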
/linux-6.8/sound/soc/intel/boards/
sof_board_helpers.c
94 struct snd_soc_dai_link_component *cpus; in sof_intel_board_set_codec_link() local
104 /* cpus */ in sof_intel_board_set_codec_link()
105 cpus = devm_kzalloc(dev, sizeof(struct snd_soc_dai_link_component), in sof_intel_board_set_codec_link()
107 if (!cpus) in sof_intel_board_set_codec_link()
112 cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL, "ssp%d-port", in sof_intel_board_set_codec_link()
115 cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL, "SSP%d Pin", in sof_intel_board_set_codec_link()
118 if (!cpus->dai_name) in sof_intel_board_set_codec_link()
121 link->cpus = cpus; in sof_intel_board_set_codec_link()
143 struct snd_soc_dai_link_component *cpus; in sof_intel_board_set_dmic_link() local
145 /* cpus */ in sof_intel_board_set_dmic_link()
[all …]
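
The helper fills in the CPU side of an ASoC DAI link. A condensed sketch of that pattern, with the SSP port number and the "ssp%d-port" naming taken from the snippet above and the surrounding function name treated as illustrative:

#include <linux/device.h>
#include <linux/errno.h>
#include <sound/soc.h>

/* Sketch: attach a single CPU DAI component to a DAI link. Using devm_*
 * allocations ties the component's lifetime to the card device. */
static int set_link_cpu(struct device *dev, struct snd_soc_dai_link *link,
			int ssp_port)
{
	struct snd_soc_dai_link_component *cpus;

	cpus = devm_kzalloc(dev, sizeof(*cpus), GFP_KERNEL);
	if (!cpus)
		return -ENOMEM;

	cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL, "ssp%d-port", ssp_port);
	if (!cpus->dai_name)
		return -ENOMEM;

	link->cpus = cpus;
	link->num_cpus = 1;
	return 0;
}
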
/linux-6.8/arch/riscv/kernel/
sys_hwprobe.c
20 const struct cpumask *cpus) in hwprobe_arch_id() argument
26 for_each_cpu(cpu, cpus) { in hwprobe_arch_id()
60 const struct cpumask *cpus) in hwprobe_isa_ext0() argument
79 for_each_cpu(cpu, cpus) { in hwprobe_isa_ext0()
142 static bool hwprobe_ext0_has(const struct cpumask *cpus, unsigned long ext) in hwprobe_ext0_has() argument
146 hwprobe_isa_ext0(&pair, cpus); in hwprobe_ext0_has()
150 static u64 hwprobe_misaligned(const struct cpumask *cpus) in hwprobe_misaligned() argument
155 for_each_cpu(cpu, cpus) { in hwprobe_misaligned()
174 const struct cpumask *cpus) in hwprobe_one_pair() argument
180 hwprobe_arch_id(pair, cpus); in hwprobe_one_pair()
[all …]
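
Userspace reaches this code through the riscv_hwprobe(2) syscall. A hedged sketch for riscv64, assuming a kernel and headers new enough to provide __NR_riscv_hwprobe and the RISCV_HWPROBE_KEY_* definitions; passing cpusetsize == 0 and cpus == NULL asks for values common to all CPUs, which is the same convention the vdso code further down in these results relies on:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>   /* struct riscv_hwprobe, RISCV_HWPROBE_KEY_* */

int main(void)
{
	struct riscv_hwprobe pair = {
		.key = RISCV_HWPROBE_KEY_MVENDORID,
	};

	/* cpusetsize == 0 and cpus == NULL: values common to all CPUs. */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
		return 1;

	printf("mvendorid: 0x%llx\n", (unsigned long long)pair.value);
	return 0;
}
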
/linux-6.8/Documentation/admin-guide/cgroup-v1/
cpusets.rst
31 2.2 Adding/removing cpus
43 Cpusets provide a mechanism for assigning a set of CPUs and Memory
57 include CPUs in its CPU affinity mask, and using the mbind(2) and
60 CPUs or Memory Nodes not in that cpuset. The scheduler will not
67 cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
75 The management of large computer systems, with many processors (CPUs),
113 Cpusets provide a Linux kernel mechanism to constrain which CPUs and
117 CPUs a task may be scheduled (sched_setaffinity) and on which Memory
122 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
126 - Calls to sched_setaffinity are filtered to just those CPUs
[all …]
/linux-6.8/Documentation/timers/
no_hz.rst
19 2. Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
23 3. Omit scheduling-clock ticks on CPUs that are either idle or that
65 Omit Scheduling-Clock Ticks For Idle CPUs
78 scheduling-clock interrupts to idle CPUs, which is critically important
86 idle CPUs. That said, dyntick-idle mode is not free:
104 Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
109 Note that omitting scheduling-clock ticks for CPUs with only one runnable
110 task implies also omitting them for idle CPUs.
113 sending scheduling-clock interrupts to CPUs with a single runnable task,
114 and such CPUs are said to be "adaptive-ticks CPUs". This is important
[all …]
/linux-6.8/tools/testing/selftests/cgroup/
test_cpuset_prs.sh
25 SUBPARTS_CPUS=$CGROUP2/.__DEBUG__.cpuset.cpus.subpartitions
26 CPULIST=$(cat $CGROUP2/cpuset.cpus.effective)
29 [[ $NR_CPUS -lt 8 ]] && skip_test "Test needs at least 8 cpus available!"
71 echo 0-6 > test/cpuset.cpus
72 echo root > test/cpuset.cpus.partition
73 cat test/cpuset.cpus.partition | grep -q invalid
75 echo member > test/cpuset.cpus.partition
76 echo "" > test/cpuset.cpus
114 echo $EXPECTED_VAL > cpuset.cpus.partition
116 ACTUAL_VAL=$(cat cpuset.cpus.partition)
[all …]
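
The script drives the cgroup2 cpuset interface from shell; the same two writes can be issued from C by opening the control files directly. A minimal sketch, assuming cgroup2 is mounted at /sys/fs/cgroup and a child cgroup named "test" already exists (both paths are illustrative):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a string to a cgroup control file, e.g. cpuset.cpus. */
static int write_cg_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* Mirror "echo 0-6 > test/cpuset.cpus" ... */
	if (write_cg_file("/sys/fs/cgroup/test/cpuset.cpus", "0-6"))
		return 1;
	/* ... and "echo root > test/cpuset.cpus.partition". */
	if (write_cg_file("/sys/fs/cgroup/test/cpuset.cpus.partition", "root"))
		return 1;
	return 0;
}
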
/linux-6.8/include/linux/
stop_machine.h
13 * function to be executed on a single or multiple cpus preempting all
14 * other processes and monopolizing those cpus until it finishes.
18 * cpus are online.
99 * stop_machine: freeze the machine on all CPUs and run this function
102 * @cpus: the cpus to run the @fn() on (NULL = any online cpu)
114 int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
117 * stop_machine_cpuslocked: freeze the machine on all CPUs and run this function
120 * @cpus: the cpus to run the @fn() on (NULL = any online cpu)
125 int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
133 * Same as above, but instead of every CPU, only the logical CPUs of a
[all …]
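
A hedged kernel-side sketch of the call pattern this header documents: a short, non-sleeping callback run while every other CPU is parked in the stopper loop. The module and callback are illustrative, not taken from the sources above:

#include <linux/module.h>
#include <linux/stop_machine.h>

/* Runs with all other CPUs spinning with interrupts disabled, so it must
 * be short and must not sleep. */
static int flip_state(void *data)
{
	int *flag = data;

	*flag = 1;
	return 0;
}

static int __init stopm_demo_init(void)
{
	static int flag;

	/* NULL cpumask: the callback may run on any one online CPU while
	 * every other CPU is held in the stopper loop. */
	return stop_machine(flip_state, &flag, NULL);
}

static void __exit stopm_demo_exit(void) { }

module_init(stopm_demo_init);
module_exit(stopm_demo_exit);
MODULE_LICENSE("GPL");
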
energy_model.h
44 * @cpus: Cpumask covering the CPUs of the domain. It's here
49 * In case of CPU device, a "performance domain" represents a group of CPUs
50 * whose performance is scaled together. All CPUs of a performance domain
52 * a 1-to-1 mapping with CPUFreq policies. In case of other devices the @cpus
59 unsigned long cpus[]; member
78 #define em_span_cpus(em) (to_cpumask((em)->cpus))
86 * maximum CPUs in such domain to 64.
92 * limits to number of CPUs in the Perf. Domain.
107 * In such scenario, where there are 4 CPUs in the Perf. Domain the 'sum_util'
136 * In case of CPUs, the power is the one of a single CPU in the domain,
[all …]
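
em_span_cpus() turns the flexible cpus[] array back into a cpumask. A hedged kernel-side sketch of walking a CPU's performance domain with it, assuming CONFIG_ENERGY_MODEL and a registered energy model for that CPU:

#include <linux/cpumask.h>
#include <linux/energy_model.h>
#include <linux/printk.h>

/* Print which CPUs share a performance domain with @cpu, if an energy
 * model has been registered for it. Illustrative only. */
static void print_perf_domain(int cpu)
{
	struct em_perf_domain *pd = em_cpu_get(cpu);
	int i;

	if (!pd)
		return;

	for_each_cpu(i, em_span_cpus(pd))
		pr_info("cpu%d shares a perf domain with cpu%d\n", cpu, i);
}
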
/linux-6.8/tools/lib/perf/tests/
test-cpumap.c
16 struct perf_cpu_map *cpus; in test_cpumap() local
24 cpus = perf_cpu_map__new_any_cpu(); in test_cpumap()
25 if (!cpus) in test_cpumap()
28 perf_cpu_map__get(cpus); in test_cpumap()
29 perf_cpu_map__put(cpus); in test_cpumap()
30 perf_cpu_map__put(cpus); in test_cpumap()
32 cpus = perf_cpu_map__new_online_cpus(); in test_cpumap()
33 if (!cpus) in test_cpumap()
36 perf_cpu_map__for_each_cpu(cpu, idx, cpus) in test_cpumap()
39 perf_cpu_map__put(cpus); in test_cpumap()
/linux-6.8/arch/riscv/kernel/vdso/
hwprobe.c
12 size_t cpusetsize, unsigned long *cpus,
16 size_t cpusetsize, unsigned long *cpus, in riscv_vdso_get_values() argument
21 bool all_cpus = !cpusetsize && !cpus; in riscv_vdso_get_values()
27 * stashed away only for the "all cpus" case. If all CPUs are in riscv_vdso_get_values()
32 return riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags); in riscv_vdso_get_values()
51 size_t cpusetsize, unsigned long *cpus, in riscv_vdso_get_cpus() argument
58 unsigned char *c = (unsigned char *)cpus; in riscv_vdso_get_cpus()
63 if (!cpusetsize || !cpus) in riscv_vdso_get_cpus()
74 return riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags); in riscv_vdso_get_cpus()
103 size_t cpusetsize, unsigned long *cpus,
[all …]
/linux-6.8/tools/lib/perf/include/perf/
cpumap.h
51 LIBPERF_API struct perf_cpu perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx);
56 * the result is the number CPUs in the map plus one if the
59 LIBPERF_API int perf_cpu_map__nr(const struct perf_cpu_map *cpus);
73 #define perf_cpu_map__for_each_cpu(cpu, idx, cpus) \ argument
74 for ((idx) = 0, (cpu) = perf_cpu_map__cpu(cpus, idx); \
75 (idx) < perf_cpu_map__nr(cpus); \
76 (idx)++, (cpu) = perf_cpu_map__cpu(cpus, idx))
78 #define perf_cpu_map__for_each_cpu_skip_any(_cpu, idx, cpus) \ argument
79 for ((idx) = 0, (_cpu) = perf_cpu_map__cpu(cpus, idx); \
80 (idx) < perf_cpu_map__nr(cpus); \
[all …]
/linux-6.8/drivers/clk/sunxi/
clk-sun9i-cpus.c
7 * Allwinner A80 CPUS clock driver
22 * sun9i_a80_cpus_clk_setup() - Setup function for a80 cpus composite clk
55 struct sun9i_a80_cpus_clk *cpus = to_sun9i_a80_cpus_clk(hw); in sun9i_a80_cpus_clk_recalc_rate() local
60 reg = readl(cpus->reg); in sun9i_a80_cpus_clk_recalc_rate()
155 struct sun9i_a80_cpus_clk *cpus = to_sun9i_a80_cpus_clk(hw); in sun9i_a80_cpus_clk_set_rate() local
162 reg = readl(cpus->reg); in sun9i_a80_cpus_clk_set_rate()
170 writel(reg, cpus->reg); in sun9i_a80_cpus_clk_set_rate()
188 struct sun9i_a80_cpus_clk *cpus; in sun9i_a80_cpus_setup() local
193 cpus = kzalloc(sizeof(*cpus), GFP_KERNEL); in sun9i_a80_cpus_setup()
194 if (!cpus) in sun9i_a80_cpus_setup()
[all …]
/linux-6.8/drivers/cpufreq/
cpufreq-dt.c
30 cpumask_var_t cpus; member
50 if (cpumask_test_cpu(cpu, priv->cpus)) in cpufreq_dt_find_data()
129 cpumask_copy(policy->cpus, priv->cpus); in cpufreq_init()
211 if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL)) in dt_cpufreq_early_init()
214 cpumask_set_cpu(cpu, priv->cpus); in dt_cpufreq_early_init()
232 ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, priv->cpus); in dt_cpufreq_early_init()
238 * operating-points-v2 not supported, fallback to all CPUs share in dt_cpufreq_early_init()
240 * sharing CPUs. in dt_cpufreq_early_init()
242 if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus)) in dt_cpufreq_early_init()
247 * Initialize OPP tables for all priv->cpus. They will be shared by in dt_cpufreq_early_init()
[all …]
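
The driver's private data keeps a cpumask_var_t, which follows the usual allocate/populate/query/free pattern. A hedged sketch of that pattern in isolation (the helper is illustrative, not from the driver above):

#include <linux/cpumask.h>
#include <linux/slab.h>

/* Illustrative: build a cpumask holding one seed CPU, then test
 * membership. With CONFIG_CPUMASK_OFFSTACK the mask is heap-allocated,
 * otherwise it lives on the stack. */
static bool cpu_in_private_mask(int seed_cpu, int query_cpu)
{
	cpumask_var_t cpus;
	bool ret;

	if (!alloc_cpumask_var(&cpus, GFP_KERNEL))
		return false;

	cpumask_clear(cpus);
	cpumask_set_cpu(seed_cpu, cpus);

	ret = cpumask_test_cpu(query_cpu, cpus);

	free_cpumask_var(cpus);
	return ret;
}
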
/linux-6.8/Documentation/scheduler/
sched-energy.rst
9 the impact of its decisions on the energy consumed by CPUs. EAS relies on an
10 Energy Model (EM) of the CPUs to select an energy efficient CPU for each task,
59 In short, EAS changes the way CFS tasks are assigned to CPUs. When it is time
64 knowledge about the platform's topology, which include the 'capacity' of CPUs,
72 differentiate CPUs with different computing throughput. The 'capacity' of a CPU
76 tasks and CPUs computed by the Per-Entity Load Tracking (PELT) mechanism. Thanks
79 energy trade-offs. The capacity of CPUs is provided via arch-specific code
99 Let us consider a platform with 12 CPUs, split in 3 performance domains
102 CPUs: 0 1 2 3 4 5 6 7 8 9 10 11
108 containing 6 CPUs. The two root domains are denoted rd1 and rd2 in the
[all …]
/linux-6.8/tools/perf/arch/arm64/util/
header.c
19 static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus) in _get_cpuid() argument
28 cpus = perf_cpu_map__get(cpus); in _get_cpuid()
30 for (cpu = 0; cpu < perf_cpu_map__nr(cpus); cpu++) { in _get_cpuid()
35 sysfs, RC_CHK_ACCESS(cpus)->map[cpu].cpu); in _get_cpuid()
54 perf_cpu_map__put(cpus); in _get_cpuid()
60 struct perf_cpu_map *cpus = perf_cpu_map__new_online_cpus(); in get_cpuid() local
63 if (!cpus) in get_cpuid()
66 ret = _get_cpuid(buf, sz, cpus); in get_cpuid()
68 perf_cpu_map__put(cpus); in get_cpuid()
78 if (!pmu || !pmu->cpus) in get_cpuid_str()
[all …]
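
The helper reads each CPU's MIDR through sysfs. A hedged userspace sketch of the same idea for a single CPU; the sysfs path mirrors the format string used by the perf helper and is stated here as an assumption, with sysfs taken to be mounted at /sys:

#include <stdio.h>

int main(void)
{
	char path[128], midr[32];
	FILE *f;

	/* Assumed per-CPU path, mirroring the perf helper above. */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/regs/identification/midr_el1", 0);

	f = fopen(path, "r");
	if (!f)
		return 1;
	if (fgets(midr, sizeof(midr), f))
		printf("cpu0 midr_el1: %s", midr);
	fclose(f);
	return 0;
}
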
/linux-6.8/Documentation/admin-guide/
kernel-per-CPU-kthreads.rst
13 - Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs.
15 - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
18 of CPUs.
21 call to bind tasks to sets of CPUs.
50 2. Do all eHCA-Infiniband-related work on other CPUs, including
53 provisioned only on selected CPUs.
101 with multiple CPUs, force them all offline before bringing the
102 first one back online. Once you have onlined the CPUs in question,
103 do not offline any other CPUs, because doing so could force the
104 timer back onto one of the CPUs in question.
[all …]
/linux-6.8/Documentation/power/
suspend-and-cpuhotplug.rst
27 |tasks | | cpus | | | | cpus | |tasks|
59 online CPUs
75 Note down these cpus in | P
100 | Call _cpu_up() [for all those cpus in the frozen_cpus mask, in a loop]
158 the non-boot CPUs are offlined or onlined, the _cpu_*() functions are called
177 update on the CPUs, as discussed below:
184 a. When all the CPUs are identical:
187 to apply the same microcode revision to each of the CPUs.
192 all CPUs, in order to handle case 'b' described below.
195 b. When some of the CPUs are different than the rest:
[all …]
/linux-6.8/tools/perf/tests/
openat-syscall-all-cpus.c
27 struct perf_cpu_map *cpus; in test__openat_syscall_event_on_all_cpus() local
40 cpus = perf_cpu_map__new_online_cpus(); in test__openat_syscall_event_on_all_cpus()
41 if (cpus == NULL) { in test__openat_syscall_event_on_all_cpus()
56 if (evsel__open(evsel, cpus, threads) < 0) { in test__openat_syscall_event_on_all_cpus()
64 perf_cpu_map__for_each_cpu(cpu, idx, cpus) { in test__openat_syscall_event_on_all_cpus()
69 * without CPU_ALLOC. 1024 cpus in 2010 still seems in test__openat_syscall_event_on_all_cpus()
91 evsel->core.cpus = perf_cpu_map__get(cpus); in test__openat_syscall_event_on_all_cpus()
95 perf_cpu_map__for_each_cpu(cpu, idx, cpus) { in test__openat_syscall_event_on_all_cpus()
121 perf_cpu_map__put(cpus); in test__openat_syscall_event_on_all_cpus()
129 TEST_CASE_REASON("Detect openat syscall event on all cpus",
[all …]
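
The test opens one event per online CPU through perf's internal evsel API. The same one-file-descriptor-per-CPU idea can be sketched with the raw perf_event_open(2) syscall; the choice of a hardware cycles counter is arbitrary, and the program needs perf_event_paranoid permissions or CAP_PERFMON to succeed:

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	int nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		/* pid == -1, cpu >= 0: count all tasks on that CPU. */
		int fd = perf_event_open(&attr, -1, cpu, -1, 0);

		if (fd < 0) {
			perror("perf_event_open");
			continue;
		}
		printf("cpu%d: fd %d\n", cpu, fd);
		close(fd);
	}
	return 0;
}
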
/linux-6.8/Documentation/devicetree/bindings/csky/
cpus.txt
5 The device tree allows to describe the layout of CPUs in a system through
6 the "cpus" node, which in turn contains a number of subnodes (ie "cpu")
9 Only SMP system need to care about the cpus node and single processor
10 needn't define cpus node at all.
13 cpus and cpu node bindings definition
16 - cpus node
20 The node name must be "cpus".
22 A cpus node must define the following properties:
59 cpus {
/linux-6.8/Documentation/arch/arm64/
asymmetric-32bit.rst
16 of the CPUs are capable of executing 32-bit user applications. On such
56 The subset of CPUs capable of running 32-bit tasks is described in
60 **Note:** CPUs are advertised by this file as they are detected and so
61 late-onlining of 32-bit-capable CPUs can result in the file contents
62 being modified by the kernel at runtime. Once advertised, CPUs are never
71 affinity mask contains 64-bit-only CPUs. In this situation, the kernel
88 of all 32-bit-capable CPUs of which the kernel is aware.
98 the 32-bit-capable CPUs of the requested affinity mask. On success, the
112 64-bit-only CPUs and admission control is enabled. Concurrent offlining
113 of 32-bit-capable CPUs may still necessitate the procedure described in
[all …]
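
The cpulist of 32-bit-capable CPUs described here is exported through a single sysfs file. A hedged sketch that simply reads and prints it; the truncated snippet above does not name the file, so the path below is an assumption about the file this document refers to:

#include <stdio.h>

int main(void)
{
	char list[256];
	/* Assumed path for the 32-bit-capable cpulist described above. */
	FILE *f = fopen("/sys/devices/system/cpu/aarch32_el0", "r");

	if (!f)
		return 1;
	if (fgets(list, sizeof(list), f))
		printf("32-bit-capable CPUs: %s", list);
	fclose(f);
	return 0;
}
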
booting.rst
193 be programmed with a consistent value on all CPUs. If entering the
199 All CPUs to be booted by the kernel must be part of the same coherency
214 - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
229 all CPUs the kernel is executing on, and must stay constant
252 For CPUs with pointer authentication functionality:
264 For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:
282 For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:
288 For CPUs with support for HCRX_EL2 (FEAT_HCX) present:
294 For CPUs with Advanced SIMD and floating point support:
304 For CPUs with the Scalable Vector Extension (FEAT_SVE) present:
[all …]
/linux-6.8/sound/soc/fsl/
Kconfig
2 menu "SoC Audio for Freescale CPUs"
4 comment "Common SoC Audio options for Freescale CPUs:"
13 support for the Freescale CPUs.
25 support for the Freescale CPUs.
35 support for the Freescale CPUs.
44 support for the NXP iMX CPUs.
53 support for the Freescale CPUs.
66 support for the Freescale CPUs.
76 (ESAI) support for the Freescale CPUs.
108 iMX CPUs. XCVR is a digital module that supports HDMI2.1 eARC,
[all …]
/linux-6.8/Documentation/arch/arm/
cluster-pm-race-avoidance.rst
18 In a system containing multiple CPUs, it is desirable to have the
19 ability to turn off individual CPUs when the system is idle, reducing
22 In a system containing multiple clusters of CPUs, it is also desirable
27 of independently running CPUs, while the OS continues to run. This
92 CPUs in the cluster simultaneously modifying the state. The cluster-
104 referred to as a "CPU". CPUs are assumed to be single-threaded:
107 This means that CPUs fit the basic model closely.
216 A cluster is a group of connected CPUs with some common resources.
217 Because a cluster contains multiple CPUs, it can be doing multiple
272 which exact CPUs within the cluster play these roles. This must
[all …]
