// SPDX-License-Identifier: GPL-2.0-only
/*
 * Any bug related to task migration is likely to be timing-dependent; perform
 * a large number of migrations to reduce the odds of a false negative.
 */
#define NR_TASK_MIGRATIONS 100000

static pthread_t migration_thread;
static cpu_set_t possible_mask;
static int min_cpu, max_cpu;
static bool done;

static atomic_t seq_cnt;
static int next_cpu(int cpu)
{
	/*
	 * Advance to the next CPU, skipping those that weren't in the original
	 * affinity set.  Sadly, there is no CPU_SET_FOR_EACH helper, and
	 * cpu_set_t's data storage is considered opaque.  Note, if this task
	 * is pinned to a small set of discontiguous CPUs, e.g. 2 and 1023,
	 * this loop will burn a lot of cycles and the test will take longer
	 * than normal to complete.
	 */
	do {
		cpu++;
		if (cpu > max_cpu) {
			cpu = min_cpu;
			TEST_ASSERT(CPU_ISSET(cpu, &possible_mask),
				    "Min CPU = %d must always be usable", cpu);
			break;
		}
	} while (!CPU_ISSET(cpu, &possible_mask));

	return cpu;
}
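/*
 * Illustrative self-check (not part of the original test; the "sketch_" name
 * is hypothetical): with possible_mask = {2, 5}, min_cpu = 2 and max_cpu = 5,
 * repeated calls to next_cpu() cycle 2 -> 5 -> 2 -> ..., skipping the
 * unusable CPUs in between.  A minimal sanity loop might look like this.
 */
static void sketch_check_next_cpu(void)
{
	int i, cpu = min_cpu;

	for (i = 0; i < 10; i++) {
		cpu = next_cpu(cpu);
		TEST_ASSERT(CPU_ISSET(cpu, &possible_mask),
			    "next_cpu() returned unusable CPU %d", cpu);
	}
}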
static void *migration_worker(void *__rseq_tid)
{
	pid_t rseq_tid = (pid_t)(unsigned long)__rseq_tid;
	cpu_set_t allowed_mask;
	int r, i, cpu;

	CPU_ZERO(&allowed_mask);

	for (i = 0, cpu = min_cpu; i < NR_TASK_MIGRATIONS; i++, cpu = next_cpu(cpu)) {
		CPU_SET(cpu, &allowed_mask);
		/*
		 * Bump the sequence count twice to allow the reader to detect
		 * that a migration may have occurred in between rseq and sched
		 * CPU ID reads.  An odd sequence count indicates a migration
		 * is in-progress, while a completely different count indicates
		 * a migration occurred since the count was last read.
		 */
		atomic_inc(&seq_cnt);

		/*
		 * Ensure the odd count is visible while getcpu() isn't
		 * stable, i.e. while changing affinity is in-progress.
		 */
		smp_wmb();
		r = sched_setaffinity(rseq_tid, sizeof(allowed_mask), &allowed_mask);
		TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)",
			    errno, strerror(errno));
		smp_wmb();
		atomic_inc(&seq_cnt);

		CPU_CLR(cpu, &allowed_mask);
		/*
		 * Wait 1-10us before proceeding to the next iteration and more
		 * specifically, before bumping seq_cnt again.  A delay is
		 * needed on three fronts:
		 *
		 *  1. To allow sched_setaffinity() to prompt migration before
		 *     the next round of migration; note that an
		 *     exit to userspace is necessary to give the test a chance
		 *     to check the rseq CPU ID (see #2).
		 *
		 *  2. To let ioctl(KVM_RUN) make its way back to the test
		 *     before the next round of migration.  The test's check on
		 *     the rseq CPU ID must wait for migration to complete in
		 *     order to avoid false positives.
		 *
		 *  3. To ensure the read-side makes efficient forward progress,
		 *     e.g. if getcpu() involves a syscall.  Stalling the read-side
		 *     means the test will spend more time waiting for getcpu()
		 *     to stabilize and less time trying to hit the timing-dependent
		 *     bug.
		 *
		 * Because any bug in this area is likely to be timing-dependent,
		 * vary the delay from iteration to iteration rather than using
		 * a single fixed value,
		 * as a best effort to avoid tuning the test to the point where
		 * it can detect only the original bug and not future regressions.
		 *
		 * The original bug reproduces across a range of delays on
		 * x86-64, but starts to require more iterations to reproduce
		 * as the delay grows; the delay is therefore capped
		 * at 10us to keep test runtime reasonable while minimizing
		 * the loss of coverage.
		 *
		 * There is likely no meaningful lower bound on the delay,
		 * e.g. failures occur on x86-64 with nanosleep(0), but at that
		 * point the overhead of the syscall itself likely dominates.
		 */
		usleep(1 + (i % 10));
	}

	done = true;

	return NULL;
}
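/*
 * Illustrative sketch (not part of the original test): the even/odd
 * sequence-count protocol used by migration_worker() above and by the
 * reader in main(), reduced to a self-contained form.  C11 <stdatomic.h>
 * stands in for the tools/include atomic_t and smp_wmb()/smp_rmb() helpers,
 * and the default seq_cst orderings are deliberately stronger than what the
 * test itself relies on.  All "sketch_" names are hypothetical.
 */
#include <stdatomic.h>

static atomic_uint sketch_seq;
static atomic_int sketch_data;

static void sketch_write(int val)
{
	atomic_fetch_add(&sketch_seq, 1);	/* odd: update in-progress */
	atomic_store(&sketch_data, val);
	atomic_fetch_add(&sketch_seq, 1);	/* even: update complete */
}

static int sketch_read(void)
{
	unsigned int snap;
	int val;

	do {
		/* Drop bit 0 so an odd (in-progress) count forces a retry. */
		snap = atomic_load(&sketch_seq) & ~1u;
		val = atomic_load(&sketch_data);
	} while (snap != atomic_load(&sketch_seq));

	return val;
}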
static void calc_min_max_cpu(void)
{
	int i, cnt = 0;

	/*
	 * CPU_SET doesn't provide a FOR_EACH helper, get the min/max CPU that
	 * the test can use by walking the possible mask by hand.
	 */
	min_cpu = -1;
	max_cpu = -1;
	for (i = 0; i < CPU_SETSIZE; i++) {
		if (!CPU_ISSET(i, &possible_mask))
			continue;
		if (min_cpu == -1)
			min_cpu = i;
		max_cpu = i;
		cnt++;
	}

	TEST_ASSERT(cnt >= 2,
		    "Only one usable CPU, task migration not possible");
}
int main(int argc, char *argv[])
{
	int r, i, snapshot;
	u32 cpu, rseq_cpu;

	/*
	 * Setup elided in this excerpt: the test snapshots its affinity into
	 * possible_mask (via sched_getaffinity()), calls calc_min_max_cpu(),
	 * and ensures rseq is registered for the current thread.
	 */
	/*
	 * Create and run a dummy VM that immediately exits to userspace via
	 * GUEST_SYNC, while concurrently migrating the process by setting its
	 * CPU affinity.
	 */
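	/*
	 * Sketch of the elided setup and run loop (an assumption of this
	 * reconstruction, not the excerpt's code): vm_create_with_one_vcpu(),
	 * vcpu_run() and get_ucall() are real KVM selftest helpers, but the
	 * exact lines the excerpt omits may differ.  The worker receives this
	 * thread's TID so that its sched_setaffinity() calls target the vCPU
	 * task rather than the worker itself.
	 */
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	vm = vm_create_with_one_vcpu(&vcpu, guest_code);

	pthread_create(&migration_thread, NULL, migration_worker,
		       (void *)(unsigned long)syscall(SYS_gettid));

	for (i = 0; !done; i++) {
		vcpu_run(vcpu);
		TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC,
			    "Guest failed?");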
		/*
		 * Verify rseq's CPU matches sched's CPU.  Ensure migration
		 * doesn't occur between getcpu() and reading the rseq CPU ID
		 * by retrying the reads if the sequence count changes or the
		 * count is odd (migration in-progress).
		 */
		do {
			/*
			 * Drop bit 0 to force a mismatch if the count is odd,
			 * i.e. if a migration is in-progress.
			 */
			snapshot = atomic_read(&seq_cnt) & ~1;

			/*
			 * Ensure both CPU ID reads complete inside a single
			 * "no migration" window, i.e. are not reordered
			 * across the sequence count reads.
			 */
			smp_rmb();
			r = sys_getcpu(&cpu, NULL);
			TEST_ASSERT(!r, "getcpu failed, errno = %d (%s)",
				    errno, strerror(errno));
			/*
			 * rseq's view of the current CPU; this particular
			 * helper (from the rseq selftests library) is an
			 * assumption of this reconstruction.
			 */
			rseq_cpu = rseq_current_cpu_raw();
			smp_rmb();
		} while (snapshot != atomic_read(&seq_cnt));

		TEST_ASSERT(rseq_cpu == cpu,
			    "rseq CPU = %d, sched CPU = %d", rseq_cpu, cpu);
	}

	/*
	 * Sanity check that the test was able to enter the guest a reasonable
	 * number of times, e.g. didn't get stalled too often/long waiting for
	 * getcpu() to stabilize.  A 2:1 migration:KVM_RUN ratio is a fairly
	 * conservative ratio on x86-64, which can do _more_ KVM_RUNs than
	 * migrations given the 1us+ delay enforced per migration.
	 */
	TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2),
		    "Only performed %d KVM_RUNs, task stalled too much?", i);

	pthread_join(migration_thread, NULL);

	return 0;
}
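/*
 * Usage note (an assumption, not from the excerpt): this reads like the KVM
 * selftests rseq migration test, which is normally built and run from a
 * kernel source tree, e.g.:
 *
 *   make -C tools/testing/selftests TARGETS=kvm
 *   ./tools/testing/selftests/kvm/rseq_test
 */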