commit 5bdb4078
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   15-Apr-2026
Merge tag 'sched_ext-for-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext
Pull sched_ext updates from Tejun Heo:

- cgroup sub-scheduler groundwork
  Multiple BPF schedulers can be attached to cgroups, and the dispatch path is made hierarchical. This involves substantial restructuring of the core dispatch, bypass, watchdog, and dump paths to be per-scheduler, along with new infrastructure for scheduler ownership enforcement, lifecycle management, and cgroup subtree iteration.

  The enqueue path is not yet updated and will follow in a later cycle.
- scx_bpf_dsq_reenq() generalized to support any DSQ, including remote local DSQs and user DSQs

  Built on top of this, SCX_ENQ_IMMED guarantees that tasks dispatched to local DSQs either run immediately or are reenqueued back through ops.enqueue(), giving schedulers tighter control over queueing latency. This is also useful for opportunistic CPU sharing across sub-schedulers.
- ops.dequeue() was previously invoked only when the core knew a task was in BPF data structures, missing scheduling-property-change events and skipping callbacks for non-local DSQ dispatches from ops.select_cpu()

  Fixed to guarantee exactly one ops.dequeue() call whenever a task leaves BPF scheduler custody.
- Kfunc access validation moved from runtime to BPF verifier time, removing runtime mask enforcement
- Idle SMT sibling prioritization in the idle CPU selection path
- Documentation, selftest, and tooling updates. Misc bug fixes and cleanups
* tag 'sched_ext-for-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext: (134 commits)
  tools/sched_ext: Add explicit cast from void* in RESIZE_ARRAY()
  sched_ext: Make string params of __ENUM_set() const
  tools/sched_ext: Kick home CPU for stranded tasks in scx_qmap
  sched_ext: Drop spurious warning on kick during scheduler disable
  sched_ext: Warn on task-based SCX op recursion
  sched_ext: Rename scx_kf_allowed_on_arg_tasks() to scx_kf_arg_task_ok()
  sched_ext: Remove runtime kfunc mask enforcement
  sched_ext: Add verifier-time kfunc context filter
  sched_ext: Drop redundant rq-locked check from scx_bpf_task_cgroup()
  sched_ext: Decouple kfunc unlocked-context check from kf_mask
  sched_ext: Fix ops.cgroup_move() invocation kf_mask and rq tracking
  sched_ext: Track @p's rq lock across set_cpus_allowed_scx -> ops.set_cpumask
  sched_ext: Add select_cpu kfuncs to scx_kfunc_ids_unlocked
  sched_ext: Drop TRACING access to select_cpu kfuncs
  selftests/sched_ext: Fix wrong DSQ ID in peek_dsq error message
  sched_ext: Documentation: improve accuracy of task lifecycle pseudo-code
  selftests/sched_ext: Improve runner error reporting for invalid arguments
  sched_ext: Documentation: Fix scx_bpf_move_to_local kfunc name
  sched_ext: Documentation: Add ops.dequeue() to task lifecycle
  tools/sched_ext: Fix off-by-one in scx_sdt payload zeroing
  ...
commit 658ad225
Author: Andrea Righi <arighi@nvidia.com>
Date:   21-Feb-2026
selftests/sched_ext: Add test to validate ops.dequeue() semantics
Add a new kselftest to validate that the new ops.dequeue() semantics work correctly for all task lifecycle scenarios, including the distinction between terminal DSQs (where the BPF scheduler is done with the task), user DSQs (where the BPF scheduler manages the task lifecycle), and BPF data structures, regardless of which event performs the dispatch.
The test validates the following scenarios:
- From ops.select_cpu():
  - scenario 0 (local DSQ): tasks dispatched to the local DSQ bypass the BPF scheduler entirely; they never enter BPF custody, so ops.dequeue() is not called
  - scenario 1 (global DSQ): tasks dispatched to SCX_DSQ_GLOBAL also bypass the BPF scheduler, like the local DSQ; ops.dequeue() is not called
  - scenario 2 (user DSQ): tasks dispatched to user DSQs enter the BPF scheduler's custody with full enqueue/dequeue lifecycle tracking and state machine validation; expects 1:1 enqueue/dequeue pairing
- From ops.enqueue():
  - scenario 3 (local DSQ): same behavior as scenario 0
  - scenario 4 (global DSQ): same behavior as scenario 1
  - scenario 5 (user DSQ): same behavior as scenario 2
  - scenario 6 (BPF internal queue): tasks are stored in a BPF queue from ops.enqueue() and consumed from ops.dispatch(); as in scenario 5, tasks enter the BPF scheduler's custody with full lifecycle tracking and 1:1 enqueue/dequeue validation
This verifies that:
- terminal DSQ dispatches (local, global) don't trigger ops.dequeue()
- tasks dispatched to user DSQs, either from ops.select_cpu() or ops.enqueue(), enter the BPF scheduler's custody and have exact 1:1 enqueue/dequeue pairing
- tasks stored in internal BPF data structures from ops.enqueue() enter the BPF scheduler's custody and have exact 1:1 enqueue/dequeue pairing
- dispatch dequeues have no flags (normal workflow)
- property-change dequeues have the %SCX_DEQ_SCHED_CHANGE flag set
- no duplicate enqueues or invalid state transitions occur
Cc: Tejun Heo <tj@kernel.org>
Cc: Emil Tsalapatis <emil@etsalapatis.com>
Cc: Kuba Piecuch <jpiecuch@google.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>