Lines Matching full:idle
5 * Built-in idle CPU tracking policy.
14 /* Enable/disable built-in idle CPU selection policy */
17 /* Enable/disable per-node idle cpumasks */
28 * cpumasks to track idle CPUs within each NUMA node.
31 * a single global cpumask is used to track all the idle CPUs in the system.
39 * Global host-wide idle cpumasks (used when SCX_OPS_BUILTIN_IDLE_PER_NODE is not enabled).
45 * Per-node idle cpumasks.
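A BPF scheduler opts in to these per-node masks via the SCX_OPS_BUILTIN_IDLE_PER_NODE ops flag mentioned above. A minimal sketch using the scx tooling conventions (the scheduler name and callbacks are placeholders):

    /*
     * Sketch: enable per-node idle cpumask tracking. Without this flag,
     * the single global idle cpumask is used instead.
     */
    SCX_OPS_DEFINE(pernode_ops,
                   .select_cpu = (void *)pernode_select_cpu,
                   .enqueue    = (void *)pernode_enqueue,
                   .flags      = SCX_OPS_BUILTIN_IDLE_PER_NODE,
                   .name       = "pernode");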
50 * Return the idle masks associated with a target @node.
52 * NUMA_NO_NODE identifies the global idle cpumask.
61 * per-node idle cpumasks are disabled.
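The dispatch between the two cases can be pictured as a small lookup helper that maps NUMA_NO_NODE to the global pair of masks and any other node id to its per-node pair. A hedged sketch with illustrative names, not necessarily the kernel's own:

    struct idle_masks {
            cpumask_var_t cpu;      /* idle CPUs */
            cpumask_var_t smt;      /* wholly idle SMT cores */
    };

    static struct idle_masks scx_idle_global;      /* host-wide masks */
    static struct idle_masks **scx_idle_nodes;     /* per-NUMA-node masks */

    static struct idle_masks *idle_masks_of(int node)
    {
            /* NUMA_NO_NODE selects the global, host-wide masks */
            if (node == NUMA_NO_NODE)
                    return &scx_idle_global;
            return scx_idle_nodes[node];
    }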
79 * cluster is not wholly idle either way. This also prevents scx_pick_idle_cpu() from getting caught in an infinite loop. in scx_idle_test_and_clear_cpu()
89 * @cpu is never cleared from the idle SMT mask. Ensure that @cpu is eventually cleared. in scx_idle_test_and_clear_cpu()
107 * Pick an idle CPU in a specific NUMA node.
135 * Tracks nodes that have not yet been visited when searching for an idle CPU.
141 * Search for an idle CPU across all nodes, excluding @node.
165 * SCX_OPS_BUILTIN_IDLE_PER_NODE and it's requesting an idle CPU in pick_idle_cpu_from_online_nodes()
184 * Find an idle CPU in the system, starting from @node.
323 * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle selection policy.
339 * single LLC domain, the idle CPU selection logic can choose any in scx_idle_update_selcpu_topology()
362 * for an idle CPU in the same domain twice is redundant. in scx_idle_update_selcpu_topology()
365 * optimization, as we would naturally select idle CPUs within in scx_idle_update_selcpu_topology()
379 pr_debug("sched_ext: LLC idle selection %s\n", in scx_idle_update_selcpu_topology()
381 pr_debug("sched_ext: NUMA idle selection %s\n", in scx_idle_update_selcpu_topology()
395 * Built-in CPU idle selection policy:
397 * 1. Prioritize full-idle cores:
398 * - always prioritize CPUs from fully idle cores (both logical CPUs are
399 * idle) to avoid interference caused by SMT.
412 * 5. Pick any idle CPU usable by the task.
422 * Return the picked CPU if idle, or a negative value otherwise.
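From a BPF scheduler, this whole cascade is reachable through the scx_bpf_select_cpu_dfl() kfunc, whose @is_idle out parameter reports whether the returned CPU was claimed idle. A sketch, assuming the scx common headers (BPF_STRUCT_OPS) and the direct-dispatch kfunc name used by recent kernels, scx_bpf_dsq_insert(); the callback and scheduler names are placeholders:

    s32 BPF_STRUCT_OPS(myscx_select_cpu, struct task_struct *p,
                       s32 prev_cpu, u64 wake_flags)
    {
            bool is_idle = false;
            s32 cpu;

            /* Run the built-in policy described above */
            cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
            if (is_idle)
                    /* The CPU was claimed idle: dispatch @p straight to it */
                    scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

            return cpu;
    }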
444 * updating a cpumask every time we need to select an idle CPU (which can be costly in large SMP systems). in scx_select_cpu_dfl()
465 * If the waker's CPU is cache affine and prev_cpu is idle, then avoid a migration. in scx_select_cpu_dfl()
483 * Checking only for the presence of idle CPUs is also insufficient, as the local DSQ of the waker could have tasks in scx_select_cpu_dfl()
485 * piled up on it even if there is an idle core elsewhere on the system. in scx_select_cpu_dfl()
499 * If the CPU has SMT, any wholly idle CPU is likely a better pick than in scx_select_cpu_dfl()
500 * partially idle @prev_cpu. in scx_select_cpu_dfl()
504 * Keep using @prev_cpu if it's part of a fully idle core. in scx_select_cpu_dfl()
513 * Search for any fully idle core in the same LLC domain. in scx_select_cpu_dfl()
522 * Search for any fully idle core in the same NUMA node. in scx_select_cpu_dfl()
531 * Search for any full-idle core usable by the task. in scx_select_cpu_dfl()
533 * If the node-aware idle CPU selection policy is enabled (%SCX_OPS_BUILTIN_IDLE_PER_NODE), the search will begin in prev_cpu's node and proceed to other nodes in order of increasing distance. in scx_select_cpu_dfl()
543 * Give up if we're strictly looking for a full-idle SMT core. in scx_select_cpu_dfl()
553 * Use @prev_cpu if it's idle. in scx_select_cpu_dfl()
561 * Search for any idle CPU in the same LLC domain. in scx_select_cpu_dfl()
570 * Search for any idle CPU in the same NUMA node. in scx_select_cpu_dfl()
579 * Search for any idle CPU usable by the task. in scx_select_cpu_dfl()
581 * If the node-aware idle CPU selection policy is enabled (%SCX_OPS_BUILTIN_IDLE_PER_NODE), the search will begin in prev_cpu's node and proceed to other nodes in order of increasing distance. in scx_select_cpu_dfl()
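A scheduler that rolls its own selection can approximate the same core-first cascade with the pick kfuncs. A reduced sketch (the LLC/NUMA narrowing steps above are elided; the helper name is illustrative):

    static s32 pick_cpu_for(struct task_struct *p, s32 prev_cpu)
    {
            s32 cpu;

            /* Step 1: prefer a wholly idle core, to avoid SMT interference */
            cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, SCX_PICK_IDLE_CORE);
            if (cpu >= 0)
                    return cpu;

            /* Last step: settle for any idle CPU usable by the task */
            cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
            if (cpu >= 0)
                    return cpu;

            return prev_cpu;        /* nothing idle: keep the previous CPU */
    }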
595 * Initialize global and per-node idle cpumasks.
601 /* Allocate global idle cpumasks */ in scx_idle_init_masks()
605 /* Allocate per-node idle cpumasks */ in scx_idle_init_masks()
620 static void update_builtin_idle(int cpu, bool idle) in update_builtin_idle() argument
625 assign_cpu(cpu, idle_cpus, idle); in update_builtin_idle()
632 if (idle) { in update_builtin_idle()
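The SMT half of this bookkeeping keeps a core in the idle SMT mask only while every sibling is idle. A hedged reconstruction of that rule (helper name and parameters are illustrative):

    static void update_smt_mask(int cpu, struct cpumask *idle_cpus,
                                struct cpumask *idle_smts, bool idle)
    {
            const struct cpumask *smt = cpu_smt_mask(cpu);

            if (idle) {
                    /* Mark the core wholly idle only once all siblings are */
                    if (cpumask_subset(smt, idle_cpus))
                            cpumask_or(idle_smts, idle_smts, smt);
            } else {
                    /* One busy sibling and the core is no longer wholly idle */
                    cpumask_andnot(idle_smts, idle_smts, smt);
            }
    }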
648 * Update the idle state of a CPU to @idle.
651 * scheduler of an actual idle state transition (idle to busy or vice
652 * versa). If @do_notify is false, only the idle state in the idle masks is refreshed without invoking ops.update_idle().
655 * This distinction is necessary, because an idle CPU can be "reserved" and awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as busy even if no tasks are dispatched.
658 * to idle without a true state transition. Refreshing the idle masks
659 * without invoking ops.update_idle() ensures accurate idle state tracking while avoiding unnecessary updates and maintaining balanced state transitions.
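The "reserved" case is the pick-then-kick pattern: a CPU is claimed out of the idle masks and woken so it runs ops.dispatch(), even though nothing was dispatched to it at pick time. A sketch from the enqueue path (SHARED_DSQ is a hypothetical custom DSQ assumed to be created elsewhere; the callback name is a placeholder):

    void BPF_STRUCT_OPS(kick_enqueue, struct task_struct *p, u64 enq_flags)
    {
            s32 cpu;

            scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);

            /*
             * Reserve an idle CPU and kick it. If it finds no work it goes
             * straight back to idle: the busy -> idle "non-transition" the
             * comment above describes.
             */
            cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
            if (cpu >= 0)
                    scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
    }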
663 void __scx_update_idle(struct rq *rq, bool idle, bool do_notify) in __scx_update_idle() argument
671 * the idle thread and vice versa. in __scx_update_idle()
673 * Idle transitions are indicated by do_notify being set to true, managed by put_prev_task_idle()/set_next_task_idle(). in __scx_update_idle()
677 SCX_CALL_OP(SCX_KF_REST, update_idle, rq, cpu_of(rq), idle); in __scx_update_idle()
680 * Update the idle masks: in __scx_update_idle()
681 * - for real idle transitions (do_notify == true) in __scx_update_idle()
682 * - for idle-to-idle transitions (indicated by the previous task in __scx_update_idle()
683 * being the idle thread, managed by pick_task_idle()) in __scx_update_idle()
685 * Skip updating idle masks if the previous task is not the idle thread, since set_next_task_idle() has already handled it when in __scx_update_idle()
687 * transitioning from a task to the idle thread (calling this function with do_notify == true). in __scx_update_idle()
690 * In this way we can avoid updating the idle masks twice, unnecessarily. in __scx_update_idle()
695 update_builtin_idle(cpu, idle); in __scx_update_idle()
703 * Consider all online cpus idle. Should converge to the actual state quickly. in reset_idle_masks()
751 scx_ops_error("per-node idle tracking is disabled"); in validate_node()
781 scx_ops_error("built-in idle tracking is disabled"); in check_builtin_idle_enabled()
807 * @is_idle: out parameter indicating whether the returned CPU is idle
814 * currently idle and thus a good candidate for direct dispatching.
846 * idle-tracking per-CPU cpumask of a target NUMA node.
849 * Returns an empty cpumask if idle tracking is not enabled, if @node is not valid, or running on a UP kernel.
867 * scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking per-CPU cpumask.
870 * Returns an empty mask if idle tracking is not enabled, or running on a UP kernel.
892 * idle-tracking, per-physical-core cpumask of a target NUMA node. Can be used to determine if an entire physical core is idle.
896 * Returns an empty cpumask if idle tracking is not enabled, if @node is not valid, or running on a UP kernel.
917 * scx_bpf_get_idle_smtmask - Get a referenced kptr to the idle-tracking, per-physical-core cpumask.
921 * Returns an empty mask if idle tracking is not enabled, or running on a UP kernel.
946 * either the per-CPU or SMT idle-tracking cpumask.
953 * a reference to a global idle cpumask, which is read-only in the caller and is never released. in scx_bpf_put_idle_cpumask()
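Usage sketch for the getter/putter pair: the returned kptr must always be handed back with scx_bpf_put_idle_cpumask(), e.g. when probing whether any wholly idle core remains (the helper name is illustrative; bpf_cpumask_empty() is the generic BPF cpumask kfunc):

    static bool have_fully_idle_core(void)
    {
            const struct cpumask *smtmask;
            bool ret;

            smtmask = scx_bpf_get_idle_smtmask();
            ret = !bpf_cpumask_empty(smtmask);      /* any wholly idle core? */
            scx_bpf_put_idle_cpumask(smtmask);      /* always release the kptr */

            return ret;
    }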
960 * scx_bpf_test_and_clear_cpu_idle - Test and clear @cpu's idle state
961 * @cpu: cpu to test and clear idle for
963 * Returns %true if @cpu was idle and its idle state was successfully cleared.
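Typical use: claim a specific CPU (here the task's previous one) before dispatching to it, so two racing wakeups cannot both claim the same idle CPU. A sketch (callback name is a placeholder):

    void BPF_STRUCT_OPS(claim_enqueue, struct task_struct *p, u64 enq_flags)
    {
            s32 prev_cpu = scx_bpf_task_cpu(p);

            if (scx_bpf_test_and_clear_cpu_idle(prev_cpu)) {
                    /* prev_cpu was idle and is now claimed: send @p there */
                    scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | prev_cpu,
                                       SCX_SLICE_DFL, enq_flags);
                    scx_bpf_kick_cpu(prev_cpu, SCX_KICK_IDLE);
                    return;
            }

            scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }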
981 * scx_bpf_pick_idle_cpu_node - Pick and claim an idle cpu from @node
986 * Pick and claim an idle cpu in @cpus_allowed from the NUMA node @node.
988 * Returns the picked idle cpu number on success, or -%EBUSY if no matching cpu was found.
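A sketch of this node-constrained variant, using SCX_PICK_IDLE_IN_NODE so the search never spills outside @node (requires SCX_OPS_BUILTIN_IDLE_PER_NODE; the helper name is illustrative):

    static s32 pick_in_node(struct task_struct *p, int node, s32 prev_cpu)
    {
            s32 cpu;

            /* Consider only idle CPUs inside @node, no cross-node fallback */
            cpu = scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node,
                                             SCX_PICK_IDLE_IN_NODE);

            return cpu >= 0 ? cpu : prev_cpu;       /* -EBUSY: fall back */
    }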
1010 * scx_bpf_pick_idle_cpu - Pick and claim an idle cpu
1014 * Pick and claim an idle cpu in @cpus_allowed. Returns the picked idle cpu number on success, or -%EBUSY if no matching cpu was found.
1017 * Idle CPU tracking may race against CPU scheduling state transitions. For example, this function may return -%EBUSY as CPUs are transitioning into the
1019 * idle state. If the caller then assumes that there will be dispatch events on the CPUs as they were all busy, the scheduler may end up stalling with CPUs idling while there are pending tasks.
1035 scx_ops_error("per-node idle tracking is enabled"); in scx_bpf_pick_idle_cpu()
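Because of that race, a scheduler that must guarantee forward progress can combine scx_bpf_pick_any_cpu() with scx_bpf_kick_cpu(), as the text above suggests. A sketch (the helper name is illustrative):

    static void ensure_progress(struct task_struct *p)
    {
            s32 cpu;

            /*
             * Succeeds whenever @p has usable CPUs: if none is idle, a busy
             * CPU is picked and kicked, so pending work cannot stall behind
             * a missed idle transition.
             */
            cpu = scx_bpf_pick_any_cpu(p->cpus_ptr, 0);
            if (cpu >= 0)
                    scx_bpf_kick_cpu(cpu, 0);
    }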
1046 * scx_bpf_pick_any_cpu_node - Pick and claim an idle cpu if available
1052 * Pick and claim an idle cpu in @cpus_allowed. If none is available, pick any
1053 * CPU in @cpus_allowed. Guaranteed to succeed and returns the picked idle cpu number if @cpus_allowed is not empty, or -%EBUSY if @cpus_allowed is empty.
1060 * the CPU idle state).
1063 * set, this function can't tell which CPUs are idle and will always pick any CPU.
1090 * scx_bpf_pick_any_cpu - Pick and claim an idle cpu if available or pick any CPU
1094 * Pick and claim an idle cpu in @cpus_allowed. If none is available, pick any
1095 * CPU in @cpus_allowed. Guaranteed to succeed and returns the picked idle cpu number if @cpus_allowed is not empty, or -%EBUSY if @cpus_allowed is empty.
1100 * set, this function can't tell which CPUs are idle and will always pick any CPU.
1112 scx_ops_error("per-node idle tracking is enabled"); in scx_bpf_pick_any_cpu()