Lines Matching defs:slabs

7  * and only uses a centralized lock to manage a pool of partial slabs.
63 * The role of the slab_mutex is to protect the list of all the slabs
80 * Frozen slabs
90 * CPU partial slabs
92 * The partially empty slabs cached on the CPU partial list are used
94 * These slabs are not frozen, but are also exempt from list management,
107 * the partial slab counter. If taken then no new slabs may be added or
108 * removed from the lists, nor may the number of partial slabs be modified.
109 * (Note that the total number of slabs is an atomic value that may be
114 * slabs, operations can continue without any centralized lock. E.g.
115 * allocating a long series of objects that fill up slabs does not require
154 * Allocations only occur from these slabs called cpu slabs.
157 * operations no list for full slabs is used. If an object in a full slab is
159 * We track full slabs for debugging purposes though because otherwise we
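The design notes above say that each node's list_lock covers both the partial list and the count of partial slabs on it. Below is a minimal sketch of that rule, assuming the struct kmem_cache_node fields (list_lock, nr_partial, partial) and the struct slab list linkage used by current SLUB; example_add_partial() is illustrative, not a kernel helper:

/*
 * Illustrative only: queue a slab on a node's partial list.  Per the
 * design notes above, n->list_lock must be held around both the list
 * manipulation and the nr_partial counter update.
 */
static void example_add_partial(struct kmem_cache_node *n, struct slab *slab)
{
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	list_add_tail(&slab->slab_list, &n->partial);
	n->nr_partial++;
	spin_unlock_irqrestore(&n->list_lock, flags);
}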
175 * One use of this flag is to mark slabs that are
285 * Minimum number of partial slabs. These will be left on the partial
291 * Maximum number of desirable partial slabs.
292 * The existence of more partial slabs makes kmem_cache_shrink
410 struct slab *partial; /* Partially allocated slabs */
630 * slabs on the per cpu partial list, in order to limit excessive
631 * growth of the list. For simplicity we assume that the slabs will
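A worked example of the half-full assumption stated above (the numbers are made up): if a cache packs 32 objects per slab and the per-CPU partial budget is 30 objects, treating cached slabs as half-full caps each CPU's partial list at roughly 30 * 2 / 32 = 2 slabs. The sysfs code further down applies the same assumption in reverse, reporting objects ≈ slabs * objects-per-slab / 2.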
1557 * Tracking of fully allocated slabs for debugging purposes.
1702 * @slabs: return start of list of slabs, or NULL when there's no list
1708 parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
1718 * No options but restriction on slabs. This means full
1719 * debugging for slabs matching a pattern.
1764 *slabs = ++str;
1766 *slabs = NULL;
1817 * slabs means debugging is only changed for those slabs, so the global
1853 * then only the select slabs will receive the debug option(s).
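The parser above handles the slub_debug= boot option, where flag letters may be followed by a comma-separated list of cache name patterns, and multiple blocks can be chained with ';'. A usage illustration (cache names here are only examples; see Documentation/mm/slub.rst for the full flag list):

	slub_debug=FZ                    sanity checks and red zoning for all caches
	slub_debug=,dentry               no flags before the comma: full debugging,
	                                 restricted to caches matching "dentry"
	slub_debug=P,kmalloc-*;U,dentry  poisoning for the kmalloc caches, user
	                                 tracking for the dentry cache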
2772 * Management of partially allocated slabs.
2891 * Racy check. If we mistakenly see no partial slabs then we
2955 * instead of attempting to obtain partial slabs from other nodes.
2959 * may return off node objects because partial slabs are obtained
2965 * This means scanning over all nodes to look for partial slabs which
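The NUMA comments above belong to the fallback that searches other nodes' partial lists. How eagerly that fallback is taken is governed by the cache's remote_node_defrag_ratio (CONFIG_NUMA only), tunable per cache through sysfs; for instance, with an illustrative value and cache name:

	echo 50 > /sys/kernel/slab/dentry/remote_node_defrag_ratio

Higher percentages make SLUB more willing to pull partial slabs from remote nodes instead of allocating a fresh slab locally.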
3097 * unfreezes the slab and puts it on the proper list.
3224 * Put all the cpu partial slabs to the node partial list.
3263 int slabs = 0;
3270 if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
3279 slabs = oldslab->slabs;
3283 slabs++;
3285 slab->slabs = slabs;
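The fragments above (the running slabs count, the overflow test against s->cpu_partial_slabs, and caching the count in the head slab) come from the path that adds a slab to the per-CPU partial list. A simplified sketch of how they fit together, assuming CONFIG_SLUB_CPU_PARTIAL and omitting the local lock and the real drain helper; example_put_cpu_partial() and hand_partials_to_node() are illustrative names:

/*
 * Simplified sketch: push a slab onto this CPU's partial list.  The
 * head slab's ->slabs field carries the running length of the list;
 * once it reaches s->cpu_partial_slabs, the old list is handed back
 * to the node partial lists instead of growing further.
 */
static void example_put_cpu_partial(struct kmem_cache *s, struct slab *slab,
				    bool drain)
{
	struct slab *oldslab;
	int slabs = 0;

	oldslab = this_cpu_read(s->cpu_slab->partial);
	if (oldslab) {
		if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
			/* List is full: give it back to the node. */
			hand_partials_to_node(s, oldslab); /* hypothetical */
			oldslab = NULL;
		} else {
			slabs = oldslab->slabs;
		}
	}

	slabs++;
	slab->slabs = slabs;	/* length is cached in the head slab */
	slab->next = oldslab;
	this_cpu_write(s->cpu_slab->partial, slab);
}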
3423 * Use the cpu notifier to ensure that the cpu slabs are flushed when
3559 * it to meet the limit on the number of slabs to scan.
3612 pr_warn(" node %d: slabs: %ld, objs: %ld, free: %ld\n",
4476 * have a longer lifetime than the cpu slabs in most processing loads.
4524 * other processors updating the list of slabs.
5428 * Increasing the allocation order reduces the number of times that slabs
5435 * and slab fragmentation. A higher order reduces the number of partial slabs
5450 * be problematic to put into order 0 slabs because there may be too much
5457 * less a concern for large slabs though which are rarely used.
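A concrete illustration of the fragmentation trade-off described above (the object size is chosen arbitrarily): with 700-byte objects, an order-0 (4 KiB) slab holds 5 objects and wastes about 596 bytes, roughly 14% of the slab, while an order-1 (8 KiB) slab holds 11 objects and wastes about 492 bytes, roughly 6%. Raising the order reduces internal waste and the number of partial slabs, at the cost of requiring contiguous higher-order pages.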
5692 * Per cpu partial lists mainly contain slabs that just have one
5887 * Attempt to free all partial slabs on a node.
6112 * kmem_cache_shrink discards empty slabs and promotes the slabs filled
6116 * The slabs with the least items are placed last. This results in them
6140 * Build lists of slabs to discard or promote.
6151 /* We do not keep full slabs on the list */
6164 * Promote the slabs filled up most to the head of the
6172 /* Release empty slabs */
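Pulling the shrink comments above together: the pass walks a node's partial list once, buckets slabs by their number of free objects, splices the fullest slabs back at the head, and frees the empty ones. A loose sketch under those assumptions; NR_BUCKETS and discard_slab_example() are placeholders, and the real code differs in details:

#define NR_BUCKETS	32	/* illustrative bucket count */

static void example_shrink_node(struct kmem_cache *s, struct kmem_cache_node *n)
{
	struct list_head discard, promote[NR_BUCKETS];
	struct slab *slab, *t;
	unsigned long flags;
	int i, free;

	INIT_LIST_HEAD(&discard);
	for (i = 0; i < NR_BUCKETS; i++)
		INIT_LIST_HEAD(promote + i);

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry_safe(slab, t, &n->partial, slab_list) {
		free = slab->objects - slab->inuse;

		if (free == slab->objects) {
			/* Completely empty: queue for release. */
			list_move(&slab->slab_list, &discard);
			n->nr_partial--;
		} else if (free < NR_BUCKETS) {
			list_move(&slab->slab_list, promote + free);
		}
		/* Full slabs never sit on the partial list to begin with. */
	}

	/* Splice so the slabs with the fewest free objects lead the list. */
	for (i = NR_BUCKETS - 1; i >= 0; i--)
		list_splice(promote + i, &n->partial);
	spin_unlock_irqrestore(&n->list_lock, flags);

	list_for_each_entry_safe(slab, t, &discard, slab_list)
		discard_slab_example(s, slab);	/* hypothetical helper */
}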
6268 * Basic setup of slabs
6345 /* Now we can use the kmem_cache to allocate kmalloc slabs */
6437 * The larger the object size is, the more slabs we want on the partial
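The sizing rule hinted at above scales the per-node minimum with object size; in current sources the minimum partial count is derived as ilog2(object size) / 2, clamped to the MIN_PARTIAL..MAX_PARTIAL range from the definitions listed earlier. For example, a cache with 4096-byte objects gets ilog2(4096) / 2 = 6 partial slabs kept per node, while a 64-byte cache would compute 3 and be raised to the MIN_PARTIAL floor.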
6530 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
6543 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
6743 SL_ALL, /* All slabs */
6744 SL_PARTIAL, /* Only partially allocated slabs */
6745 SL_CPU, /* Only slabs used for cpu caches */
6746 SL_OBJECTS, /* Determine allocated objects not slabs */
6747 SL_TOTAL /* Determine object capacity not slabs */
6802 x = data_race(slab->slabs);
6996 int slabs = 0;
7007 slabs += data_race(slab->slabs);
7011 /* Approximate half-full slabs, see slub_set_cpu_partial() */
7012 objects = (slabs * oo_objects(s->oo)) / 2;
7013 len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
7021 slabs = data_race(slab->slabs);
7022 objects = (slabs * oo_objects(s->oo)) / 2;
7024 cpu, objects, slabs);
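Given the format strings above, the slabs_cpu_partial sysfs file emits the estimated object total with the slab count in parentheses, followed by a per-CPU breakdown. A hypothetical cache with 32 objects per slab and two cached partial slabs on each of two CPUs would read roughly like:

	64(4) C0=32(2) C1=32(2)

where each objects figure is the half-full approximation from the code above, not an exact count.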
7073 SLAB_ATTR_RO(slabs);
7462 * get here for aliasable slabs so we do not need to support