1				CGROUPS
2				-------
3
4Written by Paul Menage <menage@google.com> based on
5Documentation/cgroups/cpusets.txt
6
7Original copyright statements from cpusets.txt:
8Portions Copyright (C) 2004 BULL SA.
9Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
10Modified by Paul Jackson <pj@sgi.com>
11Modified by Christoph Lameter <clameter@sgi.com>
12
13CONTENTS:
14=========
15
161. Control Groups
17  1.1 What are cgroups ?
18  1.2 Why are cgroups needed ?
19  1.3 How are cgroups implemented ?
20  1.4 What does notify_on_release do ?
21  1.5 What does clone_children do ?
22  1.6 How do I use cgroups ?
232. Usage Examples and Syntax
24  2.1 Basic Usage
25  2.2 Attaching processes
26  2.3 Mounting hierarchies by name
27  2.4 Notification API
283. Kernel API
29  3.1 Overview
30  3.2 Synchronization
31  3.3 Subsystem API
324. Questions
33
341. Control Groups
35=================
36
371.1 What are cgroups ?
38----------------------
39
40Control Groups provide a mechanism for aggregating/partitioning sets of
41tasks, and all their future children, into hierarchical groups with
42specialized behaviour.
43
44Definitions:
45
46A *cgroup* associates a set of tasks with a set of parameters for one
47or more subsystems.
48
49A *subsystem* is a module that makes use of the task grouping
50facilities provided by cgroups to treat groups of tasks in
51particular ways. A subsystem is typically a "resource controller" that
52schedules a resource or applies per-cgroup limits, but it may be
53anything that wants to act on a group of processes, e.g. a
54virtualization subsystem.
55
56A *hierarchy* is a set of cgroups arranged in a tree, such that
57every task in the system is in exactly one of the cgroups in the
hierarchy, and a set of subsystems; each subsystem has subsystem-specific
59state attached to each cgroup in the hierarchy.  Each hierarchy has
60an instance of the cgroup virtual filesystem associated with it.
61
62At any one time there may be multiple active hierarchies of task
63cgroups. Each hierarchy is a partition of all tasks in the system.
64
65User level code may create and destroy cgroups by name in an
66instance of the cgroup virtual file system, specify and query to
67which cgroup a task is assigned, and list the task pids assigned to
68a cgroup. Those creations and assignments only affect the hierarchy
69associated with that instance of the cgroup file system.
70
71On their own, the only use for cgroups is for simple job
72tracking. The intention is that other subsystems hook into the generic
73cgroup support to provide new attributes for cgroups, such as
74accounting/limiting the resources which processes in a cgroup can
75access. For example, cpusets (see Documentation/cgroups/cpusets.txt) allows
76you to associate a set of CPUs and a set of memory nodes with the
77tasks in each cgroup.
78
791.2 Why are cgroups needed ?
80----------------------------
81
82There are multiple efforts to provide process aggregations in the
83Linux kernel, mainly for resource tracking purposes. Such efforts
84include cpusets, CKRM/ResGroups, UserBeanCounters, and virtual server
85namespaces. These all require the basic notion of a
grouping/partitioning of processes, with newly forked processes ending
up in the same group (cgroup) as their parent process.
88
89The kernel cgroup patch provides the minimum essential kernel
90mechanisms required to efficiently implement such groups. It has
91minimal impact on the system fast paths, and provides hooks for
92specific subsystems such as cpusets to provide additional behaviour as
93desired.
94
95Multiple hierarchy support is provided to allow for situations where
96the division of tasks into cgroups is distinctly different for
97different subsystems - having parallel hierarchies allows each
98hierarchy to be a natural division of tasks, without having to handle
99complex combinations of tasks that would be present if several
100unrelated subsystems needed to be forced into the same tree of
101cgroups.
102
103At one extreme, each resource controller or subsystem could be in a
104separate hierarchy; at the other extreme, all subsystems
105would be attached to the same hierarchy.
106
107As an example of a scenario (originally proposed by vatsa@in.ibm.com)
108that can benefit from multiple hierarchies, consider a large
109university server with various users - students, professors, system
110tasks etc. The resource planning for this server could be along the
111following lines:
112
113       CPU :          "Top cpuset"
114                       /       \
115               CPUSet1         CPUSet2
116                  |               |
117               (Professors)    (Students)
118
119               In addition (system tasks) are attached to topcpuset (so
120               that they can run anywhere) with a limit of 20%
121
122       Memory : Professors (50%), Students (30%), system (20%)
123
124       Disk : Professors (50%), Students (30%), system (20%)
125
126       Network : WWW browsing (20%), Network File System (60%), others (20%)
127                               / \
128               Professors (15%)  students (5%)
129
Browsers like Firefox/Lynx go into the WWW network class, while (k)nfsd
goes into the NFS network class.
132
133At the same time Firefox/Lynx will share an appropriate CPU/Memory class
134depending on who launched it (prof/student).
135
With the ability to classify tasks differently for different resources
(by putting those resource subsystems in different hierarchies), the
admin can easily set up a script which receives exec notifications and,
depending on who is launching the browser, runs:
140
141    # echo browser_pid > /sys/fs/cgroup/<restype>/<userclass>/tasks
142
With only a single hierarchy, the admin would potentially have to
create a separate cgroup for every browser launched and associate it
with the appropriate network and other resource classes.  This may lead
to a proliferation of such cgroups.
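
For illustration, such a classification script might look like the
following minimal sketch; the exec-notification hook and the
user-to-class mapping (www_prof etc.) are hypothetical:

  #!/bin/sh
  # Invoked with the pid of a newly exec'd browser; place it in the
  # network class of the launching user.
  pid=$1
  user=$(stat -c %U /proc/$pid)
  case $user in
      prof*)    class=www_prof ;;
      student*) class=www_student ;;
      *)        class=www_other ;;
  esac
  echo $pid > /sys/fs/cgroup/network/$class/tasks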
147
Also, let's say that the administrator would like to give enhanced
network access temporarily to a student's browser (since it is night
and the user wants to do online gaming :)) or give one of the student's
simulation apps enhanced CPU power.

With the ability to write pids directly to resource classes, it's just
a matter of:
155
156       # echo pid > /sys/fs/cgroup/network/<new_class>/tasks
157       (after some time)
158       # echo pid > /sys/fs/cgroup/network/<orig_class>/tasks
159
Without this ability, the admin would have to split the cgroup into
161multiple separate ones and then associate the new cgroups with the
162new resource classes.
163
164
165
1661.3 How are cgroups implemented ?
167---------------------------------
168
Control Groups extend the kernel as follows:
170
171 - Each task in the system has a reference-counted pointer to a
172   css_set.
173
174 - A css_set contains a set of reference-counted pointers to
175   cgroup_subsys_state objects, one for each cgroup subsystem
176   registered in the system. There is no direct link from a task to
177   the cgroup of which it's a member in each hierarchy, but this
178   can be determined by following pointers through the
179   cgroup_subsys_state objects. This is because accessing the
180   subsystem state is something that's expected to happen frequently
181   and in performance-critical code, whereas operations that require a
182   task's actual cgroup assignments (in particular, moving between
   cgroups) are less common. A linked list runs through the cg_list
   field of each task_struct that uses the same css_set, anchored at
   css_set->tasks; a simplified sketch of these structures follows
   this list.
186
 - A cgroup hierarchy filesystem can be mounted for browsing and
188   manipulation from user space.
189
190 - You can list all the tasks (by pid) attached to any cgroup.
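
A heavily abridged sketch of the structures involved (see
include/linux/cgroup.h and include/linux/sched.h for the real
definitions):

struct task_struct {
	/* ... */
	struct css_set *cgroups;	/* reference-counted */
	struct list_head cg_list;	/* link in cgroups->tasks */
};

struct css_set {
	atomic_t refcount;
	struct list_head tasks;		/* tasks using this css_set */
	struct list_head cg_links;	/* cg_cgroup_links, see below */
	/* one state pointer per registered subsystem */
	struct cgroup_subsys_state *subsys[CGROUP_SUBSYS_COUNT];
};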
191
The implementation of cgroups requires a few simple hooks
into the rest of the kernel, none in performance-critical paths:
194
195 - in init/main.c, to initialize the root cgroups and initial
196   css_set at system boot.
197
198 - in fork and exit, to attach and detach a task from its css_set.
199
In addition, a new file system of type "cgroup" may be mounted, to
201enable browsing and modifying the cgroups presently known to the
202kernel.  When mounting a cgroup hierarchy, you may specify a
203comma-separated list of subsystems to mount as the filesystem mount
204options.  By default, mounting the cgroup filesystem attempts to
205mount a hierarchy containing all registered subsystems.
206
207If an active hierarchy with exactly the same set of subsystems already
208exists, it will be reused for the new mount. If no existing hierarchy
209matches, and any of the requested subsystems are in use in an existing
210hierarchy, the mount will fail with -EBUSY. Otherwise, a new hierarchy
211is activated, associated with the requested subsystems.
212
213It's not currently possible to bind a new subsystem to an active
214cgroup hierarchy, or to unbind a subsystem from an active cgroup
215hierarchy. This may be possible in future, but is fraught with nasty
216error-recovery issues.
217
218When a cgroup filesystem is unmounted, if there are any
219child cgroups created below the top-level cgroup, that hierarchy
220will remain active even though unmounted; if there are no
221child cgroups then the hierarchy will be deactivated.
222
223No new system calls are added for cgroups - all support for
224querying and modifying cgroups is via this cgroup file system.
225
226Each task under /proc has an added file named 'cgroup' displaying,
227for each active hierarchy, the subsystem names and the cgroup name
228as the path relative to the root of the cgroup file system.
229
230Each cgroup is represented by a directory in the cgroup file system
231containing the following files describing that cgroup:
232
233 - tasks: list of tasks (by pid) attached to that cgroup.  This list
234   is not guaranteed to be sorted.  Writing a thread id into this file
235   moves the thread into this cgroup.
236 - cgroup.procs: list of tgids in the cgroup.  This list is not
237   guaranteed to be sorted or free of duplicate tgids, and userspace
238   should sort/uniquify the list if this property is required.
239   Writing a thread group id into this file moves all threads in that
240   group into this cgroup.
241 - notify_on_release flag: run the release agent on exit?
242 - release_agent: the path to use for release notifications (this file
243   exists in the top cgroup only)
244
245Other subsystems such as cpusets may add additional files in each
246cgroup dir.
247
248New cgroups are created using the mkdir system call or shell
249command.  The properties of a cgroup, such as its flags, are
modified by writing to the appropriate file in that cgroup's
directory, as listed above.
252
253The named hierarchical structure of nested cgroups allows partitioning
254a large system into nested, dynamically changeable, "soft-partitions".
255
256The attachment of each task, automatically inherited at fork by any
257children of that task, to a cgroup allows organizing the work load
258on a system into related sets of tasks.  A task may be re-attached to
259any other cgroup, if allowed by the permissions on the necessary
260cgroup file system directories.
261
262When a task is moved from one cgroup to another, it gets a new
263css_set pointer - if there's an already existing css_set with the
264desired collection of cgroups then that group is reused, else a new
265css_set is allocated. The appropriate existing css_set is located by
266looking into a hash table.
267
268To allow access from a cgroup to the css_sets (and hence tasks)
269that comprise it, a set of cg_cgroup_link objects form a lattice;
270each cg_cgroup_link is linked into a list of cg_cgroup_links for
271a single cgroup on its cgrp_link_list field, and a list of
272cg_cgroup_links for a single css_set on its cg_link_list.
273
274Thus the set of tasks in a cgroup can be listed by iterating over
275each css_set that references the cgroup, and sub-iterating over
276each css_set's task set.
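
In-kernel code can perform this iteration with the cgroup_iter_*
helpers from kernel/cgroup.c; a minimal sketch (note that a spinlock
is held between iter_start and iter_end, so the loop body must not
sleep):

static int count_cgroup_tasks(struct cgroup *cgrp)
{
	struct cgroup_iter it;
	struct task_struct *task;
	int count = 0;

	cgroup_iter_start(cgrp, &it);
	while ((task = cgroup_iter_next(cgrp, &it)))
		count++;
	cgroup_iter_end(cgrp, &it);
	return count;
}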
277
278The use of a Linux virtual file system (vfs) to represent the
279cgroup hierarchy provides for a familiar permission and name space
280for cgroups, with a minimum of additional kernel code.
281
2821.4 What does notify_on_release do ?
283------------------------------------
284
285If the notify_on_release flag is enabled (1) in a cgroup, then
286whenever the last task in the cgroup leaves (exits or attaches to
287some other cgroup) and the last child cgroup of that cgroup
288is removed, then the kernel runs the command specified by the contents
289of the "release_agent" file in that hierarchy's root directory,
290supplying the pathname (relative to the mount point of the cgroup
291file system) of the abandoned cgroup.  This enables automatic
292removal of abandoned cgroups.  The default value of
293notify_on_release in the root cgroup at system boot is disabled
294(0).  The default value of other cgroups at creation is the current
value of their parent's notify_on_release setting. The default value of
296a cgroup hierarchy's release_agent path is empty.
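
As a sketch, a hierarchy could be configured so that abandoned cgroups
remove themselves (the paths and agent script are illustrative):

# echo /sbin/cgroup_release_agent > /sys/fs/cgroup/cpuset/release_agent
# echo 1 > /sys/fs/cgroup/cpuset/Charlie/notify_on_release

where /sbin/cgroup_release_agent might simply be:

  #!/bin/sh
  # $1 is the path of the released cgroup, relative to the mount point
  rmdir /sys/fs/cgroup/cpuset/$1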
297
2981.5 What does clone_children do ?
299---------------------------------
300
If the clone_children flag is enabled (1) in a cgroup, then every
cgroup created beneath it will call the post_clone callback of each of
the newly created cgroup's subsystems. When a subsystem implements
this callback, it usually copies the configuration of the parent
cgroup; this is the case for the cpuset subsystem.
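
The flag is exposed as the file "cgroup.clone_children". For example,
with the cpuset subsystem mounted at /sys/fs/cgroup/cpuset (see
section 1.6), a sketch:

# echo 1 > /sys/fs/cgroup/cpuset/cgroup.clone_children
# mkdir /sys/fs/cgroup/cpuset/child
# cat /sys/fs/cgroup/cpuset/child/cpuset.cpus   # inherited from parent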
306
3071.6 How do I use cgroups ?
308--------------------------
309
310To start a new job that is to be contained within a cgroup, using
311the "cpuset" cgroup subsystem, the steps are something like:
312
313 1) mount -t tmpfs cgroup_root /sys/fs/cgroup
314 2) mkdir /sys/fs/cgroup/cpuset
315 3) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
316 4) Create the new cgroup by doing mkdir's and write's (or echo's) in
317    the /sys/fs/cgroup virtual file system.
318 5) Start a task that will be the "founding father" of the new job.
319 6) Attach that task to the new cgroup by writing its pid to the
320    /sys/fs/cgroup/cpuset/tasks file for that cgroup.
321 7) fork, exec or clone the job tasks from this founding father task.
322
For example, the following sequence of commands will set up a cgroup
324named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
325and then start a subshell 'sh' in that cgroup:
326
327  mount -t tmpfs cgroup_root /sys/fs/cgroup
328  mkdir /sys/fs/cgroup/cpuset
  mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
330  cd /sys/fs/cgroup/cpuset
331  mkdir Charlie
332  cd Charlie
333  /bin/echo 2-3 > cpuset.cpus
334  /bin/echo 1 > cpuset.mems
335  /bin/echo $$ > tasks
336  sh
337  # The subshell 'sh' is now running in cgroup Charlie
338  # The next line should display '/Charlie'
339  cat /proc/self/cgroup
340
3412. Usage Examples and Syntax
342============================
343
3442.1 Basic Usage
345---------------
346
Creating, modifying, and using cgroups can be done through the cgroup
virtual filesystem.
349
350To mount a cgroup hierarchy with all available subsystems, type:
351# mount -t cgroup xxx /sys/fs/cgroup
352
353The "xxx" is not interpreted by the cgroup code, but will appear in
354/proc/mounts so may be any useful identifying string that you like.
355
356Note: Some subsystems do not work without some user input first.  For instance,
357if cpusets are enabled the user will have to populate the cpus and mems files
358for each new cgroup created before that group can be used.
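
For example, assuming cpusets are among the subsystems mounted above,
a new cgroup must be given CPUs and memory nodes before it can be used
(the values here are illustrative):

# mkdir /sys/fs/cgroup/new_group
# echo 0-1 > /sys/fs/cgroup/new_group/cpuset.cpus
# echo 0 > /sys/fs/cgroup/new_group/cpuset.mems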
359
360As explained in section `1.2 Why are cgroups needed?' you should create
361different hierarchies of cgroups for each single resource or group of
362resources you want to control. Therefore, you should mount a tmpfs on
363/sys/fs/cgroup and create directories for each cgroup resource or resource
364group.
365
366# mount -t tmpfs cgroup_root /sys/fs/cgroup
367# mkdir /sys/fs/cgroup/rg1
368
369To mount a cgroup hierarchy with just the cpuset and memory
370subsystems, type:
371# mount -t cgroup -o cpuset,memory hier1 /sys/fs/cgroup/rg1
372
373To change the set of subsystems bound to a mounted hierarchy, just
374remount with different options:
375# mount -o remount,cpuset,blkio hier1 /sys/fs/cgroup/rg1
376
377Now memory is removed from the hierarchy and blkio is added.
378
379Note this will add blkio to the hierarchy but won't remove memory or
380cpuset, because the new options are appended to the old ones:
381# mount -o remount,blkio /sys/fs/cgroup/rg1
382
To specify a hierarchy's release_agent:
384# mount -t cgroup -o cpuset,release_agent="/sbin/cpuset_release_agent" \
385  xxx /sys/fs/cgroup/rg1
386
387Note that specifying 'release_agent' more than once will return failure.
388
389Note that changing the set of subsystems is currently only supported
390when the hierarchy consists of a single (root) cgroup. Supporting
391the ability to arbitrarily bind/unbind subsystems from an existing
392cgroup hierarchy is intended to be implemented in the future.
393
394Then under /sys/fs/cgroup/rg1 you can find a tree that corresponds to the
395tree of the cgroups in the system. For instance, /sys/fs/cgroup/rg1
396is the cgroup that holds the whole system.
397
398If you want to change the value of release_agent:
399# echo "/sbin/new_release_agent" > /sys/fs/cgroup/rg1/release_agent
400
401It can also be changed via remount.
402
403If you want to create a new cgroup under /sys/fs/cgroup/rg1:
404# cd /sys/fs/cgroup/rg1
405# mkdir my_cgroup
406
407Now you want to do something with this cgroup.
408# cd my_cgroup
409
410In this directory you can find several files:
411# ls
412cgroup.procs notify_on_release tasks
(plus whatever files are added by the attached subsystems)
414
415Now attach your shell to this cgroup:
416# /bin/echo $$ > tasks
417
418You can also create cgroups inside your cgroup by using mkdir in this
419directory.
420# mkdir my_sub_cs
421
422To remove a cgroup, just use rmdir:
423# rmdir my_sub_cs
424
This will fail if the cgroup is in use (it has child cgroups, has
processes attached, or is held alive by another subsystem-specific
reference).
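
If the rmdir fails because tasks are still attached, move them out
first (here to the parent cgroup) and retry:

# for pid in $(cat my_sub_cs/tasks); do /bin/echo $pid > tasks; done
# rmdir my_sub_cs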
428
4292.2 Attaching processes
430-----------------------
431
432# /bin/echo PID > tasks
433
434Note that it is PID, not PIDs. You can only attach ONE task at a time.
435If you have several tasks to attach, you have to do it one after another:
436
437# /bin/echo PID1 > tasks
438# /bin/echo PID2 > tasks
439	...
440# /bin/echo PIDn > tasks
441
442You can attach the current shell task by echoing 0:
443
444# echo 0 > tasks
445
446You can use the cgroup.procs file instead of the tasks file to move all
447threads in a threadgroup at once. Echoing the pid of any task in a
threadgroup to cgroup.procs causes all tasks in that threadgroup to
be attached to the cgroup. Writing 0 to cgroup.procs moves all tasks
450in the writing task's threadgroup.
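
For example, to move every thread of a running multi-threaded process
at once (mythreadedapp is a hypothetical process name; pidof -s
returns a single pid):

# /bin/echo $(pidof -s mythreadedapp) > cgroup.procs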
451
452Note: Since every task is always a member of exactly one cgroup in each
453mounted hierarchy, to remove a task from its current cgroup you must
454move it into a new cgroup (possibly the root cgroup) by writing to the
455new cgroup's tasks file.
456
457Note: Due to some restrictions enforced by some cgroup subsystems, moving
458a process to another cgroup can fail.
459
4602.3 Mounting hierarchies by name
461--------------------------------
462
463Passing the name=<x> option when mounting a cgroups hierarchy
464associates the given name with the hierarchy.  This can be used when
465mounting a pre-existing hierarchy, in order to refer to it by name
466rather than by its set of active subsystems.  Each hierarchy is either
467nameless, or has a unique name.
468
469The name should match [\w.-]+
470
When passing a name=<x> option for a new hierarchy, you need to
specify subsystems manually; the legacy behaviour of mounting all
subsystems when none are explicitly specified is not supported when
you give a hierarchy a name.

The name of the hierarchy appears as part of the hierarchy description
in /proc/mounts and /proc/<pid>/cgroup.
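
For example (a sketch):

# mount -t cgroup -o cpuset,name=mygroup xxx /sys/fs/cgroup/rg1

and later, to mount the same active hierarchy again elsewhere,
referring to it by name:

# mount -t cgroup -o name=mygroup xxx /mnt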
478
4792.4 Notification API
480--------------------
481
There is a mechanism which allows getting notifications about the
changing status of a cgroup.
484
To register a new notification handler you need to:
486 - create a file descriptor for event notification using eventfd(2);
487 - open a control file to be monitored (e.g. memory.usage_in_bytes);
488 - write "<event_fd> <control_fd> <args>" to cgroup.event_control.
489   Interpretation of args is defined by control file implementation;
490
The eventfd will be woken up by the control file implementation or
when the cgroup is removed.
493
To unregister a notification handler just close the eventfd.
495
NOTE: Support for notifications must be implemented by the control
file; see the documentation for the subsystem in question.
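
As an illustration, the following userspace sketch registers for a
memory usage threshold notification, assuming the memory subsystem is
mounted at /sys/fs/cgroup/memory and accepts a threshold in bytes as
<args> (see Documentation/cgroups/memory.txt):

#include <sys/eventfd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	uint64_t count;
	int efd = eventfd(0, 0);
	int cfd = open("/sys/fs/cgroup/memory/my_cgroup/memory.usage_in_bytes",
		       O_RDONLY);
	int ecfd = open("/sys/fs/cgroup/memory/my_cgroup/cgroup.event_control",
			O_WRONLY);

	if (efd < 0 || cfd < 0 || ecfd < 0)
		return 1;

	/* "<event_fd> <control_fd> <args>"; args is a threshold in bytes */
	snprintf(buf, sizeof(buf), "%d %d %d", efd, cfd, 100 * 1024 * 1024);
	if (write(ecfd, buf, strlen(buf)) < 0)
		return 1;

	/* blocks until the threshold is crossed or the cgroup is removed */
	read(efd, &count, sizeof(count));
	printf("notified\n");
	return 0;
}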
498
4993. Kernel API
500=============
501
5023.1 Overview
503------------
504
505Each kernel subsystem that wants to hook into the generic cgroup
506system needs to create a cgroup_subsys object. This contains
507various methods, which are callbacks from the cgroup system, along
508with a subsystem id which will be assigned by the cgroup system.
509
510Other fields in the cgroup_subsys object include:
511
512- subsys_id: a unique array index for the subsystem, indicating which
513  entry in cgroup->subsys[] this subsystem should be managing.
514
515- name: should be initialized to a unique subsystem name. Should be
516  no longer than MAX_CGROUP_TYPE_NAMELEN.
517
518- early_init: indicate if the subsystem needs early initialization
519  at system boot.
520
Each cgroup object created by the system has an array of pointers,
indexed by subsystem id; each of these pointers is entirely managed by
the owning subsystem; the generic cgroup code will never touch it.
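
For instance, a hypothetical subsystem "foo" would embed the generic
state in its own object and recover it with container_of();
cgroup_subsys_state() is the accessor provided by
include/linux/cgroup.h:

struct foo_cgroup {
	struct cgroup_subsys_state css;	/* embedded generic state */
	u64 limit;			/* subsystem-private state */
};

static inline struct foo_cgroup *cgrp_foo(struct cgroup *cgrp)
{
	return container_of(cgroup_subsys_state(cgrp, foo_subsys_id),
			    struct foo_cgroup, css);
}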
524
5253.2 Synchronization
526-------------------
527
528There is a global mutex, cgroup_mutex, used by the cgroup
529system. This should be taken by anything that wants to modify a
530cgroup. It may also be taken to prevent cgroups from being
531modified, but more specific locks may be more appropriate in that
532situation.
533
534See kernel/cgroup.c for more details.
535
536Subsystems can take/release the cgroup_mutex via the functions
537cgroup_lock()/cgroup_unlock().
538
539Accessing a task's cgroup pointer may be done in the following ways:
540- while holding cgroup_mutex
541- while holding the task's alloc_lock (via task_lock())
542- inside an rcu_read_lock() section via rcu_dereference()
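
For example (a sketch; task_subsys_state() performs the
rcu_dereference() internally):

	struct cgroup_subsys_state *css;

	rcu_read_lock();
	css = task_subsys_state(task, foo_subsys_id);
	/* use css; it may become stale after rcu_read_unlock() */
	rcu_read_unlock();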
543
5443.3 Subsystem API
545-----------------
546
547Each subsystem should:
548
549- add an entry in linux/cgroup_subsys.h
550- define a cgroup_subsys object called <name>_subsys
551
552If a subsystem can be compiled as a module, it should also have in its
553module initcall a call to cgroup_load_subsys(), and in its exitcall a
call to cgroup_unload_subsys(). It should also set
<name>_subsys.module = THIS_MODULE in its .c file.
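
Putting this together for the hypothetical "foo" subsystem (SUBSYS()
is the macro used in linux/cgroup_subsys.h; the methods are sketched
in the following sections):

/* in linux/cgroup_subsys.h:  SUBSYS(foo) */

/* in the subsystem's .c file: */
struct cgroup_subsys foo_subsys = {
	.name		= "foo",
	.create		= foo_create,
	.destroy	= foo_destroy,
	.can_attach	= foo_can_attach,
	.subsys_id	= foo_subsys_id,
};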
556
557Each subsystem may export the following methods. The only mandatory
558methods are create/destroy. Any others that are null are presumed to
559be successful no-ops.
560
561struct cgroup_subsys_state *create(struct cgroup_subsys *ss,
562				   struct cgroup *cgrp)
563(cgroup_mutex held by caller)
564
565Called to create a subsystem state object for a cgroup. The
566subsystem should allocate its subsystem state object for the passed
cgroup, returning a pointer to the new object on success or an
ERR_PTR()-encoded error on failure. On success, the subsystem pointer
should point to
569a structure of type cgroup_subsys_state (typically embedded in a
570larger subsystem-specific object), which will be initialized by the
571cgroup system. Note that this will be called at initialization to
572create the root subsystem state for this subsystem; this case can be
573identified by the passed cgroup object having a NULL parent (since
574it's the root of the hierarchy) and may be an appropriate place for
575initialization code.
576
577void destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
578(cgroup_mutex held by caller)
579
580The cgroup system is about to destroy the passed cgroup; the subsystem
581should do any necessary cleanup and free its subsystem state
582object. By the time this method is called, the cgroup has already been
583unlinked from the file system and from the child list of its parent;
584cgroup->parent is still valid. (Note - can also be called for a
585newly-created cgroup if an error occurs after this subsystem's
586create() method has been called for the new cgroup).
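
A minimal create()/destroy() pair for the hypothetical "foo" subsystem
(using the cgrp_foo() accessor sketched in section 3.1) might look
like:

static struct cgroup_subsys_state *foo_create(struct cgroup_subsys *ss,
					      struct cgroup *cgrp)
{
	struct foo_cgroup *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return ERR_PTR(-ENOMEM);
	if (!cgrp->parent) {
		/* root cgroup: do any one-time initialization here */
	}
	return &foo->css;	/* cgroup core initializes the embedded css */
}

static void foo_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
{
	kfree(cgrp_foo(cgrp));
}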
587
588int pre_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp);
589
Called before checking the reference count on each subsystem. This may
be useful for subsystems which have some extra references even if
there are no tasks in the cgroup. If pre_destroy() returns an error
code, rmdir() will fail with that error. Because of this behaviour,
pre_destroy() can be called multiple times against a cgroup.
595
596int can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
597	       struct cgroup_taskset *tset)
598(cgroup_mutex held by caller)
599
600Called prior to moving one or more tasks into a cgroup; if the
601subsystem returns an error, this will abort the attach operation.
602@tset contains the tasks to be attached and is guaranteed to have at
603least one task in it.
604
605If there are multiple tasks in the taskset, then:
606  - it's guaranteed that all are from the same thread group
607  - @tset contains all tasks from the thread group whether or not
608    they're switching cgroups
609  - the first task is the leader
610
Each @tset entry also contains the task's old cgroup, and tasks which
aren't switching cgroup can be skipped easily using the
cgroup_taskset_for_each() iterator. Note that this isn't called on a
fork. If this method returns 0 (success) then this should remain valid
while the caller holds cgroup_mutex, and it is guaranteed that either
attach() or cancel_attach() will be called in the future.
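
A sketch of a can_attach() implementation using the iterator (the
veto condition is purely illustrative):

static int foo_can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
			  struct cgroup_taskset *tset)
{
	struct task_struct *task;

	/* passing @cgrp skips tasks that are already in it */
	cgroup_taskset_for_each(task, cgrp, tset) {
		if (task->flags & PF_EXITING)	/* illustrative veto */
			return -ESRCH;
	}
	return 0;
}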
617
618void cancel_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
619		   struct cgroup_taskset *tset)
620(cgroup_mutex held by caller)
621
Called when a task attach operation has failed after can_attach() has
succeeded. A subsystem whose can_attach() has side-effects should
provide this function, so that the subsystem can implement a rollback;
otherwise it is not necessary. This will be called only for subsystems
whose can_attach() operation has succeeded. The parameters are
identical to can_attach().
627
628void attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
629	    struct cgroup_taskset *tset)
630(cgroup_mutex held by caller)
631
632Called after the task has been attached to the cgroup, to allow any
633post-attachment activity that requires memory allocations or blocking.
634The parameters are identical to can_attach().
635
void fork(struct cgroup_subsys *ss, struct task_struct *task)
637
638Called when a task is forked into a cgroup.
639
640void exit(struct cgroup_subsys *ss, struct task_struct *task)
641
642Called during task exit.
643
644int populate(struct cgroup_subsys *ss, struct cgroup *cgrp)
645(cgroup_mutex held by caller)
646
647Called after creation of a cgroup to allow a subsystem to populate
648the cgroup directory with file entries.  The subsystem should make
649calls to cgroup_add_file() with objects of type cftype (see
650include/linux/cgroup.h for details).  Note that although this
651method can return an error code, the error code is currently not
652always handled well.
653
654void post_clone(struct cgroup_subsys *ss, struct cgroup *cgrp)
655(cgroup_mutex held by caller)
656
657Called during cgroup_create() to do any parameter
658initialization which might be required before a task could attach.  For
659example in cpusets, no task may attach before 'cpus' and 'mems' are set
660up.
661
662void bind(struct cgroup_subsys *ss, struct cgroup *root)
663(cgroup_mutex and ss->hierarchy_mutex held by caller)
664
665Called when a cgroup subsystem is rebound to a different hierarchy
666and root cgroup. Currently this will only involve movement between
667the default hierarchy (which never has sub-cgroups) and a hierarchy
668that is being created/destroyed (and hence has no sub-cgroups).
669
6704. Questions
671============
672
673Q: what's up with this '/bin/echo' ?
674A: bash's builtin 'echo' command does not check calls to write() against
675   errors. If you use it in the cgroup file system, you won't be
676   able to tell whether a command succeeded or failed.
677
Q: When I attach processes, only the first one on the line gets really attached !
A: We can only return one error code per call to write(). So you should
   put only ONE pid per write() call.
681
682