.. SPDX-License-Identifier: GPL-2.0

:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Tony Luck <tony.luck@intel.com>
          - Vikas Shivappa <vikas.shivappa@intel.com>
CAT (Cache Allocation Technology)       "cat_l3", "cat_l2"

CQM (Cache QoS Monitoring)              "cqm_llc", "cqm_occup_llc"
To use the feature mount the file system::

  # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl

mount options are:

"cdp":
	Enable code/data prioritization in L3 cache allocations.
"cdpl2":
	Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
	Enable the MBA Software Controller (mba_sc) to specify MBA
	bandwidth in MBps.
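For example, a CDP-enabled mount (a sketch; the mount point is the
conventional one used throughout this document)::

  # mount -t resctrl resctrl -o cdp /sys/fs/resctrl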
RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control. Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".

only those files and directories supported by the system will be created.
Cache resource (L3/L2) subdirectory contains the following files
related to allocation:

setting up exclusive cache partitions. Note that

own settings for cache use which can override

Corresponding region is unused. When the system's

Corresponding region is pseudo-locked. No

non-linear. This field is purely informational

"per-thread":

counter can be considered for reuse.

via the file system (making new directories or writing to any of the
control files). E.g.
::

	# echo "L3:0=f7" > schemata
	-sh: echo: write error: Invalid argument
	# cat info/last_cmd_status
	mask f7 has non-consecutive 1-bits
system. The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make

On a system with RDT control features additional directories can be

On a system with RDT monitoring the root directory and other top level

When the resource group is in pseudo-locked mode this file will

pseudo-locked region.
Each resource has its own line and format - see below for details.

cache pseudo-locked region is created by first writing
"pseudo-locksetup" to the "mode" file before writing the cache
pseudo-locked region's schemata to the resource group's "schemata"
file. On successful pseudo-locked region creation the mode will
automatically change to "pseudo-locked".

RDT event. E.g. on a system with two L3 domains there will
be directories "mon_L3_00" and "mon_L3_01".
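A value is obtained by simply reading the event file in the domain
directory. A sketch, assuming the default mount point and an L3 occupancy
read in domain 0 (the byte count shown is illustrative)::

  # cat /sys/fs/resctrl/mon_data/mon_L3_00/llc_occupancy
  11234000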
Resource allocation rules
-------------------------

1) If the task is a member of a non-default group, then the schemata

Resource monitoring rules
-------------------------

1) If a task is a member of a MON group, or non-default CTRL_MON group
Notes on cache occupancy monitoring and control
===============================================

this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.
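To observe the effect, compare the occupancy files of the old and new
monitor groups after moving a task (a sketch; the PID, the group names
"m0"/"m1" and the byte counts are all illustrative)::

  # echo 1234 > /sys/fs/resctrl/mon_groups/m1/tasks
  # cat /sys/fs/resctrl/mon_groups/m0/mon_data/mon_L3_00/llc_occupancy
  3145728
  # cat /sys/fs/resctrl/mon_groups/m1/mon_data/mon_L3_00/llc_occupancy
  0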
max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged with the cache lines of the previous user of RMID.
Hence such RMIDs are placed on a limbo list and checked back if the cache
occupancy has gone down. If there is a time when the system has a lot of
limbo RMIDs which are not ready to be used, the user may see an -EBUSY
during mkdir.
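The threshold itself is tunable via the info directory (a sketch; the
value is a byte count and 131072 is just an example)::

  # echo 131072 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy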
Schemata files - general concepts
---------------------------------
Each line in the file describes one resource. The line starts with
the name of the resource, followed by specific values to be applied
in each of the instances of that resource on the system.
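For instance, on a machine with L3 cache allocation and memory bandwidth
allocation across two L3 domains, the root group's schemata might read
(a sketch; masks and values depend on the system)::

  # cat /sys/fs/resctrl/schemata
  L3:0=fffff;1=fffff
  MB:0=100;1=100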
Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. We could have multiple separate L2
caches on a socket, multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
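For example (a sketch; index3 conventionally corresponds to the L3 cache
and the printed ID depends on the topology)::

  # cat /sys/devices/system/cpu/cpu0/cache/index3/id
  0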
Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". Intel hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. On a system with a 20-bit mask each bit represents 5%
of the capacity of the cache. You could partition the cache into four
equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
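A sketch of that four-way split on one L3 domain (the group names are
arbitrary)::

  # cd /sys/fs/resctrl
  # mkdir p0 p1 p2 p3
  # echo "L3:0=1f" > p0/schemata
  # echo "L3:0=3e0" > p1/schemata
  # echo "L3:0=7c00" > p2/schemata
  # echo "L3:0=f8000" > p3/schemata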
Controller (mba_sc)" which reads the actual bandwidth using MBM counters

whereas the user can switch to the "MBA software controller" mode using
a mount option 'mba_MBps'.

L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------

L2 schemata file details
------------------------

Memory bandwidth Allocation (default mode)
------------------------------------------

Memory b/w domain is L3 cache.
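Its schemata line carries one percentage value per L3 domain::

	MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...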
Memory bandwidth Allocation specified in MBps
---------------------------------------------

Memory bandwidth domain is L3 cache.
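When mounted with "mba_MBps" the values are megabytes per second rather
than percentages::

	MB:<cache_id0>=bw_MBps0;<cache_id1>=bw_MBps1;...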
Reading/writing the schemata file
---------------------------------
Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.
The creation of a cache pseudo-locked region is triggered by a request

to be pseudo-locked. The cache pseudo-locked region is created as follows:

- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation. Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS an application running on
  any CLOS will be able to access the memory in the pseudo-locked region since
  the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.
Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application cache allocation to limit the chances of it being evicted from the
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
"locked" data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which the
pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling; there is no enforcement afterwards and the
application itself needs to ensure it remains with the correct affinity.
Pseudo-locking is accomplished in two stages:

1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.
Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in
   /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.
Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:

1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below). In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the pseudo_lock_l2
   and pseudo_lock_l3 tracepoints are available.
When a pseudo-locked region is created a new debugfs directory is created for

write-only file, pseudo_lock_measure, is present in this directory. The
measurement of the pseudo-locked region depends on the number written to this
debugfs file:

writing "2" to the pseudo_lock_measure file will trigger the L2 cache
residency (cache hits and misses) measurement captured in the
pseudo_lock_l2 tracepoint. See example below.

writing "3" to the pseudo_lock_measure file will trigger the L3 cache
residency (cache hits and misses) measurement captured in the
pseudo_lock_l3 tracepoint.
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region.
Example of cache hits/misses debugging
======================================
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.
::

  #                               _-----=> irqs-off
  #                              / _----=> need-resched
  #                             | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
   pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, minimum b/w of 10% with a memory bandwidth
granularity of 10%.
::

  # mount -t resctrl resctrl /sys/fs/resctrl

"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.

If resctrl is using the software controller (mba_sc) then the user can
enter the max b/w in MB rather than the percentage values.
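For example (a sketch; with mba_sc the MB values are MBps caps, here 1024
MBps on domain 0 and 500 MBps on domain 1, alongside illustrative cache
masks)::

  # echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata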
Again two sockets, but this time with a more realistic 20-bit mask.

Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.
::

  # mount -t resctrl resctrl /sys/fs/resctrl

50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by

it access to the "top" 25% of the cache on socket 0.

  # taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache)::

  # taskset -cp 2 5678
For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like (assume min_bandwidth 10 and bandwidth gran is
10). For our first real time task this would request 20% memory b/w on
socket 0::

  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0; note that it goes to the second task's group "p1" with its
own cache mask::

  # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
A single socket system which has real-time tasks running on cores 4-7 and
non real-time workload assigned to cores 0-3. The real-time tasks share text

  # mount -t resctrl resctrl /sys/fs/resctrl

50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0

to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7.
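The move is a write of the cpu mask to the group's "cpus" file; cores 4-7
correspond to mask 0xf0::

  # echo f0 > p0/cpus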
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
to overlap with that allocation.

system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.
::

  # mount -t resctrl resctrl /sys/fs/resctrl/

cache::

  -sh: echo: write error: Invalid argument

The bit_usage will reflect how the cache is used::

  -sh: echo: write error: Invalid argument
Example of Cache Pseudo-Locking
===============================
Lock portion of L2 cache from cache id 1 using CBM 0x3. Pseudo-locked
region is exposed at /dev/pseudo_lock/newlock that can be provided to
application for argument to mmap().
::

  # mount -t resctrl resctrl /sys/fs/resctrl/

Ensure that there are bits available that can be pseudo-locked; since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata.

Create a new resource group that will be associated with the pseudo-locked
region, indicate that it will be used for a pseudo-locked region, and
configure the requested pseudo-locked region capacity bitmask::

  # mkdir newlock
  # echo pseudo-locksetup > newlock/mode
  # echo 'L2:1=0x3' > newlock/schemata

On success the resource group's mode will change to pseudo-locked, the
bit_usage will reflect the pseudo-locked region, and the character device
exposing the pseudo-locked region will exist::

  # cat newlock/mode
  pseudo-locked

  # ls -l /dev/pseudo_lock/newlock
  crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock
 * Example code to access one page of pseudo-locked cache region

 * cores associated with the pseudo-locked region. Here the cpu

/* Application interacts with pseudo-locked memory @mapping */
Locking between applications
----------------------------

As an example, the allocation of an exclusive reservation of L3 cache
involves:

1. Read the cbm masks from each directory or the per-resource "bit_usage"

  $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask = function-of(output.txt)

  $ flock /sys/fs/resctrl/ ./create-dir.sh
	exit(-1);

	exit(-1);

	exit(-1);

	if (fd == -1) {
		perror("open");
		exit(-1);
Reading monitored data
----------------------

Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
------------------------------------------------------------------------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks::

  # mount -t resctrl resctrl /sys/fs/resctrl

"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.
Example 2 (Monitor a task from its creation)
--------------------------------------------
On a two socket machine (one L3 cache per socket)::

  # mount -t resctrl resctrl /sys/fs/resctrl
Example 3 (Monitor without CAT support or before creating CAT groups)
----------------------------------------------------------------------
Assume a system like HSW has only CQM and no CAT support. In this case
the resctrl will still mount but cannot create CTRL_MON directories.

This can also be used to profile jobs' cache size footprint before being

  # mount -t resctrl resctrl /sys/fs/resctrl
Example 4 (Monitor real time tasks)
-----------------------------------
A single socket system which has real time tasks running on cores 4-7
and non real time tasks on other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.
::

  # mount -t resctrl resctrl /sys/fs/resctrl

Move the cpus 4-7 over to p1::
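  # echo f0 > p1/cpus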