/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v1/memory.json

  line 4:  "PublicDescription": "Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what were specified in the load or store instructions, then the event counts those smaller memory transactions."
  line 16: "PublicDescription": "Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
  line 20: "PublicDescription": "Counts memory accesses issued by the CPU due to store operations. The event counts any memory store access, no matter whether the data is located in any level of cache or external memory. The event also counts atomic load and store operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n1/memory.json

  lines 4, 16, 20: the same three "PublicDescription" strings as in neoverse-v1 above.
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/memory.json

  lines 4, 12, 16: the same three "PublicDescription" strings as in neoverse-v1 above.
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/memory.json

  lines 4, 16, 20: the same three "PublicDescription" strings as in neoverse-v1 above.
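These four files describe the same three memory-access events. A minimal user-space sketch of reading such a counter with perf_event_open(2) follows; the raw encoding 0x13 is an assumption drawn from the Arm PMU architecture's MEM_ACCESS event number, not from these files, so the "EventCode" field in the JSON for the CPU at hand is authoritative.

    /* Sketch: count one memory-access event for this thread. */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
            struct perf_event_attr attr;
            long long count;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_RAW;      /* raw PMU event */
            attr.config = 0x13;             /* assumed MEM_ACCESS encoding */
            attr.disabled = 1;
            attr.exclude_kernel = 1;

            /* this thread, any CPU, no group */
            fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            /* ... workload to measure ... */
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
            if (read(fd, &count, sizeof(count)) == sizeof(count))
                    printf("memory accesses: %lld\n", count);
            close(fd);
            return 0;
    }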
/linux/fs/jfs/jfs_extent.c

  lines 119-121 (in extAlloc()): try to allocate a smaller number of blocks (producing a smaller extent), with this smaller number of blocks consisting of the requested number of blocks rounded down to the next smaller
  line 124 (in extAlloc()): is smaller than the number of blocks per page.
  lines 285-287: a smaller number of blocks (producing a smaller extent), with this smaller number of blocks consisting of the requested number of blocks rounded down to the next smaller power of 2
  line 290: is smaller tha [all...]
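The retry policy these comments describe is easy to sketch. The helpers below are hypothetical, not the jfs code, and assume only the rules quoted above: round the failed request down to the next smaller power of 2, and give up once the candidate drops below the blocks-per-page minimum.

    #include <stdint.h>

    /* round n down to a power of 2 by clearing all but the top bit */
    static uint64_t round_down_pow2(uint64_t n)
    {
            while (n & (n - 1))
                    n &= n - 1;
            return n;
    }

    /* next, smaller allocation to try after a failure; 0 means stop */
    static uint64_t next_try(uint64_t nblocks, uint64_t blocks_per_page)
    {
            uint64_t smaller = round_down_pow2(nblocks);

            if (smaller == nblocks)         /* already a power of 2 */
                    smaller >>= 1;          /* step down one more */
            return smaller < blocks_per_page ? 0 : smaller;
    }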
/linux/tools/perf/pmu-events/arch/powerpc/power8/memory.json

  line 53:  "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for a demand load",
  line 54:  "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro"
  line 83:  "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for a demand load",
  line 84:  "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or"
  line 119: "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
  line 120: same "PublicDescription" string as line 54.
  line 203: "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
  line 204: same "PublicDescription" string as line 84.
/linux/Documentation/filesystems/spufs/spufs.rst

  lines 87, 101, 146, 198, 219: If a count smaller than four is requested, read returns -1 and
  line 121: operations on an open wbox file are: write(2) If a count smaller than
  lines 204, 225: If a count smaller than four is requested, write returns -1 and
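A minimal sketch of the four-byte contract these lines repeat, assuming a mailbox-status file inside a mounted spufs context (the path below is illustrative):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            uint32_t val;
            int fd = open("/spu/myctx/mbox_stat", O_RDONLY); /* illustrative path */

            if (fd < 0)
                    return 1;
            /* Requesting fewer than four bytes would fail per the doc,
             * so always read at least one u32. */
            if (read(fd, &val, sizeof(val)) == sizeof(val))
                    printf("mbox_stat: %u\n", val);
            close(fd);
            return 0;
    }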
/linux/Documentation/networking/ipsec.rst

  line 20: defined in section 3, is not smaller than the size of the original
  line 31: where IP datagrams of size smaller than the threshold are sent in the
  line 37: is smaller than the threshold or the compressed len is larger than original
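The IPComp decision these three lines allude to reduces to two checks. A sketch under the stated rules (datagrams below the threshold bypass compression, and compression that does not shrink the payload is discarded); the function name is illustrative:

    static int send_compressed(unsigned int orig_len,
                               unsigned int comp_len,
                               unsigned int threshold)
    {
            if (orig_len < threshold)
                    return 0;       /* below threshold: send uncompressed */
            if (comp_len >= orig_len)
                    return 0;       /* compression didn't help: send original */
            return 1;
    }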
/linux/lib/linear_ranges.c

  line 132: * input value. Value is matching if it is equal or smaller than given
  line 136: * value smaller or equal to given value
  line 172: * input value. Value is matching if it is equal or smaller than given
  lines 174-175: * @found is set true. If a range with values smaller than given value is found but the range max is being smaller than given value, then the range's
  line 180: * range with a value smaller or equal to given value
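These kernel-doc fragments describe a "smaller or equal" selector lookup. A standalone sketch of that matching rule, using a struct modeled on the one in lib/linear_ranges.c (the names here are illustrative, not the kernel API):

    #include <stdbool.h>

    struct lin_range {              /* modeled on struct linear_range */
            unsigned int min;       /* value at the lowest selector */
            unsigned int min_sel;   /* lowest selector */
            unsigned int max_sel;   /* highest selector */
            unsigned int step;      /* value increase per selector step */
    };

    /* Pick the highest selector whose value is smaller or equal to
     * val; returns false if even the range minimum exceeds val. */
    static bool selector_low(const struct lin_range *r, unsigned int val,
                             unsigned int *sel)
    {
            if (val < r->min)
                    return false;
            if (r->step == 0)       /* flat range: one value for all */
                    *sel = r->min_sel;
            else
                    *sel = r->min_sel + (val - r->min) / r->step;
            if (*sel > r->max_sel)
                    *sel = r->max_sel;      /* clamp: value stays <= val */
            return true;
    }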
/linux/include/media/i2c/ov7670.h

  line 12: int min_width; /* Filter out smaller sizes */
  line 13: int min_height; /* Filter out smaller sizes */
/linux/tools/testing/selftests/bpf/progs/pyperf180.c

  line 10: * to specify which cpu version is used for compilation. So a smaller
  line 15: * repo checkpoint does not have __BPF_CPU_VERSION__, a smaller unroll_count
/linux/drivers/staging/media/atomisp/pci/isp/kernels/vf/vf_1.0/ia_css_vf.host.c

  line 46: * be smaller than the requested viewfinder resolution.
  line 64 (in sh_css_vf_downscale_log2()): /* downscale until width smaller than the viewfinder width. We don't
  line 73 (in sh_css_vf_downscale_log2()): /* now width is smaller, so we go up one step */
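The two comments outline the whole algorithm: keep halving the width one log2 step at a time until it drops below the viewfinder width, then back off one step. A sketch of that loop (not the driver's code; names are illustrative):

    static unsigned int vf_downscale_log2(unsigned int out_width,
                                          unsigned int vf_width)
    {
            unsigned int ds_log2 = 0;

            /* downscale until width smaller than the viewfinder width */
            while (vf_width && (out_width >> ds_log2) >= vf_width)
                    ds_log2++;
            /* now width is smaller, so we go up one step */
            if (ds_log2 > 0)
                    ds_log2--;
            return ds_log2;
    }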
/linux/arch/powerpc/kernel/cacheinfo.c

  line 422: static void link_cache_lists(struct cache *smaller, struct cache *bigger)
  line 424: 	while (smaller->next_local) {
  line 425: 		if (smaller->next_local == bigger)
  line 427: 		smaller = smaller->next_local;
  line 430: 	smaller->next_local = bigger;
  line 436: 	WARN_ONCE((smaller->level == 1 && bigger->level > 2) ||
  line 437: 		  (smaller->level > 1 && bigger->level != smaller->level + 1),
  line 439: 	smaller [all...]
/linux/drivers/target/iscsi/iscsi_target_nodeattrib.c

  line 56 (in iscsit_na_dataout_timeout()): pr_err("Requested DataOut Timeout %u smaller than"
  line 81 (in iscsit_na_dataout_timeout_retries()): pr_err("Requested DataOut Timeout Retries %u smaller"
  line 112 (in iscsit_na_nopin_timeout()): pr_err("Requested NopIn Timeout %u smaller than"
  line 162 (in iscsit_na_nopin_response_timeout()): pr_err("Requested NopIn Response Timeout %u smaller"
/linux/include/uapi/linux/falloc.h

  line 23: * smaller depending on the filesystem and/or the configuration of the
  line 54: * boundaries, but this boundary may be larger or smaller depending on
  line 71: * block size boundaries, but this boundary may be larger or smaller
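These comments document granularity caveats for the fallocate(2) mode flags. A minimal, runnable example of the hole-punching mode (real flags from this header; as the comments warn, the filesystem may round the affected region to its block-size boundary):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    /* Punch a hole over [offset, offset + len); the deallocated
     * region may be rounded to filesystem block boundaries. */
    int punch_hole(int fd, off_t offset, off_t len)
    {
            /* FALLOC_FL_PUNCH_HOLE must be OR'd with FALLOC_FL_KEEP_SIZE */
            if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, len) < 0) {
                    perror("fallocate");
                    return -1;
            }
            return 0;
    }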
/linux/Documentation/scheduler/sched-deadline.rst

  line 90:  then, if the scheduling deadline is smaller than the current time, or
  line 357: If, instead, the total utilization is smaller than M, then non real-time
  line 364: maximum tardiness of each task is smaller or equal than
  line 380: of the tasks running on such a CPU is smaller or equal than 1.
  line 384: running on such a CPU is smaller or equal than 1:
  line 405: such a time with the interval size t. If h(t) is smaller than t (that is,
  line 407: smaller than the size of the interval) for all the possible values of t, then
  line 430: period smaller than the one of the first task. Hence, if all the tasks
  line 433: smaller than the absolute deadline of Task_1, which is t + P). As a
  line 456: As seen, enforcing that the total utilization is smaller tha [all...]
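The two tests these lines refer to can be written compactly. A sketch in the standard notation of the schedulability literature the document draws on (WCET_i is task i's worst-case execution time, P_i its period, D_i its relative deadline; the document's exact formulas govern):

    \sum_{i=1}^{n} U_i \;=\; \sum_{i=1}^{n} \frac{WCET_i}{P_i} \;\le\; M
    \qquad \text{(bounded tardiness on } M \text{ CPUs)}

    h(t) \;=\; \sum_{i=1}^{n} \max\!\left(0,\;
        \left\lfloor \frac{t - D_i}{P_i} \right\rfloor + 1\right) WCET_i
    \;\le\; t \quad \forall t > 0
    \qquad \text{(single-CPU demand bound test)}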
/linux/sound/usb/usx2y/usbusx2y.h

  line 17: * Bigger is safer operation, smaller gives lower latencies.
  line 25: * this define out, and thereby produce smaller, faster code.
/linux/Documentation/userspace-api/media/dvb/dmx-reqbufs.rst

  line 54: number allocated in the ``count`` field. The ``count`` can be smaller than the number requested, even zero, when the driver runs out of free memory. A larger
  line 57: at ``size``, and can be smaller than what's requested.
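A sketch of the negotiation these lines describe, assuming the DMX_REQBUFS ioctl and struct dmx_requestbuffers from linux/dmx.h (check that header for the exact names; treat this as illustrative, not a definitive use of the API):

    #include <linux/dmx.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Ask for 'want' buffers; return how many the driver granted,
     * which may be smaller than requested, even zero. */
    int request_bufs(int fd, unsigned int want)
    {
            struct dmx_requestbuffers req;

            memset(&req, 0, sizeof(req));
            req.count = want;
            req.size = 2 * 1024 * 1024;     /* a hint; may come back smaller */
            if (ioctl(fd, DMX_REQBUFS, &req) < 0)
                    return -1;
            return req.count;               /* use what was actually granted */
    }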
/linux/usr/Kconfig

  line 150: is slowest among the choices. The initramfs size is about 10% smaller
  line 165: slowest. The initramfs size is about 33% smaller with LZMA in
  line 177: 30% smaller with XZ in comparison to gzip. Decompression speed is
/linux/kernel/rcu/Kconfig

  lines 17, 28: smaller systems.
  line 189: want the default because the smaller leaf-level fanout keeps
  line 281: be especially helpful for smaller numbers of CPUs, where
/linux/Documentation/core-api/swiotlb.rst

  line 125: max_sectors_kb might be even smaller, such as 252 KiB.
  line 159: IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single slot
  line 167: maximum parallelism, but since an area can't be smaller than IO_TLB_SEGSIZE
  line 213: allocation may not be available. The dynamic pool allocator tries smaller sizes
  line 219: the number of areas will likely be smaller. For example, with a new pool size
  line 231: few CPUs. It allows the default swiotlb pool to be smaller so that memory is
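The arithmetic behind these limits is worth making explicit. Assuming the default 2 KiB slot size and IO_TLB_SEGSIZE = 128 contiguous slots (both assumptions; the document's stated values govern), the largest single bounce buffer is:

    \text{max bounce buffer} \;=\; 128 \times 2\,\text{KiB} \;=\; 256\,\text{KiB}

The block layer may then report a smaller limit still, such as the 252 KiB max_sectors_kb the document mentions.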
/linux/drivers/gpu/drm/i915/gem/i915_gem_lmem.c

  line 64: * smaller, where the internal fragmentation cost is too great when rounding up
  line 69: * @page_size. If this is smaller than the regions min_page_size then it can
/linux/drivers/md/dm-vdo/indexer/config.h

  lines 65, 91: /* Smaller (16), Small (64) or large (256) indices */
/linux/Documentation/admin-guide/device-mapper/dm-integrity.rst

  lines 44-46: is capped at the size of the metadata area, but may be smaller, thereby requiring multiple buffers to represent the full metadata area. A smaller buffer size will produce a smaller resulting read/write operation to the
  line 194: Use a smaller padding of the tag area that is more
/linux/include/net/netmem.h

  lines 182, 310, 357: * e.g. when it's a header buffer, performs faster and generates smaller
  line 256: * system memory, performs faster and generates smaller object code (no