Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory), and other implementations are in development.

Mounting a cleancache-enabled filesystem should call "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel
memory.  An "invalidate_page" will ensure the page no longer is present in
cleancache; an "invalidate_inode" will invalidate all pages associated
with the specified file; and, when a filesystem is unmounted, an
"invalidate_fs" will invalidate all pages in all files specified by the
given pool id and also surrender the pool id.

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key.  On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify
the same UUID will receive the same pool id, thus allowing the pages
to be shared.
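
For reference, a backend supplies these operations in an ops table and
registers it with the cleancache frontend.  The sketch below approximates
the cleancache_ops structure and cleancache_register_ops() call from older
kernels' include/linux/cleancache.h; the my_* functions are hypothetical
placeholders for a backend's implementation::

    #include <linux/module.h>
    #include <linux/cleancache.h>

    /*
     * Hypothetical backend callbacks; each per-page operation is keyed
     * by the "handle": a pool id, a file key, and a page index.
     */
    static int my_init_fs(size_t pagesize);        /* pool id, or < 0 */
    static int my_init_shared_fs(char *uuid, size_t pagesize);
    static int my_get_page(int pool, struct cleancache_filekey key,
                           pgoff_t index, struct page *page);
    static void my_put_page(int pool, struct cleancache_filekey key,
                            pgoff_t index, struct page *page);
    static void my_invalidate_page(int pool, struct cleancache_filekey key,
                                   pgoff_t index);
    static void my_invalidate_inode(int pool, struct cleancache_filekey key);
    static void my_invalidate_fs(int pool);

    static struct cleancache_ops my_cleancache_ops = {
            .init_fs          = my_init_fs,
            .init_shared_fs   = my_init_shared_fs,  /* 128-bit UUID key */
            .get_page         = my_get_page,        /* 0 on hit, < 0 on miss */
            .put_page         = my_put_page,
            .invalidate_page  = my_invalidate_page,
            .invalidate_inode = my_invalidate_inode,
            .invalidate_fs    = my_invalidate_fs,
    };

    static int __init my_backend_init(void)
    {
            /* older kernels return the previous ops here, so that
             * backends can be chained if desired */
            cleancache_register_ops(&my_cleancache_ops);
            return 0;
    }
    module_init(my_backend_init);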

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page, so that it remains accessible
to other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get
coherency, if a get for a given handle fails, subsequent gets for that
handle will never succeed unless preceded by a successful put with that
handle.
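
To illustrate, a sketch in terms of the kernel-side wrappers (simplified
caller code, not real VM code; real puts and gets happen under the page
lock, and use_page() is a hypothetical consumer)::

    /* put-put-get: the second put must win */
    memset(page_address(page), 'A', PAGE_SIZE);
    cleancache_put_page(page);           /* handle H now holds AAA... */
    memset(page_address(page), 'B', PAGE_SIZE);
    cleancache_put_page(page);           /* same handle H, now BBB... */

    if (cleancache_get_page(page) == 0)  /* hit: must see BBB..., never AAA... */
            use_page(page);

    /*
     * get-get: if the get above missed, every subsequent get on handle H
     * must also miss until another successful put for H intervenes.
     */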

* Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
accessible to the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices).  Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM, and efforts to
do it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
146 "fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
148 optimize RAM utilization. And when guest OS's are induced to surrender
149 underutilized RAM (e.g. with "self-ballooning"), page cache pages

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state, and
the proposed "RAMster" driver shares RAM across multiple physical
systems.

* Why does cleancache have its sticky fingers so deep inside the
  filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off, turn into a function-pointer-compare-to-NULL
if config'ed on but no backend claims the ops functions, and into a
compare-struct-element-to-negative if a backend claims the ops but a
filesystem doesn't enable cleancache.
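
As an example of how small the hooks are, the put/invalidate hook in the
page-cache removal path looks roughly like this (paraphrased from
mm/filemap.c; exact placement and signatures vary by kernel version)::

    void __delete_from_page_cache(struct page *page)
    {
            struct address_space *mapping = page->mapping;

            /*
             * If the page is uptodate, save it in cleancache on its
             * way out; otherwise invalidate any existing cleancache
             * entry, since stale data must never outlive the page.
             */
            if (PageUptodate(page) && PageMappedToDisk(page))
                    cleancache_put_page(page);
            else
                    cleancache_invalidate_page(mapping, page);

            /* ... normal removal from the page cache follows ... */
    }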

Some filesystems are built entirely on top of VFS, and for them the
hooks in VFS are sufficient.  But for some filesystems (such as btrfs),
the VFS hooks are incomplete and one or more hooks in fs-specific code
are required; and for others, such as tmpfs, cleancache may be
counterproductive.  So it seemed prudent to require each filesystem to
"opt in" to use cleancache.  Not all filesystems are supported by
cleancache only because they haven't been tested; the existing set should
be sufficient to validate the concept, the opt-in approach means that
untested filesystems are not affected, and the hooks in the existing
filesystems should make it easy to add more filesystems in the future.

* Why not make cleancache asynchronous and batched so it can more
  easily interface with real devices with DMA instead of copying each
  individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out of
the pageframe) before the cleancache get/put call returns, a great
deal of race conditions and potential coherency issues are avoided.
While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

* Why is non-shared cleancache "exclusive"?  And where is the
  page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and to
avoid unnecessary cleancache_invalidate calls.  If you want inclusive,
the page can be "put" immediately following the "get", as in the
sketch below.  If put-after-get for inclusive becomes common, the
interface could easily be extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.
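
A sketch of that put-after-get idiom (hypothetical caller code; on a
non-shared pool the successful get has just invalidated the backend copy)::

    /* Emulate an "inclusive" cleancache on a non-shared pool. */
    if (cleancache_get_page(page) == 0)  /* hit; backend copy now gone */
            cleancache_put_page(page);   /* put it straight back */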

* What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.  Briefly,
performance gains can be significant on most workloads, especially
when memory pressure is high (e.g. when RAM is overcommitted in a
virtual workload); and because the hooks are invoked primarily in place
of or in addition to a disk read/write, overhead is negligible even in
worst-case workloads.  Basically cleancache replaces I/O with
memory-copy CPU overhead; on older single-core systems with slow
memory-copy speeds, cleancache has little value, but on newer multicore
machines, especially consolidated/virtualized machines, it has great
value.

* How do I add cleancache support for filesystem X? (Boaz Harrash)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time (see the sketch after the list below).
Unusual, misbehaving, or poorly layered filesystems must either add
additional hooks and/or undergo extensive additional testing... or
should just not enable the optional cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache).
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations.
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache "init_fs" and
  "invalidate_fs" operations.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs).
- Currently, the FS blocksize must be the same as PAGESIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "shared_init_fs" cleancache
  hook to get best performance for some backends.
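
For a well-behaved filesystem the opt-in really is a single call at mount
time.  A hypothetical sketch (myfs_fill_super is a placeholder; a clustered
FS would call cleancache_init_shared_fs with its UUID instead)::

    #include <linux/cleancache.h>

    static int myfs_fill_super(struct super_block *sb, void *data, int silent)
    {
            /* ... the usual superblock and root-inode setup ... */

            /*
             * Opt in to cleancache: obtains a pool id and stores it in
             * sb->cleancache_poolid; a negative pool id simply means
             * cleancache is unavailable and the hooks stay inert.
             */
            cleancache_init_fs(sb);
            return 0;
    }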

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to determine
whether cleancache is enabled, which on a busy system would mean
(presumably many tens-of-thousands) of unnecessary function calls per
second.  So the global flag keeps the hooks essentially free whenever
cleancache is not in use.
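
Approximately what the inline wrappers look like (paraphrased from older
include/linux/cleancache.h; the pool id test is the
compare-struct-element-to-negative mentioned earlier)::

    /* Cheap per-fs test: did this filesystem opt in and get a pool id? */
    static inline bool cleancache_fs_enabled(struct page *page)
    {
            return page->mapping->host->i_sb->cleancache_poolid >= 0;
    }

    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

            /* global flag first, then the per-fs pool id */
            if (cleancache_enabled && cleancache_fs_enabled(page))
                    ret = __cleancache_get_page(page);
            return ret;
    }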