Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.
Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory), and other implementations are in development.
Mounting a cleancache-enabled filesystem should call "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.
An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key.  On systems
that may run multiple kernels sharing a clustered filesystem, calls to
init_shared_fs that specify the same UUID will receive the same pool id,
thus allowing the pages to be shared.
If a get_page is successful on a non-shared pool, the page is flushed (thus
making cleancache an "exclusive" cache).
Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.
succ_gets	- number of gets that were successful
failed_gets	- number of gets that failed
puts		- number of puts attempted (all "succeed")
flushes		- number of flushes attempted
Cleancache (and frontswap) provide interfaces for this transcendent
memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Because tmem is not directly addressable by the kernel, its contents may
be transparently transformed (as with compression) or secretly moved (as
might be useful for write-balancing for some RAM-like devices).  Evicted
page-cache pages (and swap pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk transcendent memory, and the
cleancache (and frontswap) "page-object-oriented" specification provides
a nice way to read and write -- and indirectly "name" -- the pages.
Hypervisors can overcommit RAM with little or no measured impact on some
well-publicized special-case workloads.  Cleancache -- and frontswap --
with a fronting driver in the hypervisor allow otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OSes are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be saved and
repatriated quickly when the RAM is returned.

The identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.
All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops but a filesystem doesn't enable cleancache.
For some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required,
so it seemed prudent to require each filesystem to "opt in".  The
existing set of supported filesystems should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).
The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and the backend and allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And while a copy-based interface may seem odd
for a backend built on real kernel-addressable RAM, it makes perfect sense for
transcendent memory, which by definition the kernel cannot directly address.
4) Why is non-shared cleancache "exclusive"?  And where is the
page "flushed" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory as quickly
as possible.  If workloads emerge for which
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_flush" call.
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but on newer multicore machines with fast
memory-copy speeds it can deliver a significant benefit.
6) How do I add cleancache support for filesystem X? (Boaz Harrosh)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time.  Unusual, misbehaving, or
poorly layered filesystems must either add additional hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache).
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "flush" operations.
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent.
- Currently, the FS blocksize must be the same as PAGESIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "shared_init_fs" cleancache
  hook to get best performance for some workloads.
The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks, so when no backend is registered the cost of each
hook is a single compare of a global variable, avoiding what could
otherwise be thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
overhead when cleancache is compiled in but unused is negligible.