15 does in the kernel.
21 This document captures the design of the online filesystem check feature for
23 The purpose of this document is threefold:
25 - To help kernel distributors understand exactly what the XFS online fsck
28 - To help people reading the code to familiarize themselves with the relevant
29 concepts and design points before they start digging into the code.
31 - To help developers maintaining the system by capturing the reasons
34 As the online fsck code is merged, the links in this document to topic branches
37 This document is licensed under the terms of the GNU Public License, v2.
38 The primary author is Darrick J. Wong.
41 Part 1 defines what fsck tools are and the motivations for writing a new one.
44 Part 4 discusses the user interface and the intended usage modes of the new
46 Parts 5 and 6 show off the high level components and how they fit together, and
64 - Retrieve the named data blobs at any time.
71 operations internal to the filesystem, such as internal consistency checking
73 Summary metadata, as the name implies, condense information contained in
76 The filesystem check (fsck) tool examines all the metadata in a filesystem
83 As a word of caution -- the primary goal of most Linux fsck tools is to restore
84 the filesystem metadata to a consistent state, not to maximize the data
88 Filesystems of the 20th century generally lacked any redundancy in the ondisk
98 | System administrators avoid data loss by increasing the number of |
99 | separate storage systems through the creation of backups; and they avoid |
100 | downtime by increasing the redundancy of each storage system through the |
102 | fsck tools address only the first problem. |
105 TLDR; Show Me the Code!
108 Code is posted to the kernel.org git trees as follows:
112 Each kernel patchset adding an online repair function will use the same branch
113 name across the kernel, xfsprogs, and fstests git repos.
118 The online fsck tool described here will be the third tool in the history of
122 The first program, ``xfs_check``, was created as part of the XFS debugger
124 It walks all metadata in the filesystem looking for inconsistencies in the
129 The second program, ``xfs_repair``, was created to be faster and more robust
130 than the first program.
134 while it scans the metadata of the entire filesystem.
135 The most important feature of this tool is its ability to respond to
138 Space usage metadata are rebuilt from the observed file metadata.
143 The current XFS tools leave several problems unsolved:
145 1. **User programs** suddenly **lose access** to the filesystem when unexpected
146 shutdowns occur as a result of silent corruptions in the metadata.
149 2. **Users** experience a **total loss of service** during the recovery period
152 3. **Users** experience a **total loss of service** if the filesystem is taken
155 4. **Data owners** cannot **check the integrity** of their stored data without
158 performed by the storage system administrator might suffice.
161 with corruptions if they **lack the means** to assess filesystem health
162 while the filesystem is online.
171 Given this definition of the problems to be solved and the actors who would
172 benefit, the proposed solution is a third fsck tool that acts on a running
178 ``xfs_scrub`` is the name of the driver program.
179 The rest of this document presents the goals and use cases of the new fsck
181 discusses the similarities and differences with existing tools.
186 | Throughout this document, the existing offline fsck tool can also be |
188 | The userspace driver program for the new online fsck tool can be |
190 | The kernel portion of online fsck that validates metadata is called |
191 | "online scrub", and portion of the kernel that fixes metadata is called |
195 The naming hierarchy is broken up into objects known as directories and files
196 and the physical space is split into pieces known as allocation groups.
198 contain the damage when corruptions occur.
199 The division of the filesystem into principal objects (allocation groups and
201 repairs on a subset of the filesystem.
204 Even if a piece of filesystem metadata can only be regenerated by scanning the
205 entire system, the scan can still be done in the background while other file
209 metadata to enable targeted checking and repair operations while the system
219 The first is the userspace driver program ``xfs_scrub``, which is responsible
221 reacting to the outcomes appropriately, and reporting results to the system
223 The second and third are in the kernel, which implements functions to check
229 | For brevity, this document shortens the phrase "online fsck work |
233 Scrub item types are delineated in a manner consistent with the Unix design
241 the offline fsck program can handle.
242 However, online fsck cannot be running 100% of the time, which means that
244 If these errors cause the next mount to fail, offline fsck is the only
246 This limitation means that maintenance of the offline fsck tool will continue.
247 A second limitation of online fsck is that it must follow the same resource
248 sharing and lock acquisition rules as the regular filesystem.
253 However, both of these limitations are acceptable tradeoffs to satisfy the
262 The userspace driver program ``xfs_scrub`` splits the work of checking and
265 on the success of all previous phases.
266 The seven phases are as follows:
268 1. Collect geometry information about the mounted filesystem and computer,
269 discover the online fsck capabilities of the kernel, and open the
275 If corruption is found in the inode header or inode btree and ``xfs_scrub``
278 Repairs are implemented by using the information in the scrub item to
279 resubmit the kernel scrub call with the repair flag enabled; this is
280 discussed in the next section.
283 3. Check all metadata of every file in the filesystem.
292 phase, if the caller permits them.
293 Before starting repairs, the summary counters are checked and any necessary
294 repairs are performed so that subsequent repairs will not fail the resource
297 made somewhere in the filesystem.
298 Free space in the filesystem is trimmed at the end of phase 4 if the
301 5. By the start of this phase, all primary and secondary filesystem metadata
303 Summary counters such as the free space counts and quota resource counts
309 6. If the caller asks for a media scan, read all allocated and written data
310 file extents in the filesystem.
311 The ability to use hardware-assisted data file integrity checking is new
312 to online fsck; neither of the previous tools has this capability.
313 If media errors occur, they will be mapped to the owning files and reported.
315 7. Re-check the summary counters and present the caller with a summary of
324 The kernel scrub code uses a three-step strategy for checking and repairing
325 the one aspect of a metadata object represented by a scrub item:
327 1. The scrub item of interest is checked for corruptions; opportunities for
328 optimization; and for values that are directly controlled by the system
330 If the item is not corrupt or does not need optimization, resources are
331 released and the positive scan results are returned to userspace.
332 If the item is corrupt or could be optimized but the caller does not permit
333 this, resources are released and the negative scan results are returned to
335 Otherwise, the kernel moves on to the second step.
337 2. The repair function is called to rebuild the data structure.
339 rather than try to salvage the existing structure.
340 If the repair fails, the scan results from the first step are returned to
342 Otherwise, the kernel moves on to the third step.
344 3. In the third step, the kernel runs the same checks over the new metadata
345 item to assess the efficacy of the repairs.
346 The results of the reassessment are returned to userspace.
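The flow can be sketched as follows; ``check_item`` and ``repair_item`` are hypothetical stand-ins for the kernel's per-type scrub and repair functions, not actual symbols:

.. code-block:: c

    /*
     * Schematic model of the three-step strategy described above.
     * check_item() returns nonzero if problems were found; repair_item()
     * returns nonzero if the rebuild failed.
     */
    struct scrub_item;

    int check_item(struct scrub_item *sc);
    int repair_item(struct scrub_item *sc);

    enum scrub_outcome { SCRUB_CLEAN, SCRUB_DIRTY, SCRUB_REPAIR_FAILED };

    enum scrub_outcome scrub_one_item(struct scrub_item *sc, int repair_allowed)
    {
        int problems = check_item(sc);          /* step 1: check */

        if (!problems)
            return SCRUB_CLEAN;                 /* release resources, report clean */
        if (!repair_allowed)
            return SCRUB_DIRTY;                 /* report the negative result */
        if (repair_item(sc))                    /* step 2: rebuild from other metadata */
            return SCRUB_REPAIR_FAILED;         /* hand back the step 1 findings */

        /* step 3: run the same checks over the rebuilt structure */
        return check_item(sc) ? SCRUB_DIRTY : SCRUB_CLEAN;
    }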
358 users either because they are directly created by the user or they index
359 objects created by the user
376 Scrub obeys the same rules as regular filesystem accesses for resource and lock
379 Primary metadata objects are the simplest for scrub to process.
380 The principal filesystem object (either an allocation group or an inode) that
381 owns the item being scrubbed is locked to guard against concurrent updates.
382 The check function examines every record associated with the type for obvious
385 Repairs for this class of scrub item are simple, since the repair function
386 starts by holding all the resources acquired in the previous step.
387 The repair function scans available metadata as needed to record all the
388 observations needed to complete the structure.
389 Next, it stages the observations in a new ondisk structure and commits it
390 atomically to complete the repair.
391 Finally, the storage from the old data structure is carefully reaped.
393 Because ``xfs_scrub`` locks a primary object for the duration of the repair,
394 this is effectively an offline repair operation performed on a subset of the
396 This minimizes the complexity of the repair code because it is not necessary to
398 any other part of the filesystem.
400 trying to access the damaged structure will be blocked until repairs complete.
401 The only infrastructure needed by the repair code are the staging area for
403 Despite these limitations, the advantage that online repair holds is clear:
404 targeted work on individual shards of the filesystem avoids total loss of
413 in-memory array prior to formatting the new ondisk structure, which is very
414 similar to the list-based algorithm discussed in section 2.3 ("List-Based
416 However, any data structure builder that maintains a resource lock for the
417 duration of the repair is *always* an offline algorithm.
425 but are only needed for online fsck or for reorganization of the filesystem.
434 to the secondary object but needs to check primary metadata, which runs counter
435 to the usual order of resource acquisition.
436 Frequently, this means that full filesystems scans are necessary to rebuild the
441 Under these conditions, ``xfs_scrub`` cannot lock resources for the entire
442 duration of the repair.
446 Depending on the requirements of the specific repair function, the staging
447 index will either have the same format as the ondisk structure or a design
449 The next step is to release all locks and start the filesystem scan.
450 When the repair scanner needs to record an observation, the staging data are
451 locked long enough to apply the update.
452 While the filesystem scan is in progress, the repair function hooks the
453 filesystem so that it can apply pending filesystem updates to the staging
455 Once the scan is done, the owning object is re-locked, the live data is used to
456 write a new ondisk structure, and the repairs are committed atomically.
457 The hooks are disabled and the staging area is freed.
458 Finally, the storage from the old data structure is carefully reaped.
462 Live filesystem code has to be hooked so that the repair function can observe
464 The staging area has to become a fully functional parallel structure so that
465 updates can be merged from the hooks.
466 Finally, the hook, the filesystem scan, and the inode locking model must be
468 should be applied to the staging structure.
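A rough model of the locking around the staging area, using illustrative names rather than the kernel's hook and staging infrastructure, looks like this:

.. code-block:: c

    /*
     * Illustrative model of the staging area coordination: both the
     * scanner and the filesystem hooks funnel their observations through
     * a short-lived lock.  Names and types here are not the kernel's.
     */
    #include <pthread.h>

    struct staging {
        pthread_mutex_t lock;
        /* ...in-memory shadow of the structure being rebuilt... */
    };

    /* called by the scan when it observes an existing record */
    void staging_add_observation(struct staging *stg, const void *rec)
    {
        pthread_mutex_lock(&stg->lock);
        /* ...insert rec into the shadow structure... */
        pthread_mutex_unlock(&stg->lock);
    }

    /* called from a filesystem hook when a writer changes the metadata */
    void staging_apply_live_update(struct staging *stg, const void *delta)
    {
        pthread_mutex_lock(&stg->lock);
        /* ...fold the delta into the shadow structure... */
        pthread_mutex_unlock(&stg->lock);
    }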
470 In theory, the scrub implementation could apply these same techniques for
473 Programs attempting to access the damaged structures are not blocked from
477 Inspiration for the secondary metadata repair strategy was drawn from section
483 The sidecar index mentioned above bears some resemblance to the side file
486 build the new structure as quickly as possible; and an auxiliary structure that
487 captures all updates that would be committed to the index by other threads were
488 the new index already online.
489 After the index building scan finishes, the updates recorded in the side file
490 are applied to the new index.
491 To avoid conflicts between the index builder and other writer threads, the
492 builder maintains a publicly visible cursor that tracks the progress of the
493 scan through the record space.
494 To avoid duplication of work between the side file and the index builder, side
495 file updates are elided when the record ID for the update is greater than the
496 cursor position within the record ID space.
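The elision rule can be illustrated with a small predicate; the names below are illustrative only:

.. code-block:: c

    /*
     * The builder's cursor sweeps the record keyspace, so a concurrent
     * update only needs to land in the side file if the builder has
     * already passed that key; records beyond the cursor will be picked
     * up by the scan itself.
     */
    int side_file_wants_update(unsigned long long cursor,
                               unsigned long long record_id)
    {
        if (record_id > cursor)
            return 0;   /* ahead of the scan; elide the side file update */
        return 1;       /* already scanned; log it to the side file */
    }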
498 To minimize changes to the rest of the codebase, XFS online repair keeps the
500 In other words, there is no attempt to expose the keyspace of the new index
502 The complexity of such an approach would be very high and perhaps more
505 **Future Work Question**: Can the full scan and live update code used to
509 employed these live scans to build a shadow copy of the metadata and then
510 compared the shadow records to the ondisk records.
511 However, doing that is a fair amount more work than what the checking functions
513 The live scans and hooks were developed much later.
514 That in turn increases the runtime of those scrub functions.
519 Metadata structures in this last category summarize the contents of primary
522 smaller than the primary metadata which they represent.
533 acquisition follow the same paths as regular filesystem accesses.
535 The superblock summary counters have special requirements due to the underlying
536 implementation of the incore counters, and will be treated separately.
537 Check and repair of the other types of summary counters (quota resource counts
538 and file link counts) employ the same filesystem scanning and hooking
539 techniques as outlined above, but because the underlying data are sets of
540 integer counters, the staging data need not be a fully functional mirror of the
550 quotacheck can use the incremental view deltas described in section 2.14 to
551 track pending changes to the block and inode usage counts in each transaction,
552 and commit those changes to a dquot side file when the transaction commits.
553 Delta tracking is necessary for dquots because the index builder scans inodes,
554 whereas the data structure being rebuilt is an index of dquots.
555 Link count checking combines the view deltas and commit step into one because
556 it sets attributes of the objects being scanned instead of writing them to a
564 During the development of online fsck, several risk factors were identified
565 that may make the feature unsuitable for certain distributors and users.
569 - **Decreased performance**: Adding metadata indices to the filesystem
570 increases the time cost of persisting changes to disk, and the reverse space
572 System administrators who require the maximum performance can disable the
574 reduces the ability of online fsck to find inconsistencies and repair them.
576 - **Incorrect repairs**: As with all software, there might be defects in the
577 software that result in incorrect repairs being written to the filesystem.
578 Systematic fuzz testing (detailed in the next section) is employed by the
580 The kernel build system provides Kconfig options (``CONFIG_XFS_ONLINE_SCRUB``
583 The xfsprogs build system has a configure option (``--enable-scrub=no``) that
584 disables building of the ``xfs_scrub`` binary, though this is not a risk
585 mitigation if the kernel functionality remains enabled.
589 If the keyspaces of several metadata indices overlap in some manner but a
590 coherent narrative cannot be formed from records collected, then the repair
592 To reduce the chance that a repair will fail with a dirty transaction and
593 render the filesystem unusable, the online repair functions have been
594 designed to stage and validate all new records before committing the new
599 and the ability to perform administrative changes.
600 Running this automatically in the background scares people, so the systemd
601 background service is configured to run with only the privileges required.
602 Obviously, this cannot address certain problems like the kernel crashing or
603 deadlocking, but it should be sufficient to prevent the scrub process from
604 escaping and reconfiguring the system.
605 The cron job does not have this protection.
609 spraying exploit code onto the public mailing list for instant zero-day
611 In the view of this author, the benefit is realized only when the fuzz
612 operators help to **fix** the flaws, but this opinion apparently is not
614 The XFS maintainers' continuing ability to manage these events presents an
615 ongoing risk to the stability of the development process.
616 Automated testing should front-load some of the risk while the feature is
628 1. Detect inconsistencies in the metadata;
635 that the software behaves within expectations.
637 of every aspect of a fsck tool until the introduction of low-cost virtual
639 With ample hardware availability in mind, the testing strategy for the online
640 fsck project involves differential analysis against the existing fsck tools and
647 The primary goal of any free software QA effort is to make testing as
648 inexpensive and widespread as possible to maximize the scaling advantages of
650 In other words, testing should maximize the breadth of filesystem configuration
652 This improves code quality by enabling the authors of online fsck to find and
656 The Linux filesystem community shares a common QA testing suite,
660 would run both the ``xfs_check`` and ``xfs_repair -n`` commands on the test and
662 This provides a level of assurance that the kernel and the fsck tools stay in
664 During development of the online checking code, fstests was modified to run
665 ``xfs_scrub -n`` between each test to ensure that the new checking code
666 produces the same results as the two existing fsck tools.
669 ``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
671 after it exists, or trigger complaints from the online check.
673 To complete the first phase of development of online repair, fstests was
675 This enables a comparison of the effectiveness of online repair as compared to
676 the existing offline repair tools.
684 to test the rather common fault that entire metadata blocks get corrupted.
685 This required the creation of fstests library code that can create a filesystem
688 a single block of a specific type of metadata object, trash it with the
689 existing ``blocktrash`` command in ``xfs_db``, and test the reaction of a
692 This earlier test suite enabled XFS developers to test the ability of the
693 in-kernel validation functions and the ability of the offline fsck tool to
694 detect and eliminate the inconsistent metadata.
695 This part of the test suite was extended to cover online fsck in exactly the
700 * For each metadata object existing on the filesystem:
704 * Test the reactions of:
706 1. The kernel verifiers to stop obviously bad metadata
713 The testing plan for online fsck includes extending the existing fs testing
715 of every metadata field of every metadata object in the filesystem.
717 block in the filesystem to simulate the effects of memory corruption and
719 Given that fstests already contains the ability to create a filesystem
720 containing every metadata format known to the filesystem, ``xfs_db`` can be
725 * For each metadata object existing on the filesystem...
735 3. Toggle the most significant bit
736 4. Toggle the middle bit
737 5. Toggle the least significant bit
740 8. Randomize the contents
742 * ...test the reactions of:
744 1. The kernel verifiers to stop obviously bad metadata
751 This is quite the combinatoric explosion!
754 check the responses of XFS' fsck tools.
755 Since the introduction of the fuzz testing framework, these tests have been
758 The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
760 the older tool.
762 These tests have been very valuable for ``xfs_scrub`` in the same ways -- they
763 allow the online fsck developers to compare online fsck against offline fsck,
764 and they enable XFS developers to find deficiencies in the code base.
777 A unique requirement to online fsck is the ability to operate on a filesystem
780 impact on the running system, the online repair code should never introduce
781 inconsistencies into the filesystem metadata, and regular workloads should
784 the following ways:
790 * Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
793 force-repairing the whole filesystem doesn't cause problems.
795 freezing and thawing the filesystem.
797 remounting the filesystem read-only and read-write.
798 * The same, but running ``fsx`` instead of ``fsstress``. (Not done yet?)
800 Success is defined by the ability to run all of these tests without observing
806 and the `evolution of existing per-function stress testing
812 The primary user of online fsck is the system administrator, just like offline
821 For administrators who want the absolute freshest information about the
824 The program checks every piece of metadata in the filesystem while the
825 administrator waits for the results to be reported, just like the existing
828 option to increase the verbosity of the information reported.
830 A new feature of ``xfs_scrub`` is the ``-x`` option, which employs the error
831 correction capabilities of the hardware to check data file contents.
832 The media scan is not enabled by default because it may dramatically increase
835 The output of a foreground invocation is captured in the system log.
837 The ``xfs_scrub_all`` program walks the list of mounted filesystems and
839 It serializes scans for any filesystems that resolve to the same top level
845 To reduce the workload of system administrators, the ``xfs_scrub`` package
848 The background service configures scrub to run with as little privilege as
849 possible, the lowest CPU and IO priority, and in a CPU-constrained single
851 This can be tuned by the systemd administrator at any time to suit the latency
854 The output of the background service is also captured in the system log.
856 errors) can be emailed automatically by setting the ``EMAIL_ADDR`` environment
857 variable in the following service files:
863 The decision to enable the background scan is left to the system administrator.
864 This can be done by enabling either of the following services:
869 This automatic weekly scan is configured out of the box to perform an
873 redundancy can be provided elsewhere above the filesystem, or the storage
876 The systemd unit file definitions have been subjected to a security audit
877 (as of systemd 249) to ensure that the xfs_scrub processes have as little
878 access to the rest of the system as possible.
880 were restricted to the minimum required; sandboxing and system call filtering
881 were set up to the maximal extent possible; and access to the
882 filesystem tree was restricted to the minimum needed to start the program and
883 access the filesystem being scanned.
884 The service definition files restrict CPU usage to 80% of one CPU core, and
886 This measure was taken to minimize delays in the rest of the filesystem.
887 No such hardening has been performed for the cron job.
890 `Enabling the xfs_scrub background service
897 The information is updated whenever ``xfs_scrub`` is run, or whenever
898 inconsistencies are detected in the filesystem metadata during regular
900 System administrators should use the ``health`` command of ``xfs_spaceman`` to
902 If problems have been observed, the administrator can schedule a reduced
903 service window to run the online repair tool to correct the problem.
904 Failing that, the administrator can decide to schedule a maintenance window to
905 run the traditional offline repair tool to correct the problem.
907 **Future Work Question**: Should the health reporting integrate with the new
912 *Answer*: These questions remain unanswered, but should be a part of the
925 This section discusses the key algorithms and data structures of the kernel
926 code that provide the ability to check and repair metadata while the system
928 The first chapters in this section reveal the pieces that provide the
930 The remainder of this section presents the mechanisms through which XFS
936 Starting with XFS version 5 in 2012, XFS updated the format of nearly every
938 "unique" identifier (UUID), an owner code, the ondisk address of the block,
940 When loading a block buffer from disk, the magic number, UUID, owner, and
941 ondisk address confirm that the retrieved block matches the specific owner of
942 the current filesystem, and that the information contained in the block is
943 supposed to be found at the ondisk address.
944 The first three components enable checking tools to disregard alleged metadata
945 that doesn't belong to the filesystem, and the fourth component enables the
948 Whenever a file system operation modifies a block, the change is submitted
949 to the log as part of a transaction.
950 The log then processes these transactions marking them done once they are
952 The logging code maintains the checksum and the log sequence number of the last
955 be introduced between the computer and its storage devices.
957 log updates to the filesystem.
960 the filesystem to detect obvious corruption when reading metadata blocks from
964 For more information, please see the documentation for
970 The original design of XFS (circa 1993) is an improvement upon 1980s Unix
975 the filesystem, even at the cost of data integrity.
976 Filesystem designers in the early 21st century chose different strategies to
980 For XFS, a different redundancy strategy was chosen to modernize the design:
983 By adding a new index, the filesystem retains most of its ability to scale
984 well to heavily threaded workloads involving large datasets, since the primary
985 file metadata (the directory tree, the file block map, and the allocation
987 Like any system that improves redundancy, the reverse-mapping feature increases
989 However, it has two critical advantages: first, the reverse index is key to
992 Second, the different ondisk storage format of the reverse mapping btree
993 defeats device-level deduplication because the filesystem requires real
999 | A criticism of adding the secondary index is that it does nothing to |
1000 | improve the robustness of user data storage itself. |
1003 | copy-writes, which age the filesystem prematurely. |
1006 | As for metadata, the complexity of adding a new secondary index of space |
1010 | layers in the kernel. |
1013 The information captured in a reverse space mapping record is as follows:
.. code-block:: c

    struct xfs_rmap_irec {
        xfs_agblock_t    rm_startblock;   /* extent start block */
        xfs_extlen_t     rm_blockcount;   /* extent length */
        uint64_t         rm_owner;        /* extent owner */
        uint64_t         rm_offset;       /* offset within the owner */
        unsigned int     rm_flags;        /* state flags */
    };
1025 The first two fields capture the location and size of the physical space,
1027 The owner field tells scrub which metadata structure or file inode has been
1029 For space allocated to files, the offset field tells scrub where the space was
1030 mapped within the file fork.
1031 Finally, the flags field provides extra information about the space usage --
1035 Online filesystem checking judges the consistency of each primary metadata
1037 The reverse mapping index plays a key role in the consistency checking process
1040 Program runtime and ease of resource acquisition are the only real limits to
1044 * The absence of an entry in the free space information.
1045 * The absence of an entry in the inode index.
1046 * The absence of an entry in the reference count data if the file is not
1048 * The correspondence of an entry in the reverse mapping information.
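As a sketch of the last cross-reference above, the comparison amounts to a containment-and-ownership test against a reverse mapping record; the helper below is purely illustrative and operates on a single record, whereas the real code walks the rmap btree with a cursor:

.. code-block:: c

    #include <stdint.h>

    /*
     * Does a reverse mapping record (rm_startblock/rm_blockcount/rm_owner)
     * confirm that the extent [agbno, agbno + len) belongs to @owner?
     */
    int rmap_confirms_owner(uint32_t rm_startblock, uint32_t rm_blockcount,
                            uint64_t rm_owner, uint32_t agbno,
                            uint32_t len, uint64_t owner)
    {
        if (rm_owner != owner)
            return 0;
        /* the record must cover the entire extent being checked */
        return rm_startblock <= agbno &&
               agbno + len <= rm_startblock + rm_blockcount;
    }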
1053 the above primary metadata are in doubt.
1054 The checking code for most primary metadata follows a path similar to the
1057 2. Proving the consistency of secondary metadata with the primary metadata is
1061 btree block requires locking the file and searching the entire btree to
1062 confirm the block.
1063 Instead, scrub relies on rigorous cross-referencing during the primary space
1066 3. Consistency scans must use non-blocking lock acquisition primitives if the
1067 required locking order is not the same order used by regular filesystem
1069 For example, if the filesystem normally takes a file ILOCK before taking
1070 the AGF buffer lock but scrub wants to take a file ILOCK while holding
1072 This means that forward progress during this part of a scan of the reverse
1077 The details of how these records are staged, written to disk, and committed
1078 into the filesystem are covered in subsequent sections.
1083 The first step of checking a metadata structure is to examine every record
1084 contained within the structure and its relationship with the rest of the
1087 metadata from wreaking havoc on the system.
1088 Each of these layers contributes information that helps the kernel to make
1089 three decisions about the health of a metadata structure:
1092 - Is this structure inconsistent with the rest of the system
1094 - Is there so much damage around the filesystem that cross-referencing is not
1096 - Can the structure be optimized to improve performance or reduce the size of
1098 - Does the structure contain data that is not inconsistent but deserves review
1099 by the system administrator (``XFS_SCRUB_OFLAG_WARNING``) ?
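These decisions are reported to userspace through the scrub ioctl's output flags. A simplified caller might interpret them as follows; error handling is elided, and the header install path is assumed to come from the xfsprogs development headers:

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <xfs/xfs_fs.h>     /* assumed location of the XFS ioctl definitions */

    int scrub_one(int fd, __u32 type, __u32 agno)
    {
        struct xfs_scrub_metadata sm;

        memset(&sm, 0, sizeof(sm));
        sm.sm_type = type;      /* which metadata object to check */
        sm.sm_agno = agno;      /* which allocation group, if applicable */

        if (ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm))
            return -1;

        if (sm.sm_flags & (XFS_SCRUB_OFLAG_CORRUPT | XFS_SCRUB_OFLAG_XCORRUPT))
            printf("metadata is corrupt\n");
        if (sm.sm_flags & XFS_SCRUB_OFLAG_XFAIL)
            printf("cross-referencing could not be completed\n");
        if (sm.sm_flags & XFS_SCRUB_OFLAG_PREEN)
            printf("structure could be optimized\n");
        if (sm.sm_flags & XFS_SCRUB_OFLAG_WARNING)
            printf("structure needs administrative review\n");
        return 0;
    }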
1101 The following sections describe how the metadata scrubbing process works.
1106 The lowest layer of metadata protection in XFS consists of the metadata verifiers built
1107 into the buffer cache.
1108 These functions perform inexpensive internal consistency checking of the block
1111 - Does the block belong to this filesystem?
1113 - Does the block belong to the structure that asked for the read?
1117 - Is the type of data stored in the block within a reasonable range of what
1120 - Does the physical location of the block match the location it was read from?
1122 - Does the block checksum match the data?
1124 The scope of the protections here is very limited -- verifiers can only
1125 establish that the filesystem code is reasonably free of gross corruption bugs
1126 and that the storage system is reasonably competent at retrieval.
1127 Corruption problems observed at runtime cause the generation of health reports,
1128 failed system calls, and in the extreme case, filesystem shutdowns if the
1129 corrupt metadata force the cancellation of a dirty transaction.
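A schematic model of such a verifier, using simplified types rather than the real ondisk header formats and CRC32c helpers, might look like this:

.. code-block:: c

    #include <stdint.h>
    #include <string.h>

    /* Simplified stand-in for a self-describing metadata block header. */
    struct blk_header {
        uint32_t magic;         /* structure type signature */
        uint8_t  uuid[16];      /* filesystem UUID */
        uint64_t owner;         /* owning AG or inode */
        uint64_t daddr;         /* ondisk address of this block */
        uint32_t crc;           /* checksum of the block contents */
    };

    int verify_block(const struct blk_header *hdr, uint32_t want_magic,
                     const uint8_t *fs_uuid, uint64_t read_daddr,
                     uint32_t computed_crc)
    {
        if (hdr->magic != want_magic)           /* right kind of structure? */
            return 0;
        if (memcmp(hdr->uuid, fs_uuid, 16))     /* right filesystem? */
            return 0;
        if (hdr->daddr != read_daddr)           /* block where it claims to be? */
            return 0;
        if (hdr->crc != computed_crc)           /* contents intact? */
            return 0;
        return 1;
    }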
1132 block of a structure in the course of checking the structure.
1135 failure to cross-reference once the full examination is complete.
1142 After the buffer cache, the next level of metadata protection is the internal
1143 record verification code built into the filesystem.
1144 These checks are split between the buffer verifiers, the in-filesystem users of
1145 the buffer cache, and the scrub code itself, depending on the amount of higher
1147 The scope of checking is still internal to the block.
1150 - Does the type of data stored in the block match what scrub is expecting?
1152 - Does the block belong to the owning structure that asked for the read?
1154 - If the block contains records, do the records fit within the block?
1156 - If the block tracks internal free space information, is it consistent with
1157 the record areas?
1159 - Are the records contained inside the block free of obvious corruptions?
1163 within the dynamically allocated parts of an allocation group and within
1164 the filesystem.
1168 Btree records spanning an interval of the btree keyspace are checked for
1179 that a value is within the possible range.
1194 - Quota timer expiration (if resource usage exceeds the soft limit)
1199 After internal block checks, the next higher level of checking is
1201 For regular runtime code, the cost of these checks is considered to be
1204 The exact set of cross-referencing is highly dependent on the context of the
1207 The XFS btree code has keyspace scanning functions that online fsck uses to
1209 Specifically, scrub can scan the key space of an index to determine if that
1211 For the reverse mapping btree, it is possible to mask parts of the key for the
1212 purposes of performing a keyspace scan so that scrub can decide if the rmap
1213 btree contains records mapping a certain extent of physical space without the
1214 sparseness of the rest of the rmap keyspace getting in the way.
1216 Btree blocks undergo the following checks before cross-referencing:
1218 - Does the type of data stored in the block match what scrub is expecting?
1220 - Does the block belong to the owning structure that asked for the read?
1222 - Do the records fit within the block?
1224 - Are the records contained inside the block free of obvious corruptions?
1226 - Are the name hashes in the correct order?
1228 - Do node pointers within the btree point to valid block addresses for the type
1231 - Do child pointers point towards the leaves?
1233 - Do sibling pointers point across the same level?
1235 - For each node block record, does the record key accurately reflect the contents
1236 of the child block?
1243 - Does the reverse mapping index list only the appropriate owner as the
1246 - Are none of the blocks claimed as free space?
1248 - If these aren't file data blocks, are none of the blocks claimed as space
1255 - If there's a parent node block, do the keys listed for this block match the
1258 - Do the sibling pointers point to valid blocks? Of the same level?
1260 - Do the child pointers point to valid blocks? Of the next level down?
1266 - Does the reverse mapping index list no owners of this space?
1268 - Is this space not claimed by the inode index for inodes?
1270 - Is it not mentioned by the reference count index?
1272 - Is there a matching record in the other free space btree?
1280 - Do cleared bits in the holemask correspond with inode clusters?
1282 - Do set bits in the freemask correspond with inode records with zero link
1289 - Do all the fields that summarize information about the file forks actually
1292 - Does each inode with zero link count correspond to a record in the free
1299 - Is this space not mentioned by the inode btrees?
1301 - If this is a CoW fork mapping, does it correspond to a CoW entry in the
1308 - Within the space subkeyspace of the rmap btree (that is to say, all
1309 records mapped to a particular space extent and ignoring the owner info),
1310 are there the same number of reverse mapping records for each block as the
1313 Proposed patchsets are the series to find gaps in
1333 Both the kernel and userspace can access the keys and values, subject to
1335 Most typically these fragments are metadata about the file -- origins, security
1341 A file's extended attributes are stored in blocks mapped by the attr fork.
1342 The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
1343 Block 0 in the attribute fork is always the top of the structure, but otherwise
1344 each of the three types of blocks can be found at any offset in the attr fork.
1345 Leaf blocks contain attribute key records that point to the name and the value.
1346 Names are always stored elsewhere in the same leaf block.
1347 Values that are less than 3/4 the size of a filesystem block are also stored
1348 elsewhere in the same leaf block.
1350 If the leaf information exceeds a single filesystem block, a dabtree (also
1351 rooted at block 0) is created to map hashes of the attribute names to leaf
1352 blocks in the attr fork.
1354 Checking an extended attribute structure is not so straightforward due to the
1356 Scrub must read each block mapped by the attr fork and ignore the non-leaf
1359 1. Walk the dabtree in the attr fork (if present) to ensure that there are no
1360 irregularities in the blocks or dabtree mappings that do not point to
1363 2. Walk the blocks of the attr fork looking for leaf blocks.
1366 a. Validate that the name does not contain invalid characters.
1368 b. Read the attr value.
1369 This performs a named lookup of the attr name to ensure the correctness
1370 of the dabtree.
1371 If the value is stored in a remote block, this also validates the
1372 integrity of the remote value block.
1377 The filesystem directory tree is a directed acyclic graph structure, with files
1378 constituting the nodes, and directory entries (dirents) constituting the edges.
1382 Each directory file must have exactly one directory pointing to the file.
1389 The first partition contains directory entry data blocks.
1392 If the directory entry data grows beyond one block, the second partition (which
1394 information and an index that maps hashes of the dirent names to directory data
1395 blocks in the first partition.
1397 If this second partition grows beyond one block, the third partition is
1400 If the free space has been separated and the second partition grows again
1406 1. Walk the dabtree in the second partition (if present) to ensure that there
1407 are no irregularities in the blocks or dabtree mappings that do not point to
1410 2. Walk the blocks of the first partition looking for directory entries.
1413 a. Does the name contain no invalid characters?
1415 b. Does the inumber correspond to an actual, allocated inode?
1417 c. Does the child inode have a nonzero link count?
1419 d. If a file type is included in the dirent, does it match the type of the
1422 e. If the child is a subdirectory, does the child's dotdot pointer point
1423 back to the parent?
1425 f. If the directory has a second partition, perform a named lookup of the
1426 dirent name to ensure the correctness of the dabtree.
1428 3. Walk the free space list in the third partition (if present) to ensure that
1429 the free spaces it describes are really unused.
1438 As stated in previous sections, the directory/attribute btree (dabtree) index
1440 Internally, it maps a 32-bit hash of the name to a block offset within the
1443 The internal structure of a dabtree closely resembles the btrees that record
1446 The format of leaf and node records is the same -- each entry points to the
1447 next level down in the hierarchy, with dabtree node records pointing to dabtree
1449 in the fork.
1451 Checking and cross-referencing the dabtree is very similar to what is done for
1454 - Does the type of data stored in the block match what scrub is expecting?
1456 - Does the block belong to the owning structure that asked for the read?
1458 - Do the records fit within the block?
1460 - Are the records contained inside the block free of obvious corruptions?
1462 - Are the name hashes in the correct order?
1464 - Do node pointers within the dabtree point to valid fork offsets for dabtree
1467 - Do leaf pointers within the dabtree point to valid fork offsets for directory
1470 - Do child pointers point towards the leaves?
1472 - Do sibling pointers point across the same level?
1474 - For each dabtree node record, does the record key accurately reflect the
1475 contents of the child dabtree block?
1477 - For each dabtree leaf record, does the record key accurately reflect the
1478 contents of the directory or attr block?
1486 In theory, the amount of available resources (data blocks, inodes, realtime
1487 extents) can be found by walking the entire filesystem.
1489 maintain summaries of this information in the superblock.
1490 Cross-referencing these values against the filesystem metadata should be a
1491 simple matter of walking the free space and inode metadata in each AG and the
1501 After performing a repair, the checking code is run a second time to validate
1502 the new structure, and the results of the health assessment are recorded
1503 internally and returned to the calling process.
1504 This step is critical for enabling the system administrator to monitor the status
1505 of the filesystem and the progress of any repairs.
1506 For developers, it is a useful means to judge the efficacy of error detection
1507 and correction in the online and offline checking tools.
1514 These chains, once committed to the log, are restarted during log recovery if
1515 the system crashes while processing the chain.
1516 Because the AG header buffers are unlocked between transactions within a chain,
1520 the metadata are temporarily inconsistent with each other, and rebuilding is
1528 The count should be bumped whenever a new item is added to the chain.
1529 The count should be dropped when the filesystem has locked the AG header
1530 buffers and finished the work.
1532 * When online fsck wants to examine an AG, it should lock the AG header
1534 If the count is zero, proceed with the checking operation.
1535 If it is nonzero, cycle the buffer locks to allow the chain to make forward
1540 Details about the discovery of this situation are presented in the
1541 :ref:`next section <chain_coordination>`, and details about the solution
1546 Discovery of the Problem
1549 Midway through the development of online scrubbing, the fsstress tests
1553 The root cause of these reports is the eventual consistency model introduced by
1554 the expansion of deferred work items and compound transaction chains when
1564 items to commit to freeing some space in one transaction while deferring the
1566 The transaction sequence looks like this:
1568 1. The first transaction contains a physical update to the file's block mapping
1569 structures to remove the mapping from the btree blocks.
1570 It then attaches to the in-memory transaction an action item to schedule
1575 Returning to the example above, the action item tracks the freeing of both
1576 the unmapped space from AG 7 and the block mapping btree (BMBT) block from
1578 Deferred frees recorded in this manner are committed in the log by creating
1579 an EFI log item from the ``struct xfs_extent_free_item`` object and
1580 attaching the log item to the transaction.
1581 When the log is persisted to disk, the EFI item is written into the ondisk
1585 2. The second transaction contains a physical update to the free space btrees
1586 of AG 3 to release the former BMBT block and a second physical update to the
1587 free space btrees of AG 7 to release the unmapped file space.
1588 Observe that the physical updates are resequenced in the correct order
1590 Attached to the transaction is an extent free done (EFD) log item.
1591 The EFD contains a pointer to the EFI logged in transaction #1 so that log
1592 recovery can tell if the EFI needs to be replayed.
1594 If the system goes down after transaction #1 is written back to the filesystem
1595 but before #2 is committed, a scan of the filesystem metadata would show
1597 of the unmapped space.
1600 reconstruct the incore state of the intent item and finish it.
1601 In the example above, the log must replay both frees described in the recovered
1602 EFI to complete the recovery phase.
1606 * Log items must be added to a transaction in the correct order to prevent
1607 conflicts with principal objects that are not held by the transaction.
1609 completed before the last update to free the extent, and extents should not
1610 be reallocated until that last update commits to the log.
1614 but as long as the first subtlety is handled, this should not affect the
1617 * Unmounting the filesystem flushes all pending work to disk, which means that
1618 offline fsck never sees the temporary inconsistencies caused by deferred
1624 During the design phase of the reverse mapping and reflink features, it was
1625 decided that it was impractical to cram all the reverse mapping updates for a
1629 * The block mapping update itself
1630 * A reverse mapping update for the block mapping update
1631 * Fixing the freelist
1632 * A reverse mapping update for the freelist fix
1634 * A shape change to the block mapping btree
1635 * A reverse mapping update for the btree update
1636 * Fixing the freelist (again)
1637 * A reverse mapping update for the freelist fix
1639 * An update to the reference counting information
1640 * A reverse mapping update for the refcount update
1641 * Fixing the freelist (a third time)
1642 * A reverse mapping update for the freelist fix
1645 * Fixing the freelist (a fourth time)
1646 * A reverse mapping update for the freelist fix
1648 * Freeing the space used by the block mapping btree
1649 * Fixing the freelist (a fifth time)
1650 * A reverse mapping update for the freelist fix
1655 remove the space from a staging area and again to map it into the file!
1659 This reduces the worst case size of transaction reservations by breaking the
1660 work into a long chain of small updates, which increases the degree of eventual
1661 consistency in the system.
1665 However, online fsck changes the rules -- remember that although physical
1666 updates to per-AG structures are coordinated by locking the buffers for AG
1669 all the validation work without releasing the lock.
1670 If the main lock for a space btree is an AG header buffer lock, scrub may have
1673 mapping update but not the corresponding refcount update, the two AG btrees
1676 If a repair is attempted in this state, the results will be catastrophic!
1682 acquire the higher level lock in AG order before making any changes.
1685 without simulating the entire operation.
1687 make the filesystem very slow.
1689 2. Make the deferred work coordinator code aware of consecutive intent items
1690 targeting the same AG and have it hold the AG header buffers locked across
1691 the transaction roll between updates.
1692 This would introduce a lot of complexity into the coordinator since it is
1693 only loosely coupled with the actual deferred work items.
1694 It would also fail to solve the problem because deferred work items can
1699 protect the data structure being scrubbed to look for pending operations.
1700 The checking and repair operations must factor these pending operations into
1701 the evaluations being performed.
1702 This solution is a nonstarter because it is *extremely* invasive to the main
1712 There are two key properties to the drain mechanism.
1713 First, the counter is incremented when a deferred work item is *queued* to a
1714 transaction, and it is decremented after the associated intent done log item is
1716 The second property is that deferred work can be added to a transaction without
1718 locking that AG header buffer to log the physical updates and the intent done
1720 The first property enables scrub to yield to running transaction chains, which
1722 The second property of the drain is key to the correct coordination of scrub,
1725 For regular filesystem code, the drain works as follows:
1727 1. Call the appropriate subsystem function to add a deferred work item to a
1730 2. The function calls ``xfs_defer_drain_bump`` to increase the counter.
1732 3. When the deferred item manager wants to finish the deferred work item, it
1735 4. The ``->finish_item`` implementation logs some changes and calls
1736 ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any threads
1737 waiting on the drain.
1739 5. The subtransaction commits, which unlocks the resource associated with the
1742 For scrub, the drain works as follows:
1744 1. Lock the resource(s) associated with the metadata being scrubbed.
1745 For example, a scan of the refcount btree would lock the AGI and AGF header
1748 2. If the counter is zero (``xfs_defer_drain_busy`` returns false), there are no
1749 chains in progress and the operation may proceed.
1751 3. Otherwise, release the resources grabbed in step 1.
1753 4. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``), then go
1756 To avoid polling in step 4, the drain provides a waitqueue for scrub threads to
1757 be woken up whenever the intent count drops to zero.
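A userspace model of the drain protocol, substituting a mutex and condition variable for the kernel's atomic counter and waitqueue, captures the shape of the coordination:

.. code-block:: c

    #include <pthread.h>

    struct drain {
        pthread_mutex_t lock;
        pthread_cond_t  wait;
        unsigned int    pending;    /* queued but unfinished intents */
    };

    void drain_bump(struct drain *dr)       /* deferred work item queued */
    {
        pthread_mutex_lock(&dr->lock);
        dr->pending++;
        pthread_mutex_unlock(&dr->lock);
    }

    void drain_drop(struct drain *dr)       /* intent done item logged */
    {
        pthread_mutex_lock(&dr->lock);
        if (--dr->pending == 0)
            pthread_cond_broadcast(&dr->wait);
        pthread_mutex_unlock(&dr->lock);
    }

    void drain_wait(struct drain *dr)       /* scrub, after releasing the AG buffers */
    {
        pthread_mutex_lock(&dr->lock);
        while (dr->pending > 0)
            pthread_cond_wait(&dr->wait, &dr->lock);
        pthread_mutex_unlock(&dr->lock);
    }

In the kernel the wait also has to contend with signals and with re-acquiring the AG header buffers afterwards, as laid out in the numbered steps above.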
1759 The proposed patchset is the
1768 Online fsck for XFS separates the regular filesystem from the checking and
1770 However, there are a few parts of online fsck (such as the intent drains, and
1771 later, live update hooks) where it is useful for the online fsck code to know
1772 what's going on in the rest of the filesystem.
1773 Since it is not expected that online fsck will be constantly running in the
1774 background, it is very important to minimize the runtime overhead imposed by
1775 these hooks when online fsck is compiled into the kernel but not actively
1777 Taking locks in the hot path of a writer thread to access a data structure only
1778 to find that no further action is necessary is expensive -- on the author's
1780 Fortunately, the kernel supports dynamic code patching, which enables XFS to
1783 This sled has an overhead of however long it takes the instruction decoder to
1784 skip past the sled, which seems to be on the order of less than 1ns and
1787 When online fsck enables the static key, the sled is replaced with an
1788 unconditional branch to call the hook code.
1789 The switchover is quite expensive (~22000ns) but is paid entirely by the
1791 enter online fsck at the same time, or if multiple filesystems are being
1792 checked at the same time.
1793 Changing the branch direction requires taking the CPU hotplug lock, and since
1796 accessed in the memory reclaim paths.
1797 To minimize contention on the CPU hotplug lock, care should be taken not to
1801 filesystem operations when xfs_scrub is not running, the intended usage
1804 - The hooked part of XFS should declare a static-scoped static key that
1806 The ``DEFINE_STATIC_KEY_FALSE`` macro takes care of this.
1807 The static key itself should be declared as a ``static`` variable.
1809 - When deciding to invoke code that's only used by scrub, the regular
1810 filesystem should call the ``static_branch_unlikely`` predicate to avoid the
1811 scrub-only hook code if the static key is not enabled.
1813 - The regular filesystem should export helper functions that call
1814 ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
1816 Wrapper functions make it easy to compile out the relevant code if the kernel
1820 the ``xchk_fsgates_enable`` from the setup function to enable a specific
1824 Callers had better be sure they really need the functionality gated by the
1825 static key; the ``TRY_HARDER`` flag is useful here.
1829 If it detects a conflict between scrub and the running transactions, it will
1831 If the caller of the helper has not enabled the static key, the helper will
1832 return -EDEADLOCK, which should result in the scrub being restarted with the
1834 The scrub setup function should detect that flag, enable the static key, and
1835 try the scrub again.
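Putting the pieces together, a sketch of the pattern using the kernel's jump label API looks like the following; the key and hook names are illustrative, not the symbols used by XFS online fsck:

.. code-block:: c

    #include <linux/jump_label.h>

    void example_scrub_hook(void);

    static DEFINE_STATIC_KEY_FALSE(scrub_hook_gate);

    /* hot path in the regular filesystem */
    void example_fs_op(void)
    {
        if (static_branch_unlikely(&scrub_hook_gate))
            example_scrub_hook();   /* only reached while scrub is running */
    }

    /* called from scrub setup and teardown to patch the branch in or out */
    void example_hooks_enable(void)
    {
        static_branch_inc(&scrub_hook_gate);
    }

    void example_hooks_disable(void)
    {
        static_branch_dec(&scrub_hook_gate);
    }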
1838 For more information, please see the kernel documentation of
1846 Some online checking functions work by scanning the filesystem to build a
1847 shadow copy of an ondisk metadata structure in memory and comparing the two
1849 For online repair to rebuild a metadata structure, it must compute the record
1850 set that will be stored in the new structure before it can persist that new
1854 To meet these goals, the kernel needs to collect a large amount of information
1855 in a place that doesn't require the correct operation of the filesystem.
1863 and eliminate the possibility of indexed lookups.
1865 * Kernel memory is pinned, which can drive the system into OOM conditions.
1867 * The system might not have sufficient memory to stage all the information.
1869 At any given time, online fsck does not need to keep the entire record set in
1871 Continued development of online fsck demonstrated that the ability to perform
1873 Fortunately, the Linux kernel already has a facility for byte-addressable and
1878 Hence, the ``xfile`` was born!
1883 | The first edition of online repair inserted records into a new btree as |
1887 | The second edition solved the half-rebuilt structure problem by storing |
1888 | everything in memory, but frequently ran the system out of memory. |
1890 | The third edition solved the OOM problem by using linked lists, but the |
1891 | memory overhead of the list pointers was extreme. |
1897 A survey of the intended uses of xfiles suggested these use cases:
1911 To support the first four use cases, high level data structures wrap the xfile
1913 The rest of this section discusses the interfaces that the xfile presents to
1915 The fifth use case is discussed in the :ref:`realtime summary <rtsummary>` case
1918 The most general storage interface supported by the xfile enables the reading
1919 and writing of arbitrary quantities of data at arbitrary offsets in the xfile.
1922 XFS is very record-based, which suggests that the ability to load and store
1926 They are internally the same as pread and pwrite, except that they treat any
1929 behavior because the only reaction is to abort the operation back to userspace.
1932 However, no discussion of file access idioms is complete without answering the
1936 Online fsck must not drive the system into OOM conditions, which means that
1938 tmpfs can only push a pagecache folio to the swap cache if the folio is neither
1939 pinned nor locked, which means the xfile must not pin too many folios.
1941 Short term direct access to xfile contents is done by locking the pagecache
1945 term direct access to xfile contents is done by bumping the folio refcount,
1946 mapping it into kernel address space, and dropping the folio lock.
1948 the shrinker infrastructure to know when to release folios.
1950 The ``xfile_get_page`` and ``xfile_put_page`` functions are provided to
1951 retrieve the (locked) folio that backs part of an xfile and to release it.
1952 The only code to use these folio lease functions are the xfarray
1953 :ref:`sorting<xfarray_sort>` algorithms and the :ref:`in-memory
1959 For security reasons, xfiles must be owned privately by the kernel.
1960 They are marked ``S_PRIVATE`` to prevent interference from the security system,
1964 To avoid locking recursion issues with the VFS, all accesses to the shmfs file
1965 are performed by manipulating the page cache directly.
1966 xfile writers call the ``->write_begin`` and ``->write_end`` functions of the
1967 xfile's address space to grab writable pages, copy the caller's buffer into the
1968 page, and release the pages.
1970 before copying the contents into the caller's buffer.
1971 In other words, xfiles ignore the VFS read and write code paths to avoid
1976 If an xfile is shared between threads to stage repairs, the caller must provide
1979 other threads to provide updates to the scanned data, the scrub function must
1990 Directories have a set of fixed-size dirent records that point to the names,
1994 During a repair, scrub needs to stage new records during the gathering step and
1995 retrieve them during the btree building step.
1997 Although this requirement can be satisfied by calling the read and write
1998 methods of the xfile directly, it is simpler for callers for there to be a
2001 The ``xfarray`` abstraction presents a linear array for fixed-size records atop
2002 the byte-accessible xfile.
2011 covered in the next section.
2013 The first type of caller handles records that are indexed by position.
2015 during the collection step.
2017 The typical use cases are quota records or file link count records.
2019 ``xfarray_store`` functions, which wrap the similarly-named xfile functions to
2027 The second type of caller handles records that are not indexed by position
2029 The typical use case here is rebuilding space btrees and key/value btrees.
2030 These callers can add records to the array without caring about array indices
2031 via the ``xfarray_append`` function, which stores a record at the end of the
2034 rebuilding btree data), the ``xfarray_sort`` function can arrange the sorted
2037 The third type of caller is a bag, which is useful for counting records.
2038 The typical use case here is constructing space extent reference counts from
2040 Records can be put in the bag in any order, they can be removed from the bag
2042 The ``xfarray_store_anywhere`` function is used to insert a record in any
2043 null record slot in the bag; and the ``xfarray_unset`` function removes a
2044 record from the bag.
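A sketch of the first two access patterns follows; the function names come from the text above, but the declared signatures, the record type, and the comparison helper are assumptions made purely for illustration:

.. code-block:: c

    #include <stdint.h>

    /* Assumed shapes of the xfarray entry points named in the text;
     * the real kernel interfaces may differ. */
    struct xfarray;
    typedef uint64_t xfarray_idx_t;
    typedef int (*xfarray_cmp_fn)(const void *a, const void *b);

    int xfarray_store(struct xfarray *array, xfarray_idx_t idx, const void *ptr);
    int xfarray_append(struct xfarray *array, const void *ptr);
    int xfarray_sort(struct xfarray *array, xfarray_cmp_fn cmp, unsigned int flags);

    struct example_rec {
        uint64_t key;
        uint64_t count;
    };

    static int example_cmp(const void *a, const void *b)
    {
        const struct example_rec *ra = a, *rb = b;

        return (ra->key > rb->key) - (ra->key < rb->key);
    }

    /* Indexed pattern: one record per id, e.g. per-quota-id counters. */
    int example_indexed(struct xfarray *array, xfarray_idx_t id,
                        const struct example_rec *rec)
    {
        return xfarray_store(array, id, rec);
    }

    /* Unindexed pattern: append during the scan, then sort before the
     * records are handed to the btree builder. */
    int example_unindexed(struct xfarray *array, const struct example_rec *rec)
    {
        int error = xfarray_append(array, rec);

        if (error)
            return error;
        return xfarray_sort(array, example_cmp, 0);
    }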
2046 The proposed patchset is the
2053 Most users of the xfarray require the ability to iterate the records stored in
2054 the array.
2055 Callers can probe every possible array index with the following:
2069 For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
2070 function ignores indices in the xfarray that have never been written to by
2072 of the array that are not populated with memory pages.
2073 Once it finds a page, it will skip the zeroed areas of the page.
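A sparse walk might therefore look like the sketch below, reusing the assumed ``struct xfarray``, ``xfarray_idx_t``, and ``struct example_rec`` declarations from the previous sketch; the cursor initialization and return convention are likewise assumptions based on the description above:

.. code-block:: c

    /* Assumed shape of the sparse iteration helper named in the text. */
    int xfarray_iter(struct xfarray *array, xfarray_idx_t *cur, void *rec);

    int example_iterate(struct xfarray *array)
    {
        xfarray_idx_t      cur = 0;
        struct example_rec rec;
        int                ret;

        while ((ret = xfarray_iter(array, &cur, &rec)) == 1) {
            /* ...use rec; indices never written to are skipped... */
        }
        return ret;     /* assumed: 0 at end of array, negative on error */
    }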
2087 During the fourth demonstration of online repair, a community reviewer remarked
2091 The btree insertion code in XFS is responsible for maintaining correct ordering
2092 of the records, so naturally the xfarray must also support sorting the record
2098 The sorting algorithm used in the xfarray is actually a combination of adaptive
2099 quicksort and a heapsort subalgorithm in the spirit of
2101 `pdqsort <https://github.com/orlp/pdqsort>`_, with customizations for the Linux
2104 advantage of the binary subpartitioning offered by quicksort, but it also uses
2105 heapsort to hedge against performance collapse if the chosen quicksort pivots
2108 gulf between the two implementations.
2110 The Linux kernel already contains a reasonably fast implementation of heapsort.
2111 It only operates on regular C arrays, which limits the scope of its usefulness.
2112 There are two key places where the xfarray uses it:
2117 of the xfarray into a memory buffer, and sorting the buffer.
2119 In other words, ``xfarray`` uses heapsort to constrain the nested recursion of
2123 A good pivot splits the set to sort in half, leading to the divide and conquer
2125 A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
2127 The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
2128 records into a memory buffer and using the kernel heapsort to identify the
2129 median of the nine.
2134 of the triads, and then sort the middle value of each triad to determine the
2137 It turned out to be much more performant to read the nine elements into a
2138 memory buffer, run the kernel's in-memory heapsort on the buffer, and choose
2139 the 4th element of that buffer as the pivot.
2140 Tukey's ninthers are described in J. W. Tukey, `The ninther, a technique for
2145 The partitioning of quicksort is fairly textbook -- rearrange the record
2146 subset around the pivot, then set up the current and next stack frames to
2147 sort with the larger and the smaller halves of the pivot, respectively.
2148 This keeps the stack space requirements to log2(record count).
2150 As a final performance optimization, the hi and lo scanning phase of quicksort
2151 keeps examined xfile pages mapped in the kernel for as long as possible to
2154 accounting for the application of heapsort directly onto xfile pages.
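
The pivot selection described above can be sketched in userspace as follows;
libc ``qsort`` stands in for the kernel heapsort, and the record values are
invented for the example::

  /* Median-of-nine pivot: sample nine evenly spaced records, sort the
   * small sample, and use its middle element as the quicksort pivot. */
  #include <stdio.h>
  #include <stdlib.h>

  static int cmp_long(const void *a, const void *b)
  {
          long x = *(const long *)a, y = *(const long *)b;

          return (x > y) - (x < y);
  }

  static long pick_pivot(const long *records, size_t lo, size_t hi)
  {
          long sample[9];
          size_t span = hi - lo;

          for (int i = 0; i < 9; i++)
                  sample[i] = records[lo + span * i / 9];
          qsort(sample, 9, sizeof(sample[0]), cmp_long);
          return sample[4];       /* the median of the nine samples */
  }

  int main(void)
  {
          long recs[100];

          for (int i = 0; i < 100; i++)
                  recs[i] = (i * 37) % 100;       /* scrambled test data */
          printf("pivot = %ld\n", pick_pivot(recs, 0, 100));
          return 0;
  }
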
2164 and each extended attribute needs to store both the attribute name and value.
2165 The names, keys, and values can consume a large amount of memory, so the
2171 The store function returns a magic cookie for every object that it persists.
Later, callers provide this cookie to the ``xfblob_load`` function to recall the object.
2173 The ``xfblob_free`` function frees a specific blob, and the ``xfblob_truncate``
2176 The details of repairing directories and extended attributes will be discussed
2182 The proposed patchset is at the start of the
2191 The chapter about :ref:`secondary metadata<secondary_metadata>` mentioned that
2193 between a live metadata scan of the filesystem and writer threads that are
Keeping the scan data up to date requires the ability to propagate
2196 metadata updates from the filesystem into the data being collected by the scan.
2198 applying them before writing the new metadata to disk, but this leads to
2199 unbounded memory consumption if the rest of the system is very busy.
2200 Another option is to skip the side-log and commit live updates from the
2201 filesystem directly into the scan data, which trades more overhead for a lower
2203 In both cases, the data structure holding the scan results must support indexed
2207 fsck employs the second strategy of committing live updates directly into
2212 mapping records: the existing rmap btree code!
2215 Recall that the :ref:`xfile <xfile>` abstraction represents memory pages as a
2216 regular file, which means that the kernel can create byte or block addressable
2218 The XFS buffer cache specializes in abstracting IO to block-oriented address
2219 spaces, which means that adaptation of the buffer cache to interface with
2220 xfiles enables reuse of the entire btree library.
2222 The next few sections describe how they actually work.
2224 The proposed patchset is the
2233 The first is to make it possible for the ``struct xfs_buftarg`` structure to
2234 host the ``struct xfs_buf`` rhashtable, because normally those are held by a
2236 The second change is to modify the buffer ``ioapply`` function to "read" cached
2237 pages from the xfile and "write" cached pages back to the xfile.
2238 Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
2239 since the xfile does not provide any locking on its own.
2240 With this adaptation in place, users of the xfile-backed buffer cache use
2241 exactly the same APIs as users of the disk-backed buffer cache.
2242 The separation between xfile and buffer cache implies higher memory usage since
2245 Today, however, it simply eliminates the need for new code.
2252 These blocks use the same header format as an on-disk btree, but the in-memory
2253 block verifiers ignore the checksums, assuming that xfile memory is no more
2257 The very first block of an xfile backing an xfbtree contains a header block.
2258 The header describes the owner, height, and the block number of the root
2261 To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
2262 If there are no gaps, create one by extending the length of the xfile.
2263 Preallocate space for the block with ``xfile_prealloc``, and hand back the
2266 ``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
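
A userspace approximation of this block lifecycle, substituting ordinary
Linux file calls for the kernel-internal xfile helpers named above::

  /* Allocate a file-backed "btree block" from a gap, then free it by
   * punching out its backing pages. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #define BLKSZ   4096

  int main(void)
  {
          char path[] = "/tmp/xfbtree-demo-XXXXXX";
          int fd = mkstemp(path);
          off_t end, blk = -1;

          if (fd < 0)
                  return 1;
          unlink(path);

          /* Look for a gap; if there is none, extend the file to make one. */
          end = lseek(fd, 0, SEEK_END);
          if (end > 0)
                  blk = lseek(fd, 0, SEEK_HOLE);
          if (blk < 0 || blk >= end) {
                  blk = end;
                  if (ftruncate(fd, end + BLKSZ))
                          return 1;
          }

          /* Reserve backing space for the block up front. */
          if (fallocate(fd, 0, blk, BLKSZ))
                  return 1;
          printf("allocated block at offset %lld\n", (long long)blk);

          /* Freeing the block removes its backing pages. */
          if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                        blk, BLKSZ))
                  return 1;
          printf("freed block at offset %lld\n", (long long)blk);
          close(fd);
          return 0;
  }
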
2277 pointing to the xfile.
2279 3. Pass the buffer cache target, buffer ops, and other information to
2280 ``xfbtree_create`` to write an initial tree header and root block to the
2283 the creation function.
2285 all the necessary details for callers.
2288 4. Pass the xfbtree object to the btree cursor creation function for the
2290 Following the example above, ``xfs_rmapbt_mem_cursor`` takes care of this
2293 5. Pass the btree cursor to the regular btree functions to make queries against
2294 and to update the in-memory btree.
2295 For example, a btree cursor for an rmap xfbtree can be passed to the
2297 See the :ref:`next section<xfbtree_commit>` for information on dealing with
2300 6. When finished, delete the btree cursor, destroy the xfbtree object, free the
buffer target, and then destroy the xfile to release all resources.
2308 Although it is a clever hack to reuse the rmap btree code to handle the staging
2309 structure, the ephemeral nature of the in-memory btree block storage presents
2311 The XFS transaction manager must not commit buffer log items for buffers backed
2312 by an xfile because the log format does not understand updates for devices
2313 other than the data device.
2314 An ephemeral xfbtree probably will not exist by the time the AIL checkpoints
2315 log transactions back into the filesystem, and certainly won't exist during
2318 remove the buffer log items from the transaction and write the updates into the
2319 backing xfile before committing or cancelling the transaction.
2321 The ``xfbtree_trans_commit`` and ``xfbtree_trans_cancel`` functions implement
2324 1. Find each buffer log item whose buffer targets the xfile.
2326 2. Record the dirty/ordered status of the log item.
2328 3. Detach the log item from the buffer.
2330 4. Queue the buffer to a special delwri list.
2332 5. Clear the transaction dirty flag if the only dirty log items were the ones
2335 6. Submit the delwri list to commit the changes to the xfile, if the updates
2338 After removing xfile logged buffers from the transaction in this manner, the
2347 the incore records to be sorted prior to commit, but was very slow and leaked
2348 blocks if the system went down during a repair.
2349 Loading records one at a time also meant that repair could not control the
2350 loading factor of the blocks in the new btree.
2352 Fortunately, the venerable ``xfs_repair`` tool had a more efficient means for
To prepare for online fsck, each of the four bulk loaders was studied, notes
2358 were taken, and the four were refactored into a single generic btree bulk
2365 The zeroth step of bulk loading is to assemble the entire record set that will
2366 be stored in the new btree, and sort the records.
2367 Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
2368 btree from the record set, the type of btree, and any load factor preferences.
2371 First, the geometry computation computes the minimum and maximum records that
2372 will fit in a leaf block from the size of a btree block and the size of the
2374 Roughly speaking, the maximum number of records is::
2378 The XFS design specifies that btree blocks should be merged when possible,
2379 which means the minimum number of records is half of maxrecs::
2383 The next variable to determine is the desired loading factor.
2385 Choosing minrecs is undesirable because it wastes half the block.
2389 The default loading factor was chosen to be 75% of maxrecs, which provides a
2394 If space is tight, the loading factor will be set to maxrecs to try to avoid
2399 Load factor is computed for btree node blocks using the combined size of the
2400 btree key and pointer as the record size::
2406 Once that's done, the number of leaf blocks required to store the record set
2411 The number of node blocks needed to point to the next level down in the tree
2417 The entire computation is performed recursively until the current level only
2419 The resulting geometry is as follows:
2421 - For AG-rooted btrees, this level is the root level, so the height of the new
2422 tree is ``level + 1`` and the space needed is the summation of the number of
2425 - For inode-rooted btrees where the records in the top level do not fit in the
2426 inode fork area, the height is ``level + 2``, the space needed is the
2427 summation of the number of blocks on each level, and the inode fork points to
2428 the root block.
2430 - For inode-rooted btrees where the records in the top level can be stored in
2431 the inode fork area, then the root block can be stored in the inode, the
2432 height is ``level + 1``, and the space needed is one less than the summation
2433 of the number of blocks on each level.
2434 This only becomes relevant when non-bmap btrees gain the ability to root in
2442 Once repair knows the number of blocks needed for the new btree, it allocates
2443 those blocks using the free space information.
2444 Each reserved extent is tracked separately by the btree builder state data.
2445 To improve crash resilience, the reservation code also logs an Extent Freeing
2446 Intent (EFI) item in the same transaction as each space allocation and attaches
2447 its in-memory ``struct xfs_extent_free_item`` object to the space reservation.
2448 If the system goes down, log recovery will use the unfinished EFIs to free the
unused space, leaving the filesystem unchanged.
2451 Each time the btree builder claims a block for the btree from a reserved
2452 extent, it updates the in-memory reservation to reflect the claimed space.
2454 reduce the number of EFIs in play.
2456 While repair is writing these new btree blocks, the EFIs created for the space
2457 reservations pin the tail of the ondisk log.
2458 It's possible that other parts of the system will remain busy and push the head
2459 of the log towards the pinned tail.
2460 To avoid livelocking the filesystem, the EFIs must not pin the tail of the log
2462 To alleviate this problem, the dynamic relogging capability of the deferred ops
2463 mechanism is reused here to commit a transaction at the log head containing an
EFD for the old EFI and a new EFI at the head.
2465 This enables the log to release the old EFI to keep the log moving forwards.
2467 EFIs have a role to play during the commit and reaping phases; please see the
2468 next section and the section about :ref:`reaping<reaping>` for more details.
2470 Proposed patchsets are the
2473 and the
2478 Writing the New Tree
2481 This part is pretty simple -- the btree builder (``xfs_btree_bulkload``) claims
2482 a block from the reserved list, writes the new btree block header, fills the
2483 rest of the block with records, and adds the new leaf block to a list of
2491 Sibling pointers are set every time a new block is added to the level::
2498 When it finishes writing the record leaf blocks, it moves on to the node
2500 To fill a node block, it walks each block in the next level down in the tree
2501 to compute the relevant keys and write them into the parent node::
2513 When it reaches the root level, it is ready to commit the new btree!::
2530 The first step to commit the new btree is to persist the btree blocks to disk
2533 in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
2534 remove the (stale) buffer from the AIL list before it can write the new blocks
2539 Once the new blocks have been persisted to disk, control returns to the
2540 individual repair function that called the bulk loader.
2541 The repair function must log the location of the new root in a transaction,
2542 clean up the space reservations that were made for the new btree, and reap the
2545 1. Commit the location of the new btree root.
2549 a. Log Extent Freeing Done (EFD) items for all the space that was consumed
2550 by the btree builder. The new EFDs must point to the EFIs attached to
2551 the reservation to prevent log recovery from freeing the new blocks.
extent free work item to free the unused space later in the
2557 c. The EFDs and EFIs logged in steps 2a and 2b must not overrun the
2558 reservation of the committing transaction.
2559 If the btree loading code suspects this might be about to happen, it must
2560 call ``xrep_defer_finish`` to clear out the deferred work and obtain a
2563 3. Clear out the deferred work a second time to finish the commit and clean
2564 the repair transaction.
The transaction rolling in steps 2c and 3 represents a weakness in the repair
2567 algorithm, because a log flush and a crash before the end of the reap step can
2569 Online repair functions minimize the chances of this occurring by using very
2572 Repair moves on to reaping the old blocks, which will be presented in a
2575 Case Study: Rebuilding the Inode Index
2578 The high level process to rebuild the inode index btree is:
2580 1. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
2581 records from the inode chunk information and a bitmap of the old inode btree
2584 2. Append the records to an xfarray in inode order.
2586 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2587 of blocks needed for the inode btree.
2588 If the free space inode btree is enabled, call it again to estimate the
2589 geometry of the finobt.
2591 4. Allocate the number of blocks computed in the previous step.
2593 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2594 generate the internal node blocks.
2595 If the free space inode btree is enabled, call it again to load the finobt.
2597 6. Commit the location of the new btree root block(s) to the AGI.
2599 7. Reap the old btree blocks using the bitmap created in step 1.
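
As a rough illustration of the geometry estimate in step 3, the sketch below
computes a height and block count for a record set at a 75% load factor.
The block, header, and record sizes are invented, node levels are simplified
to reuse the leaf load factor, and the real computation lives in
``xfs_btree_bload_compute_geometry``::

  /* Estimate btree height and block count for a record set. */
  #include <stdio.h>

  int main(void)
  {
          unsigned long nr_records = 1000000;     /* invented inputs */
          unsigned long blocksize = 4096;
          unsigned long hdrsize = 56;
          unsigned long recsize = 16;

          unsigned long maxrecs = (blocksize - hdrsize) / recsize;
          unsigned long desired = maxrecs * 3 / 4;        /* 75% full */
          unsigned long level_blocks =
                          (nr_records + desired - 1) / desired;
          unsigned long total = level_blocks;
          unsigned int height = 1;

          /* Each higher level holds one key/pointer per block below it. */
          while (level_blocks > 1) {
                  level_blocks = (level_blocks + desired - 1) / desired;
                  total += level_blocks;
                  height++;
          }
          printf("height %u, %lu blocks\n", height, total);
          return 0;
  }
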
2603 The inode btree maps inumbers to the ondisk location of the associated
2604 inode records, which means that the inode btrees can be rebuilt from the
2606 Reverse mapping records with an owner of ``XFS_RMAP_OWN_INOBT`` marks the
2607 location of the old inode btree blocks.
2608 Each reverse mapping record with an owner of ``XFS_RMAP_OWN_INODES`` marks the
2610 A cluster is the smallest number of ondisk inodes that can be allocated or
2613 For the space represented by each inode cluster, ensure that there are no
2614 records in the free space btrees nor any records in the reference count btree.
2615 If there are, the space metadata inconsistencies are reason enough to abort the
2618 ondisk inodes and to decide if the file is allocated
2620 Accumulate the results of successive inode cluster buffer reads until there is
2622 numbers in the inumber keyspace.
2623 If the chunk is sparse, the chunk record may include holes.
2625 Once the repair function accumulates one chunk's worth of data, it calls
2626 ``xfarray_append`` to add the inode btree record to the xfarray.
2627 This xfarray is walked twice during the btree creation step -- once to populate
2628 the inode btree with all inode chunk records, and a second time to populate the
2630 The number of records for the inode btree is the number of xfarray records,
2631 but the record count for the free inode btree has to be computed as inode chunk
2632 records are stored in the xfarray.
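
A minimal sketch of that bookkeeping, using an invented chunk record with a
free-inode count as a stand-in for the real inode chunk records::

  /* Count inobt vs. finobt records as chunk records stream in: every
   * chunk gets an inobt record, but only chunks that still contain free
   * inodes get a finobt record. */
  #include <stdio.h>

  struct chunk_rec {
          unsigned long long startino;
          unsigned int freecount;         /* free inodes in this chunk */
  };

  int main(void)
  {
          struct chunk_rec chunks[] = {
                  { 128, 0 }, { 192, 3 }, { 256, 64 },    /* invented data */
          };
          unsigned long inobt_recs = 0, finobt_recs = 0;

          for (unsigned i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
                  inobt_recs++;
                  if (chunks[i].freecount > 0)
                          finobt_recs++;
          }
          printf("inobt records: %lu, finobt records: %lu\n",
                 inobt_recs, finobt_recs);
          return 0;
  }
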
2634 The proposed patchset is the
2639 Case Study: Rebuilding the Space Reference Counts
2642 Reverse mapping records are used to rebuild the reference count information.
2645 Imagine the reverse mapping entries as rectangles representing extents of
2646 physical blocks, and that the rectangles can be laid down to allow them to
2648 From the diagram below, it is apparent that a reference count record must start
2649 or end wherever the height of the stack changes.
2650 In other words, the record emission stimulus is level-triggered::
2659 The ondisk reference count btree does not store the refcount == 0 cases because
2660 the free space btree already records which blocks are free.
2661 Extents being used to stage copy-on-write operations should be the only records
2663 Single-owner file blocks aren't recorded in either the free space or the
2666 The high level process to rebuild the reference count btree is:
2668 1. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
2670 the xfarray.
2671 Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
2673 are tracked in the refcount btree.
2678 2. Sort the records in physical extent order, putting the CoW staging extents
2679 at the end of the xfarray.
2680 This matches the sorting order of records in the refcount btree.
2682 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2683 of blocks needed for the new tree.
2685 4. Allocate the number of blocks computed in the previous step.
2687 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2688 generate the internal node blocks.
2690 6. Commit the location of new btree root block to the AGF.
2692 7. Reap the old btree blocks using the bitmap created in step 1.
2694 Details are as follows; the same algorithm is used by ``xfs_repair`` to
2697 - Until the reverse mapping btree runs out of records:
2699 - Retrieve the next record from the btree and put it in a bag.
2701 - Collect all records with the same starting block from the btree and put
2702 them in the bag.
2704 - While the bag isn't empty:
2706 - Among the mappings in the bag, compute the lowest block number where the
2708 This position will be either the starting block number of the next
2709 unprocessed reverse mapping or the next block after the shortest mapping
2710 in the bag.
2712 - Remove all mappings from the bag that end at this position.
2714 - Collect all reverse mappings that start at this position from the btree
2715 and put them in the bag.
2717 - If the size of the bag changed and is greater than one, create a new
2718 refcount record associating the block number range that we just walked to
2719 the size of the bag.
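
The kernel pulls reverse mappings from the rmap btree and stages them in the
bag as described above; the compact userspace sketch below produces the same
level-triggered output from a small invented mapping set by sweeping start
and end events::

  /* Derive reference count records from reverse mappings: a record
   * starts or ends wherever the number of overlapping mappings changes,
   * and only refcounts greater than one are emitted. */
  #include <stdio.h>
  #include <stdlib.h>

  struct rmap { unsigned long start, len; };
  struct event { unsigned long pos; int delta; };

  static int cmp_event(const void *a, const void *b)
  {
          const struct event *x = a, *y = b;

          if (x->pos != y->pos)
                  return x->pos < y->pos ? -1 : 1;
          return x->delta - y->delta;     /* ends sort before starts */
  }

  int main(void)
  {
          struct rmap rmaps[] = { {10, 6}, {12, 2}, {12, 8}, {30, 4} };
          size_t nr = sizeof(rmaps) / sizeof(rmaps[0]);
          struct event ev[2 * sizeof(rmaps) / sizeof(rmaps[0])];
          unsigned long prev = 0;
          int refcount = 0;

          for (size_t i = 0; i < nr; i++) {
                  ev[2 * i] = (struct event){ rmaps[i].start, +1 };
                  ev[2 * i + 1] = (struct event){
                                  rmaps[i].start + rmaps[i].len, -1 };
          }
          qsort(ev, 2 * nr, sizeof(ev[0]), cmp_event);

          for (size_t i = 0; i < 2 * nr; i++) {
                  if (ev[i].pos != prev && refcount > 1)
                          printf("refcount record: [%lu, %lu) = %d\n",
                                 prev, ev[i].pos, refcount);
                  if (ev[i].pos != prev)
                          prev = ev[i].pos;
                  refcount += ev[i].delta;
          }
          return 0;
  }

The event array here is materialized up front to keep the sketch short; the
bag formulation above lets the kernel pull mappings from the reverse mapping
btree incrementally instead of collecting every endpoint in advance.
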
2721 The bag-like structure in this case is a type 2 xfarray as discussed in the
2723 Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
2727 The proposed patchset is the
2735 The high level process to rebuild a data/attr fork mapping btree is:
2737 1. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
2738 records from the reverse mapping records for that inode and fork.
2740 Compute the bitmap of the old bmap btree blocks from the ``BMBT_BLOCK``
2743 2. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2744 of blocks needed for the new tree.
2746 3. Sort the records in file offset order.
2748 4. If the extent records would fit in the inode fork immediate area, commit the
2751 5. Allocate the number of blocks computed in the previous step.
2753 6. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2754 generate the internal node blocks.
2756 7. Commit the new btree root block to the inode fork immediate area.
2758 8. Reap the old btree blocks using the bitmap created in step 1.
2761 First, it's possible to move the fork offset to adjust the sizes of the
2762 immediate areas if the data and attr forks are not both in BMBT format.
2765 Third, the incore extent map must be reloaded carefully to avoid disturbing
2768 The proposed patchset is the
2779 suspect, there is a question of how to find and dispose of the blocks that
2780 belonged to the old structure.
2781 The laziest method of course is not to deal with them at all, but this slowly
2782 leads to service degradations as space leaks out of the filesystem.
2783 Hopefully, someone will schedule a rebuild of the free space information to
2785 Offline repair rebuilds all space metadata after recording the usage of
2786 the files and directories that it decides not to clear, hence it can build new
2787 structures in the discovered free space and avoid the question of reaping.
2789 As part of a repair, online fsck relies heavily on the reverse mapping records
2790 to find space that is owned by the corresponding rmap owner yet truly free.
2794 Permitting the block allocator to hand them out again will not push the system
2797 For space metadata, the process of finding extents to dispose of generally
2801 The space reservations used to create the new metadata can be used here if
2802 the same rmap owner code is used to denote all of the objects being rebuilt.
2804 2. Survey the reverse mapping data to create a bitmap of space owned by the
2805 same ``XFS_RMAP_OWN_*`` number for the metadata that is being preserved.
2807 3. Use the bitmap disunion operator to subtract (1) from (2).
2808 The remaining set bits represent candidate extents that could be freed.
2809 The process moves on to step 4 below.
2813 new structure attached to a temporary file and swapping the forks.
2814 Afterward, the mappings in the old file fork are the candidate blocks for
2817 The process for disposing of old extents is as follows:
2819 4. For each candidate extent, count the number of reverse mapping records for
2820 the first block in that extent that do not have the same rmap owner for the
2823 - If zero, the block has a single owner and can be freed.
2825 - If not, the block is part of a crosslinked structure and must not be
2828 5. Starting with the next block in the extent, figure out how many more blocks
2829 have the same zero/nonzero other owner status as that first block.
2831 6. If the region is crosslinked, delete the reverse mapping entry for the
2832 structure being repaired and move on to the next region.
2834 7. If the region is to be freed, mark any corresponding buffers in the buffer
2837 8. Free the region and move on.
2840 Transactions are of finite size, so the reaping process must be careful to roll
2841 the transactions to avoid overruns.
2848 This is also a window in which a crash during the reaping process can leak
2851 minimize the chances of this occurring.
2853 The proposed patchset is the
2861 Old reference count and inode btrees are the easiest to reap because they have
2862 rmap records with special owner codes: ``XFS_RMAP_OWN_REFC`` for the refcount
2863 btree, and ``XFS_RMAP_OWN_INOBT`` for the inode and free inode btrees.
2864 Creating a list of extents to reap the old btree blocks is quite simple,
2867 1. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.
2869 2. For each reverse mapping record with an rmap owner corresponding to the
2870 metadata structure being rebuilt, set the corresponding range in a bitmap.
2872 3. Walk the current data structures that have the same rmap owner.
2873 For each block visited, clear that range in the above bitmap.
2875 4. Each set bit in the bitmap represents a block that could be a block from the
2878 are the blocks that might be freeable.
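
The bitmap arithmetic itself is trivial; here is a sketch using a single
64-bit word with one bit per block, with both bitmaps invented for the
example::

  /* Reap candidates: blocks claimed by the rmap owner code, minus the
   * blocks still referenced by the rebuilt structure. */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t owned_by_rmap = 0x00000000000000ffULL;  /* blocks 0-7 */
          uint64_t still_in_use  = 0x000000000000000fULL;  /* blocks 0-3 */
          uint64_t candidates = owned_by_rmap & ~still_in_use;

          for (int bit = 0; bit < 64; bit++)
                  if (candidates & (1ULL << bit))
                          printf("block %d might be freeable\n", bit);
          return 0;
  }
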
2880 If it is possible to maintain the AGF lock throughout the repair (which is the
2881 common case), then step 2 can be performed at the same time as the reverse
2882 mapping record walk that creates the records for the new btree.
2884 Case Study: Rebuilding the Free Space Indices
2887 The high level process to rebuild the free space indices is:
2889 1. Walk the reverse mapping records to generate ``struct xfs_alloc_rec_incore``
2890 records from the gaps in the reverse mapping btree.
2892 2. Append the records to an xfarray.
2894 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2897 4. Allocate the number of blocks computed in the previous step from the free
2900 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2901 generate the internal node blocks for the free space by length index.
2902 Call it again for the free space by block number index.
2904 6. Commit the locations of the new btree root blocks to the AGF.
2906 7. Reap the old btree blocks by looking for space that is not recorded by the
2907 reverse mapping btree, the new free space btrees, or the AGFL.
2909 Repairing the free space btrees has three key complications over a regular
2912 First, free space is not explicitly tracked in the reverse mapping records.
2913 Hence, the new free space records must be inferred from gaps in the physical
2914 space component of the keyspace of the reverse mapping btree.
2916 Second, free space repairs cannot use the common btree reservation code because
2917 new blocks are reserved out of the free space btrees.
2918 This is impossible when repairing the free space btrees themselves.
2919 However, repair holds the AGF buffer lock for the duration of the free space
2920 index reconstruction, so it can use the collected free space information to
2921 supply the blocks for the new free space btrees.
2922 It is not necessary to back each reserved extent with an EFI because the new
2923 free space btrees are constructed in what the ondisk filesystem thinks is
2925 However, if reserving blocks for the new btrees from the collected free space
2926 information changes the number of free space records, repair must re-estimate
2927 the new free space btree geometry with the new record count until the
2929 As part of committing the new btrees, repair must ensure that reverse mappings
2930 are created for the reserved blocks and that unused reserved blocks are
2931 inserted into the free space btrees.
2933 is atomic, similar to the other btree repair functions.
2935 Third, finding the blocks to reap after the repair is not overly
2937 Blocks for the free space btrees and the reverse mapping btrees are supplied by
2938 the AGFL.
2939 Blocks put onto the AGFL have reverse mapping records with the owner
2941 This ownership is retained when blocks move from the AGFL into the free space
2942 btrees or the reverse mapping btrees.
2944 creates a bitmap (``ag_owner_bitmap``) of all the space claimed by
2946 The repair context maintains a second bitmap corresponding to the rmap btree
2947 blocks and the AGFL blocks (``rmap_agfl_bitmap``).
2948 When the walk is complete, the bitmap disunion operation ``(ag_owner_bitmap &
2949 ~rmap_agfl_bitmap)`` computes the extents that are used by the old free space
2951 These blocks can then be reaped using the methods outlined above.
2953 The proposed patchset is the
2964 As mentioned in the previous section, blocks on the AGFL, the two free space
2965 btree blocks, and the reverse mapping btree blocks all have reverse mapping
2966 records with ``XFS_RMAP_OWN_AG`` as the owner.
2967 The full process of gathering reverse mapping records and building a new btree
2968 are described in the case study of
2970 discussion is that the new rmap btree will not contain any records for the old
2971 rmap btree, nor will the old btree blocks be tracked in the free space btrees.
2972 The list of candidate reaping blocks is computed by setting the bits
2973 corresponding to the gaps in the new rmap btree records, and then clearing the
2974 bits corresponding to extents in the free space btrees and the current AGFL
2976 The result ``(new_rmapbt_gaps & ~(agfl | bnobt_records))`` are reaped using the
The rest of the process of rebuilding the reverse mapping btree is discussed
2982 The proposed patchset is the
2987 Case Study: Rebuilding the AGFL
2990 The allocation group free block list (AGFL) is repaired as follows:
2992 1. Create a bitmap for all the space that the reverse mapping data claims is
2995 2. Subtract the space used by the two free space btrees and the rmap btree.
2997 3. Subtract any space that the reverse mapping data claims is owned by any
2998 other owner, to avoid re-adding crosslinked blocks to the AGFL.
4. Once the AGFL is full, reap any leftover blocks.
3002 5. The next operation to fix the freelist will right-size the list.
3012 careful to access the ondisk metadata *only* when the ondisk metadata is so
3013 badly damaged that the filesystem cannot load the in-memory representation.
3015 specialized resource acquisition functions that return either the in-memory
3017 update to the ondisk location.
3019 The only repairs that should be made to the ondisk inode buffers are whatever
3020 is necessary to get the in-core structure loaded.
3021 This means fixing whatever is caught by the inode cluster buffer and inode fork
3022 verifiers, and retrying the ``iget`` operation.
3023 If the second ``iget`` fails, the repair has failed.
3025 Once the in-memory representation is loaded, repair can lock the inode and can
3029 Dealing with the data and attr fork extent counts and the file block counts is
3030 more complicated, because computing the correct value requires traversing the
3031 forks, or if that fails, leaving the fields invalid and waiting for the fork
3034 The proposed patchset is the
3043 an in-memory representation, and hence are subject to the same cache coherency
3045 Somewhat confusingly, both are known as dquots in the XFS codebase.
3047 The only repairs that should be made to the ondisk quota record buffers are
3048 whatever is necessary to get the in-core structure loaded.
3049 Once the in-memory representation is loaded, the only attributes needing
3052 Quota usage counters are checked, repaired, and discussed separately in the
3055 The proposed patchset is the
3067 This information could be compiled by walking the free space and inode indexes,
3068 but this is a slow process, so XFS maintains a copy in the ondisk superblock
3069 that should reflect the ondisk metadata, at least when the filesystem has been
3073 Writer threads reserve the worst-case quantities of resources from the
3075 It is therefore only necessary to serialize on the superblock when the
3078 The lazy superblock counter feature introduced in XFS v5 took this even further
3079 by training log recovery to recompute the summary counters from the AG headers,
3080 which eliminated the need for most transactions even to touch the superblock.
3081 The only time XFS commits the summary counters is at filesystem unmount.
3082 To reduce contention even further, the incore counter is implemented as a
3084 global incore counter and can satisfy small allocations from the local batch.
3086 The high-performance nature of the summary counters makes it difficult for
3088 while the system is running.
3089 Although online fsck can read the filesystem metadata to compute the correct
3090 values of the summary counters, there's no way to hold the value of a percpu
3091 counter stable, so it's quite possible that the counter will be out of date by
3092 the time the walk is complete.
3095 For repairs, the in-memory counters must be stabilized while walking the
3096 filesystem metadata to get an accurate reading and install it in the percpu
3099 To satisfy this requirement, online fsck must prevent other programs in the
3100 system from initiating new writes to the filesystem, it must disable background
3102 exit the kernel.
3103 Once that has been established, scrub can walk the AG free space indexes, the
3104 inode btrees, and the realtime bitmap to compute the correct value of all
3106 This is very similar to a filesystem freeze, though not all of the pieces are
3109 - The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
3110 prevent other threads from thawing the filesystem, or other scrub threads
3113 - It does not quiesce the log.
3115 With this code in place, it is now possible to pause the filesystem for just
3116 long enough to check and correct the summary counters.
3121 | The initial implementation used the actual VFS filesystem freeze |
3123 | With the filesystem frozen, it is possible to resolve the counter values |
3124 | with exact precision, but there are many problems with calling the VFS |
3127 | - Other programs can unfreeze the filesystem without our knowledge. |
3130 | - Adding an extra lock to prevent others from thawing the filesystem |
3131 | required the addition of a ``->freeze_super`` function to wrap |
3134 | the VFS ``freeze_super`` and ``thaw_super`` functions can drop the |
3135 | last reference to the VFS superblock, and any subsequent access |
3137 | This can happen if the filesystem is unmounted while the underlying |
3138 | block device has frozen the filesystem. |
3139 | This problem could be solved by grabbing extra references to the |
3140 | superblock, but it felt suboptimal given the other inadequacies of |
3143 | - The log need not be quiesced to check the summary counters, but a VFS |
3147 | - Quiescing the log means that XFS flushes the (possibly incorrect) |
3148 | counters to disk as part of cleaning the log. |
3150 | - A bug in the VFS meant that freeze could complete even when |
3151 | sync_filesystem fails to flush the filesystem and returns an error. |
3155 The proposed patchset is the
3163 Certain types of metadata can only be checked by walking every file in the
3164 entire filesystem to record observations and comparing the observations against
3168 However, it is not practical to shut down the entire filesystem to examine
3169 hundreds of billions of files because the downtime would be excessive.
3170 Therefore, online fsck must build the infrastructure to manage a live scan of
3171 all the files in the filesystem.
3174 - How does scrub manage the scan while it is collecting data?
3176 - How does the scan keep abreast of changes being made to the system by other
3184 In the original Unix filesystems of the 1970s, each directory entry contained
3190 UNIX, 6th Edition*, (Dept. of Computer Science, the University of New South
3192 `"Implementation of the File System"
3193 <https://archive.org/details/bstj57-6-1905/page/n8/mode/1up>`_, from *The UNIX
3194 Time-Sharing System*, (The Bell System Technical Journal, July 1978), pp.
the space in the data section of the filesystem.
3200 though the inodes themselves are sparsely distributed within the keyspace.
3201 Scans proceed in a linear fashion across the inumber keyspace, starting from
3203 Naturally, a scan through a keyspace requires a scan cursor object to track the
3206 The first part of this scan cursor object tracks the inode that will be
3207 examined next; call this the examination cursor.
3208 Somewhat less obviously, the scan cursor object must also track which parts of
3209 the keyspace have already been visited, which is critical for deciding if a
3210 concurrent filesystem update needs to be incorporated into the scan data.
3211 Call this the visited inode cursor.
3213 Advancing the scan cursor is a multi-step process encapsulated in
3216 1. Lock the AGI buffer of the AG containing the inode pointed to by the visited
3219 advancing the cursor.
3221 2. Use the per-AG inode btree to look up the next inumber after the one that
3226 a. Move the examination cursor to the point of the inumber keyspace that
3227 corresponds to the start of the next AG.
3229 b. Adjust the visited inode cursor to indicate that it has "visited" the
3230 last possible inode in the current AG's inode keyspace.
3231 XFS inumbers are segmented, so the cursor needs to be marked as having
3232 visited the entire keyspace up to just before the start of the next AG's
3235 c. Unlock the AGI and return to step 1 if there are unexamined AGs in the
3238 d. If there are no more AGs to examine, set both cursors to the end of the
3240 The scan is now complete.
3244 a. Move the examination cursor ahead to the next inode marked as allocated
3245 by the inode btree.
3247 b. Adjust the visited inode cursor to point to the inode just prior to where
3248 the examination cursor is now.
3249 Because the scanner holds the AGI buffer lock, no inodes could have been
3250 created in the part of the inode keyspace that the visited inode cursor
3253 5. Get the incore inode for the inumber of the examination cursor.
3254 By maintaining the AGI buffer lock until this point, the scanner knows that
3255 it was safe to advance the examination cursor across the entire keyspace,
3257 the filesystem until the scan releases the incore inode.
3259 6. Drop the AGI lock and return the incore inode to the caller.
3261 Online fsck functions scan all files in the filesystem as follows:
3265 2. Advance the scan cursor (``xchk_iscan_iter``) to get the next inode.
3268 a. Lock the inode to prevent updates during the scan.
3270 b. Scan the inode.
3272 c. While still holding the inode lock, adjust the visited inode cursor
3275 d. Unlock and release the inode.
3277 8. Call ``xchk_iscan_teardown`` to complete the scan.
3279 There are subtleties with the inode cache that complicate grabbing the incore
3280 inode for the caller.
3281 Obviously, it is an absolute requirement that the inode metadata be consistent
3282 enough to load it into the inode cache.
3283 Second, if the incore inode is stuck in some intermediate state, the scan
3284 coordinator must release the AGI and push the main filesystem to get the inode
3287 The proposed patches are the
3291 The first user of the new functionality is the
3300 always obtained (``xfs_iget``) outside of transaction context because the
3301 creation of the incore context for an existing file does not require metadata
3304 part of file creation must be performed in transaction context because the
3305 filesystem must ensure the atomicity of the ondisk inode btree index updates
3306 and the initialization of the actual ondisk inode.
3312 - The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
3317 - An unlinked file may have lost its last reference, in which case the entire
3319 the ondisk metadata and freeing the inode.
3322 Inactivation has two parts -- the VFS part, which initiates writeback on all
3323 dirty file pages, and the XFS part, which cleans up XFS-specific information
3324 and frees the inode if it was unlinked.
3325 If the inode is unlinked (or unconnected after a file handle operation), the
3326 kernel drops the inode into the inactivation machinery immediately.
3344 7. Space on the data and realtime devices for the transaction.
3357 Resources are often released in the reverse order, though this is not required.
3359 an object that normally is acquired in a later stage of the locking order, and
3360 then decide to cross-reference the object with an object that is acquired
3361 earlier in the order.
3362 The next few sections detail the specific ways in which online fsck takes care
3370 This isn't much of a problem for ``iget`` since it can operate in the context
3371 of an existing transaction, as long as all of the bound resources are acquired
3372 before the inode reference in the regular filesystem.
3374 When the VFS ``iput`` function is given a linked inode with no other
3375 references, it normally puts the inode on an LRU list in the hope that it can
3376 save time if another process re-opens the file before the system runs out
3378 Filesystem callers can short-circuit the LRU process by setting a ``DONTCACHE``
3379 flag on the inode to cause the kernel to try to drop the inode into the
3382 In the past, inactivation was always done from the process that dropped the
3385 On the other hand, if there is no scrub transaction, it is desirable to drop
3387 To capture these nuances, the online fsck code has a separate ``xchk_irele``
3388 function to set or clear the ``DONTCACHE`` flag to get the required release
3402 In regular filesystem code, the VFS and XFS will acquire multiple IOLOCK locks
3403 in a well-known order: parent → child when updating the directory tree, and
3404 in numerical order of the addresses of their ``struct inode`` object otherwise.
3405 For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
3408 the addresses of their ``struct address_space`` objects.
3409 Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must be
3415 scanner, the scrub process holds the IOLOCK of the file being scanned and it
3416 needs to take the IOLOCK of the file at the other end of the directory link.
3417 If the directory tree is corrupt because it contains a cycle, ``xfs_scrub``
3418 cannot use the regular inode locking functions and avoid becoming trapped in an
3422 needs to take a second lock of the same class, it uses trylock to avoid an ABBA
If the trylock fails, scrub drops all inode locks and uses trylock loops to
3427 scrub avoids deadlocking the filesystem or becoming an unresponsive process.
However, the use of trylock loops means that online fsck must be prepared to measure the
3429 resource being scrubbed before and after the lock cycle to detect changes and
3437 Consider the directory parent pointer repair code as an example.
3438 Online fsck must verify that the dotdot dirent of a directory points up to a
3439 parent directory, and that the parent directory contains exactly one dirent
3440 pointing down to the child directory.
3442 walk of every directory on the filesystem while holding the child locked, and
3443 while updates to the directory tree are being made.
3444 The coordinated inode scan provides a way to walk the filesystem without the
3446 The child directory is kept locked to prevent updates to the dotdot dirent, but
3447 if the scanner fails to lock a parent, it can drop and relock both the child
3448 and the prospective parent.
3449 If the dotdot entry changes while the directory is unlocked, then a move or
3450 rename operation must have changed the child's parentage, and the scan can
3453 The proposed patchset is the
3463 The second piece of support that online fsck functions need during a full
3464 filesystem scan is the ability to stay informed about updates being made by
3465 other threads in the filesystem, since comparisons against the past are useless
3472 In this case, the downstream consumer is always an online fsck function.
3473 Because multiple fsck functions can run in parallel, online fsck uses the Linux
3478 Because these hooks are private to the XFS module, the information passed along
3479 contains exactly what the checking function needs to update its observations.
3481 The current implementation of XFS hooks uses SRCU notifier chains to reduce the
3485 However, it may turn out that the combination of blocking chains and static
3488 The following pieces are necessary to hook a certain point in the filesystem:
3494 about the action.
3497 around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
3500 - A callsite in the regular filesystem code must be chosen to call
3501 ``xfs_hooks_call`` with the action code and data structure.
3502 This place should be adjacent to (and not earlier than) the place where
3503 the filesystem update is committed to the transaction.
3504 In general, when the filesystem calls a hook chain, it should be able to
3507 However, the exact requirements are very dependent on the context of the hook
3508 caller and the callee.
3510 - The online fsck function should define a structure to hold scan data, a lock
3511 to coordinate access to the scan data, and a ``struct xfs_hook`` object.
3512 The scanner function and the regular filesystem code must acquire resources
3513 in the same order; see the next section for details.
3515 - The online fsck code must contain a C function to catch the hook action code
3517 If the object being updated has already been visited by the scan, then the
3518 hook information must be applied to the scan data.
3520 - Prior to unlocking inodes to start the scan, online fsck must call
3521 ``xfs_hooks_setup`` to initialize the ``struct xfs_hook``, and
3522 ``xfs_hooks_add`` to enable the hook.
3524 - Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan is
3527 The number of hooks should be kept to a minimum to reduce complexity.
3528 Static keys are used to reduce the overhead of filesystem hooks to nearly
3536 The code paths of the online fsck scanning code and the :ref:`hooked<fshooks>`
3565 These rules must be followed to ensure correct interactions between the
3566 checking code and the code making an update to the filesystem:
3568 - Prior to invoking the notifier call chain, the filesystem function being
3569 hooked must acquire the same lock that the scrub scanning function acquires
3570 to scan the inode.
3572 - The scanning function and the scrub hook function must coordinate access to
3573 the scan data by acquiring a lock on the scan data.
- The scrub hook function must not add the live update information to the scan
3576 observations unless the inode being updated has already been scanned.
3577 The scan coordinator has a helper predicate (``xchk_iscan_want_live_update``)
3580 - Scrub hook functions must not change the caller's state, including the
3582 They must not acquire any resources that might conflict with the filesystem
3585 - The hook function can abort the inode scan to avoid breaking the other rules.
3587 The inode scan APIs are pretty simple:
3591 - ``xchk_iscan_iter`` grabs a reference to the next inode in the scan or
3595 visited in the scan.
3596 This is critical for hook functions to decide if they need to update the
3599 - ``xchk_iscan_mark_visited`` to mark an inode as having been visited in the
3602 - ``xchk_iscan_teardown`` to finish the scan
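
A sketch of how a hook can use the visited cursor to make that decision; the
structure and helper below are invented stand-ins rather than the kernel's
``xchk_iscan`` implementation::

  /* The scan cursor remembers the highest inumber already visited.
   * A hook applies a live update to the scan data only if the changed
   * inode lies at or below that watermark; otherwise the scanner itself
   * will observe the new state when it gets there. */
  #include <stdbool.h>
  #include <stdio.h>

  struct scan_cursor {
          unsigned long long visited;     /* last inumber marked visited */
  };

  static bool want_live_update(const struct scan_cursor *sc,
                               unsigned long long ino)
  {
          return ino <= sc->visited;
  }

  int main(void)
  {
          struct scan_cursor sc = { .visited = 500 };

          printf("update to inode 100: %s\n",
                 want_live_update(&sc, 100) ? "apply to scan data" : "skip");
          printf("update to inode 900: %s\n",
                 want_live_update(&sc, 900) ? "apply to scan data" : "skip");
          return 0;
  }
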
3604 This functionality is also a part of the
3614 It is useful to compare the mount time quotacheck code to the online repair
3617 it does the following:
3619 1. Make sure the ondisk dquots are in good enough shape that all the incore
3620 dquots will actually load, and zero the resource usage counters in the
3623 2. Walk every inode in the filesystem.
3624 Add each file's resource usage to the incore dquot.
3627 If the incore dquot is not being flushed, add the ondisk buffer backing the
3630 4. Write the buffer list to disk.
3633 filesystem objects until the newly collected metadata reflect all filesystem
3636 index implemented with a sparse ``xfarray``, and only writes to the real dquots
3637 once the scan is complete.
3641 1. The inodes involved are joined and locked to a transaction.
3643 2. For each dquot attached to the file:
3645 a. The dquot is locked.
3647 b. A quota reservation is added to the dquot's resource usage.
3648 The reservation is recorded in the transaction.
3650 c. The dquot is unlocked.
3652 3. Changes in actual quota usage are tracked in the transaction.
3656 a. The dquot is locked again.
3659 the dquot.
3661 c. The dquot is unlocked.
3664 The step 2 hook creates a shadow version of the transaction dquot context
3665 (``dqtrx``) that operates in a similar manner to the regular code.
3666 The step 4 hook commits the shadow ``dqtrx`` changes to the shadow dquots.
3667 Notice that both hooks are called with the inode locked, which is how the
3668 live update coordinates with the inode scanner.
3670 The quotacheck scan looks like this:
3674 2. For each inode returned by the inode scan iterator:
3676 a. Grab and lock the inode.
3679 realtime blocks) and add that to the shadow dquots for the user, group,
3680 and project ids associated with the inode.
3682 c. Unlock and release the inode.
3684 3. For each dquot in the system:
3686 a. Grab and lock the dquot.
3688 b. Check the dquot against the shadow dquots created by the scan and updated
3689 by the live hooks.
3693 If repairs are desired, the real and shadow dquots are locked and their
3694 resource counts are set to the values in the shadow dquot.
3696 The proposed patchset is the
3707 The coordinated inode scanner is used to visit all directories on the
3710 During the scanning phase, each entry in a directory generates observation
3713 1. If the entry is a dotdot (``'..'``) entry of the root directory, the
3714 directory's parent link count is bumped because the root directory's dotdot
3717 2. If the entry is a dotdot entry of a subdirectory, the parent's backref
3720 3. If the entry is neither a dot nor a dotdot entry, the target file's parent
3723 4. If the target is a subdirectory, the parent's child link count is bumped.
3725 A crucial point to understand about how the link count inode scanner interacts
3726 with the live update hooks is that the scan cursor tracks which *parent*
3728 In other words, the live updates ignore any update about ``A → B`` when A has
3731 accounted as a backref counter in the shadow data for A, since child dotdot
3732 entries affect the parent's link count.
3733 Live update hooks are carefully placed in all parts of the filesystem that
3737 For any file, the correct link count is the number of parents plus the number
3740 The backref information is used to detect inconsistencies in the number of
3741 links pointing to child subdirectories and the number of dotdot entries
3744 After the scan completes, the link count of each file can be checked by locking
3745 both the inode and the shadow data, and comparing the link counts.
3749 If repairs are desired, the inode's link count is set to the value in the
3751 If no parents are found, the file must be :ref:`reparented <orphanage>` to the
3752 orphanage to prevent the file from being lost forever.
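
A sketch of the tallying rules listed above, applied to a tiny invented
directory tree; the backref bookkeeping is omitted to keep the example
short, and none of the names belong to the kernel::

  /* Tally parent links and child-subdirectory links per inode from a
   * stream of directory entries. */
  #include <stdio.h>
  #include <string.h>

  #define NFILES 16

  struct shadow { unsigned parents, children; };

  struct dirent_obs {
          unsigned dir_ino;       /* directory containing the entry */
          unsigned target_ino;    /* inode the entry points to */
          const char *name;
          int target_is_dir;
  };

  int main(void)
  {
          /* Inode 1 is the root, 2 is a subdir of 1, 3 is a file in 2. */
          struct dirent_obs obs[] = {
                  { 1, 1, "..", 1 },      /* root dotdot is self-referential */
                  { 1, 2, "sub", 1 },
                  { 2, 1, "..", 1 },
                  { 2, 3, "file", 0 },
          };
          struct shadow s[NFILES] = { { 0 } };

          for (unsigned i = 0; i < sizeof(obs) / sizeof(obs[0]); i++) {
                  struct dirent_obs *d = &obs[i];

                  if (!strcmp(d->name, "..")) {
                          if (d->dir_ino == d->target_ino)
                                  s[d->target_ino].parents++;
                  } else if (strcmp(d->name, ".")) {
                          s[d->target_ino].parents++;
                          if (d->target_is_dir)
                                  s[d->dir_ino].children++;
                  }
          }

          for (unsigned ino = 1; ino <= 3; ino++)
                  printf("ino %u: %u parent links, %u child subdirs\n",
                         ino, s[ino].parents, s[ino].children);
          return 0;
  }
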
3754 The proposed patchset is the
3764 Most repair functions follow the same pattern: lock filesystem resources,
3765 walk the surviving ondisk metadata looking for replacement metadata records,
3766 and use an :ref:`in-memory array <xfarray>` to store the gathered observations.
3767 The primary advantage of this approach is the simplicity and modularity of the
3768 repair code -- code and data are entirely contained within the scrub module,
3769 do not require hooks in the main filesystem, and are usually the most efficient
3771 A secondary advantage of this repair approach is atomicity -- once the kernel
3772 decides a structure is corrupt, no other threads can access the metadata until
3773 the kernel finishes repairing and revalidating the metadata.
3775 For repairs going on within a shard of the filesystem, these advantages
3776 outweigh the delays inherent in locking the shard while repairing parts of the
3778 Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
3780 every file in the filesystem, and the filesystem cannot stop.
3783 <liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete the
3788 2. While holding the locks on the AGI and AGF buffers acquired during the
3790 staging extents, and the internal log.
3794 4. Hook into rmap updates for the AG being repaired so that the live scan data
3795 can receive updates to the rmap btree from the rest of the filesystem during
3796 the file scan.
3799 decide if the mapping matches the AG of interest.
3802 a. Create a btree cursor for the in-memory btree.
3804 b. Use the rmap code to add the record to the in-memory btree.
3806 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3807 xfbtree changes to the xfile.
3809 6. For each live update received via the hook, decide if the owner has already
3811 If so, apply the live update into the scan data:
3813 a. Create a btree cursor for the in-memory btree.
3815 b. Replay the operation into the in-memory btree.
3817 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3818 xfbtree changes to the xfile.
3819 This is performed with an empty transaction to avoid changing the
3822 7. When the inode scan finishes, create a new scrub transaction and relock the
3825 8. Compute the new btree geometry using the number of rmap records in the
3828 9. Allocate the number of blocks computed in the previous step.
3830 10. Perform the usual btree bulk loading and commit to install the new rmap
3833 11. Reap the old rmap btree blocks as discussed in the case study about how
12. Free the xfbtree now that it is no longer needed.
3838 The proposed patchset is the
3848 information for the realtime volume, and quota records.
3853 attributes) use blocks mapped in the file fork offset address space that point
3856 the file fork offset address space.
3858 Because file forks can consume as much space as the entire filesystem, repairs
3861 the XFS filesystem, writes a new structure at the correct offsets into the
3862 temporary file, and atomically swaps the fork mappings (and hence the fork
3863 contents) to commit the repair.
3864 Once the repair is complete, the old fork can be reaped as necessary; if the
3865 system goes down during the reap, the iunlink code will delete the blocks
3868 **Note**: All space usage and inode indices in the filesystem *must* be
3870 This dependency is the reason why online repair can only use pageable kernel
3873 Swapping metadata extents with a temporary file requires the owner field of the
3874 block headers to match the file being repaired and not the temporary file. The
3878 There is a downside to the reaping process -- if the system crashes during the
3879 reap phase and the fork extents are crosslinked, the iunlink processing will
3880 fail because freeing space will find the extra reverse mappings and abort.
3884 They are not linked into a directory and the entire file will be reaped when
3885 the last reference to the file is lost.
3886 The key differences are that these files must have no access permission outside
3887 the kernel at all, they must be specially marked to prevent them from being
3888 opened by handle, and they must never be linked into the directory tree.
3893 | In the initial iteration of file metadata repair, the damaged metadata |
3894 | blocks would be scanned for salvageable data; the extents in the file |
3897 | This strategy did not survive the introduction of the atomic repair |
3900 | The second iteration explored building a second structure at a high |
3901 | offset in the fork from the salvage data, reaping the old extents, and |
3902 | using a ``COLLAPSE_RANGE`` operation to slide the new extents into |
3907 | - Array structures are linearly addressed, and the regular filesystem |
3908 | codebase does not have the concept of a linear offset that could be |
3909 | applied to the record offset computation to build an alternate copy. |
3911 | - Extended attributes are allowed to use the entire attr fork offset |
3915 | different part of the fork address space, the atomic repair commit |
3917 | a log assisted ``COLLAPSE_RANGE`` operation to ensure that the old |
3920 | - A crash after construction of the secondary tree but before the range |
3921 | collapse would leave unreachable blocks in the file fork. |
3928 | - Directory entry blocks and quota records record the file fork offset |
3929 | in the header area of each block. |
3937 | Were the atomic commit to use a range collapse operation, each block |
3938 | would have to be rewritten very carefully to preserve the graph |
| This led to the introduction of temporary file staging.  |
3949 Online repair code should use the ``xrep_tempfile_create`` function to create a
3950 temporary file inside the filesystem.
3951 This allocates an inode, marks the in-core inode private, and attaches it to
3952 the scrub context.
3953 These files are hidden from userspace, may not be added to the directory tree,
3956 Temporary files only use two inode locks: the IOLOCK and the ILOCK.
3957 The MMAPLOCK is not needed here, because there must not be page faults from
3959 The usage patterns of these two locks are the same as for any other XFS file --
3960 access to file data are controlled via the IOLOCK, and access to file metadata
3961 are controlled via the ILOCK.
3962 Locking helpers are provided so that the temporary file and its lock state can
3963 be cleaned up by the scrub context.
3964 To comply with the nested locking strategy laid out in the :ref:`inode
3965 locking<ilocking>` section, it is recommended that scrub functions use the
3970 1. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
3973 2. The regular directory, symbolic link, and extended attribute functions can
3974 be used to write to the temporary file.
3977 must be conveyed to the file being repaired, which is the topic of the next
3980 The proposed patches are in the
3989 it, it must commit the new changes into the existing file.
3990 It is not possible to swap the inumbers of two files, so instead the new
3991 metadata must replace the old.
3992 This suggests the need for the ability to swap extents, but the existing extent
3993 swapping code used by the file defragmenting tool ``xfs_fsr`` is not sufficient
3996 a. When the reverse-mapping btree is enabled, the swap code must keep the
4001 b. Reverse-mapping is critical for the operation of online fsck, so the old
4008 change in file contents, even if the operation is interrupted.
4010 d. Online repair needs to swap the contents of two files that are by definition
4012 For directory and xattr repairs, the user-visible contents might be the
4013 same, but the contents of individual blocks may be very different.
4015 e. Old blocks in the file may be cross-linked with another structure and must
4016 not reappear if the system goes down mid-repair.
4019 of log intent item to track the progress of an operation to exchange two file
4021 The new deferred operation type chains together the same transactions used by
4022 the reverse-mapping extent swap code.
4023 The new log item records the progress of the exchange to ensure that once an
4026 The new ``XFS_SB_FEAT_INCOMPAT_LOG_ATOMIC_SWAP`` log-incompatible feature flag
4027 in the superblock protects these new log item records from being replayed on
4030 The proposed patchset is the
4038 | Starting with XFS v5, the superblock contains a |
4039 | ``sb_features_log_incompat`` field to indicate that the log contains |
4042 | In short, log incompat features protect the log contents against kernels |
4043 | that will not understand the contents. |
4044 | Unlike the other superblock feature bits, log incompat bits are |
4046 | The log cleans itself after its contents have been committed into the |
4047 | filesystem, either as part of an unmount or because the system is |
4049 | Because upper level code can be working on a transaction at the same |
4050 | time that the log cleans itself, it is necessary for upper level code to |
4051 | communicate to the log when it is going to use a log incompatible |
4054 | The log coordinates access to incompatible features through the use of |
4056 | The log cleaning code tries to take this rwsem in exclusive mode to |
4057 | clear the bit; if the lock attempt fails, the feature bit remains set. |
4059 | transaction by calling ``xlog_use_incompat_feat``, which takes the rwsem |
4061 | The code supporting a log incompat feature should create wrapper |
4062 | functions to obtain the log feature and call |
4063 | ``xfs_add_incompat_log_feature`` to set the feature bits in the primary |
4065 | The superblock update is performed transactionally, so the wrapper to |
4066 | obtain log assistance must be called just prior to the creation of the |
4067 | transaction that uses the functionality. |
4068 | For a file operation, this step must happen after taking the IOLOCK |
4069 | and the MMAPLOCK, but before allocating the transaction. |
4070 | When the transaction is complete, the ``xlog_drop_incompat_feat`` |
4071 | function is called to release the feature. |
4072 | The feature bit will not be cleared from the superblock until the log |
4076 | log incompat features and provide convenience wrappers around the |
4084 The goal is to exchange all file fork mappings between two file fork offset
4086 There are likely to be many extent mappings in each fork, and the edges of
4087 the mappings aren't necessarily aligned.
4088 Furthermore, there may be other updates that need to happen after the swap,
This is roughly the format of the new deferred extent swap work item:

.. code-block:: c

    struct xfs_swapext_intent {
        /* Inodes participating in the operation. */
        struct xfs_inode    *sxi_ip1;
        struct xfs_inode    *sxi_ip2;

        /* File offset range information. */
        xfs_fileoff_t       sxi_startoff1;
        xfs_fileoff_t       sxi_startoff2;
        xfs_filblks_t       sxi_blockcount;

        /* Set these file sizes after the operation, unless negative. */
        xfs_fsize_t         sxi_isize1;
        xfs_fsize_t         sxi_isize2;

        /* XFS_SWAP_EXT_* log operation flags */
        uint64_t            sxi_flags;
    };
4113 The new log intent item contains enough information to track two logical fork
4116 Each step of a swap operation exchanges the largest file range mapping possible
4117 from one file to the other.
4118 After each step in the swap operation, the two startoff fields are incremented
4119 and the blockcount field is decremented to reflect the progress made.
4120 The flags field captures behavioral parameters such as swapping the attr fork
4121 instead of the data fork and other work to be done after the extent swap.
4122 The two isize fields are used to swap the file size at the end of the operation
4123 if the file data fork is the target of the swap operation.
4125 When the extent swap is initiated, the sequence of operations is as follows:
4127 1. Create a deferred work item for the extent swap.
4128 At the start, it should contain the entirety of the file ranges to be
4131 2. Call ``xfs_defer_finish`` to process the exchange.
4133 This will log an extent swap intent item to the transaction for the deferred
4136 3. Until ``sxi_blockcount`` of the deferred extent swap work item is zero,
4138 a. Read the block maps of both file ranges starting at ``sxi_startoff1`` and
4139 ``sxi_startoff2``, respectively, and compute the longest extent that can
This is the minimum of the two ``br_blockcount`` values in the mappings.
4142 Keep advancing through the file forks until at least one of the mappings
4144 Mutual holes, unwritten extents, and extent mappings to the same physical
4147 For the next few steps, this document will refer to the mapping that came
4148 from file 1 as "map1", and the mapping that came from file 2 as "map2".
4158 f. Log the block, quota, and extent count updates for both files.
4160 g. Extend the ondisk size of either file if necessary.
4162 h. Log an extent swap done log item for the extent swap intent log item
4163 that was read at the start of step 3.
4165 i. Compute the amount of file range that has just been covered.
4169 j. Increase the starting offsets of ``sxi_startoff1`` and ``sxi_startoff2``
4170 by the number of blocks computed in the previous step, and decrease
4171 ``sxi_blockcount`` by the same quantity.
4172 This advances the cursor.
4174 k. Log a new extent swap intent log item reflecting the advanced state of
4175 the work item.
4177 l. Return the proper error code (EAGAIN) to the deferred operation manager
4179 The operation manager completes the deferred work in steps 3b-3e before
4180 moving back to the start of step 3.
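To make the loop structure concrete, here is an illustrative sketch of one
pass through step 3.  The ``struct xfs_swapext_intent`` fields come from the
work item shown earlier; the two helpers marked hypothetical stand in for the
unmap/remap/log work of steps 3b-3h and do not name real functions.

.. code-block:: c

    STATIC int
    xfs_swapext_step(
            struct xfs_trans                *tp,
            struct xfs_swapext_intent       *sxi)
    {
            struct xfs_bmbt_irec            map1, map2;
            xfs_filblks_t                   step;
            int                             error;

            /* 3a: read one mapping from each file at the current offsets. */
            error = xfs_swapext_read_mappings(sxi, &map1, &map2); /* hypothetical */
            if (error)
                    return error;

            /* The longest exchangeable run is the smaller of the two mappings. */
            step = min(map1.br_blockcount, map2.br_blockcount);

            /* 3b-3h: unmap both ranges, remap them crosswise, log the updates. */
            error = xfs_swapext_exchange(tp, sxi, &map1, &map2, step); /* hypothetical */
            if (error)
                    return error;

            /* 3i-3j: advance the cursor by the amount just exchanged. */
            sxi->sxi_startoff1 += step;
            sxi->sxi_startoff2 += step;
            sxi->sxi_blockcount -= step;

            /*
             * 3k-3l: if work remains, the caller relogs the intent and returns
             * -EAGAIN so that the deferred operation manager calls back in.
             */
            return sxi->sxi_blockcount > 0 ? -EAGAIN : 0;
    }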
4185 If the filesystem goes down in the middle of an operation, log recovery will
4186 find the most recent unfinished extent swap log intent item and restart from
the old broken structure or the new one, and never a mishmash of both.
4196 First, regular files require the page cache to be flushed to disk before the
4198 Like any filesystem operation, extent swapping must determine the maximum
4200 the operation, and reserve that quantity of resources to avoid an unrecoverable
4202 The preparation step scans the ranges of both files to estimate:
4204 - Data device blocks needed to handle the repeated updates to the fork
4207 - Increase in quota usage for both files, if the two files do not share the
4209 - The number of extent mappings that will be added to each file.
4212 to different extents on the realtime volume, which could happen if the
4215 The need for precise estimation increases the run time of the swap operation,
4217 The filesystem must not run completely out of free space, nor can the extent
Regular users are required to abide by the quota limits, though metadata repairs
4225 Extended attributes, symbolic links, and directories can set the fork format to
4226 "local" and treat the fork as a literal area for data storage.
4229 - If both forks are in local format and the fork areas are large enough, the
4230 swap is performed by copying the incore fork contents, logging both forks,
4232 The atomic extent swap mechanism is not necessary, since this can be done
4235 - If both forks map blocks, then the regular atomic extent swap is used.
4238 The contents of the local format fork are converted to a block to perform the
4240 The conversion to block format must be done in the same transaction that
4241 logs the initial extent swap intent log item.
4242 The regular atomic extent swap is used to exchange the mappings.
4243 Special flags are set on the swap operation so that the transaction can be
4244 rolled one more time to convert the second file's fork back to local format
4245 so that the second file will be ready to go as soon as the ILOCK is dropped.
4247 Extended attributes and directories stamp the owning inode into every block,
4248 but the buffer verifiers do not actually check the inode number!
4250 referential integrity, so prior to performing the extent swap, online repair
4251 builds every block in the new data structure with the owner field of the file
4254 After a successful swap operation, the repair operation must reap the old fork
4255 blocks by processing each fork mapping through the standard :ref:`file extent
4257 If the filesystem should go down during the reap part of the repair, the
4258 iunlink processing at the end of recovery will free both the temporary file and
4260 However, this iunlink processing omits the cross-link detection of online
4270 2. Use the staging data to write out new contents into the temporary repair
The new contents must be written to the same fork that is being repaired.
4274 3. Commit the scrub transaction, since the swap estimation step must be
4278 the appropriate resource reservations, locks, and fill out a ``struct
4279 xfs_swapext_req`` with the details of the swap operation.
4281 5. Call ``xrep_tempswap_contents`` to swap the contents.
4283 6. Commit the transaction to complete the repair.
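A rough sketch of steps 4-6 of this procedure follows, assuming a request
structure and helpers along the lines of the names used above.  Only
``xrep_tempswap_contents`` and ``struct xfs_swapext_req`` are named in the
text; ``xrep_tempswap_trans_alloc``, ``sc->tempip``, and the request field
names are assumptions made for illustration.

.. code-block:: c

    STATIC int
    xrep_commit_new_contents(
            struct xfs_scrub        *sc)
    {
            struct xfs_swapext_req  req = {
                    .ip1            = sc->tempip,   /* assumed field names */
                    .ip2            = sc->ip,
                    .whichfork      = XFS_DATA_FORK,
            };
            int                     error;

            /* Step 4: allocate a transaction with enough reservation to swap. */
            error = xrep_tempswap_trans_alloc(sc, &req);    /* hypothetical */
            if (error)
                    return error;

            /* Step 5: atomically exchange the fork mappings of the two files. */
            error = xrep_tempswap_contents(sc, &req);
            if (error)
                    return error;

            /* Step 6: commit the transaction to complete the repair. */
            return xfs_trans_commit(sc->tp);
    }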
4287 Case Study: Repairing the Realtime Summary File
4290 In the "realtime" section of an XFS filesystem, free space is tracked via a
4292 Each bit in the bitmap represents one realtime extent, which is a multiple of
4293 the filesystem block size between 4KiB and 1GiB in size.
4294 The realtime summary file indexes the number of free extents of a given size to
4295 the offset of the block within the realtime free space bitmap where those free
4297 In other words, the summary file helps the allocator find free extents by
4298 length, similar to what the free space by count (cntbt) btree does for the data
4301 The summary file itself is a flat file (with no block headers or checksums!)
4303 counters to match the number of blocks in the rt bitmap.
4304 Each counter records the number of free extents that start in that bitmap block
4307 To check the summary file against the bitmap:
4309 1. Take the ILOCK of both the realtime bitmap and summary files.
4311 2. For each free space extent recorded in the bitmap:
4313 a. Compute the position in the summary file that contains a counter that
4316 b. Read the counter from the xfile.
4318 c. Increment it, and write it back to the xfile.
4320 3. Compare the contents of the xfile against the ondisk file.
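The accumulation in step 2 amounts to one counter bump per free extent found
in the bitmap.  A sketch follows, with the caveat that the index conversion
helpers and the ``xfsum_load``/``xfsum_store`` xfile accessors shown here are
assumptions made for illustration:

.. code-block:: c

    STATIC int
    xchk_rtsum_record_free(
            struct xfs_scrub        *sc,
            xfs_rtblock_t           start_rtx,      /* first free rt extent */
            xfs_rtblock_t           len_rtx)        /* length in rt extents */
    {
            xfs_fileoff_t           bbno;           /* bitmap block of start_rtx */
            unsigned int            offs;           /* counter index in the summary */
            xfs_suminfo_t           value;
            int                     error;

            /* Index the counter by (log2 of the length, bitmap block of start). */
            bbno = xfs_rtx_to_rbmblock(sc->mp, start_rtx);  /* assumed helper */
            offs = XFS_SUMOFFS(sc->mp, xfs_highbit64(len_rtx), bbno); /* assumed macro */

            /* Read the counter from the xfile, increment it, and write it back. */
            error = xfsum_load(sc, offs, &value);           /* hypothetical */
            if (error)
                    return error;
            value++;
            return xfsum_store(sc, offs, value);            /* hypothetical */
    }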
4322 To repair the summary file, write the xfile contents into the temporary file
4323 and use atomic extent swap to commit the new contents.
4324 The temporary file is then reaped.
4326 The proposed patchset is the
Values are limited in size to 64KiB, but there is no limit on the number of
4337 The attribute fork is unpartitioned, which means that the root of the attribute
4341 user-provided names with the user-provided values.
4343 If the leaf information expands beyond a single block, a directory/attribute
4349 1. Walk the attr fork mappings of the file being repaired to find the attribute
4353 a. Walk the attr leaf block to find candidate keys.
1. Check the name for problems, and ignore the name if there are any.
4358 2. Retrieve the value.
4359 If that succeeds, add the name and value to the staging xfarray and
4362 2. If the memory usage of the xfarray and xfblob exceed a certain amount of
4363 memory or there are no more attr fork blocks to examine, unlock the file and
4364 add the staged extended attributes to the temporary file.
4366 3. Use atomic extent swapping to exchange the new and old extended attribute
4368 The old attribute blocks are now attached to the temporary file.
4370 4. Reap the temporary file.
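As an illustration of the staging in step 1, salvaging a single candidate
attribute might stash the variable-length name and value in the xfblob and a
small fixed-size record in the xfarray.  The record layout, the repair state
structure, and the exact xfblob/xfarray argument lists below are simplified
assumptions:

.. code-block:: c

    struct xrep_xattr {
            struct xfblob           *xattr_blobs;   /* names and values */
            struct xfarray          *xattr_records; /* fixed-size keys */
    };

    struct xrep_xattr_key {
            xfblob_cookie           name_cookie;    /* name bytes in the xfblob */
            xfblob_cookie           value_cookie;   /* value bytes in the xfblob */
            uint32_t                namelen;
            uint32_t                valuelen;
            unsigned int            flags;
    };

    STATIC int
    xrep_xattr_stash(
            struct xrep_xattr       *rx,
            unsigned int            attr_flags,
            const unsigned char     *name,
            unsigned int            namelen,
            const void              *value,
            unsigned int            valuelen)
    {
            struct xrep_xattr_key   key = {
                    .namelen        = namelen,
                    .valuelen       = valuelen,
                    .flags          = attr_flags,
            };
            int                     error;

            /* Copy the name and value into the blob store... */
            error = xfblob_store(rx->xattr_blobs, &key.name_cookie, name, namelen);
            if (error)
                    return error;
            error = xfblob_store(rx->xattr_blobs, &key.value_cookie, value, valuelen);
            if (error)
                    return error;

            /* ...and remember the fixed-size record in the xfarray. */
            return xfarray_append(rx->xattr_records, &key);
    }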
4372 The proposed patchset is the
4382 The offline repair tool scans all inodes to find files with nonzero link count,
4385 moved to the ``/lost+found`` directory.
4388 The best that online repair can do at this time is to read directory data
4390 move orphans back into the directory tree.
4391 The salvage process is discussed in the case study at the end of this section.
4392 The :ref:`file link count fsck <nlinks>` code takes care of fixing link counts
4393 and moving orphans to the ``/lost+found`` directory.
4398 Unlike extended attributes, directory blocks are all the same size, so
4401 1. Find the parent of the directory.
If the dotdot entry is readable, try to confirm that the alleged
4403 parent has a child entry pointing back to the directory being repaired.
4404 Otherwise, walk the filesystem to find it.
2. Walk the first partition of the data fork of the directory to find the directory
4410 a. Walk the directory data block to find candidate entries.
i. Check the name for problems, and ignore the name if there are any.
4415 ii. Retrieve the inumber and grab the inode.
4416 If that succeeds, add the name, inode number, and file type to the
4419 3. If the memory usage of the xfarray and xfblob exceed a certain amount of
4420 memory or there are no more directory data blocks to examine, unlock the
4421 directory and add the staged dirents into the temporary directory.
4422 Truncate the staging files.
4424 4. Use atomic extent swapping to exchange the new and old directory structures.
4425 The old directory blocks are now attached to the temporary file.
4427 5. Reap the temporary file.
4429 **Future Work Question**: Should repair revalidate the dentry cache when
ensure that one of the following applies:
4437 1. The cached dentry reflects an ondisk dirent in the new directory.
4439 2. The cached dentry no longer has a corresponding ondisk dirent in the new
4440 directory and the dentry can be purged from the cache.
4442 3. The cached dentry no longer has an ondisk dirent but the dentry cannot be
4444 This is the problem case.
4446 Unfortunately, the current dentry cache design doesn't provide a means to walk
4450 The proposed patchset is the
4458 A parent pointer is a piece of file metadata that enables a user to locate the
4459 file's parent directory without having to traverse the directory tree from the
4461 Without them, reconstruction of directory trees is hindered in much the same
4462 way that the historic lack of reverse space mapping information once hindered
4464 The parent pointer feature, however, makes total directory reconstruction
4467 XFS parent pointers include the dirent name and location of the entry within
4468 the parent directory.
4470 parents in the form ``(parent_inum, parent_gen, dirent_pos) → (dirent_name)``.
4471 The directory checking process can be strengthened to ensure that the target of
4472 each dirent also contains a parent pointer pointing back to the dirent.
4473 Likewise, each parent pointer can be checked by ensuring that the target of
4475 the parent pointer.
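For illustration only (the ondisk format is explicitly not finalized, as noted
below), the logical content of one parent pointer can be pictured as the
following in-memory record:

.. code-block:: c

    /* Purely illustrative; not an ondisk format. */
    struct xchk_pptr_example {
            xfs_ino_t       parent_inum;    /* inode number of the parent directory */
            uint32_t        parent_gen;     /* generation number of the parent */
            uint64_t        dirent_pos;     /* offset of the dirent within the parent */
            uint8_t         namelen;        /* length of the dirent name */
            uint8_t         name[];         /* the dirent name (the "value") */
    };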
4478 **Note**: The ondisk format of parent pointers is not yet finalized.
4486 | extended attribute in the child that could be used to identify the |
4491 | 1. The XFS codebase of the late 2000s did not have the infrastructure to |
4492 | enforce strong referential integrity in the directory tree. |
4494 | followed up with the corresponding change to the reverse links. |
4501 | 3. The extended attribute did not record the name of the directory entry |
4502 | in the parent, so the SGI parent pointer implementation cannot be |
4503 | used to reconnect the directory tree. |
4507 | point before the maximum file link count is achieved. |
4509 | The original parent pointer design was too unstable for something like |
4512 | second implementation that solves all shortcomings of the first. |
4514 | manipulations of the extended attribute structures. |
4515 | This solves the referential integrity problem by making it possible to |
4516 | commit a dirent update and a parent pointer update in the same |
4518 | Chandan increased the maximum extent counts of both data and attribute |
4519 | forks, thereby ensuring that the extended attribute structure can grow |
4520 | to handle the maximum hardlink count of any file. |
4529 1. Set up a temporary directory for generating the new directory structure,
4533 2. Set up an inode scanner and hook into the directory entry code to receive
4536 3. For each parent pointer found in each file scanned, decide if the parent
4537 pointer references the directory of interest.
4540 a. Stash an addname entry for this dirent in the xfarray for later.
4542 b. When finished scanning that file, flush the stashed updates to the
4545 4. For each live directory update received via the hook, decide if the child
4549 a. Stash an addname or removename entry for this dirent update in the
4551 We cannot write directly to the temporary directory because hook
4553 Instead, we stash updates in the xfarray and rely on the scanner thread
4554 to apply the stashed updates to the temporary directory.
4556 5. When the scan is complete, atomically swap the contents of the temporary
4557 directory and the directory being repaired.
4558 The temporary directory now contains the damaged directory structure.
4560 6. Reap the temporary directory.
4562 7. Update the dirent position field of parent pointers as necessary.
4563 This may require the queuing of a substantial number of xattr log intent
4566 The proposed patchset is the
4571 **Unresolved Question**: How will repair ensure that the ``dirent_pos`` fields
4572 match in the reconstructed directory?
4576 1. The field could be designated advisory, since the other three values are
4577 sufficient to find the entry in the parent.
4581 the referential integrity problem but runs the risk that dirent creation
4582 will fail due to conflicts with the free space in the directory.
4584 These conflicts could be resolved by appending the directory entry and
4585 amending the xattr code to support updating an xattr key and reindexing the
4586 dabtree, though this would have to be performed with the parent directory
4589 3. Same as above, but remove the old parent pointer entry and add a new one
4592 4. Change the ondisk xattr format to ``(parent_inum, name) → (parent_gen)``,
4593 which would provide the attr name uniqueness that we require, without
4594 forcing repair code to update the dirent position.
4595 Unfortunately, this requires changes to the xattr code to support attr
4598 5. Change the ondisk xattr format to ``(parent_inum, hash(name)) →
4600 If the hash is sufficiently resistant to collisions (e.g. sha256) then
4601 this should provide the attr name uniqueness that we require.
4604 Discussion is ongoing under the `parent pointers patch deluge
4617 2. Set up an inode scanner and hook into the directory entry code to receive
4620 3. For each directory entry found in each directory scanned, decide if the
4621 dirent references the file of interest.
4624 a. Stash an addpptr entry for this parent pointer in the xfblob and xfarray
4627 b. When finished scanning the directory, flush the stashed updates to the
4630 4. For each live directory update received via the hook, decide if the parent
4634 a. Stash an addpptr or removepptr entry for this dirent update in the
4636 We cannot write parent pointers directly to the temporary file because
4638 Instead, we stash updates in the xfarray and rely on the scanner thread
4639 to apply the stashed parent pointer updates to the temporary file.
4641 5. Copy all non-parent pointer extended attributes to the temporary file.
4643 6. When the scan is complete, atomically swap the attribute fork of the
4644 temporary file and the file being repaired.
4645 The temporary file now contains the damaged extended attribute structure.
4647 7. Reap the temporary file.
4649 The proposed patchset is the
4659 Parent pointer checks are therefore a second pass to be added to the existing
4662 1. After the set of surviving files has been established (i.e. phase 6),
4663 walk the surviving directories of each AG in the filesystem.
4664 This is already performed as part of the connectivity checks.
4666 2. For each directory entry found, record the name in an xfblob, and store
4670 3. For each AG in the filesystem,
4672 a. Sort the per-AG tuples in order of child_ag_inum, parent_inum, and
4675 b. For each inode in the AG,
4677 1. Scan the inode for parent pointers.
4678 Record the names in a per-file xfblob, and store ``(parent_inum,
4681 2. Sort the per-file tuples in order of parent_inum, and dirent_pos.
4683 3. Position one slab cursor at the start of the inode's records in the
4685 This should be trivial since the per-AG tuples are in child inumber
4688 4. Position a second slab cursor at the start of the per-file tuple slab.
4690 5. Iterate the two cursors in lockstep, comparing the parent_ino and
4691 dirent_pos fields of the records under each cursor.
4693 a. Tuples in the per-AG list but not the per-file list are missing and
4694 need to be written to the inode.
4696 b. Tuples in the per-file list but not the per-AG list are dangling
4697 and need to be removed from the inode.
4699 c. For tuples in both lists, update the parent_gen and name components
4700 of the parent pointer if necessary.
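The lockstep walk in step 5 is an ordinary sorted-merge comparison.  The
sketch below illustrates it with plain sorted arrays standing in for the slab
cursors, once both cursors have been positioned at a single child inode's
records; the record layout and the three action functions are invented
placeholders for the outcomes in steps 5a-5c:

.. code-block:: c

    struct pptr_rec {
            uint64_t        parent_inum;
            uint64_t        dirent_pos;
            uint32_t        parent_gen;
            /* the name lives in an xfblob, referenced by a cookie (omitted) */
    };

    /* Placeholder actions for steps 5a, 5b, and 5c. */
    static void add_missing_pptr(const struct pptr_rec *rec);
    static void remove_dangling_pptr(const struct pptr_rec *rec);
    static void update_pptr_if_needed(const struct pptr_rec *want,
                                      const struct pptr_rec *have);

    static int
    pptr_cmp(const struct pptr_rec *a, const struct pptr_rec *b)
    {
            if (a->parent_inum != b->parent_inum)
                    return a->parent_inum < b->parent_inum ? -1 : 1;
            if (a->dirent_pos != b->dirent_pos)
                    return a->dirent_pos < b->dirent_pos ? -1 : 1;
            return 0;
    }

    /* Walk the dirent-derived records and the file's parent pointers in lockstep. */
    static void
    pptr_reconcile(const struct pptr_rec *ag, size_t nr_ag,
                   const struct pptr_rec *file, size_t nr_file)
    {
            size_t  i = 0, j = 0;

            while (i < nr_ag || j < nr_file) {
                    int cmp = (i == nr_ag) ?  1 :
                              (j == nr_file) ? -1 : pptr_cmp(&ag[i], &file[j]);

                    if (cmp < 0) {
                            add_missing_pptr(&ag[i++]);         /* 5a */
                    } else if (cmp > 0) {
                            remove_dangling_pptr(&file[j++]);   /* 5b */
                    } else {
                            update_pptr_if_needed(&ag[i], &file[j]); /* 5c */
                            i++;
                            j++;
                    }
            }
    }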
4704 The proposed patchset is the
4710 challenging because it currently uses a single-pass scan of the filesystem
4714 1. The first pass of the scan zaps corrupt inodes, forks, and attributes
4718 2. The next pass records parent pointers pointing to the directories noted
4719 as being corrupt in the first pass.
4720 This second pass may have to happen after the phase 4 scan for duplicate
4723 3. The third pass resets corrupt directories to an empty shortform directory.
4724 Free space metadata has not been ensured yet, so repair cannot yet use the
4727 4. At the start of phase 6, space metadata have been rebuilt.
4728 Use the parent pointer information recorded during step 2 to reconstruct
4729 the dirents and add them to the now-empty directories.
4735 The Orphanage
4740 The root of the filesystem is a directory, and each entry in a directory points
Unfortunately, a disruption in the directory graph pointers results in a
4746 Without parent pointers, the directory parent pointer online scrub code can
4748 back to the child directory and the file link count checker can detect a file
4749 that isn't pointed to by any directory in the filesystem.
4750 If such a file has a positive link count, the file is an orphan.
4754 This should reduce the incidence of files ending up in ``/lost+found``.
4756 When orphans are found, they should be reconnected to the directory tree.
4757 Offline fsck solves the problem by creating a directory ``/lost+found`` to
4758 serve as an orphanage, and linking orphan files into the orphanage by using the
4759 inumber as the name.
4760 Reparenting a file to the orphanage does not reset any of its permissions or
4763 This process is more involved in the kernel than it is in userspace.
4764 The directory and file link count repair setup functions must use the regular
4765 VFS mechanisms to create the orphanage directory with all the necessary
4769 Orphaned files are adopted by the orphanage as follows:
4771 1. Call ``xrep_orphanage_try_create`` at the start of the scrub setup function
4772 to try to ensure that the lost and found directory actually exists.
4773 This also attaches the orphanage directory to the scrub context.
4775 2. If the decision is made to reconnect a file, take the IOLOCK of both the
4776 orphanage and the file being reattached.
4777 The ``xrep_orphanage_iolock_two`` function follows the inode locking
4781 to compute the new name in the orphanage and the block reservation required.
4783 4. Use ``xrep_orphanage_adoption_prep`` to reserve resources to the repair
4786 5. Call ``xrep_orphanage_adopt`` to reparent the orphaned file into the lost
4787 and found, and update the kernel dentry cache.
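Chaining the helpers named above might look roughly like the sketch below;
the argument lists, the step-3 naming helper (omitted here), and the error
handling are assumptions made for illustration:

.. code-block:: c

    STATIC int
    xrep_example_adopt_orphan(
            struct xfs_scrub        *sc)
    {
            int                     error;

            /* Step 1: make sure /lost+found exists and attach it to sc. */
            error = xrep_orphanage_try_create(sc);
            if (error)
                    return error;

            /* Step 2: lock the orphanage and the orphan in the proper order. */
            error = xrep_orphanage_iolock_two(sc);
            if (error)
                    return error;

            /* Steps 3-4: compute the new name and reserve resources. */
            error = xrep_orphanage_adoption_prep(sc);       /* assumed signature */
            if (error)
                    return error;

            /* Step 5: reparent the file and update the dentry cache. */
            return xrep_orphanage_adopt(sc);
    }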
4789 The proposed patches are in the
4797 This section discusses the key algorithms and data structures of the userspace
4798 program, ``xfs_scrub``, that provide the ability to drive metadata checks and
4799 repairs in the kernel, verify file data, and look for other potential problems.
4806 Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
4807 That structure follows naturally from the data dependencies designed into the
4811 a. Filesystem summary counts depend on consistency within the inode indices,
4812 the allocation group space btrees, and the realtime volume space
4815 b. Quota resource counts depend on consistency within the quota file data
4816 forks, inode indices, inode records, and the forks of every file on the
4819 c. The naming hierarchy depends on consistency within the directory and
4824 the file forks that map directory and extended attribute data to physical
e. The file forks depend on consistency within inode records and the space
metadata indices of the allocation groups and the realtime volume.
f. Inode records depend on consistency within the inode metadata indices.
4833 g. Realtime space metadata depend on the inode records and data forks of the
4836 h. The allocation group metadata indices (free space, inodes, reference count,
4837 and reverse mapping btrees) depend on consistency within the AG headers and
4838 between all the AG metadata btrees.
4840 i. ``xfs_scrub`` depends on the filesystem being mounted and kernel support
4844 operations in the ``xfs_scrub`` program:
- Phase 1 checks that the provided path maps to an XFS filesystem and detects
the kernel's scrubbing abilities, which validates group (i).
4865 Notice that the data dependencies between groups are enforced by the structure
4866 of the program flow.
4874 if the program has been invoked manually from a command line.
4875 This requires careful scheduling to keep the threads as evenly loaded as
4878 Early iterations of the ``xfs_scrub`` inode scanner naïvely created a single
4880 Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``) to find
4883 The file handle was then passed to a function to generate scrub items for each
4885 This simple algorithm leads to thread balancing problems in phase 3 if the
4886 filesystem contains one AG with a few large sparse files and the rest of the
4888 The inode scan dispatch function was not sufficiently granular; it should have
4889 been dispatching at the level of individual inodes, or, to constrain memory
4894 Just like before, the first workqueue is seeded with one workqueue item per AG,
4896 The second workqueue, however, is configured with an upper bound on the number
Each inode btree chunk found by the first workqueue's workers is queued to the
4902 If the second workqueue is too full, the workqueue add function blocks the
4903 first workqueue's workers until the backlog eases.
4904 This doesn't completely solve the balancing problem, but reduces it enough to
4907 The proposed patchsets are the scrub
4910 and the
4922 functioning of the inode indices to find inodes to scan.
4933 In the original design of ``xfs_scrub``, it was thought that repairs would be
4934 so infrequent that the ``struct xfs_scrub_metadata`` objects used to
4935 communicate with the kernel could also be used as the primary object to
4937 With recent increases in the number of optimizations possible for a given
4945 The :ref:`data dependencies <scrubcheck>` outlined earlier still apply, which
4946 means that ``xfs_scrub`` must try to complete the repair work scheduled by
4948 The repair process is as follows:
4950 1. Start a round of repair with a workqueue and enough workers to keep the CPUs
4951 as busy as the user desires.
4955 i. Ask the kernel to repair everything listed in the repair item for a
4958 ii. Make a note if the kernel made any progress in reducing the number
4961 iii. If the object no longer requires repairs, revalidate all metadata
4963 If the revalidation succeeds, drop the repair item.
4964 If not, requeue the item for more repairs.
4966 b. If any repairs were made, jump back to 1a to retry all the phase 2 items.
4970 i. Ask the kernel to repair everything listed in the repair item for a
4973 ii. Make a note if the kernel made any progress in reducing the number
4976 iii. If the object no longer requires repairs, revalidate all metadata
4978 If the revalidation succeeds, drop the repair item.
4979 If not, requeue the item for more repairs.
4981 d. If any repairs were made, jump back to 1c to retry all the phase 3 items.
4987 Complain if the repairs were not successful, since this is the last chance
4992 Corrupt file data blocks reported by phase 6 cannot be recovered by the
4995 The proposed patchsets are the
4998 refactoring of the
5004 and the
5012 If ``xfs_scrub`` succeeds in validating the filesystem metadata by the end of
5014 the filesystem.
5015 These names consist of the filesystem label, names in directory entries, and
5016 the names of extended attributes.
5017 Like most Unix filesystems, XFS imposes the sparest of constraints on the
5024 - Null bytes are not allowed in the filesystem label.
5026 Directory entries and attribute keys store the length of the name explicitly
5028 For this section, the term "naming domain" refers to any place where names are
5029 presented together -- all the names in a directory, or all the attributes of a
5032 Although the Unix naming constraints are very permissive, the reality of most
5036 with the C library because the kernel expects null-terminated names.
5037 In the common case, therefore, names found in an XFS filesystem are actually
5040 To maximize its expressiveness, the Unicode standard defines separate control
5042 systems around the world.
5043 For example, the character "Cyrillic Small Letter A" U+0430 "а" often renders
5046 The standard also permits characters to be constructed in multiple ways --
For example, the character "Angstrom Sign" U+212B "Å" can also be expressed
5054 Like the standards that preceded it, Unicode also defines various control
5055 characters to alter the presentation of text.
5056 For example, the character "Right-to-Left Override" U+202E can trick some
5059 If the character "Zero Width Space" U+200B is encountered in a file name, the
5060 name will render identically to a name that does not have the zero width
5065 The kernel, in its indifference to upper level encoding schemes, permits this.
5066 Most filesystem drivers persist the byte sequence names that are given to them
5067 by the VFS.
5070 sections 4 and 5 of the
5073 When ``xfs_scrub`` detects UTF-8 encoding in use on a system, it uses the
5074 Unicode normalization form NFD in conjunction with the confusable name
5081 All of these potential issues are reported to the system administrator during
5087 The system administrator can elect to initiate a media scan of all file data
This scan runs after validation of all filesystem metadata (except for the summary
5091 The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space map
5094 they were data fork extents to reduce the command setup overhead.
5095 When the space map scan accumulates a region larger than 32MB, a media
5096 verification request is sent to the disk as a directio read of the raw block
If the verification read fails, ``xfs_scrub`` retries with single-block reads
to narrow the failure down to the specific region of the media, and records it.
5101 When it has finished issuing verification requests, it again uses the space
5102 mapping ioctl to map the recorded media errors back to metadata structures
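The verification read itself needs nothing more than a direct I/O read of the
raw block device.  A simplified userspace sketch follows; error reporting and
the single-block retry are omitted, and the function name is arbitrary:

.. code-block:: c

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read one region of the block device with O_DIRECT to verify the media. */
    static int
    verify_region(const char *dev, off_t start, size_t len, size_t sectorsize)
    {
            void    *buf = NULL;
            ssize_t ret = -1;
            int     fd;

            fd = open(dev, O_RDONLY | O_DIRECT);
            if (fd < 0)
                    return -1;

            /*
             * Direct I/O buffers and lengths must be aligned to the device's
             * logical sector size.
             */
            if (posix_memalign(&buf, sectorsize, len) == 0)
                    ret = pread(fd, buf, len, start);

            free(buf);
            close(fd);

            /* A short or failed read means the media could not return the data. */
            return ret == (ssize_t)len ? 0 : -1;
    }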
5110 It is hoped that the reader of this document has followed the designs laid out
5114 Although the scope of this work is daunting, it is hoped that this guide will
5117 Please feel free to contact the XFS mailing list with questions.
5122 As discussed earlier, a second frontend to the atomic extent swap mechanism is
5125 This frontend has been out for review for several years now, though the
5127 the proposal has not been pushed very hard.
5132 As mentioned earlier, XFS has long had the ability to swap extents between
5134 The earliest form of this was the fork swap mechanism, where the entire
5135 contents of data forks could be exchanged between two files by exchanging the
5138 some log support to continue rewriting the owner fields of BMBT blocks during
5140 When the reverse mapping btree was later added to XFS, the only way to maintain
5141 the consistency of the fork mappings with the reverse mapping index was to
5144 This mechanism is identical to steps 2-3 from the procedure above except for
5145 the new tracking items, because the atomic extent swap mechanism is an
5147 For the narrow case of file defragmentation, the file contents must be
5148 identical, so the recovery guarantees are not much of a gain.
5150 Atomic extent swapping is much more flexible than the existing swapext
5151 implementations because it can guarantee that the caller never sees a mix of
5154 The extra flexibility enables several new use cases:
5158 Next, it opens a temporary file and calls the file clone operation to reflink
5159 the first file's contents into the temporary file.
5160 Writes to the original file should instead be written to the temporary file.
5161 Finally, the process calls the atomic extent swap system call
5162 (``FIEXCHANGE_RANGE``) to exchange the file contents, thereby committing all
5163 of the updates to the original file, or none of them.
5167 - **Transactional file updates**: The same mechanism as above, but the caller
5168 only wants the commit to occur if the original file's contents have not
5170 To make this happen, the calling process snapshots the file modification and
5171 change timestamps of the original file before reflinking its data to the
5173 When the program is ready to commit the changes, it passes the timestamps
5174 into the kernel as arguments to the atomic extent swap system call.
5175 The kernel only commits the changes if the provided timestamps match the
5179 logical sector size matching the filesystem block size to force all writes
5180 to be aligned to the filesystem block size.
5181 Stage all writes to a temporary file, and when that is complete, call the
5182 atomic extent swap system call with a flag to indicate that holes in the
5190 As it turns out, the :ref:`refactoring <scrubrepair>` of repair items mentioned
5192 Since 2018, the cost of making a kernel call has increased considerably on some
5193 systems to mitigate the effects of speculative execution attacks.
5195 reduce the number of times an execution path crosses a security boundary.
5197 With vectorized scrub, userspace pushes to the kernel the identity of a
5199 simple representation of the data dependencies between the selected scrub
5201 The kernel executes as much of the caller's plan as it can until it hits a
5208 The relevant patchsets are the
5219 One serious shortcoming of the online fsck code is that the amount of time that
5220 it can spend in the kernel holding resource locks is basically unbounded.
5221 Userspace is allowed to send a fatal signal to the process which will cause
5223 for userspace to provide a time budget to the kernel.
5224 Given that the scrub codebase has helpers to detect fatal signals, it shouldn't
5226 operation and abort the operation if it exceeds budget.
5227 However, most repair functions have the property that once they begin to touch
5228 ondisk metadata, the operation cannot be cancelled cleanly, after which a QoS
5234 Over the years, many XFS users have requested the creation of a program to
5235 clear a portion of the physical storage underlying a filesystem so that it
5239 The first piece the ``clearspace`` program needs is the ability to read the
5241 This already exists in the form of the ``FS_IOC_GETFSMAP`` ioctl.
5242 The second piece it needs is a new fallocate mode
5243 (``FALLOC_FL_MAP_FREE_SPACE``) that allocates the free space in a region and
5245 Call this file the "space collector" file.
5246 The third piece is the ability to force an online repair.
5248 To clear all the metadata out of a portion of physical storage, clearspace
5249 uses the new fallocate map-freespace call to map any free space in that region
5250 to the space collector file.
5252 ``GETFSMAP`` and issues forced repair requests on the data structure.
5253 This often results in the metadata being rebuilt somewhere that is not being
5255 After each relocation, clearspace calls the "map free space" function again to
5256 collect any newly freed space in the region being cleared.
5258 To clear all the file data out of a portion of the physical storage, clearspace
5259 uses the FSMAP information to find relevant file data blocks.
5260 Having identified a good target, it uses the ``FICLONERANGE`` call on that part
5261 of the file to try to share the physical space with a dummy file.
5262 Cloning the extent means that the original owners cannot overwrite the
5264 Clearspace makes its own copy of the frozen extent in an area that is not being
cleared, and uses ``FIDEDUPERANGE`` (or the :ref:`atomic extent swap
5266 <swapext_if_unchanged>` feature) to change the target file's data extent
5267 mapping away from the area being cleared.
5268 When all other mappings have been moved, clearspace reflinks the space into the
5271 There are further optimizations that could apply to the above algorithm.
5275 the operation completes.
5278 With the refcount information exposed, clearspace can quickly find the longest,
5279 most shared data extents in the filesystem, and target them first.
5281 **Future Work Question**: How might the filesystem move inode chunks?
5284 that creates a new file with the old contents and then locklessly runs around
5285 the filesystem updating directory entries.
5286 The operation cannot complete if the filesystem goes down.
5288 hidden behind a jump label, and a log item that tracks the kernel walking the
5290 The trouble is, the kernel can't do anything about open files, since it cannot
5293 **Future Work Question**: Can static keys be used to minimize the cost of
5297 Until the first revocation, the bailout code need not be in the call path at
5300 The relevant patchsets are the
5311 Removing the end of the filesystem ought to be a simple matter of evacuating
5312 the data and metadata at the end of the filesystem, and handing the freed space
5313 to the shrink code.
That requires an evacuation of the space at the end of the filesystem, which is a