25 - To help kernel distributors understand exactly what the XFS online fsck
28 - To help people reading the code to familiarize themselves with the relevant
31 - To help developers maintaining the system by capturing the reasons
34 As the online fsck code is merged, the links in this document to topic branches
35 will be replaced with links to code.
43 and how it is tested to ensure correct functionality.
71 operations internal to the filesystem, such as internal consistency checking
77 to look for errors.
78 In addition to looking for obvious metadata corruptions, fsck also
79 cross-references different types of metadata records with each other to look
82 to correct any problems found.
83 As a word of caution -- the primary goal of most Linux fsck tools is to restore
84 the filesystem metadata to a consistent state, not to maximize the data
89 format, which means that fsck can only respond to errors by erasing files until
92 it is now possible to regenerate data structures when non-catastrophic errors
108 Code is posted to the kernel.org git trees as follows:
119 XFS (on Linux) to check and repair filesystems.
125 metadata, though it lacks any ability to repair what it finds.
126 Due to its high memory requirements and inability to repair things, this
129 The second program, ``xfs_repair``, was created to be faster and more robust
132 It uses extent-based in-memory data structures to reduce memory consumption,
133 and tries to schedule readahead IO appropriately to reduce I/O waiting time
135 The most important feature of this tool is its ability to respond to
137 to eliminate problems.
145 1. **User programs** suddenly **lose access** to the filesystem when unexpected
153 offline to **look for problems** proactively.
157 This may expose them to substantial billing costs when a linear media scan
160 5. **System administrators** cannot **schedule** a maintenance window to deal
161 with corruptions if they **lack the means** to assess filesystem health
168 malicious actors **exploit quirks of Unicode** to place misleading names
171 Given this definition of the problems to be solved and the actors who would
175 This new third program has three components: an in-kernel facility to check
176 metadata, an in-kernel facility to repair metadata, and a userspace driver
177 program to drive fsck activity on a live filesystem.
180 tool, describes its major design points in connection to those goals, and
187 | referred to by its current name "``xfs_repair``". |
189 | referred to as "``xfs_scrub``". |
197 Sharding enables better performance on highly parallel systems and helps to
200 inodes) means that there are ample opportunities to perform targeted checks and
209 metadata to enable targeted checking and repair operations while the system
211 This capability will be coupled to automatic system management so that
217 Because it is necessary for online fsck to lock and scan live metadata objects,
221 reacting to the outcomes appropriately, and reporting results to the system
223 The second and third are in the kernel, which implements functions to check
230 | item" to "scrub item". |
234 philosophy, which is to say that each item should handle one aspect of a
240 In principle, online fsck should be able to check and to repair everything that
244 If these errors cause the next mount to fail, offline fsck is the only
249 This means that scrub cannot take *any* shortcuts to save time, because doing
250 so could lead to concurrency problems.
253 However, both of these limitations are acceptable tradeoffs to satisfy the
254 different motivations of online fsck, which are to **minimize system downtime**
255 and to **increase predictability of operation**.
276 is permitted to perform repairs, then those scrub items are repaired to
278 Repairs are implemented by using the information in the scrub item to
281 Optimizations and all other repairs are deferred to phase 4.
285 If repairs are needed and ``xfs_scrub`` is permitted to perform repairs,
288 Optimizations, deferred repairs, and unsuccessful repairs are deferred to
295 reservation step due to wildly incorrect summary counters.
311 The ability to use hardware-assisted data file integrity checking is new
312 to online fsck; neither of the previous tools has this capability.
313 If media errors occur, they will be mapped to the owning files and reported.
331 released and the positive scan results are returned to userspace.
333 this, resources are released and the negative scan results are returned to
335 Otherwise, the kernel moves on to the second step.
337 2. The repair function is called to rebuild the data structure.
339 rather than try to salvage the existing structure.
340 If the repair fails, the scan results from the first step are returned to
342 Otherwise, the kernel moves on to the third step.
345 item to assess the efficacy of the repairs.
346 The results of the reassessment are returned to userspace.
357 Metadata structures in this category should be most familiar to filesystem
379 Primary metadata objects are the simplest for scrub to process.
381 owns the item being scrubbed is locked to guard against concurrent updates.
383 errors and cross-references healthy records against other metadata to look for
387 The repair function scans available metadata as needed to record all the
388 observations needed to complete the structure.
390 atomically to complete the repair.
396 This minimizes the complexity of the repair code because it is not necessary to
397 handle concurrent updates from other threads, nor is it necessary to access
400 trying to access the damaged structure will be blocked until repairs complete.
402 observations and a means to write new structures to disk.
413 in-memory array prior to formatting the new ondisk structure, which is very
414 similar to the list-based algorithm discussed in section 2.3 ("List-Based
433 This class of metadata is difficult for scrub to process because scrub attaches
434 to the secondary object but needs to check primary metadata, which runs counter
435 to the usual order of resource acquisition.
436 Frequently, this means that full filesystem scans are necessary to rebuild the
438 Check functions can be limited in scope to reduce runtime.
440 long time to complete.
444 Instead, repair functions set up an in-memory staging structure to store
448 specific to that repair function.
449 The next step is to release all locks and start the filesystem scan.
450 When the repair scanner needs to record an observation, the staging data are
451 locked long enough to apply the update.
453 filesystem so that it can apply pending filesystem updates to the staging
455 Once the scan is done, the owning object is re-locked, the live data is used to
461 comes at a high cost to code complexity.
462 Live filesystem code has to be hooked so that the repair function can observe
464 The staging area has to become a fully functional parallel structure so that
468 should be applied to the staging structure.
473 Programs attempting to access the damaged structures are not blocked from
483 The sidecar index mentioned above bears some resemblance to the side file
485 Their method consists of an index builder that extracts relevant record data to
487 captures all updates that would be committed to the index by other threads were
490 are applied to the new index.
491 To avoid conflicts between the index builder and other writer threads, the
494 To avoid duplication of work between the side file and the index builder, side
498 To minimize changes to the rest of the codebase, XFS online repair keeps the
499 replacement index hidden until it's completely ready to go.
500 In other words, there is no attempt to expose the keyspace of the new index
503 appropriate to building *new* indices.
505 **Future Work Question**: Can the full scan and live update code used to
506 facilitate a repair also be used to implement a comprehensive check?
509 employed these live scans to build a shadow copy of the metadata and then
510 compared the shadow records to the ondisk records.
521 These are often used to speed up resource usage queries, and are many times
535 The superblock summary counters have special requirements due to the underlying
550 quotacheck can use the incremental view deltas described in section 2.14 to
551 track pending changes to the block and inode usage counts in each transaction,
552 and commit those changes to a dquot side file when the transaction commits.
556 it sets attributes of the objects being scanned instead of writing them to a
566 Steps can be taken to mitigate or eliminate those risks, though at a cost to
569 - **Decreased performance**: Adding metadata indices to the filesystem
570 increases the time cost of persisting changes to disk, and the reverse space
574 reduces the ability of online fsck to find inconsistencies and repair them.
577 software that result in incorrect repairs being written to the filesystem.
579 authors to find bugs early, but it might not catch everything.
581 and ``CONFIG_XFS_ONLINE_REPAIR``) to enable distributors to choose not to
587 - **Inability to repair**: Sometimes, a filesystem is too badly damaged to be
592 To reduce the chance that a repair will fail with a dirty transaction and
594 designed to stage and validate all new records before committing the new
597 - **Misbehavior**: Online fsck requires many privileges -- raw IO to block
599 and the ability to perform administrative changes.
601 background service is configured to run with only the privileges required.
603 deadlocking, but it should be sufficient to prevent the scrub process from
607 - **Fuzz Kiddiez**: There are many people now who seem to think that running
608 automated fuzz testing of ondisk artifacts to find mischievous behavior and
612 operators help to **fix** the flaws, but this opinion apparently is not
614 The XFS maintainers' continuing ability to manage these events presents an
615 ongoing risk to the stability of the development process.
619 Many of these risks are inherent to software programming.
634 Demonstrations of correct operation are necessary to build users' confidence
636 Unfortunately, it was not really feasible to perform regular exhaustive testing
647 The primary goal of any free software QA effort is to make testing as
648 inexpensive and widespread as possible to maximize the scaling advantages of
652 This improves code quality by enabling the authors of online fsck to find and
653 fix bugs early, and helps developers of new features to find integration
664 During development of the online checking code, fstests was modified to run
665 ``xfs_scrub -n`` between each test to ensure that the new checking code
668 To start development of online repair, fstests was modified to run
669 ``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
673 To complete the first phase of development of online repair, fstests was
674 modified to be able to run ``xfs_scrub`` in a "force rebuild" mode.
675 This enables a comparison of the effectiveness of online repair as compared to
684 to test the rather common fault that entire metadata blocks get corrupted.
687 Next, individual test cases were created to create a test filesystem, identify
692 This earlier test suite enabled XFS developers to test the ability of the
693 in-kernel validation functions and the ability of the offline fsck tool to
695 This part of the test suite was extended to cover online fsck in exactly the
702 * Write garbage to it
706 1. The kernel verifiers to stop obviously bad metadata
707 2. Offline repair (``xfs_repair``) to detect and fix
708 3. Online repair (``xfs_scrub``) to detect and fix
714 infrastructure to provide a much more powerful facility: targeted fuzz testing
717 block in the filesystem to simulate the effects of memory corruption and
719 Given that fstests already contains the ability to create a filesystem
720 containing every metadata format known to the filesystem, ``xfs_db`` can be
721 used to perform exhaustive fuzz testing!
731 * For each conceivable type of transformation that can be applied to a bit field...
744 1. The kernel verifiers to stop obviously bad metadata
753 Fortunately, having this much test coverage makes it easy for XFS developers to
756 used to discover incorrect repair code and missing functionality for entire
758 The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
763 allow the online fsck developers to compare online fsck against offline fsck,
764 and they enable XFS developers to find deficiencies in the code base.
777 A unique requirement to online fsck is the ability to operate on a filesystem
779 Although it is of course impossible to run ``xfs_scrub`` with *zero* observable
783 To verify that these conditions are being met, fstests has been enhanced in
786 * For each scrub item type, create a test to exercise checking that item type
788 * For each scrub item type, create a test to exercise repairing that item type
790 * Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
792 * Race ``fsstress`` and ``xfs_scrub`` in force-rebuild mode to ensure that
800 Success is defined by the ability to run all of these tests without observing
801 any unexpected filesystem shutdowns due to corrupted metadata, kernel hang
814 Online fsck presents two modes of operation to administrators:
825 administrator waits for the results to be reported, just like the existing
827 Both tools share a ``-n`` option to perform a read-only scan, and a ``-v``
828 option to increase the verbosity of the information reported.
831 correction capabilities of the hardware to check data file contents.
839 It serializes scans for any filesystems that resolve to the same top level
840 kernel block device to prevent resource overconsumption.
845 To reduce the workload of system administrators, the ``xfs_scrub`` package
848 The background service configures scrub to run with as little privilege as
851 This can be tuned by the systemd administrator at any time to suit the latency
855 If desired, reports of failures (either due to inconsistencies or mere runtime
863 The decision to enable the background scan is left to the system administrator.
869 This automatic weekly scan is configured out of the box to perform an
876 The systemd unit file definitions have been subjected to a security audit
877 (as of systemd 249) to ensure that the xfs_scrub processes have as little
878 access to the rest of the system as possible.
880 were restricted to the minimum required, sandboxing was set up to the maximal
881 extent possible with system call filtering; and access to the
882 filesystem tree was restricted to the minimum needed to start the program and
884 The service definition files restrict CPU usage to 80% of one CPU core, and
885 apply as nice of a priority to IO and CPU scheduling as possible.
886 This measure was taken to minimize delays in the rest of the filesystem.
900 System administrators should use the ``health`` command of ``xfs_spaceman`` to
903 service window to run the online repair tool to correct the problem.
904 Failing that, the administrator can decide to schedule a maintenance window to
905 run the traditional offline repair tool to correct the problem.
909 Would it be helpful for sysadmins to have a daemon to listen for corruption
916 `wiring up health reports to correction returns
926 code that provide the ability to check and repair metadata while the system
937 ondisk block header to record a magic number, a checksum, a universally
943 supposed to be found at the ondisk address.
944 The first three components enable checking tools to disregard alleged metadata
945 that doesn't belong to the filesystem, and the fourth component enables the
946 filesystem to detect lost writes.
949 to the log as part of a transaction.
951 safely persisted to storage.
956 Sequence number tracking enables log recovery to avoid applying out of date
957 log updates to the filesystem.
960 the filesystem to detect obvious corruption when reading metadata blocks from
974 For performance reasons, filesystem authors were reluctant to add redundancy to
976 Filesystem designers in the early 21st century chose different strategies to
980 For XFS, a different redundancy strategy was chosen to modernize the design:
981 a secondary space usage index that maps allocated disk extents back to their
983 By adding a new index, the filesystem retains most of its ability to scale
984 well to heavily threaded workloads involving large datasets, since the primary
989 However, it has two critical advantages: first, the reverse index is key to
999 | A criticism of adding the secondary index is that it does nothing to |
1008 | mirroring to XFS itself. |
1009 | Perfection of RAID and volume management are best left to existing |
1029 For space allocated to files, the offset field tells scrub where the space was
1040 Program runtime and ease of resource acquisition are the only real limits to
1050 There are several observations to make about reverse mapping indices:
1054 The checking code for most primary metadata follows a path similar to the
1061 btree block requires locking the file and searching the entire btree to
1070 the AGF buffer lock but scrub wants to take a file ILOCK while holding
1077 The details of how these records are staged, written to disk, and committed
1083 The first step of checking a metadata structure is to examine every record
1086 XFS contains multiple layers of checking to try to prevent inconsistent
1088 Each of these layers contributes information that helps the kernel to make
1096 - Can the structure be optimized to improve performance or reduce the size of
1111 - Does the block belong to this filesystem?
1113 - Does the block belong to the structure that asked for the read?
1131 Every online fsck scrubbing function is expected to read every ondisk metadata
1133 Corruption problems observed during a check are immediately reported to
1135 failure to cross-reference once the full examination is complete.
1147 The scope of checking is still internal to the block.
1152 - Does the block belong to the owning structure that asked for the read?
1162 For example, block pointers and inumbers are checked to ensure that they point
1171 debugging is enabled or a write is about to occur.
1201 For regular runtime code, the cost of these checks is considered to be
1202 prohibitively expensive, but as scrub is dedicated to rooting out
1207 The XFS btree code has keyspace scanning functions that online fsck uses to
1209 Specifically, scrub can scan the key space of an index to determine if that
1210 keyspace is fully, sparsely, or not at all mapped to records.
1211 For the reverse mapping btree, it is possible to mask parts of the key for the
1220 - Does the block belong to the owning structure that asked for the read?
1228 - Do node pointers within the btree point to valid block addresses for the type
1258 - Do the sibling pointers point to valid blocks? Of the same level?
1260 - Do the child pointers point to valid blocks? Of the next level down?
1292 - Does each inode with zero link count correspond to a record in the free
1301 - If this is a CoW fork mapping, does it correspond to a CoW entry in the
1308 - Within the space subkeyspace of the rmap btree (that is to say, all
1309 records mapped to a particular space extent and ignoring the owner info),
1313 Proposed patchsets are the series to find gaps in
1320 to find
1323 and to
1332 to be attached to any file.
1333 Both the kernel and userspace can access the keys and values, subject to
1342 The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
1345 Leaf blocks contain attribute key records that point to the name and the value.
1349 Remote value blocks contain values that are too large to fit inside a leaf.
1351 rooted at block 0) is created to map hashes of the attribute names to leaf
1354 Checking an extended attribute structure is not so straightforward due to the
1359 1. Walk the dabtree in the attr fork (if present) to ensure that there are no
1360 irregularities in the blocks or dabtree mappings that do not point to
1369 This performs a named lookup of the attr name to ensure the correctness
1380 255-byte sequence (name) to an inumber.
1382 Each directory file must have exactly one directory pointing to the file.
1383 A root directory points to itself.
1384 Directory entries point to files of any type.
1385 Each non-directory file may have multiple directories pointing to it.
1387 In XFS, directories are implemented as a file containing up to three 32GB
1394 information and an index that maps hashes of the dirent names to directory data
1401 beyond one block, then a dabtree is used to map hashes of dirent names to
1406 1. Walk the dabtree in the second partition (if present) to ensure that there
1407 are no irregularities in the blocks or dabtree mappings that do not point to
1415 b. Does the inumber correspond to an actual, allocated inode?
1423 back to the parent?
1426 dirent name to ensure the correctness of the dabtree.
1428 3. Walk the free space list in the third partition (if present) to ensure that
1439 maps user-provided names to improve lookup times by avoiding linear scans.
1440 Internally, it maps a 32-bit hash of the name to a block offset within the
1446 The format of leaf and node records is the same -- each entry points to the
1447 next level down in the hierarchy, with dabtree node records pointing to dabtree
1448 leaf blocks, and dabtree leaf records pointing to non-dabtree blocks elsewhere
1451 Checking and cross-referencing the dabtree is very similar to what is done for
1456 - Does the block belong to the owning structure that asked for the read?
1464 - Do node pointers within the dabtree point to valid fork offsets for dabtree
1467 - Do leaf pointers within the dabtree point to valid fork offsets for directory
1496 checking are sufficiently complicated to warrant separate sections.
1501 After performing a repair, the checking code is run a second time to validate
1503 internally and returned to the calling process.
1504 This step is critical for enabling system administrators to monitor the status
1506 For developers, it is a useful means to judge the efficacy of error detection
1512 Complex operations can make modifications to multiple per-AG data structures
1514 These chains, once committed to the log, are restarted during log recovery if
1517 online checking must coordinate with chained operations that are in progress to
1518 avoid incorrectly detecting inconsistencies due to pending chains.
1524 should be relatively rare as compared to filesystem change operations.
1528 The count should be bumped whenever a new item is added to the chain.
1532 * When online fsck wants to examine an AG, it should lock the AG header
1533 buffers to quiesce all transaction chains that want to modify that AG.
1535 If it is nonzero, cycle the buffer locks to allow the chain to make forward
1538 This may lead to online fsck taking a long time to complete, but regular
1557 Originally, transaction chains were added to XFS to avoid deadlocks when
1560 which makes it impossible (say) to use a single transaction to free a space
1561 extent in AG 7 and then try to free a now superfluous block mapping btree block
1563 To avoid these kinds of deadlocks, XFS creates Extent Freeing Intent (EFI) log
1564 items to commit to freeing some space in one transaction while deferring the
1565 actual metadata updates to a fresh transaction.
1568 1. The first transaction contains a physical update to the file's block mapping
1569 structures to remove the mapping from the btree blocks.
1570 It then attaches to the in-memory transaction an action item to schedule
1575 Returning to the example above, the action item tracks the freeing of both
1580 attaching the log item to the transaction.
1581 When the log is persisted to disk, the EFI item is written into the ondisk
1583 EFIs can list up to 16 extents to free, all sorted in AG order.
1585 2. The second transaction contains a physical update to the free space btrees
1586 of AG 3 to release the former BMBT block and a second physical update to the
1587 free space btrees of AG 7 to release the unmapped file space.
1590 Attached to the transaction is an extent free done (EFD) log item.
1591 The EFD contains a pointer to the EFI logged in transaction #1 so that log
1592 recovery can tell if the EFI needs to be replayed.
1594 If the system goes down after transaction #1 is written back to the filesystem
1596 inconsistent filesystem metadata because there would not appear to be any owner
1602 EFI to complete the recovery phase.
1604 There are subtleties to XFS' transaction chaining strategy to consider:
1606 * Log items must be added to a transaction in the correct order to prevent
1609 completed before the last update to free the extent, and extents should not
1610 be reallocated until that last update commits to the log.
1617 * Unmounting the filesystem flushes all pending work to disk, which means that
1621 In this manner, XFS employs a form of eventual consistency to avoid deadlocks
1625 decided that it was impractical to cram all the reverse mapping updates for a
1634 * A shape change to the block mapping btree
1639 * An update to the reference counting information
1654 For copy-on-write updates this is even worse, because this must be done once to
1655 remove the space from a staging area and again to map it into the file!
1657 To deal with this explosion in a calm manner, XFS expands its use of deferred
1658 work items to cover most reverse mapping updates and all refcount updates.
1663 items carefully to avoid resource reuse conflicts between unsuspecting threads.
1666 updates to per-AG structures are coordinated by locking the buffers for AG
1674 will appear inconsistent to scrub and an observation of corruption will be
1678 Several other solutions to this problem were evaluated upon discovery of this
1681 1. Add a higher level lock to allocation groups and require writer threads to
1683 This would be very difficult to implement in practice because it is
1684 difficult to determine which locks need to be obtained, and in what order,
1686 Performing a dry run of a file operation to discover necessary locks would
1694 It would also fail to solve the problem because deferred work items can
1698 3. Teach online fsck to walk all transactions waiting for whichever lock(s)
1699 protect the data structure being scrubbed to look for pending operations.
1702 This solution is a nonstarter because it is *extremely* invasive to the main
1710 Online fsck uses an atomic intent item counter and lock cycling to coordinate
1712 There are two key properties to the drain mechanism.
1713 First, the counter is incremented when a deferred work item is *queued* to a
1715 *committed* to another transaction.
1716 The second property is that deferred work can be added to a transaction without
1718 locking that AG header buffer to log the physical updates and the intent done
1720 The first property enables scrub to yield to running transaction chains, which
1721 is an explicit deprioritization of online fsck to benefit file operations.
1722 The second property of the drain is key to the correct coordination of scrub,
1723 since scrub will always be able to decide if a conflict is possible.
1727 1. Call the appropriate subsystem function to add a deferred work item to a
1730 2. The function calls ``xfs_defer_drain_bump`` to increase the counter.
1732 3. When the deferred item manager wants to finish the deferred work item, it
1733 calls ``->finish_item`` to complete it.
1736 ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any threads
1753 4. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``), then go
1754 back to step 1 unless a signal has been caught.
1756 To avoid polling in step 4, the drain provides a waitqueue for scrub threads to
1757 be woken up whenever the intent count drops to zero.
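
To make the drain's two properties concrete, here is a minimal sketch of the
counter half of the mechanism, assuming simplified signatures; the real
helpers take XFS-specific structures, and waitqueue initialization is
omitted::

  /* Sketch only: the counter/waitqueue pairing described above. */
  #include <linux/atomic.h>
  #include <linux/wait.h>

  struct xfs_defer_drain {
          atomic_t                dr_count;   /* intents pending on this AG */
          struct wait_queue_head  dr_waiters; /* scrub threads park here */
  };

  /* Writer side: a deferred work item was queued to a transaction. */
  static void xfs_defer_drain_bump(struct xfs_defer_drain *dr)
  {
          atomic_inc(&dr->dr_count);
  }

  /* Writer side: the intent-done item has been committed. */
  static void xfs_defer_drain_drop(struct xfs_defer_drain *dr)
  {
          if (atomic_dec_and_test(&dr->dr_count))
                  wake_up(&dr->dr_waiters);
  }

  /* Scrub side: sleep until no chain holds an intent on this AG. */
  static int xfs_defer_drain_wait(struct xfs_defer_drain *dr)
  {
          return wait_event_killable(dr->dr_waiters,
                          atomic_read(&dr->dr_count) == 0);
  }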
1771 later, live update hooks) where it is useful for the online fsck code to know
1774 background, it is very important to minimize the runtime overhead imposed by
1777 Taking locks in the hot path of a writer thread to access a data structure only
1778 to find that no further action is necessary is expensive -- on the author's
1780 Fortunately, the kernel supports dynamic code patching, which enables XFS to
1781 replace a static branch to hook code with ``nop`` sleds when online fsck isn't
1783 This sled has an overhead of however long it takes the instruction decoder to
1784 skip past the sled, which seems to be on the order of less than 1ns and
1788 unconditional branch to call the hook code.
1795 to change a static key while holding any locks or resources that could be
1797 To minimize contention on the CPU hotplug lock, care should be taken not to
1800 Because static keys are intended to minimize hook overhead for regular
1805 defaults to false.
1809 - When deciding to invoke code that's only used by scrub, the regular
1810 filesystem should call the ``static_branch_unlikely`` predicate to avoid the
1814 ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
1816 Wrapper functions make it easy to compile out the relevant code if the kernel
1819 - Scrub functions wanting to turn on scrub-only XFS functionality should call
1820 the ``xchk_fsgates_enable`` from the setup function to enable a specific
1827 Online scrub has resource acquisition helpers (e.g. ``xchk_perag_lock``) to
1830 try to wait for intents to complete.
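
The following sketch shows the general pattern using the kernel's generic
static key API; the key name, the hook argument type, and the slow path
helper are illustrative rather than taken from the XFS code::

  #include <linux/jump_label.h>

  struct xexample_update;                         /* hypothetical payload */
  void xexample_hook_slowpath(struct xexample_update *arg);

  /* Defaults to false, so the branch compiles down to a nop sled. */
  static DEFINE_STATIC_KEY_FALSE(xexample_hooks_switch);

  /* Hot path in a writer thread: nearly free while scrub is idle. */
  static inline void xexample_hook(struct xexample_update *arg)
  {
          if (static_branch_unlikely(&xexample_hooks_switch))
                  xexample_hook_slowpath(arg);
  }

  /* Called from scrub setup/teardown, never with locks held. */
  void xexample_hooks_enable(void)
  {
          static_branch_inc(&xexample_hooks_switch);
  }

  void xexample_hooks_disable(void)
  {
          static_branch_dec(&xexample_hooks_switch);
  }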
1846 Some online checking functions work by scanning the filesystem to build a
1849 For online repair to rebuild a metadata structure, it must compute the record
1851 structure to disk.
1854 To meet these goals, the kernel needs to collect a large amount of information
1859 * Allocating a contiguous region of memory to create a C array is very
1867 * The system might not have sufficient memory to stage all the information.
1869 At any given time, online fsck does not need to keep the entire record set in
1871 Continued development of online fsck demonstrated that the ability to perform
1876 to store intermediate data that doesn't need to be in memory at all times, so
1911 To support the first four use cases, high level data structures wrap the xfile
1912 to share functionality between online fsck functions.
1913 The rest of this section discusses the interfaces that the xfile presents to
1921 which behave similarly to their userspace counterparts.
1922 XFS is very record-based, which suggests that the ability to load and store
1924 To support these cases, a pair of ``xfile_obj_load`` and ``xfile_obj_store``
1925 functions are provided to read and persist objects into an xfile.
1929 behavior because the only reaction is to abort the operation back to userspace.
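
As a hypothetical example of the load/store interface, a scrub function could
stash one fixed-size observation record per AG at a computed byte offset; the
signatures shown below (buffer, length, byte offset) are assumptions for
illustration::

  /* Hypothetical per-AG observation record staged in an xfile. */
  struct xexample_agrec {
          uint64_t        free_blocks;
          uint64_t        free_inodes;
  };

  static int xexample_store_agrec(struct xfile *xf, uint32_t agno,
                  const struct xexample_agrec *rec)
  {
          loff_t  pos = (loff_t)agno * sizeof(*rec);

          /* Assumed signature: (xfile, buffer, length, byte offset). */
          return xfile_obj_store(xf, rec, sizeof(*rec), pos);
  }

  static int xexample_load_agrec(struct xfile *xf, uint32_t agno,
                  struct xexample_agrec *rec)
  {
          loff_t  pos = (loff_t)agno * sizeof(*rec);

          return xfile_obj_load(xf, rec, sizeof(*rec), pos);
  }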
1934 It is convenient to access storage directly with pointers, just like userspace
1937 xfiles must be responsive to memory reclamation.
1938 tmpfs can only push a pagecache folio to the swap cache if the folio is neither
1941 Short term direct access to xfile contents is done by locking the pagecache
1944 Folio locks are not supposed to be held for long periods of time, so long
1945 term direct access to xfile contents is done by bumping the folio refcount,
1947 These long term users *must* be responsive to memory reclaim by hooking into
1948 the shrinker infrastructure to know when to release folios.
1950 The ``xfile_get_page`` and ``xfile_put_page`` functions are provided to
1951 retrieve the (locked) folio that backs part of an xfile and to release it.
1952 The only code to use these folio lease functions are the xfarray
1960 They are marked ``S_PRIVATE`` to prevent interference from the security system,
1964 To avoid locking recursion issues with the VFS, all accesses to the shmfs file
1967 xfile's address space to grab writable pages, copy the caller's buffer into the
1969 xfile readers call ``shmem_read_mapping_page_gfp`` to grab pages directly
1971 In other words, xfiles ignore the VFS read and write code paths to avoid
1972 having to create a dummy ``struct kiocb`` and to avoid taking inode and
1974 tmpfs cannot be frozen, and xfiles must not be exposed to userspace.
1976 If an xfile is shared between threads to stage repairs, the caller must provide
1977 its own locks to coordinate access.
1979 other threads to provide updates to the scanned data, the scrub function must
1980 provide a lock for all threads to share.
1990 Directories have a set of fixed-size dirent records that point to the names,
1991 and extended attributes have a set of fixed-size attribute keys that point to
1994 During a repair, scrub needs to stage new records during the gathering step and
1998 methods of the xfile directly, it is simpler for callers to have a
1999 higher level abstraction to take care of computing array offsets, to provide
2000 iterator functions, and to deal with sparse records and sorting.
2009 Array access patterns in online fsck tend to fall into three categories.
2010 Iteration of records is assumed to be necessary for all cases and will be
2018 Access to array elements is performed programmatically via ``xfarray_load`` and
2019 ``xfarray_store`` functions, which wrap the similarly-named xfile functions to
2021 Gaps are defined to be null records, and null records are defined to be a
2024 They are created either by calling ``xfarray_unset`` to null out an existing
2025 record or by never storing anything to an array index.
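
A short sketch of the append-then-load pattern, using the xfarray calls named
in this section; the record layout is hypothetical, error handling is trimmed,
and the exact signatures of the creation and teardown calls are assumptions::

  /* Hypothetical staging record. */
  struct xexample_rec {
          uint64_t        startblock;
          uint32_t        blockcount;
          uint32_t        state;
  };

  static int xexample_stage_records(struct xfarray **arrayp)
  {
          struct xexample_rec     rec = { .startblock = 100, .blockcount = 8 };
          struct xfarray          *array;
          int                     error;

          /* Assumed signature: description, minimum capacity, object size. */
          error = xfarray_create("example records", 0, sizeof(rec), &array);
          if (error)
                  return error;

          /* Gather step: append records as the scan finds them. */
          error = xfarray_append(array, &rec);
          if (error) {
                  xfarray_destroy(array);
                  return error;
          }

          *arrayp = array;
          return 0;
  }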
2028 and do not require multiple updates to a record.
2030 These callers can add records to the array without caring about array indices
2033 For callers that require records to be presentable in a specific order (e.g.
2041 at any time, and uniqueness of records is left to callers.
2042 The ``xfarray_store_anywhere`` function is used to insert a record in any
2053 Most users of the xfarray require the ability to iterate the records stored in
2066 All users of this idiom must be prepared to handle null records or must already
2069 For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
2070 function ignores indices in the xfarray that have never been written to by
2071 calling ``xfile_seek_data`` (which internally uses ``SEEK_DATA``) to skip areas
2088 that for performance reasons, online repair ought to load batches of records
2093 set prior to bulk loading.
2103 To sort records in a reasonably short amount of time, ``xfarray`` takes
2105 heapsort to hedge against performance collapse if the chosen quicksort pivots
2119 In other words, ``xfarray`` uses heapsort to constrain the nested recursion of
2123 A good pivot splits the set to sort in half, leading to the divide and conquer
2124 behavior that is crucial to O(n * lg(n)) performance.
2125 A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
2127 The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
2128 records into a memory buffer and using the kernel heapsort to identify the
2131 Most modern quicksort implementations employ Tukey's "ninther" to select a
2134 of the triads, and then sort the middle value of each triad to determine the
2137 It turned out to be much more performant to read the nine elements into a
2141 low-effort robust (resistant) location in large samples`, in *Contributions to
2146 subset around the pivot, then set up the current and next stack frames to
2148 This keeps the stack space requirements to log2(record count).
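
Sketched below is the sampling approach described above: read nine widely
spaced records into a small buffer, run the kernel heapsort (``sort()``) over
them, and take the median as the pivot. The record type and helper names are
hypothetical, and the subset is assumed to contain at least nine records::

  #include <linux/sort.h>

  struct xexample_sortrec {
          uint64_t        key;
          uint64_t        value;
  };

  static int xexample_cmp(const void *a, const void *b)
  {
          const struct xexample_sortrec *ra = a;
          const struct xexample_sortrec *rb = b;

          if (ra->key < rb->key)
                  return -1;
          if (ra->key > rb->key)
                  return 1;
          return 0;
  }

  /* Pick a pivot for the subset [lo, hi] of the xfarray being sorted. */
  static int xexample_pick_pivot(struct xfarray *array, uint64_t lo,
                  uint64_t hi, struct xexample_sortrec *pivot)
  {
          struct xexample_sortrec samples[9];
          uint64_t                step = (hi - lo) / 8;
          int                     i, error;

          for (i = 0; i < 9; i++) {
                  error = xfarray_load(array, lo + i * step, &samples[i]);
                  if (error)
                          return error;
          }

          /* sort() is the kernel's heapsort; the middle sample wins. */
          sort(samples, 9, sizeof(samples[0]), xexample_cmp, NULL);
          *pivot = samples[4];
          return 0;
  }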
2151 keeps examined xfile pages mapped in the kernel for as long as possible to
2163 Each directory entry record needs to store entry name,
2164 and each extended attribute needs to store both the attribute name and value.
2166 ``xfblob`` abstraction was created to simplify management of these blobs
2169 Blob arrays provide ``xfblob_load`` and ``xfblob_store`` functions to retrieve
2172 Later, callers provide this cookie to ``xfblob_load`` to recall the object.
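
A sketch of the cookie round trip might look like the following, pairing a
fixed-size directory entry record with its variable-length name; the exact
signatures of the blob calls are assumptions::

  /* Hypothetical fixed-size record that refers to a staged name blob. */
  struct xexample_dirent {
          uint64_t        ino;
          xfblob_cookie   name_cookie;    /* handed back by xfblob_store */
          uint8_t         namelen;
  };

  static int xexample_stash_dirent(struct xfblob *blobs,
                  struct xexample_dirent *rec, const unsigned char *name,
                  uint8_t namelen)
  {
          rec->namelen = namelen;
          /* Store the name bytes; the cookie is how we find them again. */
          return xfblob_store(blobs, &rec->name_cookie, name, namelen);
  }

  static int xexample_recall_name(struct xfblob *blobs,
                  const struct xexample_dirent *rec, unsigned char *namebuf)
  {
          return xfblob_load(blobs, rec->name_cookie, namebuf, rec->namelen);
  }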
2179 to cache a small number of entries before adding them to a temporary ondisk
2195 Keeping the scan data up to date requires the ability to propagate
2198 applying them before writing the new metadata to disk, but this leads to
2200 Another option is to skip the side-log and commit live updates from the
2204 access to perform well.
2211 Conveniently, however, XFS has a library to create and maintain ordered reverse
2213 If only there was a means to create one in memory.
2218 The XFS buffer cache specializes in abstracting IO to block-oriented address
2219 spaces, which means that adaptation of the buffer cache to interface with
2232 Two modifications are necessary to support xfiles as a buffer cache target.
2233 The first is to make it possible for the ``struct xfs_buftarg`` structure to
2236 The second change is to modify the buffer ``ioapply`` function to "read" cached
2237 pages from the xfile and "write" cached pages back to the xfile.
2238 Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
2244 updates to an in-memory btree.
2261 To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
2265 To free an xfbtree block, use ``xfile_discard`` (which internally uses
2266 ``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
2271 An online fsck function that wants to create an xfbtree should proceed as follows (a rough sketch in C appears after the list):
2274 1. Call ``xfile_create`` to create an xfile.
2276 2. Call ``xfs_alloc_memory_buftarg`` to create a buffer cache target structure
2277 pointing to the xfile.
2279 3. Pass the buffer cache target, buffer ops, and other information to
2280 ``xfbtree_create`` to write an initial tree header and root block to the
2282 Each btree type should define a wrapper that passes necessary arguments to
2284 For example, rmap btrees define ``xfs_rmapbt_mem_create`` to take care of
2288 4. Pass the xfbtree object to the btree cursor creation function for the
2293 5. Pass the btree cursor to the regular btree functions to make queries against
2294 and to update the in-memory btree.
2295 For example, a btree cursor for an rmap xfbtree can be passed to the
2298 xfbtree updates that are logged to a transaction.
2301 buffer target, and then destroy the xfile to release all resources.
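
A rough sketch of the setup sequence above for an rmap-flavored xfbtree; the
function names come from the list, but the exact signatures, the cursor
helper, and the teardown calls are simplified assumptions::

  struct xexample_xfbtree {
          struct xfile            *xfile;
          struct xfs_buftarg      *btp;
          struct xfbtree          xfbt;
          struct xfs_btree_cur    *cur;
  };

  static int xexample_setup_rmap_xfbtree(struct xfs_mount *mp,
                  struct xexample_xfbtree *x)
  {
          int     error;

          /* Step 1: create the xfile that will back the btree blocks. */
          error = xfile_create("in-memory rmap btree", 0, &x->xfile);
          if (error)
                  return error;

          /* Step 2: wrap the xfile in a buffer cache target. */
          error = xfs_alloc_memory_buftarg(mp, x->xfile, &x->btp);
          if (error)
                  goto out_xfile;

          /* Steps 3-4: write the initial btree and get a cursor for it. */
          error = xfs_rmapbt_mem_create(mp, x->btp, &x->xfbt);
          if (error)
                  goto out_buftarg;
          x->cur = xfs_rmapbt_mem_cursor(mp, NULL, &x->xfbt);
          return 0;

  out_buftarg:
          xfs_free_buftarg(x->btp);
  out_xfile:
          xfile_destroy(x->xfile);
          return error;
  }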
2308 Although it is a clever hack to reuse the rmap btree code to handle the staging
2330 4. Queue the buffer to a special delwri list.
2335 6. Submit the delwri list to commit the changes to the xfile, if the updates
2347 the incore records to be sorted prior to commit, but was very slow and leaked
2357 To prepare for online fsck, each of the four bulk loaders were studied, notes
2365 The zeroth step of bulk loading is to assemble the entire record set that will
2367 Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
2383 The next variable to determine is the desired loading factor.
2386 Choosing maxrecs is also undesirable because adding a single record to each
2389 The default loading factor was chosen to be 75% of maxrecs, which provides a
2394 If space is tight, the loading factor will be set to maxrecs to try to avoid
2406 Once that's done, the number of leaf blocks required to store the record set
2411 The number of node blocks needed to point to the next level down in the tree
2427 summation of the number of blocks on each level, and the inode fork points to
2434 This only becomes relevant when non-bmap btrees gain the ability to root in
2445 To improve crash resilience, the reservation code also logs an Extent Freeing
2447 its in-memory ``struct xfs_extent_free_item`` object to the space reservation.
2448 If the system goes down, log recovery will use the unfinished EFIs to free the
2452 extent, it updates the in-memory reservation to reflect the claimed space.
2453 Block reservation tries to allocate as much contiguous space as possible to
2460 To avoid livelocking the filesystem, the EFIs must not pin the tail of the log
2462 To alleviate this problem, the dynamic relogging capability of the deferred ops
2463 mechanism is reused here to commit a transaction at the log head containing an
2465 This enables the log to release the old EFI to keep the log moving forwards.
2467 EFIs have a role to play during the commit and reaping phases; please see the
2483 rest of the block with records, and adds the new leaf block to a list of
2491 Sibling pointers are set every time a new block is added to the level::
2498 When it finishes writing the record leaf blocks, it moves on to the node
2500 To fill a node block, it walks each block in the next level down in the tree
2501 to compute the relevant keys and write them into the parent node::
2513 When it reaches the root level, it is ready to commit the new btree!::
2530 The first step to commit the new btree is to persist the btree blocks to disk
2533 in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
2535 to disk.
2539 Once the new blocks have been persisted to disk, control returns to the
2550 by the btree builder. The new EFDs must point to the EFIs attached to
2551 the reservation to prevent log recovery from freeing the new blocks.
2554 extent free work item to free the unused space later in the
2559 If the btree loading code suspects this might be about to happen, it must
2560 call ``xrep_defer_finish`` to clear out the deferred work and obtain a
2563 3. Clear out the deferred work a second time to finish the commit and clean
2572 Repair moves on to reaping the old blocks, which will be presented in a
2578 The high level process to rebuild the inode index btree is as follows; a condensed sketch appears after the list:
2580 1. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
2584 2. Append the records to an xfarray in inode order.
2586 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2588 If the free space inode btree is enabled, call it again to estimate the
2593 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2595 If the free space inode btree is enabled, call it again to load the finobt.
2597 6. Commit the location of the new btree root block(s) to the AGI.
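
Steps 3 through 5 above distill to something like the following, assuming a
staging btree cursor, a hypothetical space reservation helper, and an xfarray
holding the staged records; ``xfs_btree_bload_compute_geometry`` and
``xfs_btree_bload`` are the bulk loader entry points named earlier, though the
bload structure's record callback is omitted here for brevity::

  struct xexample_ibt {
          struct xfarray  *inode_records; /* staged inobt records */
          /* space reservations, AG context, etc. */
  };

  /* Hypothetical helper: reserve nr_blocks of disk space for the new tree. */
  int xexample_reserve_blocks(struct xexample_ibt *ri, uint64_t nr_blocks);

  static int xexample_build_inobt(struct xexample_ibt *ri,
                  struct xfs_btree_cur *ino_cur)
  {
          struct xfs_btree_bload  bload = {
                  /* record-copying callback (pulls from the xfarray) omitted */
          };
          uint64_t                nr_records;
          int                     error;

          nr_records = xfarray_length(ri->inode_records);

          /* Step 3: how many blocks will the new btree need? */
          error = xfs_btree_bload_compute_geometry(ino_cur, &bload,
                          nr_records);
          if (error)
                  return error;

          /* Step 4: reserve that many blocks for the new structure. */
          error = xexample_reserve_blocks(ri, bload.nr_blocks);
          if (error)
                  return error;

          /* Step 5: write the staged records into the new btree blocks. */
          return xfs_btree_bload(ino_cur, &bload, ri);
  }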
2603 The inode btree maps inumbers to the ondisk location of the associated
2615 If there are, the space metadata inconsistencies are reason enough to abort the
2617 Otherwise, read each cluster buffer to check that its contents appear to be
2618 ondisk inodes and to decide if the file is allocated
2621 enough information to fill a single inode chunk record, which is 64 consecutive
2626 ``xfarray_append`` to add the inode btree record to the xfarray.
2627 This xfarray is walked twice during the btree creation step -- once to populate
2628 the inode btree with all inode chunk records, and a second time to populate the
2631 but the record count for the free inode btree has to be computed as inode chunk
2642 Reverse mapping records are used to rebuild the reference count information.
2646 physical blocks, and that the rectangles can be laid down to allow them to
2661 Extents being used to stage copy-on-write operations should be the only records
2666 The high level process to rebuild the reference count btree is:
2668 1. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
2669 records for any space having more than one reverse mapping and add them to
2671 Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
2672 because these are extents allocated to stage a copy on write operation and
2675 Use any records owned by ``XFS_RMAP_OWN_REFC`` to create a bitmap of old
2682 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2687 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2690 6. Commit the location of new btree root block to the AGF.
2694 Details are as follows; the same algorithm is used by ``xfs_repair`` to
2718 refcount record associating the block number range that we just walked to
2723 Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
2735 The high level process to rebuild a data/attr fork mapping btree is:
2737 1. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
2739 Append these records to an xfarray.
2743 2. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2749 records to that immediate area and skip to step 8.
2753 6. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2756 7. Commit the new btree root block to the inode fork immediate area.
2761 First, it's possible to move the fork offset to adjust the sizes of the
2763 Second, if there are sufficiently few fork mappings, it may be possible to use
2765 Third, the incore extent map must be reloaded carefully to avoid disturbing
2778 Whenever online fsck builds a new data structure to replace one that is
2779 suspect, there is a question of how to find and dispose of the blocks that
2780 belonged to the old structure.
2781 The laziest method of course is not to deal with them at all, but this slowly
2782 leads to service degradations as space leaks out of the filesystem.
2783 Hopefully, someone will schedule a rebuild of the free space information to
2786 the files and directories that it decides not to clear, hence it can build new
2790 to find space that is owned by the corresponding rmap owner yet truly free.
2794 Permitting the block allocator to hand them out again will not push the system
2797 For space metadata, the process of finding extents to dispose of generally
2801 The space reservations used to create the new metadata can be used here if
2802 the same rmap owner code is used to denote all of the objects being rebuilt.
2804 2. Survey the reverse mapping data to create a bitmap of space owned by the
2807 3. Use the bitmap disunion operator to subtract (1) from (2).
2809 The process moves on to step 4 below.
2813 new structure attached to a temporary file and swapping the forks.
2832 structure being repaired and move on to the next region.
2834 7. If the region is to be freed, mark any corresponding buffers in the buffer
2835 cache as stale to prevent log writeback.
2839 However, there is one complication to this procedure.
2840 Transactions are of finite size, so the reaping process must be careful to roll
2841 the transactions to avoid overruns.
2850 As stated earlier, online repair functions use very large transactions to
2861 Old reference count and inode btrees are the easiest to reap because they have
2864 Creating a list of extents to reap the old btree blocks is quite simple,
2867 1. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.
2869 2. For each reverse mapping record with an rmap owner corresponding to the
2880 If it is possible to maintain the AGF lock throughout the repair (which is the
2887 The high level process to rebuild the free space indices is:
2889 1. Walk the reverse mapping records to generate ``struct xfs_alloc_rec_incore``
2892 2. Append the records to an xfarray.
2894 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2900 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2904 6. Commit the locations of the new btree root blocks to the AGF.
2920 index reconstruction, so it can use the collected free space information to
2922 It is not necessary to back each reserved extent with an EFI because the new
2932 Deferred rmap and freeing operations are used to ensure that this transition
2933 is atomic, similar to the other btree repair functions.
2935 Third, finding the blocks to reap after the repair is not overly
2943 When repair walks reverse mapping records to synthesize free space records, it
2946 The repair context maintains a second bitmap corresponding to the rmap btree
2963 Old reverse mapping btrees are less difficult to reap after a repair.
2973 corresponding to the gaps in the new rmap btree records, and then clearing the
2974 bits corresponding to extents in the free space btrees and the current AGFL
2998 other owner, to avoid re-adding crosslinked blocks to the AGFL.
3002 5. The next operation to fix the freelist will right-size the list.
3012 careful to access the ondisk metadata *only* when the ondisk metadata is so
3014 When online fsck wants to open a damaged file for scrubbing, it must use
3016 representation *or* a lock on whichever object is necessary to prevent any
3017 update to the ondisk location.
3019 The only repairs that should be made to the ondisk inode buffers are whatever
3020 is necessary to get the in-core structure loaded.
3026 subject it to comprehensive checks, repairs, and optimizations.
3027 Most inode attributes are easy to check and constrain, or are user-controlled
3028 arbitrary bit patterns; these are both easy to fix.
3032 fsck functions to run.
3042 Similar to inodes, quota records ("dquots") also have both ondisk records and
3043 an in-memory representation, and hence are subject to the same cache coherency
3047 The only repairs that should be made to the ondisk quota record buffers are
3048 whatever is necessary to get the in-core structure loaded.
3062 Freezing to Fix Summary Counters
3072 which are key to enabling resource reservations for active transactions.
3075 It is therefore only necessary to serialize on the superblock when the
3076 superblock is being committed to disk.
3079 by training log recovery to recompute the summary counters from the AG headers,
3080 which eliminated the need for most transactions even to touch the superblock.
3082 To reduce contention even further, the incore counter is implemented as a
3087 online fsck to check them, since there is no way to quiesce a percpu counter
3089 Although online fsck can read the filesystem metadata to compute the correct
3090 values of the summary counters, there's no way to hold the value of a percpu
3093 Earlier versions of online scrub would return to userspace with an incomplete
3096 filesystem metadata to get an accurate reading and install it in the percpu
3099 To satisfy this requirement, online fsck must prevent other programs in the
3100 system from initiating new writes to the filesystem, it must disable background
3101 garbage collection threads, and it must wait for existing writer programs to
3104 inode btrees, and the realtime bitmap to compute the correct value of all
3106 This is very similar to a filesystem freeze, though not all of the pieces are
3109 - The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
3115 With this code in place, it is now possible to pause the filesystem for just
3116 long enough to check and correct the summary counters.
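
As a minimal sketch, assuming the filesystem has already been paused as
described above so that the counters cannot change underneath the check, the
free block counter comparison could look like this; the error convention at
the end is illustrative::

  #include <linux/percpu_counter.h>

  static int xexample_check_fdblocks(struct xfs_mount *mp,
                  int64_t expected_fdblocks)
  {
          /* Summing is safe here only because writers are paused. */
          int64_t curr = percpu_counter_sum(&mp->m_fdblocks);

          if (curr == expected_fdblocks)
                  return 0;

          /* Repair: install the freshly computed value. */
          percpu_counter_set(&mp->m_fdblocks, expected_fdblocks);
          return -EAGAIN; /* illustrative: tell the caller to re-check */
  }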
3122 | mechanism to quiesce filesystem activity. |
3123 | With the filesystem frozen, it is possible to resolve the counter values |
3128 | This leads to incorrect scan results and incorrect repairs. |
3130 | - Adding an extra lock to prevent others from thawing the filesystem |
3131 | required the addition of a ``->freeze_super`` function to wrap |
3135 | last reference to the VFS superblock, and any subsequent access |
3139 | This problem could be solved by grabbing extra references to the |
3143 | - The log need not be quiesced to check the summary counters, but a VFS |
3145 | This adds unnecessary runtime to live fscounter fsck operations. |
3148 | counters to disk as part of cleaning the log. |
3151 | sync_filesystem fails to flush the filesystem and returns an error. |
3164 entire filesystem to record observations and comparing the observations against
3167 observations to disk in a replacement structure and committing it atomically.
3168 However, it is not practical to shut down the entire filesystem to examine
3170 Therefore, online fsck must build the infrastructure to manage a live scan of
3172 There are two questions that need to be solved to perform a live walk:
3176 - How does the scan keep abreast of changes being made to the system by other
3203 Naturally, a scan through a keyspace requires a scan cursor object to track the
3210 concurrent filesystem update needs to be incorporated into the scan data.
3216 1. Lock the AGI buffer of the AG containing the inode pointed to by the visited
3221 2. Use the per-AG inode btree to look up the next inumber after the one that
3226 a. Move the examination cursor to the point of the inumber keyspace that
3227 corresponds to the start of the next AG.
3229 b. Adjust the visited inode cursor to indicate that it has "visited" the
3231 XFS inumbers are segmented, so the cursor needs to be marked as having
3232 visited the entire keyspace up to just before the start of the next AG's
3235 c. Unlock the AGI and return to step 1 if there are unexamined AGs in the
3238 d. If there are no more AGs to examine, set both cursors to the end of the
3242 4. Otherwise, there is at least one more inode to scan in this AG:
3244 a. Move the examination cursor ahead to the next inode marked as allocated
3247 b. Adjust the visited inode cursor to point to the inode just prior to where
3255 it was safe to advance the examination cursor across the entire keyspace,
3259 6. Drop the AGI lock and return the incore inode to the caller.
3265 2. Advance the scan cursor (``xchk_iscan_iter``) to get the next inode.
3268 a. Lock the inode to prevent updates during the scan.
3273 (``xchk_iscan_mark_visited``) to point to this inode.
3277 8. Call ``xchk_iscan_teardown`` to complete the scan; a condensed sketch of this loop appears below.
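
Condensed into code, the loop above might look like the following; the
xchk_iscan and inode locking calls are the ones named in this section, while
their return conventions and the per-inode callback are assumptions::

  static int xexample_scan_all_inodes(struct xfs_scrub *sc,
                  struct xchk_iscan *iscan,
                  int (*fn)(struct xfs_scrub *sc, struct xfs_inode *ip))
  {
          struct xfs_inode        *ip;
          int                     error;

          /* Assumed: 1 means an inode was grabbed, 0 means scan complete. */
          while ((error = xchk_iscan_iter(iscan, &ip)) == 1) {
                  xfs_ilock(ip, XFS_ILOCK_SHARED);        /* step 3a */
                  error = fn(sc, ip);                     /* steps 4-5 */
                  xfs_iunlock(ip, XFS_ILOCK_SHARED);

                  /* Step 6: remember how far the scan has gotten. */
                  xchk_iscan_mark_visited(iscan, ip);
                  xchk_irele(sc, ip);

                  if (error)
                          break;
          }

          xchk_iscan_teardown(iscan);                     /* step 8 */
          return error;
  }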
3282 enough to load it into the inode cache.
3284 coordinator must release the AGI and push the main filesystem to get the inode
3299 In regular filesystem code, references to allocated XFS incore inodes are
3303 However, it is important to note that references to incore inodes obtained as
3308 References to incore inodes are always released (``xfs_irele``) outside of
3312 - The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
3315 - Speculative preallocations need to be unreserved.
3329 to avoid deadlocks:
3335 3. Inode ``IOLOCK`` (VFS ``i_rwsem``) lock to control file IO.
3360 then decide to cross-reference the object with an object that is acquired
3363 to avoid deadlocks.
3369 context, and possibly with resources already locked and bound to it.
3379 flag on the inode to cause the kernel to try to drop the inode into the
3385 On the other hand, if there is no scrub transaction, it is desirable to drop
3386 otherwise unused inodes immediately to avoid polluting caches.
3387 To capture these nuances, the online fsck code has a separate ``xchk_irele``
3388 function to set or clear the ``DONTCACHE`` flag to get the required release
3405 For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
3409 Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must be
3416 needs to take the IOLOCK of the file at the other end of the directory link.
3422 needs to take a second lock of the same class, it uses trylock to avoid an ABBA
3424 If the trylock fails, scrub drops all inode locks and uses trylock loops to
3426 Trylock loops enable scrub to check for pending fatal signals, which is how
3428 However, the use of trylock loops means that online fsck must be prepared to measure the
3429 resource being scrubbed before and after the lock cycle to detect changes and
3438 Online fsck must verify that the dotdot dirent of a directory points up to a
3440 pointing down to the child directory.
3443 while updates to the directory tree are being made.
3444 The coordinated inode scan provides a way to walk the filesystem without the
3446 The child directory is kept locked to prevent updates to the dotdot dirent, but
3447 if the scanner fails to lock a parent, it can drop and relock both the child
3464 filesystem scan is the ability to stay informed about updates being made by
3467 Two pieces of Linux kernel infrastructure enable online fsck to monitor regular
3470 Filesystem hooks convey information about an ongoing filesystem operation to
3474 notifier call chain facility to dispatch updates to any number of interested
3478 Because these hooks are private to the XFS module, the information passed along
3479 contains exactly what the checking function needs to update its observations.
3481 The current implementation of XFS hooks uses SRCU notifier chains to reduce the
3482 impact to highly threaded workloads.
3483 Regular blocking notifier chains use a rwsem and seem to have a much lower
3488 The following pieces are necessary to hook a certain point in the filesystem:
3497 around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
3498 checking to ensure correct usage.
3500 - A callsite in the regular filesystem code must be chosen to call
3502 This place should be adjacent to (and not earlier than) the place where
3503 the filesystem update is committed to the transaction.
3504 In general, when the filesystem calls a hook chain, it should be able to
3505 handle sleeping and should not be vulnerable to memory reclaim or locking
3510 - The online fsck function should define a structure to hold scan data, a lock
3511 to coordinate access to the scan data, and a ``struct xfs_hook`` object.
3515 - The online fsck code must contain a C function to catch the hook action code
3518 hook information must be applied to the scan data.
3520 - Prior to unlocking inodes to start the scan, online fsck must call
3521 ``xfs_hooks_setup`` to initialize the ``struct xfs_hook``, and
3522 ``xfs_hooks_add`` to enable the hook.
3524 - Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan is
3527 The number of hooks should be kept to a minimum to reduce complexity.
3528 Static keys are used to reduce the overhead of filesystem hooks to nearly
3565 These rules must be followed to ensure correct interactions between the
3566 checking code and the code making an update to the filesystem:
3568 - Prior to invoking the notifier call chain, the filesystem function being
3570 to scan the inode.
3572 - The scanning function and the scrub hook function must coordinate access to
3575 - The scrub hook function must not add the live update information to the scan
3585 - The hook function can abort the inode scan to avoid breaking the other rules.
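As a concrete illustration of these rules, here is a rough sketch of a hook
subscription and its handler.
The payload type (a bare inode number here), the ``m_example_hooks`` chain
location, and the structure layouts and helper signatures are assumptions;
only the overall pattern follows from the requirements above, and the
``xchk_iscan_want_live_update`` helper is described below.

.. code-block:: c

  /*
   * Sketch only: live update hook state and handler.  Structure layouts
   * and helper signatures are illustrative assumptions.
   */
  struct xchk_example_scan {
      struct xfs_scrub    *sc;
      struct xchk_iscan   iscan;   /* coordinated inode scan state */
      struct mutex        lock;    /* serializes access to scan data */
      struct xfs_hook     hook;    /* live update subscription */
      /* ...shadow observations collected by the scan... */
  };

  /* Runs via the SRCU notifier chain in the context of the updating thread. */
  static int xchk_example_hook(struct notifier_block *nb, unsigned long action,
                               void *data)
  {
      /* Assume the payload is just the inode number being updated. */
      xfs_ino_t *inop = data;
      struct xchk_example_scan *es;

      es = container_of(nb, struct xchk_example_scan, hook.nb);

      /* Ignore updates to inodes that the scan has not visited yet. */
      if (!xchk_iscan_want_live_update(&es->iscan, *inop))
          return NOTIFY_DONE;

      mutex_lock(&es->lock);
      /* Fold the update described by @action into the scan observations. */
      mutex_unlock(&es->lock);
      return NOTIFY_DONE;
  }

  static int xchk_example_start_scan(struct xchk_example_scan *es)
  {
      /* Enable the hook before unlocking inodes to start the scan. */
      xfs_hooks_setup(&es->hook, xchk_example_hook);
      return xfs_hooks_add(&es->sc->mp->m_example_hooks, &es->hook);
  }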
3591 - ``xchk_iscan_iter`` grabs a reference to the next inode in the scan or
3592 returns zero if there is nothing left to scan
3594 - ``xchk_iscan_want_live_update`` to decide if an inode has already been
3596 This is critical for hook functions to decide if they need to update the
3599 - ``xchk_iscan_mark_visited`` to mark an inode as having been visited in the
3602 - ``xchk_iscan_teardown`` to finish the scan
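Putting these helpers together, the scanning side might look like the
following sketch; the argument lists and return conventions are assumptions
based on the descriptions above.

.. code-block:: c

  /*
   * Sketch only: drive a coordinated inode scan with the helpers above.
   */
  static int xchk_example_scan_inodes(struct xfs_scrub *sc,
                                      struct xchk_iscan *iscan)
  {
      struct xfs_inode *ip;
      int error;

      while ((error = xchk_iscan_iter(iscan, &ip)) == 1) {
          /* Lock the inode and record observations in the scan data... */

          /* ...then mark it visited so that hooks start sending updates. */
          xchk_iscan_mark_visited(iscan, ip);
          xchk_irele(sc, ip);

          /* Long scans must watch for pending fatal signals. */
          if (xchk_should_terminate(sc, &error))
              break;
      }

      xchk_iscan_teardown(iscan);
      return error;
  }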
3614 It is useful to compare the mount time quotacheck code to the online repair
3616 Mount time quotacheck does not have to contend with concurrent operations, so
3624 Add each file's resource usage to the incore dquot.
3628 incore dquot to a delayed write (delwri) list.
3630 4. Write the buffer list to disk.
3632 Like most online fsck functions, online quotacheck can't write to regular
3635 Therefore, online quotacheck records file resource usage to a shadow dquot
3636 index implemented with a sparse ``xfarray``, and only writes to the real dquots
3639 are handled in phases to minimize contention on dquots:
3641 1. The inodes involved are joined and locked to a transaction.
3643 2. For each dquot attached to the file:
3647 b. A quota reservation is added to the dquot's resource usage.
3658 b. Quota usage changes are logged and unused reservation is given back to
3665 (``dqtrx``) that operates in a similar manner to the regular code.
3666 The step 4 hook commits the shadow ``dqtrx`` changes to the shadow dquots.
3679 realtime blocks) and add that to the shadow dquots for the user, group,
3691 Live updates are key to being able to walk every quota record without
3692 needing to hold any locks for a long duration.
3694 resource counts are set to the values in the shadow dquot.
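For example, the final comparison step might look like this sketch, assuming
a hypothetical ``xqcheck_get_shadow_dquot`` accessor for the sparse xfarray
and a hypothetical corruption-marking helper.

.. code-block:: c

  /*
   * Sketch only: compare one live dquot against the shadow dquot built by
   * the scan.  The scan data structure and helpers are illustrative.
   */
  static int xqcheck_compare_dquot(struct xqcheck *xqc, struct xfs_dquot *dqp)
  {
      struct xqcheck_dquot xcdq;   /* hypothetical shadow record */
      int error;

      mutex_lock(&xqc->lock);
      error = xqcheck_get_shadow_dquot(xqc, dqp->q_id, &xcdq);
      if (error)
          goto out_unlock;

      if (xcdq.icount != dqp->q_ino.count ||
          xcdq.bcount != dqp->q_blk.count ||
          xcdq.rtbcount != dqp->q_rtb.count)
          xqcheck_set_corrupt(xqc);   /* hypothetical */

  out_unlock:
      mutex_unlock(&xqc->lock);
      return error;
  }

If a repair is underway, the same walk would instead copy the shadow counts
into the real dquot and log the change.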
3707 The coordinated inode scanner is used to visit all directories on the
3725 A crucial point to understand about how the link count inode scanner interacts
3730 Furthermore, a subdirectory A with a dotdot entry pointing back to B is
3740 The backref information is used to detect inconsistencies in the number of
3741 links pointing to child subdirectories and the number of dotdot entries
3747 Live updates are key to being able to walk every inode without needing to hold
3749 If repairs are desired, the inode's link count is set to the value in the
3751 If no parents are found, the file must be :ref:`reparented <orphanage>` to the
3752 orphanage to prevent the file from being lost forever.
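The shadow data kept for each file might look like the following sketch; the
field names are illustrative, and the comments merely restate the consistency
rules described above.

.. code-block:: c

  /*
   * Sketch only: per-inode link count observations gathered by the scan.
   */
  struct xchk_nlink_sketch {
      /* Directory entries found elsewhere that point to this file. */
      xfs_nlink_t parents;

      /* Dotdot entries in child subdirectories that point back here. */
      xfs_nlink_t backrefs;

      /* Subdirectories linked from this directory's own entries. */
      xfs_nlink_t children;
  };

  /*
   * For a consistent directory, the number of dotdot backrefs observed
   * should match the number of child subdirectories counted from its own
   * entries, and the ondisk link count should agree with the total number
   * of links implied by the parents and children counts.
   */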
3766 and use an :ref:`in-memory array <xfarray>` to store the gathered observations.
3778 Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
3783 <liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete the
3786 1. Set up an xfbtree to stage rmap records.
3795 can receive updates to the rmap btree from the rest of the filesystem during
3804 b. Use the rmap code to add the record to the in-memory btree.
3806 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3807 xfbtree changes to the xfile.
3817 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3818 xfbtree changes to the xfile.
3819 This is performed with an empty transaction to avoid changing the
3830 10. Perform the usual btree bulk loading and commit to install the new rmap
3834 to :ref:`reap after rmap btree repair <rmap_reap>`.
3849 File forks map 64-bit logical file fork space extents to physical storage space
3850 extents, similar to how a memory management unit maps 64-bit virtual addresses
3851 to physical memory addresses.
3854 to other blocks mapped within that same address space, and file-based linear
3863 contents) to commit the repair.
3869 consistent to use a temporary file safely!
3871 memory to stage ondisk space usage information.
3874 block headers to match the file being repaired and not the temporary file. The
3875 directory, extended attribute, and symbolic link functions were all modified to
3876 allow callers to specify owner numbers explicitly.
3878 There is a downside to the reaping process -- if the system crashes during the
3882 Temporary files created for repair are similar to ``O_TMPFILE`` files created
3885 the last reference to the file is lost.
3887 the kernel at all, they must be specially marked to prevent them from being
3902 | using a ``COLLAPSE_RANGE`` operation to slide the new extents into |
3909 | applied to the record offset computation to build an alternate copy. |
3911 | - Extended attributes are allowed to use the entire attr fork offset |
3916 | requirement means that online repair would have to be able to perform |
3917 | a log assisted ``COLLAPSE_RANGE`` operation to ensure that the old |
3930 | An atomic range collapse operation would have to rewrite this part of |
3933 | it's something to be aware of. |
3937 | Were the atomic commit to use a range collapse operation, each block |
3938 | would have to be rewritten very carefully to preserve the graph |
3941 | of blocks repeatedly, which is not conducive to quick repairs. |
3943 | This led to the introduction of temporary file staging.
3949 Online repair code should use the ``xrep_tempfile_create`` function to create a
3951 This allocates an inode, marks the in-core inode private, and attaches it to
3953 These files are hidden from userspace, may not be added to the directory tree,
3960 access to file data is controlled via the IOLOCK, and access to file metadata
3964 To comply with the nested locking strategy laid out in the :ref:`inode
3968 Data can be written to a temporary file by two means:
3970 1. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
3974 be used to write to the temporary file.
3977 must be conveyed to the file being repaired, which is the topic of the next
3990 It is not possible to swap the inumbers of two files, so instead the new
3992 This suggests the need for the ability to swap extents, but the existing extent
3997 reverse mapping information up to date with every exchange of mappings.
4005 c. Defragmentation is assumed to occur between two files with identical
4010 d. Online repair needs to swap the contents of two files that are by definition
4019 of log intent item to track the progress of an operation to exchange two file
4023 The new log item records the progress of the exchange to ensure that once an
4024 exchange begins, it will always run to completion, even if there are
4039 | ``sb_features_log_incompat`` field to indicate that the log contains |
4050 | time that the log cleans itself, it is necessary for upper level code to |
4051 | communicate to the log when it is going to use a log incompatible |
4054 | The log coordinates access to incompatible features through the use of |
4056 | The log cleaning code tries to take this rwsem in exclusive mode to |
4058 | Filesystem code signals its intention to use a log incompat feature in a |
4062 | functions to obtain the log feature and call |
4063 | ``xfs_add_incompat_log_feature`` to set the feature bits in the primary |
4065 | The superblock update is performed transactionally, so the wrapper to |
4066 | obtain log assistance must be called just prior to the creation of the |
4071 | function is called to release the feature. |
4084 The goal is to exchange all file fork mappings between two file fork offset
4086 There are likely to be many extent mappings in each fork, and the edges of
4088 Furthermore, there may be other updates that need to happen after the swap,
4089 such as exchanging file sizes, inode flags, or conversion of fork data to local
4113 The new log intent item contains enough information to track two logical fork
4117 from one file to the other.
4119 and the blockcount field is decremented to reflect the progress made.
4121 instead of the data fork and other work to be done after the extent swap.
4122 The two isize fields are used to swap the file size at the end of the operation
4128 At the start, it should contain the entirety of the file ranges to be
4131 2. Call ``xfs_defer_finish`` to process the exchange.
4133 This will log an extent swap intent item to the transaction for the deferred
4144 Mutual holes, unwritten extents, and extent mappings to the same physical
4147 For the next few steps, this document will refer to the mapping that came
4150 b. Create a deferred block mapping update to unmap map1 from file 1.
4152 c. Create a deferred block mapping update to unmap map2 from file 2.
4154 d. Create a deferred block mapping update to map map1 into file 2.
4156 e. Create a deferred block mapping update to map map2 into file 1.
4177 l. Return the proper error code (EAGAIN) to the deferred operation manager
4178 to inform it that there is more work to be done.
4180 moving back to the start of step 3.
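A rough sketch of the state carried by such an intent item follows; the field
names and layout are illustrative and are not the ondisk log format.

.. code-block:: c

  /*
   * Sketch only: in-memory state tracked by the extent swap intent item.
   */
  struct xfs_swapext_sketch {
      struct xfs_inode  *sxi_ip1;         /* file 1 */
      struct xfs_inode  *sxi_ip2;         /* file 2 */
      xfs_fileoff_t     sxi_startoff1;    /* next fork offset in file 1 */
      xfs_fileoff_t     sxi_startoff2;    /* next fork offset in file 2 */
      xfs_filblks_t     sxi_blockcount;   /* blocks remaining to exchange */
      uint64_t          sxi_flags;        /* attr fork?  set file sizes? */
      xfs_fsize_t       sxi_isize1;       /* file 1 size to set, if any */
      xfs_fsize_t       sxi_isize2;       /* file 2 size to set, if any */
  };

After each pass through the steps above, the startoff fields advance past the
mappings just exchanged and the blockcount shrinks, so log recovery can resume
the exchange exactly where it left off.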
4194 There are a few things that need to be taken care of before initiating an
4196 First, regular files require the page cache to be flushed to disk before the
4197 operation begins, and directio writes to be quiesced.
4200 the operation, and reserve that quantity of resources to avoid an unrecoverable
4202 The preparation step scans the ranges of both files to estimate:
4204 - Data device blocks needed to handle the repeated updates to the fork
4209 - The number of extent mappings that will be added to each file.
4211 User programs must never be able to access a realtime file extent that maps
4212 to different extents on the realtime volume, which could happen if the
4213 operation fails to run to completion.
4216 but it is very important to maintain correct accounting.
4218 swap ever add more extent mappings to a fork than it can support.
4219 Regular users are required to abide by the quota limits, though metadata repairs
4220 may exceed quota to resolve inconsistent metadata elsewhere.
4225 Extended attributes, symbolic links, and directories can set the fork format to
4227 Metadata repairs must take extra steps to support these cases:
4238 The contents of the local format fork are converted to a block to perform the
4240 The conversion to block format must be done in the same transaction that
4242 The regular atomic extent swap is used to exchange the mappings.
4244 rolled one more time to convert the second file's fork back to local format
4245 so that the second file will be ready to go as soon as the ILOCK is dropped.
4249 Although there is no verification, it is still important to maintain
4250 referential integrity, so prior to performing the extent swap, online repair
4266 To repair a metadata file, online repair proceeds as follows:
4270 2. Use the staging data to write out new contents into the temporary repair
4272 The same fork must be written to as is being repaired.
4277 4. Call ``xrep_tempswap_trans_alloc`` to allocate a new scrub transaction with
4281 5. Call ``xrep_tempswap_contents`` to swap the contents.
4283 6. Commit the transaction to complete the repair.
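A sketch of that sequence follows, assuming hypothetical helpers for the
staging and commit steps around the two ``xrep_tempswap_*`` calls named above;
all of the signatures are assumptions.

.. code-block:: c

  /*
   * Sketch only: repair a metadata file by swapping in staged contents.
   */
  static int xrep_example_metafile(struct xfs_scrub *sc)
  {
      struct xrep_tempswap ts;   /* hypothetical swap request state */
      int error;

      /* Write the staged records into the same fork of the temp file. */
      error = xrep_example_stage_contents(sc);   /* hypothetical */
      if (error)
          return error;

      /* Allocate a transaction sized for the swap, then exchange. */
      error = xrep_tempswap_trans_alloc(sc, XFS_DATA_FORK, &ts);
      if (error)
          return error;
      error = xrep_tempswap_contents(sc, &ts);
      if (error)
          return error;

      return xrep_example_commit(sc);   /* hypothetical */
  }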
4291 bitmap, similar to Unix FFS.
4294 The realtime summary file indexes the number of free extents of a given size to
4298 length, similar to what the free space by count (cntbt) btree does for the data
4303 counters to match the number of blocks in the rt bitmap.
4307 To check the summary file against the bitmap:
4318 c. Increment it, and write it back to the xfile.
4322 To repair the summary file, write the xfile contents into the temporary file
4323 and use atomic extent swap to commit the new contents.
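The counting step of the check might look like the following sketch; the
xfile accessors and the slot computation helper are assumptions, and only the
counting strategy comes from the description above.

.. code-block:: c

  /*
   * Sketch only: record one free rt extent in the shadow summary xfile.
   * This would be called for every free extent found in the rt bitmap.
   */
  static int xchk_example_rtsum_record_free(struct xfs_mount *mp,
                                            struct xfs_trans *tp,
                                            const struct xfs_rtalloc_rec *rec,
                                            void *priv)
  {
      struct xfile *xf = priv;
      xfs_suminfo_t value;
      loff_t pos;
      int error;

      /*
       * Hypothetical helper: map (log2 of the free extent length, rt
       * bitmap block of the starting extent) to a summary slot.
       */
      pos = xchk_example_rtsum_slot(mp, rec) * sizeof(value);

      error = xfile_load(xf, &value, sizeof(value), pos);
      if (error)
          return error;
      value++;
      return xfile_store(xf, &value, sizeof(value), pos);
  }

Once the bitmap walk finishes, each shadow counter is compared against the
ondisk summary; any mismatch marks the summary file for repair.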
4335 Values are limited in size to 64KiB, but there is no limit in the number of
4344 btree (``dabtree``) is created to map hashes of attribute names to entries
4349 1. Walk the attr fork mappings of the file being repaired to find the attribute
4353 a. Walk the attr leaf block to find candidate keys.
4359 If that succeeds, add the name and value to the staging xfarray and
4363 memory or there are no more attr fork blocks to examine, unlock the file and
4364 add the staged extended attributes to the temporary file.
4366 3. Use atomic extent swapping to exchange the new and old extended attribute
4368 The old attribute blocks are now attached to the temporary file.
4382 The offline repair tool scans all inodes to find files with nonzero link count,
4383 and then it scans all directories to establish parentage of those linked files.
4385 moved to the ``/lost+found`` directory.
4386 It does not try to salvage anything.
4388 The best that online repair can do at this time is to read directory data
4393 and moving orphans to the ``/lost+found`` directory.
4402 If the dotdot entry is readable, try to confirm that the alleged
4403 parent has a child entry pointing back to the directory being repaired.
4404 Otherwise, walk the filesystem to find it.
4406 2. Walk the first partition of the data fork of the directory to find the directory
4410 a. Walk the directory data block to find candidate entries.
4416 If that succeeds, add the name, inode number, and file type to the
4420 memory or there are no more directory data blocks to examine, unlock the
4424 4. Use atomic extent swapping to exchange the new and old directory structures.
4425 The old directory blocks are now attached to the temporary file.
4434 In theory it is necessary to scan all dentry cache entries for a directory to
4446 Unfortunately, the current dentry cache design doesn't provide a means to walk
4458 A parent pointer is a piece of file metadata that enables a user to locate the
4459 file's parent directory without having to traverse the directory tree from the
4469 In other words, child files use extended attributes to store pointers to
4471 The directory checking process can be strengthened to ensure that the target of
4472 each dirent also contains a parent pointer pointing back to the dirent.
4485 | Each link from a parent directory to a child file is mirrored with an |
4486 | extended attribute in the child that could be used to identify the |
4491 | 1. The XFS codebase of the late 2000s did not have the infrastructure to |
4494 | followed up with the corresponding change to the reverse links. |
4498 | taking any kernel or inode locks to coordinate access. |
4503 | used to reconnect the directory tree. |
4506 | that parent pointer attribute creation is likely to fail at some |
4510 | a file system repair to depend on. |
4513 | During 2022, Allison introduced log intent items to track physical |
4515 | This solves the referential integrity problem by making it possible to |
4520 | to handle the maximum hardlink count of any file. |
4533 2. Set up an inode scanner and hook into the directory entry code to receive
4542 b. When finished scanning that file, flush the stashed updates to the
4551 We cannot write directly to the temporary directory because hook
4552 functions are not allowed to modify filesystem metadata.
4554 to apply the stashed updates to the temporary directory.
4574 *Answer*: There are a few ways to solve this problem:
4577 sufficient to find the entry in the parent.
4582 will fail due to conflicts with the free space in the directory.
4585 amending the xattr code to support updating an xattr key and reindexing the
4586 dabtree, though this would have to be performed with the parent directory
4592 4. Change the ondisk xattr format to ``(parent_inum, name) → (parent_gen)``,
4594 forcing repair code to update the dirent position.
4595 Unfortunately, this requires changes to the xattr code to support attr
4598 5. Change the ondisk xattr format to ``(parent_inum, hash(name)) →
4600 If the hash is sufficiently resistant to collisions (e.g. sha256) then
4610 Online reconstruction of a file's parent pointer information works similarly to
4617 2. Set up an inode scanner and hook into the directory entry code to receive
4627 b. When finished scanning the directory, flush the stashed updates to the
4636 We cannot write parent pointers directly to the temporary file because
4637 hook functions are not allowed to modify filesystem metadata.
4639 to apply the stashed parent pointer updates to the temporary file.
4641 5. Copy all non-parent pointer extended attributes to the temporary file.
4659 Parent pointer checks are therefore a second pass to be added to the existing
4694 need to be written to the inode.
4697 and need to be removed from the inode.
4702 4. Move on to examining link counts, as we do today.
4711 during phase 3 to decide which files are corrupt enough to be zapped.
4712 This scan would have to be converted into a multi-pass scan:
4718 2. The next pass records parent pointers pointing to the directories noted
4720 This second pass may have to happen after the phase 4 scan for duplicate
4723 3. The third pass resets corrupt directories to an empty shortform directory.
4728 Use the parent pointer information recorded during step 2 to reconstruct
4729 the dirents and add them to the now-empty directories.
4741 downwards either to more subdirectories or to non-directory files.
4743 disconnected graph, which makes files impossible to access via regular path
4747 detect a dotdot entry pointing to a parent directory that doesn't have a link
4748 back to the child directory and the file link count checker can detect a file
4749 that isn't pointed to by any directory in the filesystem.
4756 When orphans are found, they should be reconnected to the directory tree.
4757 Offline fsck solves the problem by creating a directory ``/lost+found`` to
4760 Reparenting a file to the orphanage does not reset any of its permissions or
4765 VFS mechanisms to create the orphanage directory with all the necessary
4772 to try to ensure that the lost and found directory actually exists.
4773 This also attaches the orphanage directory to the scrub context.
4775 2. If the decision is made to reconnect a file, take the IOLOCK of both the
4781 to compute the new name in the orphanage and the block reservation required.
4783 4. Use ``xrep_orphanage_adoption_prep`` to reserve resources to the repair
4786 5. Call ``xrep_orphanage_adopt`` to reparent the orphaned file into the lost
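A sketch of the adoption sequence using the helpers named above; the earlier
steps use hypothetical helper names, and all of the signatures are
assumptions.

.. code-block:: c

  /*
   * Sketch only: move an orphaned file into the lost and found.
   */
  static int xrep_example_adopt(struct xfs_scrub *sc,
                                struct xrep_adoption *adopt)
  {
      int error;

      /* Pick a name in the orphanage and size the block reservation. */
      error = xrep_example_compute_name(adopt);   /* hypothetical */
      if (error)
          return error;

      /* Reserve resources and join both directories to the transaction. */
      error = xrep_orphanage_adoption_prep(adopt);
      if (error)
          return error;

      /* Create the new directory entry and update the link counts. */
      return xrep_orphanage_adopt(adopt);
  }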
4798 program, ``xfs_scrub``, that provide the ability to drive metadata checks and
4824 the file forks that map directory and extended attribute data to physical
4843 Therefore, a metadata dependency graph is a convenient way to schedule checking
4846 - Phase 1 checks that the provided path maps to an XFS filesystem and detect
4858 to checking names.
4860 - Phase 6 depends on groups (i) through (b) to find file data blocks to verify,
4861 to read them, and to report which blocks of which files are affected.
4873 it is desirable to scrub inodes in parallel to minimize runtime, particularly
4875 This requires careful scheduling to keep the threads as evenly loaded as
4880 Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``) to find
4881 inode chunks and then called bulkstat (``XFS_IOC_BULKSTAT``) to gather enough
4882 information to construct file handles.
4883 The file handle was then passed to a function to generate scrub items for each
4885 This simple algorithm leads to thread balancing problems in phase 3 if the
4889 been dispatching at the level of individual inodes, or, to constrain memory
4892 Thanks to Dave Chinner, bounded workqueues in userspace enable ``xfs_scrub`` to
4895 and it uses INUMBERS to find inode btree chunks.
4897 of items that can be waiting to be run.
4898 Each inode btree chunk found by the first workqueue's workers is queued to the
4900 creates a file handle, and passes it to a function to generate scrub items for
4904 This doesn't completely solve the balancing problem, but reduces it enough to
4905 move on to more pressing issues.
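In outline, the two-stage scheme works like the sketch below; the workqueue
and ioctl wrapper helpers are stand-ins for the real ``xfs_scrub`` code and
are not actual library interfaces.

.. code-block:: c

  #include <stdint.h>
  #include <stdlib.h>

  /*
   * Sketch only: stage 1 walks the inode btree of one AG with
   * XFS_IOC_INUMBERS and queues each inode chunk to a bounded workqueue;
   * stage 2 bulkstats the chunk and generates per-file scrub items.
   */
  static void scrub_inode_chunk(void *arg);

  static void queue_inumbers_chunks(struct bounded_workqueue *bulkstat_wq,
                                    int fsfd, uint32_t agno)
  {
      struct inode_chunk *chunk;   /* hypothetical: one inobt record */

      for (;;) {
          chunk = malloc(sizeof(*chunk));
          if (!chunk)
              break;
          if (next_inumbers_chunk(fsfd, agno, chunk) <= 0) {   /* hypothetical */
              free(chunk);
              break;
          }

          /*
           * The second queue is bounded, so this call blocks when too
           * many chunks are already waiting.  That backpressure keeps
           * memory use down and the bulkstat workers evenly loaded.
           */
          bounded_workqueue_add(bulkstat_wq, scrub_inode_chunk, chunk);
      }
  }

  /* Stage 2 worker: bulkstat one chunk and emit scrub items per file. */
  static void scrub_inode_chunk(void *arg)
  {
      struct inode_chunk *chunk = arg;

      /* XFS_IOC_BULKSTAT supplies enough data to build file handles... */
      free(chunk);
  }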
4922 functioning of the inode indices to find inodes to scan.
4923 Failed repairs are rescheduled to phase 4.
4924 Problems reported in any other space metadata are deferred to phase 4.
4925 Optimization opportunities are always deferred to phase 4, no matter their
4934 so infrequent that the ``struct xfs_scrub_metadata`` objects used to
4935 communicate with the kernel could also be used as the primary object to
4938 filesystem object, it became much more memory-efficient to track all eligible
4946 means that ``xfs_scrub`` must try to complete the repair work scheduled by
4950 1. Start a round of repair with a workqueue and enough workers to keep the CPUs
4955 i. Ask the kernel to repair everything listed in the repair item for a
4966 b. If any repairs were made, jump back to 1a to retry all the phase 2 items.
4970 i. Ask the kernel to repair everything listed in the repair item for a
4981 d. If any repairs were made, jump back to 1c to retry all the phase 3 items.
4983 2. If step 1 made any repair progress of any kind, jump back to step 1 to start
4986 3. If there are items left to repair, run them all serially one more time.
4988 to repair anything.
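The retry strategy can be summarized by the following sketch, with
hypothetical helpers standing in for the workqueue-driven repair machinery.

.. code-block:: c

  #include <stdbool.h>

  /*
   * Sketch only: phase 4 retry loop.  Each helper returns true if any
   * repair in the list made forward progress.
   */
  static void phase4_repair(struct scrub_ctx *ctx)
  {
      bool progress;

      do {
          progress = false;

          /* Retry the deferred phase 2 (space metadata) items until stuck. */
          while (repair_item_list(ctx, &ctx->phase2_repairs))   /* hypothetical */
              progress = true;

          /* Then retry the phase 3 (file metadata) items until stuck. */
          while (repair_item_list(ctx, &ctx->phase3_repairs))
              progress = true;
      } while (progress);

      /* Last chance: run whatever is left serially, one more time. */
      repair_item_list_serially(ctx, &ctx->phase2_repairs);   /* hypothetical */
      repair_item_list_serially(ctx, &ctx->phase3_repairs);
  }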
5013 phase 4, it moves on to phase 5, which checks for suspicious looking names in
5028 For this section, the term "naming domain" refers to any place where names are
5034 points to support international languages.
5040 To maximize its expressiveness, the Unicode standard defines separate control
5044 identically to "Latin Small Letter A" U+0061 "a".
5046 The standard also permits characters to be constructed in multiple ways --
5055 characters to alter the presentation of text.
5056 For example, the character "Right-to-Left Override" U+202E can trick some
5060 name will render identically to a name that does not have the zero width
5065 The kernel, in its indifference to upper level encoding schemes, permits this.
5066 Most filesystem drivers persist the byte sequence names that are given to them
5077 to identify names within a directory or within a file's extended attributes that
5081 All of these potential issues are reported to the system administrator during
5087 The system administrator can elect to initiate a media scan of all file data
5091 The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space map
5092 to find areas that are allocated to file data fork extents.
5094 they were data fork extents to reduce the command setup overhead.
5096 verification request is sent to the disk as a directio read of the raw block
5100 to narrow down the failure to the specific region of the media and recorded.
5102 mapping ioctl to map the recorded media errors back to metadata structures
5104 For media errors in blocks owned by files, parent pointers can be used to
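For illustration, the space map walk might look like the sketch below, which
uses the documented ``FS_IOC_GETFSMAP`` interface; the direct reads of the
block device and the single-block retry logic are left as comments, and error
handling is omitted.

.. code-block:: c

  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/fsmap.h>

  #define NR_RECS		128

  /* Sketch only: find file data extents, then verify the underlying media. */
  static void scan_file_data(int fsfd, int diskfd)
  {
      struct fsmap_head *head;
      struct fsmap *rec;
      unsigned int i;

      head = calloc(1, fsmap_sizeof(NR_RECS));
      if (!head)
          return;
      head->fmh_count = NR_RECS;

      /* Query everything: the low key is zero, the high key is all ones. */
      head->fmh_keys[1].fmr_device = UINT32_MAX;
      head->fmh_keys[1].fmr_flags = UINT32_MAX;
      head->fmh_keys[1].fmr_physical = UINT64_MAX;
      head->fmh_keys[1].fmr_owner = UINT64_MAX;
      head->fmh_keys[1].fmr_offset = UINT64_MAX;

      for (;;) {
          if (ioctl(fsfd, FS_IOC_GETFSMAP, head) < 0)
              break;

          for (i = 0; i < head->fmh_entries; i++) {
              rec = &head->fmh_recs[i];

              /* Skip metadata, attr fork, and unwritten extents. */
              if (rec->fmr_flags & (FMR_OF_SPECIAL_OWNER |
                                    FMR_OF_ATTR_FORK |
                                    FMR_OF_EXTENT_MAP |
                                    FMR_OF_PREALLOC))
                  continue;

              /*
               * Issue direct reads of @diskfd (opened with O_DIRECT)
               * covering [fmr_physical, fmr_physical + fmr_length); on
               * failure, retry in smaller chunks to localize the error.
               */
          }

          if (head->fmh_entries < head->fmh_count)
              break;
          /* Continue the query just after the last record returned. */
          head->fmh_keys[0] = head->fmh_recs[head->fmh_entries - 1];
      }
      free(head);
  }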
5115 make it easier for code readers to understand what has been built, for whom it
5117 Please feel free to contact the XFS mailing list with questions.
5122 As discussed earlier, a second frontend to the atomic extent swap mechanism is
5123 a new ioctl call that userspace programs can use to commit updates to files
5126 necessary refinements to online repair and lack of customer demand mean that
5132 As mentioned earlier, XFS has long had the ability to swap extents between
5133 files, which is used almost exclusively by ``xfs_fsr`` to defragment files.
5138 some log support to continue rewriting the owner fields of BMBT blocks during
5140 When the reverse mapping btree was later added to XFS, the only way to maintain
5141 the consistency of the fork mappings with the reverse mapping index was to
5142 develop an iterative mechanism that used deferred bmap and rmap operations to
5144 This mechanism is identical to steps 2-3 from the procedure above except for
5157 wants to update.
5158 Next, it opens a temporary file and calls the file clone operation to reflink
5160 Writes to the original file should instead be written to the temporary file.
5162 (``FIEXCHANGE_RANGE``) to exchange the file contents, thereby committing all
5163 of the updates to the original file, or none of them.
5168 only wants the commit to occur if the original file's contents have not
5170 To make this happen, the calling process snapshots the file modification and
5171 change timestamps of the original file before reflinking its data to the
5173 When the program is ready to commit the changes, it passes the timestamps
5174 into the kernel as arguments to the atomic extent swap system call.
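In userspace, that flow might look like the following sketch.
``FICLONE`` and ``O_TMPFILE`` are existing interfaces; the final exchange step
is shown as a hypothetical wrapper around the proposed ``FIEXCHANGE_RANGE``
call, since its request structure is not described here, and error handling is
omitted.

.. code-block:: c

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <sys/stat.h>
  #include <unistd.h>
  #include <linux/fs.h>           /* FICLONE */

  /* Hypothetical wrapper around the FIEXCHANGE_RANGE call described above. */
  int exchange_if_unchanged(int fd, int tmpfd, const struct stat *before);

  /* Sketch only: atomically replace the contents of @path. */
  static int update_file_atomically(const char *dirpath, const char *path)
  {
      struct stat before;
      int fd, tmpfd, ret;

      fd = open(path, O_RDWR);
      tmpfd = open(dirpath, O_TMPFILE | O_RDWR, 0600);

      /* Snapshot the timestamps, then reflink the old data into the temp file. */
      fstat(fd, &before);
      ioctl(tmpfd, FICLONE, fd);

      /* ...write the program's updates to tmpfd instead of fd... */

      /*
       * Commit: exchange the two files' contents only if @path still has
       * the timestamps recorded in @before; otherwise nothing changes.
       */
      ret = exchange_if_unchanged(fd, tmpfd, &before);

      close(tmpfd);
      close(fd);
      return ret;
  }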
5179 logical sector size matching the filesystem block size to force all writes
5180 to be aligned to the filesystem block size.
5181 Stage all writes to a temporary file, and when that is complete, call the
5182 atomic extent swap system call with a flag to indicate that holes in the
5193 systems to mitigate the effects of speculative execution attacks.
5194 This incentivizes program authors to make as few system calls as possible to
5197 With vectorized scrub, userspace pushes to the kernel the identity of a
5198 filesystem object, a list of scrub types to run against that object, and a
5202 dependency that cannot be satisfied due to a corruption, and tells userspace
5206 call to XFS.
5221 Userspace is allowed to send a fatal signal to the process which will cause
5222 ``xfs_scrub`` to exit when it reaches a good stopping point, but there's no way
5223 for userspace to provide a time budget to the kernel.
5224 Given that the scrub codebase has helpers to detect fatal signals, it shouldn't
5225 be too much work to allow userspace to specify a timeout for a scrub/repair
5227 However, most repair functions have the property that once they begin to touch
5234 Over the years, many XFS users have requested the creation of a program to
5239 The first piece the ``clearspace`` program needs is the ability to read the
5244 maps it to a file.
5246 The third piece is the ability to force an online repair.
5248 To clear all the metadata out of a portion of physical storage, clearspace
5249 uses the new fallocate map-freespace call to map any free space in that region
5250 to the space collector file.
5255 After each relocation, clearspace calls the "map free space" function again to
5258 To clear all the file data out of a portion of the physical storage, clearspace
5259 uses the FSMAP information to find relevant file data blocks.
5261 of the file to try to share the physical space with a dummy file.
5266 <swapext_if_unchanged>` feature) to change the target file's data extent
5271 There are further optimizations that could apply to the above algorithm.
5272 To clear a piece of physical storage that has a high sharing factor, it is
5273 strongly desirable to retain this sharing factor.
5274 In fact, these extents should be moved first to maximize sharing factor after
5276 To make this work smoothly, clearspace needs a new ioctl
5277 (``FS_IOC_GETREFCOUNTS``) to report reference count information to userspace.
5283 *Answer*: To move inode chunks, Dave Chinner constructed a prototype program
5289 filesystem to update directory entries.
5293 **Future Work Question**: Can static keys be used to minimize the cost of
5311 Removing the end of the filesystem ought to be a simple matter of evacuating
5313 to the shrink code.