Lines matching full:buffers (fs/buffer.c)
84 * Returns if the folio has dirty or writeback buffers. If all the buffers
86 * any of the buffers are locked, it is assumed they are locked for IO.
181 * But it's the page lock which protects the buffers. To get around this,
223 /* we might be here because some of the buffers on this page are in __find_get_block_slow()
226 * elsewhere, don't buffer_error if we had some unmapped buffers in __find_get_block_slow()
420 * If a page's buffers are under async read-in (end_buffer_async_read
422 * control could lock one of the buffers after it has completed
423 * but while some of the other buffers have not completed. This
428 * The page comes unlocked when it has no locked buffer_async buffers
432 * the buffers.
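The completion rule quoted above (the page comes unlocked only when its last async buffer finishes, and is marked uptodate only if every buffer read succeeded) can be sketched as a small user-space model. This is not kernel code; all names (`model_page`, `model_end_buffer_async_read`, etc.) are hypothetical stand-ins for the accounting that `end_buffer_async_read()` performs.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical user-space model of per-page async-read completion. */
struct model_buffer {
    bool async_read;   /* read still in flight */
    bool uptodate;     /* read completed successfully */
};

struct model_page {
    struct model_buffer bufs[4];
    int nbufs;
    bool locked;
    bool uptodate;
};

/* Called once per completed buffer; mirrors the rule described above. */
static void model_end_buffer_async_read(struct model_page *page,
                                        int i, bool ok)
{
    page->bufs[i].async_read = false;
    page->bufs[i].uptodate = ok;

    /* If any buffer is still under async read, the page stays locked. */
    for (int j = 0; j < page->nbufs; j++)
        if (page->bufs[j].async_read)
            return;

    /* Last completion: page is uptodate only if all buffers are. */
    page->uptodate = true;
    for (int j = 0; j < page->nbufs; j++)
        if (!page->bufs[j].uptodate)
            page->uptodate = false;
    page->locked = false;
}
```

The point of the model is the ordering hazard the comment describes: intermediate completions return early and leave the page locked; only the final one flips the page state.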
469 * management of a list of dependent buffers at ->i_mapping->i_private_list.
471 * Locking is a little subtle: try_to_free_buffers() will remove buffers
474 * at the time, not against the S_ISREG file which depends on those buffers.
476 * which backs the buffers. Which is different from the address_space
477 * against which the buffers are listed. So for a particular address_space,
482 * Which introduces a requirement: all buffers on an address_space's
485 * address_spaces which do not place buffers at ->i_private_list via these
496 * mark_buffer_dirty_inode() to clearly define why those buffers are being
503 * that buffers are taken *off* the old inode's list when they are freed
530 * as you dirty the buffers, and then use osync_inode_buffers to wait for
531 * completion. Any other dirty buffers which are not yet queued for
560 * sync_mapping_buffers - write out & wait upon a mapping's "associated" buffers
561 * @mapping: the mapping which wants those buffers written
563 * Starts I/O against the buffers at mapping->i_private_list, and waits upon
567 * @mapping is a file or directory which needs those buffers to be written for
592 * filesystems which track all non-inode metadata in the buffers list
617 /* check and advance again to catch errors after syncing out buffers */ in generic_buffers_fsync_noflush()
635 * filesystems which track all non-inode metadata in the buffers list
696 * If the page has buffers, the uptodate buffers are set dirty, to preserve
697 * dirty-state coherency between the page and the buffers. If the page does
698 * not have buffers then when they are later attached they will all be set
701 * The buffers are dirtied before the page is dirtied. There's a small race
704 * before the buffers, a concurrent writepage caller could clear the page dirty
705 * bit, see a bunch of clean buffers and we'd end up with dirty buffers/clean
709 * page's buffer list. Also use this to protect against clean buffers being
751 * Write out and wait upon a list of buffers.
754 * initially dirty buffers get waited on, but that any subsequently
755 * dirtied buffers don't. After all, we don't want fsync to last
758 * Do this in two main stages: first we copy dirty buffers to a
766 * the osync code to catch these locked, dirty buffers without requeuing
767 * any newly dirty buffers for write.
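The two-stage scheme quoted above (copy the currently-dirty buffers to a temporary list and start I/O, then wait only on those, so buffers dirtied afterwards are not waited on) can be modeled in a few lines. This is an illustrative user-space sketch, not the kernel's `fsync_buffers_list()`; the index array stands in for the temporary list.

```c
#include <assert.h>
#include <stdbool.h>

struct mbuf { bool dirty, under_io; };

/* Stage one: capture currently-dirty buffers into @tmp and start I/O. */
static int stage_one(struct mbuf *bufs, int n, int *tmp)
{
    int k = 0;
    for (int i = 0; i < n; i++) {
        if (bufs[i].dirty) {
            bufs[i].dirty = false;
            bufs[i].under_io = true;   /* I/O submitted */
            tmp[k++] = i;
        }
    }
    return k;
}

/* Stage two: wait only on the buffers captured in stage one. */
static int stage_two(struct mbuf *bufs, int *tmp, int k)
{
    for (int i = 0; i < k; i++)
        bufs[tmp[i]].under_io = false; /* "wait" completes the I/O */
    return k;
}
```

Because stage two iterates only the captured set, a buffer dirtied between the stages is simply left dirty for a later writeback pass, which is exactly the "fsync shouldn't last forever" property the comment argues for.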
849 * Invalidate any and all dirty buffers on a given inode. We are
851 * done a sync(). Just drop the buffers from the inode list.
854 * assumes that all the buffers are against the blockdev. Not true
873 * Remove any clean buffers from the inode's buffer list. This is called
874 * when we're trying to free the inode itself. Those buffers can pin it.
876 * Returns true if all buffers were removed.
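The rule at lines 873-876 (drop clean buffers from the inode's list; dirty ones stay and keep pinning the inode; report whether everything came off) can be sketched as a filter over a flag array. Purely illustrative; `assoc_buf` and `model_remove_inode_buffers` are hypothetical names, not kernel API.

```c
#include <assert.h>
#include <stdbool.h>

struct assoc_buf { bool dirty, on_list; };

/* Drop clean buffers from the "inode list"; return true iff none remain. */
static bool model_remove_inode_buffers(struct assoc_buf *bufs, int n)
{
    bool all_removed = true;
    for (int i = 0; i < n; i++) {
        if (!bufs[i].on_list)
            continue;
        if (!bufs[i].dirty)
            bufs[i].on_list = false;   /* clean: drop it */
        else
            all_removed = false;       /* dirty: it stays, pins the inode */
    }
    return all_removed;
}
```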
902 * Create the appropriate buffers when given a folio for data area and
904 * follow the buffers created. Return NULL if unable to create more
905 * buffers.
994 * Initialise the state of a blockdev folio's buffers.
1056 * writeback, or buffers may be cleaned. This should not in grow_dev_folio()
1057 * happen very often; maybe we have old buffers attached to in grow_dev_folio()
1072 * Link the folio to the buffers and initialise them. Take the in grow_dev_folio()
1087 * Create buffers for the specified block device block's folio. If
1088 * that folio was dirty, the buffers are set dirty also. Returns false
1107 /* Create a folio with the proper size buffers */ in grow_buffers()
1140 * The relationship between dirty buffers and dirty pages:
1142 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1145 * At all times, the dirtiness of the buffers represents the dirtiness of
1146 * subsections of the page. If the page has buffers, the page dirty bit is
1149 * When a page is set dirty in its entirety, all its buffers are marked dirty
1150 * (if the page has buffers).
1153 * buffers are not.
1155 * Also. When blockdev buffers are explicitly read with bread(), they
1157 * uptodate - even if all of its buffers are uptodate. A subsequent
1159 * buffers, will set the folio uptodate and will perform no I/O.
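The dirty-bit relationship spelled out at lines 1140-1153 is asymmetric: dirtying any one buffer sets the page's dirty bit, and dirtying the page in its entirety dirties every buffer, but cleaning the page does not clean the buffers. A minimal model, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>

struct dpage {
    bool dirty;
    bool buf_dirty[4];
    int nbufs;
};

static void model_mark_buffer_dirty(struct dpage *p, int i)
{
    p->buf_dirty[i] = true;
    p->dirty = true;            /* page dirty bit follows any dirty buffer */
}

static void model_set_page_dirty(struct dpage *p)
{
    p->dirty = true;
    for (int i = 0; i < p->nbufs; i++)
        p->buf_dirty[i] = true; /* whole-page dirty marks every buffer */
}
```

The model deliberately has no "clean the buffers when the page is cleaned" path, matching the comment's "When a page is set clean, its dirty buffers are not" behavior.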
1223 * Decrement a buffer_head's reference count. If all buffers against a page
1225 * and unlocked then try_to_free_buffers() may strip the buffers from the page
1226 * in preparation for freeing it (sometimes, rarely, buffers are removed from
1227 * a page but it ends up not being freed, and buffers may later be reattached).
1278 * The bhs[] array is sorted - newest buffer is at bhs[0]. Buffers have their
1584 * block_invalidate_folio() does not have to release all buffers, but it must
1628 * We release buffers only if the entire folio is being invalidated. in block_invalidate_folio()
1640 * We attach and possibly dirty the buffers atomically wrt
1678 * clean_bdev_aliases: clean a range of buffers in block device
1679 * @bdev: Block device to clean buffers in
1693 * writeout I/O going on against recently-freed buffers. We don't wait on that
1719 * to pin buffers here since we can afford to sleep and in clean_bdev_aliases()
1780 * While block_write_full_folio is writing back the dirty buffers under
1781 * the page lock, whoever dirtied the buffers may decide to clean them
1811 * here, and the (potentially unmapped) buffers may become dirty at in __block_write_full_folio()
1815 * Buffers outside i_size may be dirtied by block_dirty_folio; in __block_write_full_folio()
1826 * Get all the dirty buffers mapped to disk addresses and in __block_write_full_folio()
1832 * mapped buffers outside i_size will occur, because in __block_write_full_folio()
1883 * The folio and its buffers are protected by the writeback flag, in __block_write_full_folio()
1903 * The folio was marked dirty, but the buffers were in __block_write_full_folio()
1924 /* Recovery: lock and submit the mapped buffers */ in __block_write_full_folio()
1958 * If a folio has any new buffers, zero them out here, and mark them uptodate
2185 * If this is a partial write which happened to make all buffers in __block_commit_write()
2232 * The buffers that were written will now be uptodate, so in block_write_end()
2297 * block_is_partially_uptodate checks whether buffers within a folio are
2300 * Returns true if all buffers which correspond to the specified part
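The check described at lines 2297-2300 amounts to: a byte range of a folio is "partially uptodate" iff every buffer the range overlaps is uptodate. A sketch under assumed inputs (`blocksize` and a per-buffer `uptodate[]` array are hypothetical parameters, not the kernel signature):

```c
#include <assert.h>
#include <stdbool.h>

/* True iff every buffer overlapping [from, from + count) is uptodate. */
static bool model_partially_uptodate(const bool *uptodate, int blocksize,
                                     int from, int count)
{
    if (count <= 0)
        return true;
    int first = from / blocksize;
    int last = (from + count - 1) / blocksize;
    for (int i = first; i <= last; i++)
        if (!uptodate[i])
            return false;
    return true;
}
```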
2410 * All buffers are uptodate or get_block() returned an in block_read_full_folio()
2417 /* Stage two: lock the buffers */ in block_read_full_folio()
2861 * try_to_free_buffers() checks if all the buffers on this particular folio
2867 * If the folio is dirty but all the buffers are clean then we need to
2869 * may be against a block device, and a later reattachment of buffers
2870 * to a dirty folio will set *all* buffers dirty. Which would corrupt
2873 * The same applies to regular filesystem folios: if all the buffers are
2932 * If the filesystem writes its buffers by hand (eg ext3) in try_to_free_buffers()
2933 * then we can have clean buffers against a dirty folio. We in try_to_free_buffers()
2938 * the folio's buffers clean. We discover that here and clean in try_to_free_buffers()
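The freeing rule at lines 2861-2870 boils down to: the buffers may be stripped from a folio only when every one of them is clean, unlocked, and not under I/O; a single busy buffer vetoes the whole operation. A hypothetical user-space model of that all-or-nothing check (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

struct fbuf { bool dirty, locked, under_io; };

/* True iff it is safe to strip all buffers from the folio. */
static bool model_try_to_free_buffers(const struct fbuf *bufs, int n)
{
    for (int i = 0; i < n; i++)
        if (bufs[i].dirty || bufs[i].locked || bufs[i].under_io)
            return false;       /* busy buffer: folio keeps its buffers */
    return true;                /* all clean and idle: safe to strip */
}
```

The all-clean requirement is what protects the dirty-state coherency discussed above: freeing buffers under a dirty folio and later reattaching them would mark every new buffer dirty, corrupting block-device state.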
3081 * __bh_read_batch - Submit read for a batch of unlocked buffers