Lines Matching full:buffers
82 * Returns whether the page has dirty or writeback buffers. If all the buffers
180 * But it's the page lock which protects the buffers. To get around this,
222 /* we might be here because some of the buffers on this page are in __find_get_block_slow()
225 * elsewhere, don't buffer_error if we had some unmapped buffers in __find_get_block_slow()
285 * If none of the buffers had errors and they are all in end_buffer_async_read()
385 * If a page's buffers are under async read-in (end_buffer_async_read
387 * control could lock one of the buffers after it has completed
388 * but while some of the other buffers have not completed. This
393 * The page comes unlocked when it has no locked buffer_async buffers
397 * the buffers.
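
To make the scheme concrete, here is a minimal sketch (the sketch_ name is hypothetical; the real end_buffer_async_read() in fs/buffer.c additionally serialises concurrent completions on the same page and records read errors, both omitted here):

#include <linux/buffer_head.h>

/*
 * Hedged sketch of the async-read completion described above: the page
 * comes unlocked only when the last async_read buffer finishes.
 */
static void sketch_end_buffer_async_read(struct buffer_head *bh, int uptodate)
{
        struct page *page = bh->b_page;
        struct buffer_head *tmp;

        if (uptodate)
                set_buffer_uptodate(bh);
        else
                clear_buffer_uptodate(bh);
        clear_buffer_async_read(bh);
        unlock_buffer(bh);

        /* the page stays locked while any async_read buffer remains */
        for (tmp = bh->b_this_page; tmp != bh; tmp = tmp->b_this_page)
                if (buffer_async_read(tmp))
                        return;         /* another buffer is still reading */

        SetPageUptodate(page);          /* assuming no buffer saw an error */
        unlock_page(page);
}
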
434 * management of a list of dependent buffers at ->i_mapping->private_list.
436 * Locking is a little subtle: try_to_free_buffers() will remove buffers
439 * at the time, not against the S_ISREG file which depends on those buffers.
441 * which backs the buffers. Which is different from the address_space
442 * against which the buffers are listed. So for a particular address_space,
447 * Which introduces a requirement: all buffers on an address_space's
450 * address_spaces which do not place buffers at ->private_list via these
461 * mark_buffer_dirty_fsync() to clearly define why those buffers are being
468 * that buffers are taken *off* the old inode's list when they are freed
495 * you dirty the buffers, and then use osync_inode_buffers to wait for
496 * completion. Any other dirty buffers which are not yet queued for
531 * sync_mapping_buffers - write out & wait upon a mapping's "associated" buffers
532 * @mapping: the mapping which wants those buffers written
534 * Starts I/O against the buffers at mapping->private_list, and waits upon
538 * @mapping is a file or directory which needs those buffers to be written for
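
For illustration, a filesystem's ->fsync can lean on this helper roughly as follows; example_fsync is a hypothetical sketch with minimal error handling:

#include <linux/buffer_head.h>
#include <linux/fs.h>

/*
 * Hypothetical ->fsync built on sync_mapping_buffers(), which writes
 * out and waits upon mapping->private_list as documented above.
 */
static int example_fsync(struct file *file, loff_t start, loff_t end,
                         int datasync)
{
        struct inode *inode = file_inode(file);
        int err;

        err = filemap_write_and_wait_range(inode->i_mapping, start, end);
        if (err)
                return err;
        return sync_mapping_buffers(inode->i_mapping);
}
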
622 * If the page has buffers, the uptodate buffers are set dirty, to preserve
623 * dirty-state coherency between the page and the buffers. If the page does
624 * not have buffers then when they are later attached they will all be set
627 * The buffers are dirtied before the page is dirtied. There's a small race
630 * before the buffers, a concurrent writepage caller could clear the page dirty
631 * bit, see a bunch of clean buffers and we'd end up with dirty buffers/clean
635 * page's buffer list. Also use this to protect against clean buffers being
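
The ordering reduces to something like this sketch (hypothetical sketch_ helper; the real __set_page_dirty_buffers() folds the page-dirtying and its writeback accounting under the same private_lock critical section):

#include <linux/buffer_head.h>

/*
 * Sketch: dirty the uptodate buffers under mapping->private_lock first,
 * then dirty the page, so a concurrent writepage cannot be left with
 * dirty buffers under a clean page.
 */
static void sketch_set_page_dirty_buffers(struct page *page)
{
        struct address_space *mapping = page->mapping;

        spin_lock(&mapping->private_lock);
        if (page_has_buffers(page)) {
                struct buffer_head *head = page_buffers(page);
                struct buffer_head *bh = head;

                do {
                        if (buffer_uptodate(bh))
                                set_buffer_dirty(bh);
                        bh = bh->b_this_page;
                } while (bh != head);
        }
        spin_unlock(&mapping->private_lock);

        __set_page_dirty_nobuffers(page); /* the page bit, after the buffers */
}
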
680 * Write out and wait upon a list of buffers.
683 * initially dirty buffers get waited on, but that any subsequently
684 * dirtied buffers don't. After all, we don't want fsync to last
687 * Do this in two main stages: first we copy dirty buffers to a
695 * the osync code to catch these locked, dirty buffers without requeuing
696 * any newly dirty buffers for write.
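
In sketch form, the two stages look roughly like this (hypothetical helper; requeueing of re-dirtied buffers and the b_assoc_map bookkeeping of the real fsync_buffers_list() are left out):

#include <linux/buffer_head.h>

/*
 * Stage one moves the currently-dirty buffers to a private list and
 * starts I/O; stage two waits on each. Buffers dirtied after stage one
 * are deliberately not picked up.
 */
static int sketch_fsync_buffers_list(spinlock_t *lock, struct list_head *list)
{
        struct buffer_head *bh;
        LIST_HEAD(tmp);
        int err = 0;

        spin_lock(lock);
        while (!list_empty(list)) {             /* stage 1: move and submit */
                bh = list_first_entry(list, struct buffer_head,
                                      b_assoc_buffers);
                list_move(&bh->b_assoc_buffers, &tmp);
                if (buffer_dirty(bh)) {
                        get_bh(bh);
                        spin_unlock(lock);
                        write_dirty_buffer(bh, 0); /* lock, clear dirty, submit */
                        brelse(bh);
                        spin_lock(lock);
                }
        }

        while (!list_empty(&tmp)) {             /* stage 2: wait */
                bh = list_first_entry(&tmp, struct buffer_head,
                                      b_assoc_buffers);
                list_del_init(&bh->b_assoc_buffers);
                get_bh(bh);
                spin_unlock(lock);
                wait_on_buffer(bh);
                if (!buffer_uptodate(bh))
                        err = -EIO;
                brelse(bh);
                spin_lock(lock);
        }
        spin_unlock(lock);
        return err;
}
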
778 * Invalidate any and all dirty buffers on a given inode. We are
780 * done a sync(). Just drop the buffers from the inode list.
783 * assumes that all the buffers are against the blockdev. Not true
802 * Remove any clean buffers from the inode's buffer list. This is called
803 * when we're trying to free the inode itself. Those buffers can pin it.
805 * Returns true if all buffers were removed.
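
A hedged sketch of that removal (the real remove_inode_buffers() also clears b_assoc_map and checks inode_has_buffers() first):

#include <linux/buffer_head.h>

/*
 * Drop the clean buffers from ->private_list; a dirty buffer means the
 * inode cannot be released yet, so report failure.
 */
static int sketch_remove_inode_buffers(struct inode *inode)
{
        struct address_space *mapping = inode->i_mapping;
        struct address_space *buffer_mapping = mapping->private_data;
        struct buffer_head *bh, *next;
        int ret = 1;

        spin_lock(&buffer_mapping->private_lock);
        list_for_each_entry_safe(bh, next, &mapping->private_list,
                                 b_assoc_buffers) {
                if (buffer_dirty(bh)) {
                        ret = 0;        /* dirty buffers must stay listed */
                        break;
                }
                list_del_init(&bh->b_assoc_buffers);
        }
        spin_unlock(&buffer_mapping->private_lock);
        return ret;
}
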
831 * Create the appropriate buffers when given a page for data area and
833 * follow the buffers created. Return NULL if unable to create more
834 * buffers.
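
A hedged usage sketch, assuming the three-argument form of alloc_page_buffers(); note the chain is NULL-terminated until it is attached to the page and made circular:

#include <linux/buffer_head.h>

/*
 * Walk the freshly created buffers via b_this_page. A real caller would
 * attach them to the page or release them with free_buffer_head().
 */
static void sketch_walk_new_buffers(struct page *page, unsigned long blocksize)
{
        struct buffer_head *head = alloc_page_buffers(page, blocksize, true);
        struct buffer_head *bh;

        if (!head)
                return;                 /* unable to create buffers */
        for (bh = head; bh; bh = bh->b_this_page)
                pr_debug("buffer: %zu bytes at %p\n", bh->b_size, bh->b_data);
}
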
916 * Initialise the state of a blockdev page's buffers.
991 * Allocate some buffers for this page in grow_dev_page()
996 * Link the page to the buffers and initialise them. Take the in grow_dev_page()
1014 * Create buffers for the specified block device block's page. If
1015 * that page was dirty, the buffers are set dirty also.
1042 /* Create a page with buffers of the proper size. */ in grow_buffers()
1077 * The relationship between dirty buffers and dirty pages:
1079 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1082 * At all times, the dirtiness of the buffers represents the dirtiness of
1083 * subsections of the page. If the page has buffers, the page dirty bit is
1086 * When a page is set dirty in its entirety, all its buffers are marked dirty
1087 * (if the page has buffers).
1090 * buffers are not.
1092 * Also: when blockdev buffers are explicitly read with bread(), they
1094 * uptodate - even if all of its buffers are uptodate. A subsequent
1096 * buffers, will set the page uptodate and will perform no I/O.
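
Both rules are visible at the call sites; a hedged sketch:

#include <linux/buffer_head.h>

/*
 * mark_buffer_dirty() propagates to the page's dirty bit, while
 * __bread() leaves the page's uptodate bit alone even though the
 * buffer itself becomes uptodate.
 */
static void sketch_dirty_vs_uptodate(struct block_device *bdev,
                                     sector_t block, unsigned size)
{
        struct buffer_head *bh = __bread(bdev, block, size);

        if (!bh)
                return;
        WARN_ON(!buffer_uptodate(bh));  /* the buffer is uptodate ... */
        /* ... but bh->b_page need not be PageUptodate() */

        mark_buffer_dirty(bh);          /* also sets the page's dirty bit */
        brelse(bh);
}
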
1165 * Decrement a buffer_head's reference count. If all buffers against a page
1167 * and unlocked then try_to_free_buffers() may strip the buffers from the page
1168 * in preparation for freeing it (sometimes, rarely, buffers are removed from
1169 * a page but it ends up not being freed, and buffers may later be reattached).
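
The refcounting behind this is tiny; essentially the following (a sketch of the get_bh()/put_bh() helpers from <linux/buffer_head.h>):

#include <linux/buffer_head.h>

static inline void sketch_get_bh(struct buffer_head *bh)
{
        atomic_inc(&bh->b_count);
}

static inline void sketch_put_bh(struct buffer_head *bh)
{
        smp_mb__before_atomic();        /* order last use before the drop */
        atomic_dec(&bh->b_count);
}
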
1220 * The bhs[] array is sorted - newest buffer is at bhs[0]. Buffers have their
1495 * block_invalidatepage() does not have to release all buffers, but it must
1539 * We release buffers only if the entire page is being invalidated. in block_invalidatepage()
1552 * We attach and possibly dirty the buffers atomically wrt
1587 * clean_bdev_aliases: clean a range of buffers in block device
1588 * @bdev: Block device to clean buffers in
1602 * writeout I/O going on against recently-freed buffers. We don't wait on that
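
A hedged sketch of the typical call pattern after block allocation:

#include <linux/buffer_head.h>

/*
 * After allocating blocks that may still have stale aliases in the
 * block device's page cache, invalidate those aliases so old writeback
 * cannot land on the freshly allocated blocks.
 */
static void sketch_alloc_and_clean(struct block_device *bdev,
                                   sector_t first_block, sector_t nr)
{
        /* ... allocate [first_block, first_block + nr) on disk ... */
        clean_bdev_aliases(bdev, first_block, nr);
        /* per-buffer variant: clean_bdev_bh_alias(bh) */
}
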
1628 * to pin buffers here since we can afford to sleep and in clean_bdev_aliases()
1697 * While block_write_full_page is writing back the dirty buffers under
1698 * the page lock, whoever dirtied the buffers may decide to clean them
1729 * here, and the (potentially unmapped) buffers may become dirty at in __block_write_full_page()
1733 * Buffers outside i_size may be dirtied by __set_page_dirty_buffers; in __block_write_full_page()
1745 * Get all the dirty buffers mapped to disk addresses and in __block_write_full_page()
1751 * mapped buffers outside i_size will occur, because in __block_write_full_page()
1801 * The page and its buffers are protected by PageWriteback(), so we can in __block_write_full_page()
1822 * The page was marked dirty, but the buffers were in __block_write_full_page()
1843 /* Recovery: lock and submit the mapped buffers */ in __block_write_full_page()
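
The stages referenced above reduce to roughly the following sketch (hypothetical helper; i_size handling, error recovery and PageWriteback() are omitted):

#include <linux/buffer_head.h>

/*
 * Stage one maps the dirty buffers to disk addresses; stage two locks
 * them and moves dirty -> async_write. Stage three (not shown) submits
 * the async_write buffers; the last completion ends page writeback.
 */
static void sketch_write_full_page(struct inode *inode, struct page *page,
                                   sector_t block, get_block_t *get_block)
{
        struct buffer_head *head = page_buffers(page);
        struct buffer_head *bh = head;

        do {    /* stage 1: map */
                if (buffer_dirty(bh) && !buffer_mapped(bh))
                        get_block(inode, block, bh, 1);
                block++;
                bh = bh->b_this_page;
        } while (bh != head);

        do {    /* stage 2: lock, clear dirty, mark for async write */
                lock_buffer(bh);
                if (test_clear_buffer_dirty(bh))
                        mark_buffer_async_write(bh);
                else
                        unlock_buffer(bh);
                bh = bh->b_this_page;
        } while (bh != head);
}
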
1877 * If a page has any new buffers, zero them out here, and mark them uptodate
2093 * If this is a partial write which happened to make all buffers in __block_commit_write()
2143 * The buffers that were written will now be uptodate, so we in block_write_end()
2208 * block_is_partially_uptodate checks whether buffers within a page are
2211 * Returns true if all buffers which correspond to a file portion
2313 * All buffers are uptodate - we can set the page uptodate in block_read_full_page()
2322 /* Stage two: lock the buffers */ in block_read_full_page()
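
In sketch form, with arr[] holding the not-uptodate buffers collected by stage one (hypothetical helper):

#include <linux/buffer_head.h>

/*
 * If stage one found nothing to read, the page can be declared
 * uptodate at once; otherwise stage two locks each buffer and marks it
 * async_read before submission, and end_buffer_async_read() finishes.
 */
static void sketch_read_stages(struct page *page, struct buffer_head **arr,
                               int nr)
{
        int i;

        if (nr == 0) {          /* all buffers were already uptodate */
                SetPageUptodate(page);
                unlock_page(page);
                return;
        }
        for (i = 0; i < nr; i++) {      /* stage two: lock the buffers */
                lock_buffer(arr[i]);
                mark_buffer_async_read(arr[i]);
        }
        /* then submit each one; completion handles page unlock */
}
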
2549 * Attach the singly-linked list of buffers created by nobh_write_begin, to
2617 * Allocate buffers so that we can keep track of state, and potentially in nobh_write_begin()
2677 * The page is locked, so these buffers are protected from in nobh_write_begin()
2702 * Buffers need to be attached to the page at this point, otherwise in nobh_write_begin()
3079 * request. Further, it marks as clean the buffers that are processed for
3087 * All of the buffers must be for the same device, and must also be a
3173 * try_to_free_buffers() checks if all the buffers on this particular page
3179 * If the page is dirty but all the buffers are clean then we need to
3181 * may be against a block device, and a later reattachment of buffers
3182 * to a dirty page will set *all* buffers dirty. Which would corrupt
3185 * The same applies to regular filesystem pages: if all the buffers are
3244 * If the filesystem writes its buffers by hand (eg ext3) in try_to_free_buffers()
3245 * then we can have clean buffers against a dirty page. We in try_to_free_buffers()
3250 * the page's buffers clean. We discover that here and clean in try_to_free_buffers()
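
The busy check at the heart of this reduces to a sketch like the following (the test mirrors buffer_busy(), a local helper in fs/buffer.c):

#include <linux/buffer_head.h>

/*
 * A page's buffers may only be stripped when none of them is
 * referenced, dirty, or locked for I/O.
 */
static bool sketch_buffers_freeable(struct page *page)
{
        struct buffer_head *head = page_buffers(page);
        struct buffer_head *bh = head;

        do {
                /* same test as buffer_busy() in fs/buffer.c */
                if (atomic_read(&bh->b_count) ||
                    (bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock))))
                        return false;
                bh = bh->b_this_page;
        } while (bh != head);
        return true;
}
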