Searched refs: must_insert_reserved (Results 1 – 4 of 4) sorted by relevance
655  if (update->must_insert_reserved) {                              in update_existing_head_ref()
660  * the must_insert_reserved flag set.                             in update_existing_head_ref()
663  existing->must_insert_reserved = update->must_insert_reserved;   in update_existing_head_ref()
731  bool must_insert_reserved = false;                               in init_delayed_ref_head() (local)
756  * ref->must_insert_reserved is the flag used to record that …    in init_delayed_ref_head()
759  * Once we record must_insert_reserved, switch the action to …    in init_delayed_ref_head()
763  must_insert_reserved = true;                                     in init_delayed_ref_head()
772  head_ref->must_insert_reserved = must_insert_reserved;           in init_delayed_ref_head()
[all...]
160  * The root that triggered the allocation when must_insert_reserved is …
166  * Track reserved bytes when setting must_insert_reserved. On success …
177  * … until the delayed ref is processed. must_insert_reserved is …
186  bool must_insert_reserved;                                       member
1544  * Don't check must_insert_reserved, as this is called from contexts …   in free_head_ref_squota_rsv()
1803  if (head->must_insert_reserved) {                                       in cleanup_extent_op()
1849  /* must_insert_reserved can be set only if we didn't run the head ref. */  in btrfs_cleanup_ref_head_accounting()
1850  if (head->must_insert_reserved)                                         in btrfs_cleanup_ref_head_accounting()
1892  if (head->must_insert_reserved) {                                       in cleanup_ref_head()
1919  bool must_insert_reserved;                                              in btrfs_run_delayed_refs_for_head() (local)
1956  * Record the must_insert_reserved flag before we drop the …             in btrfs_run_delayed_refs_for_head()
1959  must_insert_reserved = locked_ref->must_insert_reserved;                in btrfs_run_delayed_refs_for_head()
1966  locked_ref->must_insert_reserved …                                      in btrfs_run_delayed_refs_for_head()
[all...]
97  if (head->must_insert_reserved != check->must_insert) {          in validate_ref_head()
99  head->must_insert_reserved, check->must_insert);                 in validate_ref_head()