Lines Matching full:we (fs/xfs/xfs_log.c)

78  * We need to make sure the buffer pointer returned is naturally aligned for the
79 * biggest basic data type we put into it. We have already accounted for this
82 * However, this padding does not get written into the log, and hence we have to
87 * We also add space for the xlog_op_header that describes this region in the
88 * log. This prepends the data region we return to the caller to copy their data
90 * is not 8 byte aligned, we have to be careful to ensure that we align the
91 * start of the buffer such that the region we return to the caller is 8 byte
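The alignment rule described in these comments (source lines 78-91) boils down to rounding the returned pointer up to an 8-byte boundary after the prepended opheader. A minimal standalone sketch; the function and parameter names here are hypothetical, only the power-of-two round-up idiom is the point:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: skip the prepended opheader, then round the data
 * pointer up so the region handed back to the caller is 8-byte aligned
 * for the largest basic type stored in it. */
static inline void *
align_log_region(void *ptr, size_t op_hdr_len)
{
	uintptr_t p = (uintptr_t)ptr + op_hdr_len;

	return (void *)((p + 7) & ~(uintptr_t)7);
}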
172 * we have overrun available reservation space, return 0. The memory barrier
290 * path. Hence any lock will be globally hot if we take it unconditionally on
293 * As tickets are only ever moved on and off head->waiters under head->lock, we
294 * only need to take that lock if we are going to add the ticket to the queue
295 * and sleep. We can avoid taking the lock if the ticket was never added to
296 * head->waiters because the t_queue list head will be empty and we hold the
313 * logspace before us. Wake up the first waiters, if we do not wake in xlog_grant_head_check()
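The lock-avoidance rule above can be sketched as follows; assume a simplified ticket whose t_queue list head starts out empty, as the comments describe (the surrounding wakeup logic is elided):

/* Sketch: head->lock is only needed if the ticket was ever queued on
 * head->waiters; an empty t_queue proves it never was. */
if (list_empty_careful(&tic->t_queue))
	return;				/* never slept, nothing to remove */

spin_lock(&head->lock);
if (!list_empty(&tic->t_queue))
	list_del_init(&tic->t_queue);	/* dequeue under the lock */
spin_unlock(&head->lock);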
375 * This is a new transaction on the ticket, so we need to change the in xfs_log_regrant()
377 * the log. Just add one to the existing tid so that we can see chains in xfs_log_regrant()
398 * If we are failing, make sure the ticket doesn't have any current in xfs_log_regrant()
399 * reservations. We don't want to add this back when the ticket/ in xfs_log_regrant()
411 * When writes happen to the on-disk log, we don't subtract the length of the
413 * reservation, we prevent over-allocation problems.
449 * If we are failing, make sure the ticket doesn't have any current in xfs_log_reserve()
450 * reservations. We don't want to add this back when the ticket/ in xfs_log_reserve()
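The failure path referred to in both xfs_log_regrant() and xfs_log_reserve() reduces to zeroing the ticket's accounting so that a later cancel/ungrant hands nothing back twice; a sketch using the ticket fields named in the source:

/* Sketch: a failing (re)grant must not leave reservation on the ticket,
 * otherwise cancelling the ticket/transaction would release it again. */
tic->t_curr_res = 0;
tic->t_cnt = 0;		/* ungrant would release t_unit_res * t_cnt */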
460 * space waiters so they can process the newly set shutdown state. We really
461 * don't care what order we process callbacks here because the log is shut down
462 * and so state cannot change on disk anymore. However, we cannot wake waiters
463 * until the callbacks have been processed because we may be in unmount and
464 * we must ensure that all AIL operations the callbacks perform have completed
465 * before we tear down the AIL.
467 * We avoid processing actively referenced iclogs so that we don't run callbacks
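The ordering constraint above (all callbacks before any wakeup) can be sketched like this; the iclog ring walk, the ic_callbacks list and the l_flush_wait waitqueue are assumptions based on the surrounding comments, not the exact shutdown code:

/* Sketch: run every iclog's committed callbacks first, so all AIL
 * operations they perform complete before unmount can tear the AIL
 * down, and only then wake the log-space and force waiters. */
iclog = log->l_iclog;
do {
	xlog_cil_process_committed(&iclog->ic_callbacks);
	iclog = iclog->ic_next;
} while (iclog != log->l_iclog);
wake_up_all(&log->l_flush_wait);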
502 * If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the
505 * within the iclog. We need to ensure that the log tail does not move beyond
514 * the iclog will get zeroed on activation of the iclog after sync, so we
531 * of the tail LSN into the iclog so we guarantee that the log tail does in xlog_state_release_iclog()
532 * not move between the first time we know that the iclog needs to be in xlog_state_release_iclog()
533 * made stable and when we eventually submit it. in xlog_state_release_iclog()
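A sketch of that single-shot tail capture, using the field names the comments mention (treat them as assumptions about the source):

/* Sketch: record the tail LSN into the iclog header exactly once, at
 * the point we learn the iclog must be made stable, so the tail cannot
 * move between that decision and eventual submission. */
if ((iclog->ic_flags & XLOG_ICL_NEED_FUA) &&
    !iclog->ic_header.h_tail_lsn)
	iclog->ic_header.h_tail_lsn =
			cpu_to_be64(atomic64_read(&log->l_tail_lsn));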
614 * Now that we have set up the log and its internal geometry in xfs_log_mount()
615 * parameters, we can validate the given log space and drop a critical in xfs_log_mount()
619 * the other log geometry constraints, so we don't have to check those in xfs_log_mount()
622 * Note: For v4 filesystems, we can't just reject the mount if the in xfs_log_mount()
627 * We can, however, reject mounts for V5 format filesystems, as the in xfs_log_mount()
653 * Initialize the AIL now that we have a log. in xfs_log_mount()
685 * Now the log has been fully initialised and we know where our in xfs_log_mount()
686 * space grant counters are, we can initialise the permanent ticket in xfs_log_mount()
707 * If we finish recovery successfully, start the background log work. If we are
708 * not doing recovery, then we have a RO filesystem and we don't need to start
724 * During the second phase of log recovery, we need iget and in xfs_log_mount_finish()
727 * of inodes before we're done replaying log items on those in xfs_log_mount_finish()
729 * so that we don't leak the quota inodes if subsequent mount in xfs_log_mount_finish()
732 * We let all inodes involved in redo item processing end up on in xfs_log_mount_finish()
733 * the LRU instead of being evicted immediately so that if we do in xfs_log_mount_finish()
736 * in log recovery failure. We have to evict the unreferenced in xfs_log_mount_finish()
737 * LRU inodes after clearing SB_ACTIVE because we don't in xfs_log_mount_finish()
753 * but we do it unconditionally to make sure we're always in a clean in xfs_log_mount_finish()
773 /* Make sure the log is dead if we're returning failure. */ in xfs_log_mount_finish()
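The LRU handling described in xfs_log_mount_finish() follows this shape; a sketch based on the comments above, though the exact flag juggling in the real function may differ:

/* Sketch: pretend the sb is active so recovery inodes age on the LRU
 * rather than being evicted (and re-read) repeatedly, then purge the
 * unreferenced ones once SB_ACTIVE is cleared. */
mp->m_super->s_flags |= SB_ACTIVE;
error = xlog_recover_finish(log);
mp->m_super->s_flags &= ~SB_ACTIVE;
evict_inodes(mp->m_super);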
808 * is done before we tear down these buffers.
826 * have been ordered and callbacks run before we are woken here, hence
852 * Write out an unmount record using the ticket provided. We have to account for
915 * At this point, we're unmounting anyway, so there's no point in in xlog_unmount_write()
948 * We just write the magic number now since that particular field isn't
967 * If we think the summary counters are bad, avoid writing the unmount in xfs_log_unmount_write()
986 * To do this, we first need to shut down the background log work so it is not
987 * trying to cover the log as we clean up. We then need to unpin all objects in
988 * the log so we can then flush them out. Once they have completed their IO and
989 * run the callbacks removing themselves from the AIL, we can cover the log.
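That sequence maps onto three ordered calls; a sketch of the quiesce ordering only, assuming these public helpers (the real xfs_log_quiesce() does more):

/* Sketch: stop background covering, unpin everything with a sync log
 * force, then flush the AIL so the log empties before covering. */
cancel_delayed_work_sync(&mp->m_log->l_work);
xfs_log_force(mp, XFS_LOG_SYNC);
xfs_ail_push_all_sync(mp->m_ail);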
996 * Clear log incompat features since we're quiescing the log. Report in xfs_log_quiesce()
1016 * XBF_ASYNC flag set, so we need to use a lock/unlock pair to wait for in xfs_log_quiesce()
1038 * During unmount, we need to ensure we flush all the dirty metadata objects
1039 * from the AIL so that the log is empty before we write the unmount record to
1040 * the log. Once this is done, we can tear down the AIL and the log.
1050 * cleaning will have been skipped and so we need to wait in xfs_log_unmount()
1051 * for the iclog to complete shutdown processing before we in xfs_log_unmount()
1085 * Wake up processes waiting for log space after we have moved the log tail.
1117 * Determine if we have a transaction that has gone to disk that needs to be
1120 * we start attempting to cover the log.
1122 * Only if we are then in a state where covering is needed is the caller
1126 * If there are any items in the AIL or CIL, then we do not want to attempt to
1127 * cover the log as we may be in a situation where there isn't log space
1130 * there's no point in running a dummy transaction at this point because we
1192 * state machine if the log requires covering. Therefore, we must call in xfs_log_cover()
1193 * this function once and use the result until we've issued an sb sync. in xfs_log_cover()
1212 * we found it. in xfs_log_cover()
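Because covering needs two records, the covering routine loops; a sketch of xfs_log_cover()'s core, assuming the xfs_sync_sb()/xfs_log_need_covered() helpers (error handling elided; helper names vary by kernel version):

/* Sketch: issue dummy superblock syncs until the covering state
 * machine reports the log no longer needs covering. */
do {
	error = xfs_sync_sb(mp, true);	/* true: push the log tail too */
	if (error)
		break;
	xfs_ail_push_all_sync(mp->m_ail);
} while (xfs_log_need_covered(mp));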
1241 * Race to shutdown the filesystem if we see an error. in xlog_ioend_work()
1252 * Drop the lock to signal that we are done. Nothing references the in xlog_ioend_work()
1255 * unlock as we could race with it being freed. in xlog_ioend_work()
1265 * If the filesystem blocksize is too large, we may need to choose a
1298 * Clear the log incompat flags if we have the opportunity.
1300 * This only happens if we're about to log the second dummy transaction as part
1320 * Every sync period we need to unpin all items in the AIL and push them to
1321 * disk. If there is nothing dirty, then we might need to cover the log to
1339 * We cannot use an inode here for this - that will push dirty in xfs_log_worker()
1341 * will prevent log covering from making progress. Hence we in xfs_log_worker()
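Hence the worker's choice is between a dummy superblock write and a plain force; a sketch of that decision, per the comments above (helper names assumed from the source):

/* Sketch: cover with a superblock sync (never an inode, which would
 * re-dirty the log tail), otherwise just unpin and push via a force. */
if (xfs_fs_writable(mp, SB_FREEZE_WRITE) && xlog_need_covered(log))
	xfs_sync_sb(mp, true);
else
	xfs_log_force(mp, 0);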
1442 * done this way so that we can use different sizes for machines in xlog_alloc_log()
1642 * We lock the iclogbufs here so that we can serialise against I/O in xlog_write_iclog()
1643 * completion during unmount. We might be processing a shutdown in xlog_write_iclog()
1645 * unmount thread, and hence we need to ensure that completes before in xlog_write_iclog()
1646 * tearing down the iclogbufs. Hence we need to hold the buffer lock in xlog_write_iclog()
1652 * It would seem logical to return EIO here, but we rely on in xlog_write_iclog()
1654 * doing it here. We kick off the state machine and unlock in xlog_write_iclog()
1662 * We use REQ_SYNC | REQ_IDLE here to tell the block layer there are more in xlog_write_iclog()
1677 * For external log devices, we also need to flush the data in xlog_write_iclog()
1680 * but it *must* complete before we issue the external log IO. in xlog_write_iclog()
1682 * If the flush fails, we cannot conclude that past metadata in xlog_write_iclog()
1684 * not possible, hence we must shut down with log IO error to in xlog_write_iclog()
1703 * If this log buffer would straddle the end of the log we will have in xlog_write_iclog()
1704 * to split it up into two bios, so that we can continue at the start. in xlog_write_iclog()
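The flush rules above translate directly into block-layer op flags; a sketch of the flag selection only (the surrounding bio setup and split handling are elided):

/* Sketch: REQ_IDLE hints more IO follows immediately (avoids writeback
 * throttling); a pre-flush orders prior metadata writeback before this
 * record, and FUA makes the record itself stable on completion. */
blk_opf_t opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;

if (iclog->ic_flags & XLOG_ICL_NEED_FLUSH)
	opf |= REQ_PREFLUSH;
if (iclog->ic_flags & XLOG_ICL_NEED_FUA)
	opf |= REQ_FUA;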
1728 * We need to bump the cycle number for the part of the iclog that is
1772 * fashion. Previously, we should have moved the current iclog
1776 * to save away the 1st word of each BBSIZE block into the header. We replace
1780 * we can't have part of a 512 byte block written and part not written. By
1781 * tagging each block, we will know which blocks are valid when recovering
1810 * If we have a ticket, account for the roundoff via the ticket in xlog_sync()
1812 * Otherwise, we have to move grant heads directly. in xlog_sync()
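A sketch of that roundoff accounting; the grant-head helper name is an assumption (and the real code moves both the reserve and write grant heads):

/* Sketch: the physical write is padded out to basic blocks; charge the
 * padding to the ticket if we have one, else to the grant heads. */
int roundoff = BBTOB(BTOBB(count)) - count;

if (ticket)
	ticket->t_curr_res -= roundoff;
else
	xlog_grant_add_space(log, &log->l_write_head.grant, roundoff);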
1835 /* Do we need to split this write into 2 parts? */ in xlog_sync()
2061 * length. We write until we cannot fit a full record into the remaining space
2062 * and then stop. We return the log vector that is to be written that cannot
2081 /* walk the logvec, copying until we run out of space in the iclog */ in xlog_write_partial()
2089 * start recovering from the next opheader it finds. Because we in xlog_write_partial()
2095 * opheader, then we need to start afresh with a new iclog. in xlog_write_partial()
2117 /* If we wrote the whole region, move to the next. */ in xlog_write_partial()
2122 * We now have a partially written iovec, but it can span in xlog_write_partial()
2123 * multiple iclogs so we loop here. First we release the iclog in xlog_write_partial()
2124 * we currently have, then we get a new iclog and add a new in xlog_write_partial()
2125 * opheader. Then we continue copying from where we were until in xlog_write_partial()
2126 * we either complete the iovec or fill the iclog. If we in xlog_write_partial()
2127 * complete the iovec, then we increment the index and go right in xlog_write_partial()
2128 * back to the top of the outer loop. If we fill the iclog, we in xlog_write_partial()
2133 * and get a new one before returning to the outer loop. We must in xlog_write_partial()
2134 * always guarantee that we exit this inner loop with at least in xlog_write_partial()
2136 * iclog, hence we cannot just terminate the loop at the end in xlog_write_partial()
2137 * of the continuation. So we loop while there is no in xlog_write_partial()
2143 * Ensure we include the continuation opheader in the in xlog_write_partial()
2144 * space we need in the new iclog by adding that size in xlog_write_partial()
2145 * to the length we require. This continuation opheader in xlog_write_partial()
2147 * consumes hasn't been accounted to the lv we are in xlog_write_partial()
2169 * continuation. Otherwise we're going around again. in xlog_write_partial()
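The effect of those continuation opheaders is easiest to see in a toy model. The standalone userspace sketch below (nothing in it is kernel code) copies one region across fixed-size "iclogs", spending opheader bytes that were never accounted to the log vector, which is exactly the overhead the comments describe:

#include <stdio.h>
#include <string.h>

#define ICLOG_SPACE	16	/* toy iclog payload size */
#define OPHDR_LEN	4	/* toy opheader size */

/* Copy `len` bytes into fixed-size iclogs, prepending a continuation
 * "opheader" every time we cross an iclog boundary. */
static void
write_region(const char *data, int len)
{
	int off = 0;

	while (off < len) {
		char iclog[ICLOG_SPACE];
		int space = ICLOG_SPACE - OPHDR_LEN;	/* opheader comes first */
		int copy = (len - off < space) ? len - off : space;

		memset(iclog, 'H', OPHDR_LEN);		/* continuation opheader */
		memcpy(iclog + OPHDR_LEN, data + off, copy);
		off += copy;
		printf("iclog: %d bytes copied, %d left\n", copy, len - off);
	}
}

int main(void)
{
	write_region("the quick brown fox jumps over the lazy dog", 43);
	return 0;
}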
2205 * 2. Check whether we violate the ticket's reservation.
2212 * 3. Find out if we can fit the entire region into this iclog
2232 * we don't really know exactly how much space will be used. As a result,
2233 * we don't update ic_offset until the end when we know exactly how many
2267 * If we have a context pointer, pass it the first iclog we are in xlog_write()
2286 * We have no iclog to release, so just return in xlog_write()
2299 * We've already been guaranteed that the last writes will fit inside in xlog_write()
2301 * those writes accounted to it. Hence we do not need to update the in xlog_write()
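Step 2 above (the reservation check) looks roughly like this; a sketch only, though xlog_print_tic_res() is a real helper in this file:

/* Sketch: an overrun of the ticket's reservation is a bug worth
 * shouting about, since the iclog space it implies was never granted. */
if (ticket->t_curr_res < len) {
	xlog_print_tic_res(log->l_mp, ticket);	/* dump the ticket usage */
	xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
}
ticket->t_curr_res -= len;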
2322 * dummy transaction, we can change state into IDLE (the second time in xlog_state_activate_iclog()
2323 * around). Otherwise we should change the state into NEED a dummy. in xlog_state_activate_iclog()
2324 * We don't need to cover the dummy. in xlog_state_activate_iclog()
2331 * We have two dirty iclogs so start over. This could also be in xlog_state_activate_iclog()
2375 * We go to NEED for any non-covering writes. We go to NEED2 if we just in xlog_covered_state()
2376 * wrote the first covering record (DONE). We go to IDLE if we just in xlog_covered_state()
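Written as code, the progression these comments describe is a three-way transition on the previous state; an illustrative switch (the real xlog_covered_state() also considers how many iclogs just changed):

/* Sketch: two covering records move the log NEED -> DONE -> NEED2 ->
 * DONE2 -> IDLE; anything else pulls it back to NEED. */
switch (log->l_covered_state) {
case XLOG_STATE_COVER_DONE:
	log->l_covered_state = XLOG_STATE_COVER_NEED2;	/* first record covered */
	break;
case XLOG_STATE_COVER_DONE2:
	log->l_covered_state = XLOG_STATE_COVER_IDLE;	/* second record covered */
	break;
default:
	log->l_covered_state = XLOG_STATE_COVER_NEED;	/* non-covering write */
	break;
}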
2444 * Return true if we need to stop processing, false to continue to the next
2465 * Now that we have an iclog that is in the DONE_SYNC state, do in xlog_state_iodone_process_iclog()
2466 * one more check here to see if we have chased our tail around. in xlog_state_iodone_process_iclog()
2467 * If this is not the lowest lsn iclog, then we will leave it in xlog_state_iodone_process_iclog()
2475 * If there are no callbacks on this iclog, we can mark it clean in xlog_state_iodone_process_iclog()
2476 * immediately and return. Otherwise we need to run the in xlog_state_iodone_process_iclog()
2489 * in the DONE_SYNC state, we skip the rest and just try to in xlog_state_iodone_process_iclog()
2498 * we ran any callbacks, indicating that we dropped the icloglock. We don't need
2588 * If we got an error, either on the first buffer, or in the case of in xlog_state_done_syncing()
2589 * split log writes, on the second, we shut down the file system and in xlog_state_done_syncing()
2599 * iclog buffer, we wake them all, one will get to do the in xlog_state_done_syncing()
2608 * If the head of the in-core log ring is not (ACTIVE or DIRTY), then we must
2609 * sleep. We wait on the flush queue on the head iclog as that should be
2611 * we will wait here and all new writes will sleep until a sync completes.
2688 * If we are the only one writing to this iclog, sync it to in xlog_state_get_iclog_space()
2689 * disk. We need to do an atomic compare and decrement here to in xlog_state_get_iclog_space()
2702 /* Do we have enough room to write the full amount in the remainder in xlog_state_get_iclog_space()
2703 * of this iclog? Or must we continue a write on the next iclog and in xlog_state_get_iclog_space()
2704 * mark this iclog as completely taken? In the case where we switch in xlog_state_get_iclog_space()
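The "atomic compare and decrement" the comment asks for is the atomic_add_unless() idiom; a sketch (the release call, its signature and its locking context are assumptions, the idiom is the point):

/* Sketch: decrement ic_refcnt unless it is already 1; a false return
 * means we were the last writer and must push the iclog ourselves. */
if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) {
	/* sole reference holder: safe to release and sync the iclog */
	error = xlog_state_release_iclog(log, iclog, NULL);
}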
2722 * The first cnt-1 times a ticket goes through here we don't need to move the
2744 /* just return if we still have some of the pre-reserved space */ in xfs_log_ticket_regrant()
2756 * All the information we need to make a correct determination of space left
2758 * count should have been decremented to zero. We only need to deal with the
2762 * reservation can be done before we need to ask for more space. The first
2763 * one goes to fill up the first current reservation. Once we run out of
2782 * If this is a permanent reservation ticket, we may be able to free in xfs_log_ticket_ungrant()
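At ungrant time the space handed back is therefore the unused current reservation plus any remaining pre-paid units; a sketch using the ticket fields named in the source:

/* Sketch: a permanent ticket with t_cnt units still outstanding gives
 * back its unused current reservation plus the untouched unit grants. */
int bytes = ticket->t_curr_res;

if (ticket->t_cnt > 0) {
	ASSERT(ticket->t_flags & XLOG_TIC_PERM_RESERV);
	bytes += ticket->t_unit_res * ticket->t_cnt;
}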
2852 * pmem) or fast async storage because we drop the icloglock to issue the IO.
2882 * we don't guarantee this data will be written out. A change from past
2885 * Basically, we try and perform an intelligent scan of the in-core logs.
2886 * If we determine there is no flushable data, we just return. There is no
2894 * We may sleep if:
2902 * b) when we return from flushing out this iclog, it is still
2929 * If the head is dirty or (active and empty), then we need to in xfs_log_force()
2932 * If the previous iclog is active or dirty we are done. There in xfs_log_force()
2933 * is nothing to sync out. Otherwise, we attach ourselves to the in xfs_log_force()
2939 /* We have exclusive access to this iclog. */ in xfs_log_force()
2949 * Someone else is still writing to this iclog, so we in xfs_log_force()
2951 * gets synced immediately as we may be waiting on it. in xfs_log_force()
2958 * The iclog we are about to wait on may contain the checkpoint pushed in xfs_log_force()
2960 * to disk yet. Like the ACTIVE case above, we need to make sure caches in xfs_log_force()
3016 * We sleep here if we haven't already slept (e.g. this is the in xlog_force_lsn()
3017 * first time we've looked at the correct iclog buf) and the in xlog_force_lsn()
3019 * is that if we are doing sync transactions here, by waiting in xlog_force_lsn()
3020 * for the previous I/O to complete, we can allow a few more in xlog_force_lsn()
3021 * transactions into this iclog before we close it down. in xlog_force_lsn()
3023 * Otherwise, we mark the buffer WANT_SYNC, and bump up the in xlog_force_lsn()
3024 * refcnt so we can release the log (which drops the ref count). in xlog_force_lsn()
3049 * ACTIVE case above, we need to make sure caches are flushed in xlog_force_lsn()
3058 * completes, so we don't need to manipulate caches here at all. in xlog_force_lsn()
3059 * We just need to wait for completion if necessary. in xlog_force_lsn()
3080 * a synchronous log force, we will wait on the iclog with the LSN returned by
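For callers, the distinction above surfaces as the flags passed to the force APIs; a usage sketch (xfs_log_force_seq()'s exact signature varies by kernel version, treat it as an assumption):

/* Sketch: asynchronous vs synchronous log forces. */
xfs_log_force(mp, 0);			/* start iclogs on their way, don't wait */
xfs_log_force(mp, XFS_LOG_SYNC);	/* wait until the iclogs are on disk */
error = xfs_log_force_seq(mp, seq, XFS_LOG_SYNC, &log_flushed);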
3157 * We need to account for all the leadup data and trailer data in xlog_calc_unit_res()
3159 * And then we need to account for the worst case in terms of using in xlog_calc_unit_res()
3184 * the space used for the headers. If we use the iclog size, then we in xlog_calc_unit_res()
3196 * Fundamentally, this means we must pass the entire log vector to in xlog_calc_unit_res()
3205 /* add extra header reservations if we overrun */ in xlog_calc_unit_res()
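A sketch of that worst-case header arithmetic, simplified from xlog_calc_unit_res(); howmany() and the l_iclog_* fields are as in the source, the rest is illustrative:

/* Sketch: assume every iclog the vector spans costs one record header,
 * and reserve one more header for the commit record. */
int iclog_space = log->l_iclog_size - log->l_iclog_hsize;
int num_headers = howmany(unit_bytes, iclog_space);

unit_bytes += log->l_iclog_hsize * num_headers;
unit_bytes += log->l_iclog_hsize;	/* commit record */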
3328 * 2. Make sure we have a good magic number
3329 * 3. Make sure we don't have magic numbers in the data
3444 * Return true if the shutdown cause was a log IO error and we actually shut the
3459 * If we allow the log force below on a second pass after shutting in xlog_force_shutdown()
3460 * down the log, we risk deadlocking the CIL push as it may require in xlog_force_shutdown()
3469 * being shut down. We need to do this first as shutting down the log in xlog_force_shutdown()
3473 * When we are in recovery, there are no transactions to flush, and in xlog_force_shutdown()
3474 * we don't want to touch the log because we don't want to perturb the in xlog_force_shutdown()
3475 * current head/tail for future recovery attempts. Hence we need to in xlog_force_shutdown()
3478 * If we are shutting down due to a log IO error, then we must avoid in xlog_force_shutdown()
3487 * set, then someone else is performing the shutdown and so we are done in xlog_force_shutdown()
3488 * here. This should never happen because we should only ever get called in xlog_force_shutdown()
3492 * cannot change once they hold the log->l_icloglock. Hence we need to in xlog_force_shutdown()
3493 * hold that lock here, even though we use the atomic test_and_set_bit() in xlog_force_shutdown()
3519 * We don't want anybody waiting for log reservations after this. That in xlog_force_shutdown()
3520 * means we have to wake up everybody queued up on reserveq as well as in xlog_force_shutdown()
3521 * writeq. In addition, we make sure in xlog_{re}grant_log_space that in xlog_force_shutdown()
3522 * we don't enqueue anything once the SHUTDOWN flag is set, and this in xlog_force_shutdown()
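The lock/bit interplay described above looks like this in sketch form (bit and field names per the comments; the surrounding shutdown work is elided):

/* Sketch: flip the shutdown bit under l_icloglock so iclog state
 * changes cannot race with shutdown; a set bit means someone else
 * already won the race and we are done. */
spin_lock(&log->l_icloglock);
if (test_and_set_bit(XLOG_IO_ERROR, &log->l_opstate)) {
	spin_unlock(&log->l_icloglock);
	return false;
}
spin_unlock(&log->l_icloglock);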
3582 * resets the in-core LSN. We can't validate in this mode, but in xfs_log_check_lsn()