
15  * The flag bits that are used to represent these states are:
29 * Empty -> Want - on read or write to get old data for parity calc
30 * Empty -> Dirty - on compute_parity to satisfy write/sync request.
31 * Empty -> Clean - on compute_block when computing a block for failed drive
32 * Want -> Empty - on failed read
33 * Want -> Clean - on successful completion of read request
34 * Dirty -> Clean - on successful completion of write request
35 * Dirty -> Clean - on failed write
36 * Clean -> Dirty - on compute_parity to satisfy write/sync (RECONSTRUCT or RMW)
38 * The Want->Empty, Want->Clean and Dirty->Clean transitions
41 * This leaves one multi-stage transition:
42 * Want->Dirty->Clean
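The four states above are not stored as a separate field; they follow from two flag bits on the buffer (assumed here to be the R5_UPTODATE and R5_LOCKED dev flags, which the truncated list at the top of this block refers to). A minimal standalone sketch of that mapping, using simplified constants rather than the kernel's enum:

    #include <stdio.h>

    /* Illustrative stand-ins for the two flag bits (R5_UPTODATE, R5_LOCKED). */
    #define UPTODATE (1u << 0)
    #define LOCKED   (1u << 1)

    /* Derive the Empty/Want/Dirty/Clean state named in the comment above. */
    static const char *buffer_state(unsigned flags)
    {
        if (!(flags & UPTODATE) && !(flags & LOCKED))
            return "Empty"; /* no data, no active request          */
        if (!(flags & UPTODATE))
            return "Want";  /* locked: a read is in flight         */
        if (flags & LOCKED)
            return "Dirty"; /* new data is being written out       */
        return "Clean";     /* valid data, same as what is on disk */
    }

    int main(void)
    {
        printf("%s\n", buffer_state(UPTODATE));          /* Clean */
        printf("%s\n", buffer_state(UPTODATE | LOCKED)); /* Dirty */
        return 0;
    }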
51 * successfully written to the spare (or to parity when resyncing).
52 * To distinguish these states we have a stripe bit STRIPE_INSYNC that
53 * is set whenever a write is scheduled to the spare, or to the parity
59 * to the appropriate stripe in one of two lists linked on b_reqnext.
60 * One list (bh_read) for read requests, one (bh_write) for write.
73 * When a buffer on the write list is committed for write it is copied
79 * The write list and read list both act as fifos. The read list,
80 * write list and written list are protected by the device_lock.
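To make the list handling concrete, here is a rough standalone sketch of two per-stripe FIFO request lists guarded by a single device_lock; a pthread mutex stands in for the kernel spinlock, and the struct and field names are illustrative, not the real buffer/bio chaining:

    #include <pthread.h>
    #include <stddef.h>

    /* Simplified stand-ins: the real code chains requests per stripe and
     * keeps the lock in the per-array configuration. */
    struct req_node { struct req_node *next; };

    struct stripe_lists {
        pthread_mutex_t device_lock;              /* plays the role of device_lock */
        struct req_node *read_head, *read_tail;   /* FIFO of pending reads         */
        struct req_node *write_head, *write_tail; /* FIFO of pending writes        */
    };

    /* Append at the tail so requests are serviced in FIFO order. */
    static void attach_request(struct stripe_lists *s, struct req_node *r, int is_write)
    {
        struct req_node **head = is_write ? &s->write_head : &s->read_head;
        struct req_node **tail = is_write ? &s->write_tail : &s->read_tail;

        r->next = NULL;
        pthread_mutex_lock(&s->device_lock);
        if (*tail)
            (*tail)->next = r;
        else
            *head = r;
        *tail = r;
        pthread_mutex_unlock(&s->device_lock);
    }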
89 * to be handled in some way. Both of these are fifo queues. Each
90 * stripe is also (potentially) linked to a hash bucket in the hash
97 * - stripes have a reference counter. If count==0, they are on a list.
98 * - If a stripe might need handling, STRIPE_HANDLE is set.
99 * - When refcount reaches zero, then if STRIPE_HANDLE it is put on
103 * cleared while a stripe has a non-zero count means that if the
110 * lockdev check-hash unlink-stripe cnt++ clean-stripe hash-stripe unlockdev
112 * lockdev check-hash if(!cnt++)unlink-stripe unlockdev
113 * attach a request to an active stripe (add_stripe_bh())
114 * lockdev attach-buffer unlockdev
117 * (lockdev check-buffers unlockdev) ..
118 * change-state ..
121 * lockdev if (!--cnt) { if STRIPE_HANDLE, add to handle_list else add to inactive-list } unlockdev
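The get/attach/release recipe above translates fairly directly into code. A hedged sketch, with a pthread mutex for the device lock, a plain int for the atomic reference count, and stub helpers in place of the real hash table and list handling:

    #include <pthread.h>
    #include <stdbool.h>

    /* Illustrative stand-ins; the kernel uses atomic_t, a spinlock and
     * real list heads instead of these stubs. */
    struct stripe {
        int  count;    /* reference count; 0 means the stripe sits on a list */
        bool handle;   /* stands in for the STRIPE_HANDLE bit                */
    };

    static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;

    static void unlink_from_list(struct stripe *sh)     { (void)sh; }
    static void add_to_handle_list(struct stripe *sh)   { (void)sh; }
    static void add_to_inactive_list(struct stripe *sh) { (void)sh; }

    /* "lockdev check-hash if(!cnt++) unlink-stripe unlockdev" */
    static void get_active_stripe(struct stripe *sh)
    {
        pthread_mutex_lock(&device_lock);
        if (sh->count++ == 0)
            unlink_from_list(sh);         /* was idle: pull it off its list */
        pthread_mutex_unlock(&device_lock);
    }

    /* "lockdev if (!--cnt) { STRIPE_HANDLE ? handle_list : inactive } unlockdev" */
    static void release_stripe(struct stripe *sh)
    {
        pthread_mutex_lock(&device_lock);
        if (--sh->count == 0) {
            if (sh->handle)
                add_to_handle_list(sh);   /* needs more work soon  */
            else
                add_to_inactive_list(sh); /* free for reuse        */
        }
        pthread_mutex_unlock(&device_lock);
    }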
129 * -copying data between the stripe cache and user application buffers
130 * -computing blocks to save a disk access, or to recover a missing block
131 * -updating the parity on a write operation (reconstruct write and
132 * read-modify-write)
133 * -checking parity correctness
134 * -running i/o to disk
136 * api to (optionally) offload operations to dedicated hardware engines.
139 * the count is non-zero.
146 * block is re-marked up to date (assuming the check was successful) and is
147 * not re-read from disk.
148 * 2/ When a write operation is requested we immediately lock the affected
149 * blocks, and mark them as not up to date. This causes new read requests
150 * to be held off, as well as parity checks and compute block operations.
152 * that block as if it is up to date. raid5_run_ops guarantees that any
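A compressed restatement of the numbered rules above as predicates, using simplified per-block booleans in place of the R5_* dev flags (illustrative only, not the handle_stripe logic itself):

    #include <stdbool.h>

    /* Illustrative per-block booleans standing in for the R5_* dev flags. */
    struct block_state {
        bool uptodate;          /* cached copy is valid                  */
        bool locked;            /* an operation currently owns the block */
        bool compute_requested; /* a compute-block op has been scheduled */
    };

    /* A write immediately locks the block and marks it not up to date,
     * which holds off new reads, parity checks and compute operations. */
    static void start_write(struct block_state *b)
    {
        b->locked   = true;
        b->uptodate = false;
    }

    /* Once a check completes the block is re-marked up to date and is not
     * re-read; once a compute has been requested the block is treated as
     * up to date, because dependent operations run only after it finishes. */
    static bool treat_as_uptodate(const struct block_state *b)
    {
        return b->uptodate || b->compute_requested;
    }

    /* New operations on the block are held off while it is locked. */
    static bool may_start_new_op(const struct block_state *b)
    {
        return !b->locked;
    }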
158 * Operations state - intermediate states that are visible outside of
164 * sh->state flags (STRIPE_BIOFILL_RUN and STRIPE_COMPUTE_RUN)
167 * enum check_states - handles syncing / repairing a stripe
168 * @check_state_idle - check operations are quiesced
169 * @check_state_run - check operation is running
170 * @check_state_result - set outside lock when check result is valid
171 * @check_state_compute_run - check failed and we are repairing
172 * @check_state_compute_result - set outside lock when compute result is valid
177 check_state_run_q, /* q-parity check */
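Taken together, the check_state_* values describe a short linear progression per stripe during sync/repair. The sketch below restates it; the member names follow the comment above but may differ slightly from the kernel enum (which also has a combined P+Q run state), and the transition function is illustrative, not the kernel's parity-check handling:

    /* Restated from the documentation comment above. */
    enum check_states_sketch {
        check_idle = 0,       /* check operations are quiesced             */
        check_run,            /* parity check is running                   */
        check_run_q,          /* q-parity check (RAID6)                    */
        check_result,         /* result valid, acted on outside the lock   */
        check_compute_run,    /* check failed, recomputing the bad block   */
        check_compute_result, /* recomputed block ready to be written back */
    };

    /* Typical progression while syncing/repairing one stripe. */
    static enum check_states_sketch check_next(enum check_states_sketch s, int parity_ok)
    {
        switch (s) {
        case check_run:
        case check_run_q:          return check_result;
        case check_result:         return parity_ok ? check_idle : check_compute_run;
        case check_compute_run:    return check_compute_result;
        case check_compute_result: return check_idle; /* after writing the fix */
        default:                   return check_idle;
        }
    }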
185 * enum reconstruct_states - handles writing or expanding a stripe
189 reconstruct_state_prexor_drain_run, /* prexor-write */
190 reconstruct_state_drain_run, /* write */
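The prexor-write and plain-write drain states correspond to the two ways parity can be brought up to date: read-modify-write (prexor the old data out of the parity) versus a full reconstruct write. A hedged sketch of a read-count comparison that could drive the choice; the names and the simplified cost model are illustrative, and the kernel also weighs which blocks are already cached:

    enum write_method { WRITE_RMW, WRITE_RCW };

    /* rmw (the prexor path) reads the blocks being overwritten plus the old
     * parity; rcw (reconstruct write) reads the blocks that are NOT being
     * overwritten.  Pick whichever needs fewer reads. */
    static enum write_method choose_write_method(int data_disks, int blocks_to_write)
    {
        int rmw_reads = blocks_to_write + 1; /* + old parity */
        int rcw_reads = data_disks - blocks_to_write;

        return rmw_reads < rcw_reads ? WRITE_RMW : WRITE_RCW;
    }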
206 short ddf_layout; /* use DDF ordering to calculate Q */
215 * @target - STRIPE_OP_COMPUTE_BLK target
216 * @target2 - 2nd compute target in the raid6 case
217 * @zero_sum_result - P and Q verification flags
218 * @request - async service request flags for raid_run_ops
230 * writing data to both devices.
241 /* stripe_head_state - collects and tracks the dynamic state of a stripe_head
245 /* 'syncing' means that we need to read all devices, either
246 * to check/correct parity, or to reconstruct a missing device.
248 * the source is valid at this point so we don't need to
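As a rough picture of that per-call state, here is an abbreviated, illustrative struct with the kind of counters handle_stripe accumulates while scanning the devices; the field names are not an exact copy of the kernel structure:

    /* Abbreviated and illustrative only. */
    struct stripe_head_state_sketch {
        int syncing;  /* read all devices to check/repair parity or rebuild */
        int to_read;  /* devices with a pending read                        */
        int to_write; /* devices with a pending write                       */
        int uptodate; /* devices whose cached block is valid                */
        int locked;   /* devices with an operation in flight                */
        int failed;   /* devices that cannot be read                        */
    };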
270 /* and some that are internal to handle_stripe */
271 R5_Insync, /* rdev && rdev->in_sync at start */
272 R5_Wantread, /* want to schedule a read */
277 R5_ReWrite, /* have tried to over-write the readerror */
279 R5_Expanded, /* This block now has post-expand data */
283 R5_Wantfill, /* dev->toread contains a bio that needs
286 R5_Wantdrain, /* dev->towrite needs to be drained */
287 R5_WantFUA, /* Write should be FUA */
288 R5_WriteError, /* got a write error - need to record it */
289 R5_MadeGood, /* A bad block has been fixed by writing to it */
292 * fixed by writing to it */
294 * up-to-date at this stripe. */
295 R5_WantReplace, /* We need to update the replacement, we have read
296 * data in, and now is a good time to write it out.
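For flavour, a standalone sketch of how per-device flag bits like these drive a decision such as "want to schedule a read"; the bit names and the condition are simplified stand-ins, not the kernel's exact handle_stripe test:

    #include <stdbool.h>

    /* Illustrative bits mirroring a few of the R5_* names above. */
    enum { DEV_UPTODATE, DEV_LOCKED, DEV_INSYNC, DEV_WANTREAD };

    static bool test_flag(unsigned long f, int bit) { return (f >> bit) & 1; }
    static void set_flag(unsigned long *f, int bit) { *f |= 1ul << bit; }

    /* Rough paraphrase of "want to schedule a read": the block is needed,
     * not valid in the cache, not already owned by an operation, and the
     * underlying device is in sync so the read can actually succeed. */
    static void maybe_want_read(unsigned long *dev_flags, bool block_needed)
    {
        if (block_needed &&
            !test_flag(*dev_flags, DEV_UPTODATE) &&
            !test_flag(*dev_flags, DEV_LOCKED) &&
            test_flag(*dev_flags, DEV_INSYNC))
            set_flag(dev_flags, DEV_WANTREAD);
    }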
317 STRIPE_FULL_WRITE, /* all blocks are set to be overwritten */
337 * To improve write throughput, we need to delay the handling of some
338 * stripes until there has been a chance that several write requests
340 * In particular, any write request that would require pre-reading
342 * in a pre-read phase. Further, if the "delayed" queue is empty when
347 * it to the count of prereading stripes.
348 * When write is initiated, or the stripe refcnt == 0 (just in case) we
351 * move any stripes from delayed to handle and clear the DELAYED flag and set
353 * In stripe_handle, if we find pre-reading is necessary, we do it if
354 * PREREAD_ACTIVE is set, else we set DELAYED which will send it to the delayed queue.
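A hedged sketch of that delayed/preread decision and of the promotion step, using plain booleans for the DELAYED and PREREAD_ACTIVE stripe bits (illustrative only):

    #include <stdbool.h>

    /* Park a write that would need pre-reading unless pre-reading is allowed. */
    static bool should_delay(bool needs_preread, bool preread_active)
    {
        return needs_preread && !preread_active;
    }

    /* Promotion step described above: delayed stripes move back to the
     * handle list, DELAYED is cleared and PREREAD_ACTIVE is set so their
     * pre-reads may now be issued. */
    struct stripe_sketch { bool delayed, preread_active, on_handle_list; };

    static void promote_delayed(struct stripe_sketch *sh)
    {
        sh->delayed        = false;
        sh->preread_active = true;
        sh->on_handle_list = true;
    }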
375 * else it is the next sector to work on.
397 atomic_t pending_full_writes; /* full write backlog */
413 int fullsync; /* set to 1 if a full sync is needed,
427 * associated with conf to handle
442 * waiting for 25% to be free
462 /* Define non-rotating (raid4) algorithms. These allow
463 * conversion of raid4 to raid5.
476 * Interestingly DDFv1.2-Errata-A does not specify N_CONTINUE but
487 * with the Q block always on the last device (N-1).
488 * This allows trivial conversion from RAID5 to RAID6
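As an illustration of "non-rotating" parity placement and of pinning Q to the last device for easy RAID5-to-RAID6 conversion, here is a small standalone sketch; it is not the kernel's raid5_compute_sector(), which handles many more layouts, and the P rotation shown is just one left-asymmetric-style example:

    /* "stripe" is the stripe number, "disks" the total number of devices. */

    /* RAID4-style layout: parity sits on one fixed device, here the last
     * one, so every stripe uses the same parity disk. */
    static int parity_disk_fixed(int disks)
    {
        return disks - 1;
    }

    /* RAID5 -> RAID6 conversion flavour: Q is pinned to the last device and
     * P keeps rotating over the remaining disks-1 devices just as it did in
     * the original RAID5 layout. */
    static void parity_disks_q_last(long stripe, int disks, int *pd_idx, int *qd_idx)
    {
        *qd_idx = disks - 1;
        *pd_idx = (int)((disks - 2) - (stripe % (disks - 1)));
    }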