The device-mapper RAID (dm-raid) target provides a bridge from DM to MD.
It allows the MD RAID drivers to be accessed using a device-mapper
interface.
clear bits. A longer interval means less bitmap I/O but
resyncing after a failure is likely to take longer.
Stripe cache size (RAID 4/5/6 only)
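As a rough sketch (the raid5 variant, device numbers, sizes and option
values below are illustrative placeholders, not taken from this
document), a table line overriding the bitmap daemon interval and the
stripe cache might look like:

        # raid5 (left-symmetric) over 3 metadata/data device pairs,
        # 64KiB chunks; daemon_sleep and stripe_cache values illustrative
        0 1953124864 raid \
                raid5_ls 5 128 daemon_sleep 5000 stripe_cache 2048 \
                3 8:17 8:18 8:33 8:34 8:49 8:50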
a RAID10 configuration. The number of copies can be
specified, but 2 is normal and the default.
layout is what a traditional RAID10 would look like. The
3-device layout is what might be called a 'RAID1E - Integrated
Adjacent Stripe Mirroring'.
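A minimal sketch of a 2-copy, near-format RAID10 mapping over four data
devices with no metadata devices (all names and sizes are placeholders):

        0 3906249472 raid \
                raid10 5 128 raid10_copies 2 raid10_format near \
                4 - 8:17 - 8:33 - 8:49 - 8:65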
value) to any reshape supporting raid levels 4/5/6 and 10.
RAID levels 4/5/6 allow for addition of devices (metadata
and data device tuples).
A minimum number of devices has to be kept to enforce resilience,
which is 3 devices for raid4/5 and 4 devices for raid6.
at the beginning of each raid device. The kernel raid4/5/6/10
starting at data_offset to fill up a new stripe with the larger
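A rough sketch of how such a reshape might be requested through dmsetup
(in practice lvm2 drives this; the name 'my_raid', the sizes and the
added metadata/data pair are all assumptions):

        # grow a 3-device raid5 set by one metadata/data tuple;
        # the exposed size is kept as-is and can be grown later
        dmsetup suspend my_raid
        dmsetup load my_raid --table \
                "0 1953124864 raid raid5_ls 3 128 delta_disks 1 4 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66"
        dmsetup resume my_raid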
This option adds a journal device to raid4/5/6 raid sets and
be throttled versus non-journaled raid4/5/6 sets.
Takeover/reshape is not possible with a raid4/5/6 journal device;
This option sets the caching mode on journaled raid4/5/6 raid sets
raid1 or raid10) to avoid a single point of failure.
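A minimal sketch of a journaled raid5 table (the journal device 8:97 and
all other numbers are placeholders; writeback is shown only because the
journal here is assumed to be resilient, e.g. itself a raid1 mapping):

        0 1953124864 raid \
                raid5_ls 5 128 journal_dev 8:97 journal_mode writeback \
                3 8:17 8:18 8:33 8:34 8:49 8:50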
data. A maximum of 64 metadata/data device entries are supported
If a drive has failed or is missing at creation time, a '-' can be
given for both the metadata and data drives for a given position.
5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82
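A table like the first example above can be handed to dmsetup on one
line; the device name 'my_raid' is an assumption:

        dmsetup create my_raid --table \
                "0 1960893648 raid raid4 1 2048 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81"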
The output is as follows (normally a single line, but expanded here for
clarity):

0 1960893648 raid raid4 5 AAAAA 2/490221568 init 0

Here we can see the RAID type is raid4, there are 5 devices - all of
which are 'A'live, and the array is 2/490221568 complete with its initial
recovery. Here is a fuller description of the individual fields (a small
parsing sketch follows the descriptions below):
- 'A' = alive and in-sync
- 'a' = alive but not in-sync
(possibly aided by a bitmap).
- A device in the array is being rebuilt or replaced.
- A user-initiated full check of the array is occurring.
- The array is undergoing a reshape.
in RAID1/10 or wrong parity values found in RAID4/5/6.
This value is valid only after a "check" of the array
is performed. A healthy array has a 'mismatch_cnt' of 0.
each component device of a raid set (see the respective
<journal_char> - 'A' - active write-through journal device.
- 'a' - active write-back journal device.
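As the small parsing sketch promised above (device name assumed; the
field numbers match the raid4 status line shown earlier):

        # $6 = per-device health characters, $7 = sync ratio,
        # $8 = sync action, $9 = mismatch_cnt
        dmsetup status my_raid | awk '{ print $6, $7, $8, $9 }'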
"resync" Initiate/continue a resync.
"recover" Initiate/continue a recover process.
"check" Initiate a check (i.e. a "scrub") of the array.
"repair" Initiate a repair of the array.
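For example, assuming a mapped device named 'my_raid', a scrub can be
started and then watched via the status line:

        dmsetup message my_raid 0 check
        dmsetup status my_raid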
When a block is discarded, some storage devices will return zeroes when
the block is read.
zeroes when discarded blocks are read! Since RAID 4/5/6 uses blocks
from a number of devices to calculate parity blocks and (for performance
of a RAID 4/5/6 stripe and if subsequent read results are not
enable discards with RAID 4/5/6.
even when reporting 'discard_zeroes_data', by default RAID 4/5/6
to safely enable discard support for RAID 4/5/6:
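A sketch of setting it at module load time or afterwards through sysfs,
assuming the mainline parameter name devices_handle_discard_safely:

        modprobe dm-raid devices_handle_discard_safely=Y
        # or, with the module already loaded:
        echo Y > /sys/module/dm_raid/parameters/devices_handle_discard_safely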
1.0.0  Initial version. Support for RAID 4/5/6
and refuse to start the raid set if any are set by a newer
target version, thus avoiding data corruption on a raid set
with a reshape in progress.
fails reading a superblock. Correctly emit 'maj:min1 maj:min2' and
1.10.0 Add support for raid4/5/6 journal device
1.11.1 Add raid4/5/6 journal write-back support via journal_mode option
1.13.0 Fix dev_health status at end of "recover" (was 'a', now 'A')