Lines Matching full:bad

3  * Bad block management
20 * The purpose of badblocks set/clear is to manage bad block ranges which are
23 * When the caller of badblocks_set() wants to set a range of bad blocks, the
27 * more complicated when the setting range covers multiple already set bad block
28 * ranges, with restrictions of maximum length of each bad range and the bad
32 * for setting a large range of bad blocks, we can handle it by dividing the
34 * bad table full conditions. Every time only a smaller piece of the bad range
36 * possible overlapped or adjacent already set bad block ranges. Then the hard
39 * When setting a range of bad blocks to the bad table, the simplified situations
40 * to be considered are, (The already set bad block ranges are named with
41 * prefix E, and the setting bad blocks range is named with prefix S)
43 * 1) A setting range is not overlapped or adjacent to any other already set bad
51 * For this situation if the bad blocks table is not full, just allocate a
52 * free slot from the bad blocks table to mark the setting range S. The
57 * 2) A setting range starts exactly at a start LBA of an already set bad blocks
67 * be merged into existing bad range E. The result is,
77 * An extra slot from the bad blocks table will be allocated for S, and head
84 * be merged into existing bad range E. The result is,
94 * bad blocks range E. The result is,
117 * 3) A setting range starts before the start LBA of an already set bad blocks
138 * 4) A setting range starts after the start LBA of an already set bad blocks
140 * 4.1) If the setting range S exactly matches the tail part of already set bad
187 * 4.3) If the setting bad blocks range S is overlapped with an already set bad
208 * 5) A setting bad blocks range S is adjacent to one or more already set bad
210 * 5.1) Front merge: If the already set bad blocks range E is before setting
225 * range S right after already set range E into the bad blocks table. The
231 * 6.1) Multiple already set ranges may merge into fewer ones in a full bad table
239 * In the above example, when the bad blocks table is full, inserting the
241 * can be allocated from bad blocks table. In this situation a proper
242 * setting method is to go through all the setting bad blocks ranges and
244 * is an available slot in the bad blocks table, re-try to handle as many
245 * remaining setting bad blocks ranges as possible.
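The divide-and-retry approach described above can be modelled outside the kernel as a small standalone sketch. Here handle_one_piece() is a hypothetical stand-in for the per-situation set logic; it simply clamps each piece to BB_MAX_LEN (512 sectors), which is the per-entry limit this file works with:

```c
#include <stdint.h>

#define BB_MAX_LEN 512	/* maximum sectors one bad table entry can hold */

/* Hypothetical stand-in for one pass of the per-situation set logic:
 * it reports how many sectors of the request it consumed. */
static uint64_t handle_one_piece(uint64_t s, uint64_t sectors)
{
	(void)s;
	return sectors > BB_MAX_LEN ? BB_MAX_LEN : sectors;
}

/* Outer loop: keep feeding smaller pieces until the whole setting
 * range has been handled, as the comment above describes. */
static int set_in_pieces(uint64_t s, uint64_t sectors)
{
	int pieces = 0;

	while (sectors > 0) {
		uint64_t len = handle_one_piece(s, sectors);

		s += len;
		sectors -= len;
		pieces++;
	}
	return pieces;
}
```

A 1300-sector request, for example, is consumed as 512 + 512 + 276, three pieces, each handled against the table independently.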
254 * to no-space in bad blocks table, but the following E1, E2 and E3 ranges
256 * 1 free slot in bad blocks table.
260 * Since the bad blocks table is not full anymore, re-try for the
262 * bad blocks table with previous freed slot from multiple ranges merge.
264 * In the following example, in bad blocks table, E1 is an acked bad blocks
265 * range and E2 is an unacked bad blocks range, therefore they are not able
266 * to merge into a larger range. The setting bad blocks range S is acked,
275 * the bad blocks table should be (E3 is the remaining part of E2 which is not
281 * The above result is correct but not perfect. Range E1 and S in the bad
283 * occupy less space in the bad blocks table and make badblocks_check() faster.
290 * 6.3) Behind merge: If the already set bad blocks range E is behind the setting
313 * S in front of the already set range E in the bad blocks table. The result
320 * the bad block range setting conditions. Maybe there is some rare corner case
322 * to no space, or some ranges are not merged to save bad blocks table space.
326 * which starts before or at current setting range. Since the setting bad blocks
332 * return correct bad blocks table index immediately.
335 * Clearing a bad blocks range from the bad block table has a similar idea as
337 * when the clearing range hits the middle of a bad block range, the existing bad
339 * bad block table. The simplified situations to be considered are, (The already
340 * set bad blocks ranges in bad block table are named with prefix E, and the
341 * clearing bad blocks range is named with prefix C)
343 * 1) A clearing range is not overlapped with any already set range in the bad block
351 * For the above situations, no bad block to be cleared and no failure
353 * 2) The clearing range hits the middle of an already set bad blocks range in
354 * the bad block table.
361 * In this situation if the bad block table is not full, the range E will be
366 * 3) The clearing range starts exactly at the same LBA as an already set bad block range
367 * from the bad block table.
377 * item deleted from bad block table. The result is,
388 * For this situation the whole bad blocks range E will be cleared and its
389 * corresponding item is deleted from the bad block table.
390 * 4) The clearing range ends exactly at the same LBA as an already set bad block
404 * 5) The clearing range is partially overlapped with an already set bad block
405 * range from the bad block table.
406 * 5.1) The already set bad block range is front overlapped with the clearing
425 * 5.2) The already set bad block range is behind overlapped with the clearing
446 * All bad blocks range clearing can be simplified into the above 5 situations
448 * while-loop. The idea is similar to bad blocks range setting but much
453 * Find the range that starts at-or-before 's' from the bad table. The search
454 * starts from index 'hint' and stops at index 'hint_end' from the bad
476 * Find the range that starts at-or-before bad->start. If 'hint' is provided
477 * (hint >= 0) then search in the bad table from hint first. It is
478 * very likely that the wanted bad range can be found from the hint index,
481 static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad, in prev_badblocks() argument
484 sector_t s = bad->start; in prev_badblocks()
508 /* Do bisect search in bad table */ in prev_badblocks()
531 * Return 'true' if the range indicated by 'bad' can be forward
532 * merged with the bad range (from the bad table) indexed by 'prev'.
535 struct badblocks_context *bad) in can_merge_front() argument
537 sector_t s = bad->start; in can_merge_front()
540 if (BB_ACK(p[prev]) == bad->ack && in can_merge_front()
548 * Do forward merge for range indicated by 'bad' and the bad range
549 * (from bad table) indexed by 'prev'. The return value is sectors
550 * merged from bad->len.
552 static int front_merge(struct badblocks *bb, int prev, struct badblocks_context *bad) in front_merge() argument
554 sector_t sectors = bad->len; in front_merge()
555 sector_t s = bad->start; in front_merge()
571 BB_LEN(p[prev]) + merged, bad->ack); in front_merge()
579 * handle: If a bad range (indexed by 'prev' from bad table) exactly
580 * starts at bad->start, and the bad range ahead of 'prev' (indexed by
581 * 'prev - 1' from bad table) exactly ends at where 'prev' starts, and
583 * these two bad ranges (from bad table) can be combined.
585 * Return 'true' if bad ranges indexed by 'prev' and 'prev - 1' from bad
589 struct badblocks_context *bad) in can_combine_front() argument
594 (BB_OFFSET(p[prev]) == bad->start) && in can_combine_front()
603 * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from bad
604 * table) into one larger bad range, and the new range is indexed by
621 * Return 'true' if the range indicated by 'bad' is exactly forward
622 * overlapped with the bad range (from bad table) indexed by 'front'.
623 * Exactly forward overlap means the bad range (from bad table) indexed
624 * by 'front' does not cover the whole range indicated by 'bad'.
627 struct badblocks_context *bad) in overlap_front() argument
631 if (bad->start >= BB_OFFSET(p[front]) && in overlap_front()
632 bad->start < BB_END(p[front])) in overlap_front()
638 * Return 'true' if the range indicated by 'bad' is exactly backward
639 * overlapped with the bad range (from bad table) indexed by 'behind'.
641 static bool overlap_behind(struct badblocks *bb, struct badblocks_context *bad, in overlap_behind() argument
646 if (bad->start < BB_OFFSET(p[behind]) && in overlap_behind()
647 (bad->start + bad->len) > BB_OFFSET(p[behind])) in overlap_behind()
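The two overlap predicates above can be condensed into a standalone model. The bbo()/bbe() decoders are assumptions matching the entry layout documented later in this file (start in bits 62..9, length-1 in the low 9 bits):

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed entry decoders, per the layout documented in this file. */
static uint64_t bbo(uint64_t e) { return (e >> 9) & ((1ULL << 54) - 1); }
static uint64_t bbe(uint64_t e) { return bbo(e) + (e & 0x1FF) + 1; }

/* Front overlap: the setting range starts inside the existing one. */
static bool m_overlap_front(uint64_t e, uint64_t s)
{
	return s >= bbo(e) && s < bbe(e);
}

/* Behind overlap: the setting range starts before the existing one
 * but reaches into it. */
static bool m_overlap_behind(uint64_t e, uint64_t s, uint64_t len)
{
	return s < bbo(e) && s + len > bbo(e);
}
```

Note the half-open convention: a range starting exactly at the end of an existing entry is adjacent, not overlapping, which is why the merge paths are handled separately from the overlap paths.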
653 * Return 'true' if the range indicated by 'bad' can overwrite the bad
654 * range (from bad table) indexed by 'prev'.
656 * The range indicated by 'bad' can overwrite the bad range indexed by
658 * 1) The whole range indicated by 'bad' can cover partial or whole bad
659 * range (from bad table) indexed by 'prev'.
660 * 2) The ack value of 'bad' is larger or equal to the ack value of bad
663 * If the overwriting doesn't cover the whole bad range (from bad table)
664 * indexed by 'prev', new range might be split from existing bad range,
665 * 1) The overwrite covers head or tail part of existing bad range, 1
666 * extra bad range will be split and added into the bad table.
667 * 2) The overwrite covers middle of existing bad range, 2 extra bad
669 * added into the bad table.
674 struct badblocks_context *bad, int *extra) in can_front_overwrite() argument
679 WARN_ON(!overlap_front(bb, prev, bad)); in can_front_overwrite()
681 if (BB_ACK(p[prev]) >= bad->ack) in can_front_overwrite()
684 if (BB_END(p[prev]) <= (bad->start + bad->len)) { in can_front_overwrite()
685 len = BB_END(p[prev]) - bad->start; in can_front_overwrite()
686 if (BB_OFFSET(p[prev]) == bad->start) in can_front_overwrite()
691 bad->len = len; in can_front_overwrite()
693 if (BB_OFFSET(p[prev]) == bad->start) in can_front_overwrite()
698 * one, an extra slot needed from bad table. in can_front_overwrite()
710 * Do the overwrite from the range indicated by 'bad' to the bad range
711 * (from bad table) indexed by 'prev'.
713 * extra bad range(s) might be split and added into the bad table. All
714 * the splitting cases in the bad table will be handled here.
717 struct badblocks_context *bad, int extra) in front_overwrite() argument
726 bad->ack); in front_overwrite()
729 if (BB_OFFSET(p[prev]) == bad->start) { in front_overwrite()
731 bad->len, bad->ack); in front_overwrite()
734 p[prev + 1] = BB_MAKE(bad->start + bad->len, in front_overwrite()
739 bad->start - BB_OFFSET(p[prev]), in front_overwrite()
748 p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack); in front_overwrite()
753 bad->start - BB_OFFSET(p[prev]), in front_overwrite()
762 p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack); in front_overwrite()
771 return bad->len; in front_overwrite()
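The slot accounting behind the overwrite paths above can be sketched in isolation. Given the existing range [e_start, e_end) and the overwriting range [s, s_end), this hypothetical helper reproduces the 0/1/2 "extra slots" rule that can_front_overwrite() computes:

```c
#include <stdint.h>

/* Extra bad table slots a front overwrite needs:
 *   0 - the overwrite consumes the whole existing range
 *   1 - one remainder (head or tail of the existing range) survives
 *   2 - the overwrite lands strictly inside; both head and tail survive
 */
static int extra_split_slots(uint64_t e_start, uint64_t e_end,
			     uint64_t s, uint64_t s_end)
{
	if (e_end <= s_end)			/* overwrite reaches the tail */
		return (e_start == s) ? 0 : 1;
	/* a tail remainder of the existing range survives */
	return (e_start == s) ? 1 : 2;
}
```

This is why a middle overwrite against a full table must fail: it needs two free slots that cannot be allocated.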
775 * Explicitly insert a range indicated by 'bad' to the bad table, where
778 static int insert_at(struct badblocks *bb, int at, struct badblocks_context *bad) in insert_at() argument
785 len = min_t(sector_t, bad->len, BB_MAX_LEN); in insert_at()
788 p[at] = BB_MAKE(bad->start, len, bad->ack); in insert_at()
814 * Return 'true' if the range indicated by 'bad' is exactly backward
815 * overlapped with the bad range (from bad table) indexed by 'behind'.
838 /* Do the exact work to set a bad block range into the bad block table */
843 struct badblocks_context bad; in _badblocks_set() local
867 bad.ack = acknowledged; in _badblocks_set()
871 bad.start = s; in _badblocks_set()
872 bad.len = sectors; in _badblocks_set()
879 len = insert_at(bb, 0, &bad); in _badblocks_set()
885 prev = prev_badblocks(bb, &bad, hint); in _badblocks_set()
890 if (bad.len > (BB_OFFSET(p[0]) - bad.start)) in _badblocks_set()
891 bad.len = BB_OFFSET(p[0]) - bad.start; in _badblocks_set()
892 len = insert_at(bb, 0, &bad); in _badblocks_set()
900 if (can_combine_front(bb, prev, &bad)) { in _badblocks_set()
908 if (can_merge_front(bb, prev, &bad)) { in _badblocks_set()
909 len = front_merge(bb, prev, &bad); in _badblocks_set()
915 if (overlap_front(bb, prev, &bad)) { in _badblocks_set()
918 if (!can_front_overwrite(bb, prev, &bad, &extra)) { in _badblocks_set()
928 len = front_overwrite(bb, prev, &bad, extra); in _badblocks_set()
932 if (can_combine_front(bb, prev, &bad)) { in _badblocks_set()
941 /* cannot merge and there is space in bad table */ in _badblocks_set()
943 overlap_behind(bb, &bad, prev + 1)) in _badblocks_set()
944 bad.len = min_t(sector_t, in _badblocks_set()
945 bad.len, BB_OFFSET(p[prev + 1]) - bad.start); in _badblocks_set()
947 len = insert_at(bb, prev + 1, &bad); in _badblocks_set()
982 * Clear the bad block range from bad block table which is front overlapped
984 * already set bad block range are cleared. If the whole bad block range is
989 struct badblocks_context *bad, int *deleted) in front_clear() argument
991 sector_t sectors = bad->len; in front_clear()
992 sector_t s = bad->start; in front_clear()
1028 * bad block range from bad block table. In this condition the existing bad
1032 struct badblocks_context *bad) in front_splitting_clear() argument
1037 sector_t sectors = bad->len; in front_splitting_clear()
1038 sector_t s = bad->start; in front_splitting_clear()
1048 /* Do the exact work to clear bad block range from the bad block table */
1051 struct badblocks_context bad; in _badblocks_clear() local
1070 * However it is better to think a block is bad when it in _badblocks_clear()
1071 * isn't than to think a block is not bad when it is. in _badblocks_clear()
1081 bad.ack = true; in _badblocks_clear()
1085 bad.start = s; in _badblocks_clear()
1086 bad.len = sectors; in _badblocks_clear()
1095 prev = prev_badblocks(bb, &bad, hint); in _badblocks_clear()
1099 if (overlap_behind(bb, &bad, 0)) { in _badblocks_clear()
1106 * Both situations are to clear non-bad range, in _badblocks_clear()
1114 if ((prev + 1) >= bb->count && !overlap_front(bb, prev, &bad)) { in _badblocks_clear()
1120 /* Clear will split a bad record but the table is full */ in _badblocks_clear()
1121 if (badblocks_full(bb) && (BB_OFFSET(p[prev]) < bad.start) && in _badblocks_clear()
1122 (BB_END(p[prev]) > (bad.start + sectors))) { in _badblocks_clear()
1127 if (overlap_front(bb, prev, &bad)) { in _badblocks_clear()
1128 if ((BB_OFFSET(p[prev]) < bad.start) && in _badblocks_clear()
1129 (BB_END(p[prev]) > (bad.start + bad.len))) { in _badblocks_clear()
1132 len = front_splitting_clear(bb, prev, &bad); in _badblocks_clear()
1142 len = front_clear(bb, prev, &bad, &deleted); in _badblocks_clear()
1152 if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) { in _badblocks_clear()
1153 len = BB_OFFSET(p[prev + 1]) - bad.start; in _badblocks_clear()
1155 /* Clear non-bad range should be treated as successful */ in _badblocks_clear()
1162 /* Clear non-bad range should be treated as successful */ in _badblocks_clear()
1185 /* Do the exact work to check bad blocks range from the bad block table */
1190 struct badblocks_context bad; in _badblocks_check() local
1197 bad.start = s; in _badblocks_check()
1198 bad.len = sectors; in _badblocks_check()
1205 prev = prev_badblocks(bb, &bad, hint); in _badblocks_check()
1209 ((prev + 1) >= bb->count) && !overlap_front(bb, prev, &bad)) { in _badblocks_check()
1215 if ((prev >= 0) && overlap_front(bb, prev, &bad)) { in _badblocks_check()
1235 if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) { in _badblocks_check()
1236 len = BB_OFFSET(p[prev + 1]) - bad.start; in _badblocks_check()
1265 * badblocks_check() - check a given range for bad sectors
1272 * We can record which blocks on each device are 'bad' and so just
1274 * Entries in the bad-block table are 64bits wide. This comprises:
1275 * Length of bad-range, in sectors: 0-511 for lengths 1-512
1276 * Start of bad-range, sector offset, 54 bits (allows 8 exbibytes)
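The 64-bit entry layout described above (length-1 in the low 9 bits, 54-bit start sector, top bit as the acknowledged flag) can be modelled outside the kernel like this; the masks are written to mirror the BB_* macros this file uses, so treat them as a sketch rather than the authoritative definitions:

```c
#include <stdint.h>

/* One bad table entry:
 * bit  63    : acknowledged flag
 * bits 62..9 : start sector (54 bits)
 * bits  8..0 : length - 1 (0..511 encodes 1..512 sectors)
 */
#define BB_LEN_MASK	0x00000000000001FFULL
#define BB_OFFSET_MASK	0x7FFFFFFFFFFFFE00ULL
#define BB_ACK_MASK	0x8000000000000000ULL
#define BB_MAX_LEN	512

#define BB_OFFSET(x)	(((x) & BB_OFFSET_MASK) >> 9)
#define BB_LEN(x)	(((x) & BB_LEN_MASK) + 1)
#define BB_ACK(x)	(!!((x) & BB_ACK_MASK))
#define BB_END(x)	(BB_OFFSET(x) + BB_LEN(x))
#define BB_MAKE(a, l, ack) \
	(((uint64_t)(a) << 9) | ((l) - 1) | ((uint64_t)(!!(ack)) << 63))
```

Storing length-1 rather than length is what gives the 1..512 range from 9 bits, and is also why a zero-length range cannot be represented.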
1281 * Locking of the bad-block table uses a seqlock so badblocks_check
1283 * We will sometimes want to check for bad blocks in a bi_end_io function,
1286 * When looking for a bad block we specify a range and want to
1287 * know if any block in the range is bad. So we binary-search
1293 * 0: there are no known bad blocks in the range
1294 * 1: there are known bad blocks which are all acknowledged
1295 * -1: there are bad blocks which have not yet been acknowledged in metadata.
1296 * plus the start/length of the first bad section we overlap.
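The 0 / 1 / -1 return convention above can be modelled with a toy linear check (the real code binary-searches under a seqlock, as described). The decoders assume the entry layout documented earlier; an unacknowledged overlap dominates any acknowledged ones:

```c
#include <stdint.h>

/* Assumed entry decoders, per the layout documented in this file. */
static uint64_t c_off(uint64_t e) { return (e >> 9) & ((1ULL << 54) - 1); }
static uint64_t c_end(uint64_t e) { return c_off(e) + (e & 0x1FF) + 1; }
static int c_ack(uint64_t e) { return (int)(e >> 63); }

/* Toy model of the documented return convention:
 *   0  - no known bad blocks touch [s, s + len)
 *   1  - bad blocks overlap the range and all are acknowledged
 *  -1  - at least one overlapping bad block is unacknowledged
 */
static int model_check(const uint64_t *p, int count, uint64_t s, uint64_t len)
{
	int rv = 0;

	for (int i = 0; i < count; i++) {
		if (s < c_end(p[i]) && s + len > c_off(p[i])) {
			if (!c_ack(p[i]))
				return -1;	/* unacked dominates */
			rv = 1;
		}
	}
	return rv;
}
```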
1326 * badblocks_set() - Add a range of bad blocks to the table.
1328 * @s: first sector to mark as bad
1329 * @sectors: number of sectors to mark as bad
1330 * @acknowledged: whether to mark the bad sectors as acknowledged
1349 * badblocks_clear() - Remove a range of bad blocks from the table.
1351 * @s: first sector to clear
1352 * @sectors: number of sectors to clear
1369 * ack_all_badblocks() - Acknowledge all bad blocks in a list.
1406 * badblocks_show() - sysfs access to bad-blocks list
1455 * badblocks_store() - sysfs access to bad-blocks list