/linux/crypto/async_tx/

async_pq.c
     20:  /* the struct page *blocks[] parameter passed to async_gen_syndrome()
     22:   * blocks[disks-2] and the 'Q' destination address at blocks[disks-1]
    107:  do_sync_gen_syndrome(struct page **blocks, unsigned int *offsets, int disks, ...
    117:  srcs = (void **) blocks;  [in do_sync_gen_syndrome()]
    120:  if (blocks[i] == NULL) {  [in do_sync_gen_syndrome()]
    124:  srcs[i] = page_address(blocks[i]) + offsets[i];  [in do_sync_gen_syndrome()]
    157:   * @blocks: source blocks from idx 0..disks-3, P @ disks-2 and Q @ disks-1
    159:   * @disks: number of blocks (includin...
    177:  async_gen_syndrome(struct page **blocks, unsigned int *offsets, int disks, size_t len, struct async_submit_ctl *submit)
    272:  pq_val_chan(struct async_submit_ctl *submit, struct page **blocks, int disks, size_t len)
    298:  async_syndrome_val(struct page **blocks, unsigned int *offsets, int disks, size_t len, enum sum_check_flags *pqres, struct page *spare, unsigned int s_off, struct async_submit_ctl *submit)
    ...   (more matches not shown)

async_raid6_recov.c
    154:  struct page **blocks, unsigned int *offs,  [in __2data_recov_4()]
    168:  p = blocks[disks-2];  [in __2data_recov_4()]
    170:  q = blocks[disks-1];  [in __2data_recov_4()]
    173:  a = blocks[faila];  [in __2data_recov_4()]
    175:  b = blocks[failb];  [in __2data_recov_4()]
    204:  struct page **blocks, unsigned int *offs,  [in __2data_recov_5()]
    222:  if (blocks[i] == NULL)  [in __2data_recov_5()]
    231:  p = blocks[disks-2];  [in __2data_recov_5()]
    233:  q = blocks[disks-1];  [in __2data_recov_5()]
    235:  g = blocks[goo...  [in __2data_recov_5()]
    295:  __2data_recov_n(int disks, size_t bytes, int faila, int failb, struct page **blocks, unsigned int *offs, struct async_submit_ctl *submit)
    394:  async_raid6_2data_recov(int disks, size_t bytes, int faila, int failb, struct page **blocks, unsigned int *offs, struct async_submit_ctl *submit)
    472:  async_raid6_datap_recov(int disks, size_t bytes, int faila, struct page **blocks, unsigned int *offs, struct async_submit_ctl *submit)
    ...   (more matches not shown)
/linux/Documentation/userspace-api/media/v4l/

vidioc-g-edid.rst
     60:  ``start_block``, ``blocks`` and ``edid`` fields, zero the ``reserved``
     62:  ``start_block`` and of size ``blocks`` will be placed in the memory
     64:  ``blocks`` * 128 bytes large (the size of one block is 128 bytes).
     66:  If there are fewer blocks than specified, then the driver will set
     67:  ``blocks`` to the actual number of blocks. If there are no EDID blocks
     70:  If blocks have to be retrieved from the sink, then this call will block
     73:  If ``start_block`` and ``blocks`` are both set to 0 when
     74:  :ref:`VIDIOC_G_EDID <VIDIOC_G_EDID>` is called, then the driver will set ``blocks`` t...
    ...   (more matches not shown)
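The vidioc-g-edid.rst hits above describe the usual two-step EDID read: call VIDIOC_G_EDID with ``start_block`` and ``blocks`` both set to 0 so the driver reports how many EDID blocks the sink has, then allocate ``blocks`` * 128 bytes and fetch them. Below is a minimal userspace sketch of that sequence; the device path /dev/v4l-subdev0 and the bare-bones error handling are illustrative assumptions, not taken from the documentation snippet.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_edid e;
    int fd = open("/dev/v4l-subdev0", O_RDWR);   /* assumed device node */

    if (fd < 0)
        return 1;

    /* Pass start_block = blocks = 0: the driver fills e.blocks with the
     * total number of EDID blocks available from the sink. */
    memset(&e, 0, sizeof(e));
    if (ioctl(fd, VIDIOC_G_EDID, &e) < 0 || e.blocks == 0)
        return 1;

    /* One block is 128 bytes, so the buffer must be blocks * 128 bytes. */
    e.edid = malloc(e.blocks * 128);
    if (!e.edid)
        return 1;
    e.start_block = 0;
    if (ioctl(fd, VIDIOC_G_EDID, &e) < 0)
        return 1;

    printf("read %u EDID block(s)\n", e.blocks);
    free(e.edid);
    return 0;
}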
/linux/Documentation/admin-guide/device-mapper/

writecache.rst
     27:  start writeback when the number of used blocks reach this
     30:  stop writeback when the number of used blocks drops below
     33:  limit the number of blocks that are in flight during
     37:  when the application writes this amount of blocks without
     38:  issuing the FLUSH request, the blocks are automatically
     58:  new writes (however, writes to already cached blocks are
     63:  blocks drops to zero, userspace can unload the
     80:  2. the number of blocks
     81:  3. the number of free blocks
     82:  4. the number of blocks unde...
    ...   (more matches not shown)

dm-dust.rst
     10:  requests on specific blocks (to emulate the behavior of a hard disk
     14:  "dmsetup status" displays "fail_read_on_bad_block"), reads of blocks
     17:  Writes of blocks in the "bad block list will result in the following:
     28:  messages to add arbitrary bad blocks at new locations, and the
     30:  configured "bad blocks" will be treated as bad, or bypassed.
     86:  Adding and removing bad blocks
     90:  enabled or disabled), bad blocks may be added or removed from the
    102:  These bad blocks will be stored in the "bad block list".
    128:  ...and writing to the bad blocks will remove the blocks fro...
    ...   (more matches not shown)

era.rst
      9:  addition it keeps track of which blocks were written within a user
     14:  Use cases include tracking changed blocks for backup software, and
     25:  origin dev                device holding data blocks that may change
     55:  <metadata block size> <#used metadata blocks>/<#total metadata blocks>
     61:  #used metadata blocks     Number of metadata blocks used
     62:  #total metadata blocks    Total number of metadata blocks
     64:  held metadata root        The location, in blocks, o...
    ...   (more matches not shown)

vdo.rst
     77:  of 4096-byte blocks. Must match the current size of the vdo
     87:  blocks. The minimum and recommended value is 32768 blocks.
     89:  must be at least 4096 blocks per logical thread.
    136:  deduplication based on the hash value of data blocks. The
    155:  blocks. I/O requests to a vdo volume are normally split
    156:  into 4096-byte blocks, and processed up to 2048 at a time.
    159:  4096-byte blocks in a single bio, and are limited to 1500
    185:  least 32832 4096-byte blocks if at all, and must not exceed the size of the
    283:  <compression state> <physical blocks use...
    ...   (more matches not shown)

cache.rst
     56:  3. A small metadata device - records which blocks are in the cache,
     66:  The origin is divided up into blocks of a fixed size. This block size
     90:  blocks should remain clean.
    107:  dirty blocks in a cache. Useful for decommissioning a cache or when
    109:  blocks, in the area of the cache being removed, to be clean. If the
    110:  area being removed from the cache still contains dirty blocks the resize
    143:  system crashes all cache blocks will be assumed dirty when restarted.
    168:  blocks. However, we allow this bitset to have a different block size
    169:  from the cache blocks. This is because we need to track the discard
    187:  cache dev                 fast device holding cached data blocks
    ...   (more matches not shown)

verity.rst
     50:  The number of data blocks on the data device. Additional blocks are
     55:  This is the offset, in <hash_block_size>-blocks, from the start of hash_dev
     79:  Log corrupted blocks, but allow read operations to proceed normally.
    100:  Do not verify blocks that are expected to contain zeroes and always return
    101:  zeroes instead. This may be useful if the partition contains unused blocks
    107:  may be the same device where data and hash blocks reside, in which case
    111:  on the hash device after the hash blocks.
    122:  The number of encoding data blocks on the FEC device. The block size for
    126:  This is the offset, in <data_block_size> blocks, fro...
    ...   (more matches not shown)
/linux/crypto/

aegis128-core.c
     32:  union aegis_block blocks[AEGIS128_STATE_BLOCKS];
     66:  tmp = state->blocks[AEGIS128_STATE_BLOCKS - 1];  [in crypto_aegis128_update()]
     68:  crypto_aegis_aesenc(&state->blocks[i], &state->blocks[i - 1],  [in crypto_aegis128_update()]
     69:  &state->blocks[i]);  [in crypto_aegis128_update()]
     70:  crypto_aegis_aesenc(&state->blocks[0], &tmp, &state->blocks[0]);  [in crypto_aegis128_update()]
     83:  crypto_aegis_block_xor(&state->blocks[0], msg);  [in crypto_aegis128_update_a()]
     95:  crypto_xor(state->blocks[0].bytes, msg, AEGIS_BLOCK_SIZE);  [in crypto_aegis128_update_u()]
    108:  state->blocks[...  [in crypto_aegis128_init()]
    ...   (more matches not shown)
/linux/Documentation/filesystems/ext4/

blocks.rst
      3:  Blocks  [title]
      6:  ext4 allocates storage space in units of “blocks”. A block is a group of
      8:  integral power of 2. Blocks are in turn grouped into larger units called
     11:  page size (i.e. 64KiB blocks on a i386 which only has 4KiB memory
     12:  pages). By default a filesystem can contain 2^32 blocks; if the '64bit'
     13:  feature is enabled, then a filesystem can have 2^64 blocks. The location
     28:  * - Blocks
     43:  * - Blocks Per Block Group
     58:  * - Blocks Per File, Extents
     63:  * - Blocks Pe...
    ...   (more matches not shown)
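As a quick worked example of the limits quoted above: with the common 4 KiB block size, a filesystem capped at 2^32 blocks tops out at 16 TiB, while the '64bit' feature raises the block-count limit to 2^64. The block size here is an assumed typical value, not taken from the snippet.

#include <stdio.h>

int main(void)
{
    unsigned long long block_size = 4096;        /* assumed common 4 KiB block size */
    unsigned long long max_blocks = 1ULL << 32;  /* default 32-bit block numbers */

    /* 4096 * 2^32 = 2^44 bytes = 16 TiB */
    printf("max size without the 64bit feature: %llu TiB\n",
           (block_size * max_blocks) >> 40);
    return 0;
}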
/linux/fs/jffs2/

jffs2_fs_sb.h
     80:  /* Number of free blocks there must be before we... */
     86:  /* Number of 'very dirty' blocks before we trigger immediate GC */
     92:  struct jffs2_eraseblock *blocks;  /* The whole array of blocks. Used for getting blocks
     93:   * from the offset (blocks[ofs / sector_size]) */
     98:  struct list_head clean_list;       /* Blocks 100% full of clean data */
     99:  struct list_head very_dirty_list;  /* Blocks with lots of dirty space */
    100:  struct list_head dirty_list;       /* Blocks with some dirty space */
    101:  struct list_head erasable_list;    /* Blocks whic...
    ...   (more matches not shown)
/linux/arch/arm64/crypto/

aes-neonbs-glue.c
     30:  int rounds, int blocks);
     32:  int rounds, int blocks);
     35:  int rounds, int blocks, u8 iv[]);
     38:  int rounds, int blocks, u8 iv[]);
     41:  int rounds, int blocks, u8 iv[]);
     43:  int rounds, int blocks, u8 iv[]);
     47:  int rounds, int blocks);
     49:  int rounds, int blocks, u8 iv[]);
     97:  int rounds, int blocks))  [in __ecb_crypt()]
    107:  unsigned int blocks...  [in __ecb_crypt()]
    167:  unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;  [in cbc_encrypt()]
    190:  unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;  [in cbc_decrypt()]
    218:  int blocks = (walk.nbytes / AES_BLOCK_SIZE) & ~7;  [in ctr_encrypt()]
    279:  __xts_crypt(struct skcipher_request *req, bool encrypt, void (*fn)(u8 out[], u8 const in[], u8 const rk[], int rounds, int blocks, u8 iv[]))
    318:  int blocks = (walk.nbytes / AES_BLOCK_SIZE) & ~7;  [in __xts_crypt()]
    ...   (more matches not shown)
/linux/Documentation/devicetree/bindings/sifive/

sifive-blocks-ip-versioning.txt
      1:  DT compatible string versioning for SiFive open-source IP blocks
      4:  strings for open-source SiFive IP blocks. HDL for these IP blocks
      7:  https://github.com/sifive/sifive-blocks
     14:  https://github.com/sifive/sifive-blocks/blob/v1.0/src/main/scala/devices/uart/UART.scala#L43
     16:  Until these IP blocks (or IP integration) support version
     17:  auto-discovery, the maintainers of these IP blocks intend to increment
     19:  interface to these IP blocks changes, or when the functionality of the
     20:  underlying IP blocks changes in a way that software should be aware of.
     25:  upstream sifive-blocks commit...
    ...   (more matches not shown)
/linux/Documentation/admin-guide/mm/

memory-hotplug.rst
     46:  Memory sections are combined into chunks referred to as "memory blocks". The
     51:  All memory blocks have the same size.
     59:  (2) Onlining memory blocks
     62:  for the direct mapping, is allocated and initialized, and memory blocks are
     64:  blocks.
     75:  (1) Offlining memory blocks
     83:  In the second phase, the memory blocks are removed and metadata is freed.
    109:  blocks, and, if successful, hotunplug the memory from Linux.
    122:  Only complete memory blocks can be probed. Individual memory blocks ar...
    ...   (more matches not shown)
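The onlining/offlining hits above refer to per-block state transitions that userspace can also drive through sysfs. A hedged sketch follows, assuming the conventional /sys/devices/system/memory/memoryN/state layout and an arbitrary block id; neither is quoted in the snippet.

#include <stdio.h>

int main(void)
{
    /* Block id 8 is an arbitrary example; writing the state file needs root. */
    const char *path = "/sys/devices/system/memory/memory8/state";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    /* Writing "online" asks the kernel to online this memory block;
     * "offline" requests the reverse. Reading the file reports the state. */
    if (fputs("online", f) == EOF) {
        fclose(f);
        return 1;
    }
    return fclose(f) ? 1 : 0;
}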
/linux/arch/x86/crypto/

ecb_cbc_helpers.h
     32:  #define ECB_WALK_ADVANCE(blocks) do { \
     33:  dst += (blocks) * __bsize; \
     34:  src += (blocks) * __bsize; \
     35:  nbytes -= (blocks) * __bsize; \
     38:  #define ECB_BLOCK(blocks, func) do { \
     39:  const int __blocks = (blocks); \
     46:  ECB_WALK_ADVANCE(blocks); \
     61:  #define CBC_DEC_BLOCK(blocks, func) do { \
     62:  const int __blocks = (blocks); \
     68:  const u8 *__iv = src + ((blocks)...
    ...   (more matches not shown)

cast5-avx-x86_64-asm_64.S
    218:   * RL1: blocks 1 and 2
    219:   * RR1: blocks 3 and 4
    220:   * RL2: blocks 5 and 6
    221:   * RR2: blocks 7 and 8
    222:   * RL3: blocks 9 and 10
    223:   * RR3: blocks 11 and 12
    224:   * RL4: blocks 13 and 14
    225:   * RR4: blocks 15 and 16
    227:   * RL1: encrypted blocks 1 and 2
    228:   * RR1: encrypted blocks...
    ...   (more matches not shown)
/linux/drivers/mtd/

rfd_ftl.c
     88:  struct block *blocks;
     95:  struct block *block = &part->blocks[block_no];  [in build_block_map()]
    188:  part->blocks = kcalloc(part->total_blocks, sizeof(struct block),  [in scan_header()]
    190:  if (!part->blocks)  [in scan_header()]
    235:  kfree(part->blocks);  [in scan_header()]
    277:  erase->addr = part->blocks[block].offset;  [in erase_block()]
    280:  part->blocks[block].state = BLOCK_ERASING;  [in erase_block()]
    281:  part->blocks[block].free_sectors = 0;  [in erase_block()]
    288:  part->blocks[block].state = BLOCK_FAILED;  [in erase_block()]
    289:  part->blocks[bloc...  [in erase_block()]
    ...   (more matches not shown)
/linux/fs/jfs/

jfs_extent.c
     82:  /* This blocks if we are low on resources */  [in extAlloc()]
    105:   * extent if we can allocate the blocks immediately  [in extAlloc()]
    116:  /* allocate the disk blocks for the extent. initially, extBalloc()  [in extAlloc()]
    117:   * will try to allocate disk blocks for the requested size (xlen).  [in extAlloc()]
    118:   * if this fails (xlen contiguous free blocks not available), it'll  [in extAlloc()]
    119:   * try to allocate a smaller number of blocks (producing a smaller  [in extAlloc()]
    120:   * extent), with this smaller number of blocks consisting of the  [in extAlloc()]
    121:   * requested number of blocks rounded down to the next smaller  [in extAlloc()]
    123:   * and retry the allocation until the number of blocks to allocate  [in extAlloc()]
    124:   * is smaller than the number of blocks pe...  [in extAlloc()]
    ...   (more matches not shown)
/linux/drivers/gpu/drm/msm/disp/dpu1/

dpu_hw_catalog.h
     17:   * 5 ctl paths. In all cases, it can have max 12 hardware blocks
     32:   * SSPP sub-blocks/features
     68:   * MIXER sub-blocks/features
     78:   * DSPP sub-blocks
     87:   * CTL sub-blocks
     97:   * WB sub-blocks and features
    130:   * VBIF sub-blocks and features
    142:   * DSC sub-blocks/features
    152:   * MACRO DPU_HW_BLK_INFO - information of HW blocks inside DPU
    282:   * struct dpu_sspp_sub_blks : SSPP sub-blocks...
    ...   (more matches not shown)
/linux/fs/xfs/libxfs/

xfs_btree_staging.c
     27:   * initializing new btree blocks and filling them with records or key/ptr
    184:   * height of and the number of blocks needed to construct the btree. See the
    188:   * In step four, the caller must allocate xfs_btree_bload.nr_blocks blocks and
    190:   * blocks to be allocated beforehand to avoid ENOSPC failures midway through a
    197:   * is responsible for cleaning up the previous btree blocks, if any.
    205:   * is the number of blocks in the next lower level of the tree. For each
    210:   * The number of blocks for the level is defined to be:
    212:   *     blocks = floor(nr_items / desired)
    218:   *     npb = nr_items / blocks
    220:   * Some of the leftmost blocks i...
    487:  xfs_btree_bload_level_geometry(struct xfs_btree_cur *cur, struct xfs_btree_bload *bbl, unsigned int level, uint64_t nr_this_level, unsigned int *avg_per_block, uint64_t *blocks, uint64_t *blocks_with_extra)
    685:  uint64_t blocks;  [in xfs_btree_bload()]
    ...   (more matches not shown)
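The comment hits at lines 210-220 spell out the per-level geometry rule for bulk loading: blocks = floor(nr_items / desired), npb = nr_items / blocks, with any leftover records spread one apiece over the leftmost blocks. Below is a small standalone sketch of just that arithmetic; the names and demo values are illustrative, and the real kernel function additionally clamps against the btree's minimum/maximum records per block.

#include <stdint.h>
#include <stdio.h>

static void level_geometry(uint64_t nr_items, uint64_t desired,
                           uint64_t *blocks, uint64_t *avg_per_block,
                           uint64_t *blocks_with_extra)
{
    *blocks = nr_items / desired;            /* blocks = floor(nr_items / desired) */
    if (*blocks == 0)
        *blocks = 1;                         /* a level always has at least one block */
    *avg_per_block = nr_items / *blocks;     /* npb = nr_items / blocks */
    *blocks_with_extra = nr_items % *blocks; /* these blocks carry npb + 1 items */
}

int main(void)
{
    uint64_t blocks, avg, extra;

    level_geometry(1000, 64, &blocks, &avg, &extra);
    /* 1000 items at 64 desired per block -> 15 blocks, 66 per block, 10 with 67 */
    printf("%llu blocks, %llu items/block, %llu with one extra\n",
           (unsigned long long)blocks, (unsigned long long)avg,
           (unsigned long long)extra);
    return 0;
}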
/linux/arch/arm/crypto/

aes-neonbs-glue.c
     29:  int rounds, int blocks);
     31:  int rounds, int blocks);
     34:  int rounds, int blocks, u8 iv[]);
     37:  int rounds, int blocks, u8 ctr[]);
     40:  int rounds, int blocks, u8 iv[], int);
     42:  int rounds, int blocks, u8 iv[], int);
     82:  int rounds, int blocks))  [in __ecb_crypt()]
     92:  unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;  [in __ecb_crypt()]
     95:  blocks = round_down(blocks,  [in __ecb_crypt()]
    178:  unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;  [in cbc_decrypt()]
    254:  __xts_crypt(struct skcipher_request *req, bool encrypt, void (*fn)(u8 out[], u8 const in[], u8 const rk[], int rounds, int blocks, u8 iv[], int))
    285:  unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;  [in __xts_crypt()]
    ...   (more matches not shown)
/linux/Documentation/filesystems/

ext2.rst
     49:  resuid=n        The user ID which may use the reserved blocks.
     50:  resgid=n        The group ID which may use the reserved blocks.
     76:  the concepts of blocks, inodes and directories. It has space in the
     83:  Blocks  [section heading]
     86:  The space in the device or file is split up into blocks. These are
     88:  which is decided when the filesystem is created. Smaller blocks mean
     95:  Blocks are clustered into block groups in order to reduce fragmentation
     99:  Two blocks near the start of each group are reserved for the block usage
    100:  bitmap and the inode usage bitmap which show which blocks and inodes
    106:  blocks...
    ...   (more matches not shown)

nilfs2.rst
     67:  blocks to be written to disk without making a
     70:  filesystem except for the updates on data blocks still
     75:  blocks. That means, it is guaranteed that no
     84:  block device when blocks are freed. This is useful
    125:  due to redundant move of in-use blocks.
    193:  of logs. Each log is composed of summary information blocks, payload
    194:  blocks, and an optional super root block (SR)::
    209:  | Summary | Payload blocks |SR|
    212:  The payload blocks are organized per file, and each file consists of
    213:  data blocks an...
    ...   (more matches not shown)
/linux/arch/m68k/emu/

nfblock.c
     40:  static inline s32 nfhd_get_capacity(u32 major, u32 minor, u32 *blocks,  [in nfhd_get_capacity()]
     44:  virt_to_phys(blocks), virt_to_phys(blocksize));  [in nfhd_get_capacity()]
     55:  u32 blocks, bsize;
     84:  geo->cylinders = dev->blocks >> (6 - dev->bshift);  [in nfhd_getgeo()]
     97:  static int __init nfhd_init_one(int id, u32 blocks, u32 bsize)  [in nfhd_init_one()]
    107:  pr_info("nfhd%u: found device with %u blocks (%u bytes)\n", dev_id,  [in nfhd_init_one()]
    108:  blocks, bsize);  [in nfhd_init_one()]
    120:  dev->blocks = blocks;  [in nfhd_init_one()]
    136:  set_capacity(dev->disk, (sector_t)blocks * (bsiz...  [in nfhd_init_one()]
    155:  u32 blocks, bsize;  [in nfhd_init()]
    ...   (more matches not shown)
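The hit at line 136 appears to be the usual conversion from a device's native block count to the 512-byte sectors that the block layer counts capacity in: a device with `blocks` blocks of `bsize` bytes spans blocks * (bsize >> 9) sectors. A tiny illustrative sketch with made-up values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t blocks = 262144;     /* made-up device size in native blocks */
    uint32_t bsize  = 1024;       /* made-up bytes per native block */
    uint64_t sectors = (uint64_t)blocks * (bsize >> 9);

    /* 262144 blocks * 2 sectors/block = 524288 sectors = 256 MiB */
    printf("%llu sectors, %llu MiB\n",
           (unsigned long long)sectors,
           (unsigned long long)(sectors >> 11));
    return 0;
}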