Lines Matching +full:32 +full:bit
6 * Protect against 64-bit values tearing on 32-bit architectures. This is
11 * - Use a seqcount on 32-bit SMP, only disable preemption for 32-bit UP.
12 * - The whole thing is a no-op on 64-bit architectures.
27 * used for 64bit architectures).
34 * seqcounts are not used for UP kernels). 32-bit UP stat readers could read
35 * corrupted 64-bit values otherwise.
69 #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
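These matches appear to come from the kernel header include/linux/u64_stats_sync.h (the u64_stats_* helper names below suggest that file; the exact kernel version is an assumption). The guard matched at line 69 wraps the only member of the synchronization object, so roughly the type reduces to an empty struct everywhere except 32-bit SMP:

struct u64_stats_sync {
#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
        seqcount_t seq;   /* real protection only on 32-bit SMP */
#endif
};

On 64-bit, and on 32-bit UP, the struct is empty and the helpers matched below either compile away or degenerate into preemption/interrupt toggling, which is what the comment matches at lines 6-35 describe.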
120 #if BITS_PER_LONG == 32 && defined(CONFIG_SMP) in u64_stats_init()
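A minimal sketch of how a driver might embed and initialize the helper; the names foo_stats, syncp, rx_packets and rx_bytes are hypothetical, not taken from the header:

#include <linux/u64_stats_sync.h>

struct foo_stats {
        u64                     rx_packets;
        u64                     rx_bytes;
        struct u64_stats_sync   syncp;
};

static void foo_stats_setup(struct foo_stats *stats)
{
        u64_stats_init(&stats->syncp);  /* seqcount_init() on 32-bit SMP, no-op otherwise */
}

Per-CPU instances of such a struct are the usual pattern, since the write side is required to already have exclusive access to the counters.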
127 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in u64_stats_update_begin()
134 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in u64_stats_update_end()
144 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in u64_stats_update_begin_irqsave()
155 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in u64_stats_update_end_irqrestore()
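The four matches above are the write-side guards: begin/end bracket the non-atomic counter updates, and the _irqsave/_irqrestore pair additionally masks interrupts on 32-bit SMP for paths that can race with an interrupt-context writer. A sketch reusing the hypothetical foo_stats from above:

/* Caller already has exclusive access (per-CPU data or a lock held). */
static void foo_count_rx(struct foo_stats *stats, unsigned int len)
{
        u64_stats_update_begin(&stats->syncp);
        stats->rx_packets++;            /* plain, non-atomic updates are fine here */
        stats->rx_bytes += len;
        u64_stats_update_end(&stats->syncp);
}

/* Variant for counters that an interrupt handler may also update. */
static void foo_count_rx_irqsafe(struct foo_stats *stats, unsigned int len)
{
        unsigned long flags;

        flags = u64_stats_update_begin_irqsave(&stats->syncp);
        stats->rx_packets++;
        stats->rx_bytes += len;
        u64_stats_update_end_irqrestore(&stats->syncp, flags);
}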
163 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in __u64_stats_fetch_begin()
172 #if BITS_PER_LONG==32 && !defined(CONFIG_SMP) in u64_stats_fetch_begin()
181 #if BITS_PER_LONG==32 && defined(CONFIG_SMP) in __u64_stats_fetch_retry()
191 #if BITS_PER_LONG==32 && !defined(CONFIG_SMP) in u64_stats_fetch_retry()
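On the read side, the matches show the split between the double-underscore helpers (seqcount read on 32-bit SMP, guarded by defined(CONFIG_SMP)) and their wrappers (preemption toggling on 32-bit UP, guarded by !defined(CONFIG_SMP)); on 64-bit everything reduces to plain loads. The usual snapshot loop, sketched with the same hypothetical struct:

static void foo_get_stats(const struct foo_stats *stats,
                          u64 *packets, u64 *bytes)
{
        unsigned int start;

        do {
                start = u64_stats_fetch_begin(&stats->syncp);
                *packets = stats->rx_packets;
                *bytes = stats->rx_bytes;
        } while (u64_stats_fetch_retry(&stats->syncp, start));
}

Each value comes out tear-free, but the two counters are not guaranteed to be mutually consistent on 64-bit, where no seqcount is used (the constraint the line 27 match belongs to).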
199 * - SMP 32bit arches use seqcount protection, irq safe.
200 * - UP 32bit must disable irqs.
201 * - 64bit have no problem atomically reading u64 values, irq safe.
205 #if BITS_PER_LONG==32 && !defined(CONFIG_SMP) in u64_stats_fetch_begin_irq()
214 #if BITS_PER_LONG==32 && !defined(CONFIG_SMP) in u64_stats_fetch_retry_irq()
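The last two matches are the reader helpers for counters that can also be updated from hardirq or softirq context, the situation described by the comment matches at lines 199-201 (and 34-35): on 32-bit UP the plain fetch helpers only disable preemption, so the _irq variants use their !defined(CONFIG_SMP) branch to disable and re-enable interrupts around the read instead. A sketch with the same hypothetical struct:

static void foo_get_stats_irqsafe(const struct foo_stats *stats,
                                  u64 *packets, u64 *bytes)
{
        unsigned int start;

        do {
                start = u64_stats_fetch_begin_irq(&stats->syncp);
                *packets = stats->rx_packets;
                *bytes = stats->rx_bytes;
        } while (u64_stats_fetch_retry_irq(&stats->syncp, start));
}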