.. SPDX-License-Identifier: GPL-2.0

===========
Page Tables
===========

Paged virtual memory was invented along with virtual memory as a concept in
1962 on the Ferranti Atlas Computer, which was the first computer with paged
virtual memory. The feature migrated to newer computers and became a de facto
feature of all Unix-like systems as time went by. In 1985 the feature was
included in the Intel 80386, which was the CPU Linux 1.0 was developed on.

Page tables map virtual addresses as seen by the CPU into physical addresses
as seen on the external memory bus.

Linux defines page tables as a hierarchy which is currently five levels in
height. The architecture code for each supported architecture will then
map this to the restrictions of the hardware.

The physical address corresponding to the virtual address is often referenced
by the underlying physical page frame. The **page frame number** or **pfn**
is the physical address of the page (as seen on the external memory bus)
divided by `PAGE_SIZE`.

Physical memory address 0 will be *pfn 0* and the highest pfn will be
the last page of physical memory the external address bus of the CPU can
address.

With 4KB pages, the page base address uses bits 12-31 of the
address, and this is why `PAGE_SHIFT` in this case is defined as 12 and
`PAGE_SIZE` is usually defined in terms of the page shift as
`(1 << PAGE_SHIFT)`.
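
As a minimal illustration of this arithmetic, here is a standalone sketch
(not kernel code; in the kernel, helpers such as `PHYS_PFN()` and
`PFN_PHYS()` perform these conversions)::

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12                 /* assuming 4KB pages */
  #define PAGE_SIZE  (1UL << PAGE_SHIFT)

  int main(void)
  {
          uint64_t phys   = 0x12345678;             /* an arbitrary physical address */
          uint64_t pfn    = phys >> PAGE_SHIFT;     /* page frame number */
          uint64_t base   = pfn << PAGE_SHIFT;      /* base address of the page */
          uint64_t offset = phys & (PAGE_SIZE - 1); /* offset within the page */

          printf("pfn %#llx, base %#llx, offset %#llx\n",
                 (unsigned long long)pfn, (unsigned long long)base,
                 (unsigned long long)offset);
          return 0;
  }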

Over time a deeper hierarchy has been developed in response to increasing memory
sizes. When Linux was created, 4KB pages and a single page table called
`swapper_pg_dir` with 1024 entries was used, covering 4MB, which coincided with
the fact that Torvalds' first computer had 4MB of physical memory. Entries in
this single table were referred to as *PTE*:s - page table entries.

The software page table hierarchy reflects the fact that page table hardware has
become hierarchical and that in turn is done to save page table memory and
speed up the mapping.

One could of course imagine a single, linear page table with enormous amounts
of entries, breaking down the whole memory into single pages. Such a page table
would be very sparse, because large portions of the virtual memory usually
remain unused. By using hierarchical page tables, large holes in the virtual
address space do not waste valuable page table memory, because it will suffice
to mark large areas as unmapped at a higher level in the page table hierarchy.
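
For a sense of scale: a flat, single-level table for a 32-bit address space
with 4KB pages would need an entry for every page, mapped or not::

  2^32 bytes of address space / 2^12 bytes per page = 2^20 entries
  2^20 entries * 4 bytes per entry = 4MB of page table per process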

Additionally, on modern CPUs, a higher level page table entry can point directly
to a physical memory range, which allows mapping a contiguous range of several
megabytes or even gigabytes in a single high-level page table entry, taking
shortcuts in mapping virtual memory to physical memory: there is no need to
traverse deeper in the hierarchy when you find a large mapped range like this.

The page table hierarchy has now developed into this::

  +-----+
  | PGD |
  +-----+
     |
     |   +-----+
     +-->| P4D |
         +-----+
            |
            |   +-----+
            +-->| PUD |
                +-----+
                   |
                   |   +-----+
                   +-->| PMD |
                       +-----+
                          |
                          |   +-----+
                          +-->| PTE |
                              +-----+

Symbols on the different levels of the page table hierarchy have the following
meaning, beginning from the bottom:

- **pte**, `pte_t`, `pteval_t` = **Page Table Entry** - mentioned earlier.
  The *pte* is an array of `PTRS_PER_PTE` elements of the `pteval_t` type, each
  mapping a single page of virtual memory to a single page of physical memory.
  The architecture defines the size and contents of `pteval_t`.

  A typical example is that the `pteval_t` is a 32- or 64-bit value with the
  upper bits being a **pfn** (page frame number) and the lower bits being some
  architecture-specific bits such as memory protection.

  The **entry** part of the name is a bit confusing because while in Linux 1.0
  this did refer to a single page table entry in the single top level page
  table, it was retrofitted to be an array of mapping elements when two-level
  page tables were first introduced, so the *pte* is the lowest level of the
  page table hierarchy.

- **pmd**, `pmd_t`, `pmdval_t` = **Page Middle Directory**, the hierarchy right
  above the *pte*, with `PTRS_PER_PMD` references to the *pte*:s.

- **pud**, `pud_t`, `pudval_t` = **Page Upper Directory** was introduced after
  the other levels to handle 4-level page tables. It is potentially unused,
  or *folded* as we will discuss later.

- **p4d**, `p4d_t`, `p4dval_t` = **Page Level 4 Directory** was introduced to
  handle 5-level page tables after the *pud* was introduced. Now it was clear
  that levels needed to be named by number rather than by yet another ad hoc
  name. It is only used on systems which actually have 5 levels of page
  tables, otherwise it is folded.

- **pgd**, `pgd_t`, `pgdval_t` = **Page Global Directory** - the Linux kernel
  main page table. The PGD for the kernel memory is still found in
  `swapper_pg_dir`, but each userspace process in the system also has its own
  memory context and thus its own *pgd*, found in `struct mm_struct` which
  in turn is referenced in each `struct task_struct`. So tasks have a memory
  context in the form of a `struct mm_struct` and this in turn has a
  `pgd_t *pgd` pointer to the corresponding page global directory, as
  sketched right after this list.
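
Here is a small sketch of that chain, assuming kernel context (`task_pgd()`
is a hypothetical helper written for this example, not a kernel API)::

  #include <linux/sched.h>
  #include <linux/mm_types.h>

  /* Hypothetical helper: reach a task's page global directory. */
  static pgd_t *task_pgd(struct task_struct *tsk)
  {
          /* Kernel threads have no userspace memory context, hence no mm. */
          return tsk->mm ? tsk->mm->pgd : NULL;
  }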

To repeat: each level in the page table hierarchy is an *array of pointers*, so
the **pgd** contains `PTRS_PER_PGD` pointers to the next level below, the
**p4d** contains `PTRS_PER_P4D` pointers to the **pud** items and so on. The
number of pointers on each level is architecture-defined::

          PMD
  --> +-----+            PTE
      | ptr |-------> +-----+
      | ptr |-        | ptr |-------> PAGE
      | ptr |  \      | ptr |
      | ptr |   \        ...
      | ... |    \
      | ptr |     \        PTE
      +-----+      +----> +-----+
                          | ptr |-------> PAGE
                          | ptr |
                             ...

Page table folding
==================

If the architecture does not use all the page table levels, they can be *folded*,
which means skipped, and all operations performed on page tables will be
compile-time augmented to just skip a level when accessing the next lower
level.

Page table handling code that wishes to be architecture-neutral, such as the
virtual memory manager, will need to be written so that it traverses all of the
currently five levels. This style should also be preferred for
architecture-specific code, so as to be robust to future changes.
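
As an example of such a traversal, here is a minimal sketch of a lookup that
walks all five levels down to the *pte* with the generic accessors (a sketch
assuming kernel context; locking, huge pages and error reporting are omitted)::

  #include <linux/mm.h>
  #include <linux/pgtable.h>

  static pte_t *walk_to_pte(struct mm_struct *mm, unsigned long addr)
  {
          pgd_t *pgd;
          p4d_t *p4d;
          pud_t *pud;
          pmd_t *pmd;

          pgd = pgd_offset(mm, addr);   /* top level, always present */
          if (pgd_none(*pgd) || pgd_bad(*pgd))
                  return NULL;

          p4d = p4d_offset(pgd, addr);  /* compiles to a no-op when folded */
          if (p4d_none(*p4d) || p4d_bad(*p4d))
                  return NULL;

          pud = pud_offset(p4d, addr);  /* compiles to a no-op when folded */
          if (pud_none(*pud) || pud_bad(*pud))
                  return NULL;

          pmd = pmd_offset(pud, addr);
          if (pmd_none(*pmd) || pmd_bad(*pmd))
                  return NULL;

          /* pte_offset_map() must be paired with pte_unmap() by the caller. */
          return pte_offset_map(pmd, addr);
  }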

MMU, TLB, and Page Faults
=========================

The `Memory Management Unit (MMU)` is a hardware component that handles virtual
to physical address translations. It may use relatively small caches in hardware
called `Translation Lookaside Buffers (TLBs)` and `Page Walk Caches` to speed up
these translations.

When the CPU accesses a memory location, it provides a virtual address to the
MMU, which checks if an existing translation is present in the TLB or in the
Page Walk Caches (on architectures that support them). If no translation is
found, the MMU performs a page table walk to determine the physical address and
create the mapping.

Each page of memory has associated permission and dirty bits. The latter
indicate that the page has been modified since it was loaded into memory.
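
These bits live in the page table entries and can be queried through generic
helpers; a small sketch (`pte_needs_writeback()` is a hypothetical name for
this example)::

  #include <linux/pgtable.h>

  /* Sketch: query per-page state through the generic pte helpers. */
  static bool pte_needs_writeback(pte_t pte)
  {
          /* Present and modified since the dirty bit was last cleared. */
          return pte_present(pte) && pte_dirty(pte);
  }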

If nothing prevents it, eventually the physical memory can be accessed and the
requested operation on the physical frame is performed.

There are several reasons why the MMU may fail to find a translation. It could
happen because the CPU is trying to access memory that the current task is not
permitted to, or because the data is not present in physical memory.
187 "Copy-on-Write". Page faults may also happen when frames have been swapped out

These techniques improve memory efficiency, reduce latency, and minimize space
occupation. This document won't go deeper into the details of "Lazy Allocation"
and "Copy-on-Write" because these subjects are out of scope as they belong to
userspace memory management.

Swapping differentiates itself from the other mentioned techniques because it's
undesirable, since it's performed as a means to reduce memory under heavy
pressure.

Swapping can't work for memory mapped by kernel logical addresses. These are a
subset of the kernel virtual space that directly maps a contiguous range of
physical memory. Given any logical address, its physical address is determined
with simple arithmetic on the offset.
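
That offset arithmetic is what the kernel's `__pa()` and `__va()` helpers
implement; a short sketch (assuming `addr` is a valid kernel logical
address)::

  #include <linux/mm.h>

  /* Round-trip a kernel logical address through its physical address. */
  static void *logical_round_trip(void *addr)
  {
          phys_addr_t phys = __pa(addr); /* virtual -> physical: offset math */

          return __va(phys);             /* physical -> virtual: the inverse */
  }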

If the kernel fails to make room for the data that must be present in the
physical frames, the kernel invokes the out-of-memory (OOM) killer to make room
by terminating lower priority processes until the pressure falls back under a
safe threshold.

Additionally, page faults may be caused by code bugs or by maliciously crafted
addresses that the CPU is instructed to access. A thread of a process
could use instructions to address (non-shared) memory which does not belong to
it, or it could try to execute an instruction that wants to write
to a read-only location.

If the above-mentioned conditions happen in user-space, the kernel sends a
`Segmentation Fault` (SIGSEGV) signal to the current thread, which usually
causes the termination of the process it belongs to.

This document is going to simplify and show a high altitude view of how the
Linux kernel handles these page faults: it creates tables and table entries,
checks if memory is present and, if not, requests to load data from persistent
storage or from other devices, and updates the MMU and its caches.

The first steps are architecture dependent. Most architectures jump to
`do_page_fault()`, whereas the x86 interrupt handler is defined by the
`DEFINE_IDTENTRY_RAW_ERRORCODE()` macro which calls `handle_page_fault()`.

Whatever the route taken, all architectures end up invoking
`handle_mm_fault()` which, in turn, (likely) ends up calling
`__handle_mm_fault()` to carry out the actual work of allocating the page
tables.
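
Condensed into a call chain (functions named above; intermediate calls
elided)::

  do_page_fault()                (x86: handle_page_fault())
    -> handle_mm_fault()
       -> __handle_mm_fault()    (walks and allocates the page table levels)
          -> handle_pte_fault()  (acts on the bottom level entry)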

The unfortunate case of not being able to call `__handle_mm_fault()` means
that the virtual address is pointing to areas of physical memory which are not
permitted to be accessed (at least from the current context). This
condition resolves to the kernel sending the above-mentioned SIGSEGV signal
to the process and leads to the consequences already explained.

`__handle_mm_fault()` carries out its work by calling several functions to
find the entry's offsets of the upper layers of the page tables and allocate
the tables that it may need.

The functions that look for the offsets have names like `*_offset()`, where the
"*" is for pgd, p4d, pud, pmd, pte; instead, the functions to allocate the
corresponding tables, layer by layer, are called `*_alloc()`, using the
above-mentioned convention to name them after the corresponding types of tables
in the hierarchy.
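
A condensed sketch of how these pairs are used on the way down (a hypothetical
helper mirroring the allocation walk, not the exact kernel code)::

  #include <linux/mm.h>

  static vm_fault_t alloc_levels(struct mm_struct *mm, unsigned long address)
  {
          pgd_t *pgd = pgd_offset(mm, address); /* the pgd always exists */
          p4d_t *p4d = p4d_alloc(mm, pgd, address);
          pud_t *pud;
          pmd_t *pmd;

          if (!p4d)
                  return VM_FAULT_OOM;
          pud = pud_alloc(mm, p4d, address);    /* allocate the pud if missing */
          if (!pud)
                  return VM_FAULT_OOM;
          pmd = pmd_alloc(mm, pud, address);    /* allocate the pmd if missing */
          if (!pmd)
                  return VM_FAULT_OOM;
          return 0;
  }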

The page table walk may end at one of the middle or upper layers (PMD, PUD):
Linux supports page sizes larger than 4KB, frequently referred to as huge
pages, which are mapped directly by these higher level entries and cover
large consecutive physical regions, typically 2MB (PMD) or 1GB (PUD).

Huge pages bring several benefits like reduced TLB pressure,
reduced page table overhead, memory allocation efficiency, and performance
improvement for certain workloads. However, these benefits come with
trade-offs, like wasted memory and allocation challenges.
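
The sizes follow from the hierarchy arithmetic; for x86-64 with 4KB base pages
and 512 entries per level::

  PMD entry: 512 PTEs * 4KB = 1 << 21 bytes = 2MB
  PUD entry: 512 PMDs * 2MB = 1 << 30 bytes = 1GB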

At the end of the walk, if no errors were returned, `__handle_mm_fault()`
finally calls `handle_pte_fault()`, which resolves the fault according to its
cause.

The actual implementation of this workflow is very complex. Its design allows
Linux to handle page faults in a way that is tailored to the specific
characteristics of each architecture, while still sharing a common overall
structure.