.. SPDX-License-Identifier: GPL-2.0+

=======
IOMMUFD
=======

:Author: Jason Gunthorpe
:Author: Kevin Tian

Overview
========

IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
IO page tables from userspace using file descriptors. It intends to be general
and consumable by any driver that wants to expose DMA to userspace. These
drivers are eventually expected to deprecate any internal IOMMU logic
they may already/historically implement (e.g. vfio_iommu_type1.c).

At minimum, iommufd provides universal support for managing I/O address spaces
and I/O page tables for all IOMMUs, with room in the design to add non-generic
features to cater to specific hardware functionality.

In this context the capitalized form (IOMMUFD) refers to the subsystem while the
lowercase form (iommufd) refers to the file descriptors created via /dev/iommu
for use by userspace.

Key Concepts
============

User Visible Objects
--------------------

The following IOMMUFD objects are exposed to userspace:

- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
  of user space memory into ranges of I/O Virtual Addresses (IOVA).

  The IOAS is a functional replacement for the VFIO container, and like the VFIO
  container it copies an IOVA map to a list of iommu_domains held within it.

- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
  primarily indicates that this type of HWPT should be linked to an IOAS. It
  also indicates that it is backed by an iommu_domain with the
  __IOMMU_DOMAIN_PAGING feature flag. This can be either an UNMANAGED stage-1
  domain for a device running in the user space, or a nesting parent stage-2
  domain for mappings from guest-level physical addresses to host-level
  physical addresses.

  The IOAS has a list of HWPT_PAGINGs that share the same IOVA mapping and
  it will synchronize its mapping with each member HWPT_PAGING.

- IOMMUFD_OBJ_HWPT_NESTED, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by user space (e.g. guest OS).
  "NESTED" indicates that this type of HWPT should be linked to an HWPT_PAGING.
  It also indicates that it is backed by an iommu_domain that has a type of
  IOMMU_DOMAIN_NESTED. This must be a stage-1 domain for a device running in
  the user space (e.g. in a guest VM enabling the IOMMU nested translation
  feature.) As such, it must be created with a given nesting parent stage-2
  domain to associate to. This nested stage-1 page table managed by the user
  space usually has mappings from guest-level I/O virtual addresses to guest-
  level physical addresses.

- IOMMUFD_OBJ_FAULT, representing a software queue for an HWPT reporting IO
  page faults using the IOMMU HW's PRI (Page Request Interface). This queue
  object provides user space an FD to poll the page fault events and also to
  respond to those events. A FAULT object must be created first to get a
  fault_id that can then be used to allocate a fault-enabled HWPT via the
  IOMMU_HWPT_ALLOC command by setting the IOMMU_HWPT_FAULT_ID_VALID bit in its
  flags field.

- IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
  passed to or shared with a VM. It may contain some HW-accelerated
  virtualization features and some SW resources used by the VM. For example:

  * Security namespace for guest owned ID, e.g. guest-controlled cache tags
  * Non-device-affiliated event reporting, e.g. invalidation queue errors
  * Access to a sharable nesting parent pagetable across physical IOMMUs
  * Virtualization of various platform IDs, e.g. RIDs and others
  * Delivery of paravirtualized invalidation
  * Direct assigned invalidation queues
  * Direct assigned interrupts

  Such a vIOMMU object generally has access to a nesting parent pagetable
  to support some HW-accelerated virtualization features. So, a vIOMMU object
  must be created given a nesting parent HWPT_PAGING object, and then it would
  encapsulate that HWPT_PAGING object. Therefore, a vIOMMU object can be used
  to allocate an HWPT_NESTED object in place of the encapsulated HWPT_PAGING.

  .. note::

     The name "vIOMMU" isn't necessarily identical to a virtualized IOMMU in a
     VM. A VM can have one giant virtualized IOMMU running on a machine having
     multiple physical IOMMUs, in which case the VMM will dispatch the requests
     or configurations from this single virtualized IOMMU instance to multiple
     vIOMMU objects created for individual slices of different physical IOMMUs.
     In other words, a vIOMMU object is always a representation of one physical
     IOMMU, not necessarily of a virtualized IOMMU. For VMMs that want the full
     virtualization features from physical IOMMUs, it is suggested to build the
     same number of virtualized IOMMUs as the number of physical IOMMUs, so the
     passed-through devices would be connected to their own virtualized IOMMUs
     backed by corresponding vIOMMU objects, in which case a guest OS would do
     the "dispatch" naturally instead of relying on VMM trapping.

- IOMMUFD_OBJ_VDEVICE, representing a virtual device for an IOMMUFD_OBJ_DEVICE
  against an IOMMUFD_OBJ_VIOMMU. This virtual device holds the device's virtual
  information or attributes (related to the vIOMMU) in a VM. An immediate
  example of such virtual information is the virtual ID of the device on a
  vIOMMU, which is a unique ID that the VMM assigns to the device for a
  translation channel/port of the vIOMMU, e.g. the vSID of ARM SMMUv3, the
  vDeviceID of AMD IOMMU, and the vRID of Intel VT-d to a Context Table. Some
  advanced security information could potentially be forwarded via this object
  too, such as the security level or realm information in a Confidential
  Compute Architecture. A VMM should create a vDEVICE object to forward all the
  device information in a VM when it connects a device to a vIOMMU, which is a
  separate ioctl call from attaching the same device to an HWPT_PAGING that the
  vIOMMU holds.

- IOMMUFD_OBJ_VEVENTQ, representing a software queue for a vIOMMU to report its
  events, such as translation faults that occurred at a nested stage-1
  (excluding I/O page faults, which should go through IOMMUFD_OBJ_FAULT) and
  HW-specific events. This queue object provides user space an FD to poll/read
  the vIOMMU events. A vIOMMU object must be created first to get its
  viommu_id, which can then be used to allocate a vEVENTQ. Each vIOMMU can
  support multiple vEVENT types, but is confined to one vEVENTQ per type.

All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.

The diagrams below show the relationships between user-visible objects and
kernel datastructures (external to iommufd), with the numbers referring to the
operations that create the objects and links::

  _______________________________________________________________________
 |                      iommufd (HWPT_PAGING only)                       |
 |                                                                       |
 |        [1]                  [3]                                [2]    |
 |  ________________      _____________                        ________  |
 | |                |    |             |                      |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---------------------| DEVICE | |
 | |________________|    |_____________|                      |________| |
 |         |                    |                                  |     |
 |_________|____________________|__________________________________|_____|
           |                    |                                  |
           |              ______v_____                          ___v__
           | PFN storage |  (paging)  |                        |struct|
           |------------>|iommu_domain|<-----------------------|device|
                         |____________|                        |______|

  _______________________________________________________________________
 |                      iommufd (with HWPT_NESTED)                       |
 |                                                                       |
 |        [1]                  [3]                [4]             [2]    |
 |  ________________      _____________      _____________     ________  |
 | |                |    |             |    |             |   |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---| HWPT_NESTED |<--| DEVICE | |
 | |________________|    |_____________|    |_____________|   |________| |
 |         |                    |                  |               |     |
 |_________|____________________|__________________|_______________|_____|
           |                    |                  |               |
           |              ______v_____       ______v_____       ___v__
           | PFN storage |  (paging)  |     |  (nested)  |     |struct|
           |------------>|iommu_domain|<----|iommu_domain|<----|device|
                         |____________|     |____________|     |______|

  _______________________________________________________________________
 |                      iommufd (with vIOMMU/vDEVICE)                    |
 |                                                                       |
 |                             [5]                [6]                    |
 |                        _____________      _____________               |
 |                       |             |    |             |              |
 |      |----------------|    vIOMMU   |<---|   vDEVICE   |<----|        |
 |      |                |             |    |_____________|     |        |
 |      |                |             |                        |        |
 |      |      [1]       |             |          [4]           | [2]    |
 |      |     ______     |             |     _____________     _|______  |
 |      |    |      |    |     [3]     |    |             |   |        | |
 |      |    | IOAS |<---|(HWPT_PAGING)|<---| HWPT_NESTED |<--| DEVICE | |
 |      |    |______|    |_____________|    |_____________|   |________| |
 |      |        |              |                  |               |     |
 |______|________|______________|__________________|_______________|_____|
        |        |              |                  |               |
  ______v_____   |        ______v_____       ______v_____       ___v__
 |   struct   |  |  PFN  |  (paging)  |     |  (nested)  |     |struct|
 |iommu_device|  |------>|iommu_domain|<----|iommu_domain|<----|device|
 |____________|   storage|____________|     |____________|     |______|

1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
   hold multiple IOAS objects. IOAS is the most generic object and does not
   expose interfaces that are specific to single IOMMU drivers. All operations
   on the IOAS must operate equally on each of the iommu_domains inside of it.

2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
   to bind a device to an iommufd. The driver is expected to implement a set of
   ioctls to allow userspace to initiate the binding operation. Successful
   completion of this operation establishes the desired DMA ownership over the
   device. The driver must also set the driver_managed_dma flag and must not
   touch the device until this operation succeeds.

3. IOMMUFD_OBJ_HWPT_PAGING can be created in two ways:

   * IOMMUFD_OBJ_HWPT_PAGING is automatically created when an external driver
     calls the IOMMUFD kAPI to attach a bound device to an IOAS. Similarly the
     external driver uAPI allows userspace to initiate the attaching operation.
     If a compatible member HWPT_PAGING object exists in the IOAS's HWPT_PAGING
     list, then it will be reused. Otherwise a new HWPT_PAGING that represents
     an iommu_domain to userspace will be created, and then added to the list.
     Successful completion of this operation sets up the linkages among the
     IOAS, the device and the iommu_domain. Once this completes, the device can
     do DMA.

   * IOMMUFD_OBJ_HWPT_PAGING can be manually created via the IOMMU_HWPT_ALLOC
     uAPI, provided an ioas_id via @pt_id to associate the new HWPT_PAGING with
     the corresponding IOAS object. The benefit of this manual allocation is
     that it allows allocation flags (defined in enum iommufd_hwpt_alloc_flags),
     e.g. it allocates a nesting parent HWPT_PAGING if the
     IOMMU_HWPT_ALLOC_NEST_PARENT flag is set.

4. IOMMUFD_OBJ_HWPT_NESTED can only be manually created via the
   IOMMU_HWPT_ALLOC uAPI, provided an hwpt_id or a viommu_id of a vIOMMU object
   encapsulating a nesting parent HWPT_PAGING via @pt_id to associate the new
   HWPT_NESTED object with the corresponding HWPT_PAGING object. The associated
   HWPT_PAGING object must be a nesting parent manually allocated via the same
   uAPI previously with the IOMMU_HWPT_ALLOC_NEST_PARENT flag, otherwise the
   allocation will fail. The allocation will be further validated by the IOMMU
   driver to ensure that the nesting parent domain and the nested domain being
   allocated are compatible. Successful completion of this operation sets up
   linkages among the IOAS, the device, and the iommu_domains. Once this
   completes, the device can do DMA via a 2-stage translation, a.k.a. nested
   translation. Note that multiple HWPT_NESTED objects can be allocated by (and
   then associated to) the same nesting parent.

   .. note::

      Either a manual IOMMUFD_OBJ_HWPT_PAGING or an IOMMUFD_OBJ_HWPT_NESTED is
      created via the same IOMMU_HWPT_ALLOC uAPI. The difference lies in the
      type of the object passed in via the @pt_id field of struct
      iommufd_hwpt_alloc.

5. IOMMUFD_OBJ_VIOMMU can only be manually created via the IOMMU_VIOMMU_ALLOC
   uAPI, provided a dev_id (for the device's physical IOMMU to back the vIOMMU)
   and an hwpt_id (to associate the vIOMMU with a nesting parent HWPT_PAGING).
   The iommufd core will link the vIOMMU object to the struct iommu_device that
   the struct device is behind, and an IOMMU driver can implement a
   viommu_alloc op to allocate its own vIOMMU data structure embedding the
   core-level structure iommufd_viommu and some driver-specific data. If
   necessary, the driver can also configure its HW virtualization feature for
   that vIOMMU (and thus for the VM). Successful completion of this operation
   sets up the linkages between the vIOMMU object and the HWPT_PAGING; this
   vIOMMU object can then be used as a nesting parent object to allocate an
   HWPT_NESTED object as described above.

6. IOMMUFD_OBJ_VDEVICE can only be manually created via the IOMMU_VDEVICE_ALLOC
   uAPI, provided a viommu_id for an iommufd_viommu object and a dev_id for an
   iommufd_device object. The vDEVICE object will be the binding between these
   two parent objects. A @virt_id is also set via the uAPI, providing the
   iommufd core an index to store the vDEVICE object in a per-vIOMMU vDEVICE
   array. If necessary, the IOMMU driver may choose to implement a
   vdevice_alloc op to initialize its HW for the virtualization feature related
   to a vDEVICE. Successful completion of this operation sets up the linkages
   between the vIOMMU and the device.

Because of the DMA ownership claim, a device can only bind to a single iommufd
and attach to at most one IOAS object (no PASID support yet).

Kernel Datastructure
--------------------

User visible objects are backed by the following datastructures:

- iommufd_ioas for IOMMUFD_OBJ_IOAS.
- iommufd_device for IOMMUFD_OBJ_DEVICE.
- iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
- iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
- iommufd_fault for IOMMUFD_OBJ_FAULT.
- iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
- iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.
- iommufd_veventq for IOMMUFD_OBJ_VEVENTQ.

Several terms are used when looking at these datastructures:

- Automatic domain - refers to an iommu domain created automatically when
  attaching a device to an IOAS object. This is compatible with the semantics
  of VFIO type1.

- Manual domain - refers to an iommu domain designated by the user as the
  target pagetable to be attached to by a device. Though currently there are
  no uAPIs to directly create such a domain, the datastructure and algorithms
  are ready for handling that use case.

- In-kernel user - refers to something like a VFIO mdev that is using the
  IOMMUFD access interface to access the IOAS. This starts by creating an
  iommufd_access object that is similar to the domain binding a physical device
  would do. The access object will then allow converting IOVA ranges into
  struct page * lists, or doing direct read/write to an IOVA.

iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
mapped to memory pages, composed of:

- struct io_pagetable holding the IOVA map
- struct iopt_area's representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing the IO page table in the IOMMU
- struct iopt_pages_access representing in-kernel users of PFNs
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users

Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
ultimately derived from userspace VAs via an mm_struct. Once they have been
pinned, the PFNs are stored in the IOPTEs of an iommu_domain or inside the
pinned_pfns xarray if they have been pinned through an iommufd_access.

PFNs have to be copied between all combinations of storage locations, depending
on what domains are present and what kinds of in-kernel "software access" users
exist. The mechanism ensures that a page is pinned only once.

An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
list of iommu_domains that mirror the IOVA to PFN map.

Multiple io_pagetable-s, through their iopt_area-s, can share a single
iopt_pages, which avoids multi-pinning and double accounting of page
consumption.

iommufd_ioas is shareable between subsystems, e.g. VFIO and VDPA, as long as
devices managed by different subsystems are bound to the same iommufd.

IOMMUFD User API
================

.. kernel-doc:: include/uapi/linux/iommufd.h

IOMMUFD Kernel API
==================

The IOMMUFD kAPI is device-centric, with group-related tricks managed behind
the scenes. This allows the external drivers calling such kAPI to implement a
simple device-centric uAPI for connecting their devices to an iommufd, instead
of explicitly imposing the group semantics in their uAPI as VFIO does.

.. kernel-doc:: drivers/iommu/iommufd/device.c
   :export:

.. kernel-doc:: drivers/iommu/iommufd/main.c
   :export:

VFIO and IOMMUFD
----------------

Connecting a VFIO device to iommufd can be done in two ways.

The first is a VFIO-compatible way: directly implement the /dev/vfio/vfio
container IOCTLs by mapping them into io_pagetable operations. Doing so allows
the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
/dev/iommufd, or by extending VFIO's SET_CONTAINER to use an iommufd instead of
a container fd.

The second approach directly extends VFIO to support a new set of device-centric
user APIs based on the aforementioned IOMMUFD kernel API. It requires userspace
changes but better matches the IOMMUFD API semantics and makes it easier to
support new iommufd features when compared to the first approach.

Currently both approaches are still work-in-progress.

There are still a few gaps to be resolved to catch up with VFIO type1, as
documented in iommufd_vfio_check_extension().

Future TODOs
============

Currently IOMMUFD supports only kernel-managed I/O page tables, similar to VFIO
type1. New features on the radar include:

 - Binding iommu_domain's to PASID/SSID
 - Userspace page tables, for ARM, x86 and S390
 - Kernel-bypassed invalidation of user page tables
 - Re-use of the KVM page table in the IOMMU
 - Dirty page tracking in the IOMMU
 - Runtime increase/decrease of IOPTE size
 - PRI support with faults resolved in userspace
