.. SPDX-License-Identifier: GPL-2.0

=================================
NVMe PCI Endpoint Function Target
=================================

The NVMe PCI endpoint function target driver implements an NVMe PCIe controller
using an NVMe fabrics target controller configured with the PCI transport type.

Overview
========

The NVMe PCI endpoint function target driver allows exposing an NVMe target
controller over a PCIe link, thus implementing an NVMe PCIe device similar to a
regular M.2 SSD. The target subsystem can be configured to have namespaces
backed by regular files or block devices, or can use NVMe passthrough to expose
to the PCI host an existing physical NVMe device or an NVMe fabrics host
controller (e.g. an NVMe TCP host controller).

The NVMe PCI endpoint function target driver relies as much as possible on the
NVMe target core code to parse and execute NVMe commands submitted by the PCIe
host. However, using the PCI endpoint framework API and DMA API, the driver is
also responsible for managing all data transfers over the PCIe link. This
implies that the driver itself handles the NVMe queues and command data
buffers, as follows:

1) The driver manages retrieval of NVMe commands in submission queues using DMA
   if supported, or MMIO otherwise. Each retrieved command is then executed
   using a work item, to maximize execution parallelism and to avoid blocking
   further command submissions from the PCIe host.

2) The driver transfers completion queue entries of completed commands to the
   PCIe host using MMIO copy of the entries in the host completion queue.

3) Upon command completion, the driver uses the PCI endpoint framework API to
   raise an interrupt to the host to signal the availability of new completion
   queue entries.

4) For each command, the driver builds an internal list of PRPs or SGL segments
   representing the mapping of the command data buffer on the host. The command
   data buffer is transferred over the PCIe link using this list of segments:
   for a write command, the data buffer is transferred from the host into a
   local memory buffer before the command is executed, and for a read command,
   the local memory buffer is transferred to the host once the command
   completes.

Controller Capabilities
-----------------------

The NVMe capabilities exposed to the PCIe host through the BAR 0 registers are
almost identical to the capabilities of the NVMe fabrics target controller
used, with some exceptions:

1) The NVMe PCI endpoint target driver always sets the controller capability
   CQR bit to request "Contiguous Queues Required", that is, the host must use
   physically contiguous memory for the submission and completion queues.
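
Once the device is visible on a PCI host, this can be checked with nvme-cli by
displaying the controller registers in human-readable form (a quick check,
assuming the controller character device is /dev/nvme0)::

   # nvme show-regs -H /dev/nvme0 | grep CQR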

Supported Features
------------------

The NVMe PCI endpoint target driver implements support for both PRPs and SGLs
for command data transfers.

Minimum number of PCI Address Mapping Windows Required
------------------------------------------------------

Most PCI endpoint controllers provide a limited number of memory windows for
mapping a PCI address range to local CPU memory addresses. The NVMe PCI
endpoint target driver minimally uses these windows as follows:

1) One memory window for raising MSI or MSI-X interrupts
2) One memory window for MMIO transfers
3) One memory window for each completion queue

Maximum Number of Queue Pairs
-----------------------------

The maximum number of I/O queue pairs that the controller can expose to the
PCIe host is limited by several factors (see the example after this list):

1) The NVMe target core code limits the maximum number of I/O queues to the
   number of online CPUs.
2) The total number of queue pairs, including the admin queue, cannot exceed
   the number of MSI-X or MSI vectors available.
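
For example, with 8 online CPUs on the endpoint and 32 MSI-X vectors
configured (both values are illustrative), at most 8 I/O queue pairs will be
exposed to the host. The number of online CPUs can be checked on the endpoint
with::

   # nproc
   8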

Limitations and NVMe Specification Non-Compliance
-------------------------------------------------

Similar to the NVMe target core code, the NVMe PCI endpoint target driver does
not support multiple submission queues sharing the same completion queue: all
submission queues must specify a unique completion queue.

User Guide
==========

This section describes the hardware and kernel requirements for using the NVMe
PCI endpoint target driver, and the steps to set up an NVMe PCI endpoint
device.

Kernel Requirements
-------------------

The kernel must be compiled with the configuration options CONFIG_PCI_ENDPOINT,
CONFIG_PCI_ENDPOINT_CONFIGFS, and CONFIG_NVME_TARGET_PCI_EPF enabled.

To facilitate testing, enabling the null-blk driver (CONFIG_BLK_DEV_NULL_BLK)
is also recommended, so that a null_blk block device can be used as the backing
storage of a target subsystem namespace.
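
For example, if null-blk is built as a module, a single memory-backed device
can be created when loading the module (an illustration using the null_blk
``nr_devices`` module parameter)::

   # modprobe null_blk nr_devices=1
   # ls /dev/nullb0
   /dev/nullb0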

Hardware Requirements
---------------------

To use the NVMe PCI endpoint target driver, at least one endpoint controller
device is needed. The endpoint controller devices available in the system can
be listed as follows::

   # ls /sys/class/pci_epc/
   a40000000.pcie-ep

If CONFIG_PCI_ENDPOINT_CONFIGFS is enabled, the available controllers also
appear in configfs::

   # ls /sys/kernel/config/pci_ep/controllers
   a40000000.pcie-ep

The endpoint board must of course also be connected to a host with a PCI cable
with RX-TX signals swapped. If the host PCI slot used does not have
plug-and-play capabilities, the host should be powered off when the NVMe PCI
endpoint device is configured and started.

NVMe Endpoint Device
--------------------

Creating an NVMe endpoint device is a two-step process: first, define an NVMe
target subsystem and port; second, create the NVMe PCI endpoint device and bind
it to the subsystem and port.

Creating an NVMe Subsystem and Port
-----------------------------------

First, make sure that configfs is mounted and that the nvmet_pci_epf module is
loaded (loading it also pulls in the nvmet core module)::

   # mount -t configfs none /sys/kernel/config
   # modprobe nvmet_pci_epf
   # lsmod | grep nvmet
   nvmet                 118784  1 nvmet_pci_epf
216 # echo -n "Linux-pci-epf" > nvmepf.0.nqn/attr_model
219 # echo 1 > nvmepf.0.nqn/attr_allow_any_host

Next, create a namespace for the subsystem, here backed by the null_blk block
device created earlier::

   # mkdir nvmepf.0.nqn/namespaces/1
   # echo -n "/dev/nullb0" > nvmepf.0.nqn/namespaces/1/device_path
   # echo 1 > "nvmepf.0.nqn/namespaces/1/enable"
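
The namespace setup can be verified by reading back the configured attributes,
e.g.::

   # cat nvmepf.0.nqn/namespaces/1/device_path
   /dev/nullb0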

Finally, create the target port using the PCI transport type and link the
subsystem to it::

   # cd /sys/kernel/config/nvmet/ports
   # mkdir 1
   # echo -n "pci" > 1/addr_trtype
   # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
        /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
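
At this point, the subsystem is reachable through port 1, which can be verified
by listing the port's subsystems directory::

   # ls /sys/kernel/config/nvmet/ports/1/subsystems/
   nvmepf.0.nqn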

Creating an NVMe PCI Endpoint Device
------------------------------------

Next, create the NVMe PCI endpoint function device and configure its PCI
attributes. The vendor ID, device ID and number of MSI-X vectors used here
match the example host output shown later in this document::

   # cd /sys/kernel/config/pci_ep
   # mkdir functions/nvmet_pci_epf/nvmepf.0
   # echo 0x1b96 > functions/nvmet_pci_epf/nvmepf.0/vendorid
   # echo 0xbeef > functions/nvmet_pci_epf/nvmepf.0/deviceid
   # echo 32 > functions/nvmet_pci_epf/nvmepf.0/msix_interrupts

If the PCI endpoint controller used does not support MSI-X, MSI can be
configured instead::

   # echo 32 > functions/nvmet_pci_epf/nvmepf.0/msi_interrupts

The NVMe endpoint function must also be configured with the NVMe port ID and
subsystem NQN created earlier::

   # cd functions/nvmet_pci_epf
   # echo 1 > nvmepf.0/nvme/portid
   # echo -n "nvmepf.0.nqn" > nvmepf.0/nvme/subsysnqn

Finally, bind the endpoint function to the PCI endpoint controller and start
it::

   # cd /sys/kernel/config/pci_ep
   # ln -s functions/nvmet_pci_epf/nvmepf.0 controllers/a40000000.pcie-ep/
   # echo 1 > controllers/a40000000.pcie-ep/start
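
The endpoint function can later be stopped, e.g. to change its configuration,
by writing 0 to the controller ``start`` attribute::

   # echo 0 > controllers/a40000000.pcie-ep/start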

On the endpoint machine, kernel messages will show the creation of the NVMe
target controller, for example:

.. code-block:: text

   nvmet: adding nsid 1 to subsystem nvmepf.0.nqn
   nvmet_pci_epf nvmet_pci_epf.0: PCI endpoint controller supports MSI-X, 32 vectors
   nvmet: Created nvm controller 1 for subsystem nvmepf.0.nqn for NQN nqn.2014-08.org.nvmexpress:uuid:...

PCI Root-Complex Host
---------------------

Booting the PCI host will result in the initialization of the PCIe link (this
may be signaled by the PCI endpoint driver with a kernel message). A kernel
message on the endpoint will also signal when the host NVMe driver enables the
device controller.
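
If the host is already running, and if the platform supports it, a manual
rescan of the host PCI bus may also be used to discover the endpoint device
(a sketch; whether this works depends on when the endpoint function was started
relative to the host boot)::

   # echo 1 > /sys/bus/pci/rescan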

On the host side, the NVMe PCI endpoint function target device is discoverable
as a PCI device, with the vendor ID and device ID as configured::

   # lspci -n
   0000:01:00.0 0108: 1b96:beef
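
The NVMe host driver then probes the device, and the new controller and its
namespace can be listed with nvme-cli, e.g.::

   # nvme list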

An NVMe block device corresponding to the endpoint target device will also be
created. The controller identification data reports the model string configured
for the target subsystem::

   # nvme id-ctrl /dev/nvme0
   ...
   mn : Linux-pci-epf
   fr : 6.13.0-r
   ...
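
As a quick check of the data path, the namespace can be read using direct I/O
(illustrative; the namespace block device is assumed to be /dev/nvme0n1)::

   # dd if=/dev/nvme0n1 of=/dev/null bs=1M count=128 iflag=direct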

Endpoint Bindings
=================

The NVMe PCI endpoint target driver uses the PCI endpoint configfs device
attributes as follows:

==================  ==========================================================
baseclass_code      Must be 0x01 (PCI_BASE_CLASS_STORAGE)
subclass_code       Must be 0x08 (Non-Volatile Memory controller)
progif_code         Must be 0x02 (NVM Express)
interrupt_pin       Interrupt PIN to use if MSI and MSI-X are not supported
==================  ==========================================================
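
For example, with the endpoint function device created earlier, the class code
attributes can be set to the required values as follows (a sketch following the
table above)::

   # cd /sys/kernel/config/pci_ep
   # echo 0x01 > functions/nvmet_pci_epf/nvmepf.0/baseclass_code
   # echo 0x08 > functions/nvmet_pci_epf/nvmepf.0/subclass_code
   # echo 0x02 > functions/nvmet_pci_epf/nvmepf.0/progif_code

These attributes must be set before the function is bound to the endpoint
controller and started.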