/linux/Documentation/gpu/amdgpu/

debugfs.rst
   15: Run benchmarks using the DMA engine the driver uses for GPU memory paging.
   18: (Graphics Translation Tables) is system memory that is accessible by the GPU.
   35: which are submitted to the kernel for execution on a particular GPU engine.
   42: Provides raw access to the IP discovery binary provided by the GPU. Read this
   49: Provides raw access to the ROM binary image from the GPU. Read this file to
   68: Trigger a GPU reset. Read this file to reset the entire GPU.
   69: All work currently running on the GPU will be lost.
   76: is how the CPU sends commands to the GPU. The CPU writes commands into the
   77: buffer and then asks the GPU engine to process it. This is the raw binary
   87: The driver writes the requested ring features and metadata (GPU addresses of
   [all …]
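The entries above are plain files under the DRM debugfs root. A minimal shell sketch of reading one of them, assuming the conventional /sys/kernel/debug/dri/<minor>/ layout and the amdgpu_gpu_recover name; both require root and a loaded amdgpu driver, and reading the file really does trigger a reset:

```shell
# Build the path of an amdgpu debugfs file; the /sys/kernel/debug/dri
# layout is an assumption based on the excerpt above.
amdgpu_dbg_path() {
    printf '/sys/kernel/debug/dri/%s/%s' "$1" "$2"
}

# Trigger a full GPU reset by reading amdgpu_gpu_recover, as the excerpt
# describes. All work currently running on the GPU will be lost.
trigger_gpu_reset() {
    f=$(amdgpu_dbg_path "$1" amdgpu_gpu_recover)
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "not available: $f"
    fi
}
```

On a machine without the driver loaded (or without root), the helper simply reports that the file is not available rather than failing.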
|
debugging.rst
    2: GPU Debugging
    9: issues on the GPU.
   15: To aid in debugging GPU virtual memory related problems, the driver supports a
   18: `vm_fault_stop` - If non-0, halt the GPU memory controller on a GPU page fault.
   20: `vm_update_mode` - If non-0, use the CPU to update GPU page tables rather than
   21: the GPU.
   27: If you see a GPU page fault in the kernel log, you can decode it to figure
   53: The GPU virtual address that caused the fault comes next.
   55: The client ID indicates the GPU block that caused the fault.
   77: an invalid page (PERMISSION_FAULTS = 0x3) at GPU virtual address
   [all …]
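The two module parameters above can be set on the kernel command line or in a modprobe.d file. A hedged sketch of composing such an option string; the parameter names come from the excerpt, while the specific values shown are illustrative assumptions, not recommendations:

```shell
# Compose an amdgpu option string for the kernel command line or for
# /etc/modprobe.d/. vm_fault_stop and vm_update_mode are the parameters
# named in the excerpt; the values passed in are illustrative only.
amdgpu_debug_opts() {
    printf 'amdgpu.vm_fault_stop=%s amdgpu.vm_update_mode=%s' "$1" "$2"
}

# Example: non-0 values, per the excerpt, halt the memory controller on
# a fault and make the CPU update the GPU page tables.
amdgpu_debug_opts 1 1
```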
|
thermal.rst
    2: GPU Power/Thermal Controls and Monitoring
   11: GPU sysfs Power State Interfaces
   14: GPU power controls are exposed via sysfs files.
  137: If it's enabled, that means that the GPU is free to enter into GFXOFF mode as
  143: Read it to check the current GFXOFF status of a GPU::
  148: - 0: GPU is in GFXOFF state, the gfx engine is powered down.
  170: interval the GPU was in GFXOFF mode. *Only supported in vangogh*
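Line 148 above gives the meaning of a status value of 0. A small decoder for that convention; only the 0 case is stated in the excerpt, so treating every other value as "gfx engine active" is an assumption:

```shell
# Decode an amdgpu GFXOFF status value. Per the excerpt, 0 means the GPU
# is in the GFXOFF state with the gfx engine powered down; any other
# value is assumed here to mean the gfx engine is active.
decode_gfxoff() {
    if [ "$1" = "0" ]; then
        echo "GFXOFF: gfx engine powered down"
    else
        echo "gfx engine active (status $1)"
    fi
}
```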
|
/linux/Documentation/gpu/rfc/

gpusvm.rst
    4: GPU SVM Section
   25: * Eviction is defined as migrating data from the GPU back to the
   26: CPU without a virtual address to free up GPU memory.
   32: * GPU page table invalidation, which requires a GPU virtual address, is
   33: handled via the notifier that has access to the GPU virtual address.
   34: * GPU fault side
   36: and should strive to take mmap_read lock only in GPU SVM layer.
   47: migration policy requiring GPU access to occur in GPU memory.
   49: While no current user (Xe) of GPU SVM has such a policy, it is likely
   60: * GPU pagetable locking
   [all …]
|
i915_vm_bind.rst
    8: objects (BOs) or sections of BOs at specified GPU virtual addresses on a
   10: mappings) will be persistent across multiple GPU submissions (execbuf calls)
   30: * Support capture of persistent mappings in the dump upon GPU error.
   96: newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
  108: In the future, when GPU page faults are supported, we can potentially use a
  124: When GPU page faults are supported, the execbuf path does not take any of these
  166: implement Vulkan's Sparse Resources. With increasing GPU hardware performance,
  180: Where GPU page faults are not available, the kernel driver upon buffer invalidation
  190: waiting process. User fence can be signaled either by the GPU or kernel async
  199: Allows compute UMD to directly submit GPU jobs instead of through execbuf
  [all …]
|
/linux/Documentation/admin-guide/perf/

nvidia-pmu.rst
   52: The NVLink-C2C0 PMU monitors incoming traffic from a GPU/CPU connected with
   56: * NVIDIA Grace Hopper Superchip: Hopper GPU is connected with Grace SoC.
   58: In this config, the PMU captures GPU ATS translated or EGM traffic from the GPU.
   73: * Count event id 0x0 from the GPU/CPU connected with socket 0::
   77: * Count event id 0x0 from the GPU/CPU connected with socket 1::
   81: * Count event id 0x0 from the GPU/CPU connected with socket 2::
   85: * Count event id 0x0 from the GPU/CPU connected with socket 3::
   89: The NVLink-C2C has two ports that can be connected to one GPU (occupying both
   90: ports) or to two GPUs (one GPU per port). The user can use "port" bitmap
   97: * Count event id 0x0 from the GPU connected with socket 0 on port 0::
   [all …]
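The counting examples elided above follow perf's standard pmu/event=…/ syntax. A sketch composing such a command line; the nvidia_nvlink_c2c0_pmu_<socket> PMU name is an assumption here, so check /sys/bus/event_source/devices/ on the target system for the real names:

```shell
# Compose a perf-stat command for counting an NVLink-C2C0 event on a
# given socket. PMU naming is assumed, not taken from the excerpt.
c2c0_count_cmd() {
    printf 'perf stat -a -e nvidia_nvlink_c2c0_pmu_%d/event=0x%x/' "$1" "$2"
}

# Example: event id 0x0 on socket 0.
c2c0_count_cmd 0 0
```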
|
/linux/Documentation/gpu/nova/core/

devinit.rst
    7: overview using the Ampere GPU family as an example. The goal is to provide a conceptual
   11: that occur after a GPU reset. The devinit sequence is essential for properly configuring
   12: the GPU hardware before it can be used.
   15: Unit) microcontroller of the GPU. This interpreter executes a "script" of initialization
   18: nova-core driver is even loaded. On an Ampere GPU, the devinit ucode is separate from the
   33: Upon reset, several microcontrollers on the GPU (such as PMU, SEC2, GSP, etc.) run GPU
   34: firmware (gfw) code to set up the GPU and its core parameters. Most of the GPU is
   37: These low-level GPU firmware components are typically:
   43: - On an Ampere GPU, the FWSEC typically runs on the GSP (GPU System Processor) in
|
fwsec.rst
    7: and its role in the GPU boot sequence. As such, this information is subject to
    8: change in the future and is only current as of the Ampere GPU family. However,
   14: 'Heavy-secure' mode, and performs firmware verification after a GPU reset
   15: before loading various ucode images onto other microcontrollers on the GPU,
  172: This uses a GA-102 Ampere GPU as an example and could vary for future GPUs.
|
/linux/Documentation/driver-api/

edac.rst
  116: Several stacks of HBM chips connect to the CPU or GPU through an ultra-fast
  196: GPU nodes can be accessed the same way as the data fabric on CPU nodes.
  199: and each GPU data fabric contains four Unified Memory Controllers (UMC).
  207: Memory controllers on AMD GPU nodes can be represented in EDAC as follows:
  209: GPU DF / GPU Node -> EDAC MC
  210: GPU UMC -> EDAC CSROW
  211: GPU UMC channel -> EDAC CHANNEL
  218: - The CPU UMC (Unified Memory Controller) is mostly the same as the GPU UMC.
  224: - GPU UMCs use 1 chip select, so UMC = EDAC CSROW.
  225: - GPU UMCs use 8 channels, so UMC channel = EDAC channel.
  [all …]
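Given the mapping above (one EDAC MC per GPU data fabric, four UMCs each exposed as csrows, eight channels per UMC), the implied sysfs layout can be enumerated. The /sys/devices/system/edac path is the standard EDAC location; everything else follows directly from the excerpt:

```shell
# Enumerate the EDAC sysfs paths implied for one AMD GPU node:
# 4 UMCs -> csrow0..csrow3, 8 channels per UMC -> ch0..ch7,
# giving 32 channel directories per memory controller.
edac_gpu_node_paths() {
    mc=$1
    for csrow in 0 1 2 3; do
        for ch in 0 1 2 3 4 5 6 7; do
            printf '/sys/devices/system/edac/mc/mc%d/csrow%d/ch%d\n' \
                "$mc" "$csrow" "$ch"
        done
    done
}
```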
|
/linux/drivers/gpu/drm/virtio/

Kconfig
    3: tristate "Virtio GPU driver"
   11: This is the virtual GPU driver for virtio. It can be used with
   17: bool "Virtio GPU driver modesetting support"
   21: Enable modesetting support for virtio GPU driver. This can be
   22: disabled in cases where only "headless" usage of the GPU is
|
/linux/Documentation/driver-api/thermal/

nouveau_thermal.rst
   14: This driver allows reading the GPU core temperature, driving the GPU fan, and
   28: In order to protect the GPU from overheating, Nouveau supports 4 configurable
   34: The GPU will be downclocked to reduce its power dissipation;
   36: The GPU is put on hold to further lower power dissipation;
   38: Shut the computer down to protect your GPU.
   44: The default values for these thresholds come from the GPU's vbios. These
|
/linux/Documentation/translations/zh_CN/mm/

hmm.rst
   15: Provide infrastructure and helpers to integrate non-conventional memory (device memory, such as on-board GPU memory) into regular kernel paths, with…
   20: …essential, where GPUs, DSPs, or FPGAs are used to perform various computations on behalf of a process.
   32: Devices with large amounts of on-board memory (several GB), such as GPUs, have historically managed their memory through dedicated driver-specific APIs. This…
   40: Specifically, this means that code wanting to take advantage of a device like a GPU needs to copy objects between generically allocated memory (malloc, mmap…
   56: …take advantage of GPUs and other devices without programmer intervention. Some compiler-recognized patterns only work with a shared address space. For all…
   72: Another serious factor is limited bandwidth (about 32 GBytes/s with PCIe 4.0 and 16 lanes). This is slower than the fastest GPU…
   93: …buffer pool) and write GPU-specific commands into it to perform updates (unmap, cache invalidation and flush, etc.). This cannot…
|
/linux/Documentation/gpu/

msm-crash-dump.rst
    7: Following a GPU hang the MSM driver outputs debugging information via
   35: ID of the GPU that generated the crash formatted as
   39: The current value of RBBM_STATUS which shows what top level GPU
   50: GPU address of the ringbuffer.
   76: GPU address of the buffer object.
   91: GPU memory region.
|
nouveau.rst
    4: drm/nouveau NVIDIA GPU Driver
   15: driver, responsible for managing NVIDIA GPU hardware at the kernel level.
   16: NVKM provides a unified interface for handling various GPU architectures.
|
drm-vm-bind-async.rst
   12: * ``gpu_vm``: A virtual GPU address space. Typically per process, but
   38: the GPU and CPU. Memory fences are sometimes referred to as
   45: which therefore needs to set the gpu_vm or the GPU execution context in
   49: affected gpu_vmas, submits a GPU command batch and registers the
   50: dma_fence representing the GPU command's activity with all affected
   73: out-fences. Synchronous VM_BIND may block and wait for GPU operations;
   80: before modifying the GPU page-tables, and signal the out-syncobjs when
  142: for a GPU semaphore embedded by UMD in the workload.
  214: /* Map (parts of) an object into the GPU virtual address range.
  216: /* Unmap a GPU virtual address range */
  [all …]
|
drm-vm-bind-locking.rst
   29: * ``gpu_vm``: Abstraction of a virtual GPU address space with
   32: * ``gpu_vma``: Abstraction of a GPU address range within a gpu_vm with
   45: and which tracks GPU activity. When the GPU activity is finished,
   49: to track GPU activity in the form of multiple dma_fences on a
   57: affected gpu_vmas, submits a GPU command batch and registers the
   58: dma_fence representing the GPU command's activity with all affected
  226: for GPU idle or depend on all previous GPU activity. Furthermore, any
  227: subsequent attempt by the GPU to access freed memory through the
  388: GPU virtual address range, directly maps a CPU mm range of anonymous-
  398: pages, dirty them if they are not mapped read-only to the GPU, and
  [all …]
|
i915.rst
  279: Intel GPU Basics
  282: An Intel GPU has multiple engines. There are several engine types:
  299: The Intel GPU family is a family of integrated GPUs using Unified
  300: Memory Access. To have the GPU "do work", user space will feed the
  301: GPU batch buffers via one of the ioctls `DRM_IOCTL_I915_GEM_EXECBUFFER2`
  303: instruct the GPU to perform work (for example rendering) and that work
  306: `DRM_IOCTL_I915_GEM_CREATE`). An ioctl providing a batchbuffer for the GPU
  321: Gen4, a context also carries with it a GPU HW context;
  322: the HW context is essentially (most of, at least) the state of a GPU.
  323: In addition to the ordering guarantees, the kernel will restore GPU
  [all …]
|
/linux/drivers/gpu/trace/

Kconfig
    4: bool "Enable GPU memory usage tracepoints"
    8: global and per-process GPU memory usage. Intended for
   11: Tracepoint availability varies by GPU driver.
|
/linux/drivers/vfio/pci/nvgrace-gpu/

Kconfig
    3: tristate "VFIO support for the GPU in the NVIDIA Grace Hopper Superchip"
    7: VFIO support for the GPU in the NVIDIA Grace Hopper Superchip is
    8: required to assign the GPU device to userspace using KVM/qemu/etc.
|
/linux/drivers/gpu/drm/amd/amdkfd/

Kconfig
    7: bool "HSA kernel driver for AMD GPU devices"
   13: Enable this if you want to use HSA features on AMD GPU devices.
   29: bool "HSA kernel driver support for peer-to-peer for AMD GPU devices"
   33: the PCIe bus. This can improve performance of multi-GPU compute
|
/linux/arch/arm/boot/dts/rockchip/

rk3288-veyron-mickey.dts
   86: * and don't let the GPU go faster than 400 MHz.
  103: * the CPU and the GPU.
  139: /* At very hot, don't let GPU go over 300 MHz */
  180: /* After 1st level throttle the GPU down to as low as 400 MHz */
  187: * Slightly after we throttle the GPU, we'll also make sure that
  189: * throttle the CPU lower than 1.4 GHz due to GPU heat--we'll
  200: /* When hot, GPU goes down to 300 MHz */
  206: /* When really hot, don't let GPU go _above_ 300 MHz */
|
/linux/arch/arm64/boot/dts/qcom/

msm8996-v3.0.dtsi
   13: * This revision seems to have different GPU CPR
   14: * parameters, GPU frequencies and some differences
   16: * the GPU. Funnily enough, it's simpler to make it an
|
/linux/Documentation/ABI/testing/

sysfs-driver-panthor-profiling
    9: 1: Enable GPU cycle measurements for running jobs.
   10: 2: Enable GPU timestamp sampling for running jobs.
|
/linux/drivers/gpu/drm/qxl/

Kconfig
    3: tristate "QXL virtual GPU"
   12: QXL virtual GPU for Spice virtualization desktop integration.
|
/linux/drivers/gpu/nova-core/

Kconfig
    2: tristate "Nova Core GPU driver"
   11: GPUs based on the GPU System Processor (GSP). This is true for Turing
|