Searched refs:training (Results 1 – 18 of 18) sorted by relevance

/linux/Documentation/gpu/
  zynqmp.rst
    78: Link training symbol pattern TPS1 (/D10.2/)
    81: Link training symbol pattern TPS2
    84: Link training symbol pattern TPS3 (for HBR2)
  introduction.rst
    169: * `Understanding the Linux Graphics Stack <https://bootlin.com/doc/training/graphics/graphics-slide…

/linux/Documentation/networking/device_drivers/atm/
  cxacru.rst
    77: - "training"
    117: [4942253.654954] ATM dev 0: ADSL line: training

/linux/drivers/accel/habanalabs/
  Kconfig
    18: designed to accelerate Deep Learning inference and training workloads.

/linux/tools/thermal/tmon/
  README
    30: that can be used for thermal relationship training.

/linux/Documentation/hwmon/
  peci-dimmtemp.rst
    57: completes memory training and testing.

/linux/drivers/memory/tegra/
  tegra210-emc-core.c
    561: struct tegra210_emc *emc = timer_container_of(emc, timer, training);  [in tegra210_emc_train()]
    574: mod_timer(&emc->training,  [in tegra210_emc_train()]
    580: mod_timer(&emc->training,  [in tegra210_emc_training_start()]
    586: timer_delete(&emc->training);  [in tegra210_emc_training_stop()]
    1965: timer_setup(&emc->training, tegra210_emc_train, 0);  [in tegra210_emc_probe()]
  tegra210-emc.h
    914: struct timer_list training;  [struct member]
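
The tegra210-emc hits above together show the kernel's self-rearming
timer_list idiom for periodic hardware retraining: a timer embedded in the
driver state, initialized in probe, armed in start, re-armed from its own
callback, and cancelled in stop. A minimal sketch of that pattern follows;
the my_emc names and the 100 ms interval are hypothetical, only the timer
API calls visible in the matches (timer_setup, timer_container_of,
mod_timer, timer_delete) come from the listed source.

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    struct my_emc {
            struct timer_list training;     /* periodic retraining timer */
    };

    static void my_emc_train(struct timer_list *timer)
    {
            /* Recover the enclosing state from the embedded timer member. */
            struct my_emc *emc = timer_container_of(emc, timer, training);

            /* ... run one training pass, then re-arm for the next ... */
            mod_timer(&emc->training, jiffies + msecs_to_jiffies(100));
    }

    static void my_emc_probe(struct my_emc *emc)
    {
            /* Initialize the callback; nothing is queued yet. */
            timer_setup(&emc->training, my_emc_train, 0);
    }

    static void my_emc_training_start(struct my_emc *emc)
    {
            /* Queue the first pass; the callback keeps re-arming itself. */
            mod_timer(&emc->training, jiffies + msecs_to_jiffies(100));
    }

    static void my_emc_training_stop(struct my_emc *emc)
    {
            /* Stop re-arming; timer_delete_sync() would additionally wait
             * for a callback already in flight. */
            timer_delete(&emc->training);
    }

This mirrors the call sites listed above: timer_setup() in probe, mod_timer()
in start and again inside the callback, timer_delete() in stop.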

/linux/Documentation/accel/
  introduction.rst
    34: a method of scaling-up/out, i.e. connecting to other training cards inside

/linux/Documentation/admin-guide/hw-vuln/
  indirect-target-selection.rst
    17: - **Intra-Mode BTI**: In-kernel training such as through cBPF or other native

/linux/tools/memory-model/Documentation/
  recipes.txt
    7: takes off the training wheels to cover more involved examples,
    195: Taking off the training wheels

/linux/Documentation/ABI/testing/
  sysfs-bus-platform-devices-ampere-smpro
    294: - 04: DDR training report status

/linux/Documentation/networking/
  devmem.rst
    22: - Distributed training, where ML accelerators, such as GPUs on different hosts,
  ethtool-netlink.rst
    554: ``ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE`` Failure during link training
    600: Link training substates:
    613: after training

/linux/Documentation/scsi/
  aic79xx.rst
    171: intervening training.

/linux/Documentation/accel/qaic/
  aic100.rst
    113: AIC100 is not intended for training neural networks. AIC100 can be utilized

/linux/arch/arm64/boot/dts/qcom/
  lemans.dtsi
    676: ddr_training_checksum: ddr-training-checksum@908c0000 {
    733: ddr_training_data_mem: ddr-training-data@91b90000 {

/linux/Documentation/filesystems/xfs/
  xfs-online-fsck-design.rst
    2970: by training log recovery to recompute the summary counters from the AG headers,