# SPDX-License-Identifier: GPL-2.0-only
bool "Xen memory balloon driver"
The balloon driver allows the Xen domain to request more memory from
the system to expand the domain's memory allocation, or alternatively
return unneeded memory to the system.
bool "Memory hotplug support for Xen balloon driver"
Memory hotplug support for Xen balloon driver allows expanding memory
available for the system above the limit declared at system startup.
It's also very useful for non-PV domains to obtain unpopulated physical
memory ranges to use in order to map foreign memory or grants.
Memory could be hotplugged in the following steps:
1) target domain: ensure that memory auto online policy is in
effect by checking /sys/devices/system/memory/auto_online_blocks
2) control domain: xl mem-max <target-domain> <maxmem>
where <maxmem> is >= the requested memory size,
3) control domain: xl mem-set <target-domain> <memory>
where <memory> is the requested memory size; alternatively memory
could be added by writing the proper value to
Alternatively, if memory auto onlining was not requested at step 1
the newly added memory can be manually onlined in the target domain
by doing the following:
for i in /sys/devices/system/memory/memory*/state; do \
  [ "`cat "$i"`" = offline ] && echo online > "$i"; done
or by adding the following line to udev rules:
SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /…
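The manual onlining step above can be tried safely against a mock sysfs tree; in the sketch below, MOCK is a hypothetical stand-in for /sys/devices/system/memory, and the loop body applies the same test-and-online logic as the help text:

```shell
#!/bin/sh
# Mock of /sys/devices/system/memory (MOCK is a stand-in path) used to
# illustrate the manual onlining loop from the help text above.
MOCK=$(mktemp -d)
for b in memory32 memory33 memory34; do
    mkdir -p "$MOCK/$b"
    echo offline > "$MOCK/$b/state"
done
echo online > "$MOCK/memory34/state"   # one block is already online

# The same test-and-online logic as in the help text, against the mock:
for i in "$MOCK"/memory*/state; do
    [ "$(cat "$i")" = offline ] && echo online > "$i"
done

# Every block should now report "online".
RESULT=$(sort -u "$MOCK"/memory*/state)
echo "$RESULT"
rm -rf "$MOCK"
```

On a real system the loop (or the udev rule) would run in the target domain against the real sysfs paths instead of a mock directory.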
int "Hotplugged memory limit (in GiB) for a PV guest"
Maximum amount of memory (in GiB) that a PV guest can be
expanded to when using memory hotplug.
A PV guest can have more memory than this limit if it is
started with a bigger maximum.
This value is used to allocate enough space in internal
tables needed for physical memory administration.
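Since this option only sizes internal tables, a quick back-of-the-envelope computation shows what a given limit implies. The 512 GiB limit, 4 KiB page size, and 128 MiB memory-block size below are illustrative assumptions, not values read from the kernel:

```shell
#!/bin/sh
# Back-of-the-envelope sizing for a hypothetical 512 GiB hotplug limit
# (4 KiB pages and 128 MiB memory blocks are assumed values).
LIMIT_GIB=512
PAGE_KIB=4
BLOCK_MIB=128

PAGES=$(( LIMIT_GIB * 1024 * 1024 / PAGE_KIB ))    # page frames the tables must cover
BLOCKS=$(( LIMIT_GIB * 1024 / BLOCK_MIB ))         # hotpluggable memory blocks
echo "$LIMIT_GIB GiB => $PAGES page frames, $BLOCKS memory blocks"
```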
bool "Scrub pages before returning them to system by default"
Scrub pages before returning them to the system for reuse by
other domains. This makes sure that any confidential data
is not accidentally visible to other domains. It is more
secure, but slightly less efficient.
The evtchn driver allows a userspace process to trigger event
channels and to receive notification of an event channel
firing.
Support for backend device drivers that provide I/O services
to other virtual machines.
The xen filesystem provides a way for domains to share
information with each other and with the hypervisor.
For example, by reading and writing the "xenbus" file, guests
may pass arbitrary information to the initial domain.
The old xenstore userspace tools expect to find "xenbus"
under /proc/xen, but "xenbus" is now found at the root of the
xenfs filesystem. Selecting this causes the kernel to create
the compatibility mount point /proc/xen if it is running on
a xen platform.
Allows userspace processes to use grants.
bool "Add support for dma-buf grant access device driver extension"
Allows userspace processes and kernel modules to use Xen backed
dma-buf implementation. With this extension grant references to
the pages of an imported dma-buf can be exported for other domain
use and grant references coming from a foreign domain can be
converted into a local dma-buf for local export.
tristate "User-space grant reference allocator driver"
Allows userspace processes to create pages with access granted
to other domains. This can be used to implement frontend drivers
or as part of an inter-domain shared memory channel.
Extends grant table module API to allow allocating DMA capable
buffers and their de-allocation.
The resulting buffer is similar to one allocated by the balloon
driver in that proper memory reservation is made by
({increase|decrease}_reservation hypercalls). This is useful
for sharing foreign buffers with HW drivers which cannot work
with scattered buffers provided by the balloon driver,
but require DMAable memory instead.
tristate "Xen PCI-device stub driver"
The PCI device stub driver provides limited version of the PCI
device backend driver without para-virtualized support for guests.
If you select this to be a module, you will need to make sure no
other driver has bound to the device(s) you want to make visible to
other guests.
The "hide" parameter (only applicable if backend driver is compiled
into the kernel) allows you to bind the PCI devices to this module
from the default device drivers. The argument is the list of PCI BDFs:
xen-pciback.hide=(03:00.0)(04:00.0)
tristate "Xen PCI-device backend driver"
The PCI device backend driver allows the kernel to export arbitrary
PCI devices to other guests. If you select this to be a module, you
will need to make sure no other driver has bound to the device(s)
you want to make visible to other guests.
The parameter "passthrough" allows you specify how you want the PCI
devices to appear in the guest. You can choose the default (0) where
PCI topology starts at 00.00.0, or (1) for passthrough if you want
the PCI devices topology to be the same on the guest as in the host.
The "hide" parameter (only applicable if backend driver is compiled
into the kernel) allows you to bind the PCI devices to this module
from the default device drivers. The argument is the list of PCI BDFs:
xen-pciback.hide=(03:00.0)(04:00.0)
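The hide parameter wraps each PCI BDF in parentheses. The snippet below is only an illustration of how that argument format splits into individual BDFs (a hypothetical shell sketch, not the kernel's own parser):

```shell
#!/bin/sh
# Split the xen-pciback.hide argument format into one BDF per line.
HIDE="(03:00.0)(04:00.0)"

BDFS=$(printf '%s' "$HIDE" | tr -d '(' | tr ')' '\n' | sed '/^$/d')
echo "$BDFS"
```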
sends a small set of POSIX calls to the backend, which
implements them.
allows PV Calls frontends to send POSIX calls to the backend,
which implements them.
The SCSI backend driver allows the kernel to export its SCSI devices
to other guests via a high-performance shared-memory interface.
Only needed for systems running as XEN driver domains (e.g. Dom0) and
if guests need generic access to SCSI devices.
The hypercall passthrough driver allows privileged user programs to
running as Dom0 to perform privileged operations, but in some
daemon can speed up interrupt delivery from/to a guest.
This ACPI processor uploads Power Management information to the Xen
hypervisor.
To do that the driver parses the Power Management data and uploads
said information to the Xen hypervisor. Then the Xen hypervisor can
select the proper Cx and Pxx states.
To compile this driver as a module, choose M here: the module will be
called xen_acpi_processor. If you do not know what to choose, select
M here.
Support for auto-translated physmap guests.
Exports hypervisor symbols (along with their types and addresses) via
/proc/xen/xensyms file, similar to /proc/kallsyms
bool "Use unpopulated memory ranges for guest mappings"
Use unpopulated memory ranges in order to create mappings for guest
memory regions, including grant maps and foreign pages. This avoids
having to balloon out RAM regions in order to obtain physical memory
space to create such mappings.
bool "Require Xen virtio support to use grants"
Require virtio for Xen guests to use grant mappings.
This will avoid the need to give the backend the right to map all
of the guest memory. This will need support on the backend side
(e.g. qemu or kernel, depending on the virtio device types used).