
Searched full:memory (Results 1 – 25 of 1639) sorted by relevance


/qemu/include/hw/mem/
memory-device.h
2 * Memory Device Interface
20 #define TYPE_MEMORY_DEVICE "memory-device"
33 * All memory devices need to implement TYPE_MEMORY_DEVICE as an interface.
35 * A memory device is a device that owns a memory region which is
37 * address in guest physical memory can either be specified explicitly
40 * Some memory devices might not own a memory region in certain device
42 * empty memory devices are mostly ignored by the memory device code.
44 * Conceptually, memory devices only span one memory region. If multiple
45 * successive memory regions are used, a covering memory region has to
46 * be provided. Scattered memory regions are not supported for single
[all …]
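As a rough, hypothetical sketch of the interface described in this header (the type name "x-example-memdev" and the bare DeviceState instance are made up for illustration, not taken from the results), a device advertises TYPE_MEMORY_DEVICE by listing it in its TypeInfo; the MemoryDeviceClass callbacks would still need to be filled in during class_init.

/*
 * Hypothetical sketch: declaring the TYPE_MEMORY_DEVICE interface on a
 * device type. The MemoryDeviceClass callbacks (get_addr, get_memory_region
 * and friends) are intentionally omitted; a real device must implement them.
 */
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "hw/qdev-core.h"
#include "hw/mem/memory-device.h"

static const TypeInfo x_example_memdev_info = {
    .name          = "x-example-memdev",      /* made-up type name */
    .parent        = TYPE_DEVICE,
    .instance_size = sizeof(DeviceState),     /* placeholder instance state */
    .interfaces    = (InterfaceInfo[]) {
        { TYPE_MEMORY_DEVICE },
        { }
    },
};

static void x_example_memdev_register_types(void)
{
    type_register_static(&x_example_memdev_info);
}

type_init(x_example_memdev_register_types)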
/qemu/docs/
memory-hotplug.txt
1 QEMU memory hotplug
4 This document explains how to use the memory hotplug feature in QEMU,
7 Guest support is required for memory hotplug to work.
12 In order to be able to hotplug memory, QEMU has to be told how many
13 hotpluggable memory slots to create and what is the maximum amount of
14 memory the guest can grow. This is done at startup time by means of
22 - "slots" is the number of hotpluggable memory slots
29 Creates a guest with 1GB of memory and three hotpluggable memory slots.
30 The hotpluggable memory slots are empty when the guest is booted, so all
31 memory the guest will see after boot is 1GB. The maximum memory the
[all …]
nvdimm.txt
7 The current QEMU only implements the persistent memory mode of vNVDIMM
13 The storage of a vNVDIMM device in QEMU is provided by the memory
14 backend (i.e. memory-backend-file and memory-backend-ram). A simple
20 -object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE,readonly=off
34 - "object memory-backend-file,id=mem1,share=on,mem-path=$PATH,
50 virtual NVDIMM device whose storage is provided by above memory backend
63 detect a NVDIMM device which is in the persistent memory mode and whose
68 1. Prior to QEMU v2.8.0, if memory-backend-file is used and the actual
79 option of memory-backend-file, e.g. 4KB alignment on x86. However,
82 change breaks the usage of memory-backend-file that only satisfies
[all …]
/qemu/docs/devel/
memory.rst
2 The memory API
5 The memory API models the memory and I/O buses and controllers of a QEMU
9 - memory-mapped I/O (MMIO)
10 - memory controllers that can dynamically reroute physical memory regions
13 The memory model provides support for
16 - setting up coalesced memory for kvm
19 Memory is modelled as an acyclic graph of MemoryRegion objects. Sinks
21 buses, memory controllers, and memory regions that have been rerouted.
23 In addition to MemoryRegion objects, the memory API provides AddressSpace
25 These represent memory as seen from the CPU or a device's viewpoint.
[all …]
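To make the graph-versus-view split concrete, here is a minimal sketch assuming made-up names and sizes (the "example-root" container, the 128 MiB "example-ram" region and the owner object are illustration only): a container region with one RAM leaf mapped at offset 0, exposed through an AddressSpace.

/*
 * Minimal sketch (names and sizes invented for illustration): a container
 * MemoryRegion with one RAM subregion, exposed through an AddressSpace,
 * matching the graph model described in docs/devel/memory.rst.
 */
#include "qemu/osdep.h"
#include "qemu/units.h"
#include "system/memory.h"   /* exec/memory.h in older trees */
#include "qapi/error.h"

static MemoryRegion example_root;   /* container node, no backing of its own */
static MemoryRegion example_ram;    /* leaf node: actual RAM */
static AddressSpace example_as;     /* a view of the graph, e.g. a CPU's view */

static void example_memory_setup(Object *owner)
{
    memory_region_init(&example_root, owner, "example-root", 4 * GiB);
    memory_region_init_ram(&example_ram, owner, "example-ram", 128 * MiB,
                           &error_fatal);
    /* map the RAM at guest physical address 0 within the container */
    memory_region_add_subregion(&example_root, 0, &example_ram);
    address_space_init(&example_as, &example_root, "example-as");
}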
s390-dasd-ipl.rst
9 1. A READ IPL ccw is constructed in memory location ``0x0``.
12 so when it is complete another ccw will be fetched and executed from memory
18 IPL ccw it read the 24-bytes of IPL1 to be read into memory starting at
21 the original READ IPL ccw. The read ccw will read the IPL2 data into memory
34 the real operating system is loaded into memory and we are ready to hand
39 NOTE: The IPL2 channel program might read data into memory
48 The psw that was loaded into memory location ``0x0`` as part of the ipl process
50 psw's instruction address will point to the location in memory where we want
57 memory location 0x0 that reads IPL1. It then executes this ccw thereby kicking
71 1. Place a "Read IPL" ccw into memory location ``0x0`` with chaining bit on.
[all …]
/qemu/include/system/
memory.h
2 * Physical memory management API
36 #define TYPE_MEMORY_REGION "memory-region"
40 #define TYPE_IOMMU_MEMORY_REGION "iommu-memory-region"
216 /* RAM is a persistent kind of memory */
230 * set, the OS will do the reservation, if supported for the memory type.
272 * Memory region callbacks
275 /* Read from the memory region. @addr is relative to @mr; @size is
280 /* Write to the memory region. @addr is relative to @mr; @size is
351 * to handle requests to the memory region. Other methods are optional.
359 * to an output TLB entry. If the IOMMU is aware of memory transaction
[all …]
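The read and write callbacks mentioned above are typically bundled into a MemoryRegionOps table and attached to an MMIO region. A minimal sketch follows, assuming a hypothetical 4 KiB register window named "example-mmio"; the device behaviour and state are placeholders.

/*
 * Minimal sketch (device name, register layout and opaque state are
 * hypothetical): the read/write callbacks described above, wired into an
 * MMIO MemoryRegion via MemoryRegionOps. @addr is relative to the region
 * and @size is in bytes, matching the callback documentation in memory.h.
 */
#include "qemu/osdep.h"
#include "system/memory.h"   /* exec/memory.h in older trees */

static uint64_t example_mmio_read(void *opaque, hwaddr addr, unsigned size)
{
    /* a real device would decode @addr and return the register value */
    return 0;
}

static void example_mmio_write(void *opaque, hwaddr addr,
                               uint64_t data, unsigned size)
{
    /* a real device would latch @data into the register at @addr */
}

static const MemoryRegionOps example_mmio_ops = {
    .read = example_mmio_read,
    .write = example_mmio_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

static MemoryRegion example_mmio;

static void example_mmio_init(Object *owner, void *state)
{
    /* 4 KiB of MMIO handled by the callbacks above */
    memory_region_init_io(&example_mmio, owner, &example_mmio_ops,
                          state, "example-mmio", 0x1000);
}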
/qemu/include/standard-headers/linux/
virtio_mem.h
50 * "resizable DIMM" consisting of small memory blocks that can be plugged
51 * or unplugged. The device driver is responsible for (un)plugging memory
54 * Virtio-mem devices can only operate on their assigned memory region in
55 * order to (un)plug memory. A device cannot (un)plug memory belonging to
58 * The "region_size" corresponds to the maximum amount of memory that can
59 * be provided by a device. The "size" corresponds to the amount of memory
63 * "requested_size". It is impossible to plug more memory than requested.
65 * The "usable_region_size" represents the memory region that can actually
66 * be used to (un)plug memory. It is always at least as big as the
70 * There are no guarantees what will happen if unplugged memory is
[all …]
virtio_balloon.h
35 #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
64 #define VIRTIO_BALLOON_S_SWAP_IN 0 /* Amount of memory swapped in */
65 #define VIRTIO_BALLOON_S_SWAP_OUT 1 /* Amount of memory swapped out */
68 #define VIRTIO_BALLOON_S_MEMFREE 4 /* Total amount of free memory */
69 #define VIRTIO_BALLOON_S_MEMTOT 5 /* Total amount of memory */
70 #define VIRTIO_BALLOON_S_AVAIL 6 /* Available memory as in /proc */
75 #define VIRTIO_BALLOON_S_ALLOC_STALL 11 /* Stall count of memory allocation */
76 #define VIRTIO_BALLOON_S_ASYNC_SCAN 12 /* Amount of memory scanned asynchronously */
77 #define VIRTIO_BALLOON_S_DIRECT_SCAN 13 /* Amount of memory scanned directly */
78 #define VIRTIO_BALLOON_S_ASYNC_RECLAIM 14 /* Amount of memory reclaimed asynchronously */
[all …]
/qemu/docs/system/
vm-templating.rst
6 For now, the focus is on VM memory aspects, and not about how to save and
14 in fast startup times and reduced memory consumption.
18 new VMs are able to read template VM memory; however, any modifications
33 Memory configuration
36 In order to create the template VM, we have to make sure that VM memory
39 Supply VM RAM via memory-backend-file, with ``share=on`` (modifications go
49 -object memory-backend-file,id=pc.ram,mem-path=template,size=2g,share=on,... \\
50 -machine q35,memory-backend=pc.ram
52 If multiple memory backends are used (vNUMA, DIMMs), configure all
53 memory backends accordingly.
[all …]
/qemu/hw/mem/
memory-device.c
2 * Memory Device Interface
14 #include "hw/mem/memory-device.h"
29 /* dropping const here is fine as we don't touch the memory region */ in memory_device_is_empty()
62 if (dev->realized) { /* only realized memory devices matter */ in memory_device_build_list()
82 * Memslots that are reserved by memory devices (required but still reported
89 /* This is unexpected, and we warned already in the memory notifier. */ in get_reserved_memslots()
137 * Consider our soft-limit across all memory devices. We don't really in memory_device_memslot_decision_limit()
158 /* We cannot have any other memory devices? So give all to this device. */ in memory_device_memslot_decision_limit()
165 * still available for memory devices. in memory_device_memslot_decision_limit()
192 /* we will need memory slots for kvm and vhost */ in memory_device_check_addable()
[all …]
/qemu/docs/system/devices/
cxl.rst
4 targets accelerators and memory devices attached to a CXL host.
27 - BAR mapped memory accesses used for registers and mailboxes.
34 * Memory operations
37 supported by the host for normal memory should also work for
38 CXL attached memory devices.
49 **Type 1:** These support coherent caching of host memory. An example might
50 be a crypto accelerator. May also have device private memory accessible
51 via means such as PCI memory reads and writes to BARs.
53 **Type 2:** These support coherent caching of host memory and host
54 managed device memory (HDM) for which the coherency protocol is managed
[all …]
ivshmem.rst
1 Inter-VM Shared Memory device
4 On Linux hosts, a shared memory device is available. The basic syntax
11 where hostmem names a host memory backend. For a POSIX shared memory
16 -object memory-backend-file,size=1M,share,mem-path=/dev/shm/ivshmem,id=hostmem
19 shared memory region. Interrupt support requires using a shared memory
21 shared memory server is qemu.git/contrib/ivshmem-server. An example
22 syntax when using the shared memory server is:
42 memory on migration to the destination host. With ``master=off``, the
47 At most one of the devices sharing the same memory can be master. The
54 memory backend that has hugepage support:
[all …]
virtio-pmem.rst
7 The virtio pmem device is a paravirtualized persistent memory device
14 host page cache. This reduces guest memory footprint as the host can
15 make efficient memory reclaim decisions under memory pressure.
28 A virtio pmem device backed by a memory-backend-file can be created on
31 -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
36 - "object memory-backend-file,id=mem1,share,mem-path=<image>, size=<image size>"
40 pci device whose storage is provided by above memory backend device.
49 memory backing has to be added via 'object_add'; afterwards, the virtio
55 (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
/qemu/tests/multiboot/
mmap.out
6 Lower memory: 639k
7 Upper memory: 129920k
9 e820 memory map:
24 Lower memory: 639k
25 Upper memory: 104k
27 e820 memory map:
41 Lower memory: 639k
42 Upper memory: 2096000k
44 e820 memory map:
59 Lower memory: 639k
[all …]
/qemu/include/qemu/
memalign.h
2 * Allocation and free functions for aligned memory
12 * qemu_try_memalign: Allocate aligned memory
16 * Allocate memory on an aligned boundary (i.e. the returned
19 * On success, returns allocated memory; on failure, returns NULL.
21 * The memory allocated through this function must be freed via
26 * qemu_memalign: Allocate aligned memory, without failing
30 * Allocate memory in the same way as qemu_try_memalign(), but
31 * abort() with an error message if the memory allocation fails.
33 * The memory allocated through this function must be freed via
38 * qemu_vfree: Free memory allocated through qemu_memalign
[all …]
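A short usage sketch of the three functions documented above (alignment and buffer sizes are arbitrary illustration values): qemu_try_memalign() may return NULL, qemu_memalign() aborts on failure, and both allocations must be released with qemu_vfree().

/*
 * Usage sketch for the memalign.h API quoted above; the 4 KiB alignment
 * and 64 KiB buffer size are illustration values only.
 */
#include "qemu/osdep.h"
#include "qemu/memalign.h"

static void example_aligned_buffers(void)
{
    /* may fail: check for NULL */
    void *soft = qemu_try_memalign(4096, 64 * 1024);
    if (!soft) {
        return;                 /* allocation failed, nothing to free */
    }

    /* never returns NULL: aborts with an error message on failure */
    void *hard = qemu_memalign(4096, 64 * 1024);

    /* both must be freed via qemu_vfree(), not free() */
    qemu_vfree(hard);
    qemu_vfree(soft);
}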
/qemu/qapi/
dump.json
8 # = Dump guest memory
14 # An enumeration of guest-memory-dump's format.
49 # @dump-guest-memory:
51 # Dump guest's memory to vmcore. It is a synchronous operation that
52 # can take very long depending on the amount of guest memory.
54 # @paging: if true, do paging to get guest's memory mapping. This
64 # corrupted memory, which cannot be trusted
84 # @length: if specified, the memory size, in bytes. If you don't want
85 # to dump all guest's memory, please specify the start @begin and
88 # @format: if specified, the format of guest memory dump. But non-elf
[all …]
machine.json
185 # @default-ram-id: the default ID of initial RAM memory backend
492 # @hmat-lb: memory latency and bandwidth information (Since: 5.0)
494 # @hmat-cache: memory side cache information (Since: 5.0)
530 # @mem: memory size of this node; mutually exclusive with @memdev.
531 # Equally divide total memory among nodes if both @mem and @memdev
534 # @memdev: memory backend object. If specified for one node, it must
538 # to the nodeid which has the memory controller responsible for
577 # Create a CXL Fixed Memory Window
579 # @size: Size of the Fixed Memory Window in bytes. Must be a multiple
600 # List of CXL Fixed Memory Windows.
[all …]
/qemu/tests/tcg/aarch64/gdbstub/
test-mte.py
3 # Test GDB memory-tag commands that exercise the stubs for the qIsAddressTagged,
8 # The test consists in breaking just after a tag is set in a specific memory
9 # chunk, and then using the GDB 'memory-tagging' subcommands to set/get tags in
10 # different memory locations and ranges in the MTE-enabled memory chunk.
26 PATTERN_0 = r"Memory tags for address 0x[0-9a-f]+ match \(0x[0-9a-f]+\)."
41 # Tagged address: the start of the MTE-enabled memory chunk to be tested
56 co = gdb.execute(f"memory-tag check {ta}", False, True)
68 gdb.execute(f"memory-tag set-allocation-tag {ta} 1 04", False, True)
73 gdb.execute(f"memory-tag set-allocation-tag {ta}+16 1 06", False, True)
77 co = gdb.execute(f"memory-tag print-allocation-tag {ta}", False, True)
[all …]
/qemu/docs/specs/
acpi_mem_hotplug.rst
1 QEMU<->ACPI BIOS memory hotplug interface
4 ACPI BIOS GPE.3 handler is dedicated for notifying OS about memory hot-add
7 Memory hot-plug interface (IO port 0xa00-0xa17, 1-4 byte access)
14 Lo part of memory device phys address
16 Hi part of memory device phys address
18 Lo part of memory device size in bytes
20 Hi part of memory device size in bytes
22 Memory device proximity domain
24 Memory device status fields
48 Memory device slot selector, selects active memory device.
[all …]
/qemu/tests/functional/
test_mem_addr_space.py
3 # Check for crash when using memory beyond the available guest processor
47 access up to a maximum of 64GiB of memory. Memory hotplug region begins
49 we have 0.5 GiB of VM memory, see pc_q35_init()). This means total
50 hotpluggable memory size is 60 GiB. Per slot, we reserve 1 GiB of memory
52 actual memory size of 59 GiB. If the VM is started with 0.5 GiB of
53 memory, maxmem should be set to a maximum value of 59.5 GiB to ensure
54 that the processor can address all memory directly.
64 '-object', 'memory-backend-ram,id=mem1,size=1G',
75 access up to a maximum of 64GiB of memory. Rest is the same as the case
82 '-object', 'memory-backend-ram,id=mem1,size=1G',
[all …]
/qemu/include/crypto/
hash.h
70 * @iov: the array of memory regions to hash
76 * Computes the hash across all the memory regions
87 * The memory referenced in @result must be released with a call
102 * @buf: the memory region to hash
108 * Computes the hash across all the memory region
119 * The memory referenced in @result must be released with a call
134 * @iov: the array of memory regions to hash
139 * Computes the hash across all the memory regions
143 * memory pointer in @digest must be released
157 * @iov: the array of memory regions to hash
[all …]
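As a rough sketch of the single-buffer variant described above (note the algorithm enum is spelled QCryptoHashAlgorithm/QCRYPTO_HASH_ALG_* in older QEMU releases and QCryptoHashAlgo/QCRYPTO_HASH_ALGO_* in newer ones, so adjust to the tree at hand): hash a buffer and release the result as the @result documentation requires.

/*
 * Sketch only: hash one buffer with the crypto/hash.h API quoted above.
 * Assumption: the older enum spelling QCRYPTO_HASH_ALG_SHA256; newer trees
 * use QCRYPTO_HASH_ALGO_SHA256 instead.
 */
#include "qemu/osdep.h"
#include "crypto/hash.h"
#include "qapi/error.h"

static void example_hash_buffer(const char *buf, size_t len)
{
    uint8_t *result = NULL;
    size_t resultlen = 0;

    if (qcrypto_hash_bytes(QCRYPTO_HASH_ALG_SHA256, buf, len,
                           &result, &resultlen, &error_fatal) == 0) {
        /* result now holds resultlen bytes of SHA-256 digest */
        g_free(result);   /* @result must be released with g_free() */
    }
}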
hmac.h
63 * Release the memory associated with @hmac that was
73 * @iov: the array of memory regions to hmac
79 * Computes the hmac across all the memory regions
90 * The memory referenced in @result must be released with a call
106 * @buf: the memory region to hmac
112 * Computes the hmac across all the memory region
123 * The memory referenced in @result must be released with a call
139 * @iov: the array of memory regions to hmac
144 * Computes the hmac across all the memory regions
148 * memory pointer in @digest must be released
[all …]
/qemu/tests/tcg/aarch64/
Makefile.softmmu-target
46 memory: CFLAGS+=-DCHECK_UNALIGNED=1
48 memory-sve: memory.c $(LINK_SCRIPT) $(CRT_OBJS) $(MINILIB_OBJS)
51 memory-sve: CFLAGS+=-DCHECK_UNALIGNED=1 -march=armv8.1-a+sve -O3
53 TESTS+=memory-sve
74 .PHONY: memory-record
75 run-memory-record: memory-record memory
80 $(QEMU_OPTS) memory)
82 .PHONY: memory-replay
83 run-memory-replay: memory-replay run-memory-record
88 $(QEMU_OPTS) memory)
[all …]
/qemu/tests/tcg/arm/
Makefile.softmmu-target
45 memory: CFLAGS+=-DCHECK_UNALIGNED=0
61 .PHONY: memory-record
62 run-memory-record: memory-record memory
67 $(QEMU_OPTS) memory)
69 .PHONY: memory-replay
70 run-memory-replay: memory-replay run-memory-record
75 $(QEMU_OPTS) memory)
77 EXTRA_RUNS+=run-memory-replay
/qemu/tests/qtest/
cxl-test.c
38 "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
39 "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M " \
43 "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
44 "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M " \
48 "-object memory-backend-ram,id=cxl-mem0,size=256M " \
52 "-object memory-backend-ram,id=cxl-mem0,size=256M " \
53 "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M " \
57 "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
58 "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M " \
60 "-object memory-backend-file,id=cxl-mem1,mem-path=%s,size=256M " \
[all …]
