
Searched full:performance (Results 1 – 25 of 188) sorted by relevance


/qemu/docs/specs/ppc-spapr-numa.rst
   15: similar mean performance (or in our context here, distance) relative to
   46: Relative Performance Distance and ibm,associativity-reference-points
   50: to define the relevant performance/distance related boundaries, defining
   58: boundary is the most significant to application performance, followed by
   60: same performance boundaries are expected to have relative NUMA distance
   64: performance.
   97: Changing the ibm,associativity-reference-points array changes the performance
  113: P1 and P2 does not have a common performance boundary. Since this is a one level
  119: is considered to be on the same performance boundary:
  144: provides a common-performance hierarchy, and the ibm,associativity-reference-points
/qemu/docs/config/q35-virtio-serial.cfg
   15: # tailored towards optimal performance with modern guests,
   55: # for better performance.
   58: # yield good performance in the guest, and might even lead
  183: # We use virtio-net for improved performance over emulated

/qemu/docs/config/mach-virt-serial.cfg
   16: # tailored towards optimal performance with modern guests,
   52: # for better performance.
   55: # yield good performance in the guest, and might even lead
  233: # We use virtio-net for improved performance over emulated

/qemu/docs/config/q35-virtio-graphical.cfg
   14: # tailored towards optimal performance with modern guests,
   50: # for better performance.
   53: # yield good performance in the guest, and might even lead
  178: # We use virtio-net for improved performance over emulated

/qemu/docs/config/mach-virt-graphical.cfg
   15: # tailored towards optimal performance with modern guests,
   46: # for better performance.
   49: # yield good performance in the guest, and might even lead
  227: # We use virtio-net for improved performance over emulated
/qemu/hw/riscv/riscv-iommu-bits.h
  203: /* 5.19 Performance monitoring counter overflow status (32bits) */
  207: /* 5.20 Performance monitoring counter inhibits (32bits) */
  211: /* 5.21 Performance monitoring cycles counter (64bits) */
  216: /* 5.22 Performance monitoring event counters (31 * 64bits) */
  221: /* 5.23 Performance monitoring event selectors (31 * 64bits) */

/qemu/hw/riscv/riscv-iommu-hpm.h
    2: * RISC-V IOMMU - Hardware Performance Monitor (HPM) helpers
/qemu/tests/unit/rcutorture.c
    2: * rcutorture.c: simple user-level performance/stress test of RCU.
    6: * Run a read-side performance test with the specified
    9: * Run an update-side performance test with the specified
   12: * Run a combined read/update performance test with the specified
  114: * Performance test.
/qemu/include/block/thread-pool.h
   78: * pool without destroying it or in a performance sensitive path where the
   80: * pool free operation for later, less performance sensitive time.
/qemu/docs/qcow2-cache.txt
   12: performance significantly. However, setting the right cache sizes is
   53: value can improve the I/O performance significantly.
  150: refcount cache size won't have any measurable effect in performance
  189: - Try different entry sizes to see which one gives faster performance

/qemu/docs/rdma.txt
    9: An *exhaustive* paper (2010) shows additional performance details
   17: * Performance
   66: high-performance RDMA hardware using the following command:
   76: Example performance of this using an idle VM in the previous example
   77: can be found in the "Performance" section.
  103: PERFORMANCE
/qemu/linux-user/ppc/cpu_loop.c
  231: case POWERPC_EXCP_EPERFM: /* Embedded performance monitor IRQ */ (in cpu_loop())
  232: cpu_abort(cs, "Performance monitor exception not handled\n"); (in cpu_loop())
  315: case POWERPC_EXCP_PERFM: /* Embedded performance monitor IRQ */ (in cpu_loop())
  316: cpu_abort(cs, "Performance monitor exception not handled\n"); (in cpu_loop())
/qemu/docs/devel/migration/mapped-ram.rst
   38: For best performance enable the ``direct-io`` parameter as well:
   60: right before the snapshot to take benefit of the performance gains
   75: a performance increase for VMs with larger RAM sizes (10s to
/qemu/contrib/gitdm/group-map-interns
    9: # GSoC 2020 TCG performance
/qemu/tcg/tci/tcg-target-mo.h
   12: * on the host. But if you want performance, you use the normal backend.
/qemu/docs/tools/qemu-img.rst
  184: Allow out-of-order writes to the destination. This option improves performance,
  191: improve performance if the data is remote, such as with NFS or iSCSI backends,
  456: Out of order writes can be enabled with ``-W`` to improve performance.
  856: larger cluster sizes generally provide better performance.
  861: initially larger but can improve performance when the image needs
  868: performance. This is particularly interesting with
  880: Btrfs has low performance when hosting a VM image file, even more
  882: off COW is a way to mitigate this bad performance. Generally there
  938: images to either raw or qcow2 in order to achieve good performance.
/qemu/scripts/simplebench/bench_write_req.py
    3: # Test to compare performance of write requests for two qemu-img binary files.
  120: '<path to another qemu-img to compare performance with> '
/qemu/trace/control.h
  105: * If the event has the disabled property, the check will have no performance
  118: * If the event has the disabled property, the check will have no performance
/qemu/hw/9pfs/codir.c
   50: * TODO: This will be removed for performance reasons.
   95: * the client would then suffer performance issues, so better log that (in do_readdir_many())
  207: * beneficial from performance point of view. Because for every fs driver
/qemu/docs/devel/tracing.rst
  230: can optimize out trace events completely. This imposes no performance
  379: might have a noticeable performance impact even when the event is
  384: thus having no performance impact at all on regular builds (i.e., unless you
  392: disabled, this check will have no performance impact.

/qemu/docs/devel/tcg-plugins.rst
  144: when calling the callbacks. This is also for performance, since some
  154: takes a lock. But this is very infrequent; we want performance when
/qemu/include/qemu/sys_membarrier.h
   13: /* Only block reordering at the compiler level in the performance-critical
/qemu/docs/system/qemu-block-drivers.rst.inc
  133: provide better performance.
  139: improve performance when the image needs to grow. ``falloc`` and ``full``
  146: the goal of avoiding metadata I/O and improving performance. This is
  159: Btrfs has low performance when hosting a VM image file, even more
  161: COW is a way to mitigate this bad performance. Generally there are two
  200: generally provide better performance.
  207: performance benchmarking.
  835: throttling, image formats, etc. Disk I/O performance is typically higher than
/qemu/README.rst
   10: it achieves very good performance. QEMU can also integrate with the Xen
   13: near native performance for CPUs. When QEMU emulates CPUs directly it is
/qemu/linux-user/aarch64/mte_user_helper.c
   22: * Because there is no performance difference between the modes, and (in arm_set_mte_tcf0())
