/qemu/docs/specs/

ppc-spapr-numa.rst:
    15: similar mean performance (or in our context here, distance) relative to
    46: Relative Performance Distance and ibm,associativity-reference-points
    50: to define the relevant performance/distance related boundaries, defining
    58: boundary is the most significant to application performance, followed by
    60: same performance boundaries are expected to have relative NUMA distance
    64: performance.
    97: Changing the ibm,associativity-reference-points array changes the performance
    113: P1 and P2 does not have a common performance boundary. Since this is a one level
    119: is considered to be on the same performance boundary:
    144: provides a common-performance hierarchy, and the ibm,associativity-reference-points
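The ppc-spapr-numa.rst hits describe how user-supplied NUMA distances are mapped onto the performance boundaries encoded in ibm,associativity-reference-points. As a hedged illustration of the user-facing side (the node layout and distance value here are invented, not taken from the document), distances are declared on the pseries command line like this:

    qemu-system-ppc64 -machine pseries -smp 4 -m 4G \
        -object memory-backend-ram,id=m0,size=2G \
        -object memory-backend-ram,id=m1,size=2G \
        -numa node,nodeid=0,memdev=m0 \
        -numa node,nodeid=1,memdev=m1 \
        -numa dist,src=0,dst=1,val=40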
/qemu/docs/config/

q35-virtio-serial.cfg:
    15: # tailored towards optimal performance with modern guests,
    55: # for better performance.
    58: # yield good performance in the guest, and might even lead
    183: # We use virtio-net for improved performance over emulated

mach-virt-serial.cfg:
    16: # tailored towards optimal performance with modern guests,
    52: # for better performance.
    55: # yield good performance in the guest, and might even lead
    233: # We use virtio-net for improved performance over emulated

q35-virtio-graphical.cfg:
    14: # tailored towards optimal performance with modern guests,
    50: # for better performance.
    53: # yield good performance in the guest, and might even lead
    178: # We use virtio-net for improved performance over emulated

mach-virt-graphical.cfg:
    15: # tailored towards optimal performance with modern guests,
    46: # for better performance.
    49: # yield good performance in the guest, and might even lead
    227: # We use virtio-net for improved performance over emulated
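All four files above describe themselves as configurations tailored for optimal performance with modern guests. A minimal sketch of loading one, assuming a checkout-relative path (``-nodefaults`` and ``-readconfig`` are standard QEMU options):

    qemu-system-x86_64 -nodefaults \
        -readconfig docs/config/q35-virtio-graphical.cfg

The mach-virt variants target Arm and would be loaded by qemu-system-aarch64 instead.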
/qemu/hw/riscv/

riscv-iommu-bits.h:
    203: /* 5.19 Performance monitoring counter overflow status (32bits) */
    207: /* 5.20 Performance monitoring counter inhibits (32bits) */
    211: /* 5.21 Performance monitoring cycles counter (64bits) */
    216: /* 5.22 Performance monitoring event counters (31 * 64bits) */
    221: /* 5.23 Performance monitoring event selectors (31 * 64bits) */

riscv-iommu-hpm.h:
    2: * RISC-V IOMMU - Hardware Performance Monitor (HPM) helpers
/qemu/tests/unit/

rcutorture.c:
    2: * rcutorture.c: simple user-level performance/stress test of RCU.
    6: * Run a read-side performance test with the specified
    9: * Run an update-side performance test with the specified
    12: * Run a combined read/update performance test with the specified
    114: * Performance test.
/qemu/include/block/

thread-pool.h:
    78: * pool without destroying it or in a performance sensitive path where the
    80: * pool free operation for later, less performance sensitive time.
/qemu/docs/

qcow2-cache.txt:
    12: performance significantly. However, setting the right cache sizes is
    53: value can improve the I/O performance significantly.
    150: refcount cache size won't have any measurable effect in performance
    189: - Try different entry sizes to see which one gives faster performance
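The qcow2-cache.txt hits are about sizing the qcow2 L2 and refcount caches. A hedged sketch of setting both explicitly on a drive (the sizes are illustrative only; the right values depend on image size and access pattern, which is the document's point):

    qemu-system-x86_64 \
        -drive file=hd.qcow2,format=qcow2,l2-cache-size=4M,refcount-cache-size=1M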
rdma.txt:
    9: An *exhaustive* paper (2010) shows additional performance details
    17: * Performance
    66: high-performance RDMA hardware using the following command:
    76: Example performance of this using an idle VM in the previous example
    77: can be found in the "Performance" section.
    103: PERFORMANCE
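rdma.txt documents RDMA-based live migration; hit 66 refers to the command that starts it. A hedged sketch of the flow (addresses and port are placeholders): the destination listens on an rdma: URI and the source monitor points migration at it:

    # destination
    qemu-system-x86_64 ... -incoming rdma:192.168.0.1:4444

    # source, HMP monitor
    (qemu) migrate -d rdma:192.168.0.1:4444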
/qemu/linux-user/ppc/

cpu_loop.c:
    231: case POWERPC_EXCP_EPERFM: /* Embedded performance monitor IRQ */  (in cpu_loop())
    232: cpu_abort(cs, "Performance monitor exception not handled\n");  (in cpu_loop())
    315: case POWERPC_EXCP_PERFM: /* Embedded performance monitor IRQ */  (in cpu_loop())
    316: cpu_abort(cs, "Performance monitor exception not handled\n");  (in cpu_loop())
/qemu/docs/devel/migration/

mapped-ram.rst:
    38: For best performance enable the ``direct-io`` parameter as well:
    60: right before the snapshot to take benefit of the performance gains
    75: a performance increase for VMs with larger RAM sizes (10s to
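Hit 38 of mapped-ram.rst recommends the ``direct-io`` migration parameter alongside the mapped-ram capability. A hedged HMP sketch of enabling both before a file migration (the command names follow QEMU's migration interface; treat the exact spellings as an assumption):

    (qemu) migrate_set_capability mapped-ram on
    (qemu) migrate_set_parameter direct-io on
    (qemu) migrate file:/tmp/vm-snapshot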
/qemu/contrib/gitdm/

group-map-interns:
    9: # GSoC 2020 TCG performance
/qemu/tcg/tci/

tcg-target-mo.h:
    12: * on the host. But if you want performance, you use the normal backend.
/qemu/docs/tools/

qemu-img.rst:
    184: Allow out-of-order writes to the destination. This option improves performance,
    191: improve performance if the data is remote, such as with NFS or iSCSI backends,
    456: Out of order writes can be enabled with ``-W`` to improve performance.
    856: larger cluster sizes generally provide better performance.
    861: initially larger but can improve performance when the image needs
    868: performance. This is particularly interesting with
    880: Btrfs has low performance when hosting a VM image file, even more
    882: off COW is a way to mitigate this bad performance. Generally there
    938: images to either raw or qcow2 in order to achieve good performance.
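Two of the qemu-img.rst knobs above compose in a single conversion: ``-W`` enables the out-of-order writes of hits 184/456, and ``cluster_size`` (hit 856) picks a larger cluster for the output image. A hedged sketch with illustrative values:

    qemu-img convert -W -O qcow2 -o cluster_size=2M source.img dest.qcow2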
/qemu/scripts/simplebench/

bench_write_req.py:
    3: # Test to compare performance of write requests for two qemu-img binary files.
    120: '<path to another qemu-img to compare performance with> '
/qemu/trace/

control.h:
    105: * If the event has the disabled property, the check will have no performance
    118: * If the event has the disabled property, the check will have no performance
/qemu/hw/9pfs/

codir.c:
    50: * TODO: This will be removed for performance reasons.
    95: * the client would then suffer performance issues, so better log that  (in do_readdir_many())
    207: * beneficial from performance point of view. Because for every fs driver
/qemu/docs/devel/

tracing.rst:
    230: can optimize out trace events completely. This imposes no performance
    379: might have a noticeable performance impact even when the event is
    384: thus having no performance impact at all on regular builds (i.e., unless you
    392: disabled, this check will have no performance impact.
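Hits 384/392 refer to trace events carrying the ``disable`` property, whose state check compiles to a constant false and so costs nothing. A hedged sketch of a trace-events entry using that keyword (the event name and arguments are invented for illustration):

    # in a trace-events file; "disable" makes the event a compile-time no-op
    disable my_subsys_op(void *obj, int ret) "obj %p ret %d"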
tcg-plugins.rst:
    144: when calling the callbacks. This is also for performance, since some
    154: takes a lock. But this is very infrequent; we want performance when
/qemu/include/qemu/

sys_membarrier.h:
    13: /* Only block reordering at the compiler level in the performance-critical
/qemu/docs/system/

qemu-block-drivers.rst.inc:
    133: provide better performance.
    139: improve performance when the image needs to grow. ``falloc`` and ``full``
    146: the goal of avoiding metadata I/O and improving performance. This is
    159: Btrfs has low performance when hosting a VM image file, even more
    161: COW is a way to mitigate this bad performance. Generally there are two
    200: generally provide better performance.
    207: performance benchmarking.
    835: throttling, image formats, etc. Disk I/O performance is typically higher than
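Hits 139-161 concern preallocation modes and the btrfs copy-on-write mitigation. A hedged sketch that creates an image with metadata preallocation and the NOCOW file attribute set (both are documented qcow2 creation options; the size is arbitrary):

    qemu-img create -f qcow2 -o preallocation=metadata,nocow=on disk.qcow2 20G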
/qemu/

README.rst:
    10: it achieves very good performance. QEMU can also integrate with the Xen
    13: near native performance for CPUs. When QEMU emulates CPUs directly it is
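The README hits are about hypervisor-backed execution reaching near-native CPU performance. A hedged sketch of what that looks like in practice (disk path and memory size are placeholders):

    qemu-system-x86_64 -accel kvm -cpu host -m 4G disk.qcow2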
/qemu/linux-user/aarch64/

mte_user_helper.c:
    22: * Because there is no performance difference between the modes, and  (in arm_set_mte_tcf0())