
 * the COPYING file in the top-level directory.
#include "qemu/coroutine-core.h"
#include "block/graph-lock.h"
#include "hw/qdev-core.h"
 * Called with ctx->list_lock acquired.
 * Called with ctx->list_lock incremented but not locked.
 * Tell aio_poll() when to stop userspace polling early because ->wait()
 * Returns: true if ->wait() should be called, false otherwise.
/* Used by AioContext users to protect from multi-threaded access. */
/* The list of registered AIO handlers. Protected by ctx->list_lock. */
/* The list of AIO handlers to be deleted. Protected by ctx->list_lock. */
 * thread so it is still accessed with atomic primitives.
 * timers) will be re-evaluated before the next blocking poll() or
 * skipped. If it is non-zero, you may need to wake up a concurrent
 * Bits 1-31 simply count the number of active calls to aio_poll
 * play safe and allow it---it will just cause extra calls to
QSLIST_HEAD(, Coroutine) scheduled_coroutines;
/* TimerLists for calling timers - one per clock type. Has its own
 * ctx->list_lock. Iterated and modified mostly by the event loop thread
 * from aio_poll() with ctx->list_lock incremented. aio_set_fd_handler()
 * only touches the list to delete nodes if ctx->list_lock's count is zero.
/* epoll(7) state used when built with CONFIG_EPOLL */
 * An AioContext provides a mini event-loop that can be waited on synchronously.
 * @name: A human-readable identifier for debugging purposes.
 * to be wait-free, thread-safe and signal-safe. The #QEMUBH structure
 * @name: A human-readable identifier for debugging purposes.
 * device-reentrancy issues
 * aio_bh_new_guarded: Allocate a new bottom half structure with a
 * aio_poll to exit, so that the next call will re-examine pending events.
 * itself is also wait-free and thread-safe, it can of course race with the
 * registered with aio_set_event_notifier. Do nothing if the event notifier is
 * Allocate a new timer (with attributes) attached to the context @ctx.
/* in aio_timer_new_with_attrs() */
return timer_new_full(&ctx->tlg, type, scale, attributes, cb, opaque);

/* in aio_timer_new() */
return timer_new_full(&ctx->tlg, type, scale, 0, cb, opaque);
 * Initialise a new timer (with attributes) attached to the context @ctx.
/* in aio_timer_init_with_attrs() */
timer_init_full(ts, &ctx->tlg, type, scale, attributes, cb, opaque);

/* in aio_timer_init() */
timer_init_full(ts, &ctx->tlg, type, scale, 0, cb, opaque);
 * @co: the coroutine
 *
 * Start a coroutine on a remote AioContext.
 *
 * The coroutine must not be entered by anyone else while aio_co_schedule()
 * is active. In addition, the coroutine must have yielded unless ctx
 * is the context in which the coroutine is running (i.e. the value of
 * qemu_get_current_aio_context() from the coroutine itself).
 */
void aio_co_schedule(AioContext *ctx, Coroutine *co);
 * Move the currently running coroutine to new_ctx. If the coroutine is already
 * @co: the coroutine
 *
 * Restart a coroutine on the AioContext where it was running last, thus
 * aio_co_wake may be executed either in coroutine or non-coroutine
 * context. The coroutine must not be entered by anyone else while
 */
void aio_co_wake(Coroutine *co);
 * @ctx: the context to run the coroutine
 * @co: the coroutine to run
 *
 * Enter a coroutine in the specified AioContext.
 */
void aio_co_enter(AioContext *ctx, Coroutine *co);
 * called from the main thread or with the "big QEMU lock" taken it