Lines Matching full:eoi
309 * as a "replay" because EOI decided there was still something in xive_get_irq()
317 * entry (on HW interrupt) from a replay triggered by EOI, in xive_get_irq()
338 * After EOI'ing an interrupt, we need to re-check the queue
350 DBG_VERBOSE("eoi: pending=0x%02x\n", xc->pending_prio); in xive_do_queue_eoi()
356 * EOI an interrupt at the source. There are several methods
362 * If the XIVE supports the new "store EOI" facility, use it in xive_do_source_eoi()
373 if (WARN_ON_ONCE(!xive_ops->eoi)) in xive_do_source_eoi()
375 xive_ops->eoi(hw_irq); in xive_do_source_eoi()
380 * Otherwise for EOI, we use the special MMIO that does in xive_do_source_eoi()
382 * except for LSIs where we use the "EOI cycle" special in xive_do_source_eoi()
388 * For LSIs the HW EOI cycle is used rather than PQ bits, in xive_do_source_eoi()
405 /* irq_chip eoi callback, called with irq descriptor lock held */
415 * EOI the source if it hasn't been disabled and hasn't in xive_irq_eoi()
837 * 11, then perform an EOI. in xive_irq_retrigger()
843 * avoid calling into the backend EOI code which we don't in xive_irq_retrigger()
845 * only do EOI for LSIs anyway. in xive_irq_retrigger()
907 * This saved_p is cleared by the host EOI, when we know in xive_irq_set_vcpu_affinity()
917 * that we *will* eventually get an EOI for it on in xive_irq_set_vcpu_affinity()
958 * interrupt with an EOI. If it is set, we know there is in xive_irq_set_vcpu_affinity()
1110 DBG_VERBOSE("IPI eoi: irq=%d [0x%lx] (HW IRQ 0x%x) pending=%02x\n", in xive_ipi_eoi()
1445 * For LSIs, we EOI, this will cause a resend if it's in xive_flush_cpu_queue()