| author | Boris Ostrovsky <boris.ostrovsky@oracle.com> | 2016-03-18 10:11:07 -0400 |
|---|---|---|
| committer | Jiri Slaby <jslaby@suse.cz> | 2016-04-20 08:46:47 +0200 |
| commit | bc2d8a1f629272916b402b40d54bd3ac4c0c6595 | |
| tree | 0b7ae6d99a033d69f56d4deb2f776690fe0b2b23 | |
| parent | a3e63f3aff2762aa9ee0d6e0101584ca39abc9c7 | |
xen/events: Mask a moving irq
commit ff1e22e7a638a0782f54f81a6c9cb139aca2da35 upstream.
Moving an unmasked irq may result in the irq handler being invoked on both
the source and target CPUs.
With 2-level event channels this can happen as follows on the source CPU:

    evtchn_2l_handle_events() ->
      generic_handle_irq() ->
        handle_edge_irq() ->
          eoi_pirq():
            irq_move_irq(data);
            /***** WE ARE HERE *****/
            if (VALID_EVTCHN(evtchn))
              clear_evtchn(evtchn);
If at this moment the target CPU is handling an unrelated event in
evtchn_2l_handle_events()'s loop, it may pick up our event, since the
target's cpu_evtchn_mask claims that the event belongs to it *and* the
event is unmasked and still pending. At the same time, the source CPU
will continue executing its own handle_edge_irq().
With FIFO-based event channels the scenario is similar: irq_move_irq()
may result in an EVTCHNOP_unmask hypercall which, in turn, may make the
event pending on the target CPU.
We can avoid this situation by moving and clearing the event while
keeping the event masked.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
| -rw-r--r-- | drivers/xen/events.c | 28 |

1 file changed, 24 insertions(+), 4 deletions(-)
```diff
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index bd12a6660fe3..3715a54117bb 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -573,9 +573,19 @@ static void eoi_pirq(struct irq_data *data)
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
-	irq_move_irq(data);
+	if (!VALID_EVTCHN(evtchn))
+		return;
 
-	if (VALID_EVTCHN(evtchn))
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		int masked = test_and_set_mask(evtchn);
+
+		clear_evtchn(evtchn);
+
+		irq_move_masked_irq(data);
+
+		if (!masked)
+			unmask_evtchn(evtchn);
+	} else
 		clear_evtchn(evtchn);
 
 	if (pirq_needs_eoi(data->irq)) {
@@ -1603,9 +1613,19 @@ static void ack_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
 
-	irq_move_irq(data);
+	if (!VALID_EVTCHN(evtchn))
+		return;
 
-	if (VALID_EVTCHN(evtchn))
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		int masked = test_and_set_mask(evtchn);
+
+		clear_evtchn(evtchn);
+
+		irq_move_masked_irq(data);
+
+		if (!masked)
+			unmask_evtchn(evtchn);
+	} else
 		clear_evtchn(evtchn);
 }
```
