| author | Marco Crivellari <marco.crivellari@suse.com> | 2025-11-06 16:58:30 +0100 |
|---|---|---|
| committer | Juergen Gross <jgross@suse.com> | 2026-01-12 11:28:46 +0100 |
| commit | 842df741a4e464f65cf1a2056cd51e9a86a68a20 | |
| tree | e3907b56102ac4d91648247f907363bfb8d6d566 /drivers/xen | |
| parent | 0f61b1860cc3f52aef9036d7235ed1f017632193 | |
xen/events: replace use of system_wq with system_percpu_wq
Currently, if a user enqueues a work item using schedule_delayed_work(), the
workqueue used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
This continues the effort to refactor workqueue APIs, which began with
the introduction of new workqueues and a new alloc_workqueue flag in:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
Switch to using system_percpu_wq because system_wq is going away as part of
a workqueue restructuring.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Message-ID: <20251106155831.306248-2-marco.crivellari@suse.com>
Diffstat (limited to 'drivers/xen')
| -rw-r--r-- | drivers/xen/events/events_base.c | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 9478fae014e5..663df17776fd 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -581,7 +581,7 @@ static void lateeoi_list_add(struct irq_info *info)
 				      eoi_list);
 	if (!elem || info->eoi_time < elem->eoi_time) {
 		list_add(&info->eoi_list, &eoi->eoi_list);
-		mod_delayed_work_on(info->eoi_cpu, system_wq,
+		mod_delayed_work_on(info->eoi_cpu, system_percpu_wq,
 				    &eoi->delayed, delay);
 	} else {
 		list_for_each_entry_reverse(elem, &eoi->eoi_list, eoi_list) {
@@ -666,7 +666,7 @@ static void xen_irq_lateeoi_worker(struct work_struct *work)
 			break;

 		if (now < info->eoi_time) {
-			mod_delayed_work_on(info->eoi_cpu, system_wq,
+			mod_delayed_work_on(info->eoi_cpu, system_percpu_wq,
 					    &eoi->delayed,
 					    info->eoi_time - now);
 			break;
@@ -782,7 +782,7 @@ static void xen_free_irq(struct irq_info *info)
 	WARN_ON(info->refcnt > 0);

-	queue_rcu_work(system_wq, &info->rwork);
+	queue_rcu_work(system_percpu_wq, &info->rwork);
 }

 /* Not called for lateeoi events. */
```
