| author | Marco Crivellari <marco.crivellari@suse.com> | 2025-09-18 16:24:27 +0200 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-09-22 17:40:30 -0700 |
| commit | 27ce71e1ce81875df72f7698ba27988392bef602 | |
| tree | 49f92e73d86d5b9dcb5fe3856c5f84901fca0424 | net/vmw_vsock |
| parent | 5fd8bb982e10f29e856ef71072609af5ce55d281 | |
net: WQ_PERCPU added to alloc_workqueue users
Currently, if a user enqueues a work item using schedule_delayed_work(), the
workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
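The inconsistency described above is visible in the workqueue wrappers themselves; a simplified sketch of the relevant helpers (roughly as they appear in include/linux/workqueue.h, trimmed for illustration):

```c
/* schedule_work() hard-codes the per-cpu system_wq ... */
static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}

/* ... while queue_work() hard-codes WORK_CPU_UNBOUND,
 * meaning "no specific CPU requested". */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
```

So the two convenience paths bake in different placement defaults, which is why the commit argues the API needs refactoring rather than a spot fix.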
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This change adds a new WQ_PERCPU flag to the network subsystem, to explicitly
request per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
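The resulting calling convention can be sketched as follows (illustrative only; "my_wq" is a placeholder name, not from this patch):

```c
/* Per-CPU workqueue: previously the implicit default (flags = 0),
 * now requested explicitly. */
wq = alloc_workqueue("my_wq", WQ_PERCPU, 0);

/* Unbound workqueue: still opt-in today; once WQ_UNBOUND is removed,
 * unbound becomes the implicit default. */
wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
```

During the transition window, a caller passing neither flag is the case being eliminated: every alloc_workqueue() call should state its placement intent explicitly.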
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20250918142427.309519-4-marco.crivellari@suse.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/vmw_vsock')
 net/vmw_vsock/virtio_transport.c | 2 +-
 net/vmw_vsock/vsock_loopback.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index b6569b0ca2bb..8c867023a2e5 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -926,7 +926,7 @@ static int __init virtio_vsock_init(void)
 {
 	int ret;
 
-	virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", 0, 0);
+	virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", WQ_PERCPU, 0);
 	if (!virtio_vsock_workqueue)
 		return -ENOMEM;
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 6e78927a598e..bc2ff918b315 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -139,7 +139,7 @@ static int __init vsock_loopback_init(void)
 	struct vsock_loopback *vsock = &the_vsock_loopback;
 	int ret;
 
-	vsock->workqueue = alloc_workqueue("vsock-loopback", 0, 0);
+	vsock->workqueue = alloc_workqueue("vsock-loopback", WQ_PERCPU, 0);
 	if (!vsock->workqueue)
 		return -ENOMEM;
