| author | Frederic Weisbecker <frederic@kernel.org> | 2022-11-25 14:54:59 +0100 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2023-03-10 09:39:08 +0100 |
| commit | f7dc606a47d386a4412f1c0a1153eb013f1487c1 | |
| tree | 726d8ce9c94032567e2cc84f6e1bd8fba88ae7af /kernel/rcu | |
| parent | a0818534fb6429d612ef01c7a63c91c70ed69792 | |
rcu-tasks: Remove preemption disablement around srcu_read_[un]lock() calls
[ Upstream commit 44757092958bdd749775022f915b7ac974384c2a ]
Ever since the following commit:
5a41344a3d83 ("srcu: Simplify __srcu_read_unlock() via this_cpu_dec()")
SRCU no longer relies on preemption being disabled in order to
modify the per-CPU counter. And even before that, the disabling was
done from within the API itself.
Therefore, and after checking further, it appears safe to remove
the preemption disablement around __srcu_read_[un]lock() in
exit_tasks_rcu_start() and exit_tasks_rcu_finish().
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Suggested-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Stable-dep-of: 28319d6dc5e2 ("rcu-tasks: Fix synchronize_rcu_tasks() VS zap_pid_ns_processes()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'kernel/rcu')
| -rw-r--r-- | kernel/rcu/tasks.h | 4 |
1 file changed, 0 insertions(+), 4 deletions(-)
```diff
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index d937bacf27b6..2408ca633872 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -632,9 +632,7 @@ EXPORT_SYMBOL_GPL(show_rcu_tasks_classic_gp_kthread);
  */
 void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
 {
-	preempt_disable();
 	current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
-	preempt_enable();
 }
 
 /*
@@ -646,9 +644,7 @@
 void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
 {
 	struct task_struct *t = current;
 
-	preempt_disable();
 	__srcu_read_unlock(&tasks_rcu_exit_srcu, t->rcu_tasks_idx);
-	preempt_enable();
 	exit_tasks_rcu_finish_trace(t);
 }
```
