author    Waiman Long <longman@redhat.com>    2024-03-18 20:50:04 -0400
committer Ingo Molnar <mingo@kernel.org>      2024-03-21 20:45:17 +0100
commit    3774b28d8f3b9e8a946beb9550bee85e5454fc9f (patch)
tree      b7bd0cbb64f1d2e4fb10639e2dda674422752685 /kernel/locking/lock_events.h
parent    4ae3dc83b047d51485cce1a72be277a110d77c91 (diff)
locking/qspinlock: Always evaluate lockevent* non-event parameter once
The 'inc' parameter of lockevent_add() and the 'cond' parameter of
lockevent_cond_inc() are only evaluated when CONFIG_LOCK_EVENT_COUNTS
is on. That can cause problems if those parameters are expressions
with side effects, like a "++". Fix this by evaluating those non-event
parameters once even when CONFIG_LOCK_EVENT_COUNTS is off. This also
eliminates the need for the __maybe_unused attribute on the wait_early
local variable in pv_wait_node().
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20240319005004.1692705-1-longman@redhat.com
Diffstat (limited to 'kernel/locking/lock_events.h')
-rw-r--r--  kernel/locking/lock_events.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h
index a6016b91803d..d2345e9c0190 100644
--- a/kernel/locking/lock_events.h
+++ b/kernel/locking/lock_events.h
@@ -53,8 +53,8 @@ static inline void __lockevent_add(enum lock_events event, int inc)
 #else /* CONFIG_LOCK_EVENT_COUNTS */
 
 #define lockevent_inc(ev)
-#define lockevent_add(ev, c)
-#define lockevent_cond_inc(ev, c)
+#define lockevent_add(ev, c)		do { (void)(c); } while (0)
+#define lockevent_cond_inc(ev, c)	do { (void)(c); } while (0)
 
 #endif /* CONFIG_LOCK_EVENT_COUNTS */
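
To illustrate the behavioural difference the patch fixes, here is a minimal
stand-alone sketch (not code from the kernel tree; the old_/new_ macro names,
the lock_pending event token and the counter variables are made up for the
example). With the old empty macro a side-effecting argument is silently
dropped when CONFIG_LOCK_EVENT_COUNTS is off; with the new form the argument
is evaluated exactly once and its value discarded:

#include <stdio.h>

/* Old form: the parameter vanishes, so any side effect in 'c' is lost. */
#define old_lockevent_add(ev, c)

/* New form: 'c' is evaluated exactly once, then its value is discarded. */
#define new_lockevent_add(ev, c)	do { (void)(c); } while (0)

int main(void)
{
	int old_cnt = 0, new_cnt = 0;

	old_lockevent_add(lock_pending, old_cnt++);	/* old_cnt stays 0   */
	new_lockevent_add(lock_pending, new_cnt++);	/* new_cnt becomes 1 */

	printf("old_cnt=%d new_cnt=%d\n", old_cnt, new_cnt);
	return 0;
}

The same property is what removes the need for __maybe_unused on wait_early
in pv_wait_node(): since the new macro always references its non-event
parameter, the variable is used in both configurations.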