diff options
| author | Liangyan <liangyan.peng@linux.alibaba.com> | 2019-08-26 20:16:33 +0800 |
|---|---|---|
| committer | Ben Hutchings <ben@decadent.org.uk> | 2019-11-22 15:57:26 +0000 |
| commit | 734653b4bf8cdd9d3885093692c36134bc5e1e4f (patch) | |
| tree | 330e1f1f6a8d2236b6a49c9ad957087a6450a494 /kernel | |
| parent | fe9f8517515917570e1e63f88373f5c4f31319bc (diff) | |
| download | linux-734653b4bf8cdd9d3885093692c36134bc5e1e4f.tar.gz linux-734653b4bf8cdd9d3885093692c36134bc5e1e4f.tar.bz2 linux-734653b4bf8cdd9d3885093692c36134bc5e1e4f.zip | |
sched/fair: Don't assign runtime for throttled cfs_rq
commit 5e2d2cc2588bd3307ce3937acbc2ed03c830a861 upstream.
do_sched_cfs_period_timer() will refill cfs_b runtime and call
distribute_cfs_runtime() to unthrottle cfs_rqs. Sometimes all of
cfs_b->runtime is incorrectly allocated to a single cfs_rq, so the other
cfs_rqs attached to this cfs_b can't get runtime and stay throttled.
We find that a throttled cfs_rq can have non-negative
cfs_rq->runtime_remaining, which causes an unexpected cast from s64 to
u64 in this snippet:
distribute_cfs_runtime() {
	runtime = -cfs_rq->runtime_remaining + 1;
}
The runtime here will change to a large number and consume all
cfs_b->runtime in this cfs_b period.
According to Ben Segall, a throttled cfs_rq can have
account_cfs_rq_runtime() called on it because it is throttled before
idle_balance(), and idle_balance() calls update_rq_clock() to add time
that is then accounted to the task.
This commit prevents a cfs_rq from being assigned new runtime while it
is throttled, until distribute_cfs_runtime() is called.
Signed-off-by: Liangyan <liangyan.peng@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: shanpeic@linux.alibaba.com
Cc: xlpang@linux.alibaba.com
Fixes: d3d9dc330236 ("sched: Throttle entities exceeding their allowed bandwidth")
Link: https://lkml.kernel.org/r/20190826121633.6538-1-liangyan.peng@linux.alibaba.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bwh: Backported to 3.16: Open-code SCHED_WARN_ON().]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/fair.c | 7 |
1 file changed, 7 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7b882eed3e47..ea2d33aa1f55 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3271,6 +3271,8 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
 	if (likely(cfs_rq->runtime_remaining > 0))
 		return;
 
+	if (cfs_rq->throttled)
+		return;
 	/*
 	 * if we're unable to extend our runtime we resched so that the active
 	 * hierarchy can be throttled
@@ -3450,6 +3452,11 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
 		if (!cfs_rq_throttled(cfs_rq))
 			goto next;
 
+		/* By the above check, this should never be true */
+#ifdef CONFIG_SCHED_DEBUG
+		WARN_ON_ONCE(cfs_rq->runtime_remaining > 0);
+#endif
+
 		runtime = -cfs_rq->runtime_remaining + 1;
 		if (runtime > remaining)
 			runtime = remaining;
```
