| author | Zizhi Wo <wozizhi@huawei.com> | 2025-05-06 10:09:31 +0800 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2025-05-13 12:08:27 -0600 |
| commit | c4da7bf54b1f76e7c5c8cc6d1c4db8b19af67c5d | |
| tree | 3e4b052c88044599e1ee6d54f9d056b19d6de1f5 | |
| parent | a404be5399d762f5737a4a731b42a38f552f2b44 | |
blk-throttle: Introduce flag "BIO_TG_BPS_THROTTLED"
Subsequent patches will split the single queue into separate bps and iops
queues. To prevent IO that has already passed through the bps queue at a
single tg level from being counted toward bps wait time again, we introduce
the "BIO_TG_BPS_THROTTLED" flag. Since throttle and QoS operate at different
levels, we reuse the bit value of "BIO_QOS_THROTTLED".
We set this flag when charging bps and clear it when charging iops, as the
bio will then move to the upper-level tg or be dispatched.
This patch does not involve functional changes.
Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-5-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-throttle.c')
| -rw-r--r-- | block/blk-throttle.c | 9 |
1 file changed, 7 insertions(+), 2 deletions(-)
```diff
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index fea09a91c20b..ee4eeee8f21f 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -792,12 +792,16 @@ static void throtl_charge_bps_bio(struct throtl_grp *tg, struct bio *bio)
 	unsigned int bio_size = throtl_bio_data_size(bio);
 
 	/* Charge the bio to the group */
-	if (!bio_flagged(bio, BIO_BPS_THROTTLED))
+	if (!bio_flagged(bio, BIO_BPS_THROTTLED) &&
+	    !bio_flagged(bio, BIO_TG_BPS_THROTTLED)) {
+		bio_set_flag(bio, BIO_TG_BPS_THROTTLED);
 		tg->bytes_disp[bio_data_dir(bio)] += bio_size;
+	}
 }
 
 static void throtl_charge_iops_bio(struct throtl_grp *tg, struct bio *bio)
 {
+	bio_clear_flag(bio, BIO_TG_BPS_THROTTLED);
 	tg->io_disp[bio_data_dir(bio)]++;
 }
 
@@ -823,7 +827,8 @@ static unsigned long tg_dispatch_bps_time(struct throtl_grp *tg, struct bio *bio
 	/* no need to throttle if this bio's bytes have been accounted */
 	if (bps_limit == U64_MAX || tg->flags & THROTL_TG_CANCELING ||
-	    bio_flagged(bio, BIO_BPS_THROTTLED))
+	    bio_flagged(bio, BIO_BPS_THROTTLED) ||
+	    bio_flagged(bio, BIO_TG_BPS_THROTTLED))
 		return 0;
 
 	tg_update_slice(tg, rw);
```
