| author | Geliang Tang <geliang.tang@suse.com> | 2023-08-21 15:25:19 -0700 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2023-08-22 17:31:19 -0700 |
| commit | 0fa1b3783a17d75a4aa1651a18ede041ffca5750 | |
| tree | 3735b005b1d160142a19aa0533e7590641045121 /net/mptcp/sched.c | |
| parent | 07336a87fe871518a7b3508e29a21ca1735b3edc | |
mptcp: use get_send wrapper
This patch adds multiple-subflow support to __mptcp_push_pending() and
__mptcp_subflow_push_pending() by using the get_send() wrapper in them
instead of mptcp_subflow_get_send().
Check each subflow's scheduled flag to determine which subflow or
subflows the scheduler picked, and use those to send data.
Move the msk_owned_by_me() and fallback checks from
mptcp_subflow_get_send() into the get_send() wrapper.
This commit allows the scheduler to set the subflow->scheduled bit in
multiple subflows, but it does not allow sending redundant data: multiple
scheduled subflows each send sequential, non-overlapping portions of the
data stream.
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <martineau@kernel.org>
Link: https://lore.kernel.org/r/20230821-upstream-net-next-20230818-v1-8-0c860fb256a8@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/mptcp/sched.c')
 net/mptcp/sched.c | 13 +++++++++++++
 1 file changed, 13 insertions(+), 0 deletions(-)
```diff
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 884606686cfe..078b5d44978d 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -99,6 +99,19 @@ int mptcp_sched_get_send(struct mptcp_sock *msk)
 	struct mptcp_subflow_context *subflow;
 	struct mptcp_sched_data data;
 
+	msk_owned_by_me(msk);
+
+	/* the following check is moved out of mptcp_subflow_get_send */
+	if (__mptcp_check_fallback(msk)) {
+		if (msk->first &&
+		    __tcp_can_send(msk->first) &&
+		    sk_stream_memory_free(msk->first)) {
+			mptcp_subflow_set_scheduled(mptcp_subflow_ctx(msk->first), true);
+			return 0;
+		}
+		return -EINVAL;
+	}
+
 	mptcp_for_each_subflow(msk, subflow) {
 		if (READ_ONCE(subflow->scheduled))
 			return 0;
```