author    David S. Miller <davem@davemloft.net>  2023-12-27 13:08:10 +0000
committer David S. Miller <davem@davemloft.net>  2023-12-27 13:08:10 +0000
commit    2f7ccf1d8835975a92fae7704fa73cb2e49bc12f (patch)
tree      6c814f739546bc21a243d1128bd95866d78e7858 /net/core/skbuff.c
parent    c2b2ee36250d967c21890cb801e24af4b6a9eaa5 (diff)
parent    dc1a00380aa6cc24dc3709ee50a22d1e24cd3673 (diff)
Merge branch 'net-tja11xx-macsec-support'
Radu Pirea says:
====================
Add MACsec support for TJA11XX C45 PHYs
This is the MACsec support for TJA11XX PHYs. The MACsec block encrypts
the Ethernet frames on the fly and has no buffering. This operation
grows each frame by 32 bytes. If the frames are sent back to back, the
MACsec block will not have enough room to insert the SecTAG and the ICV,
and the frames will be dropped.
To mitigate this, the PHY can parse a specific ethertype followed by
padding bytes and replace them with the SecTAG and ICV. These padding
bytes might be dummy, or might carry information about the TX SC that
must be used to encrypt the frame.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
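The back-to-back drop scenario in the commit message comes down to simple arithmetic: without pre-padding, the PHY must grow every frame by 32 bytes on the wire, which is impossible when frames arrive at line rate with no slack between them; with the placeholder scheme, the host already sends the extra 32 bytes and the PHY rewrites them in place. A minimal userspace sketch of that accounting follows. The 16/16 split between SecTAG (with SCI) and ICV is an illustrative assumption consistent with the 32-byte total stated above; the function name `phy_tx_len` is hypothetical.

```c
#include <assert.h>

/* Illustrative assumption: a SecTAG carrying the SCI is 16 bytes and
 * the ICV is 16 bytes, matching the 32-byte growth described in the
 * commit message. */
#define SECTAG_LEN	16
#define ICV_LEN		16
#define MACSEC_OVERHEAD	(SECTAG_LEN + ICV_LEN)

/* Hypothetical helper: length of the frame as it leaves the PHY.
 *
 * If the host pre-padded the frame with a MACSEC_OVERHEAD-byte
 * placeholder, the PHY overwrites it in place and the wire length
 * equals what the host sent; otherwise the PHY must stretch the frame
 * by MACSEC_OVERHEAD bytes, which fails for back-to-back traffic. */
static int phy_tx_len(int host_len, int prepadded)
{
	return prepadded ? host_len : host_len + MACSEC_OVERHEAD;
}
```

For a 100-byte frame, `phy_tx_len(100, 0)` is 132 while `phy_tx_len(132, 1)` stays at 132: pre-padding keeps the on-wire length identical to what the MAC transmitted, so no buffering is needed in the PHY.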
Diffstat (limited to 'net/core/skbuff.c')
 net/core/skbuff.c | 25
 1 file changed, 25 insertions, 0 deletions
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ce5687ddb768..12d22c0b8551 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5995,6 +5995,31 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len)
 }
 EXPORT_SYMBOL(skb_ensure_writable);
 
+int skb_ensure_writable_head_tail(struct sk_buff *skb, struct net_device *dev)
+{
+	int needed_headroom = dev->needed_headroom;
+	int needed_tailroom = dev->needed_tailroom;
+
+	/* For tail taggers, we need to pad short frames ourselves, to ensure
+	 * that the tail tag does not fail at its role of being at the end of
+	 * the packet, once the conduit interface pads the frame. Account for
+	 * that pad length here, and pad later.
+	 */
+	if (unlikely(needed_tailroom && skb->len < ETH_ZLEN))
+		needed_tailroom += ETH_ZLEN - skb->len;
+	/* skb_headroom() returns unsigned int... */
+	needed_headroom = max_t(int, needed_headroom - skb_headroom(skb), 0);
+	needed_tailroom = max_t(int, needed_tailroom - skb_tailroom(skb), 0);
+
+	if (likely(!needed_headroom && !needed_tailroom && !skb_cloned(skb)))
+		/* No reallocation needed, yay! */
+		return 0;
+
+	return pskb_expand_head(skb, needed_headroom, needed_tailroom,
+				GFP_ATOMIC);
+}
+EXPORT_SYMBOL(skb_ensure_writable_head_tail);
+
 /* remove VLAN header from packet and update csum accordingly.
  * expects a non skb_vlan_tag_present skb with a vlan tag payload
  */
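The room calculation in the new helper can be followed in isolation. The sketch below mirrors the head/tail accounting of skb_ensure_writable_head_tail() with plain integers instead of a struct sk_buff; `needs_realloc` and its parameters are illustrative stand-ins (in the kernel, the rooms come from skb_headroom()/skb_tailroom() and a positive result leads to pskb_expand_head()).

```c
#include <assert.h>

#define ETH_ZLEN 60	/* minimum Ethernet frame length, excluding FCS */

static int max_int(int a, int b)
{
	return a > b ? a : b;
}

/* Hypothetical userspace mirror of the skb_ensure_writable_head_tail()
 * math: returns nonzero when the buffer would need reallocation.
 *
 * Short frames are padded up to ETH_ZLEN by the conduit interface, so a
 * tail tagger must also reserve room for that pad, or its tag would no
 * longer sit at the end of the packet. */
static int needs_realloc(int len, int headroom, int tailroom,
			 int needed_headroom, int needed_tailroom,
			 int cloned)
{
	if (needed_tailroom && len < ETH_ZLEN)
		needed_tailroom += ETH_ZLEN - len;

	/* Only the shortfall relative to the existing rooms matters. */
	needed_headroom = max_int(needed_headroom - headroom, 0);
	needed_tailroom = max_int(needed_tailroom - tailroom, 0);

	return needed_headroom || needed_tailroom || cloned;
}
```

For example, a 100-byte frame with 16 bytes of room at each end and an 8-byte requirement needs no reallocation, while a 40-byte frame that must carry a 4-byte tail tag does: the conduit will pad it by 20 bytes, so 24 bytes of tailroom are required, not 4.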