path: root/drivers/s390/net
Age | Commit message | Author | Files | Lines
2018-03-09 | s390/qeth: restructure IP notification handlers | Julian Wiedmann | 2 | -63/+49
Extract a helper that does the actual work & returns the right NOTIFY_* responses, and start putting the temporary ipaddr container objects on the stack rather than kmalloc'ing them. They are small, and this reduces the confusion of which objects actually get added to qeth's IP tables. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
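As an illustration of that shape (hypothetical names throughout, assuming a standard inetaddr notifier; not the driver's actual code), the wrapper only maps the helper's result onto a NOTIFY_* code, and the short-lived container lives on the stack:

    #include <linux/inetdevice.h>
    #include <linux/netdevice.h>
    #include <linux/notifier.h>

    struct my_ipaddr {                      /* hypothetical stand-in for qeth's container */
        __be32 addr;
        int prefix;
    };

    static int my_l3_register_addr(struct net_device *dev, struct my_ipaddr *tmp)
    {
        /* hypothetical: copy *tmp into the per-card IP table, return 0 or -errno */
        return 0;
    }

    static int my_do_add(struct net_device *dev, struct in_ifaddr *ifa)
    {
        struct my_ipaddr tmp = {            /* on the stack, no kmalloc() */
            .addr   = ifa->ifa_address,
            .prefix = inet_mask_len(ifa->ifa_mask),
        };

        return my_l3_register_addr(dev, &tmp);
    }

    static int my_inetaddr_event(struct notifier_block *nb, unsigned long event,
                                 void *ptr)
    {
        struct in_ifaddr *ifa = ptr;
        struct net_device *dev = ifa->ifa_dev->dev;

        if (event != NETDEV_UP)             /* handle only address-add events here */
            return NOTIFY_DONE;
        return my_do_add(dev, ifa) ? NOTIFY_BAD : NOTIFY_OK;
    }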
2018-03-09 | s390/qeth: reset NAPI context during queue init | Julian Wiedmann | 3 | -4/+2
init_qdio_queues() resets the Input Queue's overall QDIO state, and positions the buffer cursor back to 0. So this is the obvious place to also reset the queue's NAPI context (in contrast to doing it rather randomly in the middle of the big set_online() path). No functional change. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-09 | s390/qeth: reduce RX skb setup | Julian Wiedmann | 2 | -10/+1
Newly-allocated skbs default to PACKET_HOST, and eth_type_trans() is smart enough to determine any other packet type from the frame's destination address. So except for the IQD sniffer case, there is no need to set up skb->pkt_type manually. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
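For context, a generic RX sketch (not qeth's code) showing why the manual assignment is redundant: eth_type_trans() looks at the destination MAC and sets skb->pkt_type to PACKET_HOST, PACKET_MULTICAST, PACKET_BROADCAST or PACKET_OTHERHOST on its own.

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void my_rx_one_frame(struct net_device *dev, const void *frame,
                                unsigned int len)
    {
        struct sk_buff *skb = netdev_alloc_skb(dev, len);

        if (!skb)
            return;
        skb_put_data(skb, frame, len);              /* copy the received frame */
        skb->protocol = eth_type_trans(skb, dev);   /* also sets skb->pkt_type */
        netif_receive_skb(skb);
    }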
2018-03-09 | s390/qeth: allocate skb from NAPI cache | Julian Wiedmann | 3 | -6/+4
napi_alloc_skb() doesn't need to disable IRQs during the allocation, and thus may save us a few cycles. Doing so requires a small fix-up in the HiperTransport path, which currently assumes a fixed NET_SKB_PAD headroom padding. napi_alloc_skb() adds an additional NET_IP_ALIGN padding, so use the proper helper for setting up the mac_header offset. Use this opportunity to convert the non-NAPI path to netdev_alloc_skb(), which means that skb->dev is now always set up during allocation and doesn't need to be assigned manually. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
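A sketch of the allocation side (generic shape, not the driver's exact code): napi_alloc_skb() already reserves NET_SKB_PAD + NET_IP_ALIGN of headroom and sets skb->dev from the napi context, so the mac_header offset is recorded with the proper helper rather than assuming a fixed NET_SKB_PAD.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static struct sk_buff *my_build_rx_skb(struct napi_struct *napi,
                                           const void *l2_hdr, unsigned int hdr_len,
                                           unsigned int data_len)
    {
        struct sk_buff *skb = napi_alloc_skb(napi, hdr_len + data_len);

        if (!skb)
            return NULL;
        skb_reset_mac_header(skb);          /* headroom handled by the allocator */
        skb_put_data(skb, l2_hdr, hdr_len); /* copy the L2 header ...            */
        return skb;                         /* ... caller appends the payload    */
    }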
2018-03-09 | s390/qeth: pass correct length to header_ops->create() | Julian Wiedmann | 1 | -4/+3
We need to pass the *payload* length, not the L2 address length. For qeth (using eth_header()) this is merely a cosmetic change: the parameter only matters when building headers for ETH_P_802_2 or ETH_P_802_3, whereas our fake headers are built with ETH_P_IP / ETH_P_IPV6 / 0. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
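For reference, dev_hard_header() (which lands in header_ops->create(), i.e. eth_header() here) takes the payload length as its last argument; a minimal sketch with a hypothetical wrapper name:

    #include <linux/if_ether.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* 'payload_len' is the amount of data that will follow the L2 header - not
     * ETH_ALEN. eth_header() ignores it for ETH_P_IP/ETH_P_IPV6 and only uses
     * it to fill the 802.2/802.3 length field otherwise.
     */
    static int my_push_l2_header(struct sk_buff *skb, struct net_device *dev,
                                 const u8 *daddr, unsigned int payload_len)
    {
        return dev_hard_header(skb, dev, ntohs(skb->protocol),
                               daddr, dev->dev_addr, payload_len);
    }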
2018-03-09 | s390/qeth: advertise IFF_UNICAST_FLT | Julian Wiedmann | 1 | -0/+1
qeth implements HW-based Unicast Filtering (via SETVMAC) on L2 devices. Tell the stack, so it knows that receiving traffic for secondary addresses doesn't require full-blown promiscuous mode. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
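The advertisement itself is a single netdev flag, typically set while the net_device is being initialized; a minimal sketch:

    #include <linux/netdevice.h>

    static void my_netdev_setup(struct net_device *dev)
    {
        /* tells the core that HW unicast filtering is available, so extra
         * unicast addresses don't force the device into promiscuous mode
         */
        dev->priv_flags |= IFF_UNICAST_FLT;
    }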
2018-03-09 | s390/qeth: support SG for more device types | Julian Wiedmann | 2 | -13/+17
NETIF_F_SG support is currently limited to OSA (and for L2 even OSD) devices. Advertise it for some more device types (OSM, L2 OSX, z/VM OSA) that share the same code paths. For now, keep it switched off by default on these devices. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-09 | s390/qeth: remove outdated portname debug msg | Julian Wiedmann | 1 | -5/+2
The 'portname' attribute is deprecated and setting it has no effect. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-09 | s390/qeth: use __ipa_cmd() for casting an IPA cmd buffer | Julian Wiedmann | 3 | -28/+24
"s390/qeth: fix SETIP command handling" introduced a new helper, apply it driver-wide. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-28 | s390/qeth: fix IPA command submission race | Julian Wiedmann | 1 | -9/+10
If multiple IPA commands are built & sent out concurrently, fill_ipacmd_header() may assign a seqno value to a command that's different from what send_control_data() later assigns to this command's reply. This is due to other commands passing through send_control_data(), and incrementing card->seqno.ipa along the way. So one IPA command has no reply that's waiting for its seqno, while some other IPA command has multiple reply objects waiting for it. Only one of those waiting replies wins, and the other(s) times out and triggers a recovery via send_ipa_cmd(). Fix this by making sure that the same seqno value is assigned to a command and its reply object. Do so immediately before submitting the command & while holding the irq_pending "lock", to produce nicely ascending seqnos. As a side effect, *all* IPA commands now use a reply object that's waiting for its actual seqno. Previously, early IPA commands that were submitted while the card was still DOWN used the "catch-all" IDX seqno. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
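A hypothetical sketch of that pattern (invented names, and a plain spinlock standing in for the driver's actual submission serialization): the command and its reply object receive the seqno in one step, under the same lock that orders submissions, so they can never disagree.

    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct my_cmd   { u32 seqno; /* ... */ };
    struct my_reply { u32 seqno; /* ... */ };

    static void my_submit_cmd(spinlock_t *submit_lock, u32 *next_seqno,
                              struct my_cmd *cmd, struct my_reply *reply)
    {
        unsigned long flags;

        spin_lock_irqsave(submit_lock, flags);
        cmd->seqno = *next_seqno;       /* command and reply always agree ...      */
        reply->seqno = *next_seqno;     /* ... because both are set under the lock */
        (*next_seqno)++;
        /* hand the command to the HW before dropping the lock */
        spin_unlock_irqrestore(submit_lock, flags);
    }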
2018-02-28 | s390/qeth: fix IP address lookup for L3 devices | Julian Wiedmann | 2 | -51/+74
Current code ("qeth_l3_ip_from_hash()") matches a queried address object against objects in the IP table by IP address, Mask/Prefix Length and MAC address ("qeth_l3_ipaddrs_is_equal()"). But what callers actually require is either a) "is this IP address registered" (ie. match by IP address only), before adding a new address. b) or "is this address object registered" (ie. match all relevant attributes), before deleting an address. Right now 1. the ADD path is too strict in its lookup, and eg. doesn't detect conflicts between an existing NORMAL address and a new VIPA address (because the NORMAL address will have mask != 0, while VIPA has a mask == 0), 2. the DELETE path is not strict enough, and eg. allows del_rxip() to delete a VIPA address as long as the IP address matches. Fix all this by adding helpers (_addr_match_ip() and _addr_match_all()) that do the appropriate checking. Note that the ADD path for NORMAL addresses is special, as qeth keeps track of how many times such an address is in use (and there is no immediate way of returning errors to the caller). So when a requested NORMAL address _fully_ matches an existing one, it's not considered a conflict and we merely increment the refcount. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-28 | Revert "s390/qeth: fix using of ref counter for rxip addresses" | Julian Wiedmann | 1 | -5/+3
This reverts commit cb816192d986f7596009dedcf2201fe2e5bc2aa7. The issue this attempted to fix never actually occurs. l3_add_rxip() checks (via l3_ip_from_hash()) if the requested address was previously added to the card. If so, it returns -EEXIST and doesn't call l3_add_ip(). As a result, the "address exists" path in l3_add_ip() is never taken for rxip addresses, and this patch had no effect. Fixes: cb816192d986 ("s390/qeth: fix using of ref counter for rxip addresses") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-28 | s390/qeth: fix double-free on IP add/remove race | Julian Wiedmann | 1 | -1/+2
Registering an IPv4 address with the HW takes quite a while, so we temporarily drop the ip_htable lock. Any concurrent add/remove of the same IP adjusts the IP's use count, and (on remove) is then blocked by addr->in_progress. After the register call has completed, we check the use count for concurrently attempted add/remove calls - and possibly straight-away deregister the IP again. This happens via l3_delete_ip(), which 1) looks up the queried IP in the htable (getting a reference to the *same* queried object), 2) deregisters the IP from the HW, and 3) frees the IP object. The caller in l3_add_ip() then does a second free on the same object. For this case, skip all the extra checks and lookups in l3_delete_ip() and just deregister & free the IP object ourselves. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-28 | s390/qeth: fix IP removal on offline cards | Julian Wiedmann | 1 | -11/+3
If the HW is not reachable, then none of the IPs in qeth's internal table has been registered with the HW yet. So when deleting such an IP, there's no need to stage it for deregistration - just drop it from the table. This fixes the "add-delete-add" scenario on an offline card, where the second "add" merely increments the IP's use count. But as the IP is still set to DISP_ADDR_DELETE from the previous "delete" step, l3_recover_ip() won't register it with the HW when the card goes online. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-28 | s390/qeth: fix overestimated count of buffer elements | Julian Wiedmann | 2 | -9/+12
qeth_get_elements_for_range() doesn't know how to handle a 0-length range (ie. start == end), and returns 1 when it should return 0. Such ranges occur on TSO skbs, where the L2/L3/L4 headers (and thus all of the skb's linear data) are skipped when mapping the skb into regular buffer elements. This overestimation may cause several performance-related issues: 1. sub-optimal IO buffer selection, where the next buffer gets selected even though the skb would actually still fit into the current buffer. 2. forced linearization, if the element count for a non-linear skb exceeds QETH_MAX_BUFFER_ELEMENTS. Rather than modifying qeth_get_elements_for_range() and adding overhead to every caller, fix up those callers that are at risk of passing a 0-length range. Fixes: 2863c61334aa ("qeth: refactor calculation of SBALE count") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
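To see why an empty range trips up the page-based counting (the exact expression in the driver may differ; this is the usual PFN_UP/PFN_DOWN form), and how a caller-side guard avoids it:

    #include <linux/pfn.h>      /* PFN_UP(), PFN_DOWN() */
    #include <linux/types.h>

    /* assumed counting form: pages touched by [start, end) */
    static unsigned int my_elements_for_range(unsigned long start, unsigned long end)
    {
        return PFN_UP(end) - PFN_DOWN(start);
    }

    /* caller-side guard, matching the commit's approach of fixing the callers:
     * for start == end == 0x1080 (4 KiB pages), PFN_UP(0x1080) - PFN_DOWN(0x1080)
     * is 2 - 1 = 1 even though the range holds no data at all.
     */
    static unsigned int my_count_hdr_elements(unsigned long start, unsigned long end)
    {
        return (start != end) ? my_elements_for_range(start, end) : 0;
    }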
2018-02-09 | s390/qeth: fix SETIP command handling | Julian Wiedmann | 2 | -6/+13
send_control_data() applies some special handling to SETIP v4 IPA commands. But current code parses *all* command types for the SETIP command code. Limit the command code check to IPA commands. Fixes: 5b54e16f1a54 ("qeth: do not spin for SETIP ip assist command") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-09 | s390/qeth: fix underestimated count of buffer elements | Ursula Braun | 1 | -1/+1
For a memory range/skb where the last byte falls onto a page boundary (ie. 'end' is of the form xxx...xxx001), the PFN_UP() part of the calculation currently doesn't round up to the next PFN due to an off-by-one error. Thus qeth believes that the skb occupies one page less than it actually does, and may select an IO buffer that doesn't have enough spare buffer elements to fit all of the skb's data. HW detects this as a malformed buffer descriptor, and raises an exception which then triggers device recovery. Fixes: 2863c61334aa ("qeth: refactor calculation of SBALE count") Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
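A worked example of the boundary case, assuming 4 KiB pages and the usual PFN-based counting (the exact before/after expressions are in the commit itself): take start = 0x1ff0 and end = 0x3001, i.e. the last byte of the range sits at 0x3000, exactly on a page boundary. The range touches pages 0x1, 0x2 and 0x3, so three elements are needed.

    pages actually touched:             0x1, 0x2, 0x3   -> 3
    PFN_UP(end)     - PFN_DOWN(start):  4 - 1           -> 3  (correct)
    PFN_UP(end - 1) - PFN_DOWN(start):  3 - 1           -> 2  (one page short)

The undercount is what lets qeth pick an IO buffer with too few free elements, which the HW then rejects as a malformed descriptor.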
2018-01-02 | s390/qeth: support early setup for z/VM NICs | Julian Wiedmann | 1 | -9/+77
The transport mode that a z/VM NIC is configured in must match the hypervisor-internal network which the NIC is coupled to. To get this right automatically, have qeth issue a diag26c hypervisor call that provides all sorts of information for a specific VNIC. With z/VM update VM65918, this also includes the VNIC's required transport mode. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-01-02 | s390/qeth: use common helper to display rxip/vipa | Julian Wiedmann | 1 | -66/+24
By parameterising the address type, we need just one helper that walks the IP table and builds up the response string. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-01-02 | s390/qeth: improve error reporting on IP add/removal | Julian Wiedmann | 3 | -25/+35
When adding & removing IP entries for rxip/vipa/ipato/hsuid, forward any resulting errors back to the sysfs-level caller. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-22 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -1/+8
Lots of overlapping changes. Also on the net-next side the XDP state management is handled more in the generic layers so undo the 'net' nfp fix which isn't applicable in net-next. Include a necessary change by Jakub Kicinski, with log message: ==================== cls_bpf no longer takes care of offload tracking. Make sure netdevsim performs necessary checks. This fixes a warning caused by TC trying to remove a filter it has not added. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: replace open-coded in*_pton() | Julian Wiedmann | 3 | -89/+12
There's a common helper for parsing an IP address string, let's use it. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
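The common helpers are in4_pton() and in6_pton() from <linux/inet.h>; a minimal usage sketch (hypothetical wrapper name):

    #include <linux/errno.h>
    #include <linux/inet.h>
    #include <linux/types.h>

    /* both helpers return 1 on success; srclen == -1 means "NUL-terminated",
     * delim == -1 means "no trailing delimiter expected"
     */
    static int my_parse_ip(const char *buf, bool is_v6, u8 *out /* 4 or 16 bytes */)
    {
        const char *end;

        if (is_v6)
            return in6_pton(buf, -1, out, -1, &end) ? 0 : -EINVAL;
        return in4_pton(buf, -1, out, -1, &end) ? 0 : -EINVAL;
    }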
2017-12-20 | s390/qeth: pass full data length to l3_fill_header() | Julian Wiedmann | 1 | -8/+10
The TSO and IQD paths already need to fix-up the current values, and OSA will require more flexibility in the future as well. So just let the caller specify the data length. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: streamline l3_fill_header() | Julian Wiedmann | 1 | -56/+43
Consolidate the cast type translation, move the passthru path out of the RCU-guarded section, and use the appropriate rtable helpers when determining the next-hop address. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: unionize next-hop field in qeth L3 header | Julian Wiedmann | 2 | -8/+23
The L3 packet descriptor's 'dest_addr' field is used for a different purpose in RX descriptors. Clean up the hard-coded byte accesses and try to be more self-documenting. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: recognize non-IP multicast on L3 transmit | Julian Wiedmann | 2 | -5/+1
When 1. an skb has no neighbour, and 2. skb->protocol is not IP[V6], we select the skb's cast type based on its destination MAC address. The multicast check is currently restricted to Multicast IP-mapped MACs. Extend it to also cover non-IP Multicast MACs. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
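A generic sketch of that classification (not the driver's exact code): testing the group bit via is_multicast_ether_addr() covers IP-mapped and non-IP multicast MACs alike.

    #include <linux/etherdevice.h>

    enum my_cast_type { MY_CAST_UNICAST, MY_CAST_MULTICAST, MY_CAST_BROADCAST };

    static enum my_cast_type my_cast_type_from_mac(const u8 *daddr)
    {
        if (is_broadcast_ether_addr(daddr))
            return MY_CAST_BROADCAST;
        if (is_multicast_ether_addr(daddr))     /* checks the group bit only */
            return MY_CAST_MULTICAST;
        return MY_CAST_UNICAST;
    }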
2017-12-20 | s390/qeth: clean up l3_get_cast_type() | Julian Wiedmann | 2 | -35/+20
Use the proper helpers to check for multicast IP addressing, and remove some ancient Token Ring code. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: robustify qeth_get_ip_version() | Julian Wiedmann | 1 | -6/+8
Instead of assuming that skb->data points to the Ethernet header, use the right helper and struct to access the Ethertype field. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
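One possible shape of such a helper (a sketch assuming the VLAN-aware accessor fits here; not necessarily the driver's exact code):

    #include <linux/if_ether.h>
    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    static int my_get_ip_version(struct sk_buff *skb)
    {
        __be16 proto = vlan_get_protocol(skb);  /* Ethertype, skipping a VLAN tag */

        if (proto == htons(ETH_P_IPV6))
            return 6;
        if (proto == htons(ETH_P_IP))
            return 4;
        return 0;
    }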
2017-12-20 | s390/qeth: align L2 and L3 set_rx_mode() implementations | Julian Wiedmann | 2 | -67/+43
Once all of qeth_l3_set_rx_mode()'s single-use helpers are folded back in, the two implementations actually look quite similar. So improve the readability by converting both set_rx_mode() routines to a common format. This also allows us to walk ip_mc_htable just once, instead of three times. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: use ether_addr_* helpers | Julian Wiedmann | 6 | -37/+32
Be a little more self-documenting, and get rid of OSA_ADDR_LEN. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
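Typical replacements (generic sketch): ether_addr_equal() and ether_addr_copy() take over from open-coded memcmp()/memcpy(), and ETH_ALEN stands in for the driver-private OSA_ADDR_LEN.

    #include <linux/etherdevice.h>

    static bool my_mac_changed(const u8 *old_mac, const u8 *new_mac)
    {
        return !ether_addr_equal(old_mac, new_mac);
    }

    static void my_store_mac(u8 *dst, const u8 *src)
    {
        ether_addr_copy(dst, src);      /* both buffers must be 2-byte aligned */
    }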
2017-12-20 | s390/qeth: consolidate qeth MAC address helpers | Julian Wiedmann | 1 | -29/+18
For adding/removing a MAC address, use just one helper each that handles both unicast and multicast. Saves one level of indirection for multicast addresses, while improving the error reporting for unicast addresses. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: don't keep track of MAC address's cast type | Julian Wiedmann | 2 | -20/+8
Instead of tracking the uc/mc state in each MAC address object, just check the multicast bit in the address itself. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: drop CONFIG_QETH_IPV6 | Julian Wiedmann | 2 | -54/+9
commit "s390/qeth: use ip*_eth_mc_map helpers" removed the last occurrence of CONFIG_IPV6-dependent code. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: use ip*_eth_mc_map helpers | Julian Wiedmann | 1 | -29/+14
Get rid of some wrapper indirection, and stop accessing the skb at hard-coded offsets. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
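The helpers in question are ip_eth_mc_map() and ipv6_eth_mc_map(); a small sketch of mapping a multicast group to its MAC (hypothetical wrapper):

    #include <linux/if_ether.h>
    #include <net/addrconf.h>
    #include <net/ip.h>

    static void my_mc_group_to_mac(__be32 v4_group, const struct in6_addr *v6_group,
                                   bool is_v6, char mac[ETH_ALEN])
    {
        if (is_v6)
            ipv6_eth_mc_map(v6_group, mac);     /* 33:33:xx:xx:xx:xx */
        else
            ip_eth_mc_map(v4_group, mac);       /* 01:00:5e:xx:xx:xx */
    }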
2017-12-20 | qeth: convert qeth_reply.refcnt from atomic_t to refcount_t | Elena Reshetova | 2 | -6/+5
atomic_t variables are currently used to implement reference counters with the following properties: - counter is initialized to 1 using atomic_set() - a resource is freed upon counter reaching zero - once counter reaches zero, its further increments aren't allowed - counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.) Such atomic variables should be converted to a newly provided refcount_t type and API that prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situation and be exploitable. The variable qeth_reply.refcnt is used as pure reference counter. Convert it to refcount_t and fix up the operations. Suggested-by: Kees Cook <keescook@chromium.org> Reviewed-by: David Windsor <dwindsor@gmail.com> Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> [jwi: removed the WARN_ONs. Use CONFIG_REFCOUNT_FULL if you care.] Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
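The conversion pattern is the same for this qeth_reply counter and the lcs_reply counter in the next commit; a generic sketch:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct my_reply {
        refcount_t refcnt;
        /* ... payload ... */
    };

    static struct my_reply *my_reply_alloc(void)
    {
        struct my_reply *r = kzalloc(sizeof(*r), GFP_KERNEL);

        if (r)
            refcount_set(&r->refcnt, 1);        /* was atomic_set(..., 1) */
        return r;
    }

    static void my_reply_get(struct my_reply *r)
    {
        refcount_inc(&r->refcnt);               /* was atomic_inc() */
    }

    static void my_reply_put(struct my_reply *r)
    {
        if (refcount_dec_and_test(&r->refcnt))  /* was atomic_dec_and_test() */
            kfree(r);
    }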
2017-12-20 | net: convert lcs_reply.refcnt from atomic_t to refcount_t | Elena Reshetova | 2 | -8/+5
atomic_t variables are currently used to implement reference counters with the following properties: - counter is initialized to 1 using atomic_set() - a resource is freed upon counter reaching zero - once counter reaches zero, its further increments aren't allowed - counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.) Such atomic variables should be converted to a newly provided refcount_t type and API that prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situation and be exploitable. The variable lcs_reply.refcnt is used as pure reference counter. Convert it to refcount_t and fix up the operations. Suggested-by: Kees Cook <keescook@chromium.org> Reviewed-by: David Windsor <dwindsor@gmail.com> Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> [jwi: removed the WARN_ONs. Use CONFIG_REFCOUNT_FULL if you care.] Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-20 | s390/qeth: fix error handling in checksum cmd callback | Julian Wiedmann | 1 | -1/+8
Make sure to check both return code fields before processing the response. Otherwise we risk operating on invalid data. Fixes: c9475369bd2b ("s390/qeth: rework RX/TX checksum offload") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15 | s390/qeth: update takeover IPs after configuration change | Julian Wiedmann | 5 | -37/+67
Any modification to the takeover IP-ranges requires that we re-evaluate which IP addresses are takeover-eligible. Otherwise we might do takeover for some addresses when we no longer should, or vice-versa. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15 | s390/qeth: lock IP table while applying takeover changes | Julian Wiedmann | 1 | -0/+2
Modifying the flags of an IP addr object needs to be protected against eg. concurrent removal of the same object from the IP table. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15 | s390/qeth: don't apply takeover changes to RXIP | Julian Wiedmann | 2 | -4/+6
When takeover is switched off, current code clears the 'TAKEOVER' flag on all IPs. But the flag is also used for RXIP addresses, and those should not be affected by the takeover mode. Fix the behaviour by consistently applying takeover logic to NORMAL addresses only. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15 | s390/qeth: apply takeover changes when mode is toggled | Julian Wiedmann | 3 | -20/+19
Just as for an explicit enable/disable, toggling the takeover mode also requires that the IP addresses get updated. Otherwise all IPs that were added to the table before the mode-toggle, get registered with the old settings. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-05 | s390: add a few more SPDX identifiers | Martin Schwidefsky | 1 | -0/+1
Add the correct SPDX license to a few more files under arch/s390 and drivers/s390 which have been missed so far. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-12-04 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | Linus Torvalds | 4 | -4/+41
Pull networking fixes from David Miller: 1) Various TCP control block fixes, including one that crashes with SELinux, from David Ahern and Eric Dumazet. 2) Fix ACK generation in rxrpc, from David Howells. 3) ipvlan doesn't set the mark properly in the ipv4 route lookup key, from Gao Feng. 4) SIT configuration doesn't take on the frag_off ipv4 field configuration properly, fix from Hangbin Liu. 5) TSO can fail after device down/up on stmmac, fix from Lars Persson. 6) Various bpftool fixes (mostly in JSON handling) from Quentin Monnet. 7) Various SKB leak fixes in vhost/tun/tap (mostly observed as performance problems). From Wei Xu. 8) mvpps's TX descriptors were not zero initialized, from Yan Markman. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (57 commits) tcp: use IPCB instead of TCP_SKB_CB in inet_exact_dif_match() tcp: add tcp_v4_fill_cb()/tcp_v4_restore_cb() rxrpc: Fix the MAINTAINERS record rxrpc: Use correct netns source in rxrpc_release_sock() liquidio: fix incorrect indentation of assignment statement stmmac: reset last TSO segment size after device open ipvlan: Add the skb->mark as flow4's member to lookup route s390/qeth: build max size GSO skbs on L2 devices s390/qeth: fix GSO throughput regression s390/qeth: fix thinko in IPv4 multicast address tracking tap: free skb if flags error tun: free skb in early errors vhost: fix skb leak in handle_rx() bnxt_en: Fix a variable scoping in bnxt_hwrm_do_send_msg() bnxt_en: fix dst/src fid for vxlan encap/decap actions bnxt_en: wildcard smac while creating tunnel decap filter bnxt_en: Need to unconditionally shut down RoCE in bnxt_shutdown phylink: ensure we take the link down when phylink_stop() is called sfp: warn about modules requiring address change sequence sfp: improve RX_LOS handling ...
2017-12-02 | s390/qeth: build max size GSO skbs on L2 devices | Julian Wiedmann | 2 | -4/+2
The current GSO skb size limit was copy&pasted over from the L3 path, where it is needed due to a TSO limitation. As L2 devices don't offer TSO support (and thus all GSO skbs are segmented before they reach the driver), there's no reason to restrict the stack in how large it may build the GSO skbs. Fixes: d52aec97e5bc ("qeth: enable scatter/gather in layer 2 mode") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
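Illustrative sketch of the idea (not the exact patch): when the device can't do TSO, the GSO size cap can simply stay at the stack's maximum instead of inheriting the L3/TSO limit.

    #include <linux/netdevice.h>

    static void my_setup_gso_limit(struct net_device *dev, bool has_tso,
                                   unsigned int tso_limit)
    {
        netif_set_gso_max_size(dev, has_tso ? tso_limit : GSO_MAX_SIZE);
    }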
2017-12-02 | s390/qeth: fix GSO throughput regression | Julian Wiedmann | 4 | -0/+38
Using GSO with small MTUs currently results in a substantial throughput regression - which is caused by how qeth needs to map non-linear skbs into its IO buffer elements: compared to a linear skb, each GSO-segmented skb effectively consumes twice as many buffer elements (ie two instead of one) due to the additional header-only part. This causes the Output Queue to be congested with low-utilized IO buffers. Fix this as follows: If the MSS is low enough so that a non-SG GSO segmentation produces order-0 skbs (currently ~3500 byte), opt out from NETIF_F_SG. This is where we anticipate the biggest savings, since an SG-enabled GSO segmentation produces skbs that always consume at least two buffer elements. Larger MSS values continue to get a SG-enabled GSO segmentation, since 1) the relative overhead of the additional header-only buffer element becomes less noticeable, and 2) the linearization overhead increases. With the throughput regression fixed, re-enable NETIF_F_SG by default to reap the significant CPU savings of GSO. Fixes: 5722963a8e83 ("qeth: do not turn on SG per default") Reported-by: Nils Hoppmann <niho@de.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-02 | s390/qeth: fix thinko in IPv4 multicast address tracking | Julian Wiedmann | 1 | -0/+1
Commit 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") reworked how secondary addresses are managed for qeth devices. Instead of dropping & subsequently re-adding all addresses on every ndo_set_rx_mode() call, qeth now keeps track of the addresses that are currently registered with the HW. On a ndo_set_rx_mode(), we thus only need to do (de-)registration requests for the addresses that have actually changed. On L3 devices, the lookup for IPv4 Multicast addresses checks the wrong hashtable - and thus never finds a match. As a result, we first delete *all* such addresses, and then re-add them again. So each set_rx_mode() causes a short period where the IPv4 Multicast addresses are not registered, and the card stops forwarding inbound traffic for them. Fix this by setting the ->is_multicast flag on the lookup object, thus enabling qeth_l3_ip_from_hash() to search the correct hashtable and find a match there. Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback") Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-30 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds | 10 | -43/+10
Pull s390 fixes from Martin Schwidefsky: - SPDX identifiers are added to more of the s390 specific files. - The ELF_ET_DYN_BASE base patch from Kees is reverted, with the change some old 31-bit programs crash. - Bug fixes and cleanups. * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (29 commits) s390/gs: add compat regset for the guarded storage broadcast control block s390: revert ELF_ET_DYN_BASE base changes s390: Remove redundant license text s390: crypto: Remove redundant license text s390: include: Remove redundant license text s390: kernel: Remove redundant license text s390: add SPDX identifiers to the remaining files s390: appldata: add SPDX identifiers to the remaining files s390: pci: add SPDX identifiers to the remaining files s390: mm: add SPDX identifiers to the remaining files s390: crypto: add SPDX identifiers to the remaining files s390: kernel: add SPDX identifiers to the remaining files s390: sthyi: add SPDX identifiers to the remaining files s390: drivers: Remove redundant license text s390: crypto: Remove redundant license text s390: virtio: add SPDX identifiers to the remaining files s390: scsi: zfcp_aux: add SPDX identifier s390: net: add SPDX identifiers to the remaining files s390: char: add SPDX identifiers to the remaining files s390: cio: add SPDX identifiers to the remaining files ...
2017-11-24 | s390: drivers: Remove redundant license text | Greg Kroah-Hartman | 3 | -43/+0
Now that the SPDX tag is in all drivers/s390/ files, that identifies the license in a specific and legally-defined manner. So the extra GPL text wording can be removed as it is no longer needed at all. This is done on a quest to remove the 700+ different ways that files in the kernel describe the GPL license text. And there's unneeded stuff like the address (sometimes incorrect) for the FSF which is never needed. No copyright headers or other non-license-description text was removed. Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Sebastian Ott <sebott@linux.vnet.ibm.com> Cc: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com> Cc: Ursula Braun <ubraun@linux.vnet.ibm.com> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Halil Pasic <pasic@linux.vnet.ibm.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-11-24 | s390: net: add SPDX identifiers to the remaining files | Greg Kroah-Hartman | 10 | -0/+10
It's good to have SPDX identifiers in all files to make it easier to audit the kernel tree for correct licenses. Update the drivers/s390/net/ files with the correct SPDX license identifier based on the license text in the file itself. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. This work is based on a script and data from Thomas Gleixner, Philippe Ombredanne, and Kate Stewart. Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com> Cc: Ursula Braun <ubraun@linux.vnet.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: Philippe Ombredanne <pombredanne@nexb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
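For reference, the tag being added is a single machine-readable comment on the first line of each file, for example (hypothetical file name):

    // SPDX-License-Identifier: GPL-2.0
    /*
     * drivers/s390/net/example.c -- hypothetical file; the existing copyright
     * header below the tag stays in place, only the long GPL boiler plate is
     * dropped by the companion "Remove redundant license text" commits.
     */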
2017-11-21 | treewide: setup_timer() -> timer_setup() | Kees Cook | 1 | -4/+5
This converts all remaining cases of the old setup_timer() API into using timer_setup(), where the callback argument is the structure already holding the struct timer_list. These should have no behavioral changes, since they just change which pointer is passed into the callback with the same available pointers after conversion. It handles the following examples, in addition to some other variations. Casting from unsigned long: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... setup_timer(&ptr->my_timer, my_callback, ptr); and forced object casts: void my_callback(struct something *ptr) { ... } ... setup_timer(&ptr->my_timer, my_callback, (unsigned long)ptr); become: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... timer_setup(&ptr->my_timer, my_callback, 0); Direct function assignments: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... ptr->my_timer.function = my_callback; have a temporary cast added, along with converting the args: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... ptr->my_timer.function = (TIMER_FUNC_TYPE)my_callback; And finally, callbacks without a data assignment: void my_callback(unsigned long data) { ... } ... setup_timer(&ptr->my_timer, my_callback, 0); have their argument renamed to verify they're unused during conversion: void my_callback(struct timer_list *unused) { ... } ... timer_setup(&ptr->my_timer, my_callback, 0); The conversion is done with the following Coccinelle script: spatch --very-quiet --all-includes --include-headers \ -I ./arch/x86/include -I ./arch/x86/include/generated \ -I ./include -I ./arch/x86/include/uapi \ -I ./arch/x86/include/generated/uapi -I ./include/uapi \ -I ./include/generated/uapi --include ./include/linux/kconfig.h \ --dir . \ --cocci-file ~/src/data/timer_setup.cocci @fix_address_of@ expression e; @@ setup_timer( -&(e) +&e , ...) // Update any raw setup_timer() usages that have a NULL callback, but // would otherwise match change_timer_function_usage, since the latter // will update all function assignments done in the face of a NULL // function initialization in setup_timer(). 
@change_timer_function_usage_NULL@ expression _E; identifier _timer; type _cast_data; @@ ( -setup_timer(&_E->_timer, NULL, _E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E->_timer, NULL, (_cast_data)_E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E._timer, NULL, &_E); +timer_setup(&_E._timer, NULL, 0); | -setup_timer(&_E._timer, NULL, (_cast_data)&_E); +timer_setup(&_E._timer, NULL, 0); ) @change_timer_function_usage@ expression _E; identifier _timer; struct timer_list _stl; identifier _callback; type _cast_func, _cast_data; @@ ( -setup_timer(&_E->_timer, _callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | _E->_timer@_stl.function = _callback; | _E->_timer@_stl.function = &_callback; | _E->_timer@_stl.function = (_cast_func)_callback; | _E->_timer@_stl.function = (_cast_func)&_callback; | _E._timer@_stl.function = _callback; | _E._timer@_stl.function = &_callback; | _E._timer@_stl.function = (_cast_func)_callback; | _E._timer@_stl.function = (_cast_func)&_callback; ) // callback(unsigned long arg) @change_callback_handle_cast depends on change_timer_function_usage@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; identifier _handle; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { ( ... when != _origarg _handletype *_handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... 
when != _origarg ) } // callback(unsigned long arg) without existing variable @change_callback_handle_cast_no_arg depends on change_timer_function_usage && !change_callback_handle_cast@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { + _handletype *_origarg = from_timer(_origarg, t, _timer); + ... when != _origarg - (_handletype *)_origarg + _origarg ... when != _origarg } // Avoid already converted callbacks. @match_callback_converted depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier t; @@ void _callback(struct timer_list *t) { ... } // callback(struct something *handle) @change_callback_handle_arg depends on change_timer_function_usage && !match_callback_converted && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; @@ void _callback( -_handletype *_handle +struct timer_list *t ) { + _handletype *_handle = from_timer(_handle, t, _timer); ... } // If change_callback_handle_arg ran on an empty function, remove // the added handler. @unchange_callback_handle_arg depends on change_timer_function_usage && change_callback_handle_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; identifier t; @@ void _callback(struct timer_list *t) { - _handletype *_handle = from_timer(_handle, t, _timer); } // We only want to refactor the setup_timer() data argument if we've found // the matching callback. This undoes changes in change_timer_function_usage. @unchange_timer_function_usage depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg && !change_callback_handle_arg@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type change_timer_function_usage._cast_data; @@ ( -timer_setup(&_E->_timer, _callback, 0); +setup_timer(&_E->_timer, _callback, (_cast_data)_E); | -timer_setup(&_E._timer, _callback, 0); +setup_timer(&_E._timer, _callback, (_cast_data)&_E); ) // If we fixed a callback from a .function assignment, fix the // assignment cast now. @change_timer_function_assignment depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_func; typedef TIMER_FUNC_TYPE; @@ ( _E->_timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -&_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)_callback; +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)&_callback +(TIMER_FUN