path: root/drivers/dma
2024-04-07  dmaengine: fsl-dpaa2-qdma: Remove unused function dpdmai_create()  (Frank Li; 2 files changed, -56/+0)
Remove unused function dpdmai_create(); Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240320-dpaa2-v1-2-eb56e47c94ec@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: fsl-dpaa2-qdma: clean up unused macro  (Frank Li; 1 file changed, -9/+0)
Remove unused macro definition. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240320-dpaa2-v1-1-eb56e47c94ec@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: fsl-edma: add i.MX8ULP edma support  (Joy Zou; 3 files changed, -0/+29)
Add support for the i.MX8ULP platform to the eDMA driver. Introduce the use of the correct FSL_EDMA_DRV_HAS_CHCLK flag to handle per-channel clock configurations. Signed-off-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240323-8ulp_edma-v3-5-c0e981027c05@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: fsl-edma: clean up chclk and FSL_EDMA_DRV_HAS_CHCLK  (Frank Li; 2 files changed, -10/+0)
No device currently uses the chclk and FSL_EDMA_DRV_HAS_CHCLK features. Remove these unused features. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240323-8ulp_edma-v3-3-c0e981027c05@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: fsl-edma: add safety check for 'srcid'  (Frank Li; 1 file changed, -0/+7)
Ensure that 'srcid' is a non-zero value so that a dtb cannot pass an invalid 'srcid' to the driver. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240323-8ulp_edma-v3-2-c0e981027c05@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
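A minimal sketch of the kind of validation described, assuming the check sits in the driver's of_dma xlate path and that 'srcid' arrives as a cell in the device-tree phandle arguments (argument position, device pointer and error handling are illustrative, not the actual patch):

    if (dma_spec->args_count < 2 || dma_spec->args[1] == 0) {
        /* srcid 0 is not a valid request source; reject bad dtb data */
        dev_err(dev, "invalid srcid from device tree\n");
        return NULL;
    }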
2024-04-07  dmaengine: fsl-edma: remove 'slave_id' from fsl_edma_chan  (Frank Li; 3 files changed, -8/+7)
The 'slave_id' field is redundant as it duplicates the functionality of 'srcid'. Remove 'slave_id' from fsl_edma_chan to eliminate redundancy. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240323-8ulp_edma-v3-1-c0e981027c05@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dma: dw-axi-dmac: support per channel interrupt  (Baruch Siach; 2 files changed, -8/+23)
Hardware might not support a single combined interrupt that covers all channels. In that case we have to deal with one interrupt per channel. Add support for that configuration. Signed-off-by: Baruch Siach <baruch@tkos.co.il> Link: https://lore.kernel.org/r/ebab52e886ef1adc3c40e636aeb1ba3adfe2e578.1711453387.git.baruchs-c@neureality.ai Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  Avoid hw_desc array overrun in dw-axi-dmac  (Joao Pinto; 2 files changed, -4/+3)
I have a use case where nr_buffers = 3 and each descriptor is composed of 3 segments, resulting in the DMA channel's descs_allocated being 9. Since axi_desc_put() handles the hw_desc array based on descs_allocated, this scenario would result in a kernel panic (the hw_desc array will be overrun). To fix this, the proposal is to add a new member to the axi_dma_desc structure, where we keep the number of allocated hw_descs (set in axi_desc_alloc()) and use it in axi_desc_put() to handle the hw_desc array correctly. Additionally, I propose to remove the axi_chan_start_first_queued() call after completing the transfer, since it was identified that an imbalance can occur (started descriptors can be interrupted and the transfer ignored because the DMA channel is not enabled). Signed-off-by: Joao Pinto <jpinto@synopsys.com> Link: https://lore.kernel.org/r/1711536564-12919-1-git-send-email-jpinto@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
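A rough sketch of the approach described, with structure and field names assumed for illustration rather than taken from the actual patch: the descriptor remembers how many hw_descs it really allocated, and the free path iterates over that count instead of the channel-wide descs_allocated.

    struct axi_dma_desc {
        struct axi_dma_hw_desc *hw_desc;
        u32 nr_hw_descs;        /* recorded by axi_desc_alloc() */
        /* ... */
    };

    static void axi_desc_put(struct axi_dma_desc *desc)
    {
        struct axi_dma_chan *chan = desc->chan;
        int i;

        /* Walk only the hw_descs this descriptor actually owns. */
        for (i = 0; i < desc->nr_hw_descs; i++)
            dma_pool_free(chan->desc_pool, desc->hw_desc[i].lli,
                          desc->hw_desc[i].llp);

        kfree(desc->hw_desc);
        kfree(desc);
    }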
2024-04-07  dmaengine: dw-axi-dmac: Add support for StarFive JH8100 DMA  (Tan Chun Hau; 1 file changed, -0/+3)
The JH8100 requires a reset operation only at device probe. Signed-off-by: Tan Chun Hau <chunhau.tan@starfivetech.com> Link: https://lore.kernel.org/r/20240327025126.229475-3-chunhau.tan@starfivetech.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: axi-dmac: move to device managed probe  (Nuno Sa; 1 file changed, -44/+34)
In axi_dmac_probe() there's a mix of device managed APIs and explicit cleanup in the driver's .remove() hook. Move fully to device managed APIs and drop the .remove() hook. Signed-off-by: Nuno Sa <nuno.sa@analog.com> Link: https://lore.kernel.org/r/20240328-axi-dmac-devm-probe-v3-2-523c0176df70@analog.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: axi-dmac: fix possible race in remove()  (Nuno Sa; 1 file changed, -1/+1)
We need to first free the IRQ before calling of_dma_controller_free(). Otherwise we could get an interrupt and schedule a tasklet while removing the DMA controller. Fixes: 0e3b67b348b8 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller") Cc: stable@kernel.org Signed-off-by: Nuno Sa <nuno.sa@analog.com> Link: https://lore.kernel.org/r/20240328-axi-dmac-devm-probe-v3-1-523c0176df70@analog.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
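A sketch of the ordering described above (the two function names come from the commit text; the variable names around them are assumed): releasing the IRQ first guarantees no tasklet can be scheduled against a controller that is being torn down.

    /* remove path, order matters: */
    free_irq(dmac->irq, dmac);                  /* 1) no more IRQs or tasklets */
    of_dma_controller_free(pdev->dev.of_node);  /* 2) now safe to unregister   */
    /* ... remaining teardown ... */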
2024-04-07  dmaengine: xilinx: xdma: Clarify kdoc in XDMA driver  (Miquel Raynal; 1 file changed, -6/+8)
Clarify the kernel doc of xdma_fill_descs(), especially how big chunks will be handled. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com> Link: https://lore.kernel.org/stable/20240327-digigram-xdma-fixes-v1-3-45f4a52c0283%40bootlin.com Link: https://lore.kernel.org/r/20240327-digigram-xdma-fixes-v1-3-45f4a52c0283@bootlin.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: xilinx: xdma: Fix synchronization issue  (Louis Chauvet; 2 files changed, -8/+21)
The current xdma_synchronize method does not properly wait for the last transfer to be done. Due to limitations of the XDMA engine, it is not possible to stop a transfer in the middle of a descriptor. In other words, if a stop is requested at the end of descriptor "N" and the OS is fast enough, the DMA controller will effectively stop immediately. However, if the OS is slightly too slow to request the stop and the DMA engine starts descriptor "N+1", the N+1 transfer will be performed until its end. This means that after a terminate_all, the last descriptor must remain valid and the synchronization must wait for this last descriptor to be terminated. Fixes: 855c2e1d1842 ("dmaengine: xilinx: xdma: Rework xdma_terminate_all()") Fixes: f5c392d106e7 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks") Cc: stable@vger.kernel.org Suggested-by: Miquel Raynal <miquel.raynal@bootlin.com> Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com> Link: https://lore.kernel.org/r/20240327-digigram-xdma-fixes-v1-2-45f4a52c0283@bootlin.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: xilinx: xdma: Fix wrong offsets in the buffers addresses in dma descriptor  (Miquel Raynal; 1 file changed, -1/+1)
The addition of interleaved transfers slightly changed the way addresses inside DMA descriptors are derived, breaking cyclic transfers. Fixes: 3e184e64c2e5 ("dmaengine: xilinx: xdma: Prepare the introduction of interleaved DMA transfers") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com> Link: https://lore.kernel.org/r/20240327-digigram-xdma-fixes-v1-1-45f4a52c0283@bootlin.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dma: Add lockdep asserts to virt-dma  (Sean Anderson; 1 file changed, -0/+10)
Add lockdep asserts to all functions with "vc.lock must be held by caller" in their documentation. This will help catch cases where these assumptions do not hold. Signed-off-by: Sean Anderson <sean.anderson@linux.dev> Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com> Link: https://lore.kernel.org/r/20240308210034.3634938-4-sean.anderson@linux.dev Signed-off-by: Vinod Koul <vkoul@kernel.org>
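The pattern being added is a runtime assertion of the documented locking rule; a minimal sketch (the helper function name is invented for illustration, vc->lock is the virt-dma channel lock mentioned above):

    static void vchan_example_requires_lock(struct virt_dma_chan *vc)
    {
        lockdep_assert_held(&vc->lock); /* "vc.lock must be held by caller" */
        /* ... body that relies on the caller holding vc->lock ... */
    }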
2024-04-07  dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore  (Sean Anderson; 1 file changed, -6/+4)
xilinx_dpdma_chan_done_irq and xilinx_dpdma_chan_vsync_irq are always called with IRQs disabled from xilinx_dpdma_irq_handler. Therefore we don't need to save/restore the IRQ flags. Signed-off-by: Sean Anderson <sean.anderson@linux.dev> Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com> Link: https://lore.kernel.org/r/20240308210034.3634938-3-sean.anderson@linux.dev Signed-off-by: Vinod Koul <vkoul@kernel.org>
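A before/after sketch of the simplification: in hard-IRQ context interrupts are already disabled, so the flags save/restore adds nothing (the lock and flags names are placeholders):

    /* Before: called with IRQs already disabled, yet saving flags anyway. */
    spin_lock_irqsave(&chan->lock, flags);
    /* ... */
    spin_unlock_irqrestore(&chan->lock, flags);

    /* After: the plain spinlock variants are sufficient here. */
    spin_lock(&chan->lock);
    /* ... */
    spin_unlock(&chan->lock);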
2024-04-07  dma: xilinx_dpdma: Fix locking  (Sean Anderson; 1 file changed, -3/+10)
There are several places where either chan->lock or chan->vchan.lock was not held. Add appropriate locking. This fixes lockdep warnings like [ 31.077578] ------------[ cut here ]------------ [ 31.077831] WARNING: CPU: 2 PID: 40 at drivers/dma/xilinx/xilinx_dpdma.c:834 xilinx_dpdma_chan_queue_transfer+0x274/0x5e0 [ 31.077953] Modules linked in: [ 31.078019] CPU: 2 PID: 40 Comm: kworker/u12:1 Not tainted 6.6.20+ #98 [ 31.078102] Hardware name: xlnx,zynqmp (DT) [ 31.078169] Workqueue: events_unbound deferred_probe_work_func [ 31.078272] pstate: 600000c5 (nZCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 31.078377] pc : xilinx_dpdma_chan_queue_transfer+0x274/0x5e0 [ 31.078473] lr : xilinx_dpdma_chan_queue_transfer+0x270/0x5e0 [ 31.078550] sp : ffffffc083bb2e10 [ 31.078590] x29: ffffffc083bb2e10 x28: 0000000000000000 x27: ffffff880165a168 [ 31.078754] x26: ffffff880164e920 x25: ffffff880164eab8 x24: ffffff880164d480 [ 31.078920] x23: ffffff880165a148 x22: ffffff880164e988 x21: 0000000000000000 [ 31.079132] x20: ffffffc082aa3000 x19: ffffff880164e880 x18: 0000000000000000 [ 31.079295] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 [ 31.079453] x14: 0000000000000000 x13: ffffff8802263dc0 x12: 0000000000000001 [ 31.079613] x11: 0001ffc083bb2e34 x10: 0001ff880164e98f x9 : 0001ffc082aa3def [ 31.079824] x8 : 0001ffc082aa3dec x7 : 0000000000000000 x6 : 0000000000000516 [ 31.079982] x5 : ffffffc7f8d43000 x4 : ffffff88003c9c40 x3 : ffffffffffffffff [ 31.080147] x2 : ffffffc7f8d43000 x1 : 00000000000000c0 x0 : 0000000000000000 [ 31.080307] Call trace: [ 31.080340] xilinx_dpdma_chan_queue_transfer+0x274/0x5e0 [ 31.080518] xilinx_dpdma_issue_pending+0x11c/0x120 [ 31.080595] zynqmp_disp_layer_update+0x180/0x3ac [ 31.080712] zynqmp_dpsub_plane_atomic_update+0x11c/0x21c [ 31.080825] drm_atomic_helper_commit_planes+0x20c/0x684 [ 31.080951] drm_atomic_helper_commit_tail+0x5c/0xb0 [ 31.081139] commit_tail+0x234/0x294 [ 31.081246] drm_atomic_helper_commit+0x1f8/0x210 [ 31.081363] drm_atomic_commit+0x100/0x140 [ 31.081477] drm_client_modeset_commit_atomic+0x318/0x384 [ 31.081634] drm_client_modeset_commit_locked+0x8c/0x24c [ 31.081725] drm_client_modeset_commit+0x34/0x5c [ 31.081812] __drm_fb_helper_restore_fbdev_mode_unlocked+0x104/0x168 [ 31.081899] drm_fb_helper_set_par+0x50/0x70 [ 31.081971] fbcon_init+0x538/0xc48 [ 31.082047] visual_init+0x16c/0x23c [ 31.082207] do_bind_con_driver.isra.0+0x2d0/0x634 [ 31.082320] do_take_over_console+0x24c/0x33c [ 31.082429] do_fbcon_takeover+0xbc/0x1b0 [ 31.082503] fbcon_fb_registered+0x2d0/0x34c [ 31.082663] register_framebuffer+0x27c/0x38c [ 31.082767] __drm_fb_helper_initial_config_and_unlock+0x5c0/0x91c [ 31.082939] drm_fb_helper_initial_config+0x50/0x74 [ 31.083012] drm_fbdev_dma_client_hotplug+0xb8/0x108 [ 31.083115] drm_client_register+0xa0/0xf4 [ 31.083195] drm_fbdev_dma_setup+0xb0/0x1cc [ 31.083293] zynqmp_dpsub_drm_init+0x45c/0x4e0 [ 31.083431] zynqmp_dpsub_probe+0x444/0x5e0 [ 31.083616] platform_probe+0x8c/0x13c [ 31.083713] really_probe+0x258/0x59c [ 31.083793] __driver_probe_device+0xc4/0x224 [ 31.083878] driver_probe_device+0x70/0x1c0 [ 31.083961] __device_attach_driver+0x108/0x1e0 [ 31.084052] bus_for_each_drv+0x9c/0x100 [ 31.084125] __device_attach+0x100/0x298 [ 31.084207] device_initial_probe+0x14/0x20 [ 31.084292] bus_probe_device+0xd8/0xdc [ 31.084368] deferred_probe_work_func+0x11c/0x180 [ 31.084451] process_one_work+0x3ac/0x988 [ 31.084643] worker_thread+0x398/0x694 [ 31.084752] kthread+0x1bc/0x1c0 [ 31.084848] ret_from_fork+0x10/0x20 [ 
31.084932] irq event stamp: 64549 [ 31.084970] hardirqs last enabled at (64548): [<ffffffc081adf35c>] _raw_spin_unlock_irqrestore+0x80/0x90 [ 31.085157] hardirqs last disabled at (64549): [<ffffffc081adf010>] _raw_spin_lock_irqsave+0xc0/0xdc [ 31.085277] softirqs last enabled at (64503): [<ffffffc08001071c>] __do_softirq+0x47c/0x500 [ 31.085390] softirqs last disabled at (64498): [<ffffffc080017134>] ____do_softirq+0x10/0x1c [ 31.085501] ---[ end trace 0000000000000000 ]--- Fixes: 7cbb0c63de3f ("dmaengine: xilinx: dpdma: Add the Xilinx DisplayPort DMA engine driver") Signed-off-by: Sean Anderson <sean.anderson@linux.dev> Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com> Link: https://lore.kernel.org/r/20240308210034.3634938-2-sean.anderson@linux.dev Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: imx-sdma: support dual fifo for DEV_TO_DEV  (Shengjiu Wang; 1 file changed, -1/+17)
SSI and SPDIF are dual fifo interfaces; when supporting ASRC P2P with SSI and SPDIF, the src fifo or dst fifo number can be two. The p2p watermark level bits 13 and 14 are designed for these use cases. This patch completes this function in the driver. Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com> Signed-off-by: Joy Zou <joy.zou@nxp.com> Acked-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240329-sdma_upstream-v4-3-daeb3067dea7@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: imx-sdma: Support 24bit/3bytes for sg mode  (Shengjiu Wang; 1 file changed, -0/+4)
Add the 3-byte bus width, which is supported by the SDMA hardware, for scatter-gather mode. Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com> Signed-off-by: Vipul Kumar <vipul_kumar@mentor.com> Signed-off-by: Srikanth Krishnakar <Srikanth_Krishnakar@mentor.com> Acked-by: Robin Gong <yibin.gong@nxp.com> Reviewed-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240329-sdma_upstream-v4-2-daeb3067dea7@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: imx-sdma: Support allocate memory from internal SRAM (iram)  (Nicolin Chen; 1 file changed, -10/+36)
Allocate memory from SoC internal SRAM to reduce DDR access and keep DDR in a lower power state (such as self-refresh) longer. Check iram_pool before sdma_init() so that ccb/context can be allocated from iram, because DDR may be in self-refresh in the low-power audio case while SDMA is still running. Reviewed-by: Shengjiu Wang <shengjiu.wang@nxp.com> Signed-off-by: Nicolin Chen <b42378@freescale.com> Signed-off-by: Joy Zou <joy.zou@nxp.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240329-sdma_upstream-v4-1-daeb3067dea7@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-07  dmaengine: idma64: Add check for dma_set_max_seg_size  (Chen Ni; 1 file changed, -1/+3)
Since dma_set_max_seg_size() can fail, check its return value. Fixes: e3fdb1894cfa ("dmaengine: idma64: set maximum allowed segment size for DMA") Signed-off-by: Chen Ni <nichen@iscas.ac.cn> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20240403024932.3342606-1-nichen@iscas.ac.cn Signed-off-by: Vinod Koul <vkoul@kernel.org>
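A minimal sketch of the added check; the size argument and error path are illustrative only:

    ret = dma_set_max_seg_size(dev, max_seg_size /* driver's existing limit */);
    if (ret)
        return ret;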
2024-04-07  dmaengine: idxd: Convert spinlock to mutex to lock evl workqueue  (Rex Zhang; 6 files changed, -13/+12)
drain_workqueue() cannot be called safely in a spinlocked context due to possible task rescheduling. In the multi-task scenario, calling queue_work() while drain_workqueue() will lead to a Call Trace as pushing a work on a draining workqueue is not permitted in spinlocked context. Call Trace: <TASK> ? __warn+0x7d/0x140 ? __queue_work+0x2b2/0x440 ? report_bug+0x1f8/0x200 ? handle_bug+0x3c/0x70 ? exc_invalid_op+0x18/0x70 ? asm_exc_invalid_op+0x1a/0x20 ? __queue_work+0x2b2/0x440 queue_work_on+0x28/0x30 idxd_misc_thread+0x303/0x5a0 [idxd] ? __schedule+0x369/0xb40 ? __pfx_irq_thread_fn+0x10/0x10 ? irq_thread+0xbc/0x1b0 irq_thread_fn+0x21/0x70 irq_thread+0x102/0x1b0 ? preempt_count_add+0x74/0xa0 ? __pfx_irq_thread_dtor+0x10/0x10 ? __pfx_irq_thread+0x10/0x10 kthread+0x103/0x140 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x31/0x50 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1b/0x30 </TASK> The current implementation uses a spinlock to protect event log workqueue and will lead to the Call Trace due to potential task rescheduling. To address the locking issue, convert the spinlock to mutex, allowing the drain_workqueue() to be called in a safe mutex-locked context. This change ensures proper synchronization when accessing the event log workqueue, preventing potential Call Trace and improving the overall robustness of the code. Fixes: c40bd7d9737b ("dmaengine: idxd: process user page faults for completion record") Signed-off-by: Rex Zhang <rex.zhang@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Lijun Pan <lijun.pan@intel.com> Link: https://lore.kernel.org/r/20240404223949.2885604-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
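A before/after sketch of the locking change (the lock and workqueue names are placeholders): drain_workqueue() can sleep, which is why the surrounding lock has to become a mutex.

    /* Before: sleeping call under a spinlock -> not allowed. */
    spin_lock(&evl->lock);
    drain_workqueue(wq);
    spin_unlock(&evl->lock);

    /* After: a mutex permits the sleep inside drain_workqueue(). */
    mutex_lock(&evl->lock);
    drain_workqueue(wq);
    mutex_unlock(&evl->lock);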
2024-04-07  dmaengine: idxd: Check for driver name match before sva user feature  (Jerry Snitselaar; 1 file changed, -8/+9)
Currently, if the user driver is probed on a workqueue configured for another driver and SVA is not enabled on the system, it will print out a number of probe failure messages like the following: [ 264.831140] user: probe of wq13.0 failed with error -95 On some systems, such as GNR, the number of messages can reach over 100. Move the SVA feature check to after the driver name match check. Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20240405213941.3629709-1-jsnitsel@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-03-28  idma64: Don't try to serve interrupts when device is powered off  (Andy Shevchenko; 1 file changed, -0/+4)
When the iDMA 64-bit device is powered off, the IRQ status register reads back as all 1s. This can never happen in a real case and signals that the device is simply powered off. Don't try to serve interrupts that are not ours. Fixes: 667dfed98615 ("dmaengine: add a driver for Intel integrated DMA 64-bit") Reported-by: Heiner Kallweit <hkallweit1@gmail.com> Closes: https://lore.kernel.org/r/700bbb84-90e1-4505-8ff0-3f17ea8bc631@gmail.com Tested-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20240321120453.1360138-1-andriy.shevchenko@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
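A sketch of the guard described above, assuming the handler reads a 32-bit combined status register; the accessor and register names are illustrative, not the driver's actual ones:

    u32 status = dma_readl(idma64, STATUS_INT);   /* assumed accessor */

    /* All bits set means the device is powered off: not our interrupt. */
    if (status == GENMASK(31, 0))
        return IRQ_NONE;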
2024-03-28  dmaengine: tegra186: Fix residual calculation  (Akhil R; 1 file changed, -0/+3)
The existing residual calculation returns an incorrect value when bytes_xfer == bytes_req. This scenario occurs particularly with drivers like UART, where DMA is scheduled for the maximum number of bytes and is terminated when the byte inflow stops. At higher baud rates, tx_status could be requested while there are no bytes left to transfer, which leads to an incorrect residual being set. Hence return a residual of '0' when the bytes transferred equal the bytes requested. Fixes: ee17028009d4 ("dmaengine: tegra: Add tegra gpcdma driver") Signed-off-by: Akhil R <akhilrajeev@nvidia.com> Reviewed-by: Jon Hunter <jonathanh@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20240315124411.17582-1-akhilrajeev@nvidia.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
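A sketch of the corrected calculation (variable names follow the commit text; the modulo form of the original computation is an assumption): without the explicit check, bytes_xfer == bytes_req makes the modulo wrap to zero and the residual gets reported as the full bytes_req instead of 0.

    if (bytes_xfer == bytes_req)
        residual = 0;   /* everything requested has been transferred */
    else
        residual = bytes_req - (bytes_xfer % bytes_req);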
2024-03-28  dmaengine: owl: fix register access functions  (Arnd Bergmann; 1 file changed, -2/+2)
When building with 'make W=1', clang notices that the computed register values are never actually written back but instead the wrong variable is set: drivers/dma/owl-dma.c:244:6: error: variable 'regval' set but not used [-Werror,-Wunused-but-set-variable] 244 | u32 regval; | ^ drivers/dma/owl-dma.c:268:6: error: variable 'regval' set but not used [-Werror,-Wunused-but-set-variable] 268 | u32 regval; | ^ Change these to what was most likely intended. Fixes: 47e20577c24d ("dmaengine: Add Actions Semi Owl family S900 DMA driver") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Peter Korsgaard <peter@korsgaard.com> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Link: https://lore.kernel.org/r/20240322132116.906475-1-arnd@kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
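The class of bug being fixed looks like the read-modify-write sketch below (the register and bit names are made up purely for illustration): the updated value is computed into regval, but the unmodified input is what gets written back.

    u32 regval;

    regval = readl(pchan->base + EXAMPLE_MODE_REG);
    regval |= EXAMPLE_START_BIT;
    writel(val, pchan->base + EXAMPLE_MODE_REG);  /* bug: should write regval */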
2024-03-28  dmaengine: Revert "dmaengine: pl330: issue_pending waits until WFP state"  (Vinod Koul; 1 file changed, -3/+0)
This reverts commit 22a9d9585812 ("dmaengine: pl330: issue_pending waits until WFP state") as it seems to cause a regression in the pl330 driver. Note that the issue now exists in mainline, so a fix still needs to be done. Cc: stable@vger.kernel.org Reported-by: karthikeyan <karthikeyan@linumiz.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-03-15  Merge tag 'dmaengine-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine  (Linus Torvalds; 16 files changed, -174/+540)
Pull dmaengine updates from Vinod Koul: "New hardware support: - Allwinner H616 dma support - Renesas r8a779h0 dma controller support - TI CSI2RX dma support Updates: - Freescale edma driver updates for TCD64 support for i.MX95 - constify of pointers and args - Yaml conversion for MediaTek High-Speed controller binding - TI k3 udma support for TX/RX DMA channels for thread IDs" * tag 'dmaengine-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (25 commits) dmaengine: of: constify of_phandle_args in of_dma_find_controller() dmaengine: pl08x: constify pointer to char in filter function MAINTAINERS: change in AMD ptdma maintainer MAINTAINERS: adjust file entry in MEDIATEK DMA DRIVER dmaengine: idxd: constify the struct device_type usage dt-bindings: renesas,rcar-dmac: Add r8a779h0 support dt-bindings: dma: convert MediaTek High-Speed controller to the json-schema dmaengine: idxd: make dsa_bus_type const dmaengine: fsl-edma: integrate TCD64 support for i.MX95 dt-bindings: fsl-dma: fsl-edma: add fsl,imx95-edma5 compatible string dmaengine: mcf-edma: utilize edma_write_tcdreg() macro for TCD Access dmaengine: fsl-edma: add address for channel mux register in fsl_edma_chan dmaengine: fsl-edma: fix spare build warning dmaengine: fsl-edma: involve help macro fsl_edma_set(get)_tcd() dt-bindings: mmp-dma: convert to YAML dmaengine: ti: k3-psil-j721s2: Add entry for CSI2RX dmaengine: ti: k3-udma-glue: Add function to request RX chan for thread ID dmaengine: ti: k3-udma-glue: Add function to request TX chan for thread ID dmaengine: ti: k3-udma-glue: Update name for remote RX channel device dmaengine: ti: k3-udma-glue: Add function to parse channel by ID ...
2024-03-11  Merge tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files changed, -7/+7)
Pull MSI updates from Thomas Gleixner: "Updates for the MSI interrupt subsystem and initial RISC-V MSI support. The core changes have been adopted from previous work which converted ARM[64] to the new per device MSI domain model, which was merged to support multiple MSI domains per device. The ARM[64] changes are being worked on too, but have not been ready yet. The core and platform-MSI changes have been split out to not hold up RISC-V and to avoid that RISC-V builds on the scheduled-for-removal interfaces. The core support provides new interfaces to handle wire to MSI bridges in a straightforward way and introduces new platform-MSI interfaces which are built on top of the per device MSI domain model. Once ARM[64] is converted over, the old platform-MSI interfaces and the related ugliness in the MSI core code will be removed. The actual MSI parts for RISC-V were finalized late and have been postponed for the next merge window. Drivers: - Add a new driver for the Andes hart-level interrupt controller - Rework the SiFive PLIC driver to prepare for MSI support - Expand the RISC-V INTC driver to support the new RISC-V AIA controller which provides the basis for MSI on RISC-V - A few fixups for the fallout of the core changes" * tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (29 commits) irqchip/riscv-intc: Fix low-level interrupt handler setup for AIA x86/apic/msi: Use DOMAIN_BUS_GENERIC_MSI for HPET/IO-APIC domain search genirq/matrix: Dynamic bitmap allocation irqchip/riscv-intc: Add support for RISC-V AIA irqchip/sifive-plic: Improve locking safety by using irqsave/irqrestore irqchip/sifive-plic: Parse number of interrupts and contexts early in plic_probe() irqchip/sifive-plic: Cleanup PLIC contexts upon irqdomain creation failure irqchip/sifive-plic: Use riscv_get_intc_hwnode() to get parent fwnode irqchip/sifive-plic: Use devm_xyz() for managed allocation irqchip/sifive-plic: Use dev_xyz() in-place of pr_xyz() irqchip/sifive-plic: Convert PLIC driver into a platform driver irqchip/riscv-intc: Introduce Andes hart-level interrupt controller irqchip/riscv-intc: Allow large non-standard interrupt number genirq/irqdomain: Don't call ops->select for DOMAIN_BUS_ANY tokens irqchip/imx-intmux: Handle pure domain searches correctly genirq/msi: Provide MSI_FLAG_PARENT_PM_DEV genirq/irqdomain: Reroute device MSI create_mapping genirq/msi: Provide allocation/free functions for "wired" MSI interrupts genirq/msi: Optionally use dev->fwnode for device domain genirq/msi: Provide DOMAIN_BUS_WIRED_TO_MSI ...
2024-02-23  dmaengine: of: constify of_phandle_args in of_dma_find_controller()  (Krzysztof Kozlowski; 1 file changed, -1/+1)
of_dma_find_controller() does not modify the passed of_phandle_args, so the argument can be made a pointer to const for code safety. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20240208202742.631307-2-krzysztof.kozlowski@linaro.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-23  dmaengine: pl08x: constify pointer to char in filter function  (Krzysztof Kozlowski; 1 file changed, -1/+1)
The opaque argument chan_id passed to the filter function is actually a pointer to const memory, so make that obvious in the filter for code readability and safety. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20240208202742.631307-1-krzysztof.kozlowski@linaro.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-23  dmaengine: ptdma: use consistent DMA masks  (Tadeusz Struk; 1 file changed, -2/+0)
The PTDMA driver sets DMA masks inconsistently in two different places for the same device. The first call is in pt_pci_probe(), where it uses a 48-bit mask. The second call is in pt_dmaengine_register(), where it uses a 64-bit mask. Using the 64-bit DMA mask causes IO_PAGE_FAULT errors on DMA transfers between main memory and other devices; without the extra call it works fine. Additionally, the second call doesn't check the return value, so it can fail silently. Remove the superfluous dma_set_mask() call and only use the 48-bit mask. Cc: stable@vger.kernel.org Fixes: b0b4a6b10577 ("dmaengine: ptdma: register PTDMA controller as a DMA resource") Reviewed-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com> Signed-off-by: Tadeusz Struk <tstruk@gigaio.com> Link: https://lore.kernel.org/r/20240222163053.13842-1-tstruk@gigaio.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
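A sketch of the single mask setup that remains; per the commit text it lives in pt_pci_probe(), while the exact call and error handling shown here are illustrative:

    ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
    if (ret)
        return ret;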
2024-02-23  dmaengine: fsl-qdma: add __iomem and struct in union to fix sparse warning  (Frank Li; 1 file changed, -11/+10)
Fix below sparse warnings. drivers/dma/fsl-qdma.c:645:50: sparse: warning: incorrect type in argument 2 (different address spaces) drivers/dma/fsl-qdma.c:645:50: sparse: expected void [noderef] __iomem *addr drivers/dma/fsl-qdma.c:645:50: sparse: got void drivers/dma/fsl-qdma.c:387:15: sparse: sparse: restricted __le32 degrades to integer drivers/dma/fsl-qdma.c:390:19: sparse: expected restricted __le64 [usertype] data drivers/dma/fsl-qdma.c:392:13: sparse: expected unsigned int [assigned] [usertype] cmd QDMA decriptor have below 3 kind formats. (little endian) Compound Command Descriptor Format ┌──────┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐ │Offset│3│3│2│2│2│2│2│2│2│2│2│2│1│1│1│1│1│1│1│1│1│1│ │ │ │ │ │ │ │ │ │ │ │ │1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│ ├──────┼─┴─┼─┴─┴─┼─┴─┴─┼─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┼─┴─┴─┴─┴─┴─┴─┴─┤ │ 0x0C │DD │ - │QUEUE│ - │ ADDR │ ├──────┼───┴─────┴─────┴───────────────────────────────┴───────────────┤ │ 0x08 │ ADDR │ ├──────┼─────┬─────────────────┬───────────────────────────────────────┤ │ 0x04 │ FMT │ OFFSET │ - │ ├──────┼─┬─┬─┴─────────────────┴───────────────────────┬───────────────┤ │ │ │S│ │ │ │ 0x00 │-│E│ - │ STATUS │ │ │ │R│ │ │ └──────┴─┴─┴───────────────────────────────────────────┴───────────────┘ Compound S/G Table Entry Format ┌──────┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐ │Offset│3│3│2│2│2│2│2│2│2│2│2│2│1│1│1│1│1│1│1│1│1│1│ │ │ │ │ │ │ │ │ │ │ │ │1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│ ├──────┼─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┼─┴─┴─┴─┴─┴─┴─┴─┤ │ 0x0C │ - │ ADDR │ ├──────┼───────────────────────────────────────────────┴───────────────┤ │ 0x08 │ ADDR │ ├──────┼─┬─┬───────────────────────────────────────────────────────────┤ │ 0x04 │E│F│ LENGTH │ ├──────┼─┴─┴─────────────────────────────────┬─────────────────────────┤ │ 0x00 │ - │ OFFSET │ └──────┴─────────────────────────────────────┴─────────────────────────┘ Source/Destination Descriptor Format ┌──────┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┬─┐ │Offset│3│3│2│2│2│2│2│2│2│2│2│2│1│1│1│1│1│1│1│1│1│1│ │ │ │ │ │ │ │ │ │ │ │ │1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│9│8│7│6│5│4│3│2│1│0│ ├──────┼─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┤ │ 0x0C │ CMD │ ├──────┼───────────────────────────────────────────────────────────────┤ │ 0x08 │ - │ ├──────┼───────────────┬───────────────────────┬───────────────────────┤ │ 0x04 │ - │ S[D]SS │ S[D]SD │ ├──────┼───────────────┴───────────────────────┴───────────────────────┤ │ 0x00 │ - │ └──────┴───────────────────────────────────────────────────────────────┘ Previous code use 64bit 'data' map to 0x8 and 0xC. In little endian system CMD is high part of 64bit 'data'. It is correct by left shift 32. But in big endian system, shift left 32 will write to 0x8 position. Sparse detect this problem. Add below field ot match 'Source/Destination Descriptor Format'. struct { __le32 __reserved2; __le32 cmd; } __packed; Using ddf(sdf)->cmd save to correct posistion regardless endian. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202402081929.mggOTHaZ-lkp@intel.com/ Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240219155939.611237-1-Frank.Li@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-22  dmaengine: idxd: constify the struct device_type usage  (Ricardo B. Marliere; 3 files changed, -13/+13)
Since commit aed65af1cc2f ("drivers: make device_type const"), the driver core can properly handle constant struct device_type. Move the dsa_device_type, iax_device_type, idxd_wq_device_type, idxd_cdev_file_type, idxd_cdev_device_type and idxd_group_device_type variables to be constant structures as well, placing them into read-only memory which cannot be modified at runtime. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: "Ricardo B. Marliere" <ricardo@marliere.net> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20240219-device_cleanup-dmaengine-v1-1-9f72f3cf3587@marliere.net Signed-off-by: Vinod Koul <vkoul@kernel.org>
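The constification pattern amounts to adding the const qualifier to the static definitions, e.g. (the initializer contents here are illustrative, not copied from the driver):

    static const struct device_type dsa_device_type = {
        .name = "dsa",
        /* .release and other callbacks unchanged; only 'const' is new */
    };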
2024-02-22  dmaengine: idxd: Ensure safe user copy of completion record  (Fenghua Yu; 1 file changed, -3/+12)
If CONFIG_HARDENED_USERCOPY is enabled, copying completion record from event log cache to user triggers a kernel bug. [ 1987.159822] usercopy: Kernel memory exposure attempt detected from SLUB object 'dsa0' (offset 74, size 31)! [ 1987.170845] ------------[ cut here ]------------ [ 1987.176086] kernel BUG at mm/usercopy.c:102! [ 1987.180946] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI [ 1987.186866] CPU: 17 PID: 528 Comm: kworker/17:1 Not tainted 6.8.0-rc2+ #5 [ 1987.194537] Hardware name: Intel Corporation AvenueCity/AvenueCity, BIOS BHSDCRB1.86B.2492.D03.2307181620 07/18/2023 [ 1987.206405] Workqueue: wq0.0 idxd_evl_fault_work [idxd] [ 1987.212338] RIP: 0010:usercopy_abort+0x72/0x90 [ 1987.217381] Code: 58 65 9c 50 48 c7 c2 17 85 61 9c 57 48 c7 c7 98 fd 6b 9c 48 0f 44 d6 48 c7 c6 b3 08 62 9c 4c 89 d1 49 0f 44 f3 e8 1e 2e d5 ff <0f> 0b 49 c7 c1 9e 42 61 9c 4c 89 cf 4d 89 c8 eb a9 66 66 2e 0f 1f [ 1987.238505] RSP: 0018:ff62f5cf20607d60 EFLAGS: 00010246 [ 1987.244423] RAX: 000000000000005f RBX: 000000000000001f RCX: 0000000000000000 [ 1987.252480] RDX: 0000000000000000 RSI: ffffffff9c61429e RDI: 00000000ffffffff [ 1987.260538] RBP: ff62f5cf20607d78 R08: ff2a6a89ef3fffe8 R09: 00000000fffeffff [ 1987.268595] R10: ff2a6a89eed00000 R11: 0000000000000003 R12: ff2a66934849c89a [ 1987.276652] R13: 0000000000000001 R14: ff2a66934849c8b9 R15: ff2a66934849c899 [ 1987.284710] FS: 0000000000000000(0000) GS:ff2a66b22fe40000(0000) knlGS:0000000000000000 [ 1987.293850] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1987.300355] CR2: 00007fe291a37000 CR3: 000000010fbd4005 CR4: 0000000000f71ef0 [ 1987.308413] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 1987.316470] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400 [ 1987.324527] PKRU: 55555554 [ 1987.327622] Call Trace: [ 1987.330424] <TASK> [ 1987.332826] ? show_regs+0x6e/0x80 [ 1987.336703] ? die+0x3c/0xa0 [ 1987.339988] ? do_trap+0xd4/0xf0 [ 1987.343662] ? do_error_trap+0x75/0xa0 [ 1987.347922] ? usercopy_abort+0x72/0x90 [ 1987.352277] ? exc_invalid_op+0x57/0x80 [ 1987.356634] ? usercopy_abort+0x72/0x90 [ 1987.360988] ? asm_exc_invalid_op+0x1f/0x30 [ 1987.365734] ? usercopy_abort+0x72/0x90 [ 1987.370088] __check_heap_object+0xb7/0xd0 [ 1987.374739] __check_object_size+0x175/0x2d0 [ 1987.379588] idxd_copy_cr+0xa9/0x130 [idxd] [ 1987.384341] idxd_evl_fault_work+0x127/0x390 [idxd] [ 1987.389878] process_one_work+0x13e/0x300 [ 1987.394435] ? __pfx_worker_thread+0x10/0x10 [ 1987.399284] worker_thread+0x2f7/0x420 [ 1987.403544] ? _raw_spin_unlock_irqrestore+0x2b/0x50 [ 1987.409171] ? __pfx_worker_thread+0x10/0x10 [ 1987.414019] kthread+0x107/0x140 [ 1987.417693] ? __pfx_kthread+0x10/0x10 [ 1987.421954] ret_from_fork+0x3d/0x60 [ 1987.426019] ? __pfx_kthread+0x10/0x10 [ 1987.430281] ret_from_fork_asm+0x1b/0x30 [ 1987.434744] </TASK> The issue arises because event log cache is created using kmem_cache_create() which is not suitable for user copy. Fix the issue by creating event log cache with kmem_cache_create_usercopy(), ensuring safe user copy. Fixes: c2f156bf168f ("dmaengine: idxd: create kmem cache for event log fault items") Reported-by: Tony Zhu <tony.zhu@intel.com> Tested-by: Tony Zhu <tony.zhu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Lijun Pan <lijun.pan@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20240209191412.1050270-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
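A before/after sketch of the allocation change (the sizes and flags are placeholders): kmem_cache_create_usercopy() declares which region of each object may be copied to user space, which is exactly what HARDENED_USERCOPY checks for.

    /* Before: objects from this cache may not be copied to user space. */
    cache = kmem_cache_create("dsa0", size, align, 0, NULL);

    /* After: whitelist the completion-record region for user copy. */
    cache = kmem_cache_create_usercopy("dsa0", size, align, 0,
                                       0, size, /* useroffset, usersize */
                                       NULL);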
2024-02-16  dmaengine: fsl-edma: correct max_segment_size setting  (Frank Li; 2 files changed, -3/+6)
Correct the previous setting of 0x3fff to the actual value of 0x7fff. Introduce a new macro, 'EDMA_TCD_ITER_MASK', for improved code clarity, and use FIELD_GET to obtain the accurate maximum value. Cc: stable@vger.kernel.org Fixes: e06748539432 ("dmaengine: fsl-edma: support edma memcpy") Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20240207194733.2112870-1-Frank.Li@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
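A sketch of the corrected limit using the macro named in the commit message; the exact call site and bit width are assumptions consistent with the 0x7fff value above (a 15-bit iteration field):

    #define EDMA_TCD_ITER_MASK  GENMASK(14, 0)  /* 15-bit iteration count */

    dma_set_max_seg_size(dma_dev->dev,
                         FIELD_GET(EDMA_TCD_ITER_MASK, U32_MAX)); /* 0x7fff */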
2024-02-16  dmaengine: idxd: make dsa_bus_type const  (Ricardo B. Marliere; 2 files changed, -2/+2)
Since commit d492cc2573a0 ("driver core: device.h: make struct bus_type a const *"), the driver core can properly handle constant struct bus_type, move the dsa_bus_type variable to be a constant structure as well, placing it into read-only memory which can not be modified at runtime. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: "Ricardo B. Marliere" <ricardo@marliere.net> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20240213-bus_cleanup-idxd-v1-1-c3e703675387@marliere.net Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-16  dmaengine: idxd: Remove shadow Event Log head stored in idxd  (Fenghua Yu; 4 files changed, -5/+3)
head is defined in idxd->evl as a shadow of head in the EVLSTATUS register. There are two issues related to the shadow head: 1. Mismatch between the shadow head and the state of the EVLSTATUS register: If Event Log is supported, upon completion of the Enable Device command, the Event Log head in the variable idxd->evl->head should be cleared to match the state of the EVLSTATUS register. But the variable is not reset currently, leading to a mismatch between the variable and the register state. The mismatch causes incorrect processing of Event Log entries. 2. Unnecessary shadow head definition: The shadow head is unnecessary as head can be read directly from the EVLSTATUS register. Reading head from the register incurs no additional cost because the event log head and tail are always read together and tail is already read directly from the register as required by hardware. Remove the shadow Event Log head stored in idxd->evl to address the mentioned issues. Fixes: 244da66cda35 ("dmaengine: idxd: setup event log configuration") Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20240215024931.1739621-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-15  irqchip: Convert all platform MSI users to the new API  (Thomas Gleixner; 2 files changed, -7/+7)
Switch all the users of the platform MSI domain over to invoke the new interfaces which branch to the original platform MSI functions when the irqdomain associated to the caller device does not yet provide MSI parent functionality. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Anup Patel <apatel@ventanamicro.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240127161753.114685-7-apatel@ventanamicro.com
2024-02-07  dmaengine: fsl-edma: integrate TCD64 support for i.MX95  (Frank Li; 3 files changed, -43/+170)
In i.MX95's edma version 5, the TCD structure is extended to support 64-bit addresses for fields like saddr and daddr. To prevent code duplication, employ helper macros to handle the fields, as the field names remain the same between TCD and TCD64. Change local variables related to TCD addresses from 'u32' to 'dma_addr_t' to accept 64-bit DMA addresses. Change the 'vtcd' type to 'void *' to avoid direct use. Use helper macros to access the TCD fields correctly. Call 'dma_set_mask_and_coherent(64)' when TCD64 is supported. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20231221153528.1588049-7-Frank.Li@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-07  dmaengine: mcf-edma: utilize edma_write_tcdreg() macro for TCD Access  (Frank Li; 1 file changed, -1/+1)
The TCD structure has undergone modifications, incorporating fields extended to 64 bits. When TCD64 is enabled, the TCD type shifts to 'void *'. Use the edma_write_tcdreg() macro to facilitate TCD register access. Add cpu_to_le32(0) to ensure little-endian compatibility with TCD registers and avoid a build warning. Signed-off-by: Frank Li <Frank.Li@nxp.com> Link: https://lore.kernel.org/r/20231221153528.1588049-5-Frank.Li@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-02-07  dmaengine: fsl-edma: add address for channel mux register in fsl_edma_chan