<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/kernel/dma, branch v6.19.12</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<id>https://git.exis.tech/linux.git/</id>
<updated>2026-04-02T11:25:24+00:00</updated>
<entry>
<title>dma: swiotlb: add KMSAN annotations to swiotlb_bounce()</title>
<updated>2026-04-02T11:25:24+00:00</updated>
<author>
<name>Shigeru Yoshida</name>
<email>syoshida@redhat.com</email>
</author>
<published>2026-03-15T08:27:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=1f5a4d74684ac4c76b35b181868f7e415e772d6c'/>
<id>1f5a4d74684ac4c76b35b181868f7e415e772d6c</id>
<content type='text'>
[ Upstream commit 6f770b73d0311a5b099277653199bb6421c4fed2 ]

When a device performs DMA to a bounce buffer, KMSAN is unaware of
the write and does not mark the data as initialized.  When
swiotlb_bounce() later copies the bounce buffer back to the original
buffer, memcpy propagates the uninitialized shadow to the original
buffer, causing false positive uninit-value reports.

Fix this by calling kmsan_unpoison_memory() on the bounce buffer
before copying it back in the DMA_FROM_DEVICE path, so that memcpy
naturally propagates initialized shadow to the destination.
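
A minimal sketch of the shape of the change in swiotlb_bounce()
(variable names illustrative, not the exact upstream diff):

    if (dir == DMA_FROM_DEVICE) {
        /*
         * The device wrote the bounce buffer behind KMSAN's back,
         * so its shadow still says "uninitialized". Unpoison it
         * first; the following memcpy() then propagates
         * initialized shadow to the original buffer.
         */
        kmsan_unpoison_memory(vaddr, sz);
        memcpy(orig_vaddr, vaddr, sz);
    }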

Suggested-by: Alexander Potapenko &lt;glider@google.com&gt;
Link: https://lore.kernel.org/CAG_fn=WUGta-paG1BgsGRoAR+fmuCgh3xo=R3XdzOt_-DqSdHw@mail.gmail.com/
Fixes: 7ade4f10779c ("dma: kmsan: unpoison DMA mappings")
Signed-off-by: Shigeru Yoshida &lt;syoshida@redhat.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20260315082750.2375581-1-syoshida@redhat.com
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-mapping: avoid random addr value print out on error path</title>
<updated>2026-03-04T12:20:53+00:00</updated>
<author>
<name>Jiri Pirko</name>
<email>jiri@nvidia.com</email>
</author>
<published>2026-02-09T15:38:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=2214692fe2c4ad4e33ca78d593efdb0ddfe38b0a'/>
<id>2214692fe2c4ad4e33ca78d593efdb0ddfe38b0a</id>
<content type='text'>
[ Upstream commit 47322c469d4a63ac45b705ca83680671ff71c975 ]

dma_addr is uninitialized in dma_direct_map_phys() when swiotlb is forced
and DMA_ATTR_MMIO is set, which leads to a random value being printed in
the resulting warning. Fix that by just returning DMA_MAPPING_ERROR.
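
A rough sketch of the fixed early exit (simplified; the swiotlb-forced
path and the surrounding logic of dma_direct_map_phys() are elided):

    if (attrs &amp; DMA_ATTR_MMIO) {
        /*
         * swiotlb cannot bounce an MMIO region and dma_addr was
         * never assigned, so bail out cleanly rather than feed
         * the uninitialized value into the overflow warning.
         */
        return DMA_MAPPING_ERROR;
    }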

Fixes: e53d29f957b3 ("dma-mapping: convert dma_direct_*map_page to be phys_addr_t based")
Signed-off-by: Jiri Pirko &lt;jiri@nvidia.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20260209153809.250835-2-jiri@resnulli.us
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma: contiguous: Check return value of dma_contiguous_reserve_area()</title>
<updated>2026-02-02T08:20:32+00:00</updated>
<author>
<name>Shanker Donthineni</name>
<email>sdonthineni@nvidia.com</email>
</author>
<published>2026-01-29T18:13:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=c33efdfcfa6f80e05ce1ee33694c1bad4994cd78'/>
<id>c33efdfcfa6f80e05ce1ee33694c1bad4994cd78</id>
<content type='text'>
Commit 8f1fc1bf1a3d ("dma: contiguous: Reserve default CMA heap")
introduced a bug where dma_heap_cma_register_heap() is called with
a NULL pointer when dma_contiguous_reserve_area() fails to reserve
the CMA area.

When dma_contiguous_reserve_area() fails, dma_contiguous_default_area
remains NULL (initialized as a global variable), but the code doesn't
check the return value and proceeds to call dma_heap_cma_register_heap()
with this NULL pointer.

Later during boot, add_cma_heaps() iterates through the dma_areas[]
array and attempts to register heaps. When it encounters the NULL
pointer stored by the earlier call, it crashes in __add_cma_heap()
-&gt; dma_heap_add() when trying to dereference the NULL CMA pointer.

The crash manifests as:
  Unable to handle kernel NULL pointer dereference at virtual address
  0000000000000038
  ...
  Call trace:
   dma_heap_add+0x40/0x2b0
   __add_cma_heap+0x80/0xe0
   add_cma_heaps+0x64/0xb0
   do_one_initcall+0x60/0x318
   kernel_init_freeable+0x260/0x2f0
   kernel_init+0x2c/0x168
   ret_from_fork+0x10/0x20

Fix this by checking the return value of dma_contiguous_reserve_area()
and only calling dma_heap_cma_register_heap() when the reservation
succeeds.
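
A sketch of the corrected call site (simplified; the arguments are
abbreviated and the dma_heap_cma_register_heap() usage is assumed from
the commit being fixed):

    err = dma_contiguous_reserve_area(size, base, limit,
                                      &amp;dma_contiguous_default_area,
                                      fixed);
    if (err)
        return;    /* reservation failed, the pointer stays NULL */

    /* Register the heap only once a CMA area actually exists. */
    dma_heap_cma_register_heap(dma_contiguous_default_area);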

Fixes: 8f1fc1bf1a3d ("dma: contiguous: Reserve default CMA heap")
Signed-off-by: Shanker Donthineni &lt;sdonthineni@nvidia.com&gt;
Reviewed-by: T.J. Mercier &lt;tjmercier@google.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20260129181317.2429196-1-sdonthineni@nvidia.com
</content>
</entry>
<entry>
<title>dma/pool: distinguish between missing and exhausted atomic pools</title>
<updated>2026-01-29T09:23:45+00:00</updated>
<author>
<name>Sai Sree Kartheek Adivi</name>
<email>s-adivi@ti.com</email>
</author>
<published>2026-01-28T13:35:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=56c430c7f06d838fe3b2077dbbc4cc0bf992312b'/>
<id>56c430c7f06d838fe3b2077dbbc4cc0bf992312b</id>
<content type='text'>
Currently, dma_alloc_from_pool() unconditionally warns and dumps a stack
trace when an allocation fails, with the message "Failed to get suitable
pool".

This conflates two distinct failure modes:
1. Configuration error: No atomic pool is available for the requested
   DMA mask (a fundamental system setup issue)
2. Resource exhaustion: A suitable pool exists but is currently full (a
   recoverable runtime state)

This lack of distinction prevents drivers from using __GFP_NOWARN to
suppress error messages during temporary pressure spikes, such as when
awaiting synchronous reclaim of descriptors.

Refactor the error handling to distinguish these cases, sketched below:
- If no suitable pool is found, keep the unconditional WARN regarding
  the missing pool.
- If a pool was found but is exhausted, respect __GFP_NOWARN and update
  the warning message to explicitly state "DMA pool exhausted".
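
A condensed sketch of the resulting flow in dma_alloc_from_pool()
(structure illustrative; the real code walks candidate pools via
dma_guess_pool()):

    pool = dma_guess_pool(NULL, gfp);
    if (!pool) {
        /* No pool can ever serve this mask: setup bug, always warn. */
        WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev));
        return NULL;
    }

    /* ... allocation attempts against the candidate pools ... */

    /* A suitable pool exists but is currently full: recoverable,
     * so honour __GFP_NOWARN from the caller. */
    if (!(gfp &amp; __GFP_NOWARN))
        WARN(1, "DMA pool exhausted for %s\n", dev_name(dev));
    return NULL;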

Fixes: 9420139f516d ("dma-pool: fix coherent pool allocations for IOMMU mappings")
Signed-off-by: Sai Sree Kartheek Adivi &lt;s-adivi@ti.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20260128133554.3056582-1-s-adivi@ti.com
</content>
</entry>
<entry>
<title>of: reserved_mem: Allow reserved_mem framework detect "cma=" kernel param</title>
<updated>2026-01-28T23:26:36+00:00</updated>
<author>
<name>Oreoluwa Babatunde</name>
<email>oreoluwa.babatunde@oss.qualcomm.com</email>
</author>
<published>2026-01-26T17:13:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=0fd17e5983337231dc655e9ca0095d2ca3f47405'/>
<id>0fd17e5983337231dc655e9ca0095d2ca3f47405</id>
<content type='text'>
When initializing the default cma region, the "cma=" kernel parameter
takes priority over a DT-defined linux,cma-default region. Hence, give
the reserved_mem framework the ability to detect this so that the
DT-defined cma region can skip initialization accordingly.
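
A hedged sketch of how the detection could look on the reserved_mem
side (cma_skip_dt_default_reserved_mem() is the stub mentioned below;
the surrounding code and error value are illustrative):

    /*
     * "cma=" on the command line wins over linux,cma-default in
     * DT, so skip the DT region rather than set up both.
     */
    if (of_get_flat_dt_prop(node, "linux,cma-default", NULL) &amp;&amp;
        cma_skip_dt_default_reserved_mem())
        return -EBUSY;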

Signed-off-by: Oreoluwa Babatunde &lt;oreoluwa.babatunde@oss.qualcomm.com&gt;
Tested-by: Joy Zou &lt;joy.zou@nxp.com&gt;
Acked-by: Rob Herring (Arm) &lt;robh@kernel.org&gt;
Fixes: 8a6e02d0c00e ("of: reserved_mem: Restructure how the reserved memory regions are processed")
Fixes: 2c223f7239f3 ("of: reserved_mem: Restructure call site for dma_contiguous_early_fixup()")
Link: https://lore.kernel.org/r/20251210002027.1171519-1-oreoluwa.babatunde@oss.qualcomm.com
[mszyprow: rebased onto v6.19-rc1, added fixes tags, added a stub for
 cma_skip_dt_default_reserved_mem() if no CONFIG_DMA_CMA is set]
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
</content>
</entry>
<entry>
<title>dma/pool: Avoid allocating redundant pools</title>
<updated>2026-01-14T10:00:00+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2026-01-12T15:46:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=c6ccd098807483762ccd726e1498bac5a71d0005'/>
<id>c6ccd098807483762ccd726e1498bac5a71d0005</id>
<content type='text'>
On smaller systems, e.g. embedded arm64, it is common for all memory
to end up in ZONE_DMA32 or even ZONE_DMA. In such cases it is redundant
to allocate a nominal pool for an empty higher zone, since that pool just
ends up coming from a lower zone that should already have its own pool
anyway.
We already have logic to skip allocating a ZONE_DMA pool when that is
empty, so generalise that to save memory in the case of other zones too.
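
A sketch of the generalised guard (illustrative; the existing ZONE_DMA
case already keys off has_managed_dma(), and the same emptiness test
extends to the other zones):

    /* Skip a ZONE_DMA pool outright if the zone holds no pages... */
    if (has_managed_dma())
        atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
                                                 GFP_DMA);
    /* ...and apply the same per-zone emptiness check before
     * creating the ZONE_DMA32 pool, instead of special-casing
     * only ZONE_DMA. */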

Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Tested-by: Vladimir Kondratiev &lt;vladimir.kondratiev@mobileye.com&gt;
Reviewed-by: Baoquan He &lt;bhe@redhat.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/8ab8d8a620dee0109f33f5cb63d6bfeed35aac37.1768230104.git.robin.murphy@arm.com
</content>
</entry>
<entry>
<title>dma/pool: Improve pool lookup</title>
<updated>2026-01-14T10:00:00+00:00</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2026-01-12T15:46:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=b31ac41b59b6b6f1f6d426e2088e5c391bf89bf3'/>
<id>b31ac41b59b6b6f1f6d426e2088e5c391bf89bf3</id>
<content type='text'>
If CONFIG_ZONE_DMA32 is enabled, but we have not allocated the
corresponding atomic_pool_dma32, dma_guess_pool() may return that NULL
pointer and fail a GFP_DMA32 allocation without trying to fall
back to other pools which may exist. Furthermore, if no GFP_DMA pool
exists, it is preferable to try GFP_DMA32 rather than immediately fall
back to GFP_KERNEL with even less chance of success. Improve matters
by encoding an explicit order of pool preference for each flag.
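
One way to picture the explicit preference order (an illustrative
helper, not the upstream data structure; the pool variable names are
the real ones from kernel/dma/pool.c):

    static struct gen_pool *pool_for(gfp_t gfp)
    {
        /* Prefer the matching pool, but fall back down the list of
         * pools that actually exist; the caller still validates the
         * returned memory against the device's DMA mask. */
        if ((gfp &amp; GFP_DMA) &amp;&amp; atomic_pool_dma)
            return atomic_pool_dma;
        if ((gfp &amp; (GFP_DMA | GFP_DMA32)) &amp;&amp; atomic_pool_dma32)
            return atomic_pool_dma32;
        return atomic_pool_kernel;
    }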

Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Tested-by: Vladimir Kondratiev &lt;vladimir.kondratiev@mobileye.com&gt;
Reviewed-by: Baoquan He &lt;bhe@redhat.com&gt;
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/c846b1a2f43295cac926c7af2ce907f62baec518.1768230104.git.robin.murphy@arm.com
</content>
</entry>
<entry>
<title>Merge tag 'dma-mapping-6.19-2025-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux</title>
<updated>2025-12-10T23:14:23+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-12-10T23:14:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=840b22edd5adf9dda46f4e701815eadce8f2f3eb'/>
<id>840b22edd5adf9dda46f4e701815eadce8f2f3eb</id>
<content type='text'>
Pull dma-mapping fixes from Marek Szyprowski:

 - last-minute fix for a missing parenthesis in recently merged code
   (Hans de Goede)

 - removal of excessive, non-fatal warnings (Dave Kleikamp)

* tag 'dma-mapping-6.19-2025-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
  dma-mapping: Fix DMA_BIT_MASK() macro being broken
  dma/pool: eliminate alloc_pages warning in atomic_pool_expand
</content>
</entry>
<entry>
<title>dma/pool: eliminate alloc_pages warning in atomic_pool_expand</title>
<updated>2025-12-08T08:40:57+00:00</updated>
<author>
<name>Dave Kleikamp</name>
<email>dave.kleikamp@oracle.com</email>
</author>
<published>2025-12-02T15:28:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=463d439becb81383f3a5a5d840800131f265a09c'/>
<id>463d439becb81383f3a5a5d840800131f265a09c</id>
<content type='text'>
atomic_pool_expand iteratively tries the allocation while decrementing
the page order. There is no need to issue a warning if an attempted
allocation fails.
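
The shape of the change, assuming the existing retry loop of
atomic_pool_expand() (simplified):

    do {
        /* A failed attempt is expected here; we just retry with a
         * smaller order, so suppress the allocation warning. */
        page = alloc_pages(gfp | __GFP_NOWARN, order);
    } while (!page &amp;&amp; order-- &gt; 0);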

Signed-off-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Fixes: d7e673ec2c8e ("dma-pool: Only allocate from CMA when in same memory zone")
[mszyprow: fixed typo]
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Link: https://lore.kernel.org/r/20251202152810.142370-1-dave.kleikamp@oracle.com
</content>
</entry>
<entry>
<title>Merge tag 'dma-mapping-6.19-2025-12-05' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux</title>
<updated>2025-12-06T17:25:05+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-12-06T17:25:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=a7405aa92feec2598cedc1b6c651beb1848240fe'/>
<id>a7405aa92feec2598cedc1b6c651beb1848240fe</id>
<content type='text'>
Pull dma-mapping updates from Marek Szyprowski:

 - More DMA mapping API refactoring to physical addresses as the primary
   interface instead of page+offset parameters.

   This time the dma_map_ops callbacks are converted to physical
   addresses, which in turn also simplifies some architecture-specific
   code (Leon Romanovsky and Jason Gunthorpe)

 - Clarify that dma_map_benchmark is not a kernel self-test, but a
   standalone tool (Qinxin Xia)

* tag 'dma-mapping-6.19-2025-12-05' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
  dma-mapping: remove unused map_page callback
  xen: swiotlb: Convert mapping routine to rely on physical address
  x86: Use physical address for DMA mapping
  sparc: Use physical address DMA mapping
  powerpc: Convert to physical address DMA mapping
  parisc: Convert DMA map_page to map_phys interface
  MIPS/jazzdma: Provide physical address directly
  alpha: Convert mapping routine to rely on physical address
  dma-mapping: remove unused mapping resource callbacks
  xen: swiotlb: Switch to physical address mapping callbacks
  ARM: dma-mapping: Switch to physical address mapping callbacks
  ARM: dma-mapping: Reduce struct page exposure in arch_sync_dma*()
  dma-mapping: convert dummy ops to physical address mapping
  dma-mapping: prepare dma_map_ops to conversion to physical address
  tools/dma: move dma_map_benchmark from selftests to tools/dma
</content>
</entry>
</feed>
