| author | Alex Zhang <zhangalex@google.com> | 2020-08-06 23:22:24 -0700 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2024-11-17 14:58:53 +0100 |
| commit | 00106a045d3e9651b2cf1ffd45894bde341da16b (patch) | |
| tree | ddbc7210b167c3eb5d412a82706c8f4e8c9ee1aa /mm | |
| parent | e681c83006b56a43e6a88375a41e438c6bcfd6a5 (diff) | |
mm/memory.c: make remap_pfn_range() reject unaligned addr
commit 0c4123e3fb82d6014d0a70b52eb38153f658541c upstream.
This function implicitly assumes that the addr passed in is page aligned.
A non-page-aligned addr could ultimately cause a kernel bug in
remap_pte_range, as the exit condition in its loop may never be
satisfied. This patch documents the requirement and adds an explicit
check for it.
Signed-off-by: Alex Zhang <zhangalex@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200617233512.177519-1-zhangalex@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Harshvardhan Jha <harshvardhan.j.jha@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
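The failure mode described above comes from remap_pte_range()'s loop, whose exit test is effectively "stop when addr == end", with addr advancing one PAGE_SIZE per iteration; the upper levels of the page-table walk clamp end to a table boundary, so an unaligned addr steps past an aligned end without ever equalling it. The userspace sketch below only models that termination logic: the addresses and the iteration cap are invented for the demo, and it is not the kernel code itself.

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/* Hypothetical values: an unaligned start address and a page-aligned
	 * end, as produced when the walk clamps "end" to a table boundary. */
	unsigned long addr = 0x1200UL;   /* not page aligned */
	unsigned long end  = 0x4000UL;   /* page aligned */
	int steps = 0;

	/* Same shape as the pte loop's exit test: keep going while addr != end.
	 * The step cap exists only so this demo halts. */
	do {
		addr += PAGE_SIZE;
		steps++;
	} while (addr != end && steps < 16);

	printf("after %d steps: addr=0x%lx, end=0x%lx -> %s\n",
	       steps, addr, end,
	       addr == end ? "loop would terminate" : "loop would never terminate");
	return 0;
}
```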
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/memory.c | 5 |
1 file changed, 4 insertions(+), 1 deletion(-)
```diff
diff --git a/mm/memory.c b/mm/memory.c
index 50503743724c..1d009d3d87b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1920,7 +1920,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 /**
  * remap_pfn_range - remap kernel memory to userspace
  * @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
  * @pfn: page frame number of kernel physical memory address
  * @size: size of mapping area
  * @prot: page protection flags for this mapping
@@ -1939,6 +1939,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long remap_pfn = pfn;
 	int err;
 
+	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
+		return -EINVAL;
+
 	/*
 	 * Physically remapped pages are special. Tell the
 	 * rest of the world about it:
```
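For context on how callers normally satisfy the new check: a typical path into remap_pfn_range() is a driver's mmap handler passing vma->vm_start, which the mmap() syscall guarantees to be page aligned. The sketch below is a hypothetical handler; mydrv_mmap and mydrv_phys_base are made-up names, while remap_pfn_range() and the vm_area_struct fields are the real kernel API. A caller that instead passed a non-page-aligned address now gets -EINVAL (plus a one-time warning) rather than corrupting the page-table walk.

```c
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical physical base of the device region being exported. */
static unsigned long mydrv_phys_base;

/*
 * Hypothetical mmap handler (illustration only). vma->vm_start comes from
 * the mmap() syscall and is always page aligned, so the new
 * WARN_ON_ONCE(!PAGE_ALIGNED(addr)) check in remap_pfn_range() passes.
 */
static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	return remap_pfn_range(vma, vma->vm_start,
			       (mydrv_phys_base >> PAGE_SHIFT) + vma->vm_pgoff,
			       size, vma->vm_page_prot);
}
```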
