| field | value | date |
|---|---|---|
| author | James Houghton <jthoughton@google.com> | 2022-10-18 20:01:25 +0000 |
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2022-11-26 09:27:22 +0100 |
| commit | ec667443b2dbc6cdbbac4073e51a17733158ec6a (patch) | |
| tree | 626a19360a404e25015d9eb527597aec67835a9b /mm | |
| parent | cf5bf29e9e8bb34e97e089ef4abb519f3fecbaf3 (diff) | |
hugetlbfs: don't delete error page from pagecache
[ Upstream commit 8625147cafaa9ba74713d682f5185eb62cb2aedb ]
This change is very similar to the change that was made for shmem [1], and
it solves the same problem but for HugeTLBFS instead.

Currently, when poison is found in a HugeTLB page, the page is removed
from the page cache. That means that attempting to map or read that
hugepage in the future will result in a new hugepage being allocated
instead of notifying the user that the page was poisoned. As [1] states,
this is effectively memory corruption.

The fix is to leave the page in the page cache. If the user attempts to
use a poisoned HugeTLB page with a syscall, the syscall will fail with
EIO, the same error code that shmem uses. For attempts to map the page,
the thread will get a BUS_MCEERR_AR SIGBUS.

[1]: commit a76054266661 ("mm: shmem: don't truncate page if memory failure happens")

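The user-visible effect can be checked from userspace. The sketch below is illustrative only: it assumes a hugetlbfs mount at /mnt/huge with 2 MiB pages and a file whose first huge page has already been poisoned (for instance via madvise(MADV_HWPOISON), which requires CAP_SYS_ADMIN); the path, page size, and injection method are assumptions, not part of this patch. With the fix, a read() of the poisoned range fails with EIO instead of quietly returning data from a freshly allocated page, and faulting the page in through a mapping delivers a BUS_MCEERR_AR SIGBUS.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)	/* assumed 2 MiB huge page size */

/* Report action-required memory errors raised when the poisoned page is touched. */
static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
	(void)sig;
	(void)ctx;
	if (info->si_code == BUS_MCEERR_AR)
		fprintf(stderr, "SIGBUS BUS_MCEERR_AR at %p\n", info->si_addr);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = { 0 };
	char buf[4096];
	char *map;
	int fd;

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

	/* Hypothetical hugetlbfs file whose first huge page is already poisoned. */
	fd = open("/mnt/huge/file", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* A syscall that touches the poisoned page now fails with EIO. */
	if (read(fd, buf, sizeof(buf)) < 0 && errno == EIO)
		fprintf(stderr, "read: EIO (page is hwpoisoned)\n");

	/* Mapping the file still succeeds; the access itself raises SIGBUS. */
	map = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	return map[0];	/* expected to deliver BUS_MCEERR_AR instead of fresh data */
}
```
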
Link: https://lkml.kernel.org/r/20221018200125.848471-1-jthoughton@google.com
Signed-off-by: James Houghton <jthoughton@google.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Tested-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'mm')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | mm/hugetlb.c | 4 |
| -rw-r--r-- | mm/memory-failure.c | 5 |

2 files changed, 8 insertions, 1 deletion
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5e414c90f82f..dbb558e71e9e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6021,6 +6021,10 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
 	spin_lock(ptl);
 
+	ret = -EIO;
+	if (PageHWPoison(page))
+		goto out_release_unlock;
+
 	/*
 	 * Recheck the i_size after holding PT lock to make sure not
 	 * to leave any page mapped (as page_mapped()) beyond the end
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e7ac570dda75..4d302f6b02fc 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1079,6 +1079,7 @@ static int me_huge_page(struct page_state *ps, struct page *p)
 	int res;
 	struct page *hpage = compound_head(p);
 	struct address_space *mapping;
+	bool extra_pins = false;
 
 	if (!PageHuge(hpage))
 		return MF_DELAYED;
@@ -1086,6 +1087,8 @@ static int me_huge_page(struct page_state *ps, struct page *p)
 	mapping = page_mapping(hpage);
 	if (mapping) {
 		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
+		/* The page is kept in page cache. */
+		extra_pins = true;
 		unlock_page(hpage);
 	} else {
 		unlock_page(hpage);
@@ -1103,7 +1106,7 @@ static int me_huge_page(struct page_state *ps, struct page *p)
 		}
 	}
 
-	if (has_extra_refcount(ps, p, false))
+	if (has_extra_refcount(ps, p, extra_pins))
 		res = MF_FAILED;
 
 	return res;
```
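The extra_pins flag matters because me_huge_page() now deliberately leaves one reference on the page: the page-cache reference that truncate_error_page() used to drop. The toy program below is only a sketch of that accounting idea; the helper name and the userspace framing are assumptions of this note, not the kernel's has_extra_refcount() implementation. The handler's own reference is always expected, a second one is expected only when the page stays in the page cache, and anything beyond that indicates a leftover pin, which is what gets reported as MF_FAILED.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the extra_pins accounting (an illustration, not the kernel's
 * has_extra_refcount()): the memory-failure handler always holds one
 * reference of its own, and one more is expected when the poisoned huge
 * page is intentionally kept in the page cache.  Any reference beyond the
 * expected count means some other user still pins the page.
 */
static bool still_pinned(int refcount, bool kept_in_page_cache)
{
	int expected = 1 + (kept_in_page_cache ? 1 : 0);

	return refcount > expected;
}

int main(void)
{
	/*
	 * A cleanly handled huge page now has refcount 2 (handler + page
	 * cache).  Without the extra_pins hint that looked like a stray pin
	 * and the page was reported as MF_FAILED; with it, the count matches.
	 */
	printf("cache ref expected:     %s\n", still_pinned(2, true)  ? "MF_FAILED" : "ok");
	printf("cache ref not expected: %s\n", still_pinned(2, false) ? "MF_FAILED" : "ok");
	return 0;
}
```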
