| field | value | date |
|---|---|---|
| author | Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> | 2015-02-11 15:27:34 -0800 |
| committer | Ben Hutchings <ben@decadent.org.uk> | 2018-10-03 04:10:02 +0100 |
| commit | 71875653e47c6f128e76105566043f949f896460 (patch) | |
| tree | c034d6c2e9c322b4a4eb7f985e494c088ee11a2e /include/linux | |
| parent | f941fa5197ec5bce8c08978444a8f221f52e975a (diff) | |
mm/pagewalk: remove pgd_entry() and pud_entry()
commit 0b1fbfe50006c41014cc25660c0e735d21c34939 upstream.
Currently no user of the page table walker sets ->pgd_entry() or
->pud_entry(), so checking for their existence on every loop iteration
just wastes CPU cycles. Remove them to reduce the overhead.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[bwh: Backported to 3.16 as dependency of L1TF mitigation]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/mm.h | 6 |
1 file changed, 0 insertions(+), 6 deletions(-)
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8081bdc5b4fb..1f83f017da96 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1114,8 +1114,6 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 
 /**
  * mm_walk - callbacks for walk_page_range
- * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
- * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
  * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
  *	       this handler is required to be able to handle
  *	       pmd_trans_huge() pmds.  They may simply choose to
@@ -1129,10 +1127,6 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
  * (see walk_page_range for more details)
  */
 struct mm_walk {
-	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
-			 unsigned long next, struct mm_walk *walk);
-	int (*pud_entry)(pud_t *pud, unsigned long addr,
-			 unsigned long next, struct mm_walk *walk);
 	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
 			 unsigned long next, struct mm_walk *walk);
 	int (*pte_entry)(pte_t *pte, unsigned long addr,
```
