From 7592dbfaf3efcfa36d5652e5713298776c793d40 Mon Sep 17 00:00:00 2001
From: Mikhail Zaslonko
Date: Fri, 21 Dec 2018 14:30:46 -0800
Subject: mm, memory_hotplug: initialize struct pages for the full memory
 section

commit 2830bf6f05fb3e05bc4743274b806c821807a684 upstream.

If the memory end is not aligned with the sparse memory section
boundary, the mapping of such a section is only partly initialized.
This may lead to a VM_BUG_ON due to uninitialized struct page access
from the is_mem_section_removable() or test_pages_in_a_zone()
functions, triggered by memory_hotplug sysfs handlers.

Here are the panic examples:
 CONFIG_DEBUG_VM=y
 CONFIG_DEBUG_VM_PGFLAGS=y

 kernel parameter mem=2050M
 --------------------------
 page:000003d082008000 is uninitialized and poisoned
 page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
 Call Trace:
 ( test_pages_in_a_zone+0xde/0x160)
   show_valid_zones+0x5c/0x190
   dev_attr_show+0x34/0x70
   sysfs_kf_seq_show+0xc8/0x148
   seq_read+0x204/0x480
   __vfs_read+0x32/0x178
   vfs_read+0x82/0x138
   ksys_read+0x5a/0xb0
   system_call+0xdc/0x2d8
 Last Breaking-Event-Address:
   test_pages_in_a_zone+0xde/0x160
 Kernel panic - not syncing: Fatal exception: panic_on_oops

 kernel parameter mem=3075M
 --------------------------
 page:000003d08300c000 is uninitialized and poisoned
 page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
 Call Trace:
 ( is_mem_section_removable+0xb4/0x190)
   show_mem_removable+0x9a/0xd8
   dev_attr_show+0x34/0x70
   sysfs_kf_seq_show+0xc8/0x148
   seq_read+0x204/0x480
   __vfs_read+0x32/0x178
   vfs_read+0x82/0x138
   ksys_read+0x5a/0xb0
   system_call+0xdc/0x2d8
 Last Breaking-Event-Address:
   is_mem_section_removable+0xb4/0x190
 Kernel panic - not syncing: Fatal exception: panic_on_oops

Fix the problem by initializing the last memory section of each zone in
memmap_init_zone() to the very end, even if it goes beyond the zone end.

Michal said:

: This has always been a problem AFAIU.  It just went unnoticed because we
: have zeroed memmaps during allocation before f7f99100d8d9 ("mm: stop
: zeroing memory during allocation in vmemmap") and so the above test
: would simply skip these ranges as belonging to zone 0 or provided
: garbage.
:
: So I guess we do care for post f7f99100d8d9 kernels mostly and
: therefore Fixes: f7f99100d8d9 ("mm: stop zeroing memory during
: allocation in vmemmap")

Link: http://lkml.kernel.org/r/20181212172712.34019-2-zaslonko@linux.ibm.com
Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap")
Signed-off-by: Mikhail Zaslonko
Reviewed-by: Gerald Schaefer
Suggested-by: Michal Hocko
Acked-by: Michal Hocko
Reported-by: Mikhail Gavrilov
Tested-by: Mikhail Gavrilov
Cc: Dave Hansen
Cc: Alexander Duyck
Cc: Pasha Tatashin
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/page_alloc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a62b2421cdf..fb55b81ff9df 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5538,6 +5538,18 @@ not_early:
 			cond_resched();
 		}
 	}
+#ifdef CONFIG_SPARSEMEM
+	/*
+	 * If the zone does not span the rest of the section then
+	 * we should at least initialize those pages. Otherwise we
+	 * could blow up on a poisoned page in some paths which depend
+	 * on full sections being initialized (e.g. memory hotplug).
+	 */
+	while (end_pfn % PAGES_PER_SECTION) {
+		__init_single_page(pfn_to_page(end_pfn), end_pfn, zone, nid);
+		end_pfn++;
+	}
+#endif
 }
 
 static void __meminit zone_init_free_lists(struct zone *zone)
--
cgit v1.2.3
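For illustration, the alignment arithmetic behind the fix can be sketched outside the kernel. The following is a minimal standalone C sketch, not kernel code: PAGE_SHIFT and PAGES_PER_SECTION are assumed values (4 KiB pages, 256 MiB sparse sections, i.e. 65536 pages per section, roughly matching the s390 setup in the report), and end_pfn is assumed to be the pfn count implied by mem=2050M. It mirrors the rounding loop added by the patch and merely counts the trailing struct pages of the last section that would otherwise stay uninitialized.

/*
 * Standalone sketch (not kernel code).  The constants are assumptions
 * for illustration only.
 */
#include <stdio.h>

#define PAGE_SHIFT        12            /* assumed: 4 KiB pages */
#define PAGES_PER_SECTION 65536UL       /* assumed: 256 MiB sections */

int main(void)
{
	unsigned long mem_mib = 2050;   /* matches the mem=2050M example above */
	unsigned long end_pfn = (mem_mib << 20) >> PAGE_SHIFT;  /* assumed zone end pfn */
	unsigned long uninitialized = 0;

	/*
	 * Mirror of the loop added by the patch: advance end_pfn to the
	 * next section boundary.  The kernel calls __init_single_page()
	 * for each of these pfns; here we only count them.
	 */
	while (end_pfn % PAGES_PER_SECTION) {
		end_pfn++;
		uninitialized++;
	}

	printf("struct pages beyond the zone end in the last section: %lu\n",
	       uninitialized);
	return 0;
}

Under these assumptions the zone ends 512 pages into its last section, so the loop runs 65024 times, which is how many trailing struct pages the patch now initializes up to the section boundary.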