author    Tejun Heo <tj@kernel.org>  2012-04-27 08:42:53 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2012-05-21 10:46:19 -0700
commit    c93fc7ebf9357371b32e2cd3c4b44a3693296136 (patch)
tree      455df55b95da1b73ccc20b8a8fda4751bf53fd53
parent    ec4f4e71a06eb0028172a2cdf2f9930379dfb1b4 (diff)
percpu: pcpu_embed_first_chunk() should free unused parts after all allocs are complete
commit 42b64281453249dac52861f9b97d18552a7ec62b upstream.

pcpu_embed_first_chunk() allocates memory for each node, copies percpu
data and frees unused portions of it before proceeding to the next
group. This assumes that allocations for different nodes don't overlap;
however, depending on memory topology, the bootmem allocator may end up
allocating memory from a different node than the requested one, which
may overlap with the portion freed from one of the previous percpu
areas. This leads to percpu groups for different nodes overlapping,
which is a serious bug.

This patch separates out copy & partial free from the allocation loop
so that all allocations are complete before partial frees happen.

This also fixes overlapping frees which could happen on the allocation
failure path - the out_free_areas path frees whole groups, but the
groups could have portions freed at that point.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "Pavel V. Panteleev" <pp_84@mail.ru>
Tested-by: "Pavel V. Panteleev" <pp_84@mail.ru>
LKML-Reference: <E1SNhwY-0007ui-V7.pp_84-mail-ru@f220.mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
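For illustration only, the sketch below is a minimal userspace C program showing the ordering the patch enforces. The helpers alloc_group(), copy_group_data() and free_unused() are hypothetical stand-ins for the bootmem allocation, the percpu data copy and the partial free of each area's unused tail; this is not the kernel implementation. The point is simply that every group's area is allocated before anything is freed, so a later allocation can never be satisfied from memory just released out of an earlier group.

/*
 * Minimal sketch of the ordering change, under the assumption that the
 * helpers below stand in for bootmem allocation, the percpu data copy
 * and the partial free of the unused tail.  Not the kernel code.
 */
#include <stdlib.h>
#include <string.h>

#define NR_GROUPS   4
#define GROUP_SIZE  4096
#define USED_SIZE   1024

static void *alloc_group(void)          /* stand-in for the per-group alloc */
{
	return malloc(GROUP_SIZE);
}

static void copy_group_data(void *area) /* stand-in for the percpu copy */
{
	memset(area, 0, USED_SIZE);
}

static void free_unused(void *area)     /* stand-in for the partial free; a
					   no-op here, the kernel returns the
					   unused tail to the allocator */
{
	(void)area;
}

int main(void)
{
	void *areas[NR_GROUPS] = { NULL };
	int group;

	/*
	 * Pass 1: allocate every group before freeing anything, so no later
	 * allocation can be handed memory released from an earlier group's
	 * unused tail and end up overlapping it.
	 */
	for (group = 0; group < NR_GROUPS; group++) {
		areas[group] = alloc_group();
		if (!areas[group])
			goto out_free;  /* groups are still whole: safe to free */
	}

	/*
	 * Pass 2: only after all allocations are complete, copy the data and
	 * give back the unused parts.
	 */
	for (group = 0; group < NR_GROUPS; group++) {
		copy_group_data(areas[group]);
		free_unused(areas[group]);
	}
	return 0;

out_free:
	for (group = 0; group < NR_GROUPS; group++)
		free(areas[group]);
	return 1;
}

With this ordering, the failure path also stays correct: whole groups are released while no group has yet had portions freed, which is the overlapping-free problem the commit message mentions for out_free_areas.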
-rw-r--r--  mm/percpu.c  10
1 files changed, 10 insertions, 0 deletions
diff --git a/mm/percpu.c b/mm/percpu.c
index f47af9123af7..797569323629 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1650,6 +1650,16 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 
 		areas[group] = ptr;
 		base = min(ptr, base);
+	}
+
+	/*
+	 * Copy data and free unused parts. This should happen after all
+	 * allocations are complete; otherwise, we may end up with
+	 * overlapping groups.
+	 */
+	for (group = 0; group < ai->nr_groups; group++) {
+		struct pcpu_group_info *gi = &ai->groups[group];
+		void *ptr = areas[group];
 
 		for (i = 0; i < gi->nr_units; i++, ptr += ai->unit_size) {
 			if (gi->cpu_map[i] == NR_CPUS) {