path: root/arch/powerpc
2014-06-05  Merge branch 'linux-3.10.40' into rel-21  (Ishan Mittal)

Bug 200004122

Conflicts:
    drivers/cpufreq/cpufreq.c
    drivers/regulator/core.c
    sound/soc/codecs/max98090.c

Change-Id: I9418a05ad5c56b2e902249218bac2fa594d99f56
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-05-13  powerpc/tm: Disable IRQ in tm_recheckpoint  (Michael Neuling)

commit e6b8fd028b584ffca7a7255b8971f254932c9fce upstream.

We can't take an IRQ when we're about to do a trechkpt as our GPR state is set
to user GPR values. We've hit this when running some IBM Java stress tests in
the lab, resulting in the following dump:

    cpu 0x3f: Vector: 700 (Program Check) at [c000000007eb3d40]
        pc: c000000000050074: restore_gprs+0xc0/0x148
        lr: 00000000b52a8184
        sp: ac57d360
       msr: 8000000100201030
    current = 0xc00000002c500000
    paca    = 0xc000000007dbfc00
     softe: 0   irq_happened: 0x00
       pid = 34535, comm = Pooled Thread #

    R00 = 00000000b52a8184   R16 = 00000000b3e48fda
    R01 = 00000000ac57d360   R17 = 00000000ade79bd8
    R02 = 00000000ac586930   R18 = 000000000fac9bcc
    R03 = 00000000ade60000   R19 = 00000000ac57f930
    R04 = 00000000f6624918   R20 = 00000000ade79be8
    R05 = 00000000f663f238   R21 = 00000000ac218a54
    R06 = 0000000000000002   R22 = 000000000f956280
    R07 = 0000000000000008   R23 = 000000000000007e
    R08 = 000000000000000a   R24 = 000000000000000c
    R09 = 00000000b6e69160   R25 = 00000000b424cf00
    R10 = 0000000000000181   R26 = 00000000f66256d4
    R11 = 000000000f365ec0   R27 = 00000000b6fdcdd0
    R12 = 00000000f66400f0   R28 = 0000000000000001
    R13 = 00000000ada71900   R29 = 00000000ade5a300
    R14 = 00000000ac2185a8   R30 = 00000000f663f238
    R15 = 0000000000000004   R31 = 00000000f6624918

    pc  = c000000000050074   restore_gprs+0xc0/0x148
    cfar= c00000000004fe28   dont_restore_vec+0x1c/0x1a4
    lr  = 00000000b52a8184
    msr = 8000000100201030
    cr  = 24804888
    ctr = 0000000000000000
    xer = 0000000000000000
    trap = 700

This moves tm_recheckpoint to a C function and moves tm_restore_sprs into that
function. It then adds IRQ disabling over the trechkpt critical section. It
also sets the TEXASR FS bit in the signals code to ensure the transaction is
marked as failed, now that we explicitly write the TM SPRs in tm_recheckpoint.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
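The resulting C function takes roughly the following shape (a sketch only: the
exact signature and helper names are assumed here, not quoted from the patch):

    /* Sketch: recheckpoint with IRQs hard-disabled over the critical
     * section, since the GPRs hold user values once trechkpt begins. */
    void tm_recheckpoint(struct thread_struct *thread, unsigned long orig_msr)
    {
            unsigned long flags;

            local_irq_save(flags);
            hard_irq_disable();

            /* Restore the TM SPRs explicitly before recheckpointing. */
            tm_restore_sprs(thread);

            __tm_recheckpoint(thread, orig_msr);

            local_irq_restore(flags);
    }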
2014-05-13  powerpc/compat: 32-bit little endian machine name is ppcle, not ppc  (Anton Blanchard)

commit 422b9b9684db3c511e65c91842275c43f5910ae9 upstream.

I noticed this when testing setarch. No, we don't magically support a big
endian userspace on a little endian kernel.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-04-30  arch: Introduce smp_load_acquire(), smp_store_release()  (Peter Zijlstra)

A number of situations currently require the heavyweight smp_mb(), even though
there is no need to order prior stores against later loads. Many architectures
have much cheaper ways to handle these situations, but the Linux kernel
currently has no portable way to make use of them.

This commit therefore supplies smp_load_acquire() and smp_store_release() to
remedy the situation. The new smp_load_acquire() primitive orders the
specified load against any subsequent reads or writes, while the new
smp_store_release() primitive orders the specified store against any prior
reads or writes. These primitives allow array-based circular FIFOs to be
implemented without an smp_mb(), and also allow a theoretical hole in
rcu_assign_pointer() to be closed at no additional expense on most
architectures.

In addition, the RCU experience of transitioning from explicit
smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and
rcu_assign_pointer(), respectively, resulted in substantial improvements in
readability. It therefore seems likely that replacing other explicit barriers
with smp_load_acquire() and smp_store_release() will provide similar benefits.
It appears that roughly half of the explicit barriers in core kernel code
might be so replaced.

[Changelog by PaulMck]

(cherry picked from commit 47933ad41a86a4a9b50bed7c9b9bd2ba242aac63)
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
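A minimal sketch of the circular-FIFO use case (the fifo structure and index
handling here are illustrative, not from the patch):

    /* Producer: make the payload visible before publishing the index. */
    fifo->buf[head & mask] = item;
    smp_store_release(&fifo->head, head + 1);   /* orders the store above */

    /* Consumer: the acquire orders the index load before the data load. */
    unsigned int h = smp_load_acquire(&fifo->head);
    if (tail != h)
            item = fifo->buf[tail & mask];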
2014-04-30  lib: Add missing arch generic-y entries for asm-generic/hash.h  (David S. Miller)

Conflicts:
    arch/avr32/include/asm/Kbuild
    arch/c6x/include/asm/Kbuild
    arch/cris/include/asm/Kbuild
    arch/ia64/include/asm/Kbuild
    arch/mips/include/asm/Kbuild
    arch/openrisc/include/asm/Kbuild
    arch/powerpc/include/asm/Kbuild
    arch/score/include/asm/Kbuild

Change-Id: I6800e3f03dbc40e94de1495459dec4b29df4474e
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit e3fec2f74f7f90d2149a24243a4d040caabe6f30)
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-04-30  kernel: remove CONFIG_USE_GENERIC_SMP_HELPERS  (Christoph Hellwig)

We've switched over every architecture that supports SMP to it, so remove the
now-useless config variable.

Conflicts:
    arch/arm/Kconfig
    block/blk-mq.c

Change-Id: Ic19c3ac07a38a1636d6aa2fed5e55a58833f9b2c
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0a06ff068f1255bcd7965ab07bc0f4adc3eb639a)
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-04-30  constify copy_siginfo_to_user{,32}()  (Al Viro)

Conflicts:
    arch/x86/ia32/ia32_signal.c

Change-Id: I701cb58fffb756a2d29e7c828728d149c35e6c6a
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
(cherry picked from commit ce3959604878c1c693979ec552069dc8bdb5ccde)
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-04-30  of: remove early_init_dt_setup_initrd_arch  (Rob Herring)

All arches do essentially the same thing now for
early_init_dt_setup_initrd_arch, so it can now be removed.

Conflicts:
    arch/arm/mm/init.c
    arch/c6x/kernel/devicetree.c
    arch/powerpc/kernel/prom.c

(cherry picked from commit 29eb45a9ab4839a1e9cef2bcf369b918c9c4fcad)
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
Change-Id: I84b59cec16fa7e96fa8ff3de04aa959f7039a7a9
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-04-30  sched, arch: Create asm/preempt.h  (Peter Zijlstra)

In order to prepare for per-arch implementations of preempt_count, move the
required bits into an asm-generic header and use this for all archs.

(cherry picked from commit a787870924dbd6f321661e06d4ec1c7a408c9ccf)

Conflicts:
    arch/c6x/include/asm/Kbuild
    arch/cris/include/asm/Kbuild
    arch/h8300/include/asm/Kbuild
    arch/ia64/include/asm/Kbuild
    arch/mips/include/asm/Kbuild
    arch/openrisc/include/asm/Kbuild
    arch/powerpc/include/asm/Kbuild
    arch/score/include/asm/Kbuild
    include/linux/preempt.h

Change-Id: I544914d3c23cc50da658296a34f9f2796854e259
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-h5j0c1r3e3fk015m30h8f1zx@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
2014-04-30  arch: mm: pass userspace fault flag to generic fault handler  (Johannes Weiner)

Unlike global OOM handling, memory cgroup code will invoke the OOM killer in
any OOM situation because it has no way of telling faults occurring in kernel
context - which could be handled more gracefully - from user-triggered faults.

Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to generic code so that memcg OOM
handling can be improved.

(cherry picked from commit 759496ba6407c6994d6a5ce3a5e74937d7816208)

Conflicts:
    arch/arc/mm/fault.c

Change-Id: I6ddf37c0feae69fcda0c2db76d2b10ca2a11c619
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
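The per-architecture change is essentially one line in each fault handler;
sketched here against the usual 3.10-era handler shape:

    unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

    if (user_mode(regs))            /* fault came from userspace */
            flags |= FAULT_FLAG_USER;

    fault = handle_mm_fault(mm, vma, address, flags);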
2014-04-30  of: consolidate definition of early_init_dt_alloc_memory_arch()  (Grant Likely)

Most architectures use the same implementation. Collapse the common ones into
a single weak function that can be overridden.

(cherry picked from commit a1727da599ad030ccaf4073473fd235c8ee28219)
Change-Id: I5cfcac67b98407f8e4c4dcb89829f2a8e0d1b88b
Signed-off-by: Grant Likely <grant.likely@linaro.org>
2014-04-30  of: Specify initrd location using 64-bit  (Santosh Shilimkar)

On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit. These systems need the ability to specify the
initrd location using 64-bit numbers.

This patch globally modifies the early_init_dt_setup_initrd_arch() function
to use 64-bit numbers instead of the current unsigned long.

There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with the rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should
not be tied to the kernel you are booting."

More details on the discussion can be found here:
    https://lkml.org/lkml/2013/6/20/690
    https://lkml.org/lkml/2012/9/13/544

(cherry picked from commit 374d5c9964c10373ba39bbe934f4262eb87d7114)
Change-Id: Iab36378e1de4e6c2cb07a3b88aeb5ff4afbe535b
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
2014-03-23  powerpc: Align p_dyn, p_rela and p_st symbols  (Anton Blanchard)

commit a5b2cf5b1af424ee3dd9e3ce6d5cea18cb927e67 upstream.

The 64bit relocation code places a few symbols in the text segment. These
symbols are only 4 byte aligned where they need to be 8 byte aligned. Add an
explicit alignment.

Signed-off-by: Anton Blanchard <anton@samba.org>
Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-03-13  Merge branch 'linux-3.10.33' into dev-kernel-3.10  (Deepak Nibade)

Bug 1456092

Change-Id: I3021247ec68a3c2dddd9e98cde13d70a45191d53
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2014-03-06  powerpc/crashdump: Fix page frame number check in copy_oldmem_page  (Laurent Dufour)

commit f5295bd8ea8a65dc5eac608b151386314cb978f1 upstream.

In copy_oldmem_page, the current check using max_pfn and min_low_pfn to
decide if the page is backed or not is not valid when the memory layout is
not contiguous.

This happens when running as a QEMU/KVM guest, where RTAS is mapped higher in
the memory. In that case max_pfn points to the end of RTAS, and a hole
between the end of the kdump kernel and RTAS is not backed by PTEs. As a
consequence, the kdump kernel crashes in copy_oldmem_page when directly
accessing the pages in that hole.

This fix relies on the memblock service memblock_is_region_memory to check
whether the page being read is part of the directly accessible memory.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Tested-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
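The shape of the fix, approximately (copy_oldmem_vaddr() is a helper assumed
from the surrounding file, not quoted from the patch):

    if (memblock_is_region_memory(pfn << PAGE_SHIFT, csize)) {
            /* Backed by the direct map: safe to read through __va(). */
            vaddr = __va(pfn << PAGE_SHIFT);
            csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
    } else {
            /* Hole in the layout: map the page explicitly instead. */
            vaddr = __ioremap(pfn << PAGE_SHIFT, PAGE_SIZE, 0);
            csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
            iounmap(vaddr);
    }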
2014-03-06  powerpc/le: Ensure that the 'stop-self' RTAS token is handled correctly  (Tony Breeds)

commit 41dd03a94c7d408d2ef32530545097f7d1befe5c upstream.

Currently we're storing a host-endian RTAS token in
rtas_stop_self_args.token. We then pass that directly to rtas. This is fine
on big endian, but on little endian the token is not what we expect. This
will typically result in hitting:

    panic("Alas, I survived.\n");

To fix this we always use the stop-self token in host order and always
convert it to be32 before passing it to rtas.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
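In outline, the token stays in host order and is byte-swapped only at the RTAS
boundary; a sketch under assumed 3.10-era struct rtas_args field names (the
rtas_stop_self_token variable is illustrative):

    /* The token is kept in host endianness everywhere else. */
    args.token = cpu_to_be32(rtas_stop_self_token);
    args.nargs = cpu_to_be32(0);
    args.nret  = cpu_to_be32(1);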
2014-02-06  powerpc: Make sure "cache" directory is removed when offlining cpu  (Paul Mackerras)

commit 91b973f90c1220d71923e7efe1e61f5329806380 upstream.

The code in remove_cache_dir() is supposed to remove the "cache"
subdirectory from the sysfs directory for a CPU when that CPU is being
offlined. It tries to do this by calling kobject_put() on the kobject for
the subdirectory. However, the subdirectory only gets removed once the last
reference goes away, and the reference being put here may well not be the
last reference. That means that the "cache" subdirectory may still exist
when the offlining operation has finished.

If the same CPU subsequently gets onlined, the code tries to add a new
"cache" subdirectory. If the old subdirectory has not yet been removed, we
get a WARN_ON in the sysfs code, with stack trace, and an error message
printed on the console. Further, we ultimately end up with an online cpu
with no "cache" subdirectory.

This fixes it by doing an explicit kobject_del() at the point where we want
the subdirectory to go away. kobject_del() removes the sysfs directory even
though the object still exists in memory. The object will get freed at some
point in the future. A subsequent onlining operation can create a new sysfs
directory, even if the old object still exists in memory, without causing
any problems.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
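The kobject API distinction in play, sketched (the cache kobject name is
illustrative):

    /* Tear down the sysfs directory right now, even though other
     * references to the kobject may still be outstanding. */
    kobject_del(&cache->kobj);

    /* Drop our reference; the object itself is freed whenever the
     * refcount finally reaches zero. */
    kobject_put(&cache->kobj);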
2014-02-06  powerpc: Fix the setup of CPU-to-Node mappings during CPU online  (Srivatsa S. Bhat)

commit d4edc5b6c480a0917e61d93d55531d7efa6230be upstream.

On POWER platforms, the hypervisor can notify the guest kernel about dynamic
changes in the cpu-numa associativity (VPHN topology update). Hence the
cpu-to-node mappings that we got from the firmware during boot may no longer
be valid after such updates. This is handled using the
arch_update_cpu_topology() hook in the scheduler, and the sched-domains are
rebuilt according to the new mappings.

But unfortunately, at the moment, CPU hotplug ignores these updated mappings
and instead queries the firmware for the cpu-to-numa relationships and uses
them during CPU online. So the kernel can end up assigning wrong NUMA nodes
to CPUs during subsequent CPU hotplug online operations (after booting).

Further, a particularly problematic scenario can result from this bug. On
POWER platforms, the SMT mode can be switched between 1, 2, 4 (and even 8)
threads per core. The switch to Single-Threaded (ST) mode is performed by
offlining all except the first CPU thread in each core. Switching back to
SMT mode involves onlining those other threads back, in each core.

Now consider this scenario:

 1. During boot, the kernel gets the cpu-to-node mappings from the firmware
    and assigns the CPUs to NUMA nodes appropriately, during CPU online.

 2. Later on, the hypervisor updates the cpu-to-node mappings dynamically
    and communicates this update to the kernel. The kernel in turn updates
    its cpu-to-node associations and rebuilds its sched domains. Everything
    is fine so far.

 3. Now, the user switches the machine from SMT to ST mode (say, by running
    ppc64_cpu --smt=1). This involves offlining all except 1 thread in each
    core.

 4. The user then tries to switch back from ST to SMT mode (say, by running
    ppc64_cpu --smt=4), and this involves onlining those threads back. Since
    CPU hotplug ignores the new mappings, it queries the firmware and tries
    to associate the newly onlined sibling threads with the old NUMA nodes.
    This results in sibling threads within the same core getting associated
    with different NUMA nodes, which is incorrect.

The scheduler's build-sched-domains code gets thoroughly confused by this
and enters an infinite loop and causes soft-lockups, as explained in detail
in commit 3be7db6ab (powerpc: VPHN topology change updates all siblings).

So to fix this, use the numa_cpu_lookup_table to remember the updated
cpu-to-node mappings, and use them during CPU hotplug online operations.
Further, we also need to ensure that all threads in a core are assigned to a
common NUMA node, irrespective of whether all those threads were online
during the topology update. To achieve this, we take care not to use
cpu_sibling_mask() since it is not hotplug invariant. Instead, we use
cpu_first_sibling_thread() and set up the mappings manually using the
'threads_per_core' value for that particular platform. This helps us ensure
that we don't hit this bug with any combination of CPU hotplug and SMT mode
switching.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-02-06  KVM: PPC: e500: Fix bad address type in deliver_tlb_miss()  (Mihai Caraman)

commit 70713fe315ed14cd1bb07d1a7f33e973d136ae3d upstream.

Use gva_t instead of unsigned int for eaddr in deliver_tlb_miss().

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-02-06  KVM: PPC: Book3S HV: use xics_wake_cpu only when defined  (Andreas Schwab)

commit 48eaef0518a565d3852e301c860e1af6a6db5a84 upstream.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-02-06  bpf: do not use reciprocal divide  (Eric Dumazet)

[ Upstream commit aee636c4809fa54848ff07a899b326eb1f9987a2 ]

At first Jakub Zawadzki noticed that some divisions by reciprocal_divide()
were not correct (off by one in some cases):

    http://www.wireshark.org/~darkjames/reciprocal-buggy.c

He could also show this with BPF:

    http://www.wireshark.org/~darkjames/set-and-dump-filter-k-bug.c

The reciprocal divide in the linux kernel is not generic enough; let's
remove its use in BPF, as it is not worth the pain with current CPUs.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jakub Zawadzki <darkjames-ws@darkjames.pl>
Cc: Mircea Gherzan <mgherzan@gmail.com>
Cc: Daniel Borkmann <dxchgb@gmail.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Matt Evans <matt@ozlabs.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
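An illustration of the trade-off, using the 3.10-era reciprocal_div API:

    #include <linux/reciprocal_div.h>

    u32 R = reciprocal_value(d);            /* precomputed multiplier  */
    u32 approx = reciprocal_divide(x, R);   /* can be (x / d) - 1 for
                                               some (x, d) pairs       */
    u32 exact  = x / d;                     /* the fix: true division  */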
2014-01-09  powerpc: Align p_end  (Anton Blanchard)

commit 286e4f90a72c0b0621dde0294af6ed4b0baddabb upstream.

p_end is an 8 byte value embedded in the text section. This means it is only
4 byte aligned when it should be 8 byte aligned. Fix this by adding an
explicit alignment.

This fixes an issue where POWER7 little endian builds with
CONFIG_RELOCATABLE=y fail to boot.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-01-09  powerpc: Fix bad stack check in exception entry  (Michael Neuling)

commit 90ff5d688e61f49f23545ffab6228bd7e87e6dc7 upstream.

In EXCEPTION_PROLOG_COMMON() we check to see if the stack pointer (r1) is
valid when coming from the kernel. If it's not valid, we die but with a nice
oops message.

Currently we allocate a stack frame (subtract INT_FRAME_SIZE) before we
check to see if the stack pointer is negative. Unfortunately, this won't
detect a bad stack where r1 is less than INT_FRAME_SIZE.

This patch fixes the check to compare the modified r1 with -INT_FRAME_SIZE.
With this, bad kernel stack pointers (including NULL pointers) are correctly
detected again.

Kudos to Paulus for finding this.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-01-09  powerpc: kvm: fix rare but potential deadlock scenario  (pingfan liu)

commit 91648ec09c1ef69c4d840ab6dab391bfb452d554 upstream.

Since kvmppc_hv_find_lock_hpte() is called from both virtmode and realmode,
it can trigger a deadlock. Suppose the following scenario: two physical cpus
M and N, and two VM instances A and B, each with a group of vcpus.

If on cpuM, vcpu_A_1 holds bitlock X (HPTE_V_HVLOCK), then is switched out,
and on cpuN, vcpu_A_2 tries to lock X in realmode, then cpuN will be caught
in realmode for a long time.

What makes things even worse is if the following happens: on cpuM bitlock X
is held, and on cpuN Y is held.

    vcpu_B_2 tries to lock Y on cpuM in realmode
    vcpu_A_2 tries to lock X on cpuN in realmode

Oops! A deadlock happens.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-12-20  powerpc: Fix PTE page address mismatch in pgtable ctor/dtor  (Hong H. Pham)

commit cf77ee54362a245f9a01f240adce03a06c05eb68 upstream.

In pte_alloc_one(), pgtable_page_ctor() is passed an address that has not
been converted by page_address() to the newly allocated PTE page. When the
PTE is freed, __pte_free_tlb() calls pgtable_page_dtor() with an address to
the PTE page that has been converted by page_address(). The mismatch in the
PTE's page address causes pgtable_page_dtor() to access invalid memory, so
resources for that PTE (such as the page lock) are not properly cleaned up.

On PPC32, only SMP kernels are affected. On PPC64, only SMP kernels with 4K
page size are affected.

This bug was introduced by commit d614bb041209fd7cb5e4b35e11a7b2f6ee8f62b8
"powerpc: Move the pte free routines from common header".

On a preempt-rt kernel, a spinlock is dynamically allocated for each PTE in
pgtable_page_ctor(). When the PTE is freed, calling pgtable_page_dtor() with
a mismatched page address causes a memory leak, as the pointer to the PTE's
spinlock is bogus. On mainline, there aren't any immediately obvious
symptoms, but the problem still exists here.

Fixes: d614bb041209fd7c "powerpc: Move the pte free routines from common header"
Cc: Paul Mackerras <paulus@samba.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Hong H. Pham <hong.pham@windriver.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
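The invariant being restored, sketched against the 3.10 API (where
pgtable_page_ctor() returns void):

    /* Allocation side: derive the struct page once, from the PTE page's
     * kernel virtual address. */
    struct page *ptepage = virt_to_page(pte);
    pgtable_page_ctor(ptepage);

    /* Free side: the dtor must see that same struct page, so per-page
     * resources (e.g. the split ptlock) are torn down correctly. */
    pgtable_page_dtor(ptepage);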
2013-12-16  Merge tag 'v3.10.24' into HEAD  (Ajay Nandakumar)

This is the 3.10.24 stable release

Change-Id: Ibd2734f93d44385ab86867272a1359158635133b
2013-12-04  powerpc/signals: Improved mark VSX not saved with small contexts fix  (Michael Neuling)

commit ec67ad82814bee92251fd963bf01c7a173856555 upstream.

In a recent patch:

    commit c13f20ac48328b05cd3b8c19e31ed6c132b44b42
    Author: Michael Neuling <mikey@neuling.org>
    powerpc/signals: Mark VSX not saved with small contexts

we fixed an issue, but an improved solution was later discussed after the
patch was merged.

Firstly, that patch doesn't handle the 64-bit signals case, which could also
hit this issue (but has never been reported). Secondly, the original patch
isn't clear about what MSR VSX should be set to. The new approach below
always clears the MSR VSX bit (to indicate no VSX is in the context) and
sets it only in the specific case where VSX is available (ie. when VSX has
been used and the signal context passed has space to provide the state).

This reverts the original patch and replaces it with the improved solution.
It also adds a 64-bit version.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-11-29  powerpc/signals: Mark VSX not saved with small contexts  (Michael Neuling)

commit c13f20ac48328b05cd3b8c19e31ed6c132b44b42 upstream.

The VSX MSR bit in the user context indicates if the context contains VSX
state. Currently we set this when the process has touched VSX at any stage.

Unfortunately, if the user has not provided enough space to save the VSX
state, we can't save it, but we currently still set the MSR VSX bit. This
patch changes this to clear the MSR VSX bit when the user doesn't provide
enough space. This indicates that there is no valid VSX state in the user
context.

This is needed to support get/set/make/swapcontext for applications that use
VSX but only provide a small context. For example, getcontext in glibc
provides a smaller context since the VSX registers don't need to be saved
over the glibc function call. But since the program calling getcontext may
have used VSX, the kernel currently says the VSX state is valid when it's
not. If the returned context is then used in setcontext (ie. a small context
without VSX but with MSR VSX set), the kernel will refuse the context. This
situation has been reported by the glibc community.

Based on a patch from Carlos O'Donell.

Tested-by: Haren Myneni <haren@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-11-29  powerpc: ppc64 address space capped at 32TB, mmap randomisation disabled  (Anton Blanchard)

commit 5a049f14902982c26538250bdc8d54156d357252 upstream.

Commit fba2369e6ceb (mm: use vm_unmapped_area() on powerpc architecture) has
a bug in slice_scan_available() where we compare an unsigned long
(high_slices) against a shifted int. As a result, comparisons against the
top 32 bits of high_slices (representing the top 32TB) always return 0, and
the top of our mmap region is clamped at 32TB.

This also breaks mmap randomisation, since the randomised address is always
up near the top of the address space and it gets clamped down to 32TB.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
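The bug pattern in miniature:

    unsigned long high_slices = ~0ul;
    int slice = 40;

    /* BUG: "1 << slice" is a 32-bit int shift, so for slice >= 32 the
     * test never sees the top 32 bits (the top 32TB of address space). */
    bool buggy = high_slices & (1 << slice);

    /* Fix: force the shift to be performed in unsigned long. */
    bool fixed = high_slices & (1ul << slice);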
2013-11-29  powerpc/powernv: Add PE to its own PELTV  (Gavin Shan)

commit 631ad691b5818291d89af9be607d2fe40be0886e upstream.

We need to add the PE to its own PELTV. Otherwise, errors originating from
the PE might be attributed to other PEs. As a result, we can't clear the
error successfully even though we're checking and clearing errors during
access to PCI config space.

Reported-by: kalshett@in.ibm.com
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-11-29  powerpc/vio: use strcpy in modalias_show  (Prarit Bhargava)

commit 411cabf79e684171669ad29a0628c400b4431e95 upstream.

Commit e82b89a6f19bae73fb064d1b3dd91fcefbb478f4 used strcat instead of
strcpy, which can result in an overflow of newlines on the buffer.

Signed-off-by: Prarit Bhargava
Cc: benh@kernel.crashing.org
Cc: ben@decadent.org.uk
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-11-29  powerpc/52xx: fix build breakage for MPC5200 LPBFIFO module  (Anatolij Gustschin)

commit 2bf75084f6d9f9a91ba6e30a501ff070d8a1acf6 upstream.

The MPC5200 LPBFIFO driver requires the bestcomm module to be enabled,
otherwise building will fail. Fix it.

Reported-by: Wolfgang Denk <wd@denx.de>
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-31  Merge tag 'v3.10.17' into dev-kernel-3.10  (Ajay Nandakumar)

This is the 3.10.17 stable release

Conflicts:
    drivers/usb/host/xhci.c

Change-Id: I6bd3b15ff92a0b94568b9d02e9bb1036becfca20
2013-10-18  compiler/gcc4: Add quirk for 'asm goto' miscompilation bug  (Ingo Molnar)

commit 3f0116c3238a96bc18ad4b4acefe4e7be32fa861 upstream.

Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down a kernel crash
to a GCC bug: GCC miscompiles certain 'asm goto' constructs, as outlined
here:

    http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670

Implement a workaround suggested by Jakub Jelinek.

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20131015062351.GA4666@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
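The workaround is essentially a one-line macro in the gcc4 compiler header,
routing every 'asm goto' through a wrapper that appends an empty asm:

    /* Mark all uses of "asm goto"; the trailing empty asm defeats the
     * miscompilation (workaround suggested by Jakub Jelinek). */
    #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)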
2013-10-18  KVM: PPC: Book3S HV: Fix typo in saving DSCR  (Paul Mackerras)

commit cfc860253abd73e1681696c08ea268d33285a2c4 upstream.

This fixes a typo in the code that saves the guest DSCR (Data Stream Control
Register) into the kvm_vcpu_arch struct on guest exit. The effect of the
typo was that the DSCR value was saved in the wrong place, so changes to the
DSCR by the guest didn't persist across guest exit and entry, and some host
kernel memory got corrupted.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc: Restore registers on error exit from csum_partial_copy_generic()  (Paul E. McKenney)

commit 8f21bd0090052e740944f9397e2be5ac7957ded7 upstream.

The csum_partial_copy_generic() function saves the PowerPC non-volatile r14,
r15, and r16 registers for the main checksum-and-copy loop. Unfortunately,
it fails to restore them upon error exit from this loop, which results in
silent corruption of these registers in the presumably rare event of an
access exception within that loop.

This commit therefore restores these registers on error exit from the loop.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc/sysfs: Disable writing to PURR in guest mode  (Madhavan Srinivasan)

commit d1211af3049f4c9c1d8d4eb8f8098cc4f4f0d0c7 upstream.

arch/powerpc/kernel/sysfs.c exports PURR with write permission. This may be
valid for a kernel running in phyp mode, but writing to the file in guest
mode causes a crash due to a privilege violation.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc: Fix parameter clobber in csum_partial_copy_generic()  (Paul E. McKenney)

commit d9813c3681a36774b254c0cdc9cce53c9e22c756 upstream.

csum_partial_copy_generic() uses register r7 to adjust the remaining bytes
to process. Unfortunately, r7 also holds a parameter, namely the address of
the flag to set in case of access exceptions while reading the source
buffer. Lacking a quantum implementation of PowerPC, this commit instead
uses register r9 to do the adjusting, leaving r7's pointer uncorrupted.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc/vio: Fix modalias_show return values  (Prarit Bhargava)

commit e82b89a6f19bae73fb064d1b3dd91fcefbb478f4 upstream.

modalias_show() should return an empty string on error, not -ENODEV. This
causes the following false and annoying error:

    > find /sys/devices -name modalias -print0 | xargs -0 cat >/dev/null
    cat: /sys/devices/vio/4000/modalias: No such device
    cat: /sys/devices/vio/4001/modalias: No such device
    cat: /sys/devices/vio/4002/modalias: No such device
    cat: /sys/devices/vio/4004/modalias: No such device
    cat: /sys/devices/vio/modalias: No such device

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc/tm: Switch out userspace PPR and DSCR sooner  (Michael Neuling)

commit e9bdc3d6143d1c4b8d8ce5231fc958268331f983 upstream.

When we do a treclaim or trecheckpoint we end up running with userspace PPR
and DSCR values. Currently we don't do anything special to avoid running
with user values, which could cause a severe performance degradation.

This patch moves the PPR and DSCR save and restore around treclaim and
trecheckpoint so that we run with user values for a much shorter period.
More care is taken with the PPR as its impact is greater than the DSCR's.

This is similar to user exceptions, where we run HMT_MEDIUM early to ensure
that we don't run with userspace PPR values in the kernel.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-10-13  powerpc/perf: Fix handling of FAB events  (Michael Ellerman)

commit a53b27b3abeef406de92a2bb0ceb6fb4c3fb8fc4 upstream.

Commit 4df4899 "Add power8 EBB support" included a bug in the handling of
the FAB_CRESP_MATCH and FAB_TYPE_MATCH fields. These values are pulled out
of the event code using EVENT_THR_CTL_SHIFT, but we were then or'ing that
value directly into MMCR1.

This meant we were failing to set the FAB fields correctly, and also
potentially corrupting the value for PMC4SEL, leading to no counts for the
FAB events and incorrect counts for PMC4.

The fix is simply to shift the FAB value left correctly before or'ing it
with MMCR1.

Reported-by: Sooraj Ravindran Nair <soonair3@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
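The class of fix, in miniature (EVENT_THR_CTL_MASK and MMCR1_FAB_SHIFT are
illustrative names here, not quoted from the patch):

    u64 fab = (event >> EVENT_THR_CTL_SHIFT) & EVENT_THR_CTL_MASK;

    /* The buggy form or'ed the unshifted value straight in, landing on
     * PMC4SEL:   mmcr1 |= fab;                                        */
    mmcr1 |= fab << MMCR1_FAB_SHIFT;    /* shift into the FAB field */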
2013-10-13  powerpc/iommu: Use GFP_KERNEL instead of GFP_ATOMIC in iommu_init_table()  (Nishanth Aravamudan)

commit 1cf389df090194a0976dc867b7fffe99d9d490cb upstream.

Under heavy (DLPAR?) stress, we tripped this panic() in
arch/powerpc/kernel/iommu.c::iommu_init_table():

    page = alloc_pages_node(nid, GFP_ATOMIC, get_order(sz));
    if (!page)
        panic("iommu_init_table: Can't allocate %ld bytes\n", sz);

Before the panic() we got a page allocation failure for an order-2
allocation. There appears to be memory free, but perhaps not in the ATOMIC
context.

I looked through all the call-sites of iommu_init_table() and didn't see any
obvious reason to need an ATOMIC allocation. Most call-sites in fact have an
explicit GFP_KERNEL allocation shortly before the call to
iommu_init_table(), indicating we are not in an atomic context. There is
some indirection for some paths, but I didn't see any locks indicating that
GFP_KERNEL is inappropriate.

With this change, under the same conditions, we have not been able to
reproduce the panic.

Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
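The change itself is one flag at the call site quoted above:

    /* Callers reach iommu_init_table() from sleepable context, so the
     * allocation need not be atomic. */
    page = alloc_pages_node(nid, GFP_KERNEL, get_order(sz));
    if (!page)
            panic("iommu_init_table: Can't allocate %ld bytes\n", sz);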
2013-09-26  KVM: PPC: Book3S: Fix compile error in XICS emulation  (Paul Mackerras)

commit 7bfa9ad55d691f2b836b576769b11eca2cf50816 upstream.

Commit 8e44ddc3f3 ("powerpc/kvm/book3s: Add support for H_IPOLL and H_XIRR_X
in XICS emulation") added a call to get_tb() but didn't include the header
that defines it, and on some configs this means book3s_xics.c fails to
compile:

    arch/powerpc/kvm/book3s_xics.c: In function ‘kvmppc_xics_hcall’:
    arch/powerpc/kvm/book3s_xics.c:812:3: error: implicit declaration of function ‘get_tb’ [-Werror=implicit-function-declaration]

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-09-26  powerpc: Default arch idle could cede processor on pseries  (Vaidyanathan Srinivasan)

commit 363edbe2614aa90df706c0f19ccfa2a6c06af0be upstream.

When adding cpuidle support to pSeries, we introduced two regressions:

  - The new cpuidle backend driver only works under hypervisors supporting
    the "SPLPAR" option, which isn't the case of the old POWER4 hypervisor
    and the HV "light" used on js2x blades.

  - The cpuidle driver registers fairly late, meaning that for a significant
    portion of the boot process, we end up having all threads spinning. This
    slows down the boot process and increases the overall resource usage if
    the hypervisor has shared processors.

This fixes both by implementing a "default" idle that will cede to the
hypervisor when possible, in a very simple way without all the bells and
whistles of cpuidle.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-09-26  powerpc: Handle unaligned ldbrx/stdbrx  (Anton Blanchard)

commit 230aef7a6a23b6166bd4003bfff5af23c9bd381f upstream.

Normally, when we haven't implemented an alignment handler for a load or
store instruction, the process will be terminated. The alignment handler
uses the DSISR (or a pseudo one) to locate the right handler.
Unfortunately, ldbrx and stdbrx overlap lfs and stfs, so we incorrectly
think ldbrx is an lfs and stdbrx is an stfs.

This bug is particularly nasty - instead of terminating the process, we
apply an incorrect fixup and continue on.

With more and more overlapping instructions, we should stop creating a
pseudo DSISR and index using the instruction directly, but for now add a
special case to catch ldbrx/stdbrx.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-09-14  Merge tag 'v3.10.12' into android-tegra-nv-3.10-rebase  (Dan Willemsen)

This is the 3.10.12 stable release

Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
2013-09-14  UPSTREAM arch: mm: pass userspace fault flag to generic fault handler  (Prashant Gaikwad)

Unlike global OOM handling, memory cgroup code will invoke the OOM killer in
any OOM situation because it has no way of telling faults occurring in
kernel context - which could be handled more gracefully - from
user-triggered faults.

Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to generic code so that memcg OOM
handling can be improved.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 407c454cb0ac6e68ca66974da787a71118cfef84)

Conflicts:
    arch/arc/mm/fault.c
    arch/arm64/mm/fault.c
    arch/metag/mm/fault.c
    arch/parisc/mm/fault.c

Change-Id: Iee53942737627be8dd8e2e325b5ba87fe85d6814
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-on: http://git-master/r/266410
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
2013-09-13  Revert "cpufreq: Notify all policy->cpus in cpufreq_notify_transition()"  (Dan Willemsen)

This reverts commit b43a7ffbf33be7e4d3b10b7714ee663ea2c52fe2.

Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
2013-09-07  powerpc: Don't Oops when accessing /proc/powerpc/lparcfg without hypervisor  (Benjamin Herrenschmidt)

commit f5f6cbb61610b7bf9d9d96db9c3979d62a424bab upstream.

/proc/powerpc/lparcfg is an ancient facility (though still actively used)
which allows access to some information about the partition when running
underneath a PAPR compliant hypervisor. It makes no sense on non-pseries
machines.

However, currently, not only can it be created on these machines if the
kernel has pseries support, but accessing it on such a machine will crash
due to trying to do hypervisor calls. In fact, it should also not do HV
calls on older pseries machines that didn't have a hypervisor either.
Finally, it has the plumbing to be a module but is a "bool" Kconfig option.

This fixes the whole lot by turning it into a machine_device_initcall that
is only created on pseries, and adding the necessary hypervisor check before
calling the H_GET_EM_PARMS hypercall.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-09-07  powerpc: Work around gcc miscompilation of __pa() on 64-bit  (Paul Mackerras)

commit bdbc29c19b2633b1d9c52638fb732bcde7a2031a upstream.

On 64-bit, __pa(&static_var) gets miscompiled by recent versions of gcc as
something like:

    addis 3,2,.LANCHOR1+4611686018427387904@toc@ha
    addi  3,3,.LANCHOR1+4611686018427387904@toc@l

This ends up effectively ignoring the offset, since its bottom 32 bits are
zero, and means that the result of __pa() still has 0xC in the top nibble.
This happens with gcc 4.8.1, at least.

To work around this, for 64-bit we make __pa() use an AND operator, and for
symmetry, we make __va() use an OR operator. Using an AND operator rather
than a subtraction ends up with slightly shorter code, since it can be done
with a single clrldi instruction, whereas it takes three instructions to
form the constant (-PAGE_OFFSET) and add it on. (Note that MEMORY_START is
always 0 on 64-bit.)

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
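The resulting 64-bit definitions are, in essence (relying on PAGE_OFFSET
being 0xC000000000000000 and MEMORY_START being 0 on 64-bit):

    /* Map between kernel virtual and physical addresses by setting or
     * masking the 0xC top nibble, avoiding the add/subtract that gcc
     * miscompiles for static variables. */
    #define __va(x) ((void *)((unsigned long)(x) | PAGE_OFFSET))
    #define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)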