author     Linus Torvalds <torvalds@linux-foundation.org>  2013-08-29 17:02:48 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2013-08-29 17:02:48 -0700
commit     ff497452636f4687e517964817b7e2bd99f4b44b (patch)
tree       a02fd3216941b71ea58c6b57f84344be64168301
parent     06a557f7a68e126181f09831cb2fac6f7a7f43c2 (diff)
parent     b22ce2785d97423846206cceec4efee0c4afd980 (diff)
Merge branch 'for-3.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue fix from Tejun Heo:
 "This contains one fix which could lead to system-wide lockup on
  !PREEMPT kernels. It's very late in the cycle but this definitely is
  -stable material.

  The problem is that workqueue worker tasks may process an unlimited
  number of work items back-to-back without ever yielding in between.
  This usually isn't noticeable, but a work item which re-queues itself
  while waiting for someone else to do something can deadlock with
  stop_machine. stop_machine ensures nothing else happens on all other
  CPUs, so the re-queueing work item keeps re-queueing itself without
  ever yielding, preventing its CPU from entering stop_machine.

  Kudos to Jamie Liu for spotting and diagnosing the problem.

  This can be trivially fixed by adding cond_resched() after processing
  each work item"

* 'for-3.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: cond_resched() after processing each work item
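To make the failure mode concrete, here is a minimal sketch of the
self-requeueing pattern the message describes. It is not taken from the
patch or from any in-tree driver; the module and the poll_work /
condition_met names are hypothetical, chosen only to illustrate the bug:

	/*
	 * Hypothetical example (not from the patch): a work item that
	 * re-queues itself while polling for a condition set elsewhere,
	 * e.g. by a task on another CPU.
	 */
	#include <linux/module.h>
	#include <linux/workqueue.h>

	static bool condition_met;
	static struct work_struct poll_work;

	static void poll_work_fn(struct work_struct *work)
	{
		if (!condition_met) {
			/*
			 * Nothing to do yet; try again later. On a !PREEMPT
			 * kernel the kworker dequeues this item immediately
			 * and runs it back-to-back, never scheduling -- so
			 * stop_machine, which needs every CPU to reach its
			 * handler, can never begin, and whatever would have
			 * set condition_met may be stuck behind it.
			 */
			schedule_work(&poll_work);
			return;
		}
		pr_info("condition met, work complete\n");
	}

	static int __init poll_example_init(void)
	{
		INIT_WORK(&poll_work, poll_work_fn);
		schedule_work(&poll_work);
		return 0;
	}
	module_init(poll_example_init);

	static void __exit poll_example_exit(void)
	{
		cancel_work_sync(&poll_work);
	}
	module_exit(poll_example_exit);

	MODULE_LICENSE("GPL");

With the cond_resched() added by the patch below, the kworker yields
between work items, so the CPU can enter stop_machine and the polling
loop can eventually make progress.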
 kernel/workqueue.c | 9 +++++++++
 1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7f5d4be22034..e93f7b9067d8 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2201,6 +2201,15 @@ __acquires(&pool->lock)
 		dump_stack();
 	}
 
+	/*
+	 * The following prevents a kworker from hogging CPU on !PREEMPT
+	 * kernels, where a requeueing work item waiting for something to
+	 * happen could deadlock with stop_machine as such work item could
+	 * indefinitely requeue itself while all other CPUs are trapped in
+	 * stop_machine.
+	 */
+	cond_resched();
+
 	spin_lock_irq(&pool->lock);
 
 	/* clear cpu intensive status */