author      Nick Piggin <npiggin@suse.de>               2009-02-12 04:34:23 +0100
committer   Greg Kroah-Hartman <gregkh@suse.de>         2009-02-17 09:28:48 -0800
commit      fa76ac6cbeb58256cf7de97a75d5d7f838a80b32 (patch)
tree        6d19d67ae9241088a6385273f58011fae22d2f86
parent      703c888fbf2e2e2f2b97bf75ffe5126da3008abb (diff)
Fix page writeback thinko, causing Berkeley DB slowdown
commit 3a4c6800f31ea8395628af5e7e490270ee5d0585 upstream.

A bug was introduced into write_cache_pages cyclic writeout by commit
31a12666d8f0c22235297e1c1575f82061480029 ("mm: write_cache_pages cyclic
fix"). The intention (and comments) is that we should cycle back and
look for more dirty pages at the beginning of the file if there is no
more work to be done.

But the !done condition was dropped from the test. This means that any
time the page writeout loop breaks (e.g. due to nr_to_write == 0), we
will set index to 0, then goto again. This will set done_index to index,
then find done is set, so will proceed to the end of the function. When
updating mapping->writeback_index for cyclic writeout, we now use
done_index == 0, so we're always cycling back to 0.

This seemed to be causing random mmap writes (slapadd and iozone) to
start writing more pages from the LRU, writeout would slow down, and it
caused bugzilla entry http://bugzilla.kernel.org/show_bug.cgi?id=12604
about Berkeley DB slowing down dramatically.

With this patch, iozone random write performance is increased nearly 5x
on my system (iozone -B -r 4k -s 64k -s 512m -s 1200m on ext2).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Reported-and-tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
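[Editor's illustration] To make the failure mode concrete, here is a small
userspace model of the cyclic writeout control flow. It is a sketch, not the
kernel source: NPAGES, the budget handling, and cyclic_writeout() itself are
invented for the demo; only the done/cycled/done_index logic follows the code
changed in the hunk below.

#include <stdio.h>

#define NPAGES 16	/* size of the toy file, invented for the demo */

/*
 * Toy model of the write_cache_pages() cyclic writeout loop. Only the
 * done/cycled/done_index control flow mirrors the kernel function; the
 * rest is simplified for illustration.
 */
static long cyclic_writeout(long writeback_index, int nr_to_write, int fixed)
{
	long index = writeback_index;
	long end = NPAGES - 1;
	long done_index;
	int cycled = (index == 0);
	int done = 0;

retry:
	done_index = index;
	while (!done && index <= end) {
		/* "write out" page @index */
		done_index = index + 1;	/* where a later call should resume */
		index++;
		if (--nr_to_write <= 0)
			done = 1;	/* budget exhausted: stop early */
	}

	/* the test this patch repairs: the buggy version dropped !done */
	if (fixed ? (!cycled && !done) : !cycled) {
		cycled = 1;
		index = 0;
		end = writeback_index - 1;
		goto retry;
	}
	return done_index;	/* stored in mapping->writeback_index */
}

int main(void)
{
	/* start mid-file with a small budget, as random mmap writeout does */
	printf("buggy: writeback_index becomes %ld\n", cyclic_writeout(5, 4, 0));
	printf("fixed: writeback_index becomes %ld\n", cyclic_writeout(5, 4, 1));
	return 0;
}

Starting at index 5 with a budget of 4 pages, the buggy test wraps even though
done is set, so done_index is reassigned to 0 and writeback_index becomes 0
(the next call rescans from the start of the file); the fixed test records 9,
the page after the last one written.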
-rw-r--r--  mm/page-writeback.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 08d2b960b294..11400ed468e7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -997,7 +997,7 @@ continue_unlock:
pagevec_release(&pvec);
cond_resched();
}
- if (!cycled) {
+ if (!cycled && !done) {
/*
* range_cyclic:
* We hit the last page and there is more work to be done: wrap