Gentoo's Bugzilla – Attachment 266435 details for Bug 357657: "Fix possible cause of a page_mapped BUG in mm/mremap.c"
Description: prevent-concurrent-unmap_mapping_range fix (backported to 2.6.32.x)
Filename:    2.6.32.33_mm-prevent-concurrent-unmap_mapping_range-on-same-inode.patch
MIME Type:   text/plain
Creator:     RumpletonBongworth
Created:     2011-03-19 06:04:39 UTC
Size:        7.07 KB
Flags:       patch, obsolete
From: Kerin Millar <kerframil@gmail.com>

A backport of 2aa15890f3c191326678f1bd68af61ec6b8753ec to 2.6.32.x
---

From: Miklos Szeredi <mszeredi@suse.cz>
Date: Wed, 23 Feb 2011 12:49:47 +0000 (+0100)
Subject: mm: prevent concurrent unmap_mapping_range() on the same inode
X-Git-Tag: v2.6.38-rc7~45
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git;a=commitdiff_plain;h=2aa15890f3c191326678f1bd68af61ec6b8753ec

mm: prevent concurrent unmap_mapping_range() on the same inode

Michael Leun reported that running parallel opens on a fuse filesystem
can trigger a "kernel BUG at mm/truncate.c:475"

Gurudas Pai reported the same bug on NFS.

The reason is, unmap_mapping_range() is not prepared for more than
one concurrent invocation per inode.  For example:

  thread1: going through a big range, stops in the middle of a vma and
  stores the restart address in vm_truncate_count.

  thread2: comes in with a small (e.g. single page) unmap request on
  the same vma, somewhere before restart_address, finds that the
  vma was already unmapped up to the restart address and happily
  returns without doing anything.

Another scenario would be two big unmap requests, both having to
restart the unmapping and each one setting vm_truncate_count to its
own value.  This could go on forever without any of them being able to
finish.

Truncate and hole punching already serialize with i_mutex.  Other
callers of unmap_mapping_range() do not, and it's difficult to get
i_mutex protection for all callers.  In particular ->d_revalidate(),
which calls invalidate_inode_pages2_range() in fuse, may be called
with or without i_mutex.

This patch adds a new mutex to 'struct address_space' to prevent
running multiple concurrent unmap_mapping_range() on the same mapping.

[ We'll hopefully get rid of all this with the upcoming mm
  preemptibility series by Peter Zijlstra, the "mm: Remove i_mmap_mutex
  lockbreak" patch in particular.  But that is for 2.6.39 ]

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reported-by: Michael Leun <lkml20101129@newton.leun.net>
Reported-by: Gurudas Pai <gurudas.pai@oracle.com>
Tested-by: Gurudas Pai <gurudas.pai@oracle.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---

diff -urp a/fs/inode.c b/fs/inode.c
--- a/fs/inode.c	2009-12-03 03:51:21.000000000 +0000
+++ b/fs/inode.c	2011-03-19 05:53:48.415083502 +0000
@@ -256,6 +256,20 @@ void destroy_inode(struct inode *inode)
 		kmem_cache_free(inode_cachep, (inode));
 }
 
+void address_space_init_once(struct address_space *mapping)
+{
+	memset(mapping, 0, sizeof(*mapping));
+	INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC);
+	spin_lock_init(&mapping->tree_lock);
+	spin_lock_init(&mapping->i_mmap_lock);
+	INIT_LIST_HEAD(&mapping->private_list);
+	spin_lock_init(&mapping->private_lock);
+	INIT_RAW_PRIO_TREE_ROOT(&mapping->i_mmap);
+	INIT_LIST_HEAD(&mapping->i_mmap_nonlinear);
+	mutex_init(&mapping->unmap_mutex);
+}
+EXPORT_SYMBOL(address_space_init_once);
+
 /*
  * These are initializations that only need to be done
  * once, because the fields are idempotent across use
@@ -267,13 +281,7 @@ void inode_init_once(struct inode *inode
 	INIT_HLIST_NODE(&inode->i_hash);
 	INIT_LIST_HEAD(&inode->i_dentry);
 	INIT_LIST_HEAD(&inode->i_devices);
-	INIT_RADIX_TREE(&inode->i_data.page_tree, GFP_ATOMIC);
-	spin_lock_init(&inode->i_data.tree_lock);
-	spin_lock_init(&inode->i_data.i_mmap_lock);
-	INIT_LIST_HEAD(&inode->i_data.private_list);
-	spin_lock_init(&inode->i_data.private_lock);
-	INIT_RAW_PRIO_TREE_ROOT(&inode->i_data.i_mmap);
-	INIT_LIST_HEAD(&inode->i_data.i_mmap_nonlinear);
+	address_space_init_once(&inode->i_data);
 	i_size_ordered_init(inode);
 #ifdef CONFIG_INOTIFY
 	INIT_LIST_HEAD(&inode->inotify_watches);
diff -urp a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
--- a/fs/nilfs2/btnode.c	2009-12-03 03:51:21.000000000 +0000
+++ b/fs/nilfs2/btnode.c	2011-03-19 05:50:52.935595044 +0000
@@ -34,19 +34,6 @@
 #include "btnode.h"
 
 
-void nilfs_btnode_cache_init_once(struct address_space *btnc)
-{
-	memset(btnc, 0, sizeof(*btnc));
-	INIT_RADIX_TREE(&btnc->page_tree, GFP_ATOMIC);
-	spin_lock_init(&btnc->tree_lock);
-	INIT_LIST_HEAD(&btnc->private_list);
-	spin_lock_init(&btnc->private_lock);
-
-	spin_lock_init(&btnc->i_mmap_lock);
-	INIT_RAW_PRIO_TREE_ROOT(&btnc->i_mmap);
-	INIT_LIST_HEAD(&btnc->i_mmap_nonlinear);
-}
-
 static const struct address_space_operations def_btnode_aops = {
 	.sync_page		= block_sync_page,
 };
diff -urp a/fs/nilfs2/btnode.h b/fs/nilfs2/btnode.h
--- a/fs/nilfs2/btnode.h	2009-12-03 03:51:21.000000000 +0000
+++ b/fs/nilfs2/btnode.h	2011-03-19 05:50:52.935595044 +0000
@@ -37,7 +37,6 @@ struct nilfs_btnode_chkey_ctxt {
 	struct buffer_head *newbh;
 };
 
-void nilfs_btnode_cache_init_once(struct address_space *);
 void nilfs_btnode_cache_init(struct address_space *, struct backing_dev_info *);
 void nilfs_btnode_cache_clear(struct address_space *);
 int nilfs_btnode_submit_block(struct address_space *, __u64, sector_t,
diff -urp a/fs/nilfs2/super.c b/fs/nilfs2/super.c
--- a/fs/nilfs2/super.c	2011-03-19 04:21:49.896620815 +0000
+++ b/fs/nilfs2/super.c	2011-03-19 05:50:52.935595044 +0000
@@ -166,7 +166,7 @@ static void init_once(void *obj)
 #ifdef CONFIG_NILFS_XATTR
 	init_rwsem(&ii->xattr_sem);
 #endif
-	nilfs_btnode_cache_init_once(&ii->i_btnode_cache);
+	address_space_init_once(&ii->i_btnode_cache);
 	ii->i_bmap = (struct nilfs_bmap *)&ii->i_bmap_union;
 	inode_init_once(&ii->vfs_inode);
 }
diff -urp a/include/linux/fs.h b/include/linux/fs.h
--- a/include/linux/fs.h	2011-03-19 04:21:49.924620913 +0000
+++ b/include/linux/fs.h	2011-03-19 05:50:52.935595044 +0000
@@ -635,6 +635,7 @@ struct address_space {
 	spinlock_t		private_lock;	/* for use by the address_space */
 	struct list_head	private_list;	/* ditto */
 	struct address_space	*assoc_mapping;	/* ditto */
+	struct mutex		unmap_mutex;    /* to protect unmapping */
 } __attribute__((aligned(sizeof(long))));
 	/*
 	 * On most architectures that alignment is already the case; but
@@ -2158,6 +2159,7 @@ extern loff_t vfs_llseek(struct file *fi
 
 extern int inode_init_always(struct super_block *, struct inode *);
 extern void inode_init_once(struct inode *);
+extern void address_space_init_once(struct address_space *mapping);
 extern void inode_add_to_lists(struct super_block *, struct inode *);
 extern void iput(struct inode *);
 extern struct inode * igrab(struct inode *);
diff -urp a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c	2011-03-19 04:21:49.964620834 +0000
+++ b/mm/memory.c	2011-03-19 05:50:52.939088167 +0000
@@ -2454,6 +2454,7 @@ void unmap_mapping_range(struct address_
 	details.last_index = ULONG_MAX;
 	details.i_mmap_lock = &mapping->i_mmap_lock;
 
+	mutex_lock(&mapping->unmap_mutex);
 	spin_lock(&mapping->i_mmap_lock);
 
 	/* Protect against endless unmapping loops */
@@ -2470,6 +2471,7 @@ void unmap_mapping_range(struct address_
 	if (unlikely(!list_empty(&mapping->i_mmap_nonlinear)))
 		unmap_mapping_range_list(&mapping->i_mmap_nonlinear, &details);
 	spin_unlock(&mapping->i_mmap_lock);
+	mutex_unlock(&mapping->unmap_mutex);
 }
 EXPORT_SYMBOL(unmap_mapping_range);
Attachments on bug 357657: 266427 | 266429 | 266431 | 266435