Gentoo's Bugzilla – Attachment 512710 Details for Bug 643098: dev-util/wiggle-0.9-r1 40 tests fail
Description: build.log
Filename: wiggle_build.log
MIME Type: text/plain
Creator: ernsteiswuerfel
Created: 2018-01-02 12:49:26 UTC
Size: 127.48 KB
 * Package: dev-util/wiggle-0.9-r1
 * Repository: gentoo
 * Maintainer: robbat2@gentoo.org
 * USE: abi_ppc_32 elibc_glibc kernel_linux ppc test userland_GNU
 * FEATURES: network-sandbox preserve-libs sandbox test userpriv usersandbox
>>> Unpacking source...
>>> Unpacking wiggle-0.9.tar.gz to /var/tmp/portage/dev-util/wiggle-0.9-r1/work
>>> Source unpacked in /var/tmp/portage/dev-util/wiggle-0.9-r1/work
>>> Preparing source in /var/tmp/portage/dev-util/wiggle-0.9-r1/work/wiggle-0.9 ...
 * Replacing obsolete head/tail with POSIX compliant ones
 *  - fixed p
>>> Source prepared.
>>> Configuring source in /var/tmp/portage/dev-util/wiggle-0.9-r1/work/wiggle-0.9 ...
>>> Source configured.
>>> Compiling source in /var/tmp/portage/dev-util/wiggle-0.9-r1/work/wiggle-0.9 ...
make -j3 CC=powerpc-unknown-linux-gnu-gcc 'CFLAGS=-O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall' wiggle
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o wiggle.o wiggle.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o load.o load.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o parse.o parse.c
wiggle.c: In function 'xmalloc':
wiggle.c:108:3: warning: ignoring return value of 'write', declared with attribute warn_unused_result [-Wunused-result]
   write(2, msg, strlen(msg));
   ^~~~~~~~~~~~~~~~~~~~~~~~~~
wiggle.c: In function 'multi_merge':
wiggle.c:625:3: warning: ignoring return value of 'asprintf', declared with attribute warn_unused_result [-Wunused-result]
   asprintf(&name, "_wiggle_:%d:%d:%s",
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    pl[i].start, pl[i].end, filename);
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o split.o split.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o extract.o extract.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o diff.o diff.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o bestmatch.o bestmatch.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o ReadMe.o ReadMe.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o merge2.o merge2.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o vpatch.o vpatch.c
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o ccan/hash/hash.o ccan/hash/hash.c
vpatch.c: In function 'main_window':
vpatch.c:2131:2: warning: ignoring return value of 'freopen', declared with attribute warn_unused_result [-Wunused-result]
  freopen("/dev/null","w",stderr);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vpatch.c: In function 'show_merge':
vpatch.c:1894:2: warning: ignoring return value of 'freopen', declared with attribute warn_unused_result [-Wunused-result]
  freopen("/dev/null","w",stderr);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
powerpc-unknown-linux-gnu-gcc -O2 -pipe -mcpu=powerpc -mtune=powerpc -Wall -I. -c -o split.o split.c
vpatch.c: In function 'merge_window':
vpatch.c:1331:10: warning: 'e' may be used uninitialized in this function [-Wmaybe-uninitialized]
  char *e, e2[7];
        ^
powerpc-unknown-linux-gnu-gcc -Wl,-O1 -Wl,--as-needed -Wl,--hash-style=gnu wiggle.o load.o parse.o split.o extract.o diff.o bestmatch.o ReadMe.o merge2.o vpatch.o ccan/hash/hash.o -lncurses -o wiggle
>>> Source compiled.
>>> Test phase: dev-util/wiggle-0.9-r1
make -j3 test
./dotest
./linux/md-messy/diff SUCCEEDED 0.00
5 unresolved conflicts found
4136 already-applied changes ignored
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:40.926770759 +0100
@@ -642,12 +642,14 @@
 
 <<<<<<<
 del_mddev_mapping(mddev, MKDEV(MD_MAJOR, mdidx(mddev)));
+ md_list_del(&mddev->all_mddevs);
 |||||||
 del_mddev_mapping(mddev, mk_kdev(MD_MAJOR, mdidx(mddev)));
+ md_list_del(&mddev->all_mddevs);
 =======
 mddev_map[mdidx(mddev)] = NULL;
->>>>>>>
 md_list_del(&mddev->all_mddevs);
+>>>>>>>
 kfree(mddev);
 MOD_DEC_USE_COUNT;
 }
@@ -1990,6 +1992,20 @@
 
 #undef BAD_VERSION
 #undef OUT_OF_MEM
+<<<<<<<
+#undef NO_DEVICE
+#undef AUTOADD_FAILED_USED
+#undef AUTOADD_FAILED
+#undef AUTORUN_FAILED
+#undef AUTOADDING
+|||||||
+#undef NO_DEVICE
+#undef AUTOADD_FAILED_USED
+#undef AUTOADD_FAILED
+#undef AUTORUN_FAILED
+#undef AUTOADDING
+#undef AUTORUNNING
+=======
 #undef NO_DEVICE
 #undef AUTOADD_FAILED_USED
 #undef AUTOADD_FAILED
@@ -1997,10 +2013,24 @@
 #undef AUTOADDING
 #undef AUTORUNNING
 
+>>>>>>>
+#undef AUTORUNNING
+<<<<<<<
+
+
+static int get_version(void * arg)
+{
+ mdu_version_t ver;
+|||||||
 
 static int get_version(void * arg)
 {
 mdu_version_t ver;
+=======
+static int get_version(void * arg)
+{
+ mdu_version_t ver;
+>>>>>>>
 
 ver.major = MD_MAJOR_VERSION;
 ver.minor = MD_MINOR_VERSION;
@@ -3949,14 +3979,18 @@
 MD_EXPORT_SYMBOL(md_update_sb);
 MD_EXPORT_SYMBOL(md_wakeup_thread);
 MD_EXPORT_SYMBOL(md_print_devices);
-MD_EXPORT_SYMBOL(find_rdev_nr);
 <<<<<<<
+MD_EXPORT_SYMBOL(find_rdev_nr);
 MD_EXPORT_SYMBOL(md_interrupt_thread);
 MD_EXPORT_SYMBOL(mddev_map);
+MODULE_LICENSE("GPL");
 |||||||
+MD_EXPORT_SYMBOL(find_rdev_nr);
 MD_EXPORT_SYMBOL(md_interrupt_thread);
 EXPORT_SYMBOL(mddev_map);
+MODULE_LICENSE("GPL");
 =======
+MD_EXPORT_SYMBOL(find_rdev_nr);
 MD_EXPORT_SYMBOL(md_interrupt_thread);
->>>>>>>
 MODULE_LICENSE("GPL");
+>>>>>>>
./linux/md-loop/merge FAILED 3.86
2 unresolved conflicts found
./linux/raid5build/merge SUCCEEDED 0.00
1 unresolved conflict found
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:47.915878414 +0100
@@ -955,10 +955,6 @@
 
 <<<<<<<
 hlist_del_init(&inode->i_hash);
-|||||||
- list_del_init(&inode->i_hash);
-=======
->>>>>>>
 list_del_init(&inode->i_list);
 inode->i_state|=I_FREEING;
 inodes_stat.nr_inodes--;
@@ -1356,3 +1352,13 @@
 printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o)\n",
 mode);
 }
+|||||||
+ list_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+=======
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+>>>>>>>
./linux/inode-justrej/merge FAILED 0.01
1 unresolved conflict found
--- wmerge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:47.969235030 +0100
@@ -953,7 +953,8 @@
 {
 struct super_operations *op = inode->i_sb->s_op;
 
-<<<---hlist_del_init|||list_del_init===--->>> list_del_init(&inode->i_list);
+<<<---hlist_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
 inode->i_state|=I_FREEING;
 inodes_stat.nr_inodes--;
 spin_unlock(&inode_lock);
@@ -1350,3 +1351,11 @@
 printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o)\n",
 mode);
 }
+|||list_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+=== list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+--->>>
\ No newline at end of file
./linux/inode-justrej/wmerge FAILED 0.01
1 unresolved conflict found
--- lmerge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:48.021764597 +0100
@@ -955,12 +955,6 @@
 <<<<<<<
 
 hlist_del_init(&inode->i_hash);
-|||||||
-
- list_del_init(&inode->i_hash);
-=======
-
->>>>>>>
 
 list_del_init(&inode->i_list);
 inode->i_state|=I_FREEING;
 inodes_stat.nr_inodes--;
@@ -1358,3 +1352,15 @@
 printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o)\n",
 mode);
 }
+|||||||
+
+ list_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+=======
+
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
+ inodes_stat.nr_inodes--;
+>>>>>>>
./linux/inode-justrej/lmerge FAILED 0.00
1 unresolved conflict found
7 already-applied changes ignored
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:48.060014755 +0100
@@ -40,8 +40,24 @@
 #define MAY_OWNER_OVERRIDE 64
 #define MAY_LOCAL_ACCESS 128 /* IRIX doing local access check on device special file*/
 #if (MAY_SATTR | MAY_TRUNC | MAY_LOCK | MAY_OWNER_OVERRIDE | MAY_LOCAL_ACCESS) & (MAY_READ | MAY_WRITE | MAY_EXEC)
+<<<<<<<
+# error "please use a different value for MAY_SATTR or MAY_TRUNC or MAY_LOCK or MAY_OWNER_OVERRIDE."
+#endif
+|||||||
+#define MAY_LOCK 32
+#define MAY_OWNER_OVERRIDE 64
+#define MAY_LOCAL_ACCESS 128 /* IRIX doing local access check on device special file*/
+#if (MAY_SATTR | MAY_TRUNC | MAY_LOCK | MAX_OWNER_OVERRIDE | MAY_LOCAL_ACCESS) & (MAY_READ | MAY_WRITE | MAY_EXEC | MAY_OWNER_OVERRIDE)
+# error "please use a different value for MAY_SATTR or MAY_TRUNC or MAY_LOCK or MAY_OWNER_OVERRIDE."
+#endif
+=======
+#define MAY_LOCK 32
+#define MAY_OWNER_OVERRIDE 64
+#define MAY_LOCAL_ACCESS 128 /* IRIX doing local access check on device special file*/
+#if (MAY_SATTR | MAY_TRUNC | MAY_LOCK | MAY_OWNER_OVERRIDE | MAY_LOCAL_ACCESS) & (MAY_READ | MAY_WRITE | MAY_EXEC)
 # error "please use a different value for MAY_SATTR or MAY_TRUNC or MAY_LOCK or MAY_LOCAL_ACCESS or MAY_OWNER_OVERRIDE."
 #endif
+>>>>>>>
 #define MAY_CREATE (MAY_EXEC|MAY_WRITE)
 #define MAY_REMOVE (MAY_EXEC|MAY_WRITE|MAY_TRUNC)
 
./linux/nfsd-defines/merge FAILED 0.00
2 unresolved conflicts found
6 already-applied changes ignored
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:48.102888680 +0100
@@ -122,6 +122,7 @@
 */
 static void
 svc_sock_enqueue(struct svc_sock *svsk)
+<<<<<<<
 {
 struct svc_serv *serv = svsk->sk_server;
 struct svc_rqst *rqstp;
@@ -640,11 +641,43 @@
 svsk->sk_sk->data_ready = svc_udp_data_ready;
 svsk->sk_sk->write_space = svc_write_space;
 svsk->sk_recvfrom = svc_udp_recvfrom;
+|||||||
+ {
+ struct sock *sk = svsk->sk_sk;
+
+ svsk->sk_recvfrom = svc_tcp_recvfrom;
+ svsk->sk_sendto = svc_tcp_sendto;
+=======
+ {
+ struct sock *sk = svsk->sk_sk;
+ struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+
+ svsk->sk_recvfrom = svc_tcp_recvfrom;
+ svsk->sk_sendto = svc_tcp_sendto;
+>>>>>>>
+<<<<<<<
 svsk->sk_sendto = svc_udp_sendto;
 
 /* initialise setting must have enough space to
 * receive and respond to one request.
 * svc_udp_recvfrom will re-adjust if necessary
+|||||||
+ svsk->sk_reclen = 0;
+ svsk->sk_tcplen = 0;
+
+ /* initialise setting must have enough space to
+ * receive and respond to one request.
+ * svc_tcp_recvfrom will re-adjust if necessary
+=======
+ svsk->sk_reclen = 0;
+ svsk->sk_tcplen = 0;
+
+ tp->nonagle = 1; /* disable Nagle's algorithm */
+
+ /* initialise setting must have enough space to
+ * receive and respond to one request.
+ * svc_tcp_recvfrom will re-adjust if necessary
+>>>>>>>
 */
 svc_sock_setbufsize(svsk->sk_sock,
 3 * svsk->sk_server->sv_bufsz,
@@ -1015,7 +1048,6 @@
 svc_tcp_init(struct svc_sock *svsk)
 {
 struct sock *sk = svsk->sk_sk;
- struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
 
 svsk->sk_recvfrom = svc_tcp_recvfrom;
 svsk->sk_sendto = svc_tcp_sendto;
@@ -1031,27 +1063,11 @@
 sk->write_space = svc_write_space;
 
 svsk->sk_reclen = 0;
-<<<<<<<
 svsk->sk_tcplen = 0;
 
 /* initialise setting must have enough space to
 * receive and respond to one request.
 * svc_tcp_recvfrom will re-adjust if necessary
-|||||||
- svsk->sk_tcplen = 0;
-
- /* initialise setting must have enough space to
- * receive and respond to one request.
- * svc_tcp_recvfrom will re-adjust if necessary
-=======
- svsk->sk_tcplen = 0;
-
- tp->nonagle = 1; /* disable Nagle's algorithm */
-
- /* initialise setting must have enough space to
- * receive and respond to one request.
- * svc_tcp_recvfrom will re-adjust if necessary
->>>>>>>
 */
 svc_sock_setbufsize(svsk->sk_sock,
 3 * svsk->sk_server->sv_bufsz,
./linux/rpc_tcp_nonagle/merge FAILED 0.01
1 unresolved conflict found
./linux/raid5line/merge SUCCEEDED 0.00
1 unresolved conflict found
--- wmerge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:48.194473765 +0100
@@ -1 +1 @@
-<<<--- clear_bit(BH_Uptodate, &||| clear_buffer_uptodate(=== dev--->>>-><<<---->b_state|||===flags = 0--->>>;
+<<<--- clear_bit(BH_Uptodate, &sh->bh_cache[i]->b_state||| clear_buffer_uptodate(sh->bh_cache[i]=== dev->flags = 0--->>>;
./linux/raid5line/wmerge FAILED 0.00
1 unresolved conflict found
./linux/raid5line/lmerge SUCCEEDED 0.00
18 unresolved conflicts found
10 already-applied changes ignored
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:48.261895159 +0100
@@ -64,6 +64,7 @@
 struct buffer_head *bh=NULL;
 
 while(cnt) {
+<<<<<<<
 struct buffer_head *t;
 
 md_spin_lock_irq(&conf->device_lock);
 if (!conf->freebh_blocked && conf->freebh_cnt >= cnt)
@@ -319,6 +320,37 @@
 __free_page(r1_bh->bh_req.b_page);
 kfree(r1_bh);
 }
+|||||||
+ * device if no resync is going on, or below the resync window.
+ * We take the first readable disk when above the resync window.
+ */
+ if (conf->resync_mirrors && (this_sector + sectors >= conf->next_resync)) {
+ /* make sure that disk is operational */
+ new_disk = 0;
+ while (!conf->mirrors[new_disk].operational || conf->mirrors[new_disk].write_only) {
+=======
+ * device if no resync is going on, or below the resync window.
+ * We take the first readable disk when above the resync window.
+ */
+ if (!conf->mddev->in_sync && (this_sector + sectors >= conf->next_resync)) {
+ /* make sure that disk is operational */
+ new_disk = 0;
+ while (!conf->mirrors[new_disk].operational || conf->mirrors[new_disk].write_only) {
+>>>>>>>
+<<<<<<<
+
+|||||||
+ if (conf->barrier) BUG();
+ if (waitqueue_active(&conf->wait_idle)) BUG();
+ if (waitqueue_active(&conf->wait_resume)) BUG();
+=======
+ if (conf->barrier) BUG();
+ if (waitqueue_active(&conf->wait_idle)) BUG();
+ if (waitqueue_active(&conf->wait_resume)) BUG();
+>>>>>>>
+
+ mempool_destroy(conf->r1buf_pool);
+ conf->r1buf_pool = NULL;
 }
 
 static int raid1_map (mddev_t *mddev, kdev_t *rdev)
@@ -483,7 +515,7 @@
 * Check if it is sane at all to balance
 */
 
- if (!conf->mddev->in_sync)
+ if (conf->resync_mirrors)
 goto rb_out;
 
 
@@ -851,9 +883,6 @@
 conf->cnt_done = 0;
 spin_unlock_irq(&conf->segment_lock);
 wake_up(&conf->wait_done);
-
- mempool_destroy(conf->r1buf_pool);
- conf->r1buf_pool = NULL;
 }
 
 static int raid1_diskop(mddev_t *mddev, mdp_disk_t **d, int state)
@@ -977,16 +1006,11 @@
 * Deactivate a spare disk:
 */
 case DISKOP_SPARE_INACTIVE:
-<<<<<<<
 if (conf->start_future > 0) {
 MD_BUG();
 err = -EBUSY;
 break;
 }
-|||||||
- close_sync(conf);
-=======
->>>>>>>
 
 sdisk = conf->mirrors + spare_disk;
 sdisk->operational = 0;
 sdisk->write_only = 0;
@@ -999,16 +1023,11 @@
 * property)
 */
 case DISKOP_SPARE_ACTIVE:
-<<<<<<<
 if (conf->start_future > 0) {
 MD_BUG();
 err = -EBUSY;
 break;
 }
-|||||||
- close_sync(conf);
-=======
->>>>>>>
 sdisk = conf->mirrors + spare_disk;
 fdisk = conf->mirrors + failed_disk;
 
@@ -1137,23 +1156,12 @@
 }
 abort:
 md_spin_unlock_irq(&conf->device_lock);
-<<<<<<<
 if (state == DISKOP_SPARE_ACTIVE || state == DISKOP_SPARE_INACTIVE)
 /* should move to "END_REBUILD" when such exists */
 raid1_shrink_buffers(conf);
+<<<<<<<
 
 print_raid1_conf(conf);
-|||||||
- if (state == DISKOP_SPARE_ACTIVE || state == DISKOP_SPARE_INACTIVE) {
- mempool_destroy(conf->r1buf_pool);
- conf->r1buf_pool = NULL;
- }
-
- print_conf(conf);
-=======
-
- print_conf(conf);
->>>>>>>
 return err;
 }
 
@@ -1213,9 +1221,29 @@
 continue;
 if (i==conf->last_used)
 /* we read from here, no need to write */
+|||||||
+ }
+abort:
+ spin_unlock_irq(&conf->device_lock);
+ if (state == DISKOP_SPARE_ACTIVE || state == DISKOP_SPARE_INACTIVE) {
+ mempool_destroy(conf->r1buf_pool);
+ conf->r1buf_pool = NULL;
+ }
+
+ print_conf(conf);
+ return err;
+=======
+ }
+abort:
+ spin_unlock_irq(&conf->device_lock);
+
+ print_conf(conf);
+ return err;
+>>>>>>>
+<<<<<<<
 continue;
 if (i < conf->raid_disks
- && mddev->in_sync)
+ && !conf->resync_mirrors)
 /* don't need to write this,
 * we are just rebuilding */
 continue;
@@ -1298,28 +1326,87 @@
 md_spin_unlock_irqrestore(&retry_list_lock, flags);
 }
 #undef IO_ERROR
+|||||||
+ * we read from here, no need to write
+ */
+ continue;
+ if (i < conf->raid_disks && !conf->resync_mirrors)
+ /*
+ * don't need to write this we are just rebuilding
+ */
+=======
+ * we read from here, no need to write
+ */
+ continue;
+ if (i < conf->raid_disks && mddev->in_sync)
+ /*
+ * don't need to write this we are just rebuilding
+ */
+>>>>>>>
+<<<<<<<
 #undef REDIRECT_SECTOR
 
+|||||||
+ spin_unlock_irqrestore(&retry_list_lock, flags);
+}
+
+=======
+ spin_unlock_irqrestore(&retry_list_lock, flags);
+}
+
+>>>>>>>
 
-static int init_resync (conf_t *conf)
+<<<<<<<
+ * Private kernel thread to reconstruct mirrors after an unclean
+ * shutdown.
+ */
+static void raid1syncd (void *data)
+|||||||
+ * Private kernel thread to reconstruct mirrors after an unclean
+ * shutdown.
+ */
+static void raid1syncd(void *data)
+=======
+static int init_resync(conf_t *conf)
+>>>>>>>
 {
 *** 1144,16 **** 8
 <<<<<<<
 raid1_conf_t *conf = data;
 mddev_t *mddev = conf->mddev;
+
+ if (!conf->resync_mirrors)
+ return;
+ if (mddev->recovery_running != 2)
+ return;
+ if (!md_do_sync(mddev, NULL)) {
+ /*
+ * Only if everything went Ok.
+ */
+ conf->resync_mirrors = 0;
 |||||||
 conf_t *conf = data;
 mddev_t *mddev = conf->mddev;
+
+ if (!conf->resync_mirrors)
+ return;
+ if (mddev->recovery_running != 2)
+ return;
+ if (!md_do_sync(mddev, NULL)) {
+ /*
+ * Only if everything went Ok.
+ */
+ conf->resync_mirrors = 0;
 =======
 sector_t max_sector, nr_sectors;
 int disk, partial;
->>>>>>>
 
 if (sector_nr == 0)
 if (init_resync(conf))
 return -ENOMEM;
-
+>>>>>>>
 <<<<<<<
+
 close_sync(conf);
 
 }
@@ -1481,37 +1568,11 @@
 md_sync_acct(bh->b_dev, bh->b_size/512);
 
 return (bsize >> 9);
-|||||||
- close_sync(conf);
 
-}
-
-static int init_resync(conf_t *conf)
-{
-*** 1170,9 **** 8
- sector_t max_sector, nr_sectors;
- int disk, partial;
-=======
- max_sector = mddev->sb->size << 1;
->>>>>>>
 nomem:
-<<<<<<<
 raid1_shrink_buffers(conf);
 return -ENOMEM;
 }
-|||||||
- if (!sector_nr)
- if (init_resync(conf))
- return -ENOMEM;
- /*
-=======
- if (sector_nr >= max_sector) {
- close_sync(conf);
- return 0;
- }
-
- /*
->>>>>>>
 
 static void end_sync_read(struct buffer_head *bh, int uptodate)
 {
@@ -1541,6 +1602,37 @@
 raid1_free_buf(r1_bh);
 sync_request_done(sect, mddev_to_conf(mddev));
 md_done_sync(mddev,size>>9, uptodate);
+|||||||
+
+ close_sync(conf);
+
+}
+
+static int init_resync(conf_t *conf)
+{
+*** 1170,9 **** 8
+ sector_t max_sector, nr_sectors;
+ int disk, partial;
+
+ if (!sector_nr)
+ if (init_resync(conf))
+ return -ENOMEM;
+ /*
+ * If there is non-resync activity waiting for us then
+ * put in a delay to throttle resync.
+=======
+
+ max_sector = mddev->sb->size << 1;
+ if (sector_nr >= max_sector) {
+ close_sync(conf);
+ return 0;
+ }
+
+ /*
+ * If there is non-resync activity waiting for us then
+ * put in a delay to throttle resync.
+>>>>>>>
+<<<<<<<
 }
 }
 
@@ -1590,8 +1682,37 @@
 struct mirror_info *disk;
 mdp_super_t *sb = mddev->sb;
 mdp_disk_t *descriptor;
- mdk_rdev_t *rdev;
+|||||||
+ r1_bio->sector = sector_nr;
+ r1_bio->cmd = SPECIAL;
+
+ max_sector = mddev->sb->size << 1;
+ if (sector_nr >= max_sector)
+ BUG();
+
+ bio = r1_bio->master_bio;
+ nr_sectors = RESYNC_BLOCK_SIZE >> 9;
+ if (max_sector - sector_nr < nr_sectors)
+=======
+ r1_bio->sector = sector_nr;
+ r1_bio->cmd = SPECIAL;
+
+ bio = r1_bio->master_bio;
+ nr_sectors = RESYNC_BLOCK_SIZE >> 9;
+ if (max_sector - sector_nr < nr_sectors)
+>>>>>>>
+<<<<<<<
+ mdk_rdev_t *rdev;
 struct md_list_head *tmp;
+|||||||
+ mdp_disk_t *descriptor;
+ mdk_rdev_t *rdev;
+ struct list_head *tmp;
+=======
+ mdp_disk_t *descriptor;
+ mdk_rdev_t *rdev;
+ struct list_head *tmp;
+>>>>>>>
 
 MOD_INC_USE_COUNT;
 
@@ -1768,12 +1889,30 @@
 const char * name = "raid1syncd";
 
 conf->resync_thread = md_register_thread(raid1syncd, conf,name);
+ if (!conf->resync_thread) {
+ printk(THREAD_ERROR, mdidx(mddev));
+ goto out_free_conf;
+ }
+
+ printk(START_RESYNC, mdidx(mddev));
+ conf->resync_mirrors = 1;
+ mddev->recovery_running = 2;
+ md_wakeup_thread(conf->resync_thread);
 |||||||
 if (!start_recovery && !(sb->state & (1 << MD_SB_CLEAN)) &&
 (conf->working_disks > 1)) {
 const char * name = "raid1syncd";
 
 conf->resync_thread = md_register_thread(raid1syncd, conf, name);
+ if (!conf->resync_thread) {
+ printk(THREAD_ERROR, mdidx(mddev));
+ goto out_free_conf;
+ }
+
+ printk(START_RESYNC, mdidx(mddev));
+ conf->resync_mirrors = 1;
+ mddev->recovery_running = 2;
+ md_wakeup_thread(conf->resync_thread);
 =======
 >>>>>>>
 
@@ -1840,33 +1979,26 @@
 static int raid1_restart_resync (mddev_t *mddev)
 {
 raid1_conf_t *conf = mddev_to_conf(mddev);
-|||||||
-static int stop_resync(mddev_t *mddev)
-{
- conf_t *conf = mddev_to_conf(mddev);
 
- if (conf->resync_thread) {
- if (conf->resync_mirrors) {
- md_interrupt_thread(conf->resync_thread);
-
- printk(KERN_INFO "raid1: mirror resync was not fully finished, restarting next time.\n");
- return 1;
+ if (conf->resync_mirrors) {
+ if (!conf->resync_thread) {
+ MD_BUG();
+ return 0;
 }
- return 0;
+ mddev->recovery_running = 2;
+ md_wakeup_thread(conf->resync_thread);
+ return 1;
 }
 return 0;
 }
 
-static int restart_resync(mddev_t *mddev)
-{
- conf_t *conf = mddev_to_conf(mddev);
-=======
->>>>>>>
 static int raid1_stop (mddev_t *mddev)
 {
 raid1_conf_t *conf = mddev_to_conf(mddev);
 
 md_unregister_thread(conf->thread);
+ if (conf->resync_thread)
+ md_unregister_thread(conf->resync_thread);
 raid1_shrink_r1bh(conf);
 raid1_shrink_bh(conf);
 raid1_shrink_buffers(conf);
@@ -1885,20 +2017,90 @@
 status: raid1_status,
 error_handler: raid1_error,
 diskop: raid1_diskop,
-<<<<<<<
 stop_resync: raid1_stop_resync,
 restart_resync: raid1_restart_resync,
-|||||||
- stop_resync: stop_resync,
- restart_resync: restart_resync,
-=======
->>>>>>>
 sync_request: raid1_sync_request
 };
 
 static int md__init raid1_init (void)
 {
 return register_md_personality (RAID1, &raid1_personality);
+|||||||
+ return -EIO;
+}
+
+static int stop_resync(mddev_t *mddev)
+{
+ conf_t *conf = mddev_to_conf(mddev);
+
+ if (conf->resync_thread) {
+ if (conf->resync_mirrors) {
+ md_interrupt_thread(conf->resync_thread);
+
+ printk(KERN_INFO "raid1: mirror resync was not fully finished, restarting next time.\n");
+ return 1;
+ }
+ return 0;
+ }
+ return 0;
+}
+
+static int restart_resync(mddev_t *mddev)
+{
+ conf_t *conf = mddev_to_conf(mddev);
+
+ if (conf->resync_mirrors) {
+ if (!conf->resync_thread) {
+ MD_BUG();
+ return 0;
+ }
+ mddev->recovery_running = 2;
+ md_wakeup_thread(conf->resync_thread);
+ return 1;
+ }
+ return 0;
+}
+
+static int stop(mddev_t *mddev)
+{
+ conf_t *conf = mddev_to_conf(mddev);
+ int i;
+
+ md_unregister_thread(conf->thread);
+ if (conf->resync_thread)
+ md_unregister_thread(conf->resync_thread);
+ if (conf->r1bio_pool)
+ mempool_destroy(conf->r1bio_pool);
+ for (i = 0; i < MD_SB_DISKS; i++)
+=======
+ return -EIO;
+}
+
+static int stop(mddev_t *mddev)
+{
+ conf_t *conf = mddev_to_conf(mddev);
+ int i;
+
+ md_unregister_thread(conf->thread);
+ if (conf->r1bio_pool)
+ mempool_destroy(conf->r1bio_pool);
+ for (i = 0; i < MD_SB_DISKS; i++)
+>>>>>>>
+<<<<<<<
+
+|||||||
+ status: status,
+ error_handler: error,
+ diskop: diskop,
+ stop_resync: stop_resync,
+ restart_resync: restart_resync,
+ sync_request: sync_request
+=======
+ status: status,
+ error_handler: error,
+ diskop: diskop,
+ sync_request: sync_request
+>>>>>>>
 }
 
 static void raid1_exit (void)
./linux/md-resync/merge FAILED 0.26
--- rediff 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:59.408192749 +0100
@@ -4,16 +4,16 @@
 
 +void __wait_on_freeing_inode(struct inode *inode);
 /*
- * Called with the inode lock held.
- * NOTE: we are not increasing the inode-refcount, you must call __iget()
+| * <<<--Called-->>><<<++Called++>>> <<<--with-->>><<<++with++>>> the <<<--inode-->>><<<++inode++>>> <<<--lock-->>><<<++lock++>>> <<<--held-->>><<<++held++>>>.
+| * <<<--NOTE-->>><<<++NOTE++>>>: <<<--we-->>><<<++we++>>> are not <<<--increasing-->>><<<++increasing++>>> the <<<--inode-->>><<<++inode++>>>-<<<--refcount-->>><<<++refcount++>>>, you <<<--must-->>><<<++must++>>> <<<--call-->>><<<++call++>>> <<<--__iget-->>><<<++__iget++>>>()
 @@ -492,6 +493,11 @@
- continue;
- if (!test(inode, data))
- continue;
+| <<<--continue-->>><<<++continue++>>>;
+| <<<--if-->>><<<++if++>>> (!<<<--test-->>><<<++test++>>>(<<<--inode-->>><<<++inode++>>>, <<<--data-->>><<<++data++>>>))
++ continue;
 + if (inode->i_state & (I_FREEING|I_CLEAR)) {
 + __wait_on_freeing_inode(inode);
 + tmp = head;
-+ continue;
+ continue;
 + }
 break;
 }
@@ -22,30 +22,30 @@
 continue;
 if (inode->i_sb != sb)
 continue;
-+ if (inode->i_state & (I_FREEING|I_CLEAR)) {
-+ __wait_on_freeing_inode(inode);
-+ tmp = head;
-+ continue;
-+ }
- break;
+| <<<--break-->>><<<++if (inode->i_state & (I_FREEING|I_CLEAR)) {
+| __wait_on_freeing_inode(inode);
+| tmp = head;
+| continue;
+| }
+| break++>>>;
 }
- return inode;
+| <<<--return-->>><<<++return++>>> <<<--inode-->>><<<++inode++>>>;
 @@ -949,7 +960,6 @@
 {
- struct super_operations *op = inode->i_sb->s_op;
+| <<<--struct-->>><<<++struct++>>> <<<--super_operations-->>><<<++super_operations++>>> *op = <<<--inode-->>><<<++inode++>>>-><<<--i_sb-->>><<<++i_sb++>>>-><<<--s_op-->>><<<++s_op++>>>;
 
-- list_del_init(&inode->i_hash);
- list_del_init(&inode->i_list);
- inode->i_state|=I_FREEING;
- inodes_stat.nr_inodes--;
+| <<<--list_del_init-->>><<<++list_del_init++>>>(&<<<--inode-->>><<<++inode++>>>-><<<--i_hash-->>><<<++i_list++>>>);
+| <<<--list_del_init(&-->>>inode-><<<--i_list);
+| inode->i_state-->>><<<++i_state++>>>|=<<<--I_FREEING-->>><<<++I_FREEING++>>>;
+| <<<--inodes_stat-->>><<<++inodes_stat++>>>.<<<--nr_inodes-->>><<<++nr_inodes++>>>--;
 @@ -968,6 +978,10 @@
- delete(inode);
- } else
- clear_inode(inode);
-+ spin_lock(&inode_lock);
-+ list_del_init(&inode->i_hash);
-+ spin_unlock(&inode_lock);
-+ wake_up_inode(inode);
+| <<<--delete-->>><<<++delete++>>>(<<<--inode-->>><<<++inode++>>>);
+| } <<<--else-->>><<<++else++>>>
+| <<<--clear_inode-->>><<<++clear_inode(inode);
+| spin_lock(&inode_lock);
+| list_del_init(&inode->i_hash);
+| spin_unlock(&inode_lock);
+| wake_up_inode++>>>(inode);
 if (inode->i_state != I_CLEAR)
 BUG();
 destroy_inode(inode);
./linux/inode-fullpatch/rediff FAILED 0.00
5 unresolved conflicts found
10 already-applied changes ignored
--- merge 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:59.453832929 +0100
@@ -472,6 +472,7 @@
 }
 
 void __wait_on_freeing_inode(struct inode *inode);
+<<<<<<<
 /*
 * Called with the inode lock held.
 * NOTE: we are not increasing the inode-refcount, you must call __iget()
@@ -479,6 +480,16 @@
 * add any additional branch in the common code.
 */
 static struct inode * find_inode(struct super_block * sb, struct hlist_head *head, int (*test)(struct inode *, void *), void *data)
+|||||||
+/*
+ * Called with the inode lock held.
+ * NOTE: we are not increasing the inode-refcount, you must call __iget()
+=======
+/*
+ * Called with the inode lock held.
+ * NOTE: we are not increasing the inode-refcount, you must call __iget()
+>>>>>>>
+<<<<<<<
 {
 struct hlist_node *node;
 struct inode * inode = NULL;
@@ -486,7 +497,14 @@
 hlist_for_each (node, head) {
 prefetch(node->next);
 inode = hlist_entry(node, struct inode, i_hash);
- if (inode->i_sb != sb)
+|||||||
+ continue;
+ if (!test(inode, data))
+ continue;
+ break;
+ }
+ return inode;
+=======
 continue;
 if (!test(inode, data))
 continue;
@@ -497,6 +515,15 @@
 }
 break;
 }
+ return inode;
+>>>>>>>
+<<<<<<<
+ if (inode->i_sb != sb)
+ continue;
+ if (!test(inode, data))
+ continue;
+ break;
+ }
 return node ? inode : NULL;
 }
 
@@ -516,11 +543,6 @@
 continue;
 if (inode->i_sb != sb)
 continue;
- if (inode->i_state & (I_FREEING|I_CLEAR)) {
- __wait_on_freeing_inode(inode);
- tmp = head;
- continue;
- }
 break;
 }
 return node ? inode : NULL;
@@ -950,17 +972,45 @@
 }
 
 void generic_delete_inode(struct inode *inode)
+|||||||
+ continue;
+ if (inode->i_sb != sb)
+ continue;
+ break;
+ }
+ return inode;
+=======
+ continue;
+ if (inode->i_sb != sb)
+ continue;
+ if (inode->i_state & (I_FREEING|I_CLEAR)) {
+ __wait_on_freeing_inode(inode);
+ tmp = head;
+ continue;
+ }
+ break;
+ }
+ return inode;
+>>>>>>>
 {
+<<<<<<<
 struct super_operations *op = inode->i_sb->s_op;
 
-<<<<<<<
 hlist_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
 |||||||
+ struct super_operations *op = inode->i_sb->s_op;
+
 list_del_init(&inode->i_hash);
+ list_del_init(&inode->i_list);
+ inode->i_state|=I_FREEING;
 =======
->>>>>>>
+ struct super_operations *op = inode->i_sb->s_op;
+
 list_del_init(&inode->i_list);
 inode->i_state|=I_FREEING;
+>>>>>>>
 inodes_stat.nr_inodes--;
 spin_unlock(&inode_lock);
 
@@ -1057,6 +1107,21 @@
 * zero the inode is also then freed and may be destroyed.
 */
 
+void __wait_on_freeing_inode(struct inode *inode)
+{
+ DECLARE_WAITQUEUE(wait, current);
+ wait_queue_head_t *wq = i_waitq_head(inode);
+
+ add_wait_queue(wq, &wait);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ spin_unlock(&inode_lock);
+ schedule();
+ remove_wait_queue(wq, &wait);
+ current->state = TASK_RUNNING;
+ spin_lock(&inode_lock);
+}
+
+
 void iput(struct inode *inode)
 {
 if (inode) {
@@ -1254,21 +1319,6 @@
 __set_current_state(TASK_RUNNING);
 }
 
-void __wait_on_freeing_inode(struct inode *inode)
-{
- DECLARE_WAITQUEUE(wait, current);
- wait_queue_head_t *wq = i_waitq_head(inode);
-
- add_wait_queue(wq, &wait);
- set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&inode_lock);
- schedule();
- remove_wait_queue(wq, &wait);
- current->state = TASK_RUNNING;
- spin_lock(&inode_lock);
-}
-
-
 void wake_up_inode(struct inode *inode)
 {
 wait_queue_head_t *wq = i_waitq_head(inode);
./linux/inode-fullpatch/merge FAILED 0.04
--- diff 2012-05-14 13:42:09.000000000 +0200
+++ - 2018-01-02 13:24:59.539465723 +0100
@@ -470,482 +470,490 @@
 - prune_icache(nr);
 - }
 +*** 470,6 **** 1
-| return inodes_stat.<<<--nr_unused-->>><<<++nr_inodes++>>>;
+| return <<<--inodes_stat-->>><<<++inodes_stat++>>>.<<<--nr_unused-->>><<<++nr_inodes++>>>;
 }
 
 /*
- * Called with the inode lock held.
- * NOTE: we are not increasing the inode-refcount, you must call __iget()
-- * by hand after calling find_inode now! This simplifies iunique and won't
-- * add any additional branch in the common code.
-- */
--static struct inode * find_inode(struct super_block * sb, struct hlist_head *head, int (*test)(struct inode *, void *), void *data)
--{
+| * <<<--Called-->>><<<++Called++>>> with <<<--the-->>><<<++the++>>> <<<--inode-->>><<<++inode++>>> <<<--lock-->>><<<++lock++>>> held.
>+| * <<<--NOTE-->>><<<++NOTE++>>>: <<<--we-->>><<<++we++>>> are not increasing <<<--the inode-refcount, you must call __iget() >+| * by hand after calling find_inode now! This simplifies iunique and won't >+| * add any additional branch in the common code. >+| */ >+|static struct inode * find_inode(struct super_block * sb, struct hlist_head *head, int (*test)(struct inode *, void *), void-->>><<<++the inode-refcount, you must call __iget() >+|*** 492,6 **** 2 >+| continue; >+| if (!test(inode,++>>> <<<--*-->>>data<<<++)++>>>) >+|<<<--{-->>><<<++ continue;++>>> > - struct hlist_node *node; > - struct inode * inode = NULL; > - > - hlist_for_each (node, head) { > - prefetch(node->next); >-- inode = hlist_entry(node, struct inode, i_hash); >-- if (inode->i_sb != sb) >-+*** 492,6 **** 2 >- continue; >- if (!test(inode, data)) >- continue; >- break; >- } >-| return<<<-- node ?-->>> inode<<<-- : NULL-->>>; >--} >-- >--/* >-- * find_inode_fast is the fast path version of find_inode, see the comment at >-- * iget_locked for details. >-- */ >--static struct inode * find_inode_fast(struct super_block * sb, struct hlist_head *head, unsigned long ino) >--{ >-- struct hlist_node *node; >-- struct inode * inode = NULL; >-- >-- hlist_for_each (node, head) { >-- prefetch(node->next); >-- inode = list_entry(node, struct inode, i_hash); >-- if (inode->i_ino != ino) >-+*** 517,6 **** 3 >- continue; >+|<<<-- inode = hlist_entry(node, struct inode, i_hash)-->>><<<++ break; >+| } >+| return inode; >+|*** 517,6 **** 3 >+| continue++>>>; > if (inode->i_sb != sb) > continue; >- break; >- } >-| return<<<-- node ?-->>> inode<<<-- : NULL-->>>; >--} >-- >--/** >-- * new_inode - obtain an inode >-- * @sb: superblock >-- * >-- * Allocates a new inode for given superblock. 
>-- */ >-- >--struct inode *new_inode(struct super_block *sb) >--{ >-- static unsigned long last_ino; >-- struct inode * inode; >-- >-- spin_lock_prefetch(&inode_lock); >-- >-- inode = alloc_inode(sb); >-- if (inode) { >-- spin_lock(&inode_lock); >-- inodes_stat.nr_inodes++; >-- list_add(&inode->i_list, &inode_in_use); >-- inode->i_ino = ++last_ino; >-- inode->i_state = 0; >-- spin_unlock(&inode_lock); >-- } >-- return inode; >--} >-- >--void unlock_new_inode(struct inode *inode) >--{ >-- /* >-- * This is special! We do not need the spinlock >-- * when clearing I_LOCK, because we're guaranteed >-- * that nobody else tries to do anything about the >-- * state of the inode when it is locked, as we >-- * just created it (so there can be no old holders >-- * that haven't tested I_LOCK). >-- */ >-- inode->i_state &= ~(I_LOCK|I_NEW); >-- wake_up_inode(inode); >--} >--EXPORT_SYMBOL(unlock_new_inode); >-- >--/* >-- * This is called without the inode lock held.. Be careful. >-- * >-- * We no longer cache the sb_flags in i_flags - see fs.h >-- * -- rmk@arm.uk.linux.org >-- */ >--static struct inode * get_new_inode(struct super_block *sb, struct hlist_head *head, int (*test)(struct inode *, void *), int (*set)(struct inode *, void *), void *data) >--{ >-- struct inode * inode; >-- >-- inode = alloc_inode(sb); >-- if (inode) { >-- struct inode * old; >-- >-- spin_lock(&inode_lock); >-- /* We released the lock, so.. */ >-- old = find_inode(sb, head, test, data); >-- if (!old) { >-- if (set(inode, data)) >-- goto set_failed; >-- >-- inodes_stat.nr_inodes++; >-- list_add(&inode->i_list, &inode_in_use); >-- hlist_add_head(&inode->i_hash, head); >-- inode->i_state = I_LOCK|I_NEW; >-- spin_unlock(&inode_lock); >-- >-- /* Return the locked inode with I_NEW set, the >-- * caller is responsible for filling in the contents >-- */ >-- return inode; >-- } >-- >-- /* >-- * Uhhuh, somebody else created the same inode under >-- * us. 
Use the old inode instead of the one we just >-- * allocated. >-- */ >-- __iget(old); >-- spin_unlock(&inode_lock); >-- destroy_inode(inode); >-- inode = old; >-- wait_on_inode(inode); >-- } >-- return inode; >-- >--set_failed: >-- spin_unlock(&inode_lock); >-- destroy_inode(inode); >-- return NULL; >--} >-- >--/* >-- * get_new_inode_fast is the fast path version of get_new_inode, see the >-- * comment at iget_locked for details. >-- */ >--static struct inode * get_new_inode_fast(struct super_block *sb, struct hlist_head *head, unsigned long ino) >--{ >-- struct inode * inode; >-- >-- inode = alloc_inode(sb); >-- if (inode) { >-- struct inode * old; >-- >-- spin_lock(&inode_lock); >-- /* We released the lock, so.. */ >-- old = find_inode_fast(sb, head, ino); >-- if (!old) { >-- inode->i_ino = ino; >-- inodes_stat.nr_inodes++; >-- list_add(&inode->i_list, &inode_in_use); >-- hlist_add_head(&inode->i_hash, head); >-- inode->i_state = I_LOCK|I_NEW; >-- spin_unlock(&inode_lock); >-- >-- /* Return the locked inode with I_NEW set, the >-- * caller is responsible for filling in the contents >-- */ >-- return inode; >-- } >-- >-- /* >-- * Uhhuh, somebody else created the same inode under >-- * us. Use the old inode instead of the one we just >-- * allocated. >-- */ >-- __iget(old); >-- spin_unlock(&inode_lock); >-- destroy_inode(inode); >-- inode = old; >-- wait_on_inode(inode); >-- } >-- return inode; >--} >-- >--static inline unsigned long hash(struct super_block *sb, unsigned long hashval) >--{ >-- unsigned long tmp = hashval + ((unsigned long) sb / L1_CACHE_BYTES); >-- tmp = tmp + (tmp >> I_HASHBITS); >-- return tmp & I_HASHMASK; >--} >-- >--/* Yeah, I know about quadratic hash. Maybe, later. */ >-- >--/** >-- * iunique - get a unique inode number >-- * @sb: superblock >-- * @max_reserved: highest reserved inode number >-- * >-- * Obtain an inode number that is unique on the system for a given >-- * superblock. 
This is used by file systems that have no natural >-- * permanent inode numbering system. An inode number is returned that >-- * is higher than the reserved limit but unique. >-- * >-- * BUGS: >-- * With a large number of inodes live on the file system this function >-- * currently becomes quite slow. >-- */ >-- >--ino_t iunique(struct super_block *sb, ino_t max_reserved) >--{ >-- static ino_t counter = 0; >-- struct inode *inode; >-- struct hlist_head * head; >-- ino_t res; >-- spin_lock(&inode_lock); >--retry: >-- if (counter > max_reserved) { >-- head = inode_hashtable + hash(sb,counter); >-- res = counter++; >-- inode = find_inode_fast(sb, head, res); >-- if (!inode) { >-- spin_unlock(&inode_lock); >-- return res; >-- } >-- } else { >-- counter = max_reserved + 1; >-- } >-- goto retry; >-- >--} >-- >--struct inode *igrab(struct inode *inode) >--{ >-- spin_lock(&inode_lock); >-- if (!(inode->i_state & I_FREEING)) >-- __iget(inode); >-- else >-- /* >-- * Handle the case where s_op->clear_inode is not been >-- * called yet, and somebody is calling igrab >-- * while the inode is getting freed. >-- */ >-- inode = NULL; >-- spin_unlock(&inode_lock); >-- return inode; >--} >-- >--/** >-- * ifind - internal function, you want ilookup5() or iget5(). >-- * @sb: super block of file system to search >-- * @hashval: hash value (usually inode number) to search for >-- * @test: callback used for comparisons between inodes >-- * @data: opaque data pointer to pass to @test >-- * >-- * ifind() searches for the inode specified by @hashval and @data in the inode >-- * cache. This is a generalized version of ifind_fast() for file systems where >-- * the inode number is not sufficient for unique identification of an inode. >-- * >-- * If the inode is in the cache, the inode is returned with an incremented >-- * reference count. >-- * >-- * Otherwise NULL is returned. >-- * >-- * Note, @test is called with the inode_lock held, so can't sleep. 
>-- */ >--static inline struct inode *ifind(struct super_block *sb, >-- struct hlist_head *head, int (*test)(struct inode *, void *), >-- void *data) >--{ >-- struct inode *inode; >-- >-- spin_lock(&inode_lock); >-- inode = find_inode(sb, head, test, data); >-- if (inode) { >-- __iget(inode); >-- spin_unlock(&inode_lock); >-- wait_on_inode(inode); >-- return inode; >-- } >-- spin_unlock(&inode_lock); >-- return NULL; >--} >-- >--/** >-- * ifind_fast - internal function, you want ilookup() or iget(). >-- * @sb: super block of file system to search >-- * @ino: inode number to search for >-- * >-- * ifind_fast() searches for the inode @ino in the inode cache. This is for >-- * file systems where the inode number is sufficient for unique identification >-- * of an inode. >-- * >-- * If the inode is in the cache, the inode is returned with an incremented >-- * reference count. >-- * >-- * Otherwise NULL is returned. >-- */ >--static inline struct inode *ifind_fast(struct super_block *sb, >-- struct hlist_head *head, unsigned long ino) >--{ >-- struct inode *inode; >-- >-- spin_lock(&inode_lock); >-- inode = find_inode_fast(sb, head, ino); >-- if (inode) { >-- __iget(inode); >-- spin_unlock(&inode_lock); >-- wait_on_inode(inode); >-- return inode; >-- } >-- spin_unlock(&inode_lock); >-- return NULL; >--} >-- >--/** >-- * ilookup5 - search for an inode in the inode cache >-- * @sb: super block of file system to search >-- * @hashval: hash value (usually inode number) to search for >-- * @test: callback used for comparisons between inodes >-- * @data: opaque data pointer to pass to @test >-- * >-- * ilookup5() uses ifind() to search for the inode specified by @hashval and >-- * @data in the inode cache. This is a generalized version of ilookup() for >-- * file systems where the inode number is not sufficient for unique >-- * identification of an inode. >-- * >-- * If the inode is in the cache, the inode is returned with an incremented >-- * reference count. 
>-- * >-- * Otherwise NULL is returned. >-- * >-- * Note, @test is called with the inode_lock held, so can't sleep. >-- */ >--struct inode *ilookup5(struct super_block *sb, unsigned long hashval, >-- int (*test)(struct inode *, void *), void *data) >--{ >-- struct hlist_head *head = inode_hashtable + hash(sb, hashval); >-- >-- return ifind(sb, head, test, data); >--} >--EXPORT_SYMBOL(ilookup5); >-- >--/** >-- * ilookup - search for an inode in the inode cache >-- * @sb: super block of file system to search >-- * @ino: inode number to search for >-- * >-- * ilookup() uses ifind_fast() to search for the inode @ino in the inode cache. >-- * This is for file systems where the inode number is sufficient for unique >-- * identification of an inode. >-- * >-- * If the inode is in the cache, the inode is returned with an incremented >-- * reference count. >-- * >-- * Otherwise NULL is returned. >-- */ >--struct inode *ilookup(struct super_block *sb, unsigned long ino) >--{ >-- struct hlist_head *head = inode_hashtable + hash(sb, ino); >-- >-- return ifind_fast(sb, head, ino); >--} >--EXPORT_SYMBOL(ilookup); >-- >--/** >-- * iget5_locked - obtain an inode from a mounted file system >-- * @sb: super block of file system >-- * @hashval: hash value (usually inode number) to get >-- * @test: callback used for comparisons between inodes >-- * @set: callback used to initialize a new struct inode >-- * @data: opaque data pointer to pass to @test and @set >-- * >-- * This is iget() without the read_inode() portion of get_new_inode(). >-- * >-- * iget5_locked() uses ifind() to search for the inode specified by @hashval >-- * and @data in the inode cache and if present it is returned with an increased >-- * reference count. This is a generalized version of iget_locked() for file >-- * systems where the inode number is not sufficient for unique identification >-- * of an inode. 
>-- * >-- * If the inode is not in cache, get_new_inode() is called to allocate a new >-- * inode and this is returned locked, hashed, and with the I_NEW flag set. The >-- * file system gets to fill it in before unlocking it via unlock_new_inode(). >-- * >-- * Note both @test and @set are called with the inode_lock held, so can't sleep. >-- */ >--struct inode *iget5_locked(struct super_block *sb, unsigned long hashval, >-- int (*test)(struct inode *, void *), >-- int (*set)(struct inode *, void *), void *data) >--{ >-- struct hlist_head *head = inode_hashtable + hash(sb, hashval); >-- struct inode *inode; >-- >-- inode = ifind(sb, head, test, data); >-- if (inode) >-- return inode; >-- /* >-- * get_new_inode() will do the right thing, re-trying the search >-- * in case it had to block at any point. >-- */ >-- return get_new_inode(sb, head, test, set, data); >--} >--EXPORT_SYMBOL(iget5_locked); >-- >--/** >-- * iget_locked - obtain an inode from a mounted file system >-- * @sb: super block of file system >-- * @ino: inode number to get >-- * >-- * This is iget() without the read_inode() portion of get_new_inode_fast(). >-- * >-- * iget_locked() uses ifind_fast() to search for the inode specified by @ino in >-- * the inode cache and if present it is returned with an increased reference >-- * count. This is for file systems where the inode number is sufficient for >-- * unique identification of an inode. >-- * >-- * If the inode is not in cache, get_new_inode_fast() is called to allocate a >-- * new inode and this is returned locked, hashed, and with the I_NEW flag set. >-- * The file system gets to fill it in before unlocking it via >-- * unlock_new_inode(). 
>-- */ >--struct inode *iget_locked(struct super_block *sb, unsigned long ino) >--{ >-- struct hlist_head *head = inode_hashtable + hash(sb, ino); >-- struct inode *inode; >-- >-- inode = ifind_fast(sb, head, ino); >-- if (inode) >-- return inode; >-- /* >-- * get_new_inode_fast() will do the right thing, re-trying the search >-- * in case it had to block at any point. >-- */ >-- return get_new_inode_fast(sb, head, ino); >--} >--EXPORT_SYMBOL(iget_locked); >-- >--/** >-- * __insert_inode_hash - hash an inode >-- * @inode: unhashed inode >-- * @hashval: unsigned long value used to locate this object in the >-- * inode_hashtable. >-- * >-- * Add an inode to the inode hash for this superblock. If the inode >-- * has no superblock it is added to a separate anonymous chain. >-- */ >-- >--void __insert_inode_hash(struct inode *inode, unsigned long hashval) >--{ >-- struct hlist_head *head = &anon_hash_chain; >-- if (inode->i_sb) >-- head = inode_hashtable + hash(inode->i_sb, hashval); >-- spin_lock(&inode_lock); >-- hlist_add_head(&inode->i_hash, head); >-- spin_unlock(&inode_lock); >--} >-- >--/** >-- * remove_inode_hash - remove an inode from the hash >-- * @inode: inode to unhash >-- * >-- * Remove an inode from the superblock or anonymous hash. >-- */ >-- >--void remove_inode_hash(struct inode *inode) >--{ >-- spin_lock(&inode_lock); >-- hlist_del_init(&inode->i_hash); >-- spin_unlock(&inode_lock); >--} >-- >--void generic_delete_inode(struct inode *inode) >-+*** 949,7 **** 4 >- { >- struct super_operations *op = inode->i_sb->s_op; >+| <<<--if (!test(inode, data)) >+| continue; >+| break; >+| } >+| return node ? inode : NULL; >+|} >+| >+|/* >+| * find_inode_fast is the fast path version of find_inode, see the comment at >+| * iget_locked for details. 
>+| */ >+|static struct inode * find_inode_fast(struct super_block * sb, struct hlist_head *head, unsigned long ino) >+|{ >+| struct hlist_node *node; >+| struct inode * inode = NULL; >+| >+| hlist_for_each (node, head) { >+| prefetch(node->next); >+| inode = list_entry(node, struct inode, i_hash); >+| if (inode->i_ino != ino) >+| continue; >+| if (inode->i_sb != sb) >+| continue; >+| break; >+| } >+| return node ? inode : NULL; >+|} >+| >+|/** >+| * new_inode - obtain an inode >+| * @sb: superblock >+| * >+| * Allocates a new inode for given superblock. >+| */ >+| >+|struct inode *new_inode(struct super_block *sb) >+|{ >+| static unsigned long last_ino; >+| struct inode * inode; >+| >+| spin_lock_prefetch(&inode_lock); >+| >+| inode = alloc_inode(sb); >+| if (inode) { >+| spin_lock(&inode_lock); >+| inodes_stat.nr_inodes++; >+| list_add(&inode->i_list, &inode_in_use); >+| inode->i_ino = ++last_ino; >+| inode->i_state = 0; >+| spin_unlock(&inode_lock); >+| } >+| return inode; >+|} >+| >+|void unlock_new_inode(struct inode *inode) >+|{ >+| /* >+| * This is special! We do not need the spinlock >+| * when clearing I_LOCK, because we're guaranteed >+| * that nobody else tries to do anything about the >+| * state of the inode when it is locked, as we >+| * just created it (so there can be no old holders >+| * that haven't tested I_LOCK). >+| */ >+| inode->i_state &= ~(I_LOCK|I_NEW); >+| wake_up_inode(inode); >+|} >+|EXPORT_SYMBOL(unlock_new_inode); >+| >+|/* >+| * This is called without the inode lock held.. Be careful. >+| * >+| * We no longer cache the sb_flags in i_flags - see fs.h >+| * -- rmk@arm.uk.linux.org >+| */ >+|static struct inode * get_new_inode(struct super_block *sb, struct hlist_head *head, int (*test)(struct inode *, void *), int (*set)(struct inode *, void *), void *data) >+|{ >+| struct inode * inode; >+| >+| inode = alloc_inode(sb); >+| if (inode) { >+| struct inode * old; >+| >+| spin_lock(&inode_lock); >+| /* We released the lock, so.. 
*/ >+| old = find_inode(sb, head, test, data); >+| if (!old) { >+| if (set(inode, data)) >+| goto set_failed; >+| >+| inodes_stat.nr_inodes++; >+| list_add(&inode->i_list, &inode_in_use); >+| hlist_add_head(&inode->i_hash, head); >+| inode->i_state = I_LOCK|I_NEW; >+| spin_unlock(&inode_lock); >+| >+| /* Return the locked inode with I_NEW set, the >+| * caller is responsible for filling in the contents >+| */ >+| return inode; >+| } >+| >+| /* >+| * Uhhuh, somebody else created the same inode under >+| * us. Use the old inode instead of the one we just >+| * allocated. >+| */ >+| __iget(old); >+| spin_unlock(&inode_lock); >+| destroy_inode(inode); >+| inode = old; >+| wait_on_inode(inode); >+| } >+| return inode; >+| >+|set_failed: >+| spin_unlock(&inode_lock); >+| destroy_inode(inode); >+| return NULL; >+|} >+| >+|/* >+| * get_new_inode_fast is the fast path version of get_new_inode, see the >+| * comment at iget_locked for details. >+| */ >+|static struct inode * get_new_inode_fast(struct super_block *sb, struct hlist_head *head, unsigned long ino) >+|{ >+| struct inode * inode; >+| >+| inode = alloc_inode(sb); >+| if (inode) { >+| struct inode * old; >+| >+| spin_lock(&inode_lock); >+| /* We released the lock, so.. */ >+| old = find_inode_fast(sb, head, ino); >+| if (!old) { >+| inode->i_ino = ino; >+| inodes_stat.nr_inodes++; >+| list_add(&inode->i_list, &inode_in_use); >+| hlist_add_head(&inode->i_hash, head); >+| inode->i_state = I_LOCK|I_NEW; >+| spin_unlock(&inode_lock); >+| >+| /* Return the locked inode with I_NEW set, the >+| * caller is responsible for filling in the contents >+| */ >+| return inode; >+| } >+| >+| /* >+| * Uhhuh, somebody else created the same inode under >+| * us. Use the old inode instead of the one we just >+| * allocated. 
>+| */ >+| __iget(old); >+| spin_unlock(&inode_lock); >+| destroy_inode(inode); >+| inode = old; >+| wait_on_inode(inode); >+| } >+| return inode; >+|} >+| >+|static inline unsigned long hash(struct super_block *sb, unsigned long hashval) >+|{ >+| unsigned long tmp = hashval + ((unsigned long) sb / L1_CACHE_BYTES); >+| tmp = tmp + (tmp >> I_HASHBITS); >+| return tmp & I_HASHMASK; >+|} >+| >+|/* Yeah, I know about quadratic hash. Maybe, later. */ >+| >+|/** >+| * iunique - get a unique inode number >+| * @sb: superblock >+| * @max_reserved: highest reserved inode number >+| * >+| * Obtain an inode number that is unique on the system for a given >+| * superblock. This is used by file systems that have no natural >+| * permanent inode numbering system. An inode number is returned that >+| * is higher than the reserved limit but unique. >+| * >+| * BUGS: >+| * With a large number of inodes live on the file system this function >+| * currently becomes quite slow. >+| */ >+| >+|ino_t iunique(struct super_block *sb, ino_t max_reserved) >+|{ >+| static ino_t counter = 0; >+| struct inode *inode; >+| struct hlist_head * head; >+| ino_t res; >+| spin_lock(&inode_lock); >+|retry: >+| if (counter > max_reserved) { >+| head = inode_hashtable + hash(sb,counter); >+| res = counter++; >+| inode = find_inode_fast(sb, head, res); >+| if (!inode) { >+| spin_unlock(&inode_lock); >+| return res; >+| } >+| } else { >+| counter = max_reserved + 1; >+| } >+| goto retry; >+| >+|} >+| >+|struct inode *igrab(struct inode *inode) >+|{ >+| spin_lock(&inode_lock); >+| if (!(inode->i_state & I_FREEING)) >+| __iget(inode); >+| else >+| /* >+| * Handle the case where s_op->clear_inode is not been >+| * called yet, and somebody is calling igrab >+| * while the inode is getting freed. >+| */ >+| inode = NULL; >+| spin_unlock(&inode_lock); >+| return inode; >+|} >+| >+|/** >+| * ifind - internal function, you want ilookup5() or iget5(). 
>+| * @sb: super block of file system to search >+| * @hashval: hash value (usually inode number) to search for >+| * @test: callback used for comparisons between inodes >+| * @data: opaque data pointer to pass to @test >+| * >+| * ifind() searches for the inode specified by @hashval and @data in the inode >+| * cache. This is a generalized version of ifind_fast() for file systems where >+| * the inode number is not sufficient for unique identification of an inode. >+| * >+| * If the inode is in the cache, the inode is returned with an incremented >+| * reference count. >+| * >+| * Otherwise NULL is returned. >+| * >+| * Note, @test is called with the inode_lock held, so can't sleep. >+| */ >+|static inline struct inode *ifind(struct super_block *sb, >+| struct hlist_head *head, int (*test)(struct inode *, void *), >+| void *data) >+|{ >+| struct inode *inode; >+| >+| spin_lock(&inode_lock); >+| inode = find_inode(sb, head, test, data); >+| if (inode) { >+| __iget(inode); >+| spin_unlock(&inode_lock); >+| wait_on_inode(inode); >+| return inode; >+| } >+| spin_unlock(&inode_lock); >+| return NULL; >+|} >+| >+|/** >+| * ifind_fast - internal function, you want ilookup() or iget(). >+| * @sb: super block of file system to search >+| * @ino: inode number to search for >+| * >+| * ifind_fast() searches for the inode @ino in the inode cache. This is for >+| * file systems where the inode number is sufficient for unique identification >+| * of an inode. >+| * >+| * If the inode is in the cache, the inode is returned with an incremented >+| * reference count. >+| * >+| * Otherwise NULL is returned. 
>+| */ >+|static inline struct inode *ifind_fast(struct super_block *sb, >+| struct hlist_head *head, unsigned long ino) >+|{ >+| struct inode *inode; >+| >+| spin_lock(&inode_lock); >+| inode = find_inode_fast(sb, head, ino); >+| if (inode) { >+| __iget(inode); >+| spin_unlock(&inode_lock); >+| wait_on_inode(inode); >+| return inode; >+| } >+| spin_unlock(&inode_lock); >+| return NULL; >+|} >+| >+|/** >+| * ilookup5 - search for an inode in the inode cache >+| * @sb: super block of file system to search >+| * @hashval: hash value (usually inode number) to search for >+| * @test: callback used for comparisons between inodes >+| * @data: opaque data pointer to pass to @test >+| * >+| * ilookup5() uses ifind() to search for the inode specified by @hashval and >+| * @data in the inode cache. This is a generalized version of ilookup() for >+| * file systems where the inode number is not sufficient for unique >+| * identification of an inode. >+| * >+| * If the inode is in the cache, the inode is returned with an incremented >+| * reference count. >+| * >+| * Otherwise NULL is returned. >+| * >+| * Note, @test is called with the inode_lock held, so can't sleep. >+| */ >+|struct inode *ilookup5(struct super_block *sb, unsigned long hashval, >+| int (*test)(struct inode *, void *), void *data) >+|{ >+| struct hlist_head *head = inode_hashtable + hash(sb, hashval); >+| >+| return ifind(sb, head, test, data); >+|} >+|EXPORT_SYMBOL(ilookup5); >+| >+|/** >+| * ilookup - search for an inode in the inode cache >+| * @sb: super block of file system to search >+| * @ino: inode number to search for >+| * >+| * ilookup() uses ifind_fast() to search for the inode @ino in the inode cache. >+| * This is for file systems where the inode number is sufficient for unique >+| * identification of an inode. >+| * >+| * If the inode is in the cache, the inode is returned with an incremented >+| * reference count. >+| * >+| * Otherwise NULL is returned. 
>+| */ >+|struct inode *ilookup(struct super_block *sb, unsigned long ino) >+|{ >+| struct hlist_head *head = inode_hashtable + hash(sb, ino); >+| >+| return ifind_fast(sb, head, ino); >+|} >+|EXPORT_SYMBOL(ilookup); >+| >+|/** >+| * iget5_locked - obtain an inode from a mounted file system >+| * @sb: super block of file system >+| * @hashval: hash value (usually inode number) to get >+| * @test: callback used for comparisons between inodes >+| * @set: callback used to initialize a new struct inode >+| * @data: opaque data pointer to pass to @test and @set >+| * >+| * This is iget() without the read_inode() portion of get_new_inode(). >+| * >+| * iget5_locked() uses ifind() to search for the inode specified by @hashval >+| * and @data in the inode cache and if present it is returned with an increased >+| * reference count. This is a generalized version of iget_locked() for file >+| * systems where the inode number is not sufficient for unique identification >+| * of an inode. >+| * >+| * If the inode is not in cache, get_new_inode() is called to allocate a new >+| * inode and this is returned locked, hashed, and with the I_NEW flag set. The >+| * file system gets to fill it in before unlocking it via unlock_new_inode(). >+| * >+| * Note both @test and @set are called with the inode_lock held, so can't sleep. >+| */ >+|struct inode *iget5_locked(struct super_block *sb, unsigned long hashval, >+| int (*test)(struct inode *, void *), >+| int (*set)(struct inode *, void *), void *data) >+|{ >+| struct hlist_head *head = inode_hashtable + hash(sb, hashval); >+| struct inode *inode; >+| >+| inode = ifind(sb, head, test, data); >+| if (inode) >+| return inode; >+| /* >+| * get_new_inode() will do the right thing, re-trying the search >+| * in case it had to block at any point. 
>+| */ >+| return get_new_inode(sb, head, test, set, data); >+|} >+|EXPORT_SYMBOL(iget5_locked); >+| >+|/** >+| * iget_locked - obtain an inode from a mounted file system >+| * @sb: super block of file system >+| * @ino: inode number to get >+| * >+| * This is iget() without the read_inode() portion of get_new_inode_fast(). >+| * >+| * iget_locked() uses ifind_fast() to search for the inode specified by @ino in >+| * the inode cache and if present it is returned with an increased reference >+| * count. This is for file systems where the inode number is sufficient for >+| * unique identification of an inode. >+| * >+| * If the inode is not in cache, get_new_inode_fast() is called to allocate a >+| * new inode and this is returned locked, hashed, and with the I_NEW flag set. >+| * The file system gets to fill it in before unlocking it via >+| * unlock_new_inode(). >+| */ >+|struct inode *iget_locked(struct super_block *sb, unsigned long ino) >+|{ >+| struct hlist_head *head = inode_hashtable + hash(sb, ino); >+| struct inode *inode; >+| >+| inode = ifind_fast(sb, head, ino); >+| if (inode) >+| return inode; >+| /* >+| * get_new_inode_fast() will do the right thing, re-trying the search >+| * in case it had to block at any point. >+| */ >+| return get_new_inode_fast(sb, head, ino); >+|} >+|EXPORT_SYMBOL(iget_locked); >+| >+|/** >+| * __insert_inode_hash - hash an inode >+| * @inode: unhashed inode >+| * @hashval: unsigned long value used to locate this object in the >+| * inode_hashtable. >+| * >+| * Add an inode to the inode hash for this superblock. If the inode >+| * has no superblock it is added to a separate anonymous chain. 
>+| */ >+| >+|void __insert_inode_hash(struct inode *inode, unsigned long hashval) >+|{ >+| struct hlist_head *head = &anon_hash_chain; >+| if (inode->i_sb) >+| head = inode_hashtable + hash(inode->i_sb, hashval); >+| spin_lock(&inode_lock); >+| hlist_add_head(&inode->i_hash, head); >+| spin_unlock(&inode_lock); >+|} >+| >+|/** >+| * remove_inode_hash - remove an inode from the hash >+| * @inode: inode to unhash >+| * >+| * Remove an inode from the superblock or anonymous hash. >+| */ >+| >+|void remove_inode_hash(struct inode *inode) >+|{ >+| spin_lock(&inode_lock); >+| hlist_del_init(&inode->i_hash); >+| spin_unlock(&inode_lock); >+|} >+| >+|void generic_delete_inode(struct inode *inode) >+|-->>><<<++break; >+| } >+| return inode; >+|*** 949,7 **** 4 >+|++>>>{ >+| <<<--struct-->>><<<++struct++>>> <<<--super_operations-->>><<<++super_operations++>>> *op = <<<--inode-->>><<<++inode++>>>-><<<--i_sb-->>><<<++i_sb++>>>-><<<--s_op-->>><<<++s_op++>>>; > > | <<<--hlist_del_init-->>><<<++list_del_init++>>>(&inode->i_hash); > list_del_init(&inode->i_list); >@@ -1040,12 +1048,13 @@ > - * @inode: inode to put > - * > - * Puts an inode, dropping its usage count. If the inode use count hits >-- * zero the inode is also then freed and may be destroyed. 
>-- */ >-- >--void iput(struct inode *inode) >--{ >-- if (inode) { >+|<<<-- * zero the inode is also then freed and may be destroyed.-->>><<<++*** 1219,6 **** 6 >+| current->state = TASK_RUNNING;++>>> >+|<<<-- */-->>><<<++}++>>> >+|<<<-- -->>> >+|void <<<--iput-->>><<<++wake_up_inode++>>>(<<<--struct-->>><<<++struct++>>> <<<--inode-->>><<<++inode++>>> *<<<--inode-->>><<<++inode++>>>) >+ { >+| <<<--if-->>><<<++wait_queue_head_t++>>> <<<++*wq = i_waitq_head++>>>(inode)<<<-- {-->>><<<++;++>>> > - struct super_operations *op = inode->i_sb->s_op; > - > - if (inode->i_state == I_CLEAR) >@@ -1237,13 +1246,12 @@ > - goto repeat; > - } > - remove_wait_queue(wq, &wait); >-|<<<-- __set_current_state(-->>><<<++*** 1219,6 **** 6 >-| current->state = ++>>>TASK_RUNNING<<<--)-->>>; >- } >- >- void wake_up_inode(struct inode *inode) >- { >- wait_queue_head_t *wq = i_waitq_head(inode); >+- __set_current_state(TASK_RUNNING); >+-} >+- >+-void wake_up_inode(struct inode *inode) >+-{ >+- wait_queue_head_t *wq = i_waitq_head(inode); > - > - /* > - * Prevent speculative execution through spin_unlock(&inode_lock); >./linux/inode-fullpatch/diff FAILED 0.04 >5 unresolved conflicts found >10 already-applied changes ignored >--- wmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:24:59.654915023 +0100 >@@ -474,27 +474,39 @@ > void __wait_on_freeing_inode(struct inode *inode); > /* > * Called with the inode lock held. >- * NOTE: we are not increasing the inode-refcount, you must call __iget() >+ * NOTE: we are not increasing <<<---the inode-refcount, you must call __iget() > * by hand after calling find_inode now! This simplifies iunique and won't > * add any additional branch in the common code. 
> */ > static struct inode * find_inode(struct super_block * sb, struct hlist_head *head, int (*test)(struct inode *, void *), void *data) >-{ >- struct hlist_node *node; >+|||the inode-refcount, you must call __iget() >+===the inode-refcount, you must call __iget() >+--->>><<<---{||| continue; >+ if (!test(inode, data)) >+ continue;=== continue; >+ if (!test(inode, data)) >+ continue; >+ if (inode->i_state & (I_FREEING|I_CLEAR)) { >+ __wait_on_freeing_inode(inode); >+ tmp = head; >+ continue;--->>> >+<<<--- struct hlist_node *node; > struct inode * inode = NULL; > > hlist_for_each (node, head) { > prefetch(node->next); > inode = hlist_entry(node, struct inode, i_hash); >- if (inode->i_sb != sb) >+||| break; >+ } >+ return inode; >+=== } >+ break; >+ } >+ return inode; >+--->>> if (inode->i_sb != sb) > continue; >- if (!test(inode, data)) >+ if <<<---(!test(inode, data)) > continue; >- if (inode->i_state & (I_FREEING|I_CLEAR)) { >- __wait_on_freeing_inode(inode); >- tmp = head; >- continue; >- } > break; > } > return node ? inode : NULL; >@@ -516,11 +528,6 @@ > continue; > if (inode->i_sb != sb) > continue; >- if (inode->i_state & (I_FREEING|I_CLEAR)) { >- __wait_on_freeing_inode(inode); >- tmp = head; >- continue; >- } > break; > } > return node ? inode : NULL; >@@ -950,10 +957,21 @@ > } > > void generic_delete_inode(struct inode *inode) >-{ >+|||break; >+ } >+ return inode; >+===if (inode->i_state & (I_FREEING|I_CLEAR)) { >+ __wait_on_freeing_inode(inode); >+ tmp = head; >+ continue; >+ } >+ break; >+ } >+ return inode; >+--->>>{ > struct super_operations *op = inode->i_sb->s_op; > >-<<<---hlist_del_init|||list_del_init===--->>> list_del_init(&inode->i_list); >+ <<<---hlist_del_init|||list_del_init===list_del_init--->>>(&inode->i_list); > inode->i_state|=I_FREEING; > inodes_stat.nr_inodes--; > spin_unlock(&inode_lock); >@@ -1051,6 +1069,21 @@ > * zero the inode is also then freed and may be destroyed. 
> */ > >+void __wait_on_freeing_inode(struct inode *inode) >+{ >+ DECLARE_WAITQUEUE(wait, current); >+ wait_queue_head_t *wq = i_waitq_head(inode); >+ >+ add_wait_queue(wq, &wait); >+ set_current_state(TASK_UNINTERRUPTIBLE); >+ spin_unlock(&inode_lock); >+ schedule(); >+ remove_wait_queue(wq, &wait); >+ current->state = TASK_RUNNING; >+ spin_lock(&inode_lock); >+} >+ >+ > void iput(struct inode *inode) > { > if (inode) { >@@ -1248,21 +1281,6 @@ > __set_current_state(TASK_RUNNING); > } > >-void __wait_on_freeing_inode(struct inode *inode) >-{ >- DECLARE_WAITQUEUE(wait, current); >- wait_queue_head_t *wq = i_waitq_head(inode); >- >- add_wait_queue(wq, &wait); >- set_current_state(TASK_UNINTERRUPTIBLE); >- spin_unlock(&inode_lock); >- schedule(); >- remove_wait_queue(wq, &wait); >- current->state = TASK_RUNNING; >- spin_lock(&inode_lock); >-} >- >- > void wake_up_inode(struct inode *inode) > { > wait_queue_head_t *wq = i_waitq_head(inode); >./linux/inode-fullpatch/wmerge FAILED 0.04 >8 unresolved conflicts found >6 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:24:59.741859541 +0100 >@@ -542,12 +542,26 @@ > > if (conf->pending_bio_list.head) { > struct bio *bio; >+<<<<<<< >+ bio = bio_list_get(&conf->pending_bio_list); >+ spin_unlock_irq(&conf->device_lock); >+ /* flush any pending bitmap writes to >+ * disk before proceeding w/ I/O */ >+||||||| > bio = bio_list_get(&conf->pending_bio_list); >+ blk_remove_plug(conf->mddev->queue); >+ spin_unlock_irq(&conf->device_lock); >+ /* flush any pending bitmap writes to >+ * disk before proceeding w/ I/O */ >+======= >+ bio = bio_list_get(&conf->pending_bio_list); >+ blk_remove_plug(conf->mddev->queue); > conf->pending_count = 0; > spin_unlock_irq(&conf->device_lock); > wake_up(&conf->wait_barrier); > /* flush any pending bitmap writes to > * disk before proceeding w/ I/O */ >+>>>>>>> > bitmap_unplug(conf->mddev->bitmap); > > while (bio) { /* submit pending writes */ >@@ 
-712,16 +726,6 @@ > struct bitmap *bitmap; > <<<<<<< > unsigned long flags; >-||||||| >- unsigned long flags; >- struct bio_list bl; >- struct page **behind_pages = NULL; >-======= >- unsigned long flags; >- struct bio_list bl; >- int bl_count; >- struct page **behind_pages = NULL; >->>>>>>> > const int rw = bio_data_dir(bio); > const unsigned long do_sync = (bio->bi_rw & REQ_SYNC); > const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA)); >@@ -811,6 +815,22 @@ > return 0; > } > >+||||||| >+ struct bitmap *bitmap; >+ unsigned long flags; >+ struct bio_list bl; >+ struct page **behind_pages = NULL; >+ const int rw = bio_data_dir(bio); >+ const bool do_sync = bio_rw_flagged(bio, BIO_RW_SYNCIO); >+======= >+ struct bitmap *bitmap; >+ unsigned long flags; >+ struct bio_list bl; >+ int bl_count; >+ struct page **behind_pages = NULL; >+ const int rw = bio_data_dir(bio); >+ const bool do_sync = bio_rw_flagged(bio, BIO_RW_SYNCIO); >+>>>>>>> > /* > * WRITE: > */ >@@ -887,7 +907,6 @@ > > bitmap_startwrite(bitmap, bio->bi_sector, r1_bio->sectors, > test_bit(R1BIO_BehindIO, &r1_bio->state)); >- bl_count = 0; > for (i = 0; i < disks; i++) { > struct bio *mbio; > if (!r1_bio->bios[i]) >@@ -917,41 +936,13 @@ > bvec->bv_page = r1_bio->behind_pages[j]; > if (test_bit(WriteMostly, &conf->mirrors[i].rdev->flags)) > atomic_inc(&r1_bio->behind_remaining); >-<<<<<<< > } > > atomic_inc(&r1_bio->remaining); >-||||||| >- bio_list_add(&bl, mbio); >- } >- kfree(behind_pages); /* the behind pages are attached to the bios now */ >- >-======= >- bio_list_add(&bl, mbio); >- bl_count++; >- } >- kfree(behind_pages); /* the behind pages are attached to the bios now */ >- >->>>>>>> >-<<<<<<< > spin_lock_irqsave(&conf->device_lock, flags); > bio_list_add(&conf->pending_bio_list, mbio); > spin_unlock_irqrestore(&conf->device_lock, flags); > } >-||||||| >- test_bit(R1BIO_BehindIO, &r1_bio->state)); >- spin_lock_irqsave(&conf->device_lock, flags); >- 
bio_list_merge(&conf->pending_bio_list, &bl); >- bio_list_init(&bl); >- >-======= >- test_bit(R1BIO_BehindIO, &r1_bio->state)); >- spin_lock_irqsave(&conf->device_lock, flags); >- bio_list_merge(&conf->pending_bio_list, &bl); >- conf->pending_count += bl_count; >- bio_list_init(&bl); >- >->>>>>>> > r1_bio_write_done(r1_bio); > > /* In case raid1d snuck in to freeze_array */ >@@ -1443,6 +1434,7 @@ > /* > * schedule writes > */ >+<<<<<<< > atomic_set(&r1_bio->remaining, 1); > for (i = 0; i < disks ; i++) { > wbio = r1_bio->bios[i]; >@@ -1464,6 +1456,31 @@ > /* if we're here, all write(s) have completed, so clean up */ > md_done_sync(mddev, r1_bio->sectors, 1); > put_buf(r1_bio); >+||||||| >+ bio_list_init(&bl); >+ for (i = 0; i < disks; i++) { >+ struct bio *mbio; >+ if (!r1_bio->bios[i]) >+======= >+ bio_list_init(&bl); >+ bl_count = 0; >+ for (i = 0; i < disks; i++) { >+ struct bio *mbio; >+ if (!r1_bio->bios[i]) >+>>>>>>> >+<<<<<<< >+); >+||||||| >+ atomic_inc(&r1_bio->remaining); >+ >+ bio_list_add(&bl, mbio); >+======= >+ atomic_inc(&r1_bio->remaining); >+ >+ bio_list_add(&bl, mbio); >+>>>>>>> >+ bl_count++; >+<<<<<<< > } > } > >@@ -1578,6 +1595,16 @@ > > if (atomic_read(&mddev->plug_cnt) == 0) > flush_pending_writes(conf); >+||||||| >+ } >+ kfree(behind_pages); /* the behind pages are attached to the bios now */ >+ >+======= >+ } >+ kfree(behind_pages); /* the behind pages are attached to the bios now */ >+ >+>>>>>>> >+<<<<<<< > > spin_lock_irqsave(&conf->device_lock, flags); > if (list_empty(head)) { >@@ -1949,6 +1976,22 @@ > INIT_LIST_HEAD(&conf->retry_list); > > spin_lock_init(&conf->resync_lock); >+||||||| >+ test_bit(R1BIO_BehindIO, &r1_bio->state)); >+ spin_lock_irqsave(&conf->device_lock, flags); >+ bio_list_merge(&conf->pending_bio_list, &bl); >+ bio_list_init(&bl); >+ >+ blk_plug_device(mddev->queue); >+======= >+ test_bit(R1BIO_BehindIO, &r1_bio->state)); >+ spin_lock_irqsave(&conf->device_lock, flags); >+ bio_list_merge(&conf->pending_bio_list, &bl); 
>+ conf->pending_count += bl_count; >+ bio_list_init(&bl); >+ >+ blk_plug_device(mddev->queue); >+>>>>>>> > init_waitqueue_head(&conf->wait_barrier); > > <<<<<<< >./linux/raid1-A/merge FAILED 0.08 >1 unresolved conflict found >4 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:24:59.872197218 +0100 >@@ -2602,15 +2602,29 @@ > goto done_unlock; > > <<<<<<< >+ case START_ARRAY: >+ /* >+ * possibly make it lock the array ... >+ */ > err = autostart_array((kdev_t)arg); > if (err) { > printk(KERN_WARNING "md: autostart %s failed!\n", > partition_name((kdev_t)arg)); >+ goto abort_unlock; >+ } >+ goto done_unlock; > ||||||| >+ case START_ARRAY: >+ /* >+ * possibly make it lock the array ... >+ */ > err = autostart_array(val_to_kdev(arg)); > if (err) { > printk(KERN_WARNING "md: autostart %s failed!\n", > partition_name(val_to_kdev(arg))); >+ goto abort_unlock; >+ } >+ goto done_unlock; > ======= > >>>>>>> > default:; >./linux/md-autostart/merge FAILED 0.04 >1 unresolved conflict found >./linux/idmap.h/merge SUCCEEDED 0.00 >--- rediff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:01.142844406 +0100 >@@ -2,93 +2,93 @@ > return 1; > } > >--#undef OLD_LEVEL >-- >--static int device_size_calculation(mddev_t * mddev) >--{ >-- int data_disks = 0; >-- unsigned int readahead; >-- struct list_head *tmp; >-- mdk_rdev_t *rdev; >-- >-- /* >-- * Do device size calculation. Bail out if too small. 
>-- * (we have to do this after having validated chunk_size, >-- * because device size has to be modulo chunk_size) >-- */ >-- >-- ITERATE_RDEV(mddev,rdev,tmp) { >-- if (rdev->faulty) >-- continue; >-- if (rdev->size < mddev->chunk_size / 1024) { >-- printk(KERN_WARNING >-- "md: Dev %s smaller than chunk_size:" >-- " %lluk < %dk\n", >-- bdev_partition_name(rdev->bdev), >-- (unsigned long long)rdev->size, >-- mddev->chunk_size / 1024); >-- return -EINVAL; >-- } >-- } >-- >-- switch (mddev->level) { >-- case LEVEL_MULTIPATH: >-- data_disks = 1; >-- break; >-- case -3: >-- data_disks = 1; >-- break; >-- case -2: >-- data_disks = 1; >-- break; >-- case LEVEL_LINEAR: >-- zoned_raid_size(mddev); >-- data_disks = 1; >-- break; >-- case 0: >-- zoned_raid_size(mddev); >-- data_disks = mddev->raid_disks; >-- break; >-- case 1: >-- data_disks = 1; >-- break; >-- case 4: >-- case 5: >-- data_disks = mddev->raid_disks-1; >-- break; >-- default: >-- printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >-- mdidx(mddev), mddev->level); >-- goto abort; >-- } >-- if (!md_size[mdidx(mddev)]) >-- md_size[mdidx(mddev)] = mddev->size * data_disks; >-- >-- readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >-- if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >-- readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >-- if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >-- readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >-- } else { >-- // (no multipath branch - it uses the default setting) >-- if (mddev->level == -3) >-- readahead = 0; >-- } >-- >-- printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >-- mdidx(mddev), readahead*(PAGE_SIZE/1024)); >-- >-- printk(KERN_INFO >-- "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >-- mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >-- return 0; >--abort: >-- return 1; >--} >-- >- static struct gendisk *md_probe(dev_t dev, int *part, void *data) 
>+|<<<--#undef-->>><<<++static++>>> <<<--OLD_LEVEL >+| >+|static int device_size_calculation(mddev_t * mddev) >+|{ >+| int data_disks = 0; >+| unsigned int readahead; >+| -->>>struct <<<--list_head *tmp; >+| mdk_rdev_t *rdev; >+| >+| /* >+| * Do device size calculation. Bail out if too small. >+| * (we have to do this after having validated chunk_size, >+| * because device size has to be modulo chunk_size) >+| */ >+| >+| ITERATE_RDEV(mddev,rdev,tmp) { >+| if (rdev->faulty) >+| continue; >+| if (rdev->size < mddev->chunk_size / 1024) { >+| printk(KERN_WARNING >+| "md: Dev %s smaller than chunk_size:" >+| " %lluk < %dk\n", >+| bdev_partition_name(rdev->bdev), >+| (unsigned long long)rdev->size, >+| mddev->chunk_size / 1024); >+| return -EINVAL; >+| } >+| } >+| >+| switch (mddev->level) { >+| case LEVEL_MULTIPATH: >+| data_disks = 1; >+| break; >+| case -3: >+| data_disks = 1; >+| break; >+| case -2: >+| data_disks = 1; >+| break; >+| case LEVEL_LINEAR: >+| zoned_raid_size(mddev); >+| data_disks = 1; >+| break; >+| case 0: >+| zoned_raid_size(mddev); >+| data_disks = mddev->raid_disks; >+| break; >+| case 1: >+| data_disks = 1; >+| break; >+| case 4: >+| case 5: >+| data_disks = mddev->raid_disks-1; >+| break; >+| default: >+| printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+| mdidx(mddev), mddev->level); >+| goto abort; >+| } >+| if (!md_size[mdidx(mddev)]) >+| md_size[mdidx(mddev)] = mddev->size * data_disks; >+| >+| readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+| if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+| readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+| if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+| readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+| } else { >+| // (no multipath branch - it uses the default setting) >+| if (mddev->level == -3) >+| readahead = 0; >+| } >+| >+| printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+| mdidx(mddev), 
readahead*(PAGE_SIZE/1024)); >+| >+| printk(KERN_INFO >+| "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+| mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+| return 0; >+|abort: >+| return 1; >+|} >+| >+|static struct gendisk-->>><<<++gendisk++>>> *md_probe(<<<--dev_t-->>><<<++dev_t++>>> <<<--dev-->>><<<++dev++>>>, int *<<<--part-->>><<<++part++>>>, <<<--void-->>><<<++void++>>> *<<<--data-->>><<<++data++>>>) > { >- static DECLARE_MUTEX(disks_sem); >+| static <<<--DECLARE_MUTEX-->>><<<++DECLARE_MUTEX++>>>(<<<--disks_sem-->>><<<++disks_sem++>>>); > @@ -1664,9 +1571,6 @@ > } > } >@@ -97,5 +97,5 @@ > - return -EINVAL; > - > /* >- * Drop all container device buffers, from now on >- * the only valid external interface is through the md >+|<<<-- -->>><<<++ ++>>>* <<<--Drop-->>><<<++Drop++>>> all container device <<<--buffers-->>><<<++buffers++>>>, from <<<--now-->>><<<++now++>>> <<<--on-->>><<<++on++>>> >+| * the only <<<--valid-->>><<<++valid++>>> <<<--external-->>><<<++external++>>> interface is <<<--through-->>><<<++through++>>> <<<--the-->>><<<++the++>>> <<<--md-->>><<<++md++>>> >./linux/md/rediff FAILED 0.00 >3 unresolved conflicts found >4 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:01.191517494 +0100 >@@ -1435,6 +1435,89 @@ > abort: > return 1; > } >+<<<<<<< >+ >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+ struct list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. 
>+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} > > static struct gendisk *md_probe(dev_t dev, int *part, void *data) > { >@@ -1567,6 +1650,9 @@ > } > #endif > >+ if (device_size_calculation(mddev)) 
>+ return -EINVAL; >+ > /* > * Drop all container device buffers, from now on > * the only valid external interface is through the md >@@ -3009,9 +3095,105 @@ > spin_unlock(&pers_lock); > return 0; > } >+||||||| >+ >+#undef OLD_LEVEL >+ >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+ struct list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. >+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ 
printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} >+ >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+======= >+ >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+>>>>>>> > > int unregister_md_personality(int pnum) > { >+<<<<<<< > if (pnum >= MAX_PERSONALITY) { > MD_BUG(); > return -EINVAL; >@@ -3587,3 +3769,15 @@ > EXPORT_SYMBOL(md_interrupt_thread); > EXPORT_SYMBOL(md_check_recovery); > MODULE_LICENSE("GPL"); >+||||||| >+ if (device_size_calculation(mddev)) >+ return -EINVAL; >+ >+ /* >+ * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+======= >+ /* >+ * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+>>>>>>> >./linux/md/merge FAILED 0.15 >3 unresolved conflicts found >4 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ orig.tmp 2018-01-02 13:25:01.553309339 +0100 >@@ -1435,6 +1435,89 @@ > abort: > return 1; > } >+<<<<<<< >+ >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+ struct list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. 
>+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} > > static struct gendisk *md_probe(dev_t dev, int *part, void *data) > { >@@ -1567,6 +1650,9 @@ > } > #endif > >+ if (device_size_calculation(mddev)) 
>+ return -EINVAL; >+ > /* > * Drop all container device buffers, from now on > * the only valid external interface is through the md >@@ -3009,9 +3095,105 @@ > spin_unlock(&pers_lock); > return 0; > } >+||||||| >+ >+#undef OLD_LEVEL >+ >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+ struct list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. >+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ 
printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} >+ >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+======= >+ >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+>>>>>>> > > int unregister_md_personality(int pnum) > { >+<<<<<<< > if (pnum >= MAX_PERSONALITY) { > MD_BUG(); > return -EINVAL; >@@ -3587,3 +3769,15 @@ > EXPORT_SYMBOL(md_interrupt_thread); > EXPORT_SYMBOL(md_check_recovery); > MODULE_LICENSE("GPL"); >+||||||| >+ if (device_size_calculation(mddev)) >+ return -EINVAL; >+ >+ /* >+ * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+======= >+ /* >+ * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+>>>>>>> >./linux/md/replace FAILED 0.15 >--- diff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:01.607383501 +0100 >@@ -1434,97 +1434,97 @@ > - > - return 0; > -abort: >-+*** 1453,90 **** 1 >- return 1; >+|<<<-- return-->>><<<++*** 1453,90 **** 1 >+| return++>>> 1; > } > >-+#undef OLD_LEVEL >-+ >- static int device_size_calculation(mddev_t * mddev) >+|<<<--static-->>><<<++#undef OLD_LEVEL >+| >+|static++>>> int <<<--device_size_calculation-->>><<<++device_size_calculation++>>>(<<<--mddev_t-->>><<<++mddev_t++>>> * <<<--mddev-->>><<<++mddev++>>>) > { >- int data_disks = 0; >- unsigned int readahead; >- struct list_head *tmp; >- mdk_rdev_t *rdev; >+| int <<<--data_disks-->>><<<++data_disks++>>> = 0; >+| <<<--unsigned-->>><<<++unsigned++>>> <<<--int-->>><<<++int++>>> <<<--readahead-->>><<<++readahead++>>>; >+| <<<--struct-->>><<<++struct++>>> 
<<<--list_head-->>><<<++list_head++>>> *tmp; >+| <<<--mdk_rdev_t-->>><<<++mdk_rdev_t++>>> *<<<--rdev-->>><<<++rdev++>>>; > > /* >- * Do device size calculation. Bail out if too small. >- * (we have to do this after having validated chunk_size, >- * because device size has to be modulo chunk_size) >+| * Do <<<--device-->>><<<++device++>>> <<<--size-->>><<<++size++>>> <<<--calculation-->>><<<++calculation++>>>. <<<--Bail-->>><<<++Bail++>>> <<<--out-->>><<<++out++>>> <<<--if-->>><<<++if++>>> too <<<--small-->>><<<++small++>>>. >+| * (we <<<--have-->>><<<++have++>>> to do <<<--this-->>><<<++this++>>> <<<--after-->>><<<++after++>>> <<<--having-->>><<<++having++>>> <<<--validated-->>><<<++validated++>>> <<<--chunk_size-->>><<<++chunk_size++>>>, >+|<<<-- -->>><<<++ ++>>>* <<<--because-->>><<<++because++>>> <<<--device-->>><<<++device++>>> <<<--size-->>><<<++size++>>> <<<--has-->>><<<++has++>>> <<<--to-->>><<<++to++>>> <<<--be-->>><<<++be++>>> <<<--modulo-->>><<<++modulo++>>> <<<--chunk_size-->>><<<++chunk_size++>>>) > */ > >- ITERATE_RDEV(mddev,rdev,tmp) { >- if (rdev->faulty) >- continue; >- if (rdev->size < mddev->chunk_size / 1024) { >- printk(KERN_WARNING >- "md: Dev %s smaller than chunk_size:" >- " %lluk < %dk\n", >- bdev_partition_name(rdev->bdev), >- (unsigned long long)rdev->size, >- mddev->chunk_size / 1024); >- return -EINVAL; >+| <<<--ITERATE_RDEV-->>><<<++ITERATE_RDEV++>>>(<<<--mddev-->>><<<++mddev++>>>,<<<--rdev-->>><<<++rdev++>>>,<<<--tmp-->>><<<++tmp++>>>) { >+| if (<<<--rdev-->>><<<++rdev++>>>-><<<--faulty-->>><<<++faulty++>>>) >+| <<<--continue-->>><<<++continue++>>>; >+| if (<<<--rdev-->>><<<++rdev++>>>-><<<--size-->>><<<++size++>>> < <<<--mddev-->>><<<++mddev++>>>-><<<--chunk_size-->>><<<++chunk_size++>>> / <<<--1024-->>><<<++1024++>>>) { >+| <<<--printk-->>><<<++printk++>>>(<<<--KERN_WARNING-->>><<<++KERN_WARNING++>>> >+| "<<<--md-->>><<<++md++>>>: <<<--Dev-->>><<<++Dev++>>> %s <<<--smaller-->>><<<++smaller++>>> <<<--than-->>><<<++than++>>> 
<<<--chunk_size-->>><<<++chunk_size++>>>:" >+| " %<<<--lluk-->>><<<++lluk++>>> < %<<<--dk-->>><<<++dk++>>>\n", >+| <<<--bdev_partition_name-->>><<<++bdev_partition_name++>>>(<<<--rdev-->>><<<++rdev++>>>-><<<--bdev-->>><<<++bdev++>>>), >+| (<<<--unsigned-->>><<<++unsigned long++>>> long<<<-- long-->>>)<<<--rdev-->>><<<++rdev++>>>-><<<--size-->>><<<++size++>>>, >+| <<<--mddev-->>><<<++mddev++>>>-><<<--chunk_size-->>><<<++chunk_size++>>> / <<<--1024-->>><<<++1024++>>>); >+| <<<--return-->>><<<++return++>>> -<<<--EINVAL-->>><<<++EINVAL++>>>; > } > } > >- switch (mddev->level) { >- case LEVEL_MULTIPATH: >- data_disks = 1; >- break; >- case -3: >- data_disks = 1; >- break; >- case -2: >- data_disks = 1; >- break; >- case LEVEL_LINEAR: >- zoned_raid_size(mddev); >- data_disks = 1; >- break; >- case 0: >- zoned_raid_size(mddev); >- data_disks = mddev->raid_disks; >- break; >- case 1: >- data_disks = 1; >- break; >- case 4: >- case 5: >- data_disks = mddev->raid_disks-1; >- break; >- default: >- printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >- mdidx(mddev), mddev->level); >- goto abort; >+| <<<--switch-->>><<<++switch++>>> (<<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>>) { >+| <<<--case-->>><<<++case++>>> <<<--LEVEL_MULTIPATH-->>><<<++LEVEL_MULTIPATH++>>>: >+| <<<--data_disks-->>><<<++data_disks++>>> = 1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> -3: >+| <<<--data_disks-->>><<<++data_disks++>>> = 1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> -2: >+| <<<--data_disks-->>><<<++data_disks++>>> = 1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> <<<--LEVEL_LINEAR-->>><<<++LEVEL_LINEAR++>>>: >+| <<<--zoned_raid_size-->>><<<++zoned_raid_size++>>>(<<<--mddev-->>><<<++mddev++>>>); >+| <<<--data_disks-->>><<<++data_disks++>>> = 1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> 0: >+| 
<<<--zoned_raid_size-->>><<<++zoned_raid_size++>>>(<<<--mddev-->>><<<++mddev++>>>); >+| <<<--data_disks-->>><<<++data_disks++>>> = <<<--mddev-->>><<<++mddev++>>>-><<<--raid_disks-->>><<<++raid_disks++>>>; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> 1: >+| <<<--data_disks-->>><<<++data_disks++>>> = 1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--case-->>><<<++case++>>> 4: >+| <<<--case-->>><<<++case++>>> 5: >+| <<<--data_disks-->>><<<++data_disks++>>> = <<<--mddev-->>><<<++mddev++>>>-><<<--raid_disks-->>><<<++raid_disks++>>>-1; >+| <<<--break-->>><<<++break++>>>; >+| <<<--default-->>><<<++default++>>>: >+| <<<--printk-->>><<<++printk++>>>(<<<--KERN_ERR-->>><<<++KERN_ERR++>>> "<<<--md-->>><<<++md++>>>: <<<--md-->>><<<++md++>>>%d: <<<--unsupported-->>><<<++unsupported++>>> <<<--raid-->>><<<++raid++>>> <<<--level-->>><<<++level++>>> %d\n", >+| <<<--mdidx-->>><<<++mdidx++>>>(<<<--mddev-->>><<<++mddev++>>>), <<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>>); >+| <<<--goto-->>><<<++goto++>>> <<<--abort-->>><<<++abort++>>>; > } >- if (!md_size[mdidx(mddev)]) >- md_size[mdidx(mddev)] = mddev->size * data_disks; >+| <<<--if-->>><<<++if++>>> (!<<<--md_size-->>><<<++md_size++>>>[<<<--mdidx-->>><<<++mdidx++>>>(<<<--mddev-->>><<<++mddev++>>>)]) >+| <<<--md_size-->>><<<++md_size++>>>[<<<--mdidx-->>><<<++mdidx++>>>(<<<--mddev-->>><<<++mddev++>>>)] = <<<--mddev-->>><<<++mddev++>>>-><<<--size-->>><<<++size++>>> * <<<--data_disks-->>><<<++data_disks++>>>; > >- readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >- if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >- readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >- if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >- readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >- } else { >- // (no multipath branch - it uses the default setting) >- if (mddev->level == -3) >- readahead = 0; >+| <<<--readahead-->>><<<++readahead++>>> = 
(<<<--VM_MAX_READAHEAD-->>><<<++VM_MAX_READAHEAD++>>> * <<<--1024-->>><<<++1024++>>>) / <<<--PAGE_SIZE-->>><<<++PAGE_SIZE++>>>; >+| if (!<<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>> || (<<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>> == 4) || (<<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>> == 5)) { >+| <<<--readahead-->>><<<++readahead++>>> = (<<<--mddev-->>><<<++mddev++>>>-><<<--chunk_size-->>><<<++chunk_size++>>>>><<<--PAGE_SHIFT-->>><<<++PAGE_SHIFT++>>>) * 4 * <<<--data_disks-->>><<<++data_disks++>>>; >+| if (<<<--readahead-->>><<<++readahead++>>> < <<<--data_disks-->>><<<++data_disks++>>> * (<<<--MAX_SECTORS-->>><<<++MAX_SECTORS++>>>>>(<<<--PAGE_SHIFT-->>><<<++PAGE_SHIFT++>>>-9))*2) >+| <<<--readahead-->>><<<++readahead++>>> = <<<--data_disks-->>><<<++data_disks++>>> * (<<<--MAX_SECTORS-->>><<<++MAX_SECTORS++>>>>>(<<<--PAGE_SHIFT-->>><<<++PAGE_SHIFT++>>>-9))*2; >+| } <<<--else-->>><<<++else++>>> { >+| // (no <<<--multipath-->>><<<++multipath++>>> <<<--branch-->>><<<++branch++>>> - <<<--it-->>><<<++it++>>> <<<--uses-->>><<<++uses++>>> <<<--the-->>><<<++the++>>> <<<--default-->>><<<++default++>>> <<<--setting-->>><<<++setting++>>>) >+| if (<<<--mddev-->>><<<++mddev++>>>-><<<--level-->>><<<++level++>>> == -3) >+| <<<--readahead-->>><<<++readahead++>>> = 0; > } > >- printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >- mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+| <<<--printk-->>><<<++printk++>>>(<<<--KERN_INFO-->>><<<++KERN_INFO++>>> "<<<--md-->>><<<++md++>>>%d: max <<<--total-->>><<<++total++>>> <<<--readahead-->>><<<++readahead++>>> <<<--window-->>><<<++window++>>> <<<--set-->>><<<++set++>>> <<<--to-->>><<<++to++>>> %<<<--ldk-->>><<<++ldk++>>>\n", >+| <<<--mdidx-->>><<<++mdidx++>>>(<<<--mddev-->>><<<++mddev++>>>), <<<--readahead-->>><<<++readahead++>>>*(<<<--PAGE_SIZE-->>><<<++PAGE_SIZE++>>>/<<<--1024-->>><<<++1024++>>>)); > >- printk(KERN_INFO >- "md%d: %d data-disks, max readahead per data-disk: 
%ldk\n", >- mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >- return 0; >- abort: >- return 1; >+| <<<--printk-->>><<<++printk++>>>(<<<--KERN_INFO-->>><<<++KERN_INFO++>>> >+| "<<<--md-->>><<<++md++>>>%d: %d <<<--data-->>><<<++data++>>>-<<<--disks-->>><<<++disks++>>>, <<<--max-->>><<<++max++>>> <<<--readahead-->>><<<++readahead++>>> per <<<--data-->>><<<++data++>>>-<<<--disk-->>><<<++disk++>>>: %ldk\n", >+| <<<--mdidx-->>><<<++mdidx++>>>(<<<--mddev-->>><<<++mddev++>>>), <<<--data_disks-->>><<<++data_disks++>>>, <<<--readahead-->>><<<++readahead++>>>/<<<--data_disks-->>><<<++data_disks++>>>*(<<<--PAGE_SIZE-->>><<<++PAGE_SIZE++>>>/<<<--1024-->>><<<++1024++>>>)); >+| <<<--return-->>><<<++return++>>> 0; >+|<<<--abort-->>><<<++abort++>>>: >+| <<<--return-->>><<<++return++>>> 1; > } > >- static struct gendisk *md_probe(dev_t dev, int *part, void *data) >- { >- static DECLARE_MUTEX(disks_sem); >+-static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+-{ >+- static DECLARE_MUTEX(disks_sem); > - int unit = MINOR(dev); > - mddev_t *mddev = mddev_find(unit); > - struct gendisk *disk; >@@ -1650,17 +1650,15 @@ > - char module_name[80]; > - sprintf (module_name, "md-personality-%d", pnum); > - request_module (module_name); >-+*** 1664,9 **** 2 >-+ } >- } >+- } > -#endif >- >- if (device_size_calculation(mddev)) >- return -EINVAL; >- >- /* >- * Drop all container device buffers, from now on >- * the only valid external interface is through the md >+- >+- if (device_size_calculation(mddev)) >+- return -EINVAL; >+- >+- /* >+- * Drop all container device buffers, from now on >+- * the only valid external interface is through the md > - * device. 
> - * Also find largest hardsector size > - */ >@@ -3100,13 +3098,18 @@ > - spin_unlock(&pers_lock); > - return 0; > -} >-- >--int unregister_md_personality(int pnum) >--{ >-- if (pnum >= MAX_PERSONALITY) { >-- MD_BUG(); >-- return -EINVAL; >-- } >++static struct gendisk *md_probe(dev_t dev, int *part, void *data) >++{ >++ static DECLARE_MUTEX(disks_sem); >++*** 1664,9 **** 2 >+|<<<++ }++>>> >+|<<<--int unregister_md_personality(int pnum)-->>><<<++ }++>>> >+|<<<--{-->>> >+| if (<<<--pnum >= MAX_PERSONALITY) { >+| MD_BUG-->>><<<++device_size_calculation++>>>(<<<++mddev++>>>)<<<--;-->>><<<++)++>>> >+ return -EINVAL; >++ >+| <<<--}-->>><<<++/*++>>> > - > - printk(KERN_INFO "md: %s personality unregistered\n", pers[pnum]->name); > - spin_lock(&pers_lock); >@@ -3678,3 +3681,5 @@ > -EXPORT_SYMBOL(md_interrupt_thread); > -EXPORT_SYMBOL(md_check_recovery); > -MODULE_LICENSE("GPL"); >++ * Drop all container device buffers, from now on >++ * the only valid external interface is through the md >./linux/md/diff FAILED 0.15 >5 unresolved conflicts found >4 already-applied changes ignored >--- wmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:07.929989109 +0100 >@@ -1436,6 +1436,90 @@ > return 1; > } > >+static<<<---device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead|||device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead===--->>>struct <<<---list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. 
>+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} >+ > static struct gendisk *md_probe(dev_t dev, int *part, void *data) > { > static DECLARE_MUTEX(disks_sem); >@@ -1567,6 +1651,9 @@ > } > #endif > >+ 
if (device_size_calculation(mddev)) >+ return -EINVAL; >+ > /* > * Drop all container device buffers, from now on > * the only valid external interface is through the md >@@ -3009,14 +3096,96 @@ > spin_unlock(&pers_lock); > return 0; > } >+|||list_head *tmp; >+ mdk_rdev_t *rdev; > >-int unregister_md_personality(int pnum) >-{ >- if (pnum >= MAX_PERSONALITY) { >- MD_BUG(); >- return -EINVAL; >+ /* >+ * Do device size calculation. Bail out if too small. >+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } > } > >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ 
printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} >+ >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+===gendisk *md_probe(dev_t dev, int *part, void *data) >+{ >+ static DECLARE_MUTEX(disks_sem); >+--->>> >+int unregister_md_personality(int pnum) >+{ >+<<<---pnum >= MAX_PERSONALITY) { >+ MD_BUG|||device_size_calculation===--->>><<<---;|||)===--->>> } >+<<<--- > printk(KERN_INFO "md: %s personality unregistered\n", pers[pnum]->name); > spin_lock(&pers_lock); > pers[pnum] = NULL; >@@ -3587,3 +3756,8 @@ > EXPORT_SYMBOL(md_interrupt_thread); > EXPORT_SYMBOL(md_check_recovery); > MODULE_LICENSE("GPL"); >+||| * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+=== * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+--->>> >\ No newline at end of file >./linux/md/wmerge FAILED 0.15 >4 unresolved conflicts found >1 already-applied change ignored >--- lmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.139826111 +0100 >@@ -1224,194 +1224,36 @@ > rdev->sb_loaded = 1; > } > } >+<<<<<<< > > static void md_update_sb(mddev_t * mddev) > { > int err, count = 100; >- struct list_head *tmp; >- mdk_rdev_t *rdev; >- >- mddev->sb_dirty = 0; >-repeat: >- mddev->utime = get_seconds(); >- mddev->events ++; >- >- if (!mddev->events) { >- /* >- * oops, this 64-bit counter should never wrap. 
>- * Either we are in around ~1 trillion A.C., assuming >- * 1 reboot per second, or we have a bug: >- */ >- MD_BUG(); >- mddev->events --; >- } >- sync_sbs(mddev); >- >- /* >- * do not write anything to disk if using >- * nonpersistent superblocks >- */ >- if (!mddev->persistent) >- return; >- >- dprintk(KERN_INFO >- "md: updating md%d RAID superblock on device (in sync %d)\n", >- mdidx(mddev),mddev->in_sync); >- >- err = 0; >- ITERATE_RDEV(mddev,rdev,tmp) { >- dprintk(KERN_INFO "md: "); >- if (rdev->faulty) >- dprintk("(skipping faulty "); >- >- dprintk("%s ", bdev_partition_name(rdev->bdev)); >- if (!rdev->faulty) { >- err += write_disk_sb(rdev); >- } else >- dprintk(")\n"); >- if (!err && mddev->level == LEVEL_MULTIPATH) >- /* only need to write one superblock... */ >- break; >- } >- if (err) { >- if (--count) { >- printk(KERN_ERR "md: errors occurred during superblock" >- " update, repeating\n"); >- goto repeat; >- } >- printk(KERN_ERR \ >- "md: excessive errors occurred during superblock update, exiting\n"); >- } >+||||||| >+ return 1; > } > >-/* >- * Import a device. If 'super_format' >= 0, then sanity check the superblock >- * >- * mark the device faulty if: >- * >- * - the device is nonexistent (zero size) >- * - the device has no valid superblock >- * >- * a faulty rdev _never_ has rdev->sb set. 
>- */ >-static mdk_rdev_t *md_import_device(dev_t newdev, int super_format, int super_minor) >-{ >- int err; >- mdk_rdev_t *rdev; >- sector_t size; >- >- rdev = (mdk_rdev_t *) kmalloc(sizeof(*rdev), GFP_KERNEL); >- if (!rdev) { >- printk(KERN_ERR "md: could not alloc mem for %s!\n", >- partition_name(newdev)); >- return ERR_PTR(-ENOMEM); >- } >- memset(rdev, 0, sizeof(*rdev)); >- >- if ((err = alloc_disk_sb(rdev))) >- goto abort_free; >- >- err = lock_rdev(rdev, newdev); >- if (err) { >- printk(KERN_ERR "md: could not lock %s.\n", >- partition_name(newdev)); >- goto abort_free; >- } >- rdev->desc_nr = -1; >- rdev->faulty = 0; >- rdev->in_sync = 0; >- rdev->data_offset = 0; >- atomic_set(&rdev->nr_pending, 0); >- >- size = rdev->bdev->bd_inode->i_size >> BLOCK_SIZE_BITS; >- if (!size) { >- printk(KERN_WARNING >- "md: %s has zero or unknown size, marking faulty!\n", >- bdev_partition_name(rdev->bdev)); >- err = -EINVAL; >- goto abort_free; >- } >+#undef OLD_LEVEL > >- if (super_format >= 0) { >- err = super_types[super_format]. 
>- load_super(rdev, NULL, super_minor); >- if (err == -EINVAL) { >- printk(KERN_WARNING >- "md: %s has invalid sb, not importing!\n", >- bdev_partition_name(rdev->bdev)); >- goto abort_free; >- } >- if (err < 0) { >- printk(KERN_WARNING >- "md: could not read %s's sb, not importing!\n", >- bdev_partition_name(rdev->bdev)); >- goto abort_free; >- } >- } >- INIT_LIST_HEAD(&rdev->same_set); >- >- return rdev; >- >-abort_free: >- if (rdev->sb_page) { >- if (rdev->bdev) >- unlock_rdev(rdev); >- free_disk_sb(rdev); >- } >- kfree(rdev); >- return ERR_PTR(err); >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+======= >+ return 1; > } > >-/* >- * Check a full RAID array for plausibility >- */ >- >- >-static int analyze_sbs(mddev_t * mddev) >+static struct gendisk *md_probe(dev_t dev, int *part, void *data) > { >- int i; >- struct list_head *tmp; >- mdk_rdev_t *rdev, *freshest; >- >- freshest = NULL; >- ITERATE_RDEV(mddev,rdev,tmp) >- switch (super_types[mddev->major_version]. >- load_super(rdev, freshest, mddev->minor_version)) { >- case 1: >- freshest = rdev; >- break; >- case 0: >- break; >- default: >- printk( KERN_ERR \ >- "md: fatal superblock inconsistency in %s" >- " -- removing from array\n", >- bdev_partition_name(rdev->bdev)); >- kick_rdev_from_array(rdev); >- } >- >- >- super_types[mddev->major_version]. >- validate_super(mddev, freshest); >- >- i = 0; >- ITERATE_RDEV(mddev,rdev,tmp) { >- if (rdev != freshest) >- if (super_types[mddev->major_version]. 
>- validate_super(mddev, rdev)) { >- printk(KERN_WARNING "md: kicking non-fresh %s" >- " from array!\n", >- bdev_partition_name(rdev->bdev)); >- kick_rdev_from_array(rdev); >- continue; >- } >- if (mddev->level == LEVEL_MULTIPATH) { >- rdev->desc_nr = i++; >- rdev->raid_disk = rdev->desc_nr; >- rdev->in_sync = 1; >+ static DECLARE_MUTEX(disks_sem); >+*** 1571,6 **** 2 > } > } > >+ /* >+ * Drop all container device buffers, from now on >+>>>>>>> >+<<<<<<< > > /* > * Check if we can support this RAID array >@@ -1436,6 +1278,88 @@ > return 1; > } > >+static int device_size_calculation(mddev_t * mddev) >+{ >+ int data_disks = 0; >+ unsigned int readahead; >+ struct list_head *tmp; >+ mdk_rdev_t *rdev; >+ >+ /* >+ * Do device size calculation. Bail out if too small. >+ * (we have to do this after having validated chunk_size, >+ * because device size has to be modulo chunk_size) >+ */ >+ >+ ITERATE_RDEV(mddev,rdev,tmp) { >+ if (rdev->faulty) >+ continue; >+ if (rdev->size < mddev->chunk_size / 1024) { >+ printk(KERN_WARNING >+ "md: Dev %s smaller than chunk_size:" >+ " %lluk < %dk\n", >+ bdev_partition_name(rdev->bdev), >+ (unsigned long long)rdev->size, >+ mddev->chunk_size / 1024); >+ return -EINVAL; >+ } >+ } >+ >+ switch (mddev->level) { >+ case LEVEL_MULTIPATH: >+ data_disks = 1; >+ break; >+ case -3: >+ data_disks = 1; >+ break; >+ case -2: >+ data_disks = 1; >+ break; >+ case LEVEL_LINEAR: >+ zoned_raid_size(mddev); >+ data_disks = 1; >+ break; >+ case 0: >+ zoned_raid_size(mddev); >+ data_disks = mddev->raid_disks; >+ break; >+ case 1: >+ data_disks = 1; >+ break; >+ case 4: >+ case 5: >+ data_disks = mddev->raid_disks-1; >+ break; >+ default: >+ printk(KERN_ERR "md: md%d: unsupported raid level %d\n", >+ mdidx(mddev), mddev->level); >+ goto abort; >+ } >+ if (!md_size[mdidx(mddev)]) >+ md_size[mdidx(mddev)] = mddev->size * data_disks; >+ >+ readahead = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; >+ if (!mddev->level || (mddev->level == 4) || (mddev->level == 5)) { >+ 
readahead = (mddev->chunk_size>>PAGE_SHIFT) * 4 * data_disks; >+ if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) >+ readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; >+ } else { >+ // (no multipath branch - it uses the default setting) >+ if (mddev->level == -3) >+ readahead = 0; >+ } >+ >+ printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", >+ mdidx(mddev), readahead*(PAGE_SIZE/1024)); >+ >+ printk(KERN_INFO >+ "md%d: %d data-disks, max readahead per data-disk: %ldk\n", >+ mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); >+ return 0; >+abort: >+ return 1; >+} >+ > static struct gendisk *md_probe(dev_t dev, int *part, void *data) > { > static DECLARE_MUTEX(disks_sem); >@@ -1567,6 +1491,9 @@ > } > #endif > >+ if (device_size_calculation(mddev)) >+ return -EINVAL; >+ > /* > * Drop all container device buffers, from now on > * the only valid external interface is through the md >@@ -3587,3 +3514,11 @@ > EXPORT_SYMBOL(md_interrupt_thread); > EXPORT_SYMBOL(md_check_recovery); > MODULE_LICENSE("GPL"); >+||||||| >+ >+ /* >+ * Drop all container device buffers, from now on >+ * the only valid external interface is through the md >+======= >+ * the only valid external interface is through the md >+>>>>>>> >./linux/md/lmerge FAILED 0.01 >3 already-applied changes ignored >./simple/bothadd/merge SUCCEEDED 0.00 >1 already-applied change ignored >./simple/bothadd/lmerge SUCCEEDED 0.00 >4 unresolved conflicts found >./simple/all-different-2/merge SUCCEEDED 0.00 >10 unresolved conflicts found >./simple/all-different-2/wmerge SUCCEEDED 0.00 >1 unresolved conflict found >./simple/all-different-2/lmerge SUCCEEDED 0.00 >1 unresolved conflict found >3 already-applied changes ignored >./simple/conflict/merge SUCCEEDED 0.00 >--- diff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.397836856 +0100 >@@ -1,5 +1,5 @@ > @@ -1,4 +1,4 @@ > this is a file > with the word >-|<<<--two-->>><<<++to++>>> which is >- misspelt 
>+|<<<--two-->>><<<++to++>>> <<<--which-->>><<<++which++>>> <<<--is-->>><<<++is++>>> >+|<<<--misspelt-->>><<<++misspelt++>>> >./simple/conflict/diff FAILED 0.00 >--- ldiff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.431984824 +0100 >@@ -2,5 +2,6 @@ > this is a file > with the word > -two which is >+-misspelt > +to which is >- misspelt >++misspelt >./simple/conflict/ldiff FAILED 0.00 >1 unresolved conflict found >3 already-applied changes ignored >./simple/conflict/wmerge SUCCEEDED 0.00 >1 unresolved conflict found >1 already-applied change ignored >./simple/conflictmixed/merge SUCCEEDED 0.00 >--- diff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.540159849 +0100 >@@ -1,5 +1,5 @@ > @@ -1,4 +1,4 @@ > this is a file > with the word >-|<<<--two-->>><<<++to++>>> which is >- misspelt >+|<<<--two-->>><<<++to++>>> <<<--which-->>><<<++which++>>> <<<--is-->>><<<++is++>>> >+|<<<--misspelt-->>><<<++misspelt++>>> >./simple/conflictmixed/diff FAILED 0.00 >--- ldiff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.574139929 +0100 >@@ -2,5 +2,6 @@ > this is a file > with the word > -two which is >+-misspelt > +to which is >- misspelt >++misspelt >./simple/conflictmixed/ldiff FAILED 0.00 >2 unresolved conflicts found >1 already-applied change ignored >--- wmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.607969552 +0100 >@@ -1,4 +1,4 @@ > this is a file > with the word >-<<<---two|||to===too--->>> which was >+<<<---two|||to===too--->>> which <<<---is|||is===was--->>> > misspelt >./simple/conflictmixed/wmerge FAILED 0.00 >1 unresolved conflict found >--- lmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.645477954 +0100 >@@ -2,13 +2,15 @@ > this is a file > with the word > two which is >+misspelt > ||||||| > this is a file > with the word > to which is >+misspelt > ======= > this is a file > with the word > too which was >->>>>>>> > misspelt >+>>>>>>> >./simple/conflictmixed/lmerge FAILED 0.00 
>1 unresolved conflict found >./simple/trivial-conflict/merge SUCCEEDED 0.00 >1 already-applied change ignored >./simple/already-applied/merge SUCCEEDED 0.00 >1 unresolved conflict found >--- Wmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.748798382 +0100 >@@ -2,19 +2,22 @@ > > This is one line of the file > >+I think this is another line > ||||||| > > This is 1 line of the file > >+I think this is another line > ======= > > This is 1 line of the document > >+I think this is another line > &&&&&&& > > This is one line of the document > >->>>>>>> > I think this is another line >+>>>>>>> > > So is this >./simple/show-wiggle-1/Wmerge FAILED 0.00 >1 unresolved conflict found >1 already-applied change ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.784299296 +0100 >@@ -1,2 +1,18 @@ > First line >+<<<<<<< >+this line will go >+this one too >+and this >+||||||| >+this line will go >+Some more padding >+this one too >+This stuff is padding too >+and this >+Guess what you find here? >+======= >+Some more padding >+This stuff is padding too >+Guess what you find here? >+>>>>>>> > last line >./simple/multideletes/merge FAILED 0.00 >1 unresolved conflict found >--- lmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.821497183 +0100 >@@ -1,2 +1,14 @@ > First line >+<<<<<<< >+this one too >+and this >+||||||| >+this one too >+This stuff is padding too >+and this >+Guess what you find here? >+======= >+This stuff is padding too >+Guess what you find here? 
>+>>>>>>> > last line >./simple/multideletes/lmerge FAILED 0.00 >1 unresolved conflict found >--- Wmerge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.867277350 +0100 >@@ -2,12 +2,14 @@ > > <<<<<<< > content line with content >+ >+closing line > ||||||| > content line content >+ >+closing line > ======= > middle line content >-&&&&&&& >-middle line with content >->>>>>>> > > closing line >+>>>>>>> >./simple/show-wiggle-2/Wmerge FAILED 0.00 >1 unresolved conflict found >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.913258106 +0100 >@@ -1,5 +1,15 @@ >-This is a longish line that might be split up >+<<<<<<< >+This is a long line that might be broken > and this is > a broken line > that might be >-catenated >+joined >+||||||| >+This is a long line that has been >+broken >+and this is a broken line that will be joined >+======= >+This is a longish line that has been >+split up >+and this is a broken line that will be catenated >+>>>>>>> >./simple/brokenlines/merge FAILED 0.00 >--- diff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:08.959039083 +0100 >@@ -3,5 +3,5 @@ > |++>>>broken > |and this is<<<-- > |-->>><<<++ ++>>>a broken line<<<-- >-|-->>><<<++ ++>>>that <<<--might-->>><<<++will++>>> be<<<-- >-|-->>><<<++ ++>>>joined >+|-->>><<<++ ++>>>that <<<--might-->>><<<++will++>>> <<<--be >+|joined-->>><<<++be joined++>>> >./simple/brokenlines/diff FAILED 0.00 >1 unresolved conflict found >./simple/multiple-add/merge SUCCEEDED 0.00 >1 unresolved conflict found >./simple/multiple-add/wmerge SUCCEEDED 0.00 >1 unresolved conflict found >./simple/multiple-add/lmerge SUCCEEDED 0.00 >1 unresolved conflict found >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.104859377 +0100 >@@ -1,5 +1,17 @@ > here > is >+<<<<<<< > the >+original >+file >+||||||| >+the >+new version of the >+original >+file >+======= >+the >+new version of the > inaugural > file >+>>>>>>> >./simple/changeafteradd/merge FAILED 0.00 
>4 unresolved conflicts found >./simple/all-different/merge SUCCEEDED 0.00 >10 unresolved conflicts found >./simple/all-different/wmerge SUCCEEDED 0.00 >1 unresolved conflict found >./simple/all-different/lmerge SUCCEEDED 0.00 >2 unresolved conflicts found >7 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.241731673 +0100 >@@ -5,14 +5,34 @@ > several lines > so that alll > the changes >+<<<<<<< > don't h... > I don't know waht I am saying. >-This lion will have some modifications made. >+This lion will have some changes made. > but this one wont > stuf stuf stuff > thing thing > xxxxx > that is all >+||||||| >+don't h... >+I don't know what I am saying. >+This line will have some changes made. >+but this one wont >+stuf stuf stuff >+thing thing >+xxxxx >+that is all >+======= >+don't h... >+I don't know what I am saying. >+This line will have some modifications made. >+but this one wont >+stuf stuf stuff >+thing thing >+xxxxx >+that is all >+>>>>>>> > except > for > this >./simple/base/merge FAILED 0.00 >--- diff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.288981342 +0100 >@@ -1,23 +1,23 @@ > @@ -1,20 +1,21 @@ > - >- This is a base file >- some changes are going to happen to it >- but it has >-+had >- several lines >- so that alll >- the changes >- don't h... >-|I don't know <<<--waht-->>><<<++what++>>> I am saying. >-|This <<<--lion-->>><<<++line++>>> will have some changes made. 
>- but this one wont >- stuf stuf stuff >- thing thing >+|<<<--This-->>><<<++This++>>> <<<--is-->>><<<++is++>>> a <<<--base-->>><<<++base++>>> <<<--file-->>><<<++file++>>> >+|<<<--some-->>><<<++some++>>> <<<--changes-->>><<<++changes++>>> <<<--are-->>><<<++are++>>> <<<--going-->>><<<++going++>>> to <<<--happen-->>><<<++happen++>>> <<<--to-->>><<<++to++>>> it >+|but it <<<--has-->>><<<++has++>>> >+|<<<--several-->>><<<++had >+|several++>>> <<<--lines-->>><<<++lines++>>> >+|so <<<--that-->>><<<++that++>>> <<<--alll-->>><<<++alll++>>> >+|<<<--the-->>><<<++the++>>> <<<--changes-->>><<<++changes++>>> >+|<<<--don-->>><<<++don++>>>'t h... >+|I <<<--don-->>><<<++don++>>>'t <<<--know-->>><<<++know++>>> <<<--waht-->>><<<++what++>>> I am <<<--saying-->>><<<++saying++>>>. >+|<<<--This-->>><<<++This++>>> <<<--lion-->>><<<++line++>>> <<<--will-->>><<<++will++>>> <<<--have-->>><<<++have++>>> <<<--some-->>><<<++some++>>> <<<--changes-->>><<<++changes++>>> <<<--made-->>><<<++made++>>>. >+|<<<--but-->>><<<++but++>>> <<<--this-->>><<<++this++>>> one <<<--wont-->>><<<++wont++>>> >+|<<<++stuf ++>>>stuf <<<--stuf stuff-->>><<<++stuff++>>> >+|<<<--thing-->>><<<++thing++>>> <<<--thing-->>><<<++thing++>>> > xxxxx >- that is all >- except >- for >- this >- last >+|<<<--that-->>><<<++that++>>> is <<<--all-->>><<<++all++>>> >+|<<<--except-->>><<<++except++>>> >+|<<<--for-->>><<<++for++>>> >+|<<<--this-->>><<<++this++>>> >+|<<<--last-->>><<<++last++>>> > bit > +x >./simple/base/diff FAILED 0.00 >--- ldiff 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.331514210 +0100 >@@ -1,25 +1,41 @@ > @@ -1,20 +1,21 @@ > - >- This is a base file >- some changes are going to happen to it >- but it has >-+had >- several lines >- so that alll >- the changes >- don't h... >+-This is a base file >+-some changes are going to happen to it >+-but it has >+-several lines >+-so that alll >+-the changes >+-don't h... > -I don't know waht I am saying. > -This lion will have some changes made. 
>+-but this one wont >+-stuf stuf stuff >+-thing thing >++This is a base file >++some changes are going to happen to it >++but it has >++had >++several lines >++so that alll >++the changes >++don't h... > +I don't know what I am saying. > +This line will have some changes made. >- but this one wont >- stuf stuf stuff >- thing thing >++but this one wont >++stuf stuf stuff >++thing thing > xxxxx >- that is all >- except >- for >- this >- last >- bit >+-that is all >+-except >+-for >+-this >+-last >+-bit >++that is all >++except >++for >++this >++last >++bit > +x >./simple/base/ldiff FAILED 0.00 >1 unresolved conflict found >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.378425641 +0100 >@@ -22,11 +22,21 @@ > #include <linux/interrupt.h> > #include <linux/mc146818rtc.h> > #include <linux/kernel_stat.h> >+<<<<<<< > #include <linux/module.h> > #include <linux/nmi.h> >+||||||| > #include <linux/sysdev.h> >+#include <linux/debugger.h> >+ >+======= >+#include <linux/sysdev.h> >+#include <linux/debugger.h> > #include <linux/dump.h> > >+>>>>>>> >+#include <linux/sysdev.h> >+ > #include <asm/smp.h> > #include <asm/mtrr.h> > #include <asm/mpspec.h> >./contrib/nmi.c/merge FAILED 0.00 >3 unresolved conflicts found >5 already-applied changes ignored >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.417672959 +0100 >@@ -4288,40 +4288,54 @@ > lface = lface_from_face_name (f, face, 1); > LFACE_BACKGROUND (lface) = (STRINGP (new_value) > ? new_value : Qunspecified); >+<<<<<<< > realize_basic_faces (f); > } > else if (EQ (param, Qborder_color)) > { >- face = Qborder; >- lface = lface_from_face_name (f, face, 1); >+ lface = lface_from_face_name (f, Qborder, 1); > LFACE_BACKGROUND (lface) = (STRINGP (new_value) > ? 
new_value : Qunspecified); > } > else if (EQ (param, Qcursor_color)) > { >- face = Qcursor; >- lface = lface_from_face_name (f, face, 1); >+ lface = lface_from_face_name (f, Qcursor, 1); > LFACE_BACKGROUND (lface) = (STRINGP (new_value) > ? new_value : Qunspecified); > } > else if (EQ (param, Qmouse_color)) > { >- face = Qmouse; >+ lface = lface_from_face_name (f, Qmouse, 1); >+||||||| >+ realize_basic_faces (f); >+ } >+ if (EQ (param, Qborder_color)) >+ { >+ lface = lface_from_face_name (f, Qborder, 1); >+ LFACE_BACKGROUND (lface) = (STRINGP (new_value) >+ ? new_value : Qunspecified); >+ } >+ else if (EQ (param, Qcursor_color)) >+ { >+ lface = lface_from_face_name (f, Qcursor, 1); >+======= >+ realize_basic_faces (f); >+ } >+ else if (EQ (param, Qborder_color)) >+ { >+ face = Qborder; > lface = lface_from_face_name (f, face, 1); > LFACE_BACKGROUND (lface) = (STRINGP (new_value) > ? new_value : Qunspecified); > } >- >- /* Changing a named face means that all realized faces depending on >- that face are invalid. Since we cannot tell which realized faces >- depend on the face, make sure they are all removed. This is done >- by incrementing face_change_count. The next call to >- init_iterator will then free realized faces. */ >- if (!NILP (face) >- && NILP (Fget (face, Qface_no_inherit))) >+ else if (EQ (param, Qcursor_color)) > { >- ++face_change_count; >- ++windows_or_buffers_changed; >+ face = Qcursor; >+ lface = lface_from_face_name (f, face, 1); >+>>>>>>> >+ LFACE_BACKGROUND (lface) = (STRINGP (new_value) >+<<<<<<< >+ ? new_value : Qunspecified); > } > } > >@@ -7267,3 +7281,41 @@ > defsubr (&Sx_font_family_list); > #endif /* HAVE_WINDOW_SYSTEM */ > } >+||||||| >+ ? new_value : Qunspecified); >+ } >+ else if (EQ (param, Qmouse_color)) >+ { >+ lface = lface_from_face_name (f, Qmouse, 1); >+ LFACE_BACKGROUND (lface) = (STRINGP (new_value) >+ ? new_value : Qunspecified); >+ } >+} >+ >+ >+======= >+ ? 
new_value : Qunspecified); >+ } >+ else if (EQ (param, Qmouse_color)) >+ { >+ face = Qmouse; >+ lface = lface_from_face_name (f, face, 1); >+ LFACE_BACKGROUND (lface) = (STRINGP (new_value) >+ ? new_value : Qunspecified); >+ } >+ >+ /* Changing a named face means that all realized faces depending on >+ that face are invalid. Since we cannot tell which realized faces >+ depend on the face, make sure they are all removed. This is done >+ by incrementing face_change_count. The next call to >+ init_iterator will then free realized faces. */ >+ if (!NILP (face) >+ && NILP (Fget (face, Qface_no_inherit))) >+ { >+ ++face_change_count; >+ ++windows_or_buffers_changed; >+ } >+} >+ >+ >+>>>>>>> >./contrib/xfaces/merge FAILED 0.10 >1 unresolved conflict found >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.575495459 +0100 >@@ -18,8 +18,17 @@ > char *http_proxy_server_addr; //for reverse lookups > int http_proxy_server_port; //for reverse lookups > int auth_d_reload_interval; >+<<<<<<< > char *debuglvl_path; > >+||||||| >+ char *debuglvl_path; >+} tbill_state; >+======= >+ char *debuglvl_path; >+ int production_mode; >+} tbill_state; >+>>>>>>> > /* internal state */ > DB *db; > FunctionHash * fh; >@@ -27,7 +36,6 @@ > String *linkPathPrefix; > int auth_d_pipe_fd; > int auth_d_fd; >- int production_mode; > } tbill_state; > > void generatePage(tbill_state *, ServerRequest *); >./contrib/mod_tbill/merge FAILED 0.00 >2 unresolved conflicts found >--- merge 2012-05-14 13:42:09.000000000 +0200 >+++ - 2018-01-02 13:25:09.621485366 +0100 >@@ -1154,15 +1154,27 @@ > DEFINEPARSER(pfkey_prop_parse); > DEFINEPARSER(pfkey_supported_parse); > DEFINEPARSER(pfkey_spirange_parse); >+<<<<<<< > DEFINEPARSER(pfkey_x_kmprivate_parse); > DEFINEPARSER(pfkey_x_satype_parse); > DEFINEPARSER(pfkey_x_ext_debug_parse); >+ >+||||||| >+DEFINEPARSER(pfkey_x_ext_debug_parse); >+DEFINEPARSER(pfkey_x_ext_protocol_parse); >+ >+struct pf_key_ext_parsers_def *ext_default_parsers[]= 
>+======= >+DEFINEPARSER(pfkey_x_ext_debug_parse); >+DEFINEPARSER(pfkey_x_ext_protocol_parse); > #ifdef NAT_TRAVERSAL > DEFINEPARSER(pfkey_x_ext_nat_t_type_parse); > DEFINEPARSER(pfkey_x_ext_nat_t_port_parse); > #endif > > struct pf_key_ext_parsers_def *ext_default_parsers[]= >+>>>>>>> >+struct pf_key_ext_parsers_def *ext_default_parsers[]= > { > NULL, /* pfkey_msg_parse, */ > &pfkey_sa_parse_def, >@@ -1197,6 +1209,7 @@ > &pfkey_x_ext_nat_t_port_parse_def, > &pfkey_address_parse_def > #endif >+<<<<<<< > }; > > int >@@ -1787,3 +1800,16 @@ > * End: > * > */ >+||||||| >+}; >+ >+int >+pfkey_msg_parse(struct sadb_msg *pfkey_msg, >+ struct pf_key_ext_parsers_def *ext_parsers[], >+======= >+}; >+ >+int >+pfkey_msg_parse(struct sadb_msg *pfkey_msg, >+ struct pf_key_ext_parsers_def *ext_parsers[], >+>>>>>>> >./contrib/pfkey_v2_parse.c/merge FAILED 0.02 >21 succeeded and 40 failed >make: *** [Makefile:27: test] Error 1 > [31;01m*[0m ERROR: dev-util/wiggle-0.9-r1::gentoo failed (test phase): > [31;01m*[0m Make test failed. See above for details. > [31;01m*[0m > [31;01m*[0m Call stack: > [31;01m*[0m ebuild.sh, line 124: Called src_test > [31;01m*[0m environment, line 2282: Called default > [31;01m*[0m phase-functions.sh, line 853: Called default_src_test > [31;01m*[0m phase-functions.sh, line 882: Called __eapi0_src_test > [31;01m*[0m phase-helpers.sh, line 767: Called die > [31;01m*[0m The specific snippet of code: > [31;01m*[0m $emake_cmd ${internal_opts} test || \ > [31;01m*[0m die "Make test failed. See above for details." > [31;01m*[0m > [31;01m*[0m If you need support, post the output of `emerge --info '=dev-util/wiggle-0.9-r1::gentoo'`, > [31;01m*[0m the complete build log and the output of `emerge -pqv '=dev-util/wiggle-0.9-r1::gentoo'`. > [31;01m*[0m The complete build log is located at '/var/tmp/portage/dev-util/wiggle-0.9-r1/temp/build.log'. > [31;01m*[0m The ebuild environment file is located at '/var/tmp/portage/dev-util/wiggle-0.9-r1/temp/environment'. 
> [31;01m*[0m Working directory: '/var/tmp/portage/dev-util/wiggle-0.9-r1/work/wiggle-0.9' > [31;01m*[0m S: '/var/tmp/portage/dev-util/wiggle-0.9-r1/work/wiggle-0.9'