CIFS performance drops to 4-12 MiB/s (vs. 30-50 MiB/s in gentoo-sources-2.6.34-r12) when using a kernel >= 2.6.36 (2.6.36-r5 and 2.6.37-r1 tested). This happens for gentoo-sources as well as for vanilla-sources. Some people report 60 MiB/s when they use smbclient/get (the non-kernel implementation) but only 8 MiB/s with mount.cifs (see also the forum discussion).

Reproducible: Always

Steps to Reproduce:
1. Install a kernel >= 2.6.36
2. Mount a share with mount.cifs
3. dd if=/dev/zero of=/samba/share bs=1M count=1k

Actual Results: slow speed (4-12 MiB/s)
Expected Results: higher speed (30-50 MiB/s)
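For reference, a minimal sketch of the dd benchmark described above. The CIFS mount point is hypothetical; this script defaults TARGET to a local temp directory so the commands can be tried anywhere, which only demonstrates the measurement pattern, not the network regression itself.

```shell
#!/bin/sh
# Sketch of the benchmark from this report. TARGET would normally be a
# directory on the CIFS mount (e.g. a path under your mount.cifs mount
# point); it falls back to a local temp dir here.
TARGET=${TARGET:-$(mktemp -d)}

# Write test (client -> server): 64 MiB of zeroes; dd prints the
# throughput on its final status line.
dd if=/dev/zero of="$TARGET/bench.img" bs=1M count=64 2>&1 | tail -n 1

# Read test (server -> client). Over a real CIFS mount you would drop
# the page cache first so the read actually crosses the network:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if="$TARGET/bench.img" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -rf "$TARGET"
```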
Please test with gentoo-sources-2.6.38
till@wgw-till ~ $ dd if=/some/data/on/server bs=1M of=/dev/null
697+1 records in
697+1 records out
730935452 bytes (731 MB) copied, 95.012 s, 7.7 MB/s
till@wgw-till ~ $ dd if=/dev/zero of=/data/server/dfs/wg/temp/bench.img bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 27.8255 s, 38.6 MB/s
English?
Sorry, I thought you could guess the meaning.
- If I test the speed from client to server, I get about 40 MiB/s.
- If I test the speed from server to client, I get about 8 MiB/s.
So the performance regression is still there for reads (download).
Please take this upstream to bugzilla.kernel.org and post the url back here.
Done -> https://bugzilla.kernel.org/show_bug.cgi?id=31662
Thanks, we'll watch the upstream report and backport any fixes to active gentoo-source versions
(In reply to comment #7) > Thanks, we'll watch the upstream report and backport any fixes to active > gentoo-source versions There is a patch upstream (see upstream bug report). It seems to do the trick for me and the author thinks it is stable too.
Great news. Once it hits upstream, reopening this bug will let me know, and I will work to backport the patch.
Merged into cifs-2.6.git by Steve French. (I hope that is the "upstream" you meant for reopening.)

> Commit 522440ed made cifs set backing_dev_info on the mapping attached
> to new inodes. This change caused a fairly significant read performance
> regression, as cifs started doing page-sized reads exclusively.
>
> By virtue of the fact that they're allocated as part of cifs_sb_info by
> kzalloc, the ra_pages on cifs BDIs get set to 0, which prevents any
> readahead. This forces the normal read codepaths to use readpage instead
> of readpages causing a four-fold increase in the number of read calls
> with the default rsize.
>
> Fix it by setting ra_pages in the BDI to the same value as that in the
> default_backing_dev_info.
>
> Cc: stable@kernel.org
> Reported-and-Tested-by: Till <till2.schaefer@uni-dortmund.de>
> Signed-off-by: Jeff Layton <jlayton@redhat.com>
> ---
>  fs/cifs/cifsfs.c | 1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
> index 1af2470..fb6a2ad 100644
> --- a/fs/cifs/cifsfs.c
> +++ b/fs/cifs/cifsfs.c
> @@ -131,6 +131,7 @@ cifs_read_super(struct super_block *sb, void *data,
>  		kfree(cifs_sb);
>  		return rc;
>  	}
> +	cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;
>
>  #ifdef CONFIG_CIFS_DFS_UPCALL
>  	/* copy mount params to sb for use in submounts */
> --
> 1.7.4.1
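The root cause described in the patch (ra_pages left at 0, disabling readahead) can be checked from userspace, assuming the kernel exposes backing devices via sysfs: each registered BDI has a read_ahead_kb attribute. On an affected kernel the cifs entry would read 0; after the patch it should match the default (typically 128). The exact cifs BDI name is kernel-dependent.

```shell
# List the readahead window of every registered BDI. Assumes sysfs is
# mounted at /sys; if no BDIs are exposed, the loop simply prints nothing.
for f in /sys/class/bdi/*/read_ahead_kb; do
    [ -e "$f" ] || continue   # unmatched glob: no BDIs exposed here
    printf '%s: %s KiB\n' "${f%/read_ahead_kb}" "$(cat "$f")"
done
```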
This will be in the next release of 2.6.38-gentoo-sources
Released in gentoo-sources-2.6.38-r3
Did you mean -r2? I just got these three mails today:
Patch "cifs: set ra_pages in backing_dev_info" has been added to the 2.6.33-longterm tree
Patch "cifs: set ra_pages in backing_dev_info" has been added to the 2.6.32-longterm tree
Patch "cifs: set ra_pages in backing_dev_info" has been added to the 2.6.38-stable tree
Darn, I made that typo all over the place this morning... Yes, -r2.