A quote from Wikipedia (http://en.wikipedia.org/wiki/ZFS#Linux): A native port of ZFS for Linux is now available. This ZFS on Linux port was produced at the Lawrence Livermore National Laboratory (LLNL) under Contract No. DE-AC52-07NA27344 (Contract 44) between the U.S. Department of Energy (DOE) and Lawrence Livermore National Security, LLC (LLNS) for the operation of LLNL. It has been approved for release under LLNL-CODE-403049.

The site and tarball are at the URL for this bug.

It’s really, really nice to finally have the chance to get a fast ZFS implementation on Linux. I’d give an arm and a leg to be able to use this, to finally be able to use a scrubbing, mirrored ZFS as my main file system, as I don’t trust hard disks further than my nose goes. (With good reason.)

Let’s make this into an ebuild. Unfortunately it’s a bit harder than my ebuild skill (“n00b” ;) goes. Luckily there are a lot of people wishing for this. Maybe you too. :) I will provide whatever assistance and testing I am able to, as this is a top-priority thing for me.

Reproducible: Always

Actual Results: zfs-fuse is slow as hell and a memory hog that makes even the worst offenders pale in comparison. ;) Also, because it’s FUSE, it lacks important features, like being a first-class citizen in the system. All this makes it impossible to reasonably use it on a normal workstation or home server, where it would be needed.

Expected Results: A fast and stable bootable ZFS in the kernel, resulting from installing a zfs module ebuild.
A first *experimental* ebuild for ZFS 0.4.9 just hit the sping overlay:

# sudo layman -a sping

More or less it’s just making it compile; don’t expect anything from it yet. In case you do play with it, please feed bugs and patches back to me.
Wow, awesome man! I’ll try it as soon as I can! (Pretty busy right now.) Just, really, thanks! :)
Hey! (In reply to comment #1) I just synced, and you missed adding the keyword on it and on spl, and creating a new digest/manifest. :) (Not a problem though, as I simply did it myself. …uuunless we got an evil cracker there with a trojan. ;) But weirdly, spl depends on sys-kernel/gentoo-sources-2.6.32-r12. But I have zen-sources installed, so… that’s where I’m stuck. I assume it’s a kernel patch. I never did those, to be honest. OK, I did, but it was nothing more than running patch. This is not something like that, I assume. Do I have to use gentoo-sources? (In other words: Was it already hard enough to get it into those at all, so that making it work with all *-sources would not be an option?) Because that would limit my testing time quite a lot. :/ Well, thank you anyway. :)
(In reply to comment #3) > I just synced, and you missed the keyword on it and on spl, and to create a new > digest/manifest. :) > (Not a problem though, as I simply did it myself. …uuunless we got an evil > cracker there with a trojan. ;) I left KEYWORDS empty purposely: it's sort of an alternative to an entry in package.mask, a way of masking. Keyword "**" unmasks ebuilds masked like that. I checked the manifests: seem sane. I guess the mismatch came from filling keywords on your end. > But weirdly, spl depends on sys-kernel/gentoo-sources-2.6.32-r12. But I have > zen-sources installed, so… that’s where I’m stuck. I don't plan to keep it at that, no. It may become >=virtual/linux-sources-... later.
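For anyone following along, a sketch of how unmasking such keyword-less ebuilds usually looks. The file name `zfs` and the `PORTAGE_CONFIGROOT` indirection are my own choices (drop the indirection for real use on `/etc/portage`), not something from the overlay:

```shell
# Sketch: accept the keyword-less (i.e. masked-by-omission) ebuilds via
# package.accept_keywords. The "**" keyword accepts ebuilds that carry no
# KEYWORDS at all, which is exactly how these are masked.
# PORTAGE_CONFIGROOT lets you try this outside /etc; unset it for real use.
ROOT="${PORTAGE_CONFIGROOT:-/tmp/portage-demo}"
mkdir -p "$ROOT/etc/portage/package.accept_keywords"
cat >> "$ROOT/etc/portage/package.accept_keywords/zfs" <<'EOF'
sys-devel/spl **
sys-fs/zfs **
EOF
```

After that, `emerge -av sys-fs/zfs` should at least get past the mask.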
(In reply to comment #4) > I left KEYWORDS empty purposely: it's sort of an alternative to an entry in > package.mask, a way of masking. Keyword "**" unmasks ebuilds masked like that. OK. I already guessed something like that. :) Good to know. > I checked the manifests: seem sane. I guess the mismatch came from filling > keywords on your end. Sorry, that’s impossible. I first tried to emerge the ebuild, and it did not even accept it at all, because of the missing manifest. Only when I created it, did I get to the point where portage complained about missing keywords. But… doesn’t matter anymore. :) > I don't plan to keep it at that, no. > It may become >=virtual/linux-sources-... later. Would it be OK, if I just replace that in the ebuild, to try it out? Or is there no chance of this working?
(In reply to comment #5) > > It may become >=virtual/linux-sources-... later. > > Would it be OK, if I just replace that in the ebuild, to try it out? Or is > there no chance of this working? I have played around with this now and fixed the dependencies as follows: - */*-0.4.9 requires ~virtual/linux-sources-2.6.32 (any revision of 2.6.32) - */*-9999 requires >=virtual/linux-sources-2.6.32
Ok, I just want to add links to some file system testing tools, in case others also want to run them, so we can quickly make this thing reliable. LTP has a pretty nice list: http://ltp.sourceforge.net/tooltable.php (Scroll down to the “Filesystems” section.) It’s available in portage as “app-benchmarks/ltp”, but hard-masked because of several problems. NTFS-3G also has a suite to offer: http://lwn.net/Articles/276617/ or http://www.tuxera.com/community/posix-test-suite/ which I could not find in portage or any overlay. But it is specifically designed to also work with ZFS, so I hope to give it a try by compiling it manually. There are probably more, so if you know something fitting, just add it. But first I have lots of other things to do. Also, I won’t trust anything earlier than after a few weeks of intense use of all features. So it will take a bit of time too.
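A hedged sketch of how I expect the manual compile-and-run of that suite to go, assuming it is pjd’s fstest (a set of TAP tests driven by prove(1)) that the Tuxera page ships; tarball name and mount point are illustrative:

```shell
# Assumed workflow for the POSIX test suite (pjd-fstest); the suite must
# be run from inside the filesystem under test, since it creates and
# inspects files there. Paths/names below are examples, not confirmed.
run_posix_suite() {
    cd /mnt/testfs || return 1              # the fs under test
    tar xzf /tmp/pjd-fstest.tgz || return 1 # tarball name is an assumption
    cd pjd-fstest-* || return 1
    make                                    # builds the fstest helper binary
    prove -r tests/                         # runs the whole TAP suite
}
```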
There is a small bug in there. But I don’t know if the ebuild is wrong, or something else. The problem is that spl complains that there is no /usr/src/$yourKernelVersion/arch/amd64/Makefile, so it fails early on in the compilation. There really is no such directory there. There are only x86 and x86_64 (among all the others). As a quick fix, after doing “cd /usr/src/linux/arch && ln -s x86 amd64”, spl compiled. But I have no idea if this is good. Feels nasty and wrong to me. :) @Sebastian Pipping: I bet you know more about these things, right? :)
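For the record, my quick fix wrapped up a little more defensively. The symlink is a stop-gap I came up with, not an endorsed fix; the kernel tree only has arch/x86 (which covers 64-bit too), but spl’s build apparently derives an “amd64” arch name and looks for arch/amd64/Makefile:

```shell
# Stop-gap workaround: give spl the arch/amd64 directory it expects by
# pointing it at arch/x86, which is where 64-bit x86 actually lives.
fix_arch_symlink() {
    cd /usr/src/linux/arch || return 1
    [ -e amd64 ] || ln -s x86 amd64   # no-op if something is already there
}
```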
OK, now I got to another problem, in sys-fs/zfs itself. Here is the relevant snippet (German messages translated):

> make[3]: Entering directory `/var/tmp/portage/sys-fs/zfs-9999/work/zfs-9999/cmd'
> make[3]: Nothing to be done for 'all-am'.
> make[3]: Leaving directory `/var/tmp/portage/sys-fs/zfs-9999/work/zfs-9999/cmd'
> make[2]: Leaving directory `/var/tmp/portage/sys-fs/zfs-9999/work/zfs-9999/cmd'
> Making all in module
> make[2]: Entering directory `/var/tmp/portage/sys-fs/zfs-9999/work/zfs-9999/module'
> # Make the exported SPL symbols available to these modules.
> cp /module/Module.symvers .
> cp: cannot stat '/module/Module.symvers': No such file or directory
> make[2]: *** [modules] Error 1

Or in short: “/module/Module.symvers” is missing. No idea where or why, though. (I’m very tired. I should sleep. So I’ll call it a day for “today”. I hope I could help a bit with the debugging.) @Everyone in CC: If you want this thing to work, your help will help you too. :)
Ok, one last thing: There is a Module.symvers in my /usr/src/linux/ kernel source directory. (Yes, right in the root dir of it.) But no idea if it’s the right one. The Makefile in /var/tmp/portage/sys-fs/zfs-9999/work/zfs-9999/module/ says:

> # Make the exported SPL symbols available to these modules.
> cp /module/Module.symvers .

So my guess is that the zfs ebuild somehow assumes it also is the spl ebuild and shares the same directory/sandbox or something. No idea… I swear, I’m gonna stop now! *gah* ;)
Navid, please sync and try again. rbu and I have been fixing a few bits ..
Created attachment 241951 [details]
/var/tmp/portage/sys-devel/spl-9999/temp/spl-9999-includedir.patch.out

OK, I now found some time to try it again, and I got:

 * Failed Patch: spl-9999-includedir.patch !
 * ( /usr/local/portage/layman/sping/sys-devel/spl/files/spl-9999-includedir.patch )

The patch output is attached.
(In reply to comment #12) > * Failed Patch: spl-9999-includedir.patch ! That's the "beauty" of live ebuilds. Fixed for the moment. Alternatively, give just-added v0.5.0 a try. SPL got updated, too.
(In reply to comment #13) > That's the "beauty" of live ebuilds. Fixed for the moment. No sweat, I already assumed that. :) I always give it a couple of tries over a few days before I even say something. Btw, I think problems like in this case, where the code may be OK but the ebuild isn’t, would be solved if ebuilds could request an update of the overlay (and then use the updated ebuild) before an update of the software itself. Hmm, maybe I’ll create a quick live-update script that goes like this:

eix-sync && emerge $@ @live-rebuild

:)
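A first stab at that idea, as a shell function. This is a sketch assuming eix (for eix-sync) and portage with sets support (for the @live-rebuild set) are installed:

```shell
# Hypothetical "live-update" wrapper: refresh the tree and overlays first,
# so fixed live (-9999) ebuilds arrive before anything is built, then
# emerge whatever was asked for plus all installed live ebuilds.
live_update() {
    eix-sync || return 1                # sync main tree + all overlays
    emerge --ask "$@" @live-rebuild     # requested atoms + live ebuilds
}
# Usage: live_update sys-fs/zfs
```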
Ok, compiled fine now. :) A small suggestion: For all binaries that don’t have a man page, add a line to the top of their --help output that describes what they are for. (e.g. zpios) Now it’s time to test its stability and feature-completeness. I’ll check the tools from comment #7 over the next few days…
Hmm… the first possible problem: There is no way to mount the zfs. ^^

I now have the following:

# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
test          131K  63,4M    19K  /test
test/archiv    19K  63,4M    19K  /test/archiv

But when I run “mount”, there is nothing new mounted. Also, “zfs mount -a” seems to change nothing. How do I mount the created zfs?
You can't; the FAQ says:

1.3 How do I mount the file system?
You can’t… at least not today. While we have ported the majority of the ZFS code to the Linux kernel, that does not yet include the ZFS POSIX layer. The only interface currently available from user space is the ZVOL virtual block device.

I hope this will be implemented soon.
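So for now, a sketch of what IS usable: a ZVOL, i.e. a virtual block device backed by the pool, with an ordinary Linux filesystem on top. Pool/volume names, the size, and the /dev path below are illustrative assumptions; the device node location may differ on your setup:

```shell
# Illustrative only: carve a ZVOL out of the pool and put a conventional
# filesystem on it, since the ZFS POSIX layer is not ported yet.
make_zvol_fs() {
    zfs create -V 1G test/vol         # 1 GiB virtual block device
    mkfs.ext4 /dev/zvol/test/vol      # any Linux fs on top of it
    mkdir -p /mnt/vol
    mount /dev/zvol/test/vol /mnt/vol
}
```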
(In reply to comment #17) > You can't the FAQ say: Aaah, alright. Nothing mentioned there’s an FAQ. It’s OK. I already thought I was missing something. ^^ Of course that way, I can’t test it. I wish you luck and that it’s fun to do. :) Just tell me when it’s ready for some testing, and I’ll be there.
Hey, I’m checking back to see what the state of the project is… The ebuild (patch) still doesn’t work. How are you coming along? Unfortunately I don’t write C/C++ (ever). I wish I could help a bit.
Created attachment 256155 [details, diff]
A fixed version of the patch.

OK, I’ve got SPL to compile again, by fixing the patch. Just clone the ebuild (’s directory), replace the patch in the files directory with this one, re-generate the manifest, and off you go! :) (Now I’ll try to do the same with the zfs ebuild.)
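Spelled out as a sketch, with example paths you will need to adjust (the overlay locations and the patch being in the current directory are my assumptions):

```shell
# Example paths; adjust OVERLAY and SRC to your setup. Assumes the fixed
# spl-9999-includedir.patch sits in the current directory.
clone_spl_with_fixed_patch() {
    OVERLAY=/usr/local/portage/local          # your local overlay
    SRC=/var/lib/layman/sping/sys-devel/spl   # the overlay's spl directory
    mkdir -p "$OVERLAY/sys-devel"
    cp -r "$SRC" "$OVERLAY/sys-devel/"        # clone the ebuild('s directory)
    cp spl-9999-includedir.patch "$OVERLAY/sys-devel/spl/files/"  # swap in the fix
    ebuild "$OVERLAY/sys-devel/spl/spl-9999.ebuild" manifest      # re-generate manifest
}
```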
Created attachment 256157 [details] Fixed zfs-9999.ebuild. Hey, a quick fix made the zfs-9999.ebuild compile too! How nice! I only had to remove the line: > epatch "${FILESDIR}"/${PN}-0.4.9-linking.patch Why was it me who had to do this? It’s not like it couldn’t have been integrated already…
@ Sebastian Pipping: Can you please integrate this into the sping overlay? Thanks! :)
KQ Infotech released a version based on the LLNL code on January 14th, and it is said to be more feature complete. Maybe the ebuild should use that instead? https://github.com/zfs-linux
(In reply to comment #23) Sounds great, but with them being a company, what’s the chance of them not suddenly switching licenses and only offering updates for money? It would definitely be more feature-complete, though. ;) (See comment #17.) I have no idea if this project is still being developed. The last updates are from 2010-12-14 (https://github.com/behlendorf/zfs/commits/master via http://git.goodpoint.de/?p=overlay-sping.git;a=history;f=sys-fs/zfs;hb=HEAD). So if you want to make the KQ version into an ebuild, that would be most welcome. :)
(In reply to comment #24) > So if you want to make the KQ version into an ebuild, that would be most > welcome. :) There is already an ebuild for zfs-linux in the sping overlay: http://git.goodpoint.de/?p=overlay-sping.git;a=tree;f=sys-fs/zfs;hb=HEAD
(In reply to comment #25) > There is already an ebuild for zfs-linux in the sping overlay: > http://git.goodpoint.de/?p=overlay-sping.git;a=tree;f=sys-fs/zfs;hb=HEAD Wait, that’s the old one. What’s the difference of what KQ did then?
KQ also offers lzfs, which is the POSIX layer for zfs filesystems (i.e. you can use more than just zvols); see https://github.com/zfs-linux. However, I could not get it to build on kernel 2.6.37.
I think this post explains the situation quite well:
https://groups.google.com/a/zfsonlinux.org/group/zfs-devel/browse_thread/thread/032a2109f02fc1ff/844594e248f197cb#844594e248f197cb

About 2.6.37 support: https://github.com/behlendorf/zfs/issues#issue/85

But these changes made it neither into https://github.com/zfs-linux nor into the “top” branch in the behlendorf git tree. So if you are trying to emerge the zfs-9999.ebuild from the sping overlay, you have to delete the following line:

EGIT_BRANCH="top"
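The edit above as a small function; pass it the path to your local copy of the ebuild, then re-generate the manifest. Without EGIT_BRANCH, git.eclass falls back to the repository’s default branch:

```shell
# Delete the EGIT_BRANCH="top" line from a local copy of the ebuild.
# Remember to run `ebuild <path> manifest` afterwards.
drop_top_branch() {
    sed -i '/^EGIT_BRANCH="top"$/d' "$1"
}
```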
Created attachment 260960 [details]
Prototype ebuild for ZFS POSIX layer.

I modified the zfs ebuild (from the sping overlay) for lzfs. Seems fine, except that it can’t find the spl version when configuring. I guess it needs the same kind of patch that spl and zfs needed, to fix the include dirs. I’m too tired right now to fix this, and also haven’t really got the knowledge to do it, so I’ll leave it until I remember. It’s actually pretty easy. You • look at those patches for spl and zfs, • do an “ebuild ${yourLocalRepository}/lzfs-9999.ebuild prepare”, • edit the makefiles (there are two, as far as I know) in the temp dir (/var/tmp/portage/sys-fs/lzfs-9999/work/lzfs-9999/) in the same way as those patches, • do a diff, and save that one as a patch in “${yourLocalRepository}/files/lzfs-9999-includedir.patch”. • Then you uncomment the line using the patch (line 31), • do an “ebuild ${yourLocalRepository}/lzfs-9999.ebuild manifest”, and you’re ready for testing the ebuild.
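The same steps as a rough shell sketch. REPO stands in for “${yourLocalRepository}” from the list; both paths are examples and may differ on your system:

```shell
# Sketch of the patch-creation workflow above. The hand-editing step in
# the middle cannot be automated here, of course.
make_includedir_patch() {
    REPO=/usr/local/portage/local/sys-fs/lzfs     # your local repository
    WORK=/var/tmp/portage/sys-fs/lzfs-9999/work
    ebuild "$REPO/lzfs-9999.ebuild" prepare        # unpack + patch phases
    cp -r "$WORK/lzfs-9999" "$WORK/lzfs-9999.orig" # keep a pristine copy
    # ... now hand-edit the Makefiles under $WORK/lzfs-9999,
    #     following the spl/zfs includedir patches ...
    (cd "$WORK" && diff -ur lzfs-9999.orig lzfs-9999) \
        > "$REPO/files/lzfs-9999-includedir.patch"
    # uncomment the epatch line (line 31) in the ebuild, then:
    ebuild "$REPO/lzfs-9999.ebuild" manifest
}
```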
(In reply to comment #28) I forgot to reply the last time:

> I think this post explains the situation quite well:
> https://groups.google.com/a/zfsonlinux.org/group/zfs-devel/browse_thread/thread/032a2109f02fc1ff/844594e248f197cb#844594e248f197cb

So basically, it’s dead, until someone steps up to the plate. Which is unlikely to happen. I wish I could do it, but no C/C++ here. Certainly not for such “mission-critical” work. :/

Should I resolve this as UPSTREAM, CANTFIX, WONTFIX, LATER, or REMIND? ;)
I added ebuilds for zfs and zpl to the sci overlay.
(In reply to comment #31) > I added ebuild for zfs and zpl to sci overlay Why to the sci overlay? If it's not right for the main tree, then this may be a better fit for the betagarden overlay. You already have push access to it.
(In reply to comment #32) > (In reply to comment #31) > > I added ebuild for zfs and zpl to sci overlay > > Why to the sci overlay? > > If it's not right for the main tree than this may be a better fit to the > betagarden overlay. You already have push access to it. I'll move it to the tree soon. Actually, I'm going to use zfs as ldiskfs for lustre from LLNL. So I moved it to science.
> I'll move it to tree soon. Actualy i'm going to use zfs as ldiskfs for lustre
> from LLNL. So i moved it science

There are two SPLs: one in the raw overlay that pulls from git, and one in sys-devel that has both spl-0.6.0_rc4 and a git -9999 version. The one from 'raw' installs okay on my machine, but sys-devel/spl-0.6.0_rc4 does not. sys-fs/zfs also gives the same error as the one below. How should I proceed to get zfs running on my box?

Package:    sys-devel/spl-0.6.0_rc4
Repository: science
Maintainer: cluster@gentoo.org
USE:        amd64 elibc_glibc kernel_linux multilib userland_GNU
FEATURES:   sandbox splitdebug

 * Determining the location of the kernel source code
 * Found kernel source directory: /usr/src/linux
 * Found kernel object directory: /lib/modules/2.6.38-gentoo-r5/build
 * Found sources for kernel version: 2.6.38-gentoo-r5
>>> Unpacking source...
>>> Unpacking spl-0.6.0-rc4.tar.gz to /var/tmp/portage/sys-devel/spl-0.6.0_rc4/work
>>> Source unpacked in /var/tmp/portage/sys-devel/spl-0.6.0_rc4/work
>>> Preparing source in /var/tmp/portage/sys-devel/spl-0.6.0_rc4/work/spl-0.6.0-rc4 ...
 * Applying spl-0.6.0-includedir.patch ...
 * Running eautoreconf in '/var/tmp/portage/sys-devel/spl-0.6.0_rc4/work/spl-0.6.0-rc4' ...
 * Running autoconf -I ./config ...
 * Failed Running autoconf !
 * Include in your bugreport the contents of: [It is below]
 *   /var/tmp/portage/sys-devel/spl-0.6.0_rc4/temp/autoconf.out
 * ERROR: sys-devel/spl-0.6.0_rc4 failed (prepare phase):
 *   Failed Running autoconf !
 * Call stack:
 *   ebuild.sh, line 56: Called src_prepare
 *   environment, line 3578: Called eautoreconf
 *   environment, line 1148: Called eautoconf
 *   environment, line 1084: Called autotools_run_tool 'autoconf' '-I' './config'
 *   environment, line 545: Called die
 * The specific snippet of code:
 *   die "Failed Running $1 !";

If I run things manually, I have to run ./autogen.sh, or I end up with missing autotools macros.