Ricardo is no longer maintaining and developing zfs-fuse. The new developer team is here: http://rudd-o.com/new-projects/zfs They also have a new beta version; please bump a new ebuild. Thanks. Reproducible: Always
I am working on an ebuild here, zfs-fuse-0.6.0_beta-r433.ebuild compiles fine so far, I am still wrangling with getting the init.d-script working. I was able to use zpool and zfs already by starting zfs-fuse manually.
(In reply to comment #1) > I was able to use zpool and zfs already by starting zfs-fuse manually. Been playing around with that again. I am not able to reach http://rudd-o.com/new-projects/zfs for now. Does anyone know more about the current status of zfs-fuse upstream?
Created attachment 212198 [details] The ebuild
Created attachment 212200 [details] File to put into files/0.6.0 (init script) Both attachments are 0.5.0 modifications. I am not very fluent writing ebuilds. Any comments are welcome.
(In reply to comment #4)
> Both attachments are 0.5.0 modifications.
> I am not very fluent writing ebuilds.
> Any comments are welcome.

Re-edited your ebuild to build 0.6.0 (instead of the beta). I still get problems with starting up and mounting (sorry for the German part; it means "No such file or directory"):

# /etc/init.d/zfs-fuse start
 * Starting ZFS-FUSE ...                                    [ ok ]
 * Mounting ZFS filesystems ...
connect: Datei oder Verzeichnis nicht gefunden
Please make sure that the zfs-fuse daemon is running.
internal error: failed to initialize ZFS library            [ !! ]
 * ERROR: zfs-fuse failed to start

This is ~amd64 here, sys-fs/fuse-2.8.1, kernel 2.6.31-tuxonice.
(In reply to comment #5)
> (In reply to comment #4)
...
> Re-edited your ebuild to build 0.6.0 (instead of the beta).

Why? It actually is a beta! Look at its site.

> I still get problems with starting up and mounting (sorry for the german parts,

It's running fine here, but I have a special init script for crypto mounts; I changed *my* init script to depend on it, and it runs in the "default" run level instead of "boot". So, try rc-update del zfs-fuse followed by rc-update add zfs-fuse default. If that works and you need it in the "boot" run level, check what is missing, e.g. whether the fuse module is already loaded. Just insert the modprobe into the script. HTH
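A minimal sketch of the run-level change suggested above (run as root on an OpenRC system; the service name matches the init script from the attached ebuild):

```shell
# Move zfs-fuse out of the boot run level into default
rc-update del zfs-fuse boot
rc-update add zfs-fuse default

# If it must stay in the boot run level instead, make sure the
# fuse kernel module is loaded before the daemon starts, e.g.:
modprobe fuse
```

In the boot run level the modprobe would go into the init script's start() function rather than being run by hand.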
Created attachment 212431 [details] ebuild for 0.6.0
(In reply to comment #6) > (In reply to comment #5) > > (In reply to comment #4) > ... > > > > Re-edited your ebuild to build 0.6.0 (instead of the beta). > Why? Actually it is a beta! Look at its site. I hadn't seen that, in the meanwhile, betas went away. I am very sorry for misunderstanding your message. I am attaching the same ebuild but for 0.6.0.
> I hadn't seen that, in the meanwhile, betas went away. > I am very sorry for misunderstanding your message. No problem. > I am attaching the same ebuild but for 0.6.0. Built your ebuild and restarted things in different runlevels, same error as posted above. fuse-module is loaded correctly.
(In reply to comment #9) ... > > Built your ebuild and restarted things in different runlevels, same error as > posted above. fuse-module is loaded correctly. > > If I understand, you are able to run manually using zfs-fuse but not using /etc/init.d/zfs.fuse. See if you have group disk and user daemon /etc/{group,passwd}. Remove the mount part of the init script and then try zfs mount -a. If this works, try to put a sync and a sleep 5 between the zfs-fuse launch and zfs mount. Pls. do each experiment after a fresh (re)boot. I ran into troubles after zfs crashes only fixed by rebooting. Beyond these I don't see any other reason.
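The experiment described above, sketched as an init-script start() function (a sketch only: the daemon path and options are assumptions, and the sync/sleep are the suggested workaround, not a proper fix):

```shell
start() {
    ebegin "Starting ZFS-FUSE"
    /usr/sbin/zfs-fuse    # path is an assumption; adjust to your install
    eend $?

    # Workaround under test: give the daemon time to create its
    # control socket before any zfs command tries to talk to it.
    sync
    sleep 5

    ebegin "Mounting ZFS filesystems"
    zfs mount -a
    eend $?
}
```

If the sleep makes the mount succeed, the real fix is to wait for the daemon's socket rather than a fixed delay.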
(In reply to comment #10)
> See if you have group disk and user daemon /etc/{group,passwd}.

Yep, I have user and group.

> Remove the mount part of the init script and then try zfs mount -a.

I could successfully do:

zfs-fuse
zfs mount -a
zpool create test /dev/sda3
zfs list
ls /test
zpool status

-->
# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          sda3      ONLINE       0     0     0

So it seems the init script is the problem ... correct?

> If this works, try to put a sync and a sleep 5 between the zfs-fuse launch and
> zfs mount.

OK.

# /etc/init.d/zfs-fuse start
 * Caching service dependencies ...                         [ ok ]
 * Starting ZFS-FUSE ...                                    [ ok ]
 * Mounting ZFS filesystems ...
cannot mount 'test': Input/output error.
Make sure the FUSE module is loaded.                        [ !! ]
 * ERROR: zfs-fuse failed to start

> Pls. do each experiment after a fresh (re)boot. I ran into troubles after zfs
> crashes only fixed by rebooting.

Will do another test after SENDing this and a reboot.

Thanks, greets, Stefan
(In reply to comment #11)
> So it seems the init script is the problem ... correct?

The script executes a zfs mount -a. I am not sure, but maybe you need to create a filesystem first. After

zpool create test /dev/...

do

zfs create test/tmp

Then you should have a /test/tmp filesystem; check with zfs list. Then reboot your computer with the unmodified init script and see what happens.
(In reply to comment #12)
> I am not sure, but maybe you need to create a filesystem first.
> After zpool create test /dev/...
> do
> zfs create test/tmp
>
> Then you should have a /test/tmp filesystem; check with zfs list.
> Then reboot your computer with the unmodified init script and see what happens.

Tried that. No difference. I noticed that my system seems to try to mount zfs filesystems twice ... I am on ~amd64 here, with baselayout-2/openrc, could that be the problem? I checked that the zfs-fuse service is only in runlevel boot and not in boot AND default, yes. At first it lists something like "mounting zfs file systems" and later "starting zfs-fuse" ... (sorry, I didn't write down the exact wording, I can reproduce it if needed).

Thanks, Stefan.
(In reply to comment #13) > (In reply to comment #12) ... > I noticed that my system seems to try to mount zfs-filesystems twice ... Then remove the "zfs mount -a" code from the init script and see if the filesystem gets mounted. > I am > on ~amd64 here, with baselayout-2/openrc, could that be a problem? I am also on amd64 but with baselayout-1. So, I can't provide any additional help.
(In reply to comment #14) > > I am > > on ~amd64 here, with baselayout-2/openrc, could that be a problem? > I am also on amd64 but with baselayout-1. > So, I can't provide any additional help. Thanks anyway, no big problem ... just playing around with zfs-fuse on that machine, no urgent need ... will try to solve that myself.
Hey you both, this looks very interesting, I'm going into your work during this or the next free days. Many thanks for your notice and work :)
(In reply to comment #0) > Ricardo is no longer maintaining and developing zfs-fuse. the new developer > team is here: > http://rudd-o.com/new-projects/zfs This redirects to http://zfs-fuse.net/ I think the ebuild should reflect that, by linking straight to that URL. The tar file is there too.
Uum, sorry, I did not look at the ebuild first. It already does that. On the other hand: what is the reason the ebuild only has KEYWORDS="~amd64" when the INSTALL doc states that it could be KEYWORDS="~x86 ~amd64 ~sparc64"? I would also recommend changing

einfo "Don't forget this is an beta-quality release. Testing has been"
einfo "very limited so please make sure you backup any important data."

to

ewarn "Don't forget this is an beta-quality release. Testing has been"
ewarn "very limited so please make sure you backup any important data."
The ebuild itself seems to emerge fine, but I get an error message when trying to create a pool: Medusa zfs # zpool create herring /root/zfs/disk1 cannot create 'herring': permission denied Medusa zfs # Not sure if this is a problem with my local setup though, I could never get zfs-fuse to work the way I want it to.
(In reply to comment #16) > Hey you both, this looks very interesting, I'm going into your work during this > or the next free days. > Many thanks for your notice and work :) > Are you still maintaining this package? Thanks
Created attachment 230213 [details]
portage/sys-fs: tar.gz of zfs-fuse version 0.6.0 only

files/0.6.0/zfs-fuse.rc: wait for socket, run as root, zfs mount
zfs-fuse-0.6.0.ebuild: download, website
files/0.6.0/fix_zdb_path.patch
(In reply to comment #21)

Sample session:

tux64 ~ # eselect rc start zfs-fuse
Starting init script
 * Starting ZFS-FUSE ...                                    [ ok ]
 * Mounting ZFS filesystems ...
zhallo                         /zhallo
zhallo/hihi                    /zhallo/hihi
tux64 ~ # zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
zhallo        805M   171M    23K  /zhallo
zhallo/hihi   805M   171M   805M  /zhallo/hihi
tux64 ~ # zpool status
  pool: zhallo
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        zhallo            ONLINE       0     0     0
          /var/dddev/zp1  ONLINE       0     0     0
          /var/dddev/zp2  ONLINE       0     0     0

errors: No known data errors
tux64 ~ # zpool list
NAME     SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
zhallo  1008M  805M   203M  79%  ONLINE  -
tux64 ~ # mount
...
zhallo on /zhallo type fuse (rw,allow_other)
zhallo/hihi on /zhallo/hihi type fuse (rw,allow_other)
tux64 ~ # eselect rc stop zfs-fuse
Stopping init script
 * Unmounting ZFS filesystems ...                           [ ok ]
 * Stopping ZFS-FUSE ...
Thanks, Wolfgang, your ebuild works OK here so far, also the init-script does the right thing when I reboot here with openrc. Your patch is better than mine ;-) Anything specific you would like to have tested? ~amd64 here. Stefan
Just to be sure: http://zfs-fuse.net/releases/0.6.0 is the version we have right here?
(In reply to comment #24) > Just to be sure: http://zfs-fuse.net/releases/0.6.0 is the version we have > right here? That's right. Are there plans to update this to the emerging 0.6.9 release? The current beta also looks good... I tried hacking the existing ebuild but I don't really know how they work.
Created attachment 232949 [details] ebuild for zfs-fuse-0.6.9_beta3 compiles here, not yet emerged. Use with caution, as every beta ...
Created attachment 232963 [details] ebuild for zfs-fuse-0.6.9_beta3, corrected corrected and working version. Just learned about SRC_URI arrows ;-)
I am sorry to need 3 postings for one ebuild ... I should wait for the full emerge to run through before posting :-( You need to create files/0.6.9_beta3 and put zfs-fuse.rc into it as well.
(In reply to comment #28) > I am sorry to need 3 postings for one ebuild ... I should wait for the full > emerge to run through before posting :-( > You need to create files/0.6.9_beta3 and put zfs-fuse.rc into it as well. > Much obliged! Appears to work perfectly.
(In reply to comment #29)
> Much obliged! Appears to work perfectly.

Yep, the ebuild seems OK. I get errors here:

# zfs create test9/urks
cannot mount 'test9/urks': Input/output error.
Make sure the FUSE module is loaded.
filesystem successfully created, but not mounted

hiro sgw # lsmod | grep fuse
fuse                   58457  2

# tail /var/log/messages
May 27 10:59:19 hiro zfs-fuse: kstat: fuse_mount error - trying to umount
May 27 10:59:19 hiro zfs-fuse: kstat: fuse_mount error - trying to umount
May 27 10:59:19 hiro zfs-fuse: !created version 23 pool test9 using 23
May 27 10:59:19 hiro zfs-fuse: kstat: fuse_mount error - trying to umount
May 27 10:59:19 hiro zfs-fuse: kstat: fuse_mount error - trying to umount

# cat /proc/version
Linux version 2.6.33-tuxonice-r2
(In reply to comment #30) > I get errors here: forget it, my mistake ...
Created attachment 233105 [details] updated ebuild w/ patches added optional ebuild: re-added 2 of Wolfgang's patches, and the patch mentioned in http://zfs-fuse.net/issues/36
Created attachment 233107 [details, diff] files/0.6.9_beta3/fix_zfs-fuse-socket.patch patch applied with zfs-fuse-0.6.9_beta3-r1.ebuild
How to participate in the zfs-fuse-ml? I only find http://groups.google.com/group/zfs-fuse/ which needs a gmail-account. I think we should give some feedback upstream ...
Created attachment 233173 [details] ebuild pulling in git-sources of zfs-fuse This ebuild pulls in sources as in git clone http://git.zfs-fuse.net/official Emerged ok here, but you know ...
(In reply to comment #35)
> Created an attachment (id=233173) [details]

This gives you 0.6.0 or so, and not the latest as intended. I am currently fiddling with checking out the "testing" branch.
Created attachment 233525 [details]
ebuild pulling in testing-branch of git-repo at zfs-fuse.net

The git checkout is now OK as far as I can see. Doesn't build here, though :-( Could anyone else test this?
Created attachment 233553 [details] latest beta4 from zfs-fuse.net A new beta is out, visible in the git-repo, but not yet on the website at http://zfs-fuse.net/ My current ebuild pulls in the relevant git-tag from the git-repo at git.zfs-fuse.net. Merges OK here. Pls create the content of files/0.6.9_beta4 yourself, as in the latest ebuilds. Thanks.
(In reply to comment #37) > Doesn't build here, though :-( > Could anyone else test this? Solved this with autoreconf, thanks ...
Christian, could you put some of the ebuilds into portage? As 0.6.9_beta4 is out on zfs-fuse.net it would help gentoo-users having a look and test things if those ebuilds were in the tree already, even if beta ... Any concerns? Thanks, Stefan
Created attachment 234029 [details] new release upstream: 0.6.9 New stable release 0.6.9: http://zfs-fuse.net/releases/0.6.9
(In reply to comment #41)
> Created an attachment (id=234029) [details]
> new release upstream: 0.6.9
>
> New stable release 0.6.9: http://zfs-fuse.net/releases/0.6.9

I tried it ... It emerged fine, but after that my pool was not recognized!!! My home dir was lost. I emerged the old 0.6.0 and everything started working fine again. Anything else I can try without losing my pool or, at least, getting it back again with 0.6.9?
(In reply to comment #42) >> New stable release 0.6.9: http://zfs-fuse.net/releases/0.6.9 > > I tried it ... > It emerged fine but after that, my pool was not recognized!!! My home dir was > lost. > I emerged the old 0.6.0 and everything started working fine again. > Anything else I can try without loosing my Pool or, at least, getting it back > again with 0.6.9? I had that in my tests also. Stop the 0.6.0 before emerging 0.6.9. And maybe try to export the pool before and import after. Trying to restart zfs-fuse after the merge is somewhat tricky, I assume.
(In reply to comment #43) > (In reply to comment #42) > >> New stable release 0.6.9: http://zfs-fuse.net/releases/0.6.9 > > > > I tried it ... > > It emerged fine but after that, my pool was not recognized!!! My home dir was > > lost. > > I emerged the old 0.6.0 and everything started working fine again. > > Anything else I can try without loosing my Pool or, at least, getting it back > > again with 0.6.9? > > I had that in my tests also. Stop the 0.6.0 before emerging 0.6.9. > And maybe try to export the pool before and import after. > Trying to restart zfs-fuse after the merge is somewhat tricky, I assume. And, as always, do backups :-)
(In reply to comment #44) > > I had that in my tests also. Stop the 0.6.0 before emerging 0.6.9. > > And maybe try to export the pool before and import after. > > Trying to restart zfs-fuse after the merge is somewhat tricky, I assume. > > And, as always, do backups :-) > Have a look at http://groups.google.com/group/zfs-fuse/browse_thread/thread/c95ed85d3be36bbd as well. Export/import etc ...
The export/import fixed things up!!! Thank you. When does this go in the tree?
(In reply to comment #41)
> Created an attachment (id=234029) [details]
> new release upstream: 0.6.9
>
> New stable release 0.6.9: http://zfs-fuse.net/releases/0.6.9

Having issues building the ebuild due to missing patches:

 * Cannot find $EPATCH_SOURCE! Value for $EPATCH_SOURCE is:
 *
 *   /usr/portage/sys-fs/zfs-fuse/files/0.6.9/fix_zfs-fuse_path.patch
 *   ( fix_zfs-fuse_path.patch )

Where are these patches located?
(In reply to comment #47) > Having issues building the ebuild due to missing patches. > > * Cannot find $EPATCH_SOURCE! Value for $EPATCH_SOURCE is: > * > * /usr/portage/sys-fs/zfs-fuse/files/0.6.9/fix_zfs-fuse_path.patch > * ( fix_zfs-fuse_path.patch ) > > Where are these patches located? Find them in the zfs-fuse.tgz tarball listed on top of the page and put them into /usr/portage/sys-fs/zfs-fuse/files/0.6.9 Small annoyance, will be gone when stuff gets into tree.
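Roughly like this, assuming the main portage tree at /usr/portage and the attached tarball in the current directory (the layout inside the tarball is a guess; adjust the cp source path to whatever the archive actually unpacks to):

```shell
# Unpack the attached tarball and drop the patches next to the ebuild
tar xzf zfs-fuse.tgz
mkdir -p /usr/portage/sys-fs/zfs-fuse/files/0.6.9
cp zfs-fuse/files/0.6.9/*.patch /usr/portage/sys-fs/zfs-fuse/files/0.6.9/
```

A local overlay would be cleaner than editing /usr/portage directly, since emerge --sync can wipe these files again.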
*** Bug 307993 has been marked as a duplicate of this bug. ***
Created attachment 234245 [details] Improved 0.6.9 ebuild The patches and init script are not version specific. Install additional tools and man pages. Install zfsrc and zfs_pool_alert
Created attachment 234247 [details, diff] Goes into files/fix_zdb_path.patch
Created attachment 234249 [details, diff] Goes into files/fix_ztest_path.patch
Created attachment 234251 [details, diff] Goes into files/fix_zfs-fuse_path.patch
Created attachment 234253 [details] Goes into files/zfs-fuse.rc
Actually, I was reviewing the three patches we have. And it looks like we don't need them. But someone more knowledgeable about why they were there in the first place may comment on that.
The init script in the tgz file does a 'zfs mount' during 'start' and 'stop'. My script doesn't do that. Not sure if that's needed. Typically, a large number of filesystems would just destroy the nice startup messages from openrc on the console.
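For comparison, a stripped-down OpenRC script without the mount step might look like this (a sketch only: the daemon path and dependency list are assumptions, not the attached script):

```shell
#!/sbin/runscript

depend() {
    need localmount
    after modules
}

start() {
    ebegin "Starting ZFS-FUSE"
    start-stop-daemon --start --exec /usr/sbin/zfs-fuse
    eend $?
    # Deliberately no 'zfs mount -a' here: with many filesystems
    # the mount output would clutter the openrc console messages.
}

stop() {
    ebegin "Stopping ZFS-FUSE"
    start-stop-daemon --stop --exec /usr/sbin/zfs-fuse
    eend $?
}
```

With this variant, mounting is left to the admin (zfs mount -a by hand) or to a separate service.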
I've committed 0.6.9 to Portage. If I missed something, please open a new bug and describe the problem there, this bug is getting a bit too long and old.