/etc/init.d/zfs-fuse does not bring up the existing, valid zpool at boot time. If I manually run "zpool import", the zpool is listed, and after a manual "zpool import tank" everything is fine again. As far as I understand, zfs-fuse needs no entries in /etc/fstab, so I don't have any. I have zfs-fuse in the boot runlevel and it gets started OK. I have now patched /etc/init.d/zfs-fuse to run "zpool import tank" before mounting the ZFS filesystems; this works, but it isn't a real solution.

Reproducible: Always

Steps to Reproduce:
1. zfs-fuse in the boot runlevel
2. create a zpool and a ZFS filesystem
3. reboot

Actual Results:
No zpools are available and no ZFS filesystems are mounted.

Expected Results:
The zpools should be online and the ZFS filesystems mounted.

I am unsure whether running zfs-fuse on LVM volumes is the problem here. My pool contains two LVs in a mirror:

# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            dm-13   ONLINE       0     0     0
            dm-10   ONLINE       0     0     0

I also tried the device syntaxes /dev/disk/by-id/dm-name-SDB-zfsmirr2, /dev/mapper/SDB-zfsmirr2 and /dev/SDB/zfsmirr2 (detached a device, attached the other device file, resilvered the zpool, rebooted). None of them helped. Maybe that is unlikely to be the cause anyway, since "zpool import" finds the zpool without a problem. The question is: why is the pool in the exported state at boot time at all? Does zfs-fuse actually export the zpools at shutdown?
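For reference, my workaround looks roughly like this (just a sketch assuming an OpenRC-style start() function; the real init script contains more, and "tank" is my pool name):

start() {
        # start the zfs-fuse daemon here, as the script already does, then:
        zpool import tank   # workaround: force-import the pool before mounting
        zfs mount -a        # mount all ZFS filesystems
}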
Well, if your outer volumes are up and mounted by the time the inner volumes have to access them, then that can't be the problem, can it?
Maybe you can check when the export actually happens, by hooking into something, or easier: by making it log the export and then looking at what else happened at the time it got logged. That should make it possible to pin down the place/code where it happens, and then look into the script that controls that part.
(In reply to comment #1)
> Well, if your outer volumes are up and mounted by the time the inner volumes
> have to access them, then that can't be the problem, can it?

You mean zfs-fuse is started too early, before the LVM volumes are available? Shouldn't the ebuild take care of this? I have zfs-fuse in the boot runlevel, is that OK?
I got the same problem and solved it by creating the directory /etc/zfs and then recreating the zpool.

My understanding is: when /etc/zfs does not exist, "zpool create" cannot create the cache file /etc/zfs/zpool.cache. It silently ignores the error and configures the pool as "temporary" (i.e. cachefile=none). Temporary pools don't get imported at boot.

Would it be a solution to have the ebuild create /etc/zfs?
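If you want to verify whether a pool ended up "temporary", you can check its cachefile property (sketch, assuming a pool named tank):

# zpool get cachefile tank

If the VALUE column shows "none", the pool has no cache file and will not be imported automatically at boot.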
(In reply to comment #4)
> I got the same problem and solved it by creating the directory /etc/zfs and
> then recreating the zpool.
>
> My understanding is: when /etc/zfs does not exist, "zpool create" cannot
> create the cache file /etc/zfs/zpool.cache. It silently ignores the error and
> configures the pool as "temporary" (i.e. cachefile=none). Temporary pools
> don't get imported at boot.
>
> Would it be a solution to have the ebuild create /etc/zfs?

I can confirm that creating the directory /etc/zfs indeed fixes the problem. It is not mandatory to re-create the pool. Just do:

zpool export POOL
zpool import POOL

and /etc/zfs/zpool.cache gets generated.

Maybe it's worth mentioning that I also added "after fuse" to the depend() section of the init script /etc/init.d/zfs-fuse. It's a matter of preference, of course, how you want to load fuse; zfs-fuse depends on the fuse module, hence my suggestion.
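Roughly what that dependency addition looks like (only a sketch; the actual depend() block in the installed init script contains more than this):

depend() {
        after fuse
}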
Confirmation: I created /etc/zfs as root, created a new zfs-fuse pool, and it was up and mounted again after reboot. Why not let the ebuild create that directory? Seems logical to me ;)

thx, Stefan
Created attachment 208043 [details]
0001-fix-for-gentoo-bug-260362.patch

The patch adds the directory and fixes the fuse dependency in the init script.
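For illustration only (this is not the attached patch itself), the ebuild side of such a fix could look roughly like this, using the standard keepdir helper:

src_install() {
        # install the package as the ebuild already does, then keep the
        # otherwise-empty cache directory so zpool.cache can be written
        keepdir /etc/zfs
}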
Should be fixed now, with 0.6.9