Bug 260362 - sys-fs/zfs-fuse-0.5.0 does not import/enable zpool at boot
Status: RESOLVED FIXED
Alias: None
Product: Gentoo Linux
Classification: Unclassified
Component: New packages
Hardware: All Linux
Importance: High normal
Assignee: Christian Parpart (RETIRED)
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-02-26 11:02 UTC by Stefan G. Weichinger
Modified: 2010-06-05 23:49 UTC
CC List: 4 users

See Also:
Package list:
Runtime testing required: ---


Attachments
0001-fix-for-gentoo-bug-260362.patch (1.21 KB, text/plain)
2009-10-23 19:34 UTC, Kyle Cavin

Description Stefan G. Weichinger 2009-02-26 11:02:39 UTC
/etc/init.d/zfs-fuse does not enable the existing and valid zpool at boot time. If I manually run "zpool import", the zpool is listed, and after a manual "zpool import tank" everything is fine again.
As far as I understand, zfs-fuse needs no entries in /etc/fstab, so I don't have any.
I have zfs-fuse in the boot runlevel, and it gets started OK.
I have now patched /etc/init.d/zfs-fuse to run "zpool import tank" before mounting the ZFS filesystems; this works, but it isn't a real solution.
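For reference, my local workaround looks roughly like this. This is only a sketch: the daemon path and the exact contents of the real init script may differ, and the hard-coded pool name "tank" is specific to my setup.

    # excerpt of the locally patched /etc/init.d/zfs-fuse (sketch)
    start() {
        ebegin "Starting zfs-fuse"
        start-stop-daemon --start --exec /usr/sbin/zfs-fuse
        zpool import tank    # added workaround: import the pool explicitly
        zfs mount -a         # then mount all ZFS filesystems
        eend $?
    }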


Reproducible: Always

Steps to Reproduce:
1. Add zfs-fuse to the boot runlevel.
2. Create a zpool and a ZFS filesystem.
3. Reboot (concrete commands are sketched below).
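As a concrete sketch (the pool and device names are examples only):

    rc-update add zfs-fuse boot                                                    # step 1
    zpool create tank mirror /dev/mapper/SDB-zfsmirr1 /dev/mapper/SDB-zfsmirr2    # step 2
    zfs create tank/data                                                           # step 2
    reboot                                                                         # step 3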


Actual Results:  
No zpools are available and no ZFS filesystems are mounted.

Expected Results:  
The zpools should be online and the ZFS filesystems mounted.

I am unsure whether running zfs-fuse on LVM volumes is the problem here.
My pool contains two LVs in a mirror:

# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    dm-13   ONLINE       0     0     0
	    dm-10   ONLINE       0     0     0

I also tried the alternative device paths /dev/disk/by-id/dm-name-SDB-zfsmirr2, /dev/mapper/SDB-zfsmirr2, and /dev/SDB/zfsmirr2 (detached the device, attached it via the other device file, resilvered the zpool, rebooted).
None of them helped.
Maybe that is an unlikely cause anyway, as "zpool import" finds the zpool without a problem. The question is: why is the pool in the exported state at boot time at all? Does zfs-fuse actually export the zpools at shutdown?
Comment 1 Navid Zamani 2009-03-18 11:12:28 UTC
Well, if your outer volumes are up and mounted by the time the inner volumes have to access them, that can't be the problem, can it?
Comment 2 Navid Zamani 2009-03-18 11:14:53 UTC
Maybe you can check when the export actually happens by hooking into something, or, easier, by making it log the export and then looking at what else happened at the time it got logged.

That should make it possible to pin down the place/code where it happens and to look into the script controlling that part.
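One crude way to do that, assuming zpool is installed as /usr/sbin/zpool, would be to move the binary aside and replace it with a logging wrapper, sketched here:

    # move the real binary aside (sketch)
    mv /usr/sbin/zpool /usr/sbin/zpool.real

    # /usr/sbin/zpool (new wrapper script, mode 0755): log every
    # invocation with a timestamp, then hand off to the real binary
    #!/bin/sh
    echo "$(date): zpool $*" >> /var/log/zpool-calls.log
    exec /usr/sbin/zpool.real "$@"

Any "zpool export" logged during shutdown would then point straight at the script responsible.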
Comment 3 Stefan G. Weichinger 2009-03-18 19:57:32 UTC
(In reply to comment #1)
> Well, if your outer volumes are up and mounted by the time the inner volumes
> have to access them, that can't be the problem, can it?

You mean zfs-fuse is started too early, before the LVM volumes are available?
Shouldn't the ebuild take care of this? I have zfs-fuse in the boot runlevel; is that OK?


Comment 4 Nicolas LE-BAS 2009-03-29 16:32:46 UTC
I had the same problem and solved it by creating the directory /etc/zfs and then recreating the zpool.

My understanding is: when /etc/zfs does not exist, "zpool create" cannot create the cachefile /etc/zfs/zpool.cache. It silently ignores the error and configures the pool as "temporary" (i.e. cachefile=none). Temporary pools don't get imported at boot.

Would it be a solution to have the ebuild create /etc/zfs?
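For illustration, the sequence that worked for me, as a sketch; the pool name "tank" and the mirror device paths are examples (borrowed from the original report), not a prescription:

    mkdir -p /etc/zfs             # let "zpool create" write its cachefile
    zpool create tank mirror /dev/mapper/SDB-zfsmirr1 /dev/mapper/SDB-zfsmirr2
    ls -l /etc/zfs/zpool.cache    # the cachefile should now exist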
Comment 5 Xiwen Cheng 2009-03-29 16:55:43 UTC
(In reply to comment #4)
> I had the same problem and solved it by creating the directory /etc/zfs
> and then recreating the zpool.
> [...]
> Would it be a solution to have the ebuild create /etc/zfs?

I can confirm that creating the directory /etc/zfs indeed fixes the problem. It is not mandatory to re-create the pool. Just do:
zpool export POOL
zpool import POOL
and /etc/zfs/zpool.cache gets generated.
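To double-check (POOL is a placeholder, and whether this zfs-fuse version exposes the cachefile pool property is an assumption on my part):

    ls -l /etc/zfs/zpool.cache    # should exist and be non-empty
    zpool get cachefile POOL      # should no longer report 'none'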

Maybe it's worth mentioning that I added "after fuse" to the depend() function in the init script /etc/init.d/zfs-fuse. How you load fuse is a matter of preference, of course, but zfs-fuse depends on the fuse module, hence my suggestion.
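A sketch of what the depend() function might then look like; the "need localmount" line is an assumption of mine, not necessarily what the shipped script contains:

    # /etc/init.d/zfs-fuse (excerpt, sketch)
    depend() {
        need localmount
        after fuse    # make sure the fuse service/module is handled first
    }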

Comment 6 Stefan G. Weichinger 2009-03-30 21:25:00 UTC
Confirmation: I created /etc/zfs as root and created a new zfs-fuse pool; it was up and mounted again after the reboot. Why not let the ebuild create that dir? Seems logical to me ;) thx, Stefan
Comment 7 Kyle Cavin 2009-10-23 19:34:09 UTC
Created attachment 208043 [details]
0001-fix-for-gentoo-bug-260362.patch

The patch adds the /etc/zfs directory and fixes the fuse init-script dependency.
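The attachment itself is not reproduced here; conceptually, the two changes amount to something like the following sketch (not the literal patch contents):

    # in the ebuild's src_install (sketch): keep the empty cachefile directory
    keepdir /etc/zfs

    # in /etc/init.d/zfs-fuse (sketch): order the service after fuse
    depend() {
        after fuse
    }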
Comment 8 Samuli Suominen (RETIRED) gentoo-dev 2010-06-05 23:49:41 UTC
Should be fixed now with 0.6.9.