The current order in /etc/init.d/checkfs is that LVM is started first and
software RAID afterwards. This means that if a software RAID device is used as
an LVM physical volume, it won't be recognized by vgscan during LVM
initialization. I think it is more common to have a software RAID device used
as an LVM physical volume than the other way round.
So, I would like to suggest either reversing the order of LVM and software
RAID startup in /etc/init.d/checkfs, or calling "vgscan" and "vgchange -a y" a
second time after the software RAID devices are set up, so that LVM recognizes
those RAID devices.
The background is that I have a setup as described above and have to run
vgscan and vgchange manually after boot to let LVM fully initialize the volume
groups. Also, I have never seen a software RAID setup that uses LVM logical
volumes as array members.
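A minimal sketch of the second option, as an init-script fragment (illustrative only, not the actual Gentoo checkfs; the device names and the raidstart/mdadm choice are assumptions):

```shell
# Sketch: run the LVM scan twice around RAID assembly so that physical
# volumes living on md devices are picked up.

# First LVM pass (finds PVs on plain partitions)
vgscan
vgchange -a y

# Bring up software RAID arrays (the /dev/md* devices appear now)
raidstart /dev/md0        # or: mdadm --assemble --scan

# Second LVM pass: vgscan now also sees PVs on /dev/md*
vgscan
vgchange -a y
```

Simply moving the RAID startup block before the LVM block achieves the same thing for the common LVM-on-RAID case without the duplicate scan.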
Created attachment 25586 [details]
Proposed checkfs with correct RAID/LVM order
I have the exact same configuration, LVM on top of software RAID.
Each time I update baselayout I have to change /etc/init.d/checkfs.
I attached (too early) my modified checkfs of 220.127.116.11.
I am sorry, I forgot to make a patch file before deleting the
original Gentoo checkfs .... :(
Since this slight modification does not endanger other users,
can we expect to see the order adjusted to suit our needs as well?
I think bug #42658 is a duplicate of this.
*** Bug 42658 has been marked as a duplicate of this bug. ***
Setting the partition types to 'fd' (Linux RAID autodetect) might be a workaround.
However, I agree that RAID should be started before LVM in the startup scripts.
See also bug #5310.
Partition type 0xfd will only work if the RAID drivers are not built as modules.
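For reference, a sketch of the autodetect workaround (device and partition number are illustrative; this assumes the util-linux sfdisk tool and, as noted above, a kernel with the RAID personalities built in, not as modules):

```shell
# Mark partition 1 of /dev/sda as type 0xfd (Linux raid autodetect)
# so the kernel assembles the array at boot, before any init script runs.
sfdisk --id /dev/sda 1 fd

# Verify the partition type afterwards
sfdisk --print-id /dev/sda 1
```

The same change can be made interactively with fdisk (command `t`, type `fd`, then `w`).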
I also have LVM on top of RAID,
and also need to activate RAID first and then run vgscan.
Please change it.
Can you run RAID on top of LVM? If yes, maybe it should run
vgscan before and after the RAID setup, or even more times? ;)
AFAIK RAID on LVM is not possible.
Technically it should be possible. The software RAID layer can use any block
device as an array member. You could run software RAID using some RAM disks as
array members if you wanted to, which would be pointless, but never mind ;)
What I did recently was use a loop device as an array member, because the
real disk's controller had failed. I made a dd dump of the disk using another
system and set up the complete array with that image as a loop device.
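The steps above can be sketched roughly as follows (all device names and paths here are illustrative, not the ones actually used):

```shell
# Image the failing disk from a working system, skipping read errors
dd if=/dev/hdc of=/mnt/spare/hdc.img bs=1M conv=noerror,sync

# Expose the image as a block device
losetup /dev/loop0 /mnt/spare/hdc.img

# Assemble the array with the loop device standing in for the dead disk
mdadm --assemble /dev/md0 /dev/hda3 /dev/loop0
```

This works precisely because the md layer accepts any block device as a member, which is also why RAID-on-LVM is technically possible.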
Sorry, my fault. I tested it: RAID on LVM is possible.
Perhaps we should add a configuration variable LVM_BEFORE_RAID to set which should be started first (LVM on RAID seems to be more common than RAID on LVM).
PS: what about nested configurations, e.g. RAID on LVM on RAID?
OK, this is committed to rc-scripts CVS. Thanks for the patch. It will appear in the next release of baselayout.
Regarding LVM_BEFORE_RAID or nested configurations, I don't think that's necessary (yet). It's hard to imagine *why* someone would want to run RAID on LVM. When it happens, we can deal with it. For now this is just a trivial fix.
baselayout-1.10 is now in portage. It's package.masked at the moment but will be unmasked shortly.
*** Bug 61419 has been marked as a duplicate of this bug. ***
I still don't see a new baselayout in x86. Is it coming, or should I manually use a more recent version of baselayout? I'd assume most if not all LVM+RAID setups are on servers, which generally run x86. Since the change is rather small and shouldn't break anything, I don't see any point in delaying this fix for x86 much longer.
*** Bug 51248 has been marked as a duplicate of this bug. ***
*** Bug 75902 has been marked as a duplicate of this bug. ***
Hi all, I'd just like to tell you that the problem persists with the ~x86 version of baselayout (in my case ~1.12.0_alpha1-r1).
Did you change the order in /etc/conf.d/rc?
I know this was said before, but x86 baselayouts still have this problem; only ~x86 baselayouts contain the fix. I don't know when one of those will reach stable, but I guess not any time soon, given the many open bugs in Bugzilla. So please release a new stable version. The attached checkfs script is still good; only the header needs updating. Thanks in advance.
The order is the way it is because whoever sent the patch wanted it that way round, so changing it now would just get another bug filed. As for 1.12.0: we will have to see about going stable. There are no new bugs open against it specifically, and it fixes a lot of things, so it would be logical to get it to stable soon rather than leave an older version with fewer fixes around (although it still needs some more testing first).
(In reply to comment #11)
> ok, this is committed to rc-scripts cvs. Thanks for the patch. It will appear
> in the next release of baselayout
> Regarding LVM_BEFORE_RAID or nested configurations, I don't think that's
> necessary (yet). It's hard to imagine *why* someone would want to run RAID on
> LVM. When it happens then we can deal with it. For now this is just a trivial
Phew. Could someone please provide a proper, flexible way to configure this?
I am running LVM2 on RAID on LVM2, because I never want fixed-size partitions.