Summary: baselayout: wrong order of lvm and raid in checkfs
Product: Gentoo Linux
Reporter: Sven Wegener <swegener>
Component: [OLD] Core system
Assignee: Martin Schlemmer (RETIRED) <azarah>
Severity: normal
CC: 3.14159, dberkholz, dsean, eckert.thomas, giamma, kl, ladanyi, micah_compton, quintino, rajiv, robtone
Package list:
Runtime testing required: ---
Attachments: Proposed checkfs with correct raid lvm order
Description Sven Wegener 2004-01-06 13:36:08 UTC
Hi, the current order in /etc/init.d/checkfs is that LVM is started first and software RAID afterwards. This means that if a software RAID device is used as an LVM physical volume, it won't be recognized by vgscan during LVM initialization. I think it is more common to have a software RAID device used as an LVM physical volume than the other way round. So I would like to suggest reversing the order of LVM and software RAID startup in /etc/init.d/checkfs, or calling "vgscan" and "vgchange -a y" a second time after the software RAID devices have been set up, to let LVM recognize them. The background is that I have a setup as described above and have to run vgscan and vgchange manually after bootup to let LVM fully initialize the volume groups. And I have never seen a software RAID setup using LVM logical volumes as array members. Sven
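The reordering the report asks for can be sketched as an init-script fragment. This is a minimal illustration, not the actual checkfs from any baselayout release; the raidstart/raidtab tooling and the ebegin/eend helpers are assumed from Gentoo init scripts of that era.

```shell
# Illustrative fragment of /etc/init.d/checkfs with the corrected ordering.

# 1. Assemble software RAID arrays first, so the /dev/md* devices exist...
if [ -x /sbin/raidstart ] && [ -f /etc/raidtab ]; then
    ebegin "Starting software RAID"
    /sbin/raidstart --all
    eend $?
fi

# 2. ...then scan for LVM volume groups, which may live on those arrays.
if [ -x /sbin/vgscan ]; then
    ebegin "Setting up LVM"
    /sbin/vgscan >/dev/null
    /sbin/vgchange -a y >/dev/null
    eend $?
fi
```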
Comment 1 Raimondo Giammanco 2004-02-14 02:39:20 UTC
Created attachment 25586 [details] Proposed checkfs with correct raid lvm order
Comment 2 Raimondo Giammanco 2004-02-14 02:40:29 UTC
Hello, I have the exact same configuration, LVM on top of software RAID. Each time I update baselayout I have to change /etc/init.d/checkfs; I attached (too early) my modified checkfs of 220.127.116.11. I am sorry, I forgot to make a patch file before deleting the original Gentoo checkfs... :( Since this slight modification does not endanger other users, can we expect to see the order adjusted to suit our needs as well? Best Regards
Comment 4 Rajiv Aaron Manglani (RETIRED) 2004-03-17 21:17:20 UTC
*** Bug 42658 has been marked as a duplicate of this bug. ***
Comment 5 Rajiv Aaron Manglani (RETIRED) 2004-03-17 21:38:10 UTC
Setting the partition types to 'fd' (Linux raid autodetect) might be a workaround. However, I agree that RAID should be started before LVM in the startup scripts. See also bug #5310.
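The suggested workaround means changing the partition type of each RAID member so the kernel autodetects and assembles the array at boot. A sketch using fdisk's scripted input (the device /dev/sda and partition number 1 are illustrative; double-check before writing a partition table):

```shell
# Set partition 1 on /dev/sda to type 0xfd (Linux raid autodetect).
# t = change partition type, 1 = partition number, fd = type, w = write.
fdisk /dev/sda <<'EOF'
t
1
fd
w
EOF
```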
Comment 6 Thomas Weidner 2004-03-18 06:00:37 UTC
Partition type 0xfd will only work if RAID support is built into the kernel rather than as modules, since the in-kernel autodetection runs before any modules can be loaded.
Comment 7 Bas Huisman 2004-03-29 05:55:54 UTC
I also have LVM on top of RAID, and also need to activate RAID first and then run vgscan. Please change it. Can you run RAID on top of LVM? If yes, maybe checkfs should run vgscan before and after the RAID setup, or even more times? ;) Bas
Comment 8 Thomas Weidner 2004-03-29 08:53:21 UTC
AFAIK RAID on LVM is not possible.
Comment 9 Sven Wegener 2004-03-29 09:18:49 UTC
Technically it should be possible. The software RAID layer can use any block device as an array member. You can run software RAID with some RAM disks as array members if you want to, which is senseless, but never mind ;) What I've done recently is use a loop device as an array member, because the real disk's controller had a failure. I made a dd dump of the disk on another system and set up the complete array with that image as a loop device.
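The loop-device recovery described above can be sketched roughly as follows. All device names, paths, and the array layout are illustrative assumptions, not taken from the reporter's actual setup:

```shell
# Image the failing disk from another system (names are illustrative).
dd if=/dev/sdb of=/mnt/backup/disk.img bs=1M

# Expose the image as a block device via the loop driver.
losetup /dev/loop0 /mnt/backup/disk.img

# Assemble the array using the loop device in place of the dead disk.
mdadm --assemble /dev/md0 /dev/sda1 /dev/loop0
```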
Comment 10 Thomas Weidner 2004-03-29 13:07:53 UTC
Sorry, my fault. I tested it: RAID on LVM is possible. Perhaps we should add a configuration variable LVM_BEFORE_RAID to set which should be started first (LVM on RAID seems to be more common than RAID on LVM). PS: what about nested configurations, e.g. RAID on LVM on RAID?
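For reference, RAID on LVM amounts to handing logical volumes to mdadm as array members. A hypothetical sketch (the volume group vg0, sizes, and names are invented for illustration):

```shell
# Carve two equally sized logical volumes out of an existing VG...
lvcreate -L 1G -n raidleg0 vg0
lvcreate -L 1G -n raidleg1 vg0

# ...and build a RAID-1 array on top of them.
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/vg0/raidleg0 /dev/vg0/raidleg1
```

This is exactly the case where a boot script that assembles RAID strictly before scanning LVM (or vice versa) cannot activate everything in one pass, which is why nested setups need either repeated passes or a configurable order.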
Comment 11 Aron Griffis (RETIRED) 2004-06-15 14:05:10 UTC
OK, this is committed to rc-scripts CVS. Thanks for the patch. It will appear in the next release of baselayout. Regarding LVM_BEFORE_RAID or nested configurations, I don't think that's necessary (yet). It's hard to imagine *why* someone would want to run RAID on LVM. When it happens we can deal with it. For now this is just a trivial fix.
Comment 12 Aron Griffis (RETIRED) 2004-06-26 15:48:52 UTC
baselayout-1.10 is now in portage. It's package.masked at the moment but will be unmasked shortly.
Comment 13 Sven Wegener 2004-08-23 14:14:42 UTC
*** Bug 61419 has been marked as a duplicate of this bug. ***
Comment 14 Joonas Kortesalmi 2004-09-05 13:58:02 UTC
I still don't see a new baselayout in x86; is it coming, or should I manually use a more recent version of baselayout? I'd assume most if not all LVM+RAID setups are on servers, which generally run x86. Since the change is rather small and shouldn't break anything, I don't see any point in delaying this fix from x86 much longer.
Comment 15 Sven Wegener 2004-12-21 10:58:29 UTC
*** Bug 51248 has been marked as a duplicate of this bug. ***
Comment 16 Sven Wegener 2005-01-03 07:38:47 UTC
*** Bug 75902 has been marked as a duplicate of this bug. ***
Comment 17 Leo 2005-03-17 00:15:30 UTC
Hi all, I'd just like to tell you that the problem persists with the ~x86 version of baselayout (in my case ~1.12.0_alpha1-r1).
Comment 18 Martin Schlemmer (RETIRED) 2005-03-17 00:27:21 UTC
Did you change the order in /etc/conf.d/rc?
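The question above refers to the activation order becoming configurable in the baselayout 1.12 series. Assuming the RC_VOLUME_ORDER variable of that series (the variable name and default value here should be verified against your installed /etc/conf.d/rc), the relevant setting looks like:

```shell
# /etc/conf.d/rc (baselayout ~1.12 series; variable name is an assumption)
# Volume managers are started left to right, so listing "raid" before
# "lvm" assembles arrays before volume groups are scanned.
RC_VOLUME_ORDER="raid evms lvm dm"
```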
Comment 19 Akos Ladanyi 2005-03-17 02:25:22 UTC
I know this was said before, but x86 baselayouts still have this problem; only ~x86 baselayouts contain the fix. I don't know when one of these will reach stable, but I guess not any time soon because of the many open bugs in Bugzilla. So please release a new version of the stable one. The attached checkfs script is still good; only the header needs updating. Thanks in advance.
Comment 20 Martin Schlemmer (RETIRED) 2005-03-17 02:57:54 UTC
The reason for the current order is that whoever sent the patch wanted it that way round, so changing it now will just get another bug filed. As for 1.12.0, we will have to see about going stable. There are anyhow no new bugs open against just it, and it fixes a lot of things, so it would be logical to get it to stable soon rather than leave something with fewer fixes around (although it will still need some more testing first).
Comment 21 Robert Felber 2008-03-10 08:06:36 UTC
(In reply to comment #11)
> ok, this is committed to rc-scripts cvs. Thanks for the patch. It will appear
> in the next release of baselayout
> Regarding LVM_BEFORE_RAID or nested configurations, I don't think that's
> necessary (yet). It's hard to imagine *why* someone would want to run RAID on
> LVM. When it happens then we can deal with it. For now this is just a trivial
> fix.

Phew. Could someone please provide a proper, flexible way to configure this? I am running LVM2 on RAID on LVM2 because I don't want to have fixed-size partitions, ever.