Hi,

I set up one of my test boxen to play with LVM. It was running a Gentoo 1.0 profile, rsync'd and updated to the latest as of yesterday. With the gentoo-sources-2.4.19-r10 kernel and LVM enabled, I initially had no problems adding physical volumes, creating a volume group, and creating logical volumes. Then I found I could reproducibly trigger a kernel oops and lockup with the following (from memory):

    lvcreate -L1000M -nfoo vg
    mke2fs -m1 -b1024 -c /dev/vg/foo
    mount /dev/vg/foo /mnt
    (use tar to transfer a filesystem to /mnt)
    tune2fs -j /dev/vg/foo
    umount /mnt
    e2fsck -f /dev/vg/foo
    mount /dev/vg/foo /mnt
    vi /mnt/bar.txt
    [oops] (and fs corrupted)

I was experimenting with what exactly caused the oops (e.g. varying the block size) and rebooting many times. I thought it was ext3-related, but on one reboot my /usr was fscked (everything, including root, was on an LVM logical volume), and after that the filesystem was completely hosed and unrecoverable. It's the worst fs corruption I've seen in eight or so years of Linux use. :)

The box was stable before, running Gentoo without LVM, and it seems to be stable under tests running a different distribution with LVM but a virgin 2.4.19 kernel. This leaves me suspecting the gentoo-kernel, although there are some other variables (such as: after I found the ext3 oops, I reverted all the other filesystems to ext2).

Anyway, if there is anything to it, it could really suck for someone.

Paul
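For anyone trying to reproduce this, here is the same sequence collected into a script. The volume group name (`vg`), logical volume size, and mount point are taken from the report above; the `run` wrapper and the `DRY_RUN` guard are my additions so the destructive steps only execute when explicitly requested:

```shell
#!/bin/sh
# Reproduction sketch for the reported LVM + ext3 oops.
# WARNING: destructive if actually executed. By default this only
# prints the planned commands; set DRY_RUN=0 to really run them.
set -e

run() {
    # Echo the command, and execute it only when DRY_RUN=0.
    echo "+ $*"
    [ "${DRY_RUN:-1}" = 0 ] && "$@"
    return 0
}

run lvcreate -L1000M -nfoo vg
run mke2fs -m1 -b1024 -c /dev/vg/foo
run mount /dev/vg/foo /mnt
# (transfer a filesystem onto /mnt with tar at this point)
run tune2fs -j /dev/vg/foo      # add a journal: ext2 -> ext3 in place
run umount /mnt
run e2fsck -f /dev/vg/foo
run mount /dev/vg/foo /mnt
# Editing a file on the remounted filesystem (e.g. vi /mnt/bar.txt)
# is what reportedly triggered the oops and the corruption.
```

In dry-run mode the script just prints the command plan, which is handy for double-checking device names before risking a real volume.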
Thanks, I'll see what I can dig up.
Hmm, did you enable XFS in the emerge process? acpi4linux?
If you have a chance, please test with the latest lolo-sources; I don't believe you'll have any problems of this sort.