Bug 694778 - sys-kernel/genkernel initramfs support for multiple crypt_roots
Summary: sys-kernel/genkernel initramfs support for multiple crypt_roots
Status: UNCONFIRMED
Alias: None
Product: Gentoo Hosted Projects
Classification: Unclassified
Component: genkernel
Hardware: All Linux
Importance: Normal normal with 1 vote
Assignee: Gentoo Genkernel Maintainers
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-09-18 02:15 UTC by Vladi
Modified: 2022-07-28 00:45 UTC
CC List: 9 users

See Also:
Package list:
Runtime testing required: ---


Attachments
patch to add support for crypt_roots to genkernel 4.0.9 (add-crypt-roots-support.patch,4.13 KB, patch)
2020-07-14 12:58 UTC, Edward Middleton
Details | Diff
minimal patch to add support for crypt_roots to genkernel 4.0.9 (add-crypt-roots-support.patch,1.44 KB, patch)
2020-07-15 06:56 UTC, Edward Middleton
Details | Diff
boot fails with multi luks + btrfs raid1 (IMG_20200715_095511_small.jpg,836.57 KB, image/jpeg)
2020-07-15 17:03 UTC, Vladi
Details
minimal patch to add support for crypt_roots to genkernel 4.0.10 (add-crypt-roots-support-4.0.10.patch,1.45 KB, patch)
2020-07-24 06:25 UTC, Edward Middleton
Details | Diff
minimal patch to add support for crypt_roots to genkernel 4.1.0 (add-crypt-roots-support-4.1.0.patch,1.44 KB, patch)
2020-08-22 10:15 UTC, Edward Middleton
Details | Diff
minimal patch to add support for crypt_roots to genkernel 4.1.2 (add-crypt-roots-support-4.1.2.patch,1.44 KB, patch)
2020-09-02 14:34 UTC, Edward Middleton
Details | Diff
minimal patch to add support for crypt_roots to genkernel 4.2.1 (add-crypt-roots-support-4.2.1.patch,1.37 KB, patch)
2021-04-01 04:31 UTC, Edward Middleton
Details | Diff
minimal patch to add support for crypt_roots to genkernel 4.2.3 (add-crypt-roots-support-4.2.3.patch,1.40 KB, patch)
2021-07-13 13:14 UTC, Edward Middleton
Details | Diff

Description Vladi 2019-09-18 02:15:54 UTC
Hi, I do not see how to decrypt multiple LUKS2 partitions before mounting my raid1 btrfs volume.

emerge info: http://dpaste.com/381SAH6

GRUB_CMDLINE_LINUX="rootfs=btrfs dobtrfs dolvm root_trim=yes crypt_root=UUID=25597508-6959-40a1-9eff-2fc0f5027b47"
Comment 1 Thomas Deutschmann (RETIRED) gentoo-dev 2019-09-18 10:11:17 UTC
Please describe your disk/volume layout.
Comment 2 Vladi 2019-09-18 16:39:01 UTC
Similar to below:

nvme0n1 => p3 LUKS => device 1 btrfs raid1
nvme1n1 => p3 LUKS => device 2 btrfs raid1

nvme0n1     259:0    0 953.9G  0 disk  
├─nvme0n1p1 259:1    0   260M  0 part  /boot/efi
├─nvme0n1p2 259:2    0   256M  0 part  /boot
├─nvme0n1p3 259:3    0 937.8G  0 part  
│ └─root    253:0    0 937.8G  0 crypt /


nvme1n1     259:0    0 953.9G  0 disk  
├─nvme1n1p1 259:1    0   260M  0 part  /boot/efi
├─nvme1n1p2 259:2    0   256M  0 part  /boot
├─nvme1n1p3 259:3    0 937.8G  0 part  
│ └─root    253:0    0 937.8G  0 crypt /
Comment 3 Vladi 2020-03-16 15:26:28 UTC
Any other information needed? Basically, I need to decrypt two LUKS block devices in the initramfs before the btrfs device scan happens.
Comment 4 Xeha 2020-03-29 12:04:07 UTC
Are there any plans to merge this from genkernel-next?
Comment 5 Thomas Deutschmann (RETIRED) gentoo-dev 2020-03-29 13:09:32 UTC
No, there are no plans to support multiple crypt_roots.

Also, I still don't understand your problem. That looks like a basic setup which is already supported and doesn't even require multiple crypt_roots.
Comment 6 Xeha 2020-03-29 14:06:20 UTC
If you have 2 disks which are part of a raid/pool and both are necessary to mount the rootfs, how would you do that with genkernel?
Comment 7 Thomas Deutschmann (RETIRED) gentoo-dev 2020-03-29 15:21:51 UTC
A typical RAID setup with genkernel looks like

> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> md126 : active raid1 nvme3n1p2[0] nvme2n1p2[1]
>       4190208 blocks super 1.2 [2/2] [UU]
> 
> md127 : active raid1 nvme3n1p3[0] nvme2n1p3[1]
>       1623336128 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/13 pages [0KB], 65536KB chunk

> # lsblk
> NAME                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> nvme2n1                     259:6    0   1,8T  0 disk
> ├─nvme2n1p1                 259:9    0   260M  0 part
> ├─nvme2n1p2                 259:10   0     4G  0 part
> │ └─md126                     9:126  0     4G  0 raid1 /boot
> └─nvme2n1p3                 259:11   0   1,5T  0 part
>   └─md127                     9:127  0   1,5T  0 raid1
>     └─root                  253:0    0   1,5T  0 crypt
>       ├─dev1Storage-volSwap 253:1    0    16G  0 lvm   [SWAP]
>       ├─dev1Storage-volLog  253:2    0    20G  0 lvm   /var/log
>       └─dev1Storage-volRoot 253:3    0   1,4T  0 lvm   /
> nvme3n1                     259:13   0   1,8T  0 disk
> ├─nvme3n1p1                 259:14   0   260M  0 part
> ├─nvme3n1p2                 259:15   0     4G  0 part
> │ └─md126                     9:126  0     4G  0 raid1 /boot
> └─nvme3n1p3                 259:16   0   1,5T  0 part
>   └─md127                     9:127  0   1,5T  0 raid1
>     └─root                  253:0    0   1,5T  0 crypt
>       ├─dev1Storage-volSwap 253:1    0    16G  0 lvm   [SWAP]
>       ├─dev1Storage-volLog  253:2    0    20G  0 lvm   /var/log
>       └─dev1Storage-volRoot 253:3    0   1,4T  0 lvm   /

In this system I have two NVMe devices with 3 partitions (EFI, /boot and my 'vault').
You will create the RAID, first (in my case I created RAID for /boot and LUKS volume).
On top of that RAID volume, you will add LUKS (md127 in my case).
On top of that LUKS volume, you can add LVM for example like shown.

But this is just one example. Please read source code: https://gitweb.gentoo.org/proj/genkernel.git/tree/defaults/linuxrc?h=v4.0.5#n588

We basically try to start everything first. Then we will scan for root device. And once we detect a root device, we will re-run most functions to allow volumes stored on root volume to become available (in my example the LVM thing).
Comment 8 Xeha 2020-03-29 15:34:50 UTC
So you have LUKS on top of mdadm, therefore just 1 crypt_root.

But for my setup, with ZFS, I have multiple partitions that are LUKS. After decrypting all of those, I can import my root pool. So it's ZFS on LUKS.

Any chance of getting this working with normal genkernel? I'm currently using genkernel-next to specify multiple LUKS devices to be unlocked.
Comment 9 Vladi 2020-03-29 16:39:23 UTC
Same for my setup, which uses BTRFS. I have multiple LUKS devices that need to be unlocked so that the btrfs device scan can detect all members of the array.
Comment 10 Thomas Deutschmann (RETIRED) gentoo-dev 2020-03-29 18:35:42 UTC
When you require multiple LUKS devices, there is no chance.

We have several ZFS users, also using encryption. You maybe have to adjust your setup.
Comment 11 Vladi 2020-03-29 18:50:22 UTC
Do you mean change or chance? The block at line 605 calls a do_LUKS function; does that detect other devices? How does genkernel detect the other LUKS devices, or how can we help it do so? Currently I have two and only one is picked up by genkernel. If you know of a way to detect these, please help us out. I have read the wiki and the man page and do not see a way to do this currently.
Comment 12 Vladi 2020-03-29 18:59:01 UTC
Here is the reasoning genkernel-next has for the multiple crypt_roots:
            # The first entry will be the one that
            # is going to be mapped to ${REAL_ROOT}.
            # Multiple "roots" devices are needed
            # in order to support multiple dmcrypts
            # aggregated through software raid arrays.
            CRYPT_ROOTS="${CRYPT_ROOTS} ${x#*=}"
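To illustrate, a minimal sketch of how such a command-line parser accumulates values. The parse loop itself is hypothetical; only the CRYPT_ROOTS assignment mirrors the genkernel-next snippet quoted above:

```shell
# Sketch: each crypt_roots=... kernel parameter is appended to
# CRYPT_ROOTS rather than overwriting it, so several LUKS devices
# can be listed on the kernel command line.
parse_cmdline() {
    CRYPT_ROOTS=""
    for x in "$@"; do
        case "${x}" in
            crypt_roots=*)
                # The first entry is the one mapped to ${REAL_ROOT};
                # later entries cover the other raid members.
                CRYPT_ROOTS="${CRYPT_ROOTS} ${x#*=}"
                ;;
        esac
    done
    CRYPT_ROOTS="${CRYPT_ROOTS# }"   # trim leading space
}
```

So parse_cmdline dobtrfs crypt_roots=UUID=aaa crypt_roots=UUID=bbb leaves CRYPT_ROOTS holding both UUID specs, in order.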
Comment 13 Xeha 2020-03-29 19:29:45 UTC
(In reply to Thomas Deutschmann from comment #10)
> When you require multiple LUKS devices, there is no chance.
> 
> We have several ZFS users, also using encryption. You maybe have to adjust
> your setup.

Hence this request to have multiple LUKS devices decrypted via crypt_roots in genkernel.

Is there a specific reason not to support multiple LUKS devices?
Comment 14 Thomas Deutschmann (RETIRED) gentoo-dev 2020-03-29 20:46:59 UTC
It's against the intention of genkernel. Please read the note in https://wiki.gentoo.org/wiki/Genkernel about genkernel's primary job.

Everything we add has to be maintained/tested. Nobody is doing that, so we keep the feature set limited. Just saying "but my setup will require..." is not enough. Explain why your setup is common and why you can't do it like most people have been doing for years without the need for multiple crypt_roots.

I am sorry but until there's a *real* technical reason why we must support something like that the answer is: No, we are not going to support that. Please fix/adjust your setup if you want to use genkernel.
Comment 15 Vladi 2020-03-30 00:44:03 UTC
Unless I am mistaken and there is some other way we are missing to use genkernel to activate a LUKS multi-volume BTRFS/ZFS root, is that not enough of a reason to qualify for mainstream? If people using btrfs raid1 + LUKS are unable to boot Gentoo, then that should be a bug, no?
Comment 16 Vladi 2020-03-30 02:38:12 UTC
https://forums.gentoo.org/viewtopic-t-1024794-start-0.html
This forum post seems to also run into this issue.
Comment 17 Xeha 2020-04-01 13:29:19 UTC
From the genkernel description:

>Its primary job is to bring up only the basic stuff needed to mount a (block) 
>device containing the root filesystem so that it can hand off control to real 
>system as soon as possible.

Without multiple crypt devices, you cannot import some root filesystems to hand off control to the real system.

Meanwhile I just switched away from genkernel/genkernel-next to dracut for the initramfs. It's sad that a tool that should make all this easy makes such a basic thing so complicated.

I can understand if you still don't want to implement this, especially since you don't see a "*real* technical reason" to have multiple LUKS devices for the rootfs.

So in the end, I did as you suggested. I adjusted my setup so that I no longer hit this genkernel issue, and so far I am happy with dracut...
Comment 18 Vladi 2020-04-05 16:38:14 UTC
Here is what the boot looks like with dracut, and what we are trying to get with genkernel.


❯ sudo blkid
/dev/nvme1n1p1: UUID="1435079933A60EB4" TYPE="ntfs" PTTYPE="dos" PARTUUID="1783a775-8b86-3644-9c92-8ecd5ef1e451"
/dev/nvme1n1p2: UUID="25597508-6959-40a1-9eff-2fc0f5027b47" TYPE="crypto_LUKS" PARTUUID="656c957b-d4dd-734e-a84f-610e10b4f6ea"
/dev/nvme1n1p3: LABEL="cryptswap" UUID="f0ba1391-3970-421d-9938-f1b078cd4c4a" TYPE="ext2" PARTUUID="e9ac5fdd-0951-8140-b820-b97cdb4e81b2"
/dev/nvme1n1p4: LABEL="boot" UUID="6b1a2cc1-bdfc-4785-86dd-1e4ba7b125fe" TYPE="ext4" PARTUUID="e7055681-a5ff-ac42-9ec8-e732cb8b6a2e"
/dev/nvme0n1p1: LABEL="Recovery" UUID="A6802D31802D08FF" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="76daf90e-ef80-4c47-b20e-cb3184c55ee8"
/dev/nvme0n1p2: UUID="6E2D-B681" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="140ba496-73bc-491a-8a46-fe1e6bfbde93"
/dev/nvme0n1p3: LABEL="crypto-swap" UUID="e95ab8f6-82bf-41f1-8a4a-e1b144079573" TYPE="swap" PARTLABEL="Microsoft reserved partition" PARTUUID="1cfbbd97-8ef7-4c9b-85fb-6874ddb45dcb"
/dev/nvme0n1p4: UUID="901A47491A472B92" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="34fb6b38-b8e6-439b-b7f4-bcf7c7efc355"
/dev/nvme0n1p5: UUID="ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065" TYPE="crypto_LUKS" PARTUUID="2846b6d8-5f95-40fc-b2d1-c509f178d0b0"
/dev/mapper/luks-25597508-6959-40a1-9eff-2fc0f5027b47: LABEL="root" UUID="ed8ec199-1fba-4905-a63f-d2b8427a0f55" UUID_SUB="e120284f-0113-403d-bd47-85aa85625d10" TYPE="btrfs"
/dev/mapper/luks-ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065: LABEL="root" UUID="ed8ec199-1fba-4905-a63f-d2b8427a0f55" UUID_SUB="21a3ee31-961f-4d9c-a7a1-46cee63db7cb" TYPE="btrfs"
/dev/nvme1n1: PTUUID="49e48905-aad9-11e9-abb3-00d86153fb3e" PTTYPE="gpt"
/dev/nvme0n1: PTUUID="c66534c8-966a-4dd1-9f15-34d8b8e786ef" PTTYPE="gpt"

~
❯ lsblk 
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme1n1                                       259:0    0 953.9G  0 disk  
├─nvme1n1p1                                   259:2    0   450G  0 part  
├─nvme1n1p2                                   259:3    0   488G  0 part  
│ └─luks-25597508-6959-40a1-9eff-2fc0f5027b47 254:1    0   488G  0 crypt /home
├─nvme1n1p3                                   259:4    0  15.4G  0 part  
└─nvme1n1p4                                   259:5    0 489.3M  0 part  /boot
nvme0n1                                       259:1    0 953.9G  0 disk  
├─nvme0n1p1                                   259:6    0   499M  0 part  
├─nvme0n1p2                                   259:7    0    99M  0 part  
├─nvme0n1p3                                   259:8    0    16M  0 part  
├─nvme0n1p4                                   259:9    0 488.4G  0 part  
└─nvme0n1p5                                   259:10   0 464.9G  0 part  
  └─luks-ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065 254:0    0 464.9G  0 crypt 

~
❯ cat /proc/cmdline 
BOOT_IMAGE=/vmlinuz-5.6.0 root=UUID=ed8ec199-1fba-4905-a63f-d2b8427a0f55 ro rootflags=subvol=@root rd.luks.uuid=ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065 rd.luks.uuid=25597508-6959-40a1-9eff-2fc0f5027b47 rootfs=btrfs

~
❯ sudo btrfs fi show
Label: 'root'  uuid: ed8ec199-1fba-4905-a63f-d2b8427a0f55
	Total devices 2 FS bytes used 301.08GiB
	devid    1 size 488.00GiB used 310.02GiB path /dev/mapper/luks-25597508-6959-40a1-9eff-2fc0f5027b47
	devid    2 size 464.84GiB used 25.00GiB path /dev/mapper/luks-ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065
Comment 19 Thomas Deutschmann (RETIRED) gentoo-dev 2020-04-05 17:36:54 UTC
We didn't need to support multiple LUKS volumes in the past 15 years and sorry, I still don't see that this has changed. Just fix your setup if you want to use genkernel, or don't use genkernel; there are alternatives:

From your example, your root resides on

> └─nvme0n1p5                                   259:10   0 464.9G  0 part  
>   └─luks-ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065 254:0    0 464.9G  0 crypt 
That's the volume genkernel is supposed to open during boot.

Now configure your system services to open the additional LUKS volume

> ├─nvme1n1p2                                   259:3    0   488G  0 part  
> │ └─luks-25597508-6959-40a1-9eff-2fc0f5027b47 254:1    0   488G  0 crypt /home
which contains your /home.

When using OpenRC, you would configure the dmcrypt service to do that, using a key file which can be located on the encrypted root device. It is not complicated to do.
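As a sketch of that suggestion (the target name and key file path below are examples, not taken from this report), the /etc/conf.d/dmcrypt entry for the extra volume might look like:

```shell
# /etc/conf.d/dmcrypt -- open the /home LUKS volume at service start,
# using a key file stored on the already-decrypted root device.
target=home
source=UUID='25597508-6959-40a1-9eff-2fc0f5027b47'
key=/etc/keys/home.key
```

The dmcrypt service then needs to run early enough, typically via rc-update add dmcrypt boot, so the volume is open before localmount mounts /home.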
Comment 20 Vladi 2020-04-05 18:16:10 UTC
@Thomas Deutschmann: the root and home are on the same btrfs volume, which is a raid1:
❯ sudo btrfs subvol list /
ID 256 gen 33127 top level 5 path @root
ID 258 gen 33127 top level 5 path @home

So if I only had one of my LUKS partitions open at boot, as you suggest, btrfs would fail to mount, because there are two devices in the btrfs raid1:

❯ sudo btrfs fi show
Label: 'root'  uuid: ed8ec199-1fba-4905-a63f-d2b8427a0f55
	Total devices 2 FS bytes used 307.20GiB
	devid    1 size 488.00GiB used 309.03GiB path /dev/mapper/luks-25597508-6959-40a1-9eff-2fc0f5027b47
	devid    2 size 464.84GiB used 309.03GiB path /dev/mapper/luks-ec2e18a1-3a59-49f6-b7ec-1e3e4fbd8065


Those two devices are needed at the initramfs level for btrfs to mount; then in fstab I mount my home and root:
UUID=ed8ec199-1fba-4905-a63f-d2b8427a0f55         /		        btrfs		defaults,noatime,subvol=@root 0 0
UUID=ed8ec199-1fba-4905-a63f-d2b8427a0f55         /home	        btrfs		defaults,noatime,subvol=@home,nofail 0 0


So since you are asking me to use the dmcrypt service at boot, that means my root would need to be already mounted, which it won't be because of the missing LUKS volume that is part of the btrfs raid1 group.
Comment 21 Xeha 2020-04-05 18:35:01 UTC
In my ZFS setup, it looks like this:

blkid:
/dev/sda1: LABEL="bpool" UUID="2188703416925228116" UUID_SUB="17643639836280086655" TYPE="zfs_member" PARTUUID="04ce98b2-01"
/dev/sda2: UUID="2ecdef65-5d0b-5e82-a465-8921b5c17ebc" UUID_SUB="03c7f590-b604-2ed8-3fa9-06b13e7f35fe" LABEL="T520:swap" TYPE="linux_raid_member" PARTUUID="04ce98b2-02"
/dev/sda3: UUID="3bc3119a-23ec-4675-9268-986700943d80" TYPE="crypto_LUKS" PARTUUID="04ce98b2-03"
/dev/sdb1: LABEL="bpool" UUID="2188703416925228116" UUID_SUB="10115771453936582708" TYPE="zfs_member" PARTUUID="8ca73bef-01"
/dev/sdb2: UUID="2ecdef65-5d0b-5e82-a465-8921b5c17ebc" UUID_SUB="5af5ce1d-f25e-3ee5-c126-da9c1d541408" LABEL="T520:swap" TYPE="linux_raid_member" PARTUUID="8ca73bef-02"
/dev/sdb3: UUID="aab7bf38-a8f7-4757-acca-c93061941707" TYPE="crypto_LUKS" PARTUUID="8ca73bef-03"
/dev/md127: UUID="f18e83a2-af0d-4cf0-a2ff-0dd15aa50ccb" TYPE="crypto_LUKS"
/dev/mapper/luks-f18e83a2-af0d-4cf0-a2ff-0dd15aa50ccb: UUID="17166d30-de0e-4727-adac-99c6e3f52f6d" TYPE="swap"
/dev/sdc: UUID="34207630-c450-4b81-9595-153decd6b8a9" TYPE="crypto_LUKS"
/dev/mapper/luks-3bc3119a-23ec-4675-9268-986700943d80: LABEL="rpool" UUID="17445744107752055202" UUID_SUB="16402149316153681789" TYPE="zfs_member"
/dev/mapper/luks-aab7bf38-a8f7-4757-acca-c93061941707: LABEL="rpool" UUID="17445744107752055202" UUID_SUB="16725569833090245865" TYPE="zfs_member"
/dev/mapper/luks-34207630-c450-4b81-9595-153decd6b8a9: UUID_SUB="14194125744820836073" TYPE="zfs_member"

zpool status:
  pool: bpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 00:00:02 with 0 errors on Wed Apr  1 17:39:58 2020
config:

	NAME                                                                  STATE     READ WRITE CKSUM
	bpool                                                                 ONLINE       0     0     0
	  mirror-0                                                            ONLINE       0     0     0
	    /dev/disk/by-id/ata-HITACHI_HTS727550A9E364_J3320082GRM5VA-part1  ONLINE       0     0     0
	    /dev/disk/by-id/ata-ST500LM000-SSHD-8GB_W764KNPY-part1            ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE

action: Export this pool on all systems on which it is imported.
	Then import it to correct the mismatch.
  scan: scrub repaired 0 in 1h14m with 0 errors on Sun Mar  8 15:38:32 2020
config:

	NAME                                                       STATE     READ WRITE CKSUM
	rpool                                                      ONLINE       0     0     0
	  mirror-0                                                 ONLINE       0     0     0
	    /dev/mapper/luks-3bc3119a-23ec-4675-9268-986700943d80  ONLINE       0     0     0
	    /dev/mapper/luks-aab7bf38-a8f7-4757-acca-c93061941707  ONLINE       0     0     0
	cache
	  /dev/mapper/luks-34207630-c450-4b81-9595-153decd6b8a9    ONLINE       0     0     0

errors: No known data errors

lsblk:
NAME                                            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                               8:0    0 465.8G  0 disk  
├─sda1                                            8:1    0     5G  0 part  
├─sda2                                            8:2    0    16G  0 part  
│ └─md127                                         9:127  0    16G  0 raid1 
│   └─luks-f18e83a2-af0d-4cf0-a2ff-0dd15aa50ccb 253:0    0    16G  0 crypt [SWAP]
└─sda3                                            8:3    0 444.8G  0 part  
  └─luks-3bc3119a-23ec-4675-9268-986700943d80   253:1    0 444.8G  0 crypt 
sdb                                               8:16   0 465.8G  0 disk  
├─sdb1                                            8:17   0     5G  0 part  
├─sdb2                                            8:18   0    16G  0 part  
│ └─md127                                         9:127  0    16G  0 raid1 
│   └─luks-f18e83a2-af0d-4cf0-a2ff-0dd15aa50ccb 253:0    0    16G  0 crypt [SWAP]
└─sdb3                                            8:19   0 444.8G  0 part  
  └─luks-aab7bf38-a8f7-4757-acca-c93061941707   253:2    0 444.8G  0 crypt 
sdc                                               8:32   0    60G  0 disk  
└─luks-34207630-c450-4b81-9595-153decd6b8a9     253:3    0    60G  0 crypt 
zram0                                           252:0    0   1.4G  0 disk  [SWAP]
zram1                                           252:1    0   1.4G  0 disk  [SWAP]
zram2                                           252:2    0   1.4G  0 disk  [SWAP]
zram3                                           252:3    0   1.4G  0 disk  [SWAP]


So I have 2 LUKS devices that are mirrored for the root pool and need unlocking: sda3 and sdb3. sdc is a fast SSD cache which could be attached to the zfs pool later. Then there's an mdadm raid with LUKS on top for swap (which could also be done later).

I think it's quite ignorant to say that "we" didn't need it for 15 years. Just because you don't have this need doesn't mean it applies to everyone else. Others had the same issues and just ditched the genkernel initramfs like we did now... That doesn't mean there is no need for it, just that it gets ignored or worked around.

If you are arguing like this, I could argue: *sarcasm* there is no need for the genkernel initramfs, since there are alternatives and no one has a "*real* technical reason" to depend on it.
From what I've seen in other bug reports and requests, this seems to be more of a political thing to block everything that was done in genkernel-next.



TL;DR: there is a "*real* technical reason" for this setup, and ignoring it doesn't mean it doesn't exist.
Comment 22 Edward Middleton 2020-07-14 05:25:45 UTC
You are seeing this now because gentoo-next has been masked and everyone who was running btrfs raid on encrypted block devices is looking at moving back to genkernel.

Btrfs does not support internal encryption, and the recommended way to encrypt btrfs filesystems is to run them on at least 2 LUKS-encrypted block devices (you need at least 2 devices to restore data lost to bit-level corruption on one of them).

If there is not going to be support for multiple encrypted block devices I will need to either fork genkernel or work on updating genkernel-next.

If the issue is the work involved in adding this, I am happy to look at it, but having not looked at the genkernel code in years, it might take me a bit of time to get started.
Comment 23 Edward Middleton 2020-07-14 05:27:16 UTC
That should be genkernel-next not gentoo-next.
Comment 24 Edward Middleton 2020-07-14 12:58:19 UTC
Created attachment 649158 [details, diff]
patch to add support for crypt_roots to genkernel 4.0.9

This is a first pass at adding crypt_roots to genkernel 4.0.9; it could do with some cleanup. I have tested it with a btrfs root on two LUKS-encrypted block devices.
Comment 25 Edward Middleton 2020-07-14 16:42:57 UTC
The above patch breaks the single device case.  I am looking at that now.
Comment 26 Edward Middleton 2020-07-15 06:56:36 UTC
Created attachment 649244 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.0.9

This is a minimal change to add crypt_roots support. It allows the use of root filesystems built on multiple encrypted block devices, which is the recommended way of implementing encryption for btrfs. It has been tested on configurations with single and multiple encrypted devices.
Comment 27 Edward Middleton 2020-07-15 08:05:17 UTC
(In reply to Thomas Deutschmann from comment #7)
> A typical RAID setup with genkernel looks like
> 
> > # cat /proc/mdstat
> > Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> > md126 : active raid1 nvme3n1p2[0] nvme2n1p2[1]
> >       4190208 blocks super 1.2 [2/2] [UU]
> > 
> > md127 : active raid1 nvme3n1p3[0] nvme2n1p3[1]
> >       1623336128 blocks super 1.2 [2/2] [UU]
> >       bitmap: 0/13 pages [0KB], 65536KB chunk
> 
> > # lsblk
> > NAME                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> > nvme2n1                     259:6    0   1,8T  0 disk
> > ├─nvme2n1p1                 259:9    0   260M  0 part
> > ├─nvme2n1p2                 259:10   0     4G  0 part
> > │ └─md126                     9:126  0     4G  0 raid1 /boot
> > └─nvme2n1p3                 259:11   0   1,5T  0 part
> >   └─md127                     9:127  0   1,5T  0 raid1
> >     └─root                  253:0    0   1,5T  0 crypt
> >       ├─dev1Storage-volSwap 253:1    0    16G  0 lvm   [SWAP]
> >       ├─dev1Storage-volLog  253:2    0    20G  0 lvm   /var/log
> >       └─dev1Storage-volRoot 253:3    0   1,4T  0 lvm   /
> > nvme3n1                     259:13   0   1,8T  0 disk
> > ├─nvme3n1p1                 259:14   0   260M  0 part
> > ├─nvme3n1p2                 259:15   0     4G  0 part
> > │ └─md126                     9:126  0     4G  0 raid1 /boot
> > └─nvme3n1p3                 259:16   0   1,5T  0 part
> >   └─md127                     9:127  0   1,5T  0 raid1
> >     └─root                  253:0    0   1,5T  0 crypt
> >       ├─dev1Storage-volSwap 253:1    0    16G  0 lvm   [SWAP]
> >       ├─dev1Storage-volLog  253:2    0    20G  0 lvm   /var/log
> >       └─dev1Storage-volRoot 253:3    0   1,4T  0 lvm   /
> 
> In this system I have two NVMe devices with 3 partitions (EFI, /boot and my
> 'vault').
> You will create the RAID, first (in my case I created RAID for /boot and
> LUKS volume).
> On top of that RAID volume, you will add LUKS (md127 in my case).
> On top of that LUKS volume, you can add LVM for example like shown.
> 
> But this is just one example. Please read source code:
> https://gitweb.gentoo.org/proj/genkernel.git/tree/defaults/linuxrc?h=v4.0.
> 5#n588
> 
> We basically try to start everything first. Then we will scan for root
> device. And once we detect a root device, we will re-run most functions to
> allow volumes stored on root volume to become available (in my example the
> LVM thing).

btrfs supports raid1 in the filesystem, which allows you to recover from bit errors in the underlying block devices. Those can happen when you have a mirrored raid and the same blocks contain different data between the two mirrors. Linux software raid can't tell which block is correct if the mirrors differ. btrfs raid1 stores checksums, which allow the filesystem to determine which copy is corrupted and replace it. To do this, btrfs needs to handle the raid over the underlying devices itself.

I have had this bit-level corruption occur with Linux software raid, and it is very hard to recover from because there is no general way to know which data is corrupted.

I believe zfs raid works in a similar way.
Comment 28 Vladi 2020-07-15 17:03:42 UTC
Created attachment 649314 [details]
boot fails with multi luks + btrfs raid1

Unable to boot with my raid1 btrfs setup and the applied patch.
Comment 29 Edward Middleton 2020-07-15 17:30:08 UTC
(In reply to Vladi from comment #28)
> Created attachment 649314 [details]
> boot fails with multi luks + btrfs raid1
> 
> Unable to boot with my raid1 btrfs setup and the applied patch.

This looks like it is caused by something else. The patch sets up the first device as /dev/mapper/root and any subsequent devices as /dev/mapper/root_n, which is what it has done here.
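A sketch of that naming scheme as I read it from the patch description (illustrative shell, not code taken from the patch):

```shell
# First crypt root keeps the traditional "root" mapping name;
# subsequent ones get a numeric suffix: root_1, root_2, ...
crypt_root_name() {
    if [ "$1" -eq 0 ]; then
        echo "root"
    else
        echo "root_$1"
    fi
}
```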

I assume /sbin/switch_root moves you into /newroot so you would need to

# ls /usr/lib/systemd/systemd

to check if it was there.

I have tested multi-disk setups using systemd and using regular openrc and it worked for both.

Is this a systemd setup or openrc?
Comment 30 Edward Middleton 2020-07-15 17:59:32 UTC
I have rootfstype=btrfs set, so it doesn't probe for the filesystem type.
Comment 31 Vladi 2020-07-16 15:18:29 UTC
Yup, it was my fault. I uncommented my genkernel line but did not remove the systemd line; I use openrc now. Thanks for getting this done, it works great!
Comment 32 Edward Middleton 2020-07-24 06:25:57 UTC
Created attachment 650442 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.0.10

This is the updated patch for 4.0.10. If there is interest in merging this, I can create a patch based on the git repo.
Comment 33 Christian Becke 2020-08-06 11:12:37 UTC
(In reply to Edward Middleton from comment #22)
> You are seeing this now because gentoo-next has been masked and everyone who
> was running btrfs raid on encrypted block devices is looking at moving back
> to genkernel.
This is the case for me. I've been booting my systems with LUKS -> mdadm -> lvm -> root for years without issues using genkernel-next. Since this is masked now, I am looking into genkernel and it does not work for my setup.

@Thomas Deutschmann: genkernel did not need to support multiple LUKS volumes because there was genkernel-next! Please consider adding Edward Middleton's patches.
Comment 34 Thomas Deutschmann (RETIRED) gentoo-dev 2020-08-06 11:32:08 UTC
I am not saying no, but adding something which *can* break stuff, without a setup of your own to test on, is challenging.
Comment 35 Christian Becke 2020-08-06 11:59:36 UTC
Maybe I misread your posts, but your previous comments pretty much sounded like "no" to me: "No, there are no plans to support multiple crypt_roots.", "When you require multiple LUKS devices, there is no chance.", "It's against the intention of genkernel.", "We didn't need to support multiple LUKS volumes in the past 15 years and sorry, I still don't see that this has changed." I am glad to hear that this is not the case.
I will test the patch, will report back later.
Comment 36 Edward Middleton 2020-08-06 14:19:51 UTC
(In reply to Christian Becke from comment #33)
> (In reply to Edward Middleton from comment #22)
> > You are seeing this now because gentoo-next has been masked and everyone who
> > was running btrfs raid on encrypted block devices is looking at moving back
> > to genkernel.
> This is the case for me. I've been booting my systems with LUKS -> mdadm ->
> lvm -> root for years without issues using genkernel-next. Since this is
> masked now, I am looking into genkernel and it does not work for my setup.

I haven't tested this configuration. If the raid setup occurs after the LUKS devices are set up, it should work. Is there a reason for running LUKS on the raw block devices rather than on the mdadm raid? I have only ever run with

mdadm (raid1) -> luks -> lvm
Comment 37 Thomas Deutschmann (RETIRED) gentoo-dev 2020-08-06 14:43:51 UTC
Well, I still don't really understand why this must be supported. I still believe that you can get it working when you change something in your existing setup... but the "WONTFIX" attitude is gone given that genkernel-next is going away...

But before I can have a look I must understand the use case and also have time to setup a test system...
Comment 38 Edward Middleton 2020-08-07 03:07:37 UTC
(In reply to Thomas Deutschmann from comment #37)
> Well, I still don't really understand why this must be supported. I still
> believe that you can get it working when you change something in your
> existing setup... but the "WONTFIX" attitude is gone given that
> genkernel-next is going away...
> 
> But before I can have a look I must understand the use case and also have
> time to setup a test system...

For btrfs it's the recommended setup. It gives you:

* recovery from single-disk hardware failure
* recovery from bit-level hardware corruption (through checksums)
* protection against disclosure of data due to resale of RMA'd drives

RAID alone will not protect you against bit-level corruption, as it has no way to determine which disk has the correct data. Bit-level errors are more common with large multi-TB disks and flash-memory-based drives.

A minimal setup uses two drives:

/dev/{sda1,sdb1}  boot  (ext2 on md/raid1)      256MiB
/dev/{sda2,sdb2}  root  (luks -> btrfs raid1)   remainder of disk

I haven't specifically tested the steps below, as I don't currently have a machine available, but this is the basic procedure.

# cryptsetup luksFormat -s 512 -c aes-xts-plain64 /dev/sda2
# cryptsetup luksFormat -s 512 -c aes-xts-plain64 /dev/sdb2

# cryptsetup open /dev/sda2 root
# cryptsetup open /dev/sdb2 root1

# mkfs.btrfs -L BTRFS -d raid1 -m raid1 /dev/mapper/root /dev/mapper/root1
# mkdir /mnt/btrfs
# mount /dev/mapper/root /mnt/btrfs
# btrfs subvolume create /mnt/btrfs/root

You then install the root filesystem to /mnt/btrfs/root.

Add the following to your kernel command line:

"domdadm dobtrfs crypt_roots=UUID=[uuid for /dev/sda2] crypt_roots=UUID=[uuid for /dev/sdb2] real_root=LABEL=BTRFS real_rootflags=compress=lzo rootfstype=btrfs"
Comment 39 Edward Middleton 2020-08-22 10:15:46 UTC
Created attachment 656064 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.1.0
Comment 40 Diagon 2020-08-24 03:01:20 UTC
I am also here with the same issue.  This is my first gentoo install, and I signed up to this bug tracker specifically to report this problem.  

If we want to use BTRFS RAID with LUKS for ROOT, then we have no choice - we need to be able to unlock multiple LUKS disks on boot!  

This is possible with systemd, but I came here to escape Debian in order to get away from that.  I used to use mdadm, but btrfs raid is *much* better.  I am absolutely not going back to either one.

One way or another, if I'm staying with Gentoo, I'm going to work around this problem.
Comment 41 Diagon 2020-08-24 03:14:53 UTC
@EdwardMiddleton - 

I haven't understood your patch yet, but please note an additional issue: when multiple disks share the same password, that password should be tried against each disk so it does not have to be typed multiple times. This is the way systemd handles it; a flag in /etc/crypttab indicates which disks are to be opened at boot.

In my case, I have 3 luks disks (root_1, root_2, & data).  The first two need to be opened at boot.
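
The caching behaviour described above could be sketched as a small loop: remember the first passphrase that works and try it on the remaining devices before prompting again. Everything below is illustrative mock code, not genkernel's actual logic; unlock_device stands in for `cryptsetup open` and prompt_passphrase for the hidden initramfs prompt.

```shell
#!/bin/sh
# unlock_device is a hypothetical stand-in for `cryptsetup open <dev> <name>`;
# here it "succeeds" when the passphrase matches a fixed value, for demonstration.
unlock_device() {
    [ "$2" = "secret" ]
}

# prompt_passphrase stands in for the real interactive password prompt.
prompt_passphrase() {
    echo "secret"
}

# Try the cached passphrase first; only prompt when it is unset or fails.
unlock_all() {
    cached=""
    for dev in "$@"; do
        if [ -n "$cached" ] && unlock_device "$dev" "$cached"; then
            echo "$dev: unlocked with cached passphrase"
            continue
        fi
        pass=$(prompt_passphrase)
        if unlock_device "$dev" "$pass"; then
            cached="$pass"
            echo "$dev: unlocked after prompt"
        else
            echo "$dev: unlock failed"
        fi
    done
}

unlock_all /dev/sda2 /dev/sdb2
# /dev/sda2: unlocked after prompt
# /dev/sdb2: unlocked with cached passphrase
```

With this approach, two root disks sharing one passphrase would ask for it only once; a third disk with a different passphrase would trigger a second prompt.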
Comment 42 Edward Middleton 2020-08-28 09:45:49 UTC
(In reply to Diagon from comment #41)
> @EdwardMiddleton - 
> 
> I haven't understood your patch yet, but please note an additional issue:
> when multiple disks share the same password, that password should be tried
> against each disk so it does not have to be typed multiple times.

This is a minimal patch that just allows you to set up multiple encrypted block devices.
Comment 43 Edward Middleton 2020-09-02 14:34:25 UTC
Created attachment 657942 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.1.2
Comment 44 Diagon 2021-01-11 00:57:02 UTC
I did get this to work on an install.  Two things need to be pointed out:

1. It requires entering the password separately for each disk.
2. The opened disks get fixed names: /dev/mapper/root and /dev/mapper/root_1.

So the kernel line is:

dobtrfs root=/dev/mapper/root crypt_roots=/dev/sda crypt_roots=/dev/scc rootfstype=btrfs ro
Comment 45 Vladi 2021-03-31 04:05:03 UTC
The patch no longer applies against version 4.2.1-r1. Any chance of getting this included in genkernel?
Comment 46 Edward Middleton 2021-04-01 04:31:24 UTC
Created attachment 696615 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.2.1

minimal patch to add support for crypt_roots to genkernel 4.2.1
Comment 47 Vladi 2021-04-01 19:48:56 UTC
(In reply to Edward Middleton from comment #46)
> Created attachment 696615 [details, diff] [details, diff]
> minimal patch to add support for crypt_roots to genkernel 4.2.1
> 
> minimal patch to add support for crypt_roots to genkernel 4.2.1

Thanks so much Edward!
Comment 48 Edward Middleton 2021-07-13 13:14:12 UTC
Created attachment 723700 [details, diff]
minimal patch to add support for crypt_roots to genkernel 4.2.3
Comment 49 David Sardari 2021-08-22 19:24:56 UTC
Thanks for the patches! I decided to document my Gentoo installation steps. As I always try to have as much encrypted as possible, I created the following repo:
https://github.com/duxco/genkernel-patches

I'll try to keep it up to date. Perhaps it will be helpful to someone :)
Comment 50 anonymous 2021-11-20 11:54:17 UTC
I want to create a zfs mirror on top of two LUKS devices.

I prefer zpool scrub to any scrub functionality that mdraid has.
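
Such a layout would look roughly like this (a sketch; the device names, mapper names, and pool name are all assumptions):

```
cryptsetup open /dev/sda2 crypt0
cryptsetup open /dev/sdb2 crypt1
# zfs mirrors the two opened LUKS mappings; scrub verifies via zfs checksums
zpool create -o ashift=12 tank mirror /dev/mapper/crypt0 /dev/mapper/crypt1
zpool scrub tank
```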
Comment 51 anonymous 2021-11-20 12:21:23 UTC
I just realized that ZFS native encryption is almost as good as LUKS, so I'm going to use it.
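
With native encryption the LUKS layer drops out entirely; a sketch (pool name, devices, and cipher choice are assumptions):

```
# encryption is set on the root dataset at pool creation time
zpool create -O encryption=aes-256-gcm -O keyformat=passphrase \
    tank mirror /dev/sda2 /dev/sdb2
```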
Comment 52 David Sardari 2022-05-24 21:37:12 UTC
FYI, support for "keyctl" got merged into genkernel:
https://github.com/gentoo/genkernel/pull/10

Now I just have to wait for the next genkernel release. If my tests succeed, I will use that instead of integrating a keyfile into the initramfs. This avoids accidental leakage of the keyfile and allows keeping the initramfs on an unencrypted /boot, which would otherwise have to be realised via Grub's "GRUB_ENABLE_CRYPTODISK" setting. I personally use Secure Boot and GnuPG-sign the files required for booting.
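
For reference, the Grub setting mentioned above is a single line in /etc/default/grub, only needed when the initramfs itself sits on an encrypted /boot:

```
# tell grub-install/grub-mkconfig to build in LUKS support
GRUB_ENABLE_CRYPTODISK=y
```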