Created attachment 435070 [details]
mdraid stop answer when grow over 32Tib

I have three disks:

/dev/sdb1  2048 31255951359 14.6T Microsoft basic data
/dev/sdc1  2048 31255951359 14.6T Microsoft basic data
/dev/sdd1  2048 31255951359 14.6T Microsoft basic data

and a raid0 on /dev/sd[bc]1. Then I added the third disk and tried to grow the raid:

mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0

When the reshape grows past 32 TiB, the md kernel thread pins a CPU at 100%, locks up the filesystem, and stops responding.

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid4 sdb1[0] sdc1[1] sdd1[3]
      31255687168 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/3] [UU__]
      [>....................]  reshape =  1.8% (294738104/15627843584) finish=10409141422.2min speed=0K/sec

top:

Tasks: 118 total, 2 running, 116 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 50.0 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem:  4049160 total, 555124 used, 3494036 free, 68036 buffers
KiB Swap:  511996 total, 0 used, 511996 free. 379232 cached Mem

 PID USER PR NI VIRT RES SHR S  %CPU %MEM    TIME+ COMMAND
3044 root 20  0    0   0   0 R 100.0  0.0 56436:31 md0_raid4

Any attempt to run mdadm or to mount the filesystem then freezes.
This is certainly well beyond genkernel, and into the realm of upstream kernel bugs. I'd strongly suggest taking it to the upstream Linux Kernel mailing list, because I don't have the hardware to do this (I have just under 10TB in my entire system). However, I do also note that you said RAID0, but it's actually a RAID4 on md0, which is quite unusual.
This looks strange, but is (I think) normal for mdadm: to grow a raid0 array, md temporarily converts it to raid4 for the reshape (and converts it back when the reshape completes), so the array is reported as raid4 while the grow is in progress.
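This can be checked directly against the mdstat output quoted in the report: the personality field on the md0 line reads raid4 during the grow, even though the array was created as raid0. A small illustration (the parsing one-liner is mine, not from the report):

```shell
# The md0 line from /proc/mdstat as quoted in the report above:
line='md0 : active raid4 sdb1[0] sdc1[1] sdd1[3]'

# Field 4 is the personality; while a raid0 grow is in progress it
# reads raid4, reflecting md's temporary level conversion:
level=$(echo "$line" | awk '{print $4}')
echo "$level"   # prints: raid4
```

On a live system the same field can be read from /proc/mdstat itself.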
Is this issue still outstanding?