I ordered 4 identical 2TB drives and wanted to set them up as RAID10. After doing so with the RAID controller utility, I ended up with the expected result... so far.
After booting Linux, I tried to start using the RAID set, and things weren't adding up.
Here's a bit of background info:
Motherboard: Asus M5A97, BIOS 1605, southbridge SB950
=== START OF INFORMATION SECTION ===
Device Model: ST2000DM006-2DM164
Serial Number: Z4Z9EYC7
LU WWN Device Id: 5 000c50 0a5512870
Firmware Version: CC26
User Capacity: 2,000,398,934,016 bytes [2,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Dec 21 01:59:52 2017 CET
SMART support is: Available - device has SMART capability.
SMART support is: Disabled
Upon inspecting each member drive with fdisk, I'm reminded that they carry a promise_fasttrack_raid_member signature.
The controller has the RAID set configured with:
- 64KB stripes
- 4096B sector size
- ReadCache policy
- no GB boundary
The gist of the problem is that dmraid misreads the number of sectors on the fakeraid. Since the controller reports 4000.66GB of usable space, with each sector taking 4096B, that works out to a ballpark figure of 1,048,749,015 sectors, or 8,389,992,120 if we pretend the sectors are 512 bytes long. The results from `dmraid -r` don't add up:
/dev/sdc: pdc, "pdc_fgcdefcj-0", stripe, ok, 1759414400 sectors, data@ 0
/dev/sde: pdc, "pdc_fgcdefcj-1", stripe, ok, 1759414400 sectors, data@ 0
/dev/sdd: pdc, "pdc_fgcdefcj-0", stripe, ok, 1759414400 sectors, data@ 0
/dev/sdf: pdc, "pdc_fgcdefcj-1", stripe, ok, 1759414400 sectors, data@ 0
Which makes for 3,518,828,800 usable sectors, way more than the 1,048,749,015 we anticipated. It gets stranger, though. Activating the RAID set results in two identical block devices appearing, pdc_fgcdefcj-0 and pdc_fgcdefcj-1.
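To double-check my arithmetic, here's the quick calculation. The only assumption is that the controller's "4000.66GB" is really binary GiB, since that's the only reading that reproduces the expected figures:

```python
# Sanity-check the sector math against the controller's reported capacity.
# ASSUMPTION: the controller's "4000.66GB" means GiB (binary units).
usable_bytes = 4000.66 * 1024**3          # controller-reported usable space

sectors_4k = usable_bytes / 4096          # native 4096-byte sectors
sectors_512 = usable_bytes / 512          # 512-byte-sector equivalent

print(round(sectors_4k))                  # 1048749015
print(round(sectors_512))                 # 8389992120

# What dmraid actually reports across the two stripe subsets:
reported = 1759414400 * 2
print(reported)                           # 3518828800
```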
And this is what fdisk has to say:
Sector size (logical/physical) in bytes: 512 / 4096
IO size (minimal/optimal) in bytes: 65536 / 131072
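For what it's worth, the fdisk IO sizes are at least internally consistent with the configured 64KB stripe, assuming the RAID10 stripes across two mirror pairs:

```python
# Relate fdisk's reported IO sizes to the controller's stripe setting.
# ASSUMPTION: RAID10 on 4 drives stripes across 2 data members.
stripe = 64 * 1024        # configured stripe size
data_members = 2

print(stripe)                   # 65536  -> matches "minimal"
print(stripe * data_members)    # 131072 -> matches "optimal"
```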
Here are the relevant version numbers:
dmraid version: 1.0.0.rc16-3 (2010.11.12)
dmraid library version: 1.0.0.rc16-3 (2010.11.12)
device-mapper version: 4.35.0
Does this mean I have to re-create the RAID set with 128KB stripes, or is this a dmraid issue?
I tried re-creating the RAID set with 128KB stripes:
/dev/sdc: pdc, "pdc_ghaehjgj-0", stripe, ok, 1759414272 sectors, data@ 0
/dev/sde: pdc, "pdc_ghaehjgj-1", stripe, ok, 1759414272 sectors, data@ 0
/dev/sdd: pdc, "pdc_ghaehjgj-0", stripe, ok, 1759414272 sectors, data@ 0
/dev/sdf: pdc, "pdc_ghaehjgj-1", stripe, ok, 1759414272 sectors, data@ 0
IO size (minimal/optimal) in bytes: 131072 / 262144
It's still nowhere near the expected figures.
I also re-created the set as RAID0: 2,742,690,304 sectors, roughly 1.3TB instead of ~8TB.
Oddly enough, the RAID controller shows up as follows in lspci:
00:11.0 RAID bus controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode] (rev 40)
Subsystem: ASUSTeK Computer Inc. SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Interrupt: pin A routed to IRQ 19
Region 0: I/O ports at f090 [size=8]
Region 1: I/O ports at f080 [size=4]
Region 2: I/O ports at f070 [size=8]
Region 3: I/O ports at f060 [size=4]
Region 4: I/O ports at f050 [size=16]
Region 5: Memory at fe40b000 (32-bit, non-prefetchable) [size=1K]
Capabilities:  SATA HBA v1.0 InCfgSpace
Capabilities: [a4] PCI Advanced Features
AFCap: TP+ FLR+
Kernel driver in use: ahci
Kernel modules: ahci
Should I be worried about the "RAID5 mode"?
This is a bug tracker, not a support forum. Please try our IRC channels, mailing lists and web forums.