| Summary: | sys-kernel/gentoo-sources-3.16.5 - secondary server fails to mount OCFS2 with DRBD volume | | |
|---|---|---|---|
| Product: | Gentoo Linux | Reporter: | Adam Randall <randalla> |
| Component: | [OLD] Core system | Assignee: | Gentoo Kernel Bug Wranglers and Kernel Maintainers <kernel> |
| Status: | RESOLVED TEST-REQUEST | | |
| Severity: | normal | CC: | randalla |
| Priority: | Normal | | |
| Version: | unspecified | | |
| Hardware: | AMD64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Package list: | | Runtime testing required: | --- |
| Attachments: | Kernel configuration for 3.10.25-gentoo; Kernel configuration for 3.12.21-gentoo-r1; Kernel configuration for 3.16.5-gentoo | | |
Description
Adam Randall 2014-11-17 19:40:19 UTC

Created attachment 389589 [details]: Kernel configuration for 3.10.25-gentoo
Created attachment 389591 [details]: Kernel configuration for 3.12.21-gentoo-r1
Created attachment 389593 [details]: Kernel configuration for 3.16.5-gentoo
If it matters, here is my DRBD configuration:
```
resource r0 {
    disk {
        al-extents 3389;
        disk-barrier no;
        disk-flushes no;
    }
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        max-buffers 8000;
        max-epoch-size 8000;
        sndbuf-size 512k;
    }
    on node1 {
        device /dev/drbd1;
        disk /dev/sda1;
        address 10.10.254.10:7789;
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd1;
        disk /dev/sda1;
        address 10.10.254.11:7789;
        meta-disk internal;
    }
}
```
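For context, a dual-primary resource like the one above is normally brought online with the drbdadm userland tool. This is a rough sketch, assuming a DRBD 8.4-era toolchain; the resource and device names come from the config, but the exact sequence on this cluster is not stated in the report:

```shell
# On both nodes (resource r0 from the config above):
drbdadm create-md r0      # write internal metadata (destroys data on /dev/sda1)
drbdadm up r0             # attach the backing disk and connect to the peer

# Once, on one node only, to kick off the initial full sync:
drbdadm primary --force r0

# After the sync completes, on the second node as well;
# allow-two-primaries in the net section permits this state:
drbdadm primary r0

cat /proc/drbd            # both nodes should report Primary/Primary
```

The `become-primary-on both` startup option automates the promotion on later boots, which is why a failure to reach Primary/Primary on the secondary is visible at mount time.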
And here is the OCFS2 configuration:
```
cluster:
    heartbeat_mode = global
    node_count = 2
    name = c1

node:
    number = 1
    cluster = c1
    ip_port = 7777
    ip_address = 10.10.254.10
    name = node1

node:
    number = 2
    cluster = c1
    ip_port = 7777
    ip_address = 10.10.254.11
    name = node2

heartbeat:
    cluster = c1
    region = 14DF63D68F504B188E4370E0C31523C3
```
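With that cluster.conf in place on both nodes, the filesystem step the report implies would look roughly like this. The mkfs.ocfs2 flags below assume ocfs2-tools 1.8 or later (global heartbeat support), and the label and mount point are invented for illustration, not taken from the report:

```shell
# On one node, with the o2cb cluster stack online:
mkfs.ocfs2 -L cluster1 -N 2 \
    --cluster-stack=o2cb --cluster-name=c1 --global-heartbeat \
    /dev/drbd1            # creates the heartbeat region the config references

# On both nodes (the secondary is where the mount reportedly fails):
mkdir -p /mnt/cluster
mount -t ocfs2 /dev/drbd1 /mnt/cluster
```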
You have a couple of choices here. You can do a git bisect from the last working kernel to the first non-working one, or you can upgrade to the latest kernel (3.18.1 as of this writing) and see whether this issue has been addressed. I do see some reports that this was fixed in 3.16.7, but I cannot locate a commit that fixed it.

To be honest, I never thought I'd hear back on this report since it's so niche. Still, thank you very much for the feedback. It will be somewhat time consuming for me to bring a pair of my servers up to 3.18.1, so confirmation that it's fixed will take a while. Happy holidays!

OK, I'll close as TEST-REQUEST for now; if you do get the time to test, please let me know the results.
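The bisect route suggested above can be mechanized with `git bisect run`. Below is a toy, self-contained sketch in a throwaway repository; the repository, the `regression` marker file, and the commit count are all invented for illustration. For the kernel you would instead mark the last working tag good and the failing tag bad, and build and boot each candidate:

```shell
#!/bin/sh
# Toy demonstration of the `git bisect run` workflow in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Eight commits; the "regression" (a marker file) appears at commit 5.
for i in 1 2 3 4 5 6 7 8; do
    echo "$i" > version
    [ "$i" -ge 5 ] && touch regression || true
    git add -A
    git commit -qm "commit $i"
done

# bad = newest commit, good = oldest; for the kernel this would be
# something like `git bisect start v3.16.5 v3.12.21`.
git bisect start HEAD HEAD~7

# The probe script exits 0 for good commits, non-zero for bad ones.
git bisect run sh -c 'test ! -f regression' > /dev/null

first_bad=$(git bisect log | tail -n 1)
echo "$first_bad"    # names commit 5 as the first bad commit
```

In the kernel case each `bisect run` step compiles and boots the tree, so a scripted OCFS2 mount attempt on the secondary node would stand in for the `test ! -f regression` probe.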