3.4.0 has some annoying bugs that are fixed in this maintenance release. Reproducible: Always. I changed these things in the ebuild to make it work: lines 44-46 removed, since those patches have been applied by upstream. Line 47 changed to: "${FILESDIR}/${PN}-3.4.0-nfs-exit-when-all-volumes-are-disabled.patch", because this patch still applies. I think it had a reason I don't know about, so it should stay. Real-life testing is in progress... I will report if I find any problem :)
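For reference, a sketch of what the change described above might look like, assuming the ebuild collects its patches in a standard bash PATCHES array (the exact surrounding lines are an assumption, not a quote from the real ebuild):

```shell
# Hypothetical excerpt of the glusterfs ebuild after the edit:
# the three patches that were on lines 44-46 are dropped (merged upstream),
# only the NFS patch from line 47 remains.
PATCHES=(
	# still applies against 3.4.1, kept since its reason is unclear
	"${FILESDIR}/${PN}-3.4.0-nfs-exit-when-all-volumes-are-disabled.patch"
)
```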
Well, upgrading from 3.4.0 to 3.4.1 doesn't seem to work without restarting all members of the cluster (which I can't do at the moment)... Adding the 3.4.1 node to the cluster causes very strange behaviour: peer status shows only the node it was added from, volumes are visible, but... Worse than migrating from 3.3 to 3.4.0 :(
For the record: I upgraded all nodes to 3.4.1 and restarted all glusterd daemons (stop all, then start all). This had to be done because peer status looked essentially random across the nodes... After the restart one node was rejected because its volume checksums differed. I stopped it, manually copied over the volume info files and checksums, and now it works. This config has 12 nodes and 7 volumes, most of them replicated across 2 nodes.
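For anyone hitting the same rejected-peer state after this upgrade, the manual fix described above is roughly the sequence below. This is only a sketch: the state directory (/var/lib/glusterd), the OpenRC init script name, and the "goodnode" hostname are all assumptions to check against your own setup.

```shell
# Run on the REJECTED node, not on a healthy one.
/etc/init.d/glusterd stop

# Back up the local volume definitions, then replace the info files and
# their cksum files with copies from a known-good peer (here "goodnode").
cp -a /var/lib/glusterd/vols /var/lib/glusterd/vols.bak
rsync -a goodnode:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/

/etc/init.d/glusterd start

# Verify from any node: the peer should now be back "in Cluster".
gluster peer status
```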
Hi László, thanks for your patience and for opening this bug, it's now in tree! Have fun

*glusterfs-3.4.1 (31 Dec 2013)

  31 Dec 2013; Ultrabug <ultrabug@gentoo.org> +glusterfs-3.4.1.ebuild:
  version bump, fix #489434, thx to László Szalma