Not available in portage. Could you please add 2.0.1?
Also, 2.0.0 (released 2012-04-10) is still masked for testing.
I made my own ebuild just by copying the 2.0.0 one and running `ebuild corosync-2.0.1.ebuild manifest`. It seemed to build and install fine. However, I then got some other information from #linux-ha suggesting git HEAD was better...
<bobnormal> cheers. since no action on #linux-cluster, can anyone here tell me if there's a changelog available from corosync 2.0.0 -> 2.0.1? or any large known bugs? my distro (gentoo) doesn't have 2.0.1 available yet - i filed a bug to have it added - just wondering if it's worth waiting or rolling my own vs. using 2.0.0
<beekhof> bobnormal: yeah, i think you'd want it
* beekhof knows of a couple of nasty bugs that finally got squashed
<beekhof> changelog straight from git:
<beekhof> + Fabio M. Di Nitto (2 months ago) d7e205d: init: major cleanup (v2.0.1)
<beekhof> + Jan Friesse (3 months ago) 0791f44: Include ringid in processor joined log message
<beekhof> + Jan Friesse (3 months ago) dc5b898: Update TODO file
<beekhof> + Dan Clark (3 months ago) 88dd3e1: Improve testcpg to handle change of node identity
<beekhof> + Fabio M. Di Nitto (3 months ago) f2444ef: icmap: don't leak memory when changing ro/rw status on a key
<beekhof> + Fabio M. Di Nitto (3 months ago) 1dcb2d4: icmap: fix a valgrind errors (pass 1)
<beekhof> + Fabio M. Di Nitto (4 months ago) d2872ae: crypto init: release *_slot resource after init
<beekhof> + Fabio M. Di Nitto (4 months ago) b34c1e2: ipcs: allow connections only after all services are ready
<beekhof> actually the fixes i'm thinking of are queued for 2.0.2
... because of this, I made a new -9999 (git HEAD) ebuild. Could you please consider including it, masked?
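For reference, a live ebuild like this is essentially the existing 2.x ebuild with the source switched to git. A minimal sketch of the relevant changes follows; the repository URI and eclass shown are my assumptions, not necessarily what the attachment uses:

```shell
# corosync-9999.ebuild -- sketch of the git-HEAD-specific bits only
EAPI=4

# fetch from upstream git instead of a release tarball (URI assumed)
EGIT_REPO_URI="git://github.com/corosync/corosync.git"
inherit git-2 autotools

# live ebuilds carry no KEYWORDS, so they stay unkeyworded/masked by default
KEYWORDS=""

src_prepare() {
	# a git checkout ships no configure script, so regenerate it
	eautoreconf
}
```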
Created attachment 320006
git HEAD ebuild
Hi Walter, I'd be happy to process this request further, but as the mask suggests, I'm still missing some tests and feedback on the corosync-2 series.
Would you mind sharing your experience with it? I, for one, have been unable to get it working with current pacemaker-1.1.7, for instance.
What versions do you use? Is there a guide somewhere that you used to migrate? (I've not had the time to really dig into this enough.)
git HEAD is the 2.x tree, apparently. To make the git HEAD ebuild here I just modified the latest existing 2.x ebuild to use git as a source.
There are instructions here:
Works on my machine!
(In reply to comment #4)
> git HEAD is the 2.x tree, apparently. To make the git HEAD ebuild here I
> just modified the latest existing 2.x ebuild to use git as a source.
> There are instructions here:
> Works on my machine!
I know how git HEAD ebuilds work, mate, but we're talking about existing clusters and a user base here, so I'm afraid saying "it works for me" won't be enough.
Can you provide the result of the following command: qlist -ICv sys-cluster
testnode local.d # qlist -ICv sys-cluster
testnode local.d #
Re-reading your question: this is a 'from scratch' setup - I did not migrate.
@Walter: well, it still doesn't work for me, fresh setup and all... mind sharing your config, please?
From /etc/corosync:
# Load the Pacemaker Cluster Resource Manager
Thank you for your help
PS. I noticed that there were some commits to the git repo yesterday, though I doubt they would have broken things.
I bumped corosync-2.1.0 today and got it working with pacemaker-1.1.8.
FYI, upstream confirmed that starting from corosync-2.x, plugins aren't supported anymore, so the service.d/pacemaker file is obsolete.
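For context, the now-obsolete file is the corosync 1.x plugin stanza that loaded pacemaker; under 2.x the plugin mechanism is simply gone. It looked roughly like this:

```
# /etc/corosync/service.d/pacemaker -- obsolete with corosync >= 2.x
service {
	# Load the Pacemaker Cluster Resource Manager
	name: pacemaker
	ver: 1
}
```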
Would you mind trying it out, maybe using current pacemaker-1.1.7 or the 1.1.8 ebuild in bug #439762?
corosync 2.1.0 won't work with crypto_cipher and crypto_hash set to anything other than none.
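In other words, until that's fixed, the totem section has to disable crypto explicitly. A sketch of the relevant corosync.conf lines (all other totem settings omitted):

```
# /etc/corosync/corosync.conf (fragment)
totem {
	version: 2
	# anything other than "none" here breaks corosync 2.1.0
	crypto_cipher: none
	crypto_hash: none
}
```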
+*corosync-2.3.1 (29 Aug 2013)
+ 29 Aug 2013; Ultrabug <email@example.com> -corosync-2.3.0-r1.ebuild,
+ version bump, drop old
+*pacemaker-1.1.10 (29 Aug 2013)
+ 29 Aug 2013; Ultrabug <firstname.lastname@example.org> -pacemaker-1.1.10_rc1.ebuild,
+ version bump, drop old
Maybe they're a good combination; feedback is needed.
I will try to take a look at this on Saturday night... however, I have to deal with OpenRC changing their network scripts and not accepting my bonding patches at bug #428604 before I can construct a reasonable cluster test environment.
Just to ask..
Anyone working on this?
(In reply to Chan Min Wai from comment #16)
> Just to ask..
> Anyone working on this?
Well, to me this bug should be closed now, as we have a working 2.x version in the tree which was unmasked last week.
Is anything in particular still problematic for you, mate?
From the last reply and from the state of the tree it looks like this issue should be closed. If I'm wrong please re-open.