I have dropped the ~hppa and ~mips keywords on johntheripper, as those arches do not have openmpi keyworded, and john doesn't work well with other MPI implementations. Please either keyword openmpi or place a block for john w/ USE=mpi in your package.use.mask, then re-add your keyword to john -r5.
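For example, the package.use.mask entry would be a one-liner along these lines (the exact profile path is my assumption; use whichever arch profile applies):

  # profiles/arch/hppa/package.use.mask
  # no keyworded MPI implementation on this arch yet
  app-crypt/johntheripper mpi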
Marked ~hppa. USE=mpi use.masked altogether for HPPA.
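For the record, the global route is just as short (profile path again assumed):

  # profiles/arch/hppa/use.mask
  # mask MPI support everywhere until an implementation is keyworded
  mpi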
I currently use the MPI patch with mpich2 instead of openmpi, as recommended by upstream; any reason openmpi was chosen over that?
aoz.syn: mpich2 didn't work properly on my ppc box (ppc64-32ul), while openmpi did. My test environment for johntheripper+MPI was PPC (4 cores) + amd64 (4 cores) + x86 (2 cores), launched via Torque; it exceeded 150K/sec against crypt-md5.
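For anyone wanting to reproduce a run like that, a minimal sketch of the Torque launch (the node layout, hash file name, and an OpenMPI built with Torque support are all assumptions, not the exact setup above):

  #PBS -l nodes=2:ppn=4+1:ppn=2
  cd $PBS_O_WORKDIR
  # OpenMPI's mpirun picks the node list up from the Torque job
  # environment, so no explicit hostfile is needed
  mpirun -np 10 john --incremental passwd.txt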
That's curious - I ran this particular patch against a 14-node set of JS20s (28 PPC970s) using mpich2 for nearly a year with no problems. They were all running a stripped-down (PXE-booted) installation of Gentoo (64/64). Regardless, maybe it would be fruitful to work out an MPI virtual, since there are several competing (but colliding) implementations that work.
I suspect it may be more that I was crossing architectures and endianness (deliberately, because I wanted to use all of my available CPUs). Better than the MPI virtual, there was a wrapper lib I saw a year or so ago that saved having to rebuild apps against a specific MPI implementation (just build the wrapper once per implementation instead).
I've just moved over to OpenMPI; even though bindshell.net recommends mpich2, I have no opinion. OpenMPI works well enough: from my POV, this bug can be closed.
mips - up to you when you get time - rekeyword >=johntheripper-1.7.2-r5 and its sys-cluster/openmpi dependency, or use.mask mpi.
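If you take the rekeyword route, ekeyword from gentoolkit-dev makes it quick (only john's version is shown; which openmpi ebuilds to keyword is up to you):

  ekeyword ~mips johntheripper-1.7.2-r5.ebuild
  # ...and likewise for the sys-cluster/openmpi ebuild(s)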
mpi use flag is masked on mips too. FIXED.