The gromacs-3.3.1-r1.ebuild ignores the mpi USE variable. Actually, this ebuild is a complete mess.

Reproducible: Always

Steps to Reproduce:
1. set mpi USE variable
2. emerge gromacs (3.3.1-r1)

Actual Results:
configures gromacs without --enable-mpi
builds without mpi support.

Expected Results:
should configure gromacs with the --enable-mpi option
gromacs should be built with mpi support.
quick fix is to place

if use mpi ; then
	myconf="$myconf --enable-mpi"
fi

before

econf ${myconf} --enable-float || die "configure single-precision failed"
look further, mpi is not ignored:

if use mpi ; then
	cd "${WORKDIR}"/${P}-single-mpi
	econf ${myconf} --enable-float --enable-mpi --program-suffix=_mpi \
		|| die "failed to configure single-mpi mdrun"
	emake mdrun || die "failed to make single-precision mpi mdrun"
fi
You're right that the ebuild does, in fact, do something with the mpi USE flag. The problem is that it does not do the right thing with it. Gromacs is built, but without mpi support. Give it a try and see for yourself. The --enable-mpi needs to be part of the first call to econf.
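For illustration, a minimal sketch of what that first call could look like (assuming the ebuild's existing ${myconf} variable; use_enable expands to --enable-mpi or --disable-mpi depending on the flag):

# sketch only: fold the mpi USE flag into the main single-precision configure
econf ${myconf} \
	--enable-float \
	$(use_enable mpi) \
	|| die "configure single-precision failed"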
> Actually, this ebuild is a complete mess.

I'm glad you approve...and thanks for your support. Checking mpi right now...
Okay I'm using lam_mpi and it works fine here.

~/methanol $ mpirun -c 2 /usr/bin/mdrun -v

               NODE (s)   Real (s)      (%)
       Time:     19.250     20.000     96.2
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:     23.305    866.122     84.165      0.285

AND

mdrun_mpi -V

               NODE (s)   Real (s)      (%)
       Time:     19.080     20.000     95.4
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:     25.070    931.741     90.566      0.265

Either one uses multiple CPUs. What MPI implementation are you using?
You're right. That was probably an overstatement. It's only a bit messy. ;-) Thank you for checking. I'm using mpich2 on amd64. The snafu is that with mpi enabled it puts mpi support in a separate executable (mdrun_mpi)... This can be problematic when you have programs and scripts that depend on 'mdrun' supporting mpi.
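For what it's worth, a quick generic way to see which MPI library (if any) a given binary actually links against, assuming dynamic linking and the default install paths:

# an MPI-enabled build should list a libmpi/libmpich line here
ldd /usr/bin/mdrun | grep -i mpi
ldd /usr/bin/mdrun_mpi | grep -i mpi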
I think in that case you should modify your scripts...other than that, patches are welcome :)
Thank you very much. I'll modify my scripts for now. It seems like an extra [potentially unexpected?] feature to keep the serial code around even when the mpi USE flag is set. Maybe it would be better to flip it around, i.e., have mdrun (mpi version) and mdrun-serial (serial version)? I guess if there are other users with a similar dependency then it might make sense for a patch. I've noticed that inexperienced cluster users often distribute complicated scripts for launching their jobs, processing results, etc. Renaming mdrun can be really problematic in these cases. For example, the users don't know tricks like:

perl -pi -e 's/mdrun/mdrun_mpi/g' ...

In such cases, it ultimately becomes more work for the Gentoo admin.
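As a stopgap on my side, something like the following wrapper also works (just a sketch; it assumes /usr/local/bin precedes /usr/bin in PATH, and the paths are only illustrative):

# let existing scripts that call "mdrun" transparently pick up the MPI build
cat > /usr/local/bin/mdrun <<'EOF'
#!/bin/sh
exec /usr/bin/mdrun_mpi "$@"
EOF
chmod +x /usr/local/bin/mdrun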
I hear ya.... I think most people like it the way it is...but you could easily create a local overlay and use an ebuild that works the way you want it to.
Created attachment 123064 [details] gromacs-3.3.1-r1.ebuild (mpi default when mpi USE flag set)
Created attachment 123065 [details, diff] gromacs-3.3.1-r1.ebuild.patch (mpi default with mpi USE flag)
Thanks man. Fair enough. :-) Just in case it could be helpful to someone else... I've posted gromacs-3.3.1-r1.ebuild, an ebuild setting the mpi-enabled gromacs as the default when the mpi USE flag is set. The serial gromacs is called mdrun_serial. Also I've posted gromacs-3.3.1-r1.ebuild.patch: a patch that neatens things up a drop, but also has mpi-enabled gromacs as default. The serial gromacs has the _serial suffix.
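Roughly, the idea in both is along these lines (a sketch of the approach only, not the literal attachment contents; the ${P}-single-serial directory name is just a placeholder):

if use mpi ; then
	# build a separately suffixed serial mdrun first...
	cd "${WORKDIR}"/${P}-single-serial
	econf ${myconf} --enable-float --program-suffix=_serial \
		|| die "failed to configure serial mdrun"
	emake mdrun || die "failed to make serial mdrun"

	# ...then configure the main single-precision tree with --enable-mpi,
	# so the plain mdrun is the parallel one
	cd "${S}"
	econf ${myconf} --enable-float --enable-mpi \
		|| die "configure single-precision mpi failed"
fi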
Cool, thanks!
I kinda like the idea of switching around which binaries get tagged. If you build with USE=mpi, you presumably have intent to cluster it, and clustering is arguably your primary application (unless you're just testing, in which case you can recompile).
(In reply to comment #9)
> I hear ya....
> I think most people like it the way it is...but you could easily create a local
> overlay and use an ebuild that works the way you want it to.

We talked a bit on IRC. It's more than just "most people like it." It's that this is pretty standard for building with MPI support, not just in Gentoo. Google for:

gromacs _mpi

and watch how much stuff pops up.
> We talked a bit on IRC. It's more than just "most people like it." It's that
> this is pretty standard for building with MPI support, not just in Gentoo.

Then it sounds like you have made a very reasonable decision... Thanks. :-)