PETSc is a parallel/serial library for scientific computations. I submitted this ebuild to the gentoo-science ML, but somebody asked me to post it here to make it more easily accessible. Here is the message I sent to the ML: ############################ Hello, I was wondering how many people here make use of the PETSc library (http://www-unix.mcs.anl.gov/petsc/petsc-2/) and what their opinions about it are. I am currently trying to use it as the core of a small project, and I have tried to write an ebuild for it, which you will find attached. It uses mpich, atlas and lapack, and no strange additional flags. I have my own way of dealing with the atlas-blas inconsistency, and I have been out of the loop regarding the atlas-blas ebuild, so I guess this could be a possible source of problems, but modifying the DEPEND section is trivial. I wrote this some time ago, and I realize now that I never made it available for others to take advantage of; however, before submitting it to bugs.gentoo.org, I was wondering if somebody would be willing to give it a spin and offer comments/suggestions/enhancements (eh!). Best Regards ######################### I have put it in app-sci in my overlay, but maybe dev-libs or something similar is better. Best Regards
Created attachment 32960 [details] petsc-2.2.0.ebuild (New Version) To verify the whole blas/atlas dependency. Not tested with lam.
Created attachment 32972 [details, diff] petsc-2.2.0-r1.diff Patching the ebuild to have a correct description.
Created attachment 36631 [details] Working with new virtual/blas and virtual/lapack
Petsc 2.2.0 is old now; the ebuild needs to be bumped from -r1 to -r2. Adding the patch below.
Created attachment 41230 [details, diff] patch to pass from -r1 to -r2, just the SRC_URI changes Petsc 2.2.0 may still be needed by those who cannot update straight away to 2.2.1.
Created attachment 41233 [details, diff] Patch from 2.2.0 to 2.2.1 Patch from the original 2.2.0 ebuild to the new 2.2.1
Created attachment 41239 [details, diff] path-2.2.1_to_2.2.1-r1 This patch correctly handles the PETSC_DIR and PETSC_ARCH variables, and also puts the bin directory in the PATH. This way the changes are global.
It should be possible to use the mpi USE flag to control whether PETSc uses MPI or not. I will attach modifications to the ebuilds that I think will do this. Here's a bug: cd /opt/petsc-2.2.1 grep * -r -i -e /var/tmp Note that many things point to the temporary build directory! I couldn't even follow this simple tutorial: http://www-unix.mcs.anl.gov/petsc/petsc-2/documentation/exercises/compiling/index.html because it looks for libtool in /var/tmp/... I'm not yet familiar enough with PETSc to know how to fix this.
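The leak check described in the comment above can be automated. This is a hedged sketch, not part of any posted ebuild: the directory layout and file names are hypothetical stand-ins so the check can run without a PETSc install.

```shell
# Sketch of the /var/tmp leak check: scan an install tree for files that
# still reference the temporary build directory. All paths here are
# hypothetical; a real check would point at /opt/petsc-2.2.1.
set -eu

tree=$(mktemp -d)                      # stand-in for the install prefix
printf 'LIBTOOL = /var/tmp/portage/petsc/work/libtool\n' \
    > "$tree/variables"                # a file leaking the build dir
printf 'CC = gcc\n' > "$tree/clean"    # a file without the leak

# grep -r -l lists only the files whose contents still mention /var/tmp
leaks=$(grep -r -l '/var/tmp' "$tree")
echo "leaking files: $leaks"

rm -r "$tree"
```

An empty result would mean the install tree is clean of build-directory references.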
Created attachment 43473 [details] petsc-2.2.1-r2 ebuild This has the patches from above applied too. I will attach a patch to 2.2.1-r1 as well.
Created attachment 43474 [details, diff] patch from 2.2.1-r1 to 2.2.1-r2 same as my ebuild but as a patch
Created attachment 43477 [details] proposed petsc-2.2.1-r3 Tried to address the problem raised by Mr. Macdonald. Some minor changes, like the location where the library is installed, plus some notes about an imperfect installation that makes libtool complain. The problem is seen in other packages as well, so it is not addressed for the moment.
Created attachment 43557 [details] cbm-petsc-2.2.1-r3.ebuild (1) The first line of the RDEPEND should be: RDEPEND="mpi? ( sys-cluster/mpich ) (if I use -mpi then portage shouldn't try to build mpich). (2) I added the following lines to src_install and commented out the warning at the end: # fix the libtool .la files LAOBJS=`find . | grep ".la$"` dosed -i -e "s:${WORKDIR}:/opt:g" ${LAOBJS} # fix broken libtool executable stuff dosed -i -e "s:${SHELL} ${top_builddir}/libtool:`which libtool`:g" bmake/linux_local/variables dosed -i -e "s:/bin/sh /opt/petsc/libtool:`which libtool`:g" bmake/linux_local/petscmachineinfo.h The .la stuff is fine, but the changes to petscmachineinfo.h and variables worry me: somewhere earlier in the build process something went seriously wrong. I may rebuild with --with-libtool=0 and see what happens. (3) Add sys-devel/libtool to RDEPEND and sys-apps/sed to DEPEND. (4) Is it FHS compliant to put petsc in /opt? (5) I think the "X" USE flag should control --with-x=0 or 1. Letting these things autodetect results in funny situations: I emerge Xorg, then emerge petsc, then unmerge Xorg, but petsc won't run because it's linked against Xorg and this information isn't in the dependency information. This could actually be a problem for people building petsc into a binary package on a machine with X and then using that binary package to install on the (possibly X-less) nodes of a cluster.
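The ".la path fix" from point (2) above uses the ebuild helper dosed; the same substitution can be sketched in plain shell. This is an illustrative stand-alone version: WORKDIR and the file layout are hypothetical, and a real ebuild would operate on the image directory instead of a temp dir.

```shell
# Plain-shell sketch of the libtool-archive path fix from point (2):
# rewrite the build-directory prefix recorded in each .la file to the
# final install prefix. Paths are hypothetical examples.
set -eu

WORKDIR=/var/tmp/portage/petsc/work   # stand-in for portage's WORKDIR
dir=$(mktemp -d)
cat > "$dir/libpetsc.la" <<EOF
libdir='$WORKDIR/lib'
EOF

# the same substitution the dosed call performs, applied per .la file
find "$dir" -name '*.la' -exec sed -i "s:$WORKDIR:/opt:g" {} +

fixed=$(cat "$dir/libpetsc.la")
echo "$fixed"
rm -r "$dir"
```

After the substitution, libtool consumers see the installed prefix rather than the vanished build tree.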
Sorry, should point out (1), (2), (3), (5) are things I did in the ebuild. (4) is a request for comments.
Created attachment 43562 [details] cbm-petsc-2.2.1-r3.ebuild Sorry, the earlier one was b0rked.
Created attachment 43565 [details] cbm-petsc-2.2.1-r3.ebuild Sorry, just found out that you shouldn't put sed and stuff like that in RDEPEND (see bug #25335, comment #31).
About comment (1) in #12: the ebuild is made to depend explicitly on mpich rather than LAM-MPI. The reason is that when I started tinkering with PETSc (2.1.X), installing it with mpich was much easier than with LAM, and the ROMIO extension of LAM was causing problems with other packages. Now I know of another person using PETSc with LAM, so I guess it could be made to depend on it. I am in favor of a virtual-mpi version. There are some packages (fftw-2) that could be made to compile with mpich even though their ebuilds depend on LAM. One last thing: the installation of PETSc is indeed not so simple. The easiest way would be to compile in the place where the archive is decompressed, but of course this is not what we want.
I agree with you, and probably a virtual mpi would be a good thing. However, I think you're missing the point of my comment (1). PETSc can be built *WITHOUT ANY* mpi. If I have -mpi as a USE flag then it *SHOULD NOT* depend on mpich (or any mpi library, for that matter). I agree that packages which build inside their own trees are difficult to support in Gentoo. Bug #25335 is another culprit, and probably anything else built with CMake as well (ITK, for example).
Absolutely. You have to excuse me; the mpi flag is embedded in my thinking process.
"New version" is a bit misleading for a package not in portage.
According to http://www.gentoo.org/doc/en/ebuild-submit.xml I used New Package, exactly because it was not in portage. Is there another nomenclature I am not aware of?
Attaching an ebuild for PETSc 2.3.0. They changed some things: BOPT, they say, is not functional anymore (a pity, because I was using it in my makefiles), and it seems it is now possible to use single and "matsingle" precision. I have no time to test it now, since I cannot update my main code to 2.3.0 while someone else is changing too many other things; I don't want to change anything under his feet. In any case, you now need to create two different architectures for debug and optimized builds, no longer one arch with libg and libO inside. Honestly, I liked it better before. Anyway, here it is; if you can, check the warning messages to avoid surprises. I think it is not quite safe yet to use a precision different from single. Of course I could be completely wrong, and I await the intercontinental tomatoes with open arms. P.s. the patches are now included in the main petsc.tar.gz, so that when a new patch version is available, you can just run emerge petsc again.
Created attachment 63512 [details] petsc-2.3.0 ebuild The ebuild works and installs correctly; I still have not tested the installed suite besides its own tests.
Created attachment 75640 [details] sci-libs/petsc-2.3.0.ebuild I wrote this ebuild without knowing another existed... Tested with a custom mpich, the gentoo lam-mpi and the gentoo atlas blas/lapack (the PETSC_MPI_DIR was helpful for our custom mpich), on a 32-node cluster and on several workstations. The "without MPI" option, fortran and X11 are not tested, though, since I only have some parallel C code. If somebody could give these a try, it would be appreciated. There is some aggressive cleaning in src_install, which keeps only the required stuff (for me, at least): * the includes (only C, or with fortran) * the library (only C, or with fortran) * the build system for the linux-gnu arch (bmake) * optionally, the doc, all together in /usr/share/doc I guess it could be a nice thing to integrate features from the ebuild already submitted: the multi-BOPT build stuff is something new for me...
I guess you could sed s/BOPT/PETSC_ARCH/ my previous comment... I was also wondering (if there is some PETSc guru in here): Is it possible to remove the bmake stuff and have the libs in /usr/lib and the includes in /usr/include? Has anybody tried this approach?
(In reply to comment #24) > I was also wondering (if there is some PETSc guru in here) : > Is it possible to remove the bmake stuff, and have the libs in /usr/lib, and > the includes in /usr/include? Has anybody tried this approach? Not really; I guess it should be asked upstream. They have just opened some public mailing lists (http://www-unix.mcs.anl.gov/petsc/petsc-2/miscellaneous/mailing-lists.html); maybe the question can be recast there. In the coming days I will try to find time to look at your ebuild and see if it is possible to merge the good things of both versions into a new one.
Created attachment 101485 [details] sci-libs/petsc-2.3.2-p5 ebuild Taking the ebuild for 2.3.0, I have created an ebuild for the latest version which is 2.3.2-p5. Attached is the ebuild that I have modified. Please look it over and see if there is anything that I have done completely wrong.
Created attachment 101486 [details] sci-libs/petsc-2.3.2-p5 ebuild Sorry about the extra post, the ebuild was detected incorrectly as application/octet-stream instead of plain text.
*** Bug 82055 has been marked as a duplicate of this bug. ***
*** Bug 154490 has been marked as a duplicate of this bug. ***
Created attachment 101826 [details] PETSc 2.3.5-p5 ebuild This ebuild is an improvement over the previous one I submitted. This, along with the patch in the next post, builds no matter what FEATURES a user has enabled, unlike the previous one.
Created attachment 101827 [details, diff] patch for sci-libs/petsc-2.3.5_p5 This is a patch that is needed for the ebuild in the previous post. It removes the makefile check that the user doing the configure step isn't root or using sudo.
Hi, I am trying to compile PETSc and I tried out tommoyer's ebuild for 2.3.5. Whatever happened in the meantime (a rollback by the developers?), the current version of PETSc actually seems to be 2.3.2-p8, not 2.3.5. The ebuild unfortunately downloads a file with no version number in its name, and is vulnerable to further modifications on the developers' side. I'll post a new ebuild if I manage to get it working...
Hi, I modified the ebuild and patch and it's working for me (amd64): http://compel.bu.edu/~nuno/gentoo/sci-libs/petsc/ You will have to download the ebuild and the two patches inside files/
Is there some way to get this into an overlay? I just came to add my own ebuild and patch fix -- I had forgotten to cc myself on the bug, and a whole slew of similar fixes came in. I'll pass up on adding to the noise -- but clearly the demand is there, and folks are putting some work into it.
Hi, We do plan to include this application in the tree; the currently posted petsc ebuild needs more work, together with a scalapack ebuild. Also, the link in comment #33 does not work. Anyone interested in beefing up the ebuild and deps?
Created attachment 203072 [details] New Ebuild I made my own ebuild for 3.0.0. I know at least that it works and installs into the /usr/local directory. I removed stuff that would make testing more difficult, but at least it's a good starting place. Enjoy.
This is a pretty old ebuild, but I moved the link above to http://aeminium.org/slug/software/gentoo/sci-libs/petsc/ in case it's useful.
Created attachment 246466 [details] New ebuild This new ebuild is based on the ebuilds posted before but uses the recent upstream version 3.1-p4. Additionally, the new option --with-single-library=1 is used, which causes all petsc functions to be built into a single library "libpetsc". With the USE flag "static-libs" it is possible to select whether a shared library libpetsc.so (flag disabled, default) or a static library libpetsc.a (flag enabled) will be built. The installed files are located in /usr/lib (/usr/lib64 on amd64) and /usr/include/petsc. Installation of the html documentation to /usr/share/doc/petsc-xyz/ is also supported via the doc USE flag. I tested this ebuild using the built-in mpiuni (mpi USE flag disabled) and it worked fine.
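The static-libs selection described above can be sketched as a small conditional. This is only an illustration of the logic, not the posted ebuild itself: the use() stub stands in for portage's real helper, and the flag state and option values are hypothetical examples consistent with the comment.

```shell
# Hedged sketch of mapping the static-libs USE flag to configure options.
# use() is a stand-in for portage's helper; USE content is hypothetical.
set -eu

USE="doc mpi"            # example flag set; static-libs is NOT enabled

use() { case " $USE " in *" $1 "*) return 0 ;; *) return 1 ;; esac; }

if use static-libs; then
    myconf="--with-shared=0"      # build static libpetsc.a
else
    myconf="--with-shared=1"      # build shared libpetsc.so (default)
fi
myconf="$myconf --with-single-library=1"
echo "$myconf"
```

With the example flag set above, the shared-library branch is taken.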
Created attachment 246468 [details, diff] fix detection of -fPIC for shared libraries patch needed by petsc-3.1_p4.ebuild Changes the order of the PIC flags tested by the configure script, since gcc only prints a warning if a flag is not recognized, which is interpreted as a (false) positive result. Without this patch, creation of shared libraries did not work.
Hi, this looks interesting. Would it help you to have this in the science overlay? I could 'proxy-maintain' it there and you might possibly get more feedback. A more active stance could be to contact the science team and get accounts to work on it in the overlay yourself.
Created attachment 246470 [details, diff] avoid gcc warnings when using mpiuni patch needed by petsc-3.1_p4.ebuild Fixes compiler warnings (right-hand statement without effect) when compiling packages that use petsc with the mpiuni implementation.
(In reply to comment #40) > Hi, this looks interesting. Would it help you to have this in the science > overlay? I could 'proxy-maintain' it there and you might possibly get more > feedback. A more active stance could be to contact the science team and get > accounts to work on it in the overlay yourself. I would consider the state of the ebuild to be testing or beta. I am working on the scientific charon-suite project (http://charon-suite.sf.net) and created a small overlay there (see the gentoo instructions on https://sourceforge.net/apps/trac/charon-suite/wiki/QuickBuildGuide#Dependencies). To increase the popularity of the petsc package it would be useful to place it in some overlay directly accessible by layman; the science or sunrise overlays would be nice. Feedback and test results are welcome.
Hi! The ebuild looks good except for some details. Let's try to figure them out: I have changed -) Made some "if" blocks shorter. In particular the $(get_libdir) function is a nifty trick to solve the amd64-conditional sed you had in src_install. -) The Python dep is unnecessary because it is in the system set. I could not fix the following issues: 1) with USE="mpi" configure fails for me: Could be a missing dependency? >>> Configuring source in /var/tmp/portage/sci-mathematics/petsc-3.1_p4/work/petsc-3.1-p4 ... =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== Warning: [with-mpi-dir] option is used along with options: ['with-cc', 'with-cxx'] This prevents configure from picking up MPI compilers from specified mpi-dir. Suggest using *only* [with-mpi-dir] option - and no other compiler option. This way - mpi compilers from /usr are used. =============================================================================== =============================================================================== WARNING! Compiling PETSc with no debugging, this should only be done for timing and production runs. All development should be done when configured using --with-debugging=1 =============================================================================== TESTING: CxxMPICheck from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:618) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- C++ error! MPI_Finalize() could not be located! ******************************************************************************* 2) I used tc_getC?? 
from the toolchain-funcs eclass to get the compiler. This is better than hardcoding "g++" and friends, however I think the makefile does not fully respect our compiler choices and makes some direct calls. At least my ccache seems to work only for some of the compilations. I don't really understand what's going on here... 3) Parallel build does not work (see the QA jobserver warnings)
Created attachment 247082 [details] slightly updated ebuild
(In reply to comment #43) > 3) Parallel build does not work (see the QA jobserver warnings) Thanks for your suggestions and improvements to the new ebuild. I would consider the missing ability to perform parallel builds an upstream bug, since the PETSc developers are working on switching to the CMake build system (see http://www.mcs.anl.gov/petsc/petsc-2/developers/index.html, section "Compiling and using the development copy (petsc-dev)"). I'm investigating further how to fix the mpi problem; perhaps it's possible to specify the mpi compiler by hand.
Created attachment 247148 [details] fixed version of the latest ebuild The configure errors with USE="mpi" are caused by missing library dependencies when using g++. Using -lmpi (as set by default) is not sufficient; -lmpi_cxx has to be added. If fortran is also used, -lmpi_f77 has to be added as well. This fixed ebuild works for me using openmpi; mpich has not been tested yet.
Testing with mpich2 (which is also a possible implementation of virtual/mpi), the ebuild fails because the library names differ and the libraries are located in different directories. Regarding the comments in the section "Any of Many Dependencies" of http://devmanual.gentoo.org/general-concepts/dependencies/index.html, it would be better if an additional USE flag were used to distinguish between openmpi and mpich2 and set the flags accordingly. The reason is that petsc will stop working if e.g. openmpi is unmerged and mpich2 is installed, or vice versa. Without the additional USE flag, using /usr/bin/mpicc and /usr/bin/mpicxx as C/C++ compilers fixes the problem of the different library names and paths, but still leaves link dependencies broken when the mpi implementation is changed.
Created attachment 247160 [details] ebuild independent of openmpi This ebuild now uses /usr/bin/mpicc and /usr/bin/mpicxx as C/C++ compilers if USE="mpi" is set. This way, no explicit linking against mpi libraries is necessary.
Ok, thanks for figuring these things out. The fortran linker was missing in your last version; I added it. I also added some elog info and committed the ebuild to the science overlay. Thanks for the effort you have put into this. If you have further patches just email me, or <ad> (even better!!!) get an account for the science overlay yourself. Just mail to sci@gentoo.org, and did I already mention our IRC channel on freenode :) </ad> The ebuilds you have in your overlay might get some more people to look at them this way! Last question: I bet we need the cxx and fortran USE flags on virtual/mpi to mirror what we have on petsc, right? I added that; correct me if I'm wrong.
Created attachment 247187 [details] minor fixes to last version - This version is in the science overlay now.
It is essential not to put ${myconf} into double quotes at the configure step, since quoting makes the shell pass all the options as one single argument, so they end up embedded in the wrong place (here everything lands inside the --with-cc option): Starting Configure Run at Tue Sep 14 12:20:11 2010 Configure Options: --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-cc="/usr/bin/mpicc --with-cxx=/usr/bin/mpicxx --with-fc=/usr/bin/mpif77 --with-mpi=1 --with-mpi-compilers=1 --with-X=1 --with-shared=1 --with-64-bit-indices=1 --with-fortran=1 --with-debugging=1" CFLAGS="-march=native -pipe" CXXFLAGS="-march=native -pipe" LDFLAGS="-Wl,-O1 -Wl,--as-needed" --with-windows-graphics=0 --with-matlab=0 --with-python=0 --with-clanguage=cxx --with-single-library=1 --with-petsc-arch=linux-gnu-cxx-debug --with-precision=double --with-blas-lapack-lib="-llapack -lblas -lpthread -lcblas -latlas " The following fixes the problem: --- a/sci-mathematics/petsc/petsc-3.1_p4.ebuild +++ b/sci-mathematics/petsc/petsc-3.1_p4.ebuild @@ -88,7 +88,7 @@ src_configure(){ myconf="${myconf} --with-debugging=0" fi - python "${S}"/config/configure.py "${myconf}" \ + python "${S}"/config/configure.py ${myconf} \ CFLAGS="${CFLAGS}" CXXFLAGS="${CXXFLAGS}" LDFLAGS="${LDFLAGS}" \ --with-windows-graphics=0 --with-matlab=0 --with-python=0 \ --with-clanguage="${mylang}" --with-single-library=1 \
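The quoting bug reported in the comment above comes down to shell word splitting, which a tiny stand-alone demo can show. This is a generic illustration, not PETSc-specific; the option strings are made up.

```shell
# Minimal demonstration of the quoting bug: quoting ${myconf} passes
# all options as ONE argument to the called program, so the first
# option swallows the rest. Unquoted, the shell word-splits normally.
count_args() { echo $#; }

myconf="--with-mpi=1 --with-x=1 --with-shared=1"   # hypothetical options

quoted=$(count_args "$myconf")     # one big argument
unquoted=$(count_args $myconf)     # three separate options
echo "quoted=$quoted unquoted=$unquoted"
```

This is why configure.py saw --with-cc= followed by the entire rest of the option string.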
You are absolutely right. I committed the fix and also added the license of petsc to the overlay. Thanks!
A new patchset (p5) of petsc was released on 27 Sep 2010. I copied the p4 ebuild and performed some cleanup. The result was added to the science overlay. The p5 ebuild seems to work fine but further tests and experiences are welcome.
(In reply to comment #47) > Testing with mpich2 (which is also a possible implementation of virtual/mpi), > the ebuild fails beacause the library names differ and are located in different > directories. > > Regarding the comments in the section "Any of Many Dependencies" of > http://devmanual.gentoo.org/general-concepts/dependencies/index.html, it would > be better, if an additional use-flag is used to distinguish between openmpi and > mpich2 and set the flags accordingly. The reason is that petsc will stop > working if e.g. openmpi is unmerged and mpich2 is installed or vice versa. > > Without the additional use-flag, using /usr/bin/mpicc and /usr/bin/mpicxx as > c/cxx-Compilers fixes the problem of the different library names and paths, but > leaves breaking of link-dependencies on changing the mpi implementation. I should mention that with mpich2-1.3 upstream switched to using mpiexec.hydra, and there isn't an mpiexec or mpirun with that MPI. So I had to add the following into the `if use mpi' section of the ebuild. This seems like a failure of the mpich2 guys to include proper symlinks for mpirun and mpiexec in 1.3. myconf[35]="--with-mpiexec=/usr/bin/mpiexec.hydra"
Sorry for the double post.
(In reply to comment #54) > I should mention with mpich2-1.3 someone switched to using mpiexec.hydra and > there isn't an mpiexec or mpirun with that MPI. So I had to add the following > into the `if use mpi' section of the ebuild. This seems like a failing on the > mpich2 guys to include a proper symlink for mpirun and mpiexec in 1.3. > > myconf[35]="--with-mpiexec=/usr/bin/mpiexec.hydra" Thank you for this report. I hope this symlink will be added in future versions of the mpich2-1.3 series ebuild. Setting this config option to mpiexec.hydra will break compilation when using e.g. openmpi. If this symlink remains missing, further checks or, as mentioned above, different USE flags for openmpi and mpich will have to be introduced.
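One of the "further checks" mentioned above could be a launcher probe that picks the first mpiexec variant actually present on PATH instead of hardcoding one name. This is a hedged sketch: pick_mpiexec is a hypothetical helper, and "sh" is added to the candidate list here only so the logic can be exercised on a machine without any MPI installed.

```shell
# Sketch of a runtime fallback for the missing-mpiexec problem: try each
# launcher name in order and use the first one found on PATH.
set -eu

pick_mpiexec() {
    for cand in "$@"; do
        if command -v "$cand" >/dev/null 2>&1; then
            echo "$cand"
            return 0
        fi
    done
    return 1
}

# "sh" stands in for a launcher guaranteed to exist on any POSIX system,
# so this demo runs even without mpiexec or mpiexec.hydra installed.
launcher=$(pick_mpiexec mpiexec mpiexec.hydra sh)
echo "would configure with --with-mpiexec=$launcher"
```

An ebuild using such a probe would work with both openmpi's mpiexec and mpich2's mpiexec.hydra without a hardcoded path.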
Petsc 3.1-p8 was released a few days ago. An ebuild for this new version has been added to the science overlay. The hypre preconditioner has also been updated and is now available in version 2.7.0b.
Created attachment 268551 [details, diff] patch to petsc-3.1_p8.ebuild, complex scalar support, tweaked header install paths Hi, Here is a patch to the current ebuild in the science overlay (petsc-3.1_p8.ebuild) that: - Adds a new USE flag, "complex-scalars", that sets the "--with-scalar-type=complex" configure script option (to use complex as opposed to the default real petsc scalars). - Tweaks the install paths of the header files to a hierarchy under /usr/include/petsc similar to what upstream intends(?) - Installs fortran header files if USE=fortran. - Installs some extra files that allow libraries such as slepc (http://www.grycap.upv.es/slepc/) to be built against petsc. This has allowed me to create an ebuild for the former. - Installs an environment file that sets PETSC_DIR and PETSC_ARCH for software that needs these set to build/run (i.e. slepc). The updated ebuild is available in the overlay found here: https://github.com/joshuar/MSI-Cluster-Gentoo-Overlay There is also an ebuild for slepc in that overlay, which was the primary reason for making the changes above. I'm not a mathematician, though this effort was done at the request of one. They have tested petsc (and slepc) installed from an ebuild with these modifications on a Gentoo machine and haven't reported any problems... Hope this might help someone. Regards, Joshua Rich
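The environment file mentioned above could look something like the following sketch. The file name, paths, and arch value are hypothetical examples; in a real ebuild the file would be installed with doenvd and picked up after env-update, rather than written to a temp dir as done here for illustration.

```shell
# Sketch of an env.d-style file exporting PETSC_DIR/PETSC_ARCH for
# dependent builds such as slepc. All values are illustrative.
set -eu

dir=$(mktemp -d)                      # stand-in for ${D}/etc/env.d
cat > "$dir/99petsc" <<'EOF'
PETSC_DIR=/usr/lib64/petsc
PETSC_ARCH=linux-gnu-cxx-opt
EOF

# after env-update, these variables would land in /etc/profile.env
contents=$(cat "$dir/99petsc")
echo "$contents"
rm -r "$dir"
```

With such a file in place, slepc's build system can locate the petsc installation without the user exporting anything by hand.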
(In reply to comment #59) > Created attachment 268551 [details, diff] > patch to petsc-3.1_p8.ebuild, complex scalar support, tweaked header install > paths Hi Joshua, thanks a bunch for your patch. It would be really cool if you could format it as a git patch against the science overlay, I could then commit it attributing the credit of authorship to you. If you are interested this could become a road to access to the science overlay for you. Thanks, Thomas
Created attachment 270465 [details] git format-patch for petsc-3.1_p8-r1.ebuild against current science overlay HEAD. Hi Thomas, Is this what you are after? Regards, Josh
(In reply to comment #61) > Created attachment 270465 [details] > git format-patch for petsc-3.1_p8-r1.ebuild against current science overlay > HEAD. committed to science overlay, thanks.
Based on the latest ebuild, I just committed a new ebuild for petsc-3.2-p6. It uses cmake during the build, so the parallel-build issue should be fixed now. Please report any problems with this new version.
Created attachment 325670 [details, diff] diff for petsc-3.3_p1.ebuild, prefix support As this seems to be some sort of tracking bug for the petsc ebuild in the science overlay: the following minor changes are necessary to build petsc in Gentoo Prefix.
(In reply to comment #64) > diff for petsc-3.3_p1.ebuild, prefix support > The following minor changes are necessary to build petsc in Gentoo Prefix. Thanks for reporting; your changes have been included in the science overlay. I moved the prefix handling to the petsc_with function for consistent handling.
PETSc is well maintained within the science overlay for years now. Therefore I close :-]