I have hit a sandbox access violation during the configure phase with all petsc versions currently in Portage. The violation is triggered by the execution of mpiexec, but I suspect the real culprit is elsewhere: there are many bug reports for other packages describing the same issue, so some shared library is probably creating this mess.

During the configure phase I get:

...
TESTING: configureMPIEXEC from config.packages.MPI(/var/tmp/portage/sci-mathematics/petsc-3.16.0-r1/work/petsc-3.16.0/config/BuildSystem/config/packages/MPI.py:185)
 * ACCESS DENIED:  open_wr:      /dev/dri/by-path/pci-0000:00:02.0-render
 * ACCESS DENIED:  open_wr:      /dev/dri/by-path/pci-0000:00:02.0-render
...

and at the end I get:

>>> Source configured.
 * ----------------------- SANDBOX ACCESS VIOLATION SUMMARY -----------------------
 * LOG FILE: "/var/tmp/portage/sci-mathematics/petsc-3.16.0-r1/temp/sandbox.log"

VERSION 1.0
FORMAT: F - Function called
FORMAT: S - Access Status
FORMAT: P - Path as passed to function
FORMAT: A - Absolute Path (not canonical)
FORMAT: R - Canonical Path
FORMAT: C - Command Line

F: open_wr
S: deny
P: /dev/dri/by-path/pci-0000:00:02.0-render
A: /dev/dri/by-path/pci-0000:00:02.0-render
R: /dev/dri/renderD128
C: mpiexec --oversubscribe -n 1 printenv

F: open_wr
S: deny
P: /dev/dri/by-path/pci-0000:00:02.0-render
A: /dev/dri/by-path/pci-0000:00:02.0-render
R: /dev/dri/renderD128
C: mpiexec --oversubscribe -n 1 /var/tmp/portage/sci-mathematics/petsc-3.16.0-r1/temp/petsc-0j2549oq/config.packages.MPI/conftest
 * --------------------------------------------------------------------------------

I have tested all the other petsc versions with the same result.

This is the current USE configuration:

[ebuild   R    ] sys-cluster/openmpi-4.1.4-r1::gentoo  USE="cxx fortran ipv6 java romio -cma (-cuda) -libompitrace -peruse -valgrind" ABI_X86="(64) -32 (-x32)" OPENMPI_FABRICS="(-knem) (-ofed)" OPENMPI_OFED_FEATURES="(-control-hdr-padding) (-dynamic-sl) (-rdmacm) (-udcm)" OPENMPI_RM="(-pbs) (-slurm)" 0 KiB
[ebuild     U ~] sci-mathematics/petsc-3.19.1::gentoo [3.17.1::gentoo] USE="X boost examples%* fftw fortran hdf5 metis mpi mumps scotch threads -afterimage -complex-scalars -debug -hypre -int64 -superlu" 0 KiB

As you can see, in the past I was able to merge petsc (against openmpi-4.1.2 back then), but now it doesn't work anymore. Unfortunately I cannot tell you what the configuration was at the time. I have already tried downgrading openmpi, without success.

Adding an "addpredict /dev/dri/renderD128" line to the petsc ebuild solves the problem, but it's only a workaround; see the sketch below.
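For reference, this is roughly what my local patch looks like; a minimal sketch, assuming the stock src_configure is otherwise left unchanged (the comment about hwloc's GPU discovery is only my guess at the cause, not something I have confirmed):

src_configure() {
	# mpiexec (presumably via hwloc's GPU discovery, or some other
	# component probing the render node) tries to open
	# /dev/dri/renderD128 for writing during PETSc's configure checks.
	# addpredict puts the path on the sandbox "predict" list: writes
	# to it are still denied, but silently, without a logged violation
	# or an aborted build.
	addpredict /dev/dri/renderD128

	# ...the rest of the original src_configure body stays as-is...
}

I used addpredict rather than addwrite on purpose: the build has no legitimate reason to actually write to the GPU node, so silently denying the access seems safer than granting it.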
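As an aside, the failing check can presumably be reproduced outside Portage with the sandbox wrapper from app-misc/sandbox (this assumes the interactive environment loads the same Open MPI components as the build environment, which I have not verified):

# Run the same command PETSc's configure runs, inside the sandbox;
# if the problem is reproducible, the same ACCESS DENIED lines for
# /dev/dri/by-path/pci-0000:00:02.0-render should show up here too.
sandbox "mpiexec --oversubscribe -n 1 printenv"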