Becksteinlab / GromacsWrapper

GromacsWrapper wraps system calls to GROMACS tools into thin Python classes (GROMACS 4.6.5 - 2024 supported).

Home Page: https://gromacswrapper.readthedocs.org

How to use GROMACS wrapper with `mpirun`?

wehs7661 opened this issue

I was trying to use GromacsWrapper on the Bridges-2 supercomputer. On the login node, where mpirun is not required, GromacsWrapper worked just fine; the only difference was that the GROMACS commands were decorated with the suffix _mpi (for example, instead of gromacs.editconf, I had to use gromacs.editconf_mpi).

However, I found that on an interactive node, where mpirun -np xx is required to launch GROMACS commands, I got the following error when importing gromacs in a Python console:

[r488.ib.bridges2.psc.edu:62197] OPAL ERROR: Not initialized in file pmix2x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.
  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[r488.ib.bridges2.psc.edu:62197] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

This is the same error I would get if I used gmx_mpi directly instead of mpirun -np xx gmx_mpi on the interactive node. I guess the problem is that GromacsWrapper is not aware of the MPI launcher I am using, but I'm not sure how to deal with this problem.

In my case, I was trying to use commands of the MPI-enabled (CPU) build of GROMACS 2020.4 via GromacsWrapper. To enable mpirun, I had to execute module load openmpi/3.1.6-gcc10.2.0. I wonder if it is possible to use GromacsWrapper in general (not just MDrunner) with mpirun, or did I miss something in the documentation? I'm new to GromacsWrapper and I'm sorry if this is a naive question.

I believe you can just override the driver attribute to change how things are run.

gromacs.mdrun.driver = 'mpiexec -np 8 gmx_mpi'

and then you can call gromacs.mdrun as normal.
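
For example, a minimal sketch of that idea (the file names are just placeholders, and I have not verified that a multi-word driver string is accepted):

import gromacs

# point the wrapper at an MPI launch command instead of plain gmx_mpi
gromacs.mdrun.driver = 'mpiexec -np 8 gmx_mpi'

# keyword arguments are turned into command-line flags (-s, -deffnm)
gromacs.mdrun(s='topol.tpr', deffnm='md')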

@whitead Thanks for the reply! I got the error described above right after importing the package, and after that only the following modules and attributes were available:

>>> gromacs.
gromacs.AutoCorrectionWarning(     gromacs.enable_gromacs_warnings(
gromacs.BadParameterWarning(       gromacs.environment
gromacs.GromacsError(              gromacs.exceptions
gromacs.GromacsFailureWarning(     gromacs.fileformats
gromacs.GromacsImportWarning(      gromacs.filter_gromacs_warnings(
gromacs.GromacsValueWarning(       gromacs.g_gmx_mpi(
gromacs.LowAccuracyWarning(        gromacs.gmx_mpi(
gromacs.MissingDataError(          gromacs.less_important_warnings
gromacs.MissingDataWarning(        gromacs.logging
gromacs.NullHandler(               gromacs.os
gromacs.ParseError(                gromacs.release(
gromacs.UsageWarning(              gromacs.start_logging(
gromacs.absolute_import            gromacs.stop_logging(
gromacs.collections                gromacs.tools
gromacs.config                     gromacs.utilities
gromacs.core                       gromacs.warnings
gromacs.disable_gromacs_warnings( 

As you can see, a lot of modules are missing. Therefore, specifying the attribute via gromacs.mdrun.driver does not seem to work (I don't even have gromacs.mdrun_mpi.driver). Also, does this only allow running mdrun with MPI, or does it work for other GROMACS commands as well? Specifically, I was trying to use GromacsWrapper to launch commands for data analysis like gmx sasa or gmx trjconv. Without GromacsWrapper, the commands would be mpirun -np 1 gmx_mpi sasa, mpirun -np 1 gmx_mpi trjconv, etc.

@wehs7661 In your .gromacswrapper.cfg file, add a line append_suffix = no after you specify your tools and groups. Also, make sure you have only gmx_mpi in your tools. So, your .gromacswrapper.cfg will look something like this:

[DEFAULT]
qscriptdir = %(configdir)s/qscripts
templatesdir = %(configdir)s/templates
configdir = ~/.gromacswrapper

[Gromacs]
release = 2020.4
gmxrc = "/path/to/GMXRC"
extra =
tools = gmx_mpi
groups = tools extra
append_suffix = no

[Logging]
logfilename = gromacs.log
loglevel_console = INFO
loglevel_file = DEBUG
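
After editing the config, a quick sanity check (just a sketch; which names show up depends on what tool discovery finds on your machine) is to reimport the package and look for the wrapped tools:

import gromacs

# list a few of the expected tool attributes on the package namespace
print([name for name in dir(gromacs) if name in ("mdrun", "trjconv", "sasa", "editconf")])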

Hi @hgandhi2411, thanks a lot for the reply! I've changed my .gromacswrapper.cfg as you suggested. With the parameter append_suffix set to no, on the login node (where mpirun is not required), no suffix was appended anymore. However, on an interactive node where I must use mpirun -np xx, I still got the same error and some modules were still missing (as described in my previous comment), including mdrun and other common GROMACS commands. Do you know how I should solve the problem? As a reference, below is the content of my .gromacswrapper.cfg:

[DEFAULT]
qscriptdir = %(configdir)s/qscripts
templatesdir = %(configdir)s/templates
configdir = ~/.gromacswrapper

[Gromacs]
release = 2020.4
gmxrc = /jet/home/${USER}/pkgs/gromacs/2020.4/bin/GMXRC
tools = gmx_mpi
extra = 
groups = tools extra
append_suffix = no

[Logging]
logfilename = gromacs.log
loglevel_console = INFO
loglevel_file = DEBUG

I've also tried setting tools to mpirun -np 1 gmx_mpi, but that did not help.

I don't have an immediate answer. But you can try to get more debugging output right from the start by setting the environment variable GW_START_LOGGING=1 (can be set to any "true-ish" value like true, yes, sure, ...) before importing gromacswrapper, e.g.,

export GW_START_LOGGING=1
python

and then

import gromacs

Perhaps that gives a hint about what happens during the GROMACS tool discovery process.
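
If it is more convenient, the same environment variable can be set from within Python (plain os.environ, nothing GromacsWrapper-specific), as long as it happens before the import:

import os
os.environ["GW_START_LOGGING"] = "1"  # must be set before importing gromacs

import gromacs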

Personally, I haven't used GW in the context that you describe so I am not sure if we ever tested setting driver to mpirun gmx_mpi. Looking at the code

return [self.driver, self.command_name] + self.transform_args(*args, **kwargs)

I believe that driver must be a single callable executable. You could perhaps work around this limitation by creating an executable shell script mpirun_gmx_mpi:

#!/bin/bash

# forward all arguments unchanged to the MPI-launched gmx_mpi
mpirun -n 1 gmx_mpi "$@"

that just runs mpirun with gmx_mpi and passes all arguments through to gmx_mpi. Then set mpirun_gmx_mpi as the driver.
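
After making the script executable (chmod +x) and putting it on your PATH, something like the following might work (an untested sketch; the file names are placeholders and it assumes the tool attributes such as gromacs.trjconv get registered once the import succeeds):

import gromacs

# point an individual wrapped tool at the wrapper script instead of gmx_mpi
gromacs.trjconv.driver = 'mpirun_gmx_mpi'

# subsequent calls are then launched through mpirun;
# the input keyword supplies answers to interactive group prompts
gromacs.trjconv(f='traj.xtc', s='topol.tpr', o='out.xtc', input=('System',))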

It's not very flexible or pretty but perhaps it's a start.

Other solutions are welcome!