Pittsburgh Supercomputing Center 

Advancing the state-of-the-art in high-performance computing,
communications and data analytics.


NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. It is file-compatible with AMBER, CHARMM and X-PLOR.

Installed on: blacklight.

Please read the license agreement.

Blacklight Version

Improving Performance Using Hyperthreading

NAMD performance can often be improved by using hyperthreading, which runs two threads on each core. We have seen significant improvements in some test cases, and recommend that you always use it.

To use hyperthreading, replace $PBS_NCPUS with $PBS_HT_NCPUS in the mpirun command. This runs two threads on each core: the number of processes passed to mpirun is double the number of cores requested in the #PBS -l ncpus directive.
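As a sketch, a job script fragment using hyperthreading might look like the following (the input and log file names prog.namd and prog.log are illustrative):

```shell
#!/bin/bash
# Request 64 cores; with hyperthreading, $PBS_HT_NCPUS is 128,
# so mpirun starts two processes per core.
#PBS -l ncpus=64
#PBS -l walltime=1:00:00

mpirun -np $PBS_HT_NCPUS namd2 prog.namd > prog.log
```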

Bug: Passing -np 32 to mpirun

If the value that you pass to the mpirun command with -np is 32, your job will fail with no indication of what went wrong. Two situations produce -np 32, and both trigger this bug:

  1. You request 32 cores and do not use hyperthreading. The workaround is to use hyperthreading, which we recommend in any case: it avoids this bug and improves performance.
  2. You request 16 cores and use hyperthreading. The workaround is to define the MPI_MAPPED_STACK_SIZE variable in your job script before the mpirun command:
    #PBS -l ncpus=16
    export MPI_MAPPED_STACK_SIZE=64M   # for the bash shell
    setenv MPI_MAPPED_STACK_SIZE 64M   # for the C shell
    mpirun -np $PBS_HT_NCPUS ...

If the value you pass to mpirun with -np is not 32, do not set MPI_MAPPED_STACK_SIZE.
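For example, to use 32 cores with hyperthreading (a sketch; the input file name prog.namd is illustrative), -np becomes 64, so the bug is not triggered and no workaround is needed:

```shell
#PBS -l ncpus=32
# With hyperthreading, $PBS_HT_NCPUS is 64, so -np is not 32
# and MPI_MAPPED_STACK_SIZE should not be set.
mpirun -np $PBS_HT_NCPUS namd2 prog.namd > prog.log
```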


The executable is named namd2.

To use NAMD:

Currently, NAMD can be run only from the bash shell. We are working to make it available under all shells.

  1. Prepare a job script containing commands to:
    1. Set up the module command
    2. Load the NAMD module. First, check the available versions and choose which module to load:
      module avail namd
      The command
      module load namd
      loads the default version.
    3. Use an MPI command like:
      mpirun -np $PBS_HT_NCPUS dplace -s1 namd2 prog.namd > prog.log
      Note that this example uses hyperthreading, which will run two threads on each core. We recommend that you always do this.
  2. Submit the job script with a qsub command.
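Putting the steps above together, a complete job script might look like this sketch. The file names, walltime, and module-setup path are illustrative, not Blacklight-specific values; note that because this example requests 16 cores with hyperthreading, -np is 32 and MPI_MAPPED_STACK_SIZE must be set as described in the bug note above.

```shell
#!/bin/bash
#PBS -l ncpus=16
#PBS -l walltime=2:00:00

# Set up the module command (site-specific; this path is illustrative)
source /usr/share/modules/init/bash

# Load the default NAMD module
module load namd

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# 16 cores with hyperthreading gives -np 32, so the workaround
# for the -np 32 bug is required here.
export MPI_MAPPED_STACK_SIZE=64M
mpirun -np $PBS_HT_NCPUS dplace -s1 namd2 prog.namd > prog.log
```

Submit the script with qsub, e.g. "qsub myjob.sh".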