From: John Stone (
Date: Fri Sep 01 2017 - 13:12:19 CDT

  I'll give a quick summary of VMD's parallelism architecture and
the relative value of the various approaches worth being aware of:

 - An increasing fraction of the core analytical algorithms in VMD
   have been made multithreaded over time, but this is primarily of
   benefit when working with very large structures (millions of atoms).
   The multithreaded parallelism in VMD uses multiple CPU cores to perform
   a single calculation, e.g., electrostatic field calculation for a
   single trajectory frame.

 - The GPU-accelerated algorithms in VMD are akin to the multithreaded
   algorithms in that they currently parallelize individual analysis
   computations. They can be substantially faster than the CPU algorithms,
   but they won't help your scripting performance at all since no
   Tcl or Python runs on the GPU at present.

 - The main limitation with the multithreading/GPU approaches above is that
   they give you no speed benefit for things that are done purely
   via scripting. So, if you have a script that does a few simple things
   with atom selections, and then does a bunch of number crunching in
   either Tcl or Python, over thousands or millions of trajectory
   frames, the built-in multithreading/GPUs currently get you little
   or no speedup.

 - The value of the MPI-enabled compilations of VMD and the
   "parallel xxx" commands is that you can run multiple entire
   VMD instances (one VMD process per MPI rank), which each have
   their own Tcl/Python interpreters, and the built-in parallel
   work scheduler can distribute work among all of those VMD processes,
   thereby giving you coarse-grained parallelism at the
   Tcl/Python scripting level as well as fine-grained parallelism
   for specific analysis computations among multiple CPU cores and GPUs.
   It is very easy, using the "parallel xxx" commands, to write scripts
   that will run on hundreds or thousands of compute nodes to do
   many common trajectory analysis tasks. The biggest problem at
   present is just getting an MPI-enabled VMD up and running on your
   computer, since it currently requires recompilation from source code.
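
To make the coarse-grained pattern concrete, here is a minimal Tcl sketch of
the kind of script the "parallel xxx" commands enable. The file names
(system.psf, traj.dcd) and the analyze_frame procedure are hypothetical, and
the exact callback signature of "parallel for" should be checked against the
VMD User's Guide for your build; this assumes an MPI-enabled VMD where every
rank loads the trajectory and the built-in scheduler hands each rank a
disjoint set of frame indices:

```tcl
# Hypothetical per-frame analysis callback; each MPI rank is a full
# VMD process with its own Tcl interpreter, so ordinary selection and
# measure commands work unchanged inside it.
proc analyze_frame { frame userdata } {
  set sel [atomselect top "protein" frame $frame]
  set com [measure center $sel weight mass]
  $sel delete
  puts "rank [parallel noderank]: frame $frame COM $com"
}

# Every rank loads the same structure and trajectory, then the built-in
# scheduler distributes frames 0..N-1 among all ranks.
mol new system.psf
mol addfile traj.dcd waitfor all
parallel for 0 [expr {[molinfo top get numframes] - 1}] analyze_frame {}
parallel barrier
```

Results scattered across ranks can then be collected with commands such as
"parallel allgather" before rank 0 writes the final output.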

The reason that the standard VMD builds don't include MPI by default
is that MPI lacks a standard binary runtime interface, and so requires
that applications be compiled from source code on the actual target
hardware where they will be run. To make the deployment of VMD
with MPI easier, I have been considering making the MPI aspects of
VMD operate using a plugin type interface and shipping the MPI-related
plugin source code along with VMD so that an existing VMD binary for
Linux/Windows/MacOS could just load such an MPI plugin at runtime,
and you wouldn't have to recompile the whole program, which as many
here already know is quite a task, particularly for beginners.

Best regards,
  John Stone

On Fri, Sep 01, 2017 at 01:34:59PM +0200, Ajasja Ljubetič wrote:
> Hi!
> You're not giving a whole lot of details, so my advice will be generic as
> well. Usually I split my trajectories into 1 ns chunks and then I use
> [1]gnu parallel to run analysis scripts in parallel.
> If you mean the parallel [2]Tcl command, you have to compile VMD with MPI
> support and have MPI installed. But I guess this is meant for large
> clusters of computers not just a four core processor (or two, in case of
> hyper-threading).
> Best,
> Ajasja
> On 1 September 2017 at 12:59, Saikat Pal
> <[3]> wrote:
> Dear all,
> I want to install VMD in parallel to analyze data. What should I do? I
> have already installed VMD in serial mode. On my computer, 4 processors
> are available. Please help me out.
> Thanks And Regards,
> Saikat
> References
> Visible links
> 1.
> 2.
> 3.

NIH Center for Macromolecular Modeling and Bioinformatics
Beckman Institute for Advanced Science and Technology
University of Illinois, 405 N. Mathews Ave, Urbana, IL 61801
Phone: 217-244-3349