From: Aleksandr Kivenson
Date: Wed Nov 09 2011 - 14:41:55 CST

I run simulations that generate trajectory files which may be tens of
gigabytes in size. I use the VMD Python interface to analyze them, and
currently I iterate over them using this code:

    molecule.read(molid, 'trr', trrName, beg=firstToLoad, end=lastToLoad,
                  waitfor=(lastToLoad - firstToLoad + 1))

The variables firstToLoad and lastToLoad are set by a loop so that the
entire trajectory is divided into chunks, each of which fits in memory.
The problem with this approach is that VMD apparently reads a .trr file
from the very beginning every time this command is called, so reading
the tenth chunk of a file takes ten times as long as reading the first
chunk (the nine previous chunks must be read through as well), and the
overall time to read a file scales with the square of the number of
chunks. This means that files composed of many chunks take a very long
time to analyze.

Does anyone have a way to read through very large .trr files in an amount
of time that scales linearly with the file size?