VMD-L Mailing List
From: Aleksandr Kivenson (kivenson_at_brandeis.edu)
Date: Wed Nov 09 2011 - 14:41:55 CST
- Next message: Axel Kohlmeyer: "Re: How to parse very large trajectories?"
- Previous message: sajad falsafi: "Re: rlwrap problem on"
- Next in thread: Axel Kohlmeyer: "Re: How to parse very large trajectories?"
- Reply: Axel Kohlmeyer: "Re: How to parse very large trajectories?"
- Reply: Bogdan Costescu: "Re: How to parse very large trajectories?"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
I run simulations that generate trajectory files tens of gigabytes in
size. I use the VMD Python interface to analyze them, and currently I
iterate over them with this code:
    molecule.read(molid, 'trr', trrName, beg=firstToLoad, end=lastToLoad,
                  waitfor=(lastToLoad - firstToLoad + 1))
The variables firstToLoad and lastToLoad are set by a loop so that the
entire trajectory is divided into chunks, each of which fits in memory.
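For concreteness, the surrounding loop looks roughly like this (the
structure file name, frame counts, and the analyze() step are placeholders
standing in for my actual setup):

    # run inside VMD's embedded Python interpreter
    import molecule   # one of VMD's built-in Python modules

    trrName   = 'traj.trr'   # placeholder trajectory file
    chunkSize = 1000         # frames per chunk (placeholder)
    numFrames = 100000       # total frames in the trajectory (placeholder)

    # load the structure first so the trajectory has something to attach to
    molid = molecule.load('gro', 'system.gro')   # placeholder structure file

    for firstToLoad in range(0, numFrames, chunkSize):
        lastToLoad = min(firstToLoad + chunkSize - 1, numFrames - 1)
        molecule.read(molid, 'trr', trrName, beg=firstToLoad, end=lastToLoad,
                      waitfor=(lastToLoad - firstToLoad + 1))
        analyze(molid)              # per-chunk analysis (placeholder)
        molecule.delframe(molid)    # drop this chunk before the next read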
The problem with this approach is that VMD apparently reads a .trr file
from the very beginning every time this command is called, so reading the
tenth chunk of a file takes ten times as long as reading the first (the
nine preceding chunks must be read through again first). Overall, the time
to read a file scales with the square of the number of chunks, which means
that files composed of many chunks take a very long time to analyze.
Does anyone have a way to read through very large .trr files in an amount
of time that scales linearly with the file size?
Thanks,
Alex