From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Mon Aug 21 2006 - 15:05:37 CDT
I can't tell much from just a segfault. Does the charm++ megatest work?
Does NAMD run on one processor? Is there *any* output at all?
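For reference, those sanity checks might look something like this (the mpi-linux-amd64 directory name is only a guess based on your platform; adjust paths to your own charm build tree):

    cd charm/mpi-linux-amd64/tests/charm++/megatest
    make pgm
    mpirun -np 4 ./pgm    # megatest should run to completion with no errors

    # then try NAMD on a single processor:
    mpirun -np 1 /ibrix/home/mfm42/opt/namd-IB/Linux-amd64-MPI/namd2 equil3_sys.namd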
My only comments from looking at your build script: on the charm
./build line, "-language charm++ -balance rand" shouldn't be needed and may
be harmful. Also, you shouldn't need "CHARMOPTS = -thread pthreads
-memory os" with the TopSpin MPI library. It looks like you're following
http://www.ks.uiuc.edu/Research/namd/wiki/?NamdOnInfiniBand but using the
VMI build instructions. Finally, please use the charm-5.9 source distributed
with the NAMD source code, since that is the stable tree.
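In other words, a plain MPI build of charm along these lines should suffice (this is only a sketch; the exact target name and compiler options for your machine are in the notes distributed with the NAMD source):

    cd charm-5.9
    ./build charm++ mpi-linux-amd64 -O -DCMK_OPTIMIZE=1
    # i.e. no "-language charm++ -balance rand" on the build line, and no
    # "CHARMOPTS = -thread pthreads -memory os" in the NAMD arch file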
-Jim
On Mon, 21 Aug 2006, Morad Alawneh wrote:
> Dear users,
>
> I successfully installed NAMD 2.6b1 on my system (the installation
> instructions are attached to this email), and the program was working
> without any problems.
>
> I followed the same procedure to install NAMD 2.6b2, but after submitting a
> job I received the following message in the error log file:
>
> bash: line 1: 31904 Segmentation fault /usr/bin/env MPIRUN_MPD=0
> MPIRUN_HOST=m4a-7-11.local MPIRUN_PORT=40732
> MPIRUN_PROCESSES='m4a-7-11i:m4a-7-11i:m4a-7-11i:m4a-7-11i:m4a-7-10i:m4a-7-10i:m4a-7-10i:m4a-7-10i:m4a-7-9i:m4a-7-9i:m4a-7-9i:m4a-7-9i:m4a-7-8i:m4a-7-8i:m4a-7-8i:m4a-7-8i:m4a-7-7i:m4a-7-7i:m4a-7-7i:m4a-7-7i:m4a-7-6i:m4a-7-6i:m4a-7-6i:m4a-7-6i:m4a-7-5i:m4a-7-5i:m4a-7-5i:m4a-7-5i:m4a-7-4i:m4a-7-4i:m4a-7-4i:m4a-7-4i:m4a-6-24i:m4a-6-24i:m4a-6-24i:m4a-6-24i:m4a-6-23i:m4a-6-23i:m4a-6-23i:m4a-6-23i:m4a-6-22i:m4a-6-22i:m4a-6-22i:m4a-6-22i:m4a-6-21i:m4a-6-21i:m4a-6-21i:m4a-6-21i:m4a-6-20i:m4a-6-20i:m4a-6-20i:m4a-6-20i:m4a-6-19i:m4a-6-19i:m4a-6-19i:m4a-6-19i:m4a-6-18i:m4a-6-18i:m4a-6-18i:m4a-6-18i:m4a-6-17i:m4a-6-17i:m4a-6-17i:m4a-6-17i:m4a-6-16i:m4a-6-16i:m4a-6-16i:m4a-6-16i:m4a-6-15i:m4a-6-15i:m4a-6-15i:m4a-6-15i:m4a-6-14i:m4a-6-14i:m4a-6-14i:m4a-6-14i:m4a-6-13i:m4a-6-13i:m4a-6-13i:m4a-6-13i:m4a-6-12i:m4a-6-12i:m4a-6-12i:m4a-6-12i:m4a-6-11i:m4a-6-11i:m4a-6-11i:m4a-6-11i:m4a-6-10i:m4a-6-10i:m4a-6-10i:m4a-6-10i:m4a-6-9i:m4a-6-9i:m4a-6-9i:m4a-6-9i:m4a-6-8i:m4a-6-8i:m4a-6-8i:m4a-6-8i:m4a-6-7i:m4a-6-7i:m4a-6-7i:m4a-6-7i:m4a-6-6i:m4a-6-6i:m4a-6-6i:m4a-6-6i:m4a-6-5i:m4a-6-5i:m4a-6-5i:m4a-6-5i:m4a-6-4i:m4a-6-4i:m4a-6-4i:m4a-6-4i:m4a-6-3i:m4a-6-3i:m4a-6-3i:m4a-6-3i:m4a-6-2i:m4a-6-2i:m4a-6-2i:m4a-6-2i:m4a-6-1i:m4a-6-1i:m4a-6-1i:m4a-6-1i:'
> MPIRUN_RANK=16 MPIRUN_NPROCS=128 MPIRUN_ID=32469
> /ibrix/home/mfm42/opt/namd-IB/Linux-amd64-MPI/namd2 +strategy USE_GRID
> equil3_sys.namd
>
> Any suggestions for that kind of error will be appreciated.
>
>
> My system info:
>
> A Dell 1855 Linux cluster whose nodes are each equipped with four Intel Xeon
> EM64T processors (3.6 GHz) and 8 GB of memory. The nodes are connected
> by InfiniBand, a high-speed, low-latency copper interconnect.
>
>
>
> --
> Morad Alawneh
> Department of Chemistry and Biochemistry
> C100 BNSN, BYU
> Provo, UT 84602
>
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:42:29 CST