From chemistry-request@server.ccl.net Fri Dec 21 04:47:27 2001
Received: from sesosn01.astrazeneca.com ([212.209.42.131])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBL9lQS10774
	for <CHEMISTRY@ccl.net>; Fri, 21 Dec 2001 04:47:26 -0500
Received: from sesosn11.seso.astrazeneca.net (sesosn11.seso.astrazeneca.net [192.168.199.18])
	by sesosn01.astrazeneca.com (8.9.3/8.9.3.DEF) with ESMTP id KAA05739
	for <CHEMISTRY@ccl.net>; Fri, 21 Dec 2001 10:47:02 +0100 (MET)
From: Steve.St-Gallay@astrazeneca.com
Received: from sesosn21.seso.astrazeneca.net (sesosn21.seso.astrazeneca.net [192.71.145.181])
	by sesosn11.seso.astrazeneca.net (8.9.3/8.9.3) with ESMTP id KAA00417
	for <CHEMISTRY@ccl.net>; Fri, 21 Dec 2001 10:45:45 +0100 (MET)
Received: from astra-cdc-x03.seso.astrazeneca.net (astra-cdc-x03.seso.astrazeneca.net [192.71.145.48])
	by sesosn21.seso.astrazeneca.net (8.10.1/8.10.1) with ESMTP id fBL9jjs17193
	for <CHEMISTRY@ccl.net>; Fri, 21 Dec 2001 10:45:45 +0100 (MET)
Received: by astra-cdc-x03.seso.astrazeneca.net with Internet Mail Service (5.5.2650.21)
	id <XXCG9W4P>; Fri, 21 Dec 2001 10:45:44 +0100
Message-ID: <CC3DFAB6AEA9D411BF9500508BE33BDB06C84D61@gb-chw-mail1.ukcw.astrazeneca.net>
To: CHEMISTRY@ccl.net
Subject: Help Compiling Mopac93 on Linux
Date: Fri, 21 Dec 2001 10:45:42 +0100
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2650.21)
Content-Type: text/plain;
	charset="iso-8859-1"

Season's Greetings, CCLers,

	Has anyone successfully compiled mopac93 (not mopac7) under Linux
(2.4.9 MOSIX kernel, recompiled after Red Hat 7.1 was installed)? It
compiles without errors and the executable is produced, but the tests I
have run fail to give the right answers.

	Thanks in advance.

		Steve


-=-o-=-=-o-=-=-o-=-=-o-=-=-o-=-=-o-=-=-o-
Dr S.A.St-Gallay

Phone:	+44-(0)150-964-4882
Fax:	+44-(0)150-964-5576
E-Mail:	Steve.St-Gallay@astrazeneca.com

AstraZeneca R&D Charnwood
Bakewell Road
Loughborough
Leics
LE11 5RH
England
-=-o-=-=-o-=-=-o-=-=-o-=-=-o-=-=-o-=-=-o-


From chemistry-request@server.ccl.net Fri Dec 21 05:12:04 2001
Received: from far1.far.ub.es ([161.116.93.19])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBLAC4S11216
	for <chemistry@ccl.net>; Fri, 21 Dec 2001 05:12:04 -0500
Received: (from javier@localhost) by far1.far.ub.es (980427.SGI.8.8.8/970903.SGI.AUTOCF) id LAA10989; Fri, 21 Dec 2001 11:01:30 +0100 (MET)
From: "F. J. Luque" <javier@far1.far.ub.es>
Message-Id: <10112211101.ZM10985@far1.far.ub.es>
Date: Fri, 21 Dec 2001 11:01:17 +0100
In-Reply-To: Xavier Girones <xaviergirones@yahoo.com>
        "CCL:Solvation in Gaussian within AM1" (Mar 29,  6:03pm)
References: <20011220210245.79971.qmail@web9604.mail.yahoo.com>
X-Mailer: Z-Mail (3.2.0 26oct94 MediaMail)
To: Xavier Girones <xaviergirones@yahoo.com>
Subject: Re: CCL:Solvation in Gaussian within AM1
Cc: chemistry@ccl.net
Mime-Version: 1.0
Content-Type: multipart/mixed;
	boundary="PART-BOUNDARY=.110112211101.ZM10985.far.ub.es"


--PART-BOUNDARY=.110112211101.ZM10985.far.ub.es
Content-Description: Text
Content-Type: text/plain ; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
X-Zm-Decoding-Hint: mimencode -q -u 

Dear Xavier,

If you are interested in continuum calculations at the AM1 level, you can
use our MST continuum model, parametrized for the AM1 and PM3 Hamiltonians.
It is implemented in the MOPAC 6.0 program. Simply contact us and we will
send you a copy of the code.
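
For reference, MST-type continuum models partition the solvation free
energy in the usual continuum fashion; schematically (the exact
partitioning in the distributed code may differ slightly):

    \Delta G_{\mathrm{sol}} = \Delta G_{\mathrm{ele}}
                            + \Delta G_{\mathrm{cav}}
                            + \Delta G_{\mathrm{vdW}}

The electrostatic term couples the solvent reaction field to the solute
Hamiltonian, which is where the AM1/PM3 parametrization enters; the
cavitation and van der Waals terms cover the non-electrostatic work of
creating the solute cavity in the solvent.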

			F. J. Luque



> Dear CCL-clients,
>
> Is there any way to simulate solvent effects with
> Gaussian within the AM1 level of theory? I have been
> browsing the manual, and it seems that solvation is
> only available for HF methods and above. Any clue?
>
> Xavier Gironés
> Institute of Computational Chemistry
> University of Girona
> Girona, Spain
>
>-- End of excerpt from Xavier Girones



-- 
F. J. Luque

Departament de Fisicoquimica      e-mail: javier@far1.far.ub.es
Facultat de Farmacia              phone: + 34 93 402 45 57
Universitat de Barcelona          FAX:   + 34 93 403 59 87
Av. Diagonal s/n                         + 34 93 402 18 96
08028 Barcelona

--PART-BOUNDARY=.110112211101.ZM10985.far.ub.es--



From chemistry-request@server.ccl.net Fri Dec 21 16:10:41 2001
Received: from gandalf.cber.nih.gov ([128.231.52.5])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBLLAeS18248
	for <CHEMISTRY@ccl.net>; Fri, 21 Dec 2001 16:10:40 -0500
Received: from localhost (rvenable@localhost) by gandalf.cber.nih.gov (980427.SGI.8.8.8/980728.SGI.AUTOCF) via ESMTP id QAA97121; Fri, 21 Dec 2001 16:04:27 -0500 (EST)
Date: Fri, 21 Dec 2001 16:04:26 -0500
From: Rick Venable <rvenable@gandalf.cber.nih.gov>
To: CHEMISTRY@ccl.net
cc: steve@helix.nih.gov
Subject: SUMMARY: Linux parallel scaling
Message-ID: <Pine.SGI.4.21.0112182109580.90987-100000@gandalf.cber.nih.gov>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

We have Myrinet on some nodes, and it scales very well for MD codes like
CHARMM and AMBER.  However, far more ethernet nodes are available, and
there never seems to be enough computer time ...

I should also note that for CHARMM, performance varies greatly with the
compiler used-- we have observed a roughly 25-45% speedup for executables
produced with the PGI compiler relative to g77.  The largest speedups were
observed for simulations with Particle-Mesh Ewald, probably because g77
doesn't optimize the 3D FFT very well (ca. 18 min on one processor for the
test below).

Prior to the summary, some background-- with dual P3 or AMD nodes, we
observed the following for a ca. 19K atom MD test with PM Ewald using a
CHARMM executable compiled with pgf77 and MPICH (completion time for 100
steps, in minutes; N processors):


       P3-866 (2.2)    P3-866 (2.4)    AMD p1400 (2.4)
  N    CPU    WALL     CPU    WALL     CPU    WALL
  1   10.07  10.07    10.36  10.37     5.67   5.68
  2    6.74   7.37     7.25   7.88     3.55   4.13  <--
  4    4.11   5.12     4.87   6.52     3.24   5.35  <--
              ****            ****
  (2.2/2.4 are Linux kernel series; times are CPU/WALL minutes)

Note the WALL clock time in particular; it continues to decrease through
4 processors for the P3-866 nodes, but the decrease is much greater with
the Linux 2.2.x-tcpfix kernel.  Note also that 2 processors beat 4 on the
AMD p1400 nodes-- this may be due in part to the roughly doubled
processor speed, which leaves the MD simulations essentially I/O bound.
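
For concreteness, expressed as parallel speedup and efficiency computed
from the WALL times above,

    S(N) = \frac{T_{\mathrm{wall}}(1)}{T_{\mathrm{wall}}(N)}, \qquad
    E(N) = S(N)/N

the table works out to roughly:

  P3-866, 2.2 kernel:  S(4) = 10.07/5.12 ~ 1.97  (E ~ 49%)
  P3-866, 2.4 kernel:  S(4) = 10.37/6.52 ~ 1.59  (E ~ 40%)
  AMD p1400:           S(2) =  5.68/4.13 ~ 1.38  (E ~ 69%)
                       S(4) =  5.68/5.35 ~ 1.06  (E ~ 27%)

i.e. the 2.2.x-tcpfix kernel holds nearly half the ideal efficiency at
4 processors, while the faster AMD nodes gain almost nothing beyond 2.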

Excerpts from the replies I received are below, following my signature.


=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Rick Venable
FDA/CBER/OVRR Biophysics Lab
1401 Rockville Pike    HFM-419
Rockville, MD  20852-1448  U.S.A.
(301) 496-1905   Rick_Venable@nih.gov
ALT email:  rvenable@speakeasy.org
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=


	Bogdan Costescu <bogdan.costescu@iwr.uni-heidelberg.de>

I've just updated the software on one of our clusters, which we use
mainly for running CHARMM, to Red Hat Linux 7.2 (but using kernel 2.4.16
from RawHide) and tested the latest versions of LAM-MPI and MPICH.  As I
expected (I have gotten the same behaviour in every such test, roughly
every 6 months for the last 2 years), LAM was significantly better --
about 25% faster.  Also, I couldn't see any significant speed change
between this test and the previous ones (which used Red Hat Linux 6.x
with official Red Hat kernels up to 2.2.19 and then 2.2.20; I only
applied the TCP patch to 2.2.16, since with >= 2.2.19 kernels I couldn't
see a significant difference).

I could also get something more significant by compiling CHARMM without
the GENCOMM flag; this way it uses the collective operations from the
MPI library.  The downside is that it is then only allowed to run on a
power-of-2 number of nodes (I hope I got this right; as I said, it was a
year ago...)
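
To make the collective-operations point concrete, here is a minimal,
generic C/MPI sketch -- not CHARMM source, and the names are invented
for illustration.  It contrasts a global sum hand-rolled from
point-to-point calls (what a generic communication layer must do) with
the single collective call that an MPI library is free to implement
internally as a tree or butterfly:

/* gsum.c -- hand-rolled global sum vs. the MPI collective.
 * Compile with e.g. "mpicc gsum.c -o gsum"; runs on any number of ranks. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nproc, i;
    double local, sum;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    local = (double)(rank + 1);  /* stand-in for a per-node partial energy */

    /* Hand-rolled: everyone ships partials to rank 0, which adds them up
     * and broadcasts the result back -- 2(N-1) point-to-point messages. */
    if (rank == 0) {
        double part;
        sum = local;
        for (i = 1; i < nproc; i++) {
            MPI_Recv(&part, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
            sum += part;
        }
    } else {
        MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Bcast(&sum, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Library collective: one call, O(log N) steps in a good implementation. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %g on %d ranks\n", sum, nproc);
    MPI_Finalize();
    return 0;
}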


	David Konerding <dek@cgl.ucsf.EDU>

Are you using TCP/IP over 100BaseT ethernet?  If so, then I don't really
expect you will get very good scaling.  My experience with AMBER and
CHARMM is that they get about 2X for 4 CPUs at best, and that the
bottleneck is within the MPI implementation and the performance of
ethernet.  The latencies are just too high.  CHARMM and AMBER both scale
much better if you get a good interconnect like Myrinet and use their
MPI libraries.
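
A quick way to put a number on that latency is a two-rank ping-pong.
Below is a minimal C/MPI sketch (the repetition count is arbitrary;
halve the round-trip time for the one-way figure):

/* pingpong.c -- rough one-way latency probe for an MPI/interconnect pair.
 * Run on exactly two ranks, e.g. "mpirun -np 2 ./pingpong". */
#include <stdio.h>
#include <mpi.h>

#define REPS 1000

int main(int argc, char **argv)
{
    int rank, i;
    char byte = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {           /* send, then wait for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {    /* echo everything back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)  /* one iteration = one round trip = two messages */
        printf("approx. one-way latency: %.1f microseconds\n",
               (t1 - t0) / REPS / 2.0 * 1.0e6);

    MPI_Finalize();
    return 0;
}

If the one-way figure is large compared to (time per MD step) divided by
(messages per step), the interconnect rather than the CPU sets the
scaling limit.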

	Ivan Rossi <ivan@biocomp.unibo.it>

I had a very similar problem with GROMACS under Scyld Beowulf, which
uses a custom version of MPICH.  The speedup was growing only as
sqrt(nodes).  The problem was solved by using LAM-MPI (www.lam-mpi.org)
and replacing Scyld with a standard Red Hat distribution.  LAM seems to
be simply a better implementation of MPI.

However, scaling is not very good for parallel MD over fast Ethernet in
general: the data packets are small and frequently exchanged.  By
contrast, I have seen linear scaling up to 16 nodes for MD codes on
Linux clusters equipped with low-latency interconnects such as Myrinet
or Scali/Dolphin, but these are much more expensive.
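
A crude latency model shows why adding nodes stops helping (notation
introduced here, not from the reply: m small messages per step,
per-message latency \lambda, serial step time T(1)):

    T(N) \approx \frac{T(1)}{N} + m\,\lambda
    \qquad\Longrightarrow\qquad
    S(N) = \frac{T(1)}{T(N)} \;<\; \frac{T(1)}{m\,\lambda}

The compute term shrinks with N but the latency term does not, so the
speedup saturates near T(1)/(m\lambda); lowering \lambda (Myrinet, SCI)
raises that ceiling directly.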





