From chemistry-request@server.ccl.net Fri Dec 14 23:26:59 2001
Received: from rwcrmhc52.attbi.com ([216.148.227.88])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBF4Qxi20870
	for <chemistry@ccl.net>; Fri, 14 Dec 2001 23:26:59 -0500
Received: from C1353359A ([12.228.114.158]) by rwcrmhc52.attbi.com
          (InterMail vM.4.01.03.27 201-229-121-127-20010626) with SMTP
          id <20011215042636.LZPT403.rwcrmhc52.attbi.com@C1353359A>
          for <chemistry@ccl.net>; Sat, 15 Dec 2001 04:26:36 +0000
Reply-To: <mark@planaria-software.com>
From: "Mark Thompson" <mark@planaria-software.com>
To: <chemistry@ccl.net>
Subject: Woodward Hoffman citations
Date: Fri, 14 Dec 2001 20:30:06 -0800
Message-ID: <000001c18521$2c6472c0$9e72e40c@attbi.com>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook IMO, Build 9.0.2416 (9.0.2911.0)
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V5.50.4807.1700


Can someone give me the original journal citation(s) for Woodward and
Hoffmann's Nobel Prize-winning work?

Thanks in advance,
Mark


=================================
Mark Thompson
Planaria Software
Seattle, WA.
http://www.planaria-software.com
=================================



From chemistry-request@server.ccl.net Sat Dec 15 08:37:22 2001
Received: from prserv.net ([32.97.166.31])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFDbMi26438
	for <chemistry@ccl.net>; Sat, 15 Dec 2001 08:37:22 -0500
Received: from attglobal.net (slip-12-64-6-95.mis.prserv.net[12.64.6.95])
          by prserv.net (out1) with SMTP
          id <2001121513370220102on4kme>; Sat, 15 Dec 2001 13:37:03 +0000
Message-ID: <3C1B0E1D.2215B29D@attglobal.net>
Date: Sat, 15 Dec 2001 08:47:25 +0000
From: jmmckel@attglobal.net
Reply-To: jmmckel@attglobal.net
X-Mailer: Mozilla 4.73 [en]C-CCK-MCD NSCPCD473  (WinNT; U)
X-Accept-Language: en
MIME-Version: 1.0
To: chemistry@ccl.net
Subject: F77 to F90/95
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

CCLers,

While trying the free Intel Linux compiler, aimed largely at F90/F95, on
F77 code, I get quite copious warnings about F77 constructs.  Is there an
F77-to-F90/95 utility that will help clear up these warnings?  Is there a
means to de-VAXinate F77 code easily?

Thanks!  I'll post the results if there is interest.

John McKelvey



From chemistry-request@server.ccl.net Sat Dec 15 12:11:45 2001
Received: from mail.bancorp.ru (root@[195.239.131.24])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFHBii28522
	for <chemistry@ccl.net>; Sat, 15 Dec 2001 12:11:44 -0500
Received: from 217.107.75.16 ([217.107.75.16])
	by mail.bancorp.ru (8.11.3/8.11.3) with ESMTP id fBFHBOd25466
	for <chemistry@ccl.net>; Sat, 15 Dec 2001 20:11:24 +0300
Date: Sat, 15 Dec 2001 20:11:20 +0300
From: Gregory Shamov <gas5x@bancorp.ru>
X-Mailer: The Bat! (v1.45) Educational
Reply-To: Gregory Shamov <gas5x@bancorp.ru>
X-Priority: 3 (Normal)
Message-ID: <18024905875.20011215201120@bancorp.ru>
X-Confirm-Reading-To: 
Disposition-Notification-To: gas5x@bancorp.ru
To: chemistry@ccl.net
Subject: disk or diskless beowulf clusters?
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hello All,

Does anyone have experience using quantum chemistry programs like
GAMESS or Gaussian on diskless Beowulf Linux clusters? In general, which
is better: writing integral files to a local HDD, or sending them to a
fileserver via NFS over Fast or Gigabit Ethernet?  How many diskless
nodes could be served by a common x86 PC-based fileserver, such as a
dual P3 or Athlon with IDE or SCSI RAID?

Thank you in advance. I will summarize the results.

-- 
Best regards,
Gregory Shamov,
Dept. Phys. Chem,
Kazan State University
Kazan, Russian Federation                 mailto:gas5x@bancorp.ru



From chemistry-request@server.ccl.net Sat Dec 15 14:04:38 2001
Received: from gandalf.cber.nih.gov ([128.231.52.5])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFJ4ci29755
	for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 14:04:38 -0500
Received: from localhost (rvenable@localhost) by gandalf.cber.nih.gov (980427.SGI.8.8.8/980728.SGI.AUTOCF) via ESMTP id NAA71558 for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 13:58:39 -0500 (EST)
Date: Sat, 15 Dec 2001 13:58:39 -0500
From: Rick Venable <rvenable@gandalf.cber.nih.gov>
To: CHEMISTRY@ccl.net
Subject: Linux kernel 2.4 parallel scaling
Message-ID: <Pine.SGI.4.21.0112151326430.86053-100000@gandalf.cber.nih.gov>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Others here at NIH and I have been benchmarking CHARMM and other
parallel chemistry codes on Linux clusters using MPICH libs with an
ethernet network.  Tests on 2.4.12 kernel nodes with 2, 4, and 8 procs
showed terrible parallel scaling compared to the same tests run with
the 2.2.16-tcpfix kernel.

I'd expect that other people using Linux clusters for computational
chemistry may have encountered this already, and may have looked into
possible solutions.

For instance, are there some kernel parameters in /proc that can be used
to tune the communications performance?
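As a starting point for that question, the TCP-related tunables live under /proc/sys/net; below is a minimal sketch that dumps a few of the buffer-size parameters commonly adjusted for cluster interconnects. The file names are assumptions based on a stock Linux /proc/sys/net tree, not a definitive tuning recipe.

```python
import os

# A few TCP tunables often adjusted for cluster traffic
# (paths assumed from a stock Linux /proc/sys/net tree).
TUNABLES = [
    "ipv4/tcp_rmem",   # min/default/max receive buffer sizes
    "ipv4/tcp_wmem",   # min/default/max send buffer sizes
    "core/rmem_max",   # hard cap on the receive buffer
    "core/wmem_max",   # hard cap on the send buffer
]

def read_tunables(base="/proc/sys/net"):
    """Return {name: value} for each tunable that exists under base."""
    values = {}
    for name in TUNABLES:
        path = os.path.join(base, name)
        if os.path.exists(path):
            with open(path) as f:
                values[name] = f.read().strip()
    return values

if __name__ == "__main__":
    for name, value in read_tunables().items():
        print(name + ": " + value)
```

Writing a new value back is just echoing into the same file as root (or via sysctl); whether larger buffers actually help the MPICH traffic pattern is something only benchmarking can settle.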

Are there any other software solutions to this problem?

Suggestions appreciated; I will summarize.

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Rick Venable
FDA/CBER/OVRR Biophysics Lab
1401 Rockville Pike    HFM-419
Rockville, MD  20852-1448  U.S.A.
(301) 496-1905   Rick_Venable@nih.gov
ALT email:  rvenable@speakeasy.org
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=



From chemistry-request@server.ccl.net Sat Dec 15 15:35:32 2001
Received: from socrates.cgl.ucsf.edu ([128.218.27.3])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFKZWi30856
	for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 15:35:32 -0500
Received: (from ross@localhost)
	by socrates.cgl.ucsf.edu (8.9.3/8.9.3/GSC1.7) id MAA308229
	for CHEMISTRY@ccl.net; Sat, 15 Dec 2001 12:35:12 -0800 (PST)
Date: Sat, 15 Dec 2001 12:35:12 -0800 (PST)
From: Bill Ross  <ross@cgl.ucsf.EDU>
Message-Id: <200112152035.MAA308229@socrates.cgl.ucsf.edu>
To: CHEMISTRY@ccl.net
Subject: Re: CCL:Linux kernel 2.4 parallel scaling

	Myself and others here at NIH have been benchmarking CHARMM and other
	parallel chemistry codes on Linux clusters using MPICH libs with an
	ethernet network.  Tests on 2.4.12 kernel nodes with 2, 4, and 8 procs
	showed terrible parallel scaling, compared to the same tests run with
	the 2.2.16-tcpfix kernel.
	
	...are there some kernel parameters in /proc that can be used
	to tune the communications performance?
	
	Are there any other software solutions to this problem?
	
In another context, programming in Java, I noticed that turning 
off Nagle's algorithm on the application side by setting sockets 
to 'nodelay' improved network performance considerably. Watching 
the low-level TCP/IP traffic, we saw that for a send of a single 
application-level message, the protocol went from 4 sequential packets 
with acks to a blast of 4 packets with a single ack coming back. 
When there is any network delay, the sequential mode amplifies it, 
more than I would have expected. 

I'm not sure how this maps to e.g. C - I couldn't find anything
in 'man setsockopt' for example. Looking up "Nagle's algorithm"
& linux on the web,

  http://mosquitonet.stanford.edu/~laik/projects/nagle_test/nagle_test.html

  "This program is useful because some TCP/IP stacks (in particular 
  Linux pre-2.0.30) do not correctly implement Nagle's Algorithm. "

So there is some precedent for problems in this area, and the
other web references in such a search look useful.
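For reference, at the socket level the switch in question is the TCP_NODELAY option (on Linux it is documented in the tcp(7) man page rather than under setsockopt itself). A minimal Python sketch of disabling Nagle's algorithm on a socket:

```python
import socket

def make_nodelay_socket():
    """Create a TCP socket with Nagle's algorithm disabled."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm: small writes go out immediately
    # instead of being coalesced while waiting for outstanding ACKs.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

if __name__ == "__main__":
    s = make_nodelay_socket()
    print("TCP_NODELAY =",
          s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
    s.close()
```

The equivalent C call is setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)); the Java 'nodelay' setting mentioned above maps to the same option.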

I wonder if Nagle's algorithm may be better adapted to network
conditions that are more lossy than usually seen these days,
or for slower computers. If so, I could imagine that it might be
disabled by default in the kernel, at least sometimes, which
could explain the variable results w/ different kernels.

Bill Ross

From chemistry-request@server.ccl.net Sat Dec 15 16:16:19 2001
Received: from socrates.cgl.ucsf.edu ([128.218.27.3])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFLGJi31279
	for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 16:16:19 -0500
Received: from socrates.cgl.ucsf.edu (localhost [127.0.0.1])
	by socrates.cgl.ucsf.edu (8.9.3/8.9.3/GSC1.7) with ESMTP id NAA309717;
	Sat, 15 Dec 2001 13:15:59 -0800 (PST)
Message-Id: <200112152115.NAA309717@socrates.cgl.ucsf.edu>
To: Bill Ross <ross@cgl.ucsf.EDU>
cc: CHEMISTRY@ccl.net, dek@cgl.ucsf.EDU
Subject: Re: CCL:Linux kernel 2.4 parallel scaling 
In-Reply-To: Your message of "Sat, 15 Dec 2001 12:35:12 PST."
             <200112152035.MAA308229@socrates.cgl.ucsf.edu> 
Date: Sat, 15 Dec 2001 13:15:59 -0800
From: David Konerding <dek@cgl.ucsf.EDU>

Bill Ross writes:
>	Myself and others here at NIH have been benchmarking CHARMM and other
>	parallel chemistry codes on Linux clusters using MPICH libs with an
>	ethernet network.  Tests on 2.4.12 kernel nodes with 2, 4, and 8 procs
>	showed terrible parallel scaling, compared to the same tests run with
>	the 2.2.16-tcpfix kernel.
>	
>	...are there some kernel parameters in /proc that can be used
>	to tune the communications performance?
>	
>	Are there any other software solutions to this problem?
>	
>In another context and programming in java, I noticed that turning 
>off Nagle's algorithm on the application side by setting sockets 
>to 'nodelay' improved network performance considerably. Watching 
>the low-level TCP/IP traffic, we saw that for a send of an 
>application-level msg, the protocol went from 4 sequential packets 
>with acks to a blast of 4 packets with a single ack coming back. 
>When there is any network delay, the sequential mode amplifies it, 
>more than I would have expected. 
>
>I'm not sure how this maps to e.g. C - I couldn't find anything
>in 'man setsockopt' for example. Looking up "Nagle's algorithm"
>& linux on the web,
>
>  http://mosquitonet.stanford.edu/~laik/projects/nagle_test/nagle_test.html
>
>  "This program is useful because some TCP/IP stacks (in particular 
>  Linux pre-2.0.30) do not correctly implement Nagle's Algorithm. "
>
>So there is some precedent for problems in this area, and the
>other web references in such a search look useful.

Actually, MPICH already disables Nagle's algorithm, which was designed
to reduce the number of packets going over the wire by agglomerating
small writes.  This was very useful on long-haul links and hub-based
TCP/IP networks from about 5-10 years ago, but it is somewhat less
useful now. 

Also, 2.0.30 is so old as to be completely irrelevant.  Even 2.2 had Nagle
bugs, see:
http://www.icase.edu/coral/LinuxTCP.html

(their site or name server seems to be down).

2.4 should have a much better stack.  But, realistically, programs like
AMBER and CHARMM aren't going to scale very well over TCP/IP due to the
high latency and low bandwidth of 100BaseT (and even gigabit) ethernet.
The best I've been able to get is about 2X for 4 cpus and 3X for 6 cpus.
However, since the cpus are cheap and so is the networking hardware, 
I don't particularly care that the scaling is poor.

Dave

From chemistry-request@server.ccl.net Sat Dec 15 16:46:00 2001
Received: from socrates.cgl.ucsf.edu ([128.218.27.3])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFLk0i31488
	for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 16:46:00 -0500
Received: from socrates.cgl.ucsf.edu (localhost [127.0.0.1])
	by socrates.cgl.ucsf.edu (8.9.3/8.9.3/GSC1.7) with ESMTP id NAA311159;
	Sat, 15 Dec 2001 13:45:36 -0800 (PST)
Message-Id: <200112152145.NAA311159@socrates.cgl.ucsf.edu>
To: Rick Venable <rvenable@gandalf.cber.nih.gov>
cc: CHEMISTRY@ccl.net
Subject: Re: CCL:Linux kernel 2.4 parallel scaling 
In-Reply-To: Your message of "Sat, 15 Dec 2001 13:58:39 EST."
             <Pine.SGI.4.21.0112151326430.86053-100000@gandalf.cber.nih.gov> 
Date: Sat, 15 Dec 2001 13:45:36 -0800
From: David Konerding <dek@cgl.ucsf.EDU>

Rick Venable writes:
>Myself and others here at NIH have been benchmarking CHARMM and other
>parallel chemistry codes on Linux clusters using MPICH libs with an
>ethernet network.  Tests on 2.4.12 kernel nodes with 2, 4, and 8 procs
>showed terrible parallel scaling, compared to the same tests run with
>the 2.2.16-tcpfix kernel.
>
>I'd expect that other people using Linux clusters for computational
>chemistry may have encountered this already, and may have looked into
>possible solutions.
>
>For instance, are there some kernel parameters in /proc that can be used
>to tune the communications performance?
>
>Are there any other software solutions to this problem?

Can you please let us know what sort of network card you are using,
whether zero-copy networking is enabled, and how many errors you
see on your link (run ifconfig).   I missed the fact earlier that
you got better scaling with 2.2.16-tcpfix.  This suggests to me that
you need to look into how you configured the kernel and what
other variables changed (or didn't change).  How do simple programs like
FTP compare between the two kernels?  That's the first thing I'd check.
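One way to pull those per-interface error counts without eyeballing ifconfig output is to parse /proc/net/dev. A rough sketch follows; the column positions (RX errors in field 3, TX errors in field 11 after the interface name) are assumed from the 2.2/2.4 kernel layout and should be double-checked against your own kernel.

```python
def link_errors(text):
    """Parse /proc/net/dev contents -> {iface: (rx_errs, tx_errs)}.

    Column positions assumed from the 2.2/2.4 kernel layout: after
    the interface name, field 3 is RX errors, field 11 is TX errors.
    """
    errors = {}
    for line in text.splitlines()[2:]:   # skip the two header lines
        if ":" not in line:
            continue
        iface, _, rest = line.partition(":")
        fields = rest.split()
        if len(fields) >= 11:
            errors[iface.strip()] = (int(fields[2]), int(fields[10]))
    return errors

if __name__ == "__main__":
    try:
        with open("/proc/net/dev") as f:
            stats = link_errors(f.read())
    except OSError:
        stats = {}                        # not on Linux, or /proc missing
    for iface, (rx, tx) in stats.items():
        print("%s: %d RX errors, %d TX errors" % (iface, rx, tx))
```

Nonzero and growing error counts under load would point at the NIC or driver rather than the TCP stack.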

Dave

From chemistry-request@server.ccl.net Sat Dec 15 17:29:52 2001
Received: from bute.st-andrews.ac.uk (im17@[138.251.12.1])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBFMTqi31935
	for <chemistry@ccl.net>; Sat, 15 Dec 2001 17:29:52 -0500
Received: from localhost (im17@localhost)
	by bute.st-andrews.ac.uk (8.9.1a/8.9.1) with SMTP id WAA02981
	for <chemistry@ccl.net>; Sat, 15 Dec 2001 22:29:32 GMT
Date: Sat, 15 Dec 2001 22:29:32 +0000 (GMT)
From: Ibrahim Moustafa <im17@st-andrews.ac.uk>
X-Sender: im17@st-andrews.ac.uk
To: chemistry@ccl.net
Subject: Molecular dynamics with implicit solvent
Message-ID: <Pine.SOL.3.96.1011215215908.2325B-100000@bute>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

 Dear CCL users,

   I'm doing MD on my protein structure without explicit solvent because
of limited computing power. I use the Tripos FF in SYBYL 6.6, with a
15 A cutoff distance and a distance-dependent dielectric function with
dielectric constant 2.
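For concreteness, with a distance-dependent dielectric eps(r) = eps0*r the pairwise Coulomb term falls off as 1/r^2 instead of 1/r. A toy sketch of that functional form (a hypothetical helper illustrating the setup described above, not SYBYL's actual code):

```python
COULOMB = 332.0637  # kcal*A/(mol*e^2), standard electrostatic constant

def coulomb_ddd(q1, q2, r, eps0=2.0, cutoff=15.0):
    """Electrostatic energy (kcal/mol) between two point charges
    under a distance-dependent dielectric eps(r) = eps0 * r, so the
    usual 1/r Coulomb term becomes 1/r**2.  Charges in e, r in A.
    A toy sketch of the functional form, not SYBYL's implementation.
    """
    if r > cutoff:          # truncate beyond the nonbonded cutoff
        return 0.0
    return COULOMB * q1 * q2 / (eps0 * r * r)
```

The 1/r^2 falloff is what lets the distance-dependent dielectric crudely mimic solvent screening without explicit waters.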

   First, I heat the system to 300 K in steps (10, 50, 100, 200, 300 K),
2 ps for each step.
   
  Comparing the configuration resulting from the heating step with the
starting configuration gives an RMSD of 3.79 A. Of course, the absence of
solvent contributes to these unphysical movements. Now I'm thinking of
running the simulation at a low temperature (50 K) to suppress these
large movements. I want to discuss this with CCL users before doing so.



  What I want to ask is this: suppose I conduct the simulation at 50 K
instead of 300 K, a non-physiological condition (I have seen some people
do this: Journal of Molecular Structure (Theochem) 368, 1996, 205-212).
Can we rely on the output of that kind of simulation? Can we really
trust the answer to the question under investigation?

  - Another question: in the absence of solvent, do we need to
equilibrate the system before the production phase of the MD? If so, why?

  - Is it better to start the simulation from the average structure of
the heating step (or the equilibration step), or from the last
configuration of each step?

   I'd really appreciate it if someone could answer these questions. I
promise I'll summarize the answers for the CCL users.

  Many thanks,
   Ibrahim
 

Name         :Ibrahim M.Moustafa
Mail address :Center for Biomolecular Science,
              BIOMOLECULAR SCIENCE BUILDING,
              North Haugh,St-Andrews,
              Fife,KY16 9ST
              Scotland,U.K.
         Tel :+44(0)1334-467257
      E-mail :im17@st-andrews.ac.uk



From chemistry-request@server.ccl.net Sat Dec 15 20:19:45 2001
Received: from epyon.INS.cwru.edu (root@[129.22.9.21])
	by server.ccl.net (8.11.6/8.11.0) with ESMTP id fBG1Jji05162
	for <CHEMISTRY@ccl.net>; Sat, 15 Dec 2001 20:19:45 -0500
Received: from gavin (chem51286.CHEM.CWRU.Edu [129.22.129.217]) by epyon.INS.cwru.edu with SMTP (8.11.6+cwru/CWRU-3.8)
	id fBG1JQv16485; Sat, 15 Dec 2001 20:19:26 -0500 (EST) (from hxt10@po.cwru.edu for <CHEMISTRY@ccl.net>)
Message-ID: <000d01c185d0$71249840$d9811681@cwru.edu>
Reply-To: "Hui-Hsu \(Gavin\) Tsai" <hxt10@po.cwru.edu>
From: "Hui-Hsu \(Gavin\) Tsai" <hxt10@po.cwru.edu>
To: <CHEMISTRY@ccl.net>
Subject: advise for protein folding and molecular electronics
Date: Sat, 15 Dec 2001 20:24:43 -0500
MIME-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_000A_01C185A6.8823D6C0"
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2479.0006
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2479.0006

This is a multi-part message in MIME format.

------=_NextPart_000_000A_01C185A6.8823D6C0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hi Everyone,

    I will start new projects studying protein folding and molecular
electronics (or nanotech) using computational methods. I have some
questions that need your comments. My questions are below:

1. Which research groups did the pioneering work in understanding protein
folding and molecular electronics using computational methods?

2. Has there been any significant recent progress in understanding and
predicting protein folding?

3. What is the greatest difficulty in these fields (protein folding and
molecular electronics)?

4. What are the current research directions in these fields (protein
folding and molecular electronics)?

Any comment is welcome.

I will summarize the results.

Thanks

Gavin


------=_NextPart_000_000A_01C185A6.8823D6C0--



