From mckelvey@Kodak.COM  Tue Nov  3 17:47:02 1992
Date: Tue, 3 Nov 92 22:47:02 -0500
From: mckelvey@Kodak.COM
To: osc@Kodak.COM
Subject: IBM/RS6000 compiler



A THOUSAND PARDONS, IBM! I USED AN OLD MAKEFILE!  -O GAVE 4 MIN 38 SEC,
IN GOOD AGREEMENT WITH THE PREVIOUS RUN ON THE PREVIOUS COMPILER!

SORRY TO HAVE CLUTTERED THE AIR IN THE FIRST PLACE.


JOHN MCKELVEY
RES LABS
E. KODAK

From cabku01@mailserv.zdv.uni-tuebingen.de  Wed Nov  4 05:13:59 1992
From: cabku01@mailserv.zdv.uni-tuebingen.de (Hartwig Kuehbauch)
Subject: CFV:sci.chem.organomet
To: chemistry@ccl.net (CHEMISTRY-LISTE)
Date: Wed, 4 Nov 92 10:21:45 MET


From news.announce.newgroups Wed Nov  4 10:17:01 1992
From: cabku01@mailserv.zdv.uni-tuebingen.de (Hartwig Kuehbauch)
Subject: CFV: sci.chem.organomet
Organization: uni-tuebingen.de
Date: Sun, 1 Nov 1992 23:53:54 GMT

Dear netters!

This is a Call For Votes for: "sci.chem.organomet"

The proposed newsgroup would support all kinds of discussion related to
organometallic chemistry.

The voting period will run from 11/02/1992 to 12/02/1992.

Only votes received during this period will be counted.


VOTING INSTRUCTIONS:
--------------------

To vote for or against this newsgroup, you must send an e-mail message to

         cabku01@mailserv.zdv.uni-tuebingen.de

Posted votes will not be counted.

Your vote has to be clear. Something like:

         I vote for sci.chem.organomet
         I vote against sci.chem.organomet
         sci.chem.organomet yes
         sci.chem.organomet no

will do.


ABOUT THE PROPOSED NEWSGROUP:
-----------------------------

The newsgroup sci.chem.organomet would be unmoderated.

The main purpose of this newsgroup would be to give organometallic chemists
all over the world the chance to communicate with each other about everything
related to organometallic chemistry.

That means no chatter about things irrelevant to a scientist doing research
in organometallic chemistry, e.g. "smoke-bombs" or questions like "what
happens if I mix Domestos with HCl", but serious discussion with people of the
same interest.

Everything that has to do with organometallic chemistry in a broad sense
would be welcome. That includes analytical methods such as NMR, IR and
so on, computational problems, and things like postdoc positions.

This newsgroup would be one more source of information for the researcher
--- and possibly one of the best.


Thank you for reading.

Hartwig Kuehbauch
-- 
=============================================================================
= Hartwig Kuehbauch - University of Tuebingen - Dep. of Inorg. Chemistry II =
= (cabku01@mailserv.zdv.uni-tuebingen.de)     - Germany -                   =
=============================================================================

From LEHERTE@BNANDP11.bitnet  Wed Nov  4 10:09:38 1992
Date:         Wed, 04 Nov 92 10:08:38 +01
From: Laurence Leherte <LEHERTE%BNANDP11.BITNET@OHSTVMA.ACS.OHIO-STATE.EDU>
Subject:      Lectures announcement
To: chemistry@ccl.net



                    Announcement:
A Series of Lectures in the field of Chemistry and Artificial
Intelligence will be held at the Facultes Universitaires Notre-Dame
de la Paix (Namur, Belgium), on November the 19th.

                *****************************
                *  MOLECULAR SCENE ANALYSIS *
                *****************************

    Janice Glasgow, Dept. of Computing and Information Science
             Queen's University, Kingston, Canada

       Frank H. Allen, Cambridge Crystallographic Data Centre,
                      Cambridge, UK.

    Suzanne Fortier, Dept. of Chemistry, Queen's University,
                     Kingston, Canada


   The concept of "scene analysis" has been used in the context of
machine vision to refer to the set of processes associated with
the classification and understanding of complex images. Such
analyses rely on the availability of domain knowledge in the form
of structural templates, rules or heuristics to locate and identify
features in a scene. By analogy we use the phrase "molecular scene
analysis" to refer to the processes associated with the reconstruction
and interpretation of molecular structures and molecular interactions.
This presentation, in three parts, will describe some fundamental
aspects of the Molecular Scene Analysis project.

Computational Imagery :  Janice Glasgow
---------------------------------------
   At the core of our knowledge-based approach to molecular
scene analysis is the concept of imagery, that is, the ability to
reason with three-dimensional images of molecular structure.
To provide a computational framework that can imitate human
visualization abilities, we are designing
image representations that make explicit the fundamental
spatial and visual characteristics of a molecular scene.
Applying the computational reasoning techniques of molecular
imagery to the information accumulated in a crystallographic
knowledge base provides the "intelligence" vital to
molecular scene analysis.

From Databases to Knowledge Bases : Frank Allen
-----------------------------------------------

Conceptual Clustering Applications to Crystallographic Data : Suzanne Fortier
-----------------------------------------------------------------------------
   Our research in machine learning is motivated by the need for
techniques to structure, manage and compress the rapidly growing
crystallographic databases and transform them into knowledge bases.
An incremental conceptual clustering algorithm, specifically tailored
to objects/scenes composed of many parts, has been designed and
implemented. The algorithm and an initial application to pyranose sugar
data will be described.


Location:      Facultes Universitaires Notre-Dame de la Paix
---------      Chemistry Department
               Auditorium CH2
               Rue Grafe, 2
               B-5000 NAMUR
               Belgium

Schedule:      November, the 19th, 1992
---------      15:00

Information:   Prof. D. P. Vercauteren
------------   Dr. L. Leherte
               E. Titeca
               Laboratoire de Physico-Chimie Informatique
               email:vercau,leherte,titeca at scf.fundp.ac.be
               Tel:+32-81-724534, +32-81-724535
               Fax:+32-81-724530

From schw0531@compszrz.zrz.tu-berlin.de  Wed Nov  4 16:10:39 1992
Date: Wed, 4 Nov 92 15:10:39 +0100
From: Prof. Dr. Helmut Schwarz <schw0531@compszrz.zrz.tu-berlin.de>
To: CHEMISTRY@ccl.net
Subject: ECP calculations





To initiate a discussion about the reliability of ECPs in TM calculations,
I have summarized some results for the iron neutral and ion:

          5D->5F     5D->3F    5D->6D     5D->4F
----------------------------------------------------------
SCF (UHF)
ECP(Ar)   1.45       2.08       6.68      7.71
ECP(Ne)   1.86       3.37       6.71      8.12
PP(Ar)    1.87       4.49       6.32      7.30
rel AE    2.06       7.53       6.29      8.27
----------------------------------------------------------
MP2
ECP(Ar)   0.50       1.11       6.89      6.66
ECP(Ne)   0.67       1.25       6.90      7.19
PP(Ar)    0.87       1.42       7.46      8.09
----------------------------------------------------------
exp       0.86       1.49       7.86      8.09
----------------------------------------------------------

The table gives the calculated excitation energies for iron (in eV),
obtained using the ECP of Hay with an argon and a neon core,
respectively, together with a pseudopotential (Durand & Barthelat)
parametrized by us. The basis sets used are DZP for the ECP(Ar), the
contraction of Frenking (TZP) for the ECP(Ne), and 6s3p8d/5s3p5d for
the PP calculations.

I think the results clearly demonstrate that the ECP(Ar)-DZP scheme is
not able to describe the excitation energies even in a qualitative
way (6D vs. 4F). As described by G. Frenking, an expansion of the
valence space and a more flexible basis can be used to make the results
more realistic. But, as can be concluded from the table, the results
still remain not very satisfying. In my view these problems are connected
more with the ground-state-oriented parametrization procedure than with
the valence space. Our PP parametrization (in fact very similar to Hay's)
uses the larger core, and the agreement with all-electron (AE) and
experimental values is rather good.

The problem of properly describing the excitation energies is also very
important in low-spin (coordinatively saturated) TM systems, because
the electronic valence state in an M-(L)n system is often different from
the atomic ground state. Errors in the description of the excitations
at the atomic level will lead to an uncertainty of at least +/- 10 kcal/mol
in molecular systems.
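
To put a number on that, here is a minimal C sketch (purely illustrative,
not part of our calculations) that computes the mean absolute deviation
(MAD) of each method's MP2 excitation energies from experiment, using the
MP2 and exp rows of the table above and 1 eV = 23.06 kcal/mol:

#include <stdio.h>
#include <math.h>

#define NTRANS     4
#define EV_TO_KCAL 23.06            /* 1 eV = 23.06 kcal/mol */

int main(void)
{
    /* MP2 excitation energies (eV), copied from the table above */
    const char  *name[3] = { "ECP(Ar)", "ECP(Ne)", "PP(Ar)" };
    const double mp2[3][NTRANS] = {
        { 0.50, 1.11, 6.89, 6.66 },
        { 0.67, 1.25, 6.90, 7.19 },
        { 0.87, 1.42, 7.46, 8.09 }
    };
    const double expt[NTRANS] = { 0.86, 1.49, 7.86, 8.09 };
    int i, j;

    for (i = 0; i < 3; i++) {
        double mad = 0.0;
        for (j = 0; j < NTRANS; j++)
            mad += fabs(mp2[i][j] - expt[j]);
        mad /= NTRANS;
        printf("%-8s MAD = %.2f eV = %4.1f kcal/mol\n",
               name[i], mad, mad * EV_TO_KCAL);
    }
    return 0;
}

Even at the MP2 level, the Ar-core ECP deviates from experiment by about
18 kcal/mol on average, while the PP stays within about 3 kcal/mol.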

The message is: be careful if you are using the ECP of Hay (especially
in the GAUSSIAN program series), where the calculations often converge
to an excited state.

Jan Hrusak

From mail Tue Nov  3 21:30:52 1992
Date: 	Tue, 3 Nov 1992 21:29:35 -0500
From: hyper!hurst (Graham Hurst)
To: chemistry@ccl.net
Subject: Musings about parallelism...

The recent flurry of posts about parallelism prompts my $0.02.  If
any of this has been said here before, I'm sorry.  (I wasn't a subscriber
when parallelism was a topic last year!)

Parallelism is not *that* new for computational chemistry, though I agree
that it is newer than vectorization.

I think the first distributed memory MIMD computational chemistry consisted
of molecular dynamics and Monte Carlo benchmark calculations done by Neil
Ostlund, Bob Whiteside and Peter Hibbard (JPC 86, 2190 (82)) in the early
eighties.  I think they abandoned shared memory in favour of distributed
memory MIMD around the turn of the decade.

The first use of parallelism for quantum chemistry that I'm aware of was
in Enrico Clementi's lab at IBM in Kingston NY in the mid 80s (see
IJQC Symp 18, 601 (1984) and JPC 89, 4426 (85)).  When I started a postdoc
there in Jan 86, both IBMOL (later KGNMOL) and HONDO ran in parallel on the
LCAP systems.  Each LCAP had a serial IBM "master" and 10 FPS array processor
"slaves" that acted in a distributed memory fashion, though later
developments added shared memory.  The parallel HONDO 8 referred to in
an earlier post here probably descends from that version, parallelised by
Michel Dupuis.  Incidentally this is where Roberto Gomperts (hi!) first
learned about parallelism when developing KGNMOL.  Many other comp chem
programs were parallelized for LCAP in this lab too.

In Jan 88 I joined Hypercube (developers of HyperChem), which had been
founded by Neil Ostlund to write computational chemistry software for
distributed memory MIMD computers.  Neil's philosophy was (and still is
I think) that "dusty deck" FORTRAN codes do not parallelize well, and
he sought to start from scratch with distributed memory MIMD parallelism
as one of the design criteria.  At that time he already had ab initio
and semi-empirical prototype codes running on the Intel iPSC.  I developed
a parallel implementation of the AMBER molecular mechanics potential on
the Intel iPSC/2 (written in C) and later in 1988 ported to a ring of
transputers.  These semi-empirical and molecular mechanics codes designed
for distributed memory MIMD live on as parts of HyperChem!  Once you've
written for a parallel machine it's easy to run on a serial machine like
the PC - just set the number of nodes to 1!  For the SGI version of
HyperChem, parallelism is exploited by simulating the message passing
of distributed memory MIMD on multi-processor Irises.  This may be the
only parallel SGI comp chem code *not* parallelized by Roberto! ;-)

BTW HyperChem's implementation of the MOPAC methods *is* parallel for
distributed memory MIMD computers, but we haven't yet convinced Autodesk
to market such a version. :-(

It's nice to see the growing interest in and acceptance of parallelism,
but somewhat frustrating that we've had to wait so long! In the meantime
we had to make a serial PC version of our software to pay the rent! ;-)

Someone (sorry I didn't keep the post) commended CDAN for its recent
articles on parallelism - in the late 80's they declined to have Neil write
an article on parallelism in computational chemistry because they said no
one was interested in parallelism!

Should you worry about porting or redesigning for distributed memory
MIMD? Only if you:
    (a) want a single calculation done faster
or
    (b) want to tackle a larger calculation.
For throughput you're better off running n serial jobs on n nodes (provided
the jobs fit!).  You can do (a), at least for smaller numbers of nodes, by
porting a serial code, but for a large number of nodes, or for (b), you
probably need to redesign: partition your data and keep data transfers
minimized, confined to nearby nodes, and overlapped with calculation (see
the sketch below).
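
As a toy illustration of the partitioning step (a hypothetical sketch,
not HyperChem code), this C fragment block-partitions the pair
interactions of an N-atom system across P nodes and reports each node's
share of the work; on a real machine each node would compute only its
own block and message-pass boundary data to its neighbours:

#include <stdio.h>

#define NATOMS 1000
#define NODES  8

int main(void)
{
    int node, i;

    for (node = 0; node < NODES; node++) {
        int lo = node * NATOMS / NODES;        /* first atom owned  */
        int hi = (node + 1) * NATOMS / NODES;  /* one past the last */
        long pairs = 0;

        /* pair (i,j), j < i, is charged to the node owning atom i */
        for (i = lo; i < hi; i++)
            pairs += i;

        printf("node %d: atoms %4d-%4d, %6ld pair interactions\n",
               node, lo, hi - 1, pairs);
    }
    return 0;
}

Note that this naive block partition is badly load-imbalanced (the last
node gets many more pairs than the first), which is exactly the sort of
issue a redesign for case (b) has to address.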

Exploiting parallelism with networked computers is a good idea that
was first demonstrated in the 80s.  Bob Whiteside, now at Hypercube,
gained some acclaim by beating a Cray with a bunch of otherwise-idle
networked Suns while he was at Sandia.  As well as accomplishing (a),
networked computers can be used effectively for (b), though most people
seem more excited by the potential for speedup.

Cheers,

Graham
------------
Graham Hurst
Hypercube Inc, 7-419 Phillip St, Waterloo, Ont, Canada N2L 3X2 (519)725-4040
internet: hurst@hyper.com

From mattson@ganymede.sca.com  Wed Nov  4 04:07:41 1992
Date: Wed, 4 Nov 92 09:07:41 EST
From: mattson@ganymede.sca.com (Timothy G. Mattson)
To: chemistry@ccl.net, jas@medinah.atc.ucarb.com
Subject: Re: ...parallel computing



Workstation clusters are without a doubt going to be the
dominant force in high performance computing for the next
decade (if not longer).  They will not completely replace
the fancy MPP boxes, which will always be needed for the
most exotic grand challenge problems, but let's face it --
many of us don't do grand challenge problems.  A speedup of
5 or 6 on 8 workstations is plenty.
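
For perspective, Amdahl's law S = 1/(f + (1-f)/p), with f the serial
fraction of the code and p the number of nodes, tells you what such a
speedup implies.  A small C sketch (purely illustrative):

#include <stdio.h>

/* Amdahl's law: speedup on p processors for serial fraction f */
static double amdahl(double f, int p)
{
    return 1.0 / (f + (1.0 - f) / p);
}

int main(void)
{
    int i;

    for (i = 0; i <= 10; i += 2)
        printf("serial fraction %2d%% -> speedup on 8 nodes: %.2f\n",
               i, amdahl(i / 100.0, 8));
    return 0;
}

On 8 nodes, a speedup of 5-6 corresponds to a code that is only about
4-8% serial -- already quite respectable.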

With that preamble, I would like to point out some work with
the Linda parallel computing environment and computational 
chemistry.  There are a number of molecular modeling projects 
either completed or in late stages of development using Linda.

Klaus Schulten's group at the Beckman Institute has used Linda
(and PVM) in their molecular dynamics code, MD.  This code
includes a parallel fast multipole algorithm and produces
really impressive results on the small (4-node) clusters I've
seen it run on (I will soon be running it on up to 16 RS/6000s).

Richard Judson at Sandia National Laboratory has created a
parallel version of the distance geometry code, DGEOM.  This
is an embarrassingly parallel code, with speedups of about
27 on 32 SPARCstation 1's.  We have achieved even better
results (though I don't have them before me) running DGEOM
on the PVS machine from IBM (and now a word from my sponsor...
Linda is portable from workstation clusters all the way up
to MPP machines, so I expended no additional effort in getting
the MPP DGEOM numbers).

Finally, there is MOPAC.  I have been working with Kim Baldridge
of San Diego to create a Linda version of MOPAC (she did the
hard part in parallelizing the code for the iPSC/860).  I hope
to have it done by the end of the year.  For now, all I have
done is the vibrational analysis portion of MOPAC (the easy
part).  For acetylcholine I have numbers (in seconds):

          iPSC/860       Sparc 1
Nodes   Linda    Nx/2    Network

  1      768.8   766.6
  2      393.3   392.3     846.8
  4      210.6   208.5     452.1
  8      117.5   120.8     253.8
 16       74.7   75.4

(sorry that I don't have the sequential times for the SPARC 1).
Note that the Linda and native iPSC message passing numbers are
very similar.  Linda gets a bum rap for inefficiency which is
not always deserved (yes, that's another word from my sponsor).
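
From those numbers the speedups and parallel efficiencies fall right
out; a small C sketch (with the times hardcoded from the Linda column
above):

#include <stdio.h>

int main(void)
{
    /* Linda times (seconds) on the iPSC/860, from the table above */
    const int    nodes[5] = { 1, 2, 4, 8, 16 };
    const double t[5]     = { 768.8, 393.3, 210.6, 117.5, 74.7 };
    int i;

    for (i = 0; i < 5; i++) {
        double speedup = t[0] / t[i];
        printf("%2d nodes: speedup %5.2f, efficiency %3.0f%%\n",
               nodes[i], speedup, 100.0 * speedup / nodes[i]);
    }
    return 0;
}

The vibrational analysis scales well out to 8 nodes (about 82%
efficiency) and starts to tail off at 16 (about 64%).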

Kim Baldridge has finished the other pathways through the code and
has submitted a paper to some journal for publication (I don't
remember which).  Not surprisingly, the other pathways through
MOPAC are not as parallel and therefore do not display speedups as
nice as those above.  You'll have to ask Kim for those numbers.

Finally, I am trying to keep up on all the parallel computational
chemistry efforts going on around the world.  If you are doing
parallel computational chemistry and haven't spoken to me 
yet, please drop me a line -- especially if you are working
in molecular dynamics or MOPAC.  

I will be at SuperComputing'92 and urge you to come by and talk
to me about parallel computational chemistry.  I will be at the
SCIENTIFIC Computing booth (and SGI and HP and IBM and ...) so
I will be easy to find.  I might also add that on Tuesday
afternoon there will be a workshop I organized with Cherri
Pancake of Oregon State University on "Mainstream tools for
parallel computing".  If you are thinking of entering the
parallel computing game, this workshop should be most
valuable.

As computational chemists,  we are fortunate to be working in 
a field with so many algorithms that map rather well onto 
parallel architectures (though for the really big MPP systems
we need good eigenproblem software, as someone else pointed out).

--Tim

--------------------------------------------------------------------
Timothy G. Mattson, Ph.D.

Research Scientist                  Director of Product Engineering
Yale Computer Science Department    Scientific Computing Assoc. Inc.
mattson@cs.yale.edu                 mattson@sca.com
(203) 432-1203                      (203) 777-7442
--------------------------------------------------------------------

From STRSSROS@ACFcluster.NYU.EDU  Wed Nov  4 06:28:38 1992
Date: 04 Nov 1992 10:28:38 -0400 (EDT)
From: Rosalyn Strauss <STRSSROS@ACFcluster.NYU.EDU>
Subject: AMBER Parameters
To: Chemistry@ccl.net


Dear Netters,
	Does anyone know of AMBER parameters for the ether linkage of
diphenyl ether? I am working on a DNA adduct which could be modelled after
diphenyl ether and need parameters for angles and dihedrals around the
ether.
		Any ideas or references?
You can send messages directly to me and I will summarize for the List.

					Thank you,
					Dr. Rosalyn Strauss
					Dept. of Biology
					New York University 
         Tel: (212) 998-8228     IN%"STRSSROS@ACFCluster.NYU.edu"

From mail Wed Nov  4 10:40:22 1992
From: hyper!slee (Thomas Slee)
Subject: Re: Conformational energies by Semiempirical methods.
To: chemistry@ccl.net
Date: 	Wed, 4 Nov 1992 10:28:24 -0500

In reply to my posting, Andy Holder wrote a defence of the
usefulness of semi-empirical computations.  And I agree with him.

I did not say, and certainly do not believe, that "Molecular 
Mechanics is better than Quantum Mechanics".  Such a statement has no 
meaning, of course, without specifying what for.  I was just suggesting
that we do have to be careful to ask "for what?"

		Tom Slee
-- 
Tom Slee
Hypercube, Inc., #7-419 Phillip St., Waterloo, Ont. N2L 3X2 
Internet:  slee@hyper.com		Tel. (519) 725-4040


From jkl@ccl.net  Wed Nov  4 07:18:26 1992
From: jkl@ccl.net (Jan Labanowski)
Date: Wed, 4 Nov 1992 12:18:26 -0500
To: chemistry@ccl.net
Subject: new perl script for SCHAKAL88 viewer


Dear Netters,
Thanks to Martin Schuetz from the Institute for Physical Chemistry of
the University of Berne (schuetz@iacrs2.unibe.ch) we have another
valuable addition to our library of perl scripts. The GCmodes2schakal.perl
script will extract the geometry and normal modes (or whatever they call
normal modes {:-)} ) from Gaussian90 or CADPAC4 frequency jobs and
prepare input for the popular molecular viewing program SCHAKAL88 by
Egbert Keller. The script determines by itself which type of output is
being processed.
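
The auto-detection idea is simple; a hypothetical C fragment (the marker
strings here are guesses for illustration, not the script's actual logic)
might look like this:

#include <stdio.h>
#include <string.h>

/* Guess whether a frequency-job output came from Gaussian or CADPAC
   by scanning the first few hundred lines for a program banner.    */
int main(int argc, char **argv)
{
    char line[256];
    FILE *fp;
    int n = 0;

    if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: %s output-file\n", argv[0]);
        return 1;
    }
    while (fgets(line, sizeof line, fp) != NULL && n++ < 500) {
        if (strstr(line, "Gaussian")) { puts("Gaussian output"); return 0; }
        if (strstr(line, "CADPAC"))   { puts("CADPAC output");   return 0; }
    }
    puts("unknown output type");
    return 1;
}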

You can get the file via e-mail by sending a message:
   send ./perl/vibmodes/GCmodes2schakal.perl from chemistry
to OSCPOST@ccl.net or OSCPOST@OHSTPY.bitnet

or you can retrieve it from anonymous ftp www.ccl.net [128.146.36.48] in
the directory pub/chemistry/perl/vibmodes 

Martin, thank you again for your contribution,

Jan
jkl@ccl.net


From CUNDARIT@MEMSTVX1.bitnet  Wed Nov  4 12:21:00 1992
Date: Wed, 4 Nov 92 17:21 CDT
From: CUNDARIT%MEMSTVX1.BITNET@OHSTVMA.ACS.OHIO-STATE.EDU
Subject: ECPs, TMs and energetic data
To: chemistry@ccl.net



        I think Jan Hrusak raises an interesting (and somewhat disturbing)
question: what is the reliability of ECP/valence basis set schemes for
energetic predictions?  I think we can agree that, as long as the correct
wavefunction and a suitably flexible basis set are used, geometric
predictions are quite good.  But what about energetic data?
        Recently, Walt Stevens and I finished deriving ECPs for the
lanthanides, and comparing numerical Dirac-Hartree-Fock calculations with
ECP calculations suggests that the biggest discrepancies come about due to
the valence basis set and not the ECP approximation.  This is somewhat
encouraging, since it is easier to augment the basis set than to rederive
the ECPs.  Preuss, Stoll and co. have published a paper, also dealing with
the lanthanides, where they look at the ordering of atomic states as a
function of ECP core size.  They conclude that one should probably use a
very small core size (I think it was either Ar or Kr) if one is interested
in reproducing this sort of data (at least this is my interpretation of
their conclusion).  For the TMs, it was essential to include ns and np to
reproduce the energy ordering of atomic states; I believe that all workers
in this area have reached this conclusion.
        Although ordering in these coordinatively unsaturated systems is
very important, given the bonanza of thermodynamic data emerging from
beam, FT-MS and FT-ICR techniques, another question is: what about
coordinatively saturated complexes?  For main group and organic systems,
the RHF geometry/MP2 energy scheme seems to be quite effective; is anyone
making a systematic study for TM systems?  Frenking, Gauss et al. have
shown the energy ordering of isomers in d0, six-coordinate complexes to
vary widely with correlation level.  However, we get good quantitative
agreement with activation barriers for d0 complexes (admittedly a best-case
scenario) using a simple RHF/MP2 scheme.  I should point out that in all
cases the experimental folks gave us the experimental numbers after we told
them the calculated values, which is somewhat satisfying!  I would be very
interested to hear from netters what their experience has been in
predicting enthalpic data for TM systems using various correlation schemes.

                                                Tom Cundari
                                                Assistant Professor
                                                Department of Chemistry
                                                Memphis State University
                                                Memphis, TN 38152
                                                phone:901-678-2629
                                                fax:901-MSU-EIGS

From roberto@medusa.boston.sgi.com  Wed Nov  4 14:01:17 1992
To: chemistry@ccl.net
Subject: Re: Musings about parallelism... 
Date: Wed, 04 Nov 92 19:01:17 EST
From: Roberto Gomperts <roberto@medusa.boston.sgi.com>


Your message dated: Tue, 03 Nov 92 21:29:35 EST
 > The recent flurry of posts about parallelism prompts my $0.02.  If
 > any of this has been said here before, I'm sorry.  (I wasn't a subscriber
 > when parallelism was a topic last year!)
 > 
 	I guess it is time to add my own $0.02 to this
	interesting thread.
	
 > Parallelism is not *that* new for computational chemistry, though I agree
 > that it is newer than vectorization.

	Yes, parallelism came after vectorization, although vectorization
	could easily be viewed as a special case of parallelism. It is
	just a matter of how you define it.
	I have often compared the difficulties of a broad acceptance of
	parallelism with the early days of vectorization: initially the
	conversion of code to effectively take advantage of a particular
	hardware architecture can be seen as an insurmountable obstacle.
	And, of course, you have the naive minds that think that a
	particular implementation of an algorithm will run well on any
	kind of machine. This always leads to frustration and the
	dismissal of interesting and good opportunities. These
	situations occurred before vector machines were popular and we
	have seen them again as parallel machines evolve. But, in the same way
	as software developers and other users got used to vector codes
	(either by conversion or by writing from scratch), we are
	already seeing more and more parallel codes. This very
	discussion thread is another indication of the growing
	acceptance and popularity of parallelism.
	It is remarkable that most of the original (shared memory) parallel
	computers had/have vector CPUs (Alliant, Convex, Cray). Even the loosely
	coupled model in Enrico's lab (which Graham describes below)
	had fast "pipe-lined" processors (again something close to a
	vector machine).

 > The first use of parallelism for quantum chemistry that I'm aware of was
 > in Enrico Clementi's lab at IBM in Kingston NY in the mid 80s (see
 > IJQC Symp 18, 601 (1984) and JPC 89, 4426 (85)).  When I started a postdoc
 > there in Jan 86, both IBMOL (later KGNMOL) and HONDO ran in parallel on the
 > LCAP systems.  Each LCAP had a serial IBM "master" and 10 FPS array processor
 > "slaves" that acted in a distributed memory fashion, though later
 > developments added shared memory.  The parallel HONDO 8 referred to in
 > an earlier post here probably descends from that version, parallelised by
 > Michel Dupuis.  Incidentally this is where Roberto Gomperts (hi!) first
 > learned about parallelism when developing KGNMOL.  Many other comp chem
 > programs were parallelized for LCAP in this lab too.
 > 
 
 	LCAP was a very interesting architecture. It was never meant to
	be a "true" MPP (i.e. 100's or 1000's of processors) and it did
	not have shared memory. The idea was to have a few reasonably
	powerful processors. Enrico used to say something like "it is
	better to have a cart pulled by 10 strong horses than by 1000
	chickens".
	It turns out that for many Monte Carlo and ab initio programs
	this model is very appropriate. It is not my intention to get into
	or start a "religious war" between the MIMD and SIMD sects.
	Given the right program and the right problem, both architectures
	can show their strengths!
	
 > In Jan 88 I joined Hypercube (developers of HyperChem), which had been
 > founded by Neil Ostlund to write computational chemistry software for
 > distributed memory MIMD computers.  Neil's philosphy was (and still is
 > I think) that "dusty deck" FORTRAN codes do not parallelize well, and
 > he sought to start from scratch with distributed memory MIMD parallelism
 > as one of the design criteria.  At that time he already had ab initio
 > and semi-empirical prototype codes running on the Intel iPSC.  I developed
 > a parallel implementation of the AMBER molecular mechanics potential on
 > the Intel iPSC/2 (written in C) and later in 1988 ported to a ring of
 > transputers.  These semi-empirical and molecular mechanics codes designed
 > for distributed memory MIMD live on as parts of HyperChem!  Once you've
 > written for a parallel machine it's easy to run on a serial machine like
 > the PC - just set the number of nodes to 1!  For the SGI version of
 > HyperChem, parallelism is exploited by simulating the message passing
 > of distributed memory MIMD on multi-processor Irises.  This may be the
 > only parallel SGI comp chem code *not* parallelized by Roberto! ;-)
 > 
 	I think that, putting aside philosophical opinions, the
	practical thing to do to bring parallelism "to the masses" is, in
	an initial stage, to try to convert existing (serial) programs to
	run in parallel with reasonable efficiency.
	This approach has several advantages, among others:
	  1. Usually it is not too hard to do.
	  2. As has been pointed out, users are often confronted with
	  the choice of speed vs. throughput. In this context it is
	  imperative that:
	     a. running on 1 processor is as simple as Graham pointed
	     out above: "just set the number of nodes to 1!"
	     b. there is no significant loss in efficiency for the
	     parallel code running on 1 processor with respect to the
	     serial code.
	     
	I am not implying at all that new parallel algorithms should not be
	developed and implemented. I am just saying that, while that is
	happening and while there is no consensus on what the "standard"
	or "converged" parallel architecture of the future is going to
	be, it would be a pity not to be able to take advantage of
	parallelism TODAY.
	
	I am sorry if what follows sounds like advertising; it is only
	intended as illustration. At SGI we are committed to doing just
	that: making parallelism available TODAY and NOW, in different
	flavors and forms, trying to stay away from what I called before
	"religious wars": use the correct approach for the correct
	algorithm applied to the correct problem. To truly bring this "to
	the masses" we work in collaboration with the commercial and
	academic software vendors.
	  
 > BTW HyperChem's implementation of the MOPAC methods *is* parallel for
 > distributed memory MIMD computers, but we haven't yet convinced Autodesk
 > to market such a version. :-(
 > 
 	I should add that SGI's implementation of Mopac (obtainable via
	QCPE) is also parallel. I must confess that it is not one of the
	best examples of an efficient parallel implementation starting
	from an existing serial code. But I think that any researcher
	would be more than happy if he/she can obtain a result more than
	2 times faster when using 3 processors than when using 1.
	
 > It's nice to see the growing interest in and acceptance of parallelism,
 > but somewhat frustrating that we've had to wait so long! In the meantime
 > we had to make a serial PC version of our software to pay the rent! ;-)
 > 
 	Why did it take so long? Well, I guess this is where the
	accusing finger points to hardware vendors and to some system
	software developers. The development of tools to either convert
	serial codes to run in parallel or to develop parallel
	algorithms from scratch has been lagging behind. Again, I am not
	saying that there are no tools out there (SGI certainly has
	a very neat and useful environment for parallel development)
	but that tool development has not kept pace with the developments
	in hardware, both SIMD and MIMD. It has been my experience at
	different hardware companies that manufacture parallel computers
	that the system software developers in these companies tend to
	target the naive user, i.e. the person who will just use this
	"wonderful and magic" compiler that will take your dusty deck
	and make it run N times faster on N processors!!! (Obviously
	marketing hype.) While these compilers/preprocessors will do a
	good job on "well-behaved" loops (I am talking here clearly about
	shared memory machines), they have a long way to go before they
	can efficiently and correctly tackle "real world" codes. My
	contention is that the focus of the tools developers should be
	the applications software developers. We need tools for expert
	or semi-expert users. I think that this is the right way to bring
	parallelism "to the masses" TODAY. And really, if you look at it,
	many of the users of the programs are not the ones who developed
	them, and while they might (should) have a basic understanding of
	the theoretical foundation of an algorithm or method, they have
	no interest in nor time for getting involved in the details of its
	implementation. Mind you, I am not talking about using a program for
	scientific research as a black box, but in practice people do
	not care how a program is vectorized as long as it doesn't throw
	your "CRAY money" away, or how it runs in parallel as long as it
	performs well when using more than 1 processor.
	
 > Someone (sorry I didn't keep the post) commended CDAN for its recent
 > articles on parallelism - in the late 80's they declined to have Neil write
 > an article on parallelism in computational chemistry because they said no
 > one was interested in parallelism!
 > 
 > Should you worry about porting or redesigning for distributed memory
 > MIMD? Only if you:
 >     (a) want a single calculation done faster
 > or
 >     (b) want to tackle a larger calculation.
 > For throughput you're better off running n serial jobs on n nodes (provided
 > the jobs fit!).  You can do (a) for at least smaller numbers of nodes by
 > porting a serial code, but for a large number of nodes or (b) you probably
 > need to redesign to partition your data and hopefully keep data transfers
 > minimized, to/from near nodes, and overlapped with calculation.
 > 
 	I would make the question more general and not restrict it to
	MIMD machines. As (I think it was) Joe Leonard pointed out in
	one of the first mailings of this thread, there are quite a few
	programs out there that are running in parallel on shared memory
	machines (and more are forthcoming!). In my opinion, multiprocessor
	shared memory machines offer a unique development environment
	to exploit the appropriate level of parallelism in the right
	place. Take, for example, the case of Gaussian 92. There, a mixed-model
	parallelism was used: a distributed memory model via the use of
	the "fork()" system call, combined with the allocation of
	shared memory regions to avoid all the intricacies of message
	passing algorithms. Fine-grain parallelism was also exploited at
	the loop level (the "magic" compiler) and via calls to
	(shared memory) parallel routines for linear algebra operations
	like matrix multiplies.
	In other cases, given the underlying algorithms of the currently
	available commercial MM and MD programs like Charmm, Discover,
	Sybyl, etc., the best parallel implementation is a shared memory
	one (sorry Graham!!). That is not to say that future
	developments would not make MIMD implementations of MM and MD
	codes efficient.
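
	As a minimal sketch of that fork()-plus-shared-memory pattern
	(using POSIX mmap() with MAP_ANONYMOUS for the shared region;
	an illustration of the idea, not how Gaussian 92 actually
	allocates its memory):

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define N     (1 << 20)   /* vector length              */
#define NPROC 4           /* number of worker processes */

int main(void)
{
    /* shared region, visible to the parent and all fork()ed children */
    double *v = mmap(NULL, N * sizeof *v, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    int p;

    if (v == MAP_FAILED) { perror("mmap"); return 1; }

    for (p = 0; p < NPROC; p++) {
        if (fork() == 0) {                   /* child: own slice only */
            long lo = (long)p * N / NPROC;
            long hi = (long)(p + 1) * N / NPROC;
            long i;
            for (i = lo; i < hi; i++)
                v[i] = (double)i * i;        /* stand-in for real work */
            _exit(0);
        }
    }
    for (p = 0; p < NPROC; p++)
        wait(NULL);                          /* join all the children */

    printf("v[N-1] = %g\n", v[N - 1]);
    return 0;
}

	No message passing is needed: the children communicate through
	the shared region, and the division of labor is just an index
	range per process.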
	
 > Exploiting parallelism with networked computers is a good idea that
 > was first demonstrated in the 80s.  Bob Whiteside, now at Hypercube,
 > gained some acclaim by beating a Cray with a bunch of otherwise-idle
 > networked Suns while he was at Sandia.  As well as accomplishing (a),
 > networked computers can be used effectively for (b), though most people
 > seem more excited by the potential for speedup.
 > 
 	I would generalize this point as well.
 > Cheers,
 > 
 > Graham
 > ------------
 > Graham Hurst
 > Hypercube Inc, 7-419 Phillip St, Waterloo, Ont, Canada N2L 3X2 (519)725-4040
 > internet: hurst@hyper.com
 > 

				-- Roberto


						Roberto Gomperts
						roberto@sgi.com
						phone: (508) 562 4800
						Fax:   (508) 562 4755





From lim@rani.chem.yale.edu  Wed Nov  4 14:52:20 1992
From: Dongchul Lim <lim@rani.chem.yale.edu>
Subject: Re: new perl script for SCHAKAL88 viewer
To: chemistry@ccl.net (Computational Chemistry)
Date: Wed, 4 Nov 92 19:52:20 EST



	Dear Netters,
	Thanks to Martin Schuetz from the Institute for Physical Chemistry of
	the University of Berne (schuetz@iacrs2.unibe.ch) we have another
	valuable addition to our library of perl scripts. The GCmodes2schakal.perl
	script will extract the geometry and normal modes (or whatever they call
	normal modes {:-)} ) from Gaussian90 or CADPAC4 frequency jobs and
	prepare input for the popular molecular viewing program SCHAKAL88 by
	Egbert Keller. The script determines by itself which type of output is being processed.

	or you can retrieve it from anonymous ftp www.ccl.net [128.146.36.48] in
	the directory pub/chemistry/perl/vibmodes 

	Martin, thank you again for your contribution,
	Jan
	jkl@ccl.net


Just curious: what is the "SCHAKAL88" program?
-DCL

* Dongchul Lim                   | Phone (203) 432-6288            *
* Dept. of Chemistry, Yale Univ. | Email: lim@rani.chem.yale.edu   *
* 225 Prospect Street            |            (130.132.25.65)      *
* New Haven, CT 06511            |                                 *


From IPMP500@INDYVAX.IUPUI.EDU  Wed Nov  4 17:12:04 1992
Date: 04 Nov 1992 22:12:04 -0500
From: "Michael A. Peterson" <IPMP500@INDYVAX.IUPUI.EDU>
Subject: Word Processing and Graphing on SGIs?
To: chemistry@ccl.net



Dear netters:

I am looking for graphing and word processing programs for our SGIs.
Does anyone have favorites?  We have a variety of SGIs ranging from a 4D70
to INDIGOs.  Such programs would save us a lot of hassle, I believe.

Since this will only affect owners of SGI machines (presumably), respond
directly to me (ipmp500@indyvax.iupui.edu), and, if the responses warrant
it, I will summarize for the net.

Thanks in advance!

Michael A. Peterson
Dept. of Chemistry
Indiana Univ.-Purdue Univ. @ Indianapolis (IUPUI)
Internet:  ipmp500@indyvax.iupui.edu		BITNET:  ipmp500@indyvax

