ECPs, transition metals, and parallel computing



 Hi,
         I hope the netters won't think I'm "hyping" GAMESS, but Theresa's
 e-mail has prompted me to inquire about something that has been percolating
 in my mind since Doug Smith's e-mail a while back on parallel computing.
         I vaguely recall that last year we had some discussion of parallel
 computational chemistry; if not, this could be the time.  We have been lucky
 to have access to the iPSC/860 at Oak Ridge through a collaboration between the
 Computational Chemistry Group at MSU and the Joint Institute of Computational
 Sciences located at U. T.-Knoxville.  The speed of the machine is enough to
 keep even impatient, untenured assistant professors from complaining.
         We have looked at both transition metal and lanthanide catalyst systems
 and compared identical jobs on the iPSC/860 versus the Cray Y-MP at the San
 Diego Supercomputer Center.  I don't have an example handy that entails using
 ECPs to calculate second derivatives numerically, but the rough conclusion is
 the same - "4 to 8 nodes give Cray-like speed!"
      The table below shows some very promising timings for two sample
 calculations.  One is a 44-basis-function RHF calculation of the nonlinear
 optical properties of water; the other is a geometry optimization of LuCl2H,
 a catalyst model, using effective core potentials.
 Timings in seconds:
                           Water              LuCl2H
   iPSC/860   1 node                          608.12
              2 nodes                         322.48
              4 nodes     1777.33             179.21
              8 nodes      958.37             112.76 <<<<<<<<
             16 nodes      516.19              79.77
             32 nodes      291.31              59.26
             64 nodes      197.82              51.41
   Cray Y-MP8/864                             144.22 <<<<<<<<
   DECstation 3100        7743.02
 For the two comparisons the times given are total CPU times.  Of particular
 interest to us is the comparison between the Cray Y-MP8/864 at the San
 Diego Supercomputer Center and the iPSC/860.
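 For anyone who wants to play with the numbers: the relative speedup
 S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n fall straight out of
 the LuCl2H column.  A quick sketch (plain Python, timings copied from the
 table above):

     # Speedup and parallel efficiency from the LuCl2H timings above.
     # T(1) is the 1-node iPSC/860 time; S(n) = T(1)/T(n), E(n) = S(n)/n.
     timings = {1: 608.12, 2: 322.48, 4: 179.21, 8: 112.76,
                16: 79.77, 32: 59.26, 64: 51.41}
     t1 = timings[1]
     for n, t in sorted(timings.items()):
         speedup = t1 / t
         print(f"{n:3d} nodes: S = {speedup:5.2f}, E = {speedup / n:4.2f}")

 The efficiency falls off past about 8 nodes for a job this small, which is
 the usual Amdahl's-law story: the serial fraction starts to dominate.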
 Several questions for discussion:
         1) What other parallel "goodies" are out there?  I thought I read
 somewhere that Gaussian has a parallel version out, or soon to be released.
 Ditto for HONDO.  What sorts of machines are these ported to?  I can imagine
 that a "hot" field like this would have new options almost daily.
         2) Are there parallel programs that can do MP2 and/or MCSCF?
 Perhaps other correlated wavefunctions?  Are folks working on these?
         3) Do any of these iPSC/860 "hypercubes" or related machines come with
 more than 8 MB/node?  From my point of view, this would be the advance that
 would make it feasible to look at realistic models of experimental systems.
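 (A back-of-the-envelope illustration of why memory per node matters,
 assuming for simplicity that the big square matrices - Fock, density,
 overlap - are replicated on every node in double precision; the basis set
 sizes are just illustrative:)

     # Rough memory for an N x N double-precision matrix (8 bytes/element).
     for n_basis in (100, 250, 500):
         mb = 8 * n_basis**2 / 1024**2
         print(f"N = {n_basis:3d} basis functions: {mb:5.2f} MB per matrix")

 A few such matrices plus integral buffers chew through 8 MB/node quickly,
 which is presumably why larger models end up memory-bound.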
                                  Tom Cundari and Henry Kurtz
                                  Computational Chemistry Group
                                  Department of Chemistry
                                  Memphis State University
                                  Memphis, TN   38152