From p.grootenhuis@organon.akzonobel.nl  Mon Mar  4 10:47:58 1996
Received: from gatekeeper.oss.akzonobel.nl  for p.grootenhuis@organon.akzonobel.nl
	by www.ccl.net (8.7.1/950822.1) id KAA08785; Mon, 4 Mar 1996 10:13:02 -0500 (EST)
Received: (from mail@localhost) by gatekeeper.oss.akzonobel.nl (8.6.12/8.6.12) id QAA12003 for <chemistry@www.ccl.net>; Mon, 4 Mar 1996 16:16:13 +0100
Received: from apou01.akzonobel.nl(145.49.90.59) by gatekeeper.oss.akzonobel.nl via smap (V1.3)
	id sma013477; Mon Mar  4 16:15:44 1996
Received: by apou01.akzonobel.nl (8.7.1/AKZONOBEL-WvdL/951013)
	id QAA01042; Mon, 4 Mar 1996 16:11:10 GMT
Received: (from groot@localhost) by organon.akzonobel.nl (950511.SGI.8.6.12.PATCH526/8.6.12) id QAA27556 for chemistry@www.ccl.net; Mon, 4 Mar 1996 16:12:52 GMT
From: Peter Grootenhuis <p.grootenhuis@organon.akzonobel.nl>
Message-Id: <199603041612.QAA27556@organon.akzonobel.nl>
Subject: MD of non-globular proteins
To: chemistry@www.ccl.net
Date: Mon, 4 Mar 1996 16:12:50 +0000 (WET)
Name: P.Grootenhuis
Organisation: NV Organon 
Phone: (+31)0412-661920
X-Mailer: ELM [version 2.4 PL24 ME8b]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit


CCL-ers,

I want to carry out molecular dynamics calculations on proteins with a
non-globular shape in order to assess qualitatively the effect of certain
mutations. However, in both gas-phase and aqueous simulations the system shows
a rather strong tendency to adopt conformations with more globular shapes. I
believe this effect is driven by the nonbonded interactions, which ultimately
favor compact, globular conformations.
Questions:
(1) Is anybody aware of (published) MD simulations of non-globular proteins?
(2) Does anybody have (preferably computationally inexpensive) suggestions on
	how to handle such a system?
I will summarize and post to the List.

Thanks very much,
Peter
Grootenhuis
 ______________________________________________________________________________
 Dr. Peter D.J. Grootenhuis       |
 N.V. Organon / CMC Dept. RK2337  | Phone  : +31-412-661920
 P.O. Box 20 / 5340 BH Oss        | Fax    : +31-412-662539
 The Netherlands                  | E-mail : p.grootenhuis@organon.akzonobel.nl
 _________________________________|____________________________________________

From owner-chemistry@ccl.net  Mon Mar  4 15:44:15 1996
Received: from bedrock.ccl.net  for owner-chemistry@ccl.net
	by www.ccl.net (8.7.1/950822.1) id OAA14383; Mon, 4 Mar 1996 14:44:39 -0500 (EST)
Received: from hrz-sun1.hrz.uni-kassel.de  for gdanitz@hrz.uni-kassel.de
	by bedrock.ccl.net (8.7.1/950822.1) id OAA18681; Mon, 4 Mar 1996 14:44:37 -0500 (EST)
Received: from hrz-serv1.hrz.uni-kassel.de by hrz-sun1.hrz.uni-kassel.de (4.1/SMI-4.1)
	id AA19722; Mon, 4 Mar 96 20:45:47 +0100
Received: by hrz-serv1.hrz.uni-kassel.de (AIX 3.2/UCB 5.64/HRZ-GhK/HRZ-SERV1/pm)
          id AA48801; Mon, 4 Mar 1996 20:44:28 +0100
From: gdanitz@hrz.uni-kassel.de (Robert Gdanitz)
Message-Id: <9603041944.AA48801@hrz-serv1.hrz.uni-kassel.de>
Subject: Number crunching in 16-byte precision.
To: chemistry@ccl.net (Computational Chemistry List)
Date: Mon, 4 Mar 1996 20:44:27 +0100 (MEZ)
X-Mailer: ELM [version 2.4 PL24 PGP3 *ALPHA*]
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
Content-Length: 3014

+-----+  Robert J. Gdanitz                      email: gdanitz@hrz.uni-kassel.de
| GhK |  Gesamthochschule Kassel                Tel.: +(49) 561-804-4556
|     |  Fachbereich 18 (Physik)                Fax:  +(49) 561-804-4006
+-----+  34109 Kassel

Dear Netters,

I think it is justified to say that most of us are quite satisfied with both
the precision (approx. 2.22E-16 in IEEE) and the performance of the usual
"working precision" arithmetic (8-byte, real*8, "double precision", which is,
if at all, not much slower than 4-byte, real*4, "single precision") in our
routine calculations, which in some cases may take considerable time. I have
heard that there was once a time when the working precision was real*4, or
something between real*4 and real*8, which frequently gave rise to problems
with numerical stability in quantum chemical calculations, while double
precision was prohibitively slow; but that is now history.
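(The quoted 2.22E-16 is the IEEE double-precision machine epsilon, the gap
between 1.0 and the next representable number. A minimal Python check, not
part of the original post:)

```python
import sys

# IEEE 754 double precision has a 53-bit significand, so the machine
# epsilon (spacing of representable numbers just above 1.0) is 2**-52.
eps = sys.float_info.epsilon
print(eps)                  # 2.220446049250313e-16
assert eps == 2.0 ** -52    # i.e. approx. 2.22E-16
```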

Anyway, there are still people (like me) who want to push things to the limit,
and whose calculations are such that even real*8 precision is not sufficient.
And the problems do not arise because I need, say, 16 or more figures of
accuracy in the final result, or because, among the algorithms available to
solve a certain problem, I for some reason choose the one most prone to
numerical instabilities.

What I actually want to do is to solve the Schroedinger equation of some small
chemical system (e.g. N2) to "chemical accuracy" (1 kcal/mol), which has not
yet been achieved in all (even "small") cases (e.g. the De of N2). One could
give e.g. MR-CI(SD) or CCSD[T] a try, but the convergence of De with respect
to the size of the basis set (n) is VERY slow (~ n^-3). Recently, I managed to
combine Kutzelnigg's and Klopper's "r12" method with MR-CI(SD) and MR-ACPF,
respectively, to get a convergence ~ n^-7, but there are still problems to be
solved...
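(The practical consequence of an ~ n^-3 tail is that one often extrapolates:
if the energy behaves as E(n) = E_cbs + A*n^-3, two finite-basis results
determine the basis-set limit. A sketch in Python; the energies below are
made-up illustration values, not data from this post:)

```python
def cbs_extrapolate(n1, e1, n2, e2):
    """Two-point basis-set extrapolation assuming E(n) = E_cbs + A * n**-3."""
    # Solve the two equations e1 = E_cbs + A*n1**-3, e2 = E_cbs + A*n2**-3.
    a = (e1 - e2) / (n1 ** -3 - n2 ** -3)
    return e1 - a * n1 ** -3

# Hypothetical correlation energies (hartree) for basis cardinal numbers 3, 4.
e_cbs = cbs_extrapolate(3, -0.400, 4, -0.420)
print(e_cbs)   # lies below both finite-basis values, as expected
```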

The r12 method uses explicitly correlated terms which, when rather diffuse
orbitals are joined by r12, are highly redundant. On the other hand, I cannot
easily remove these terms, because that may cause problems with e.g. proper
dissociation or size-extensivity. So I have to do some fiddling with the CI
ansatz as well as with the r12 ansatz, i.e. I have to remove e.g. high-energy
orbitals and r12 terms which are not necessary, and I always have to check
that I am indeed allowed to do so, which makes things somewhat tedious.

OK, now that I have these problems with numerical stability, why do I not
simply switch to real*16 arithmetic and be happy? Well, I found out that on
our only computer where I can dump some GByte of temporary data (an SGI Power
Challenge), real*16 arithmetic, when used e.g. for the dot product of two
vectors, is 300 times slower than real*8. Since I cannot afford to wait one
year to compute what takes one day in real*8, I am in serious trouble. On the
other hand, our IBMs, which slow down only by about a factor of 30 in the same
test, do not have enough temporary disk space available.
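(One middle ground sometimes suggested between real*8 and real*16 is
compensated (Kahan-Neumaier) summation: a second accumulator collects the
rounding error of each addition, recovering much of the extra accuracy at
near real*8 cost. An illustrative Python sketch, not the poster's actual
Fortran code:)

```python
def dot_naive(x, y):
    """Plain working-precision accumulation, as in an ordinary dot product."""
    s = 0.0
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

def dot_compensated(x, y):
    """Dot product with Kahan-Neumaier compensated summation.

    The accumulator c collects the low-order bits rounded away whenever a
    small product meets a large running sum (or vice versa), roughly
    doubling the effective precision for little extra cost.
    """
    s = 0.0
    c = 0.0
    for xi, yi in zip(x, y):
        p = xi * yi
        t = s + p
        if abs(s) >= abs(p):
            c += (s - t) + p   # low-order bits of p were lost in s + p
        else:
            c += (p - t) + s   # low-order bits of s were lost in s + p
        s = t
    return s + c

# Catastrophic-cancellation example: the exact answer is 1.0.
x = [1e16, 1.0, -1e16]
ones = [1.0, 1.0, 1.0]
print(dot_naive(x, ones))        # 0.0 -- the 1.0 is rounded away
print(dot_compensated(x, ones))  # 1.0
```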

Anyway, if there is anybody out there who has experience doing large-scale
computations in real*16, please tell me about the computers and algorithms
you used.

As usual, I will summarize to the net.

Thanks in advance,
Robert Gdanitz

From owner-chemistry@ccl.net  Mon Mar  4 17:32:20 1996
Received: from bedrock.ccl.net  for owner-chemistry@ccl.net
	by www.ccl.net (8.7.1/950822.1) id QAA16087; Mon, 4 Mar 1996 16:47:04 -0500 (EST)
Received: from theory.tc.cornell.edu  for jeanne@tc.cornell.edu
	by bedrock.ccl.net (8.7.1/950822.1) id QAA22374; Mon, 4 Mar 1996 16:47:00 -0500 (EST)
Received: from [128.84.181.75] (JEANNE.TC.CORNELL.EDU [128.84.181.75]) by theory.tc.cornell.edu (8.6.9/8.6.6) with SMTP id PAA89014; Mon, 4 Mar 1996 15:43:09 -0500
Date: Mon, 4 Mar 1996 15:43:09 -0500
Message-Id: <v0213050aad605e3cfbe6@[128.84.181.75]>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
To: jeanne@TC.Cornell.EDU
From: jeanne@TC.Cornell.EDU (Jeanne C. Butler)
Subject: Cornell Theory Center IBM SP Workshop
Cc: reynolds@TC.Cornell.EDU


Workshop on Parallel Programming on the IBM RS/6000 SP

Sunday April 28 - Thursday May 2, 1996

Cornell Theory Center
Cornell University
Ithaca, NY

The Cornell Theory Center (CTC), a nationally funded high performance
computing center, is offering four days of lecture and laboratory sessions
on parallel programming for the IBM SP. CTC's SP, which consists of 512
RISC processors connected by a high performance switch, is the largest of
its kind in the world. The SP programming model is distributed memory.

This workshop will offer sessions on the following topics:

-Introduction to CTC's IBM SP and Parallel Programming
-Introduction to Performance Issues on CTC's IBM SP
-Parallel Programming Using the Message Passing Interface (MPI) Library
-Parallel Programming Using High Performance FORTRAN (HPF)
  (presented through a case study of a tariff modeling program)

All topics will be presented using a mix of lectures and programming
exercises, giving participants hands-on experience with parallel
programming. A full agenda is appended to this announcement.

The "Introduction to CTC's IBM SP and Parallel Programming" sessions are
intended for beginning parallel programmers and users who are new to CTC.
They cover basic information for running on the IBM SP, as well as the
fundamentals of parallel processing and distributed memory computing.
Experienced CTC users may wish to skip these sessions. All others should
take these sessions, as the remainder of the workshop assumes participants
are familiar with this information.

The "Introduction to Performance Issues on CTC's IBM SP" sessions are
intended for both beginning and intermediate distributed memory
programmers. Participants will become familiar with the basic concepts and
tools for performance modeling, timing, profiling, and tracing on the IBM
SP. Basic strategies for improving the performance of a program on the IBM
SP will also be presented.

The workshop sessions on "Parallel Programming Using the Message Passing
Interface (MPI) Library" will focus on the MPI standard library, as
implemented by IBM. These sessions will start with the six fundamental MPI
procedure calls and will proceed to some more advanced issues, such as
derived data types and persistent communication. The final sessions will
present a case study of parallelizing a serial program using MPI.

The sessions on "Parallel Programming Using High Performance FORTRAN (HPF)"
will demonstrate parallelizing a serial program, using a case study
approach. The program for the case study will be the same as that used for
the MPI case study sessions. Performance analysis and optimization of a
parallel program will also be demonstrated.

(All trade names referenced are trademarks or registered trademarks of
their respective companies.)

REGISTRATION INFORMATION

To apply, please complete the registration form, found at:
http://www.tc.cornell.edu/Events/SP.Apr96.html

An ASCII text registration form is available for FTP from:
ftp.tc.cornell.edu

Change to the pub directory and get file Apr96.workshop

Send payment separately to arrive no later than March 25, 1996, to:
Jeanne Butler
Conference Assistant
427 Frank H. T. Rhodes Hall
Ithaca, NY  14853-3801

Fees:                    per day        full workshop
Academic/Gov.            $50            $200
Corporate                $225           $795
CPP*                     $175           $595

(* members of Corporate Partnership Program)

Make checks payable to Cornell University. Local applicants may charge the
registration fee to the appropriate Cornell University account number.
Registrations will not be acted upon until the payment arrives. Refunds
will be made to those applicants not accepted to the workshop. Refunds
cannot be made after an applicant is accepted.

Course attendance is limited. Preference will be given to those who have
already received a CTC allocation, to Corporate Partnership Program (CPP)
members, and to those who have an application pending. It might be
necessary to limit the number of attendees from any one research project.


OTHER TRAINING OPPORTUNITIES

Applicants should also be aware that CTC will be offering a Virtual
Workshop (VW) over the summer months. The VW offers World Wide Web versions
of most of the material covered in this workshop, and it includes
interactive logins on the CTC IBM SP for completion of exercises. CTC
staff members offer consulting support to VW participants through
e-mail and through the CTC MOO. More information on this offering will be
posted through CTC's Education Calendar of Events on the World Wide Web.

More information and a preview of the Virtual Workshop can be found at:
http://www.tc.cornell.edu/Edu/VW/

For general information on current and future CTC workshops, go to:
http://www.tc.cornell.edu/Edu/Upcoming/workshops.html

SPRING 1996 IBM SP WORKSHOP AGENDA

Sunday, April 28 (starting at 1:00 p.m.)

	Introduction to CTC IBM SP and Parallel Programming
		Registration and Logging In
		Introduction to the Workshop
		Introduction to the IBM SP at CTC
		Introduction to Parallel Processing

Monday, April 29

	Introduction to Parallel Programming, cont.
		Introduction to Distributed Memory Programming
		Parallel Program Design

	Introduction to Performance Issues on the CTC IBM SP
		Performance Basics
		Single-Processor Performance Tools
		Single-Processor Performance Considerations

Tuesday, April 30

	Parallel Programming with Message Passing Using MPI
		Introduction to MPI
		Basics of MPI Programming
		Point-to-Point Communication
		Collective Communication
		Advanced Topic: Derived Data Types

Wednesday, May 1

	Message Passing Using MPI, cont.
		Advanced Topic: Groups and Communicators
		Advanced Topic: Persistent Communication
		Advanced Topic: More on Collective Communication
		Parallel Processing Performance Tools

	Tariff Case Study: Introduction and MPI Version
		Part I: Problem Description and Serial Implementation
		Part II: MPI Implementation

	Tariff Case Study Using HPF
		Part III: HPF Introduction and Automatic Parallelization
		Part IV: Tuning

Thursday, May 2

	Tariff Case Study Using HPF, cont.
		Part V: F90 and HPF Directives
		Part VI: The HPF Data Model
		Part VII: Parallel Loops in HPF
		Part VIII: Summary and Conclusions (Concludes at 12:00 noon)

		(An open lab session will be offered during the afternoon.)


 ...end/sp.workshop.announce
