From ccl@www.ccl.net  Tue Apr 15 06:36:32 1997
From: "tmmec" <tmmec@fcindy5.ncifcrf.gov>
Message-Id: <9704150529.ZM7565@fcindy5.ncifcrf.gov>
Date: Tue, 15 Apr 1997 05:29:40 -0700
X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail)
To: tmmec@bilbo.edu.uy
Subject: TMMeC: First Number Available
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii




                                    TMMeC

                          First Issue Announcement


 It is with great pleasure that we announce that the first issue
 of TMMeC has finally been released. We hope you enjoy the new format
 and, as always, your comments and suggestions are welcome.

 Please visit our new pages at our main address:

                http://bilbo.edu.uy/tmmec

 Or in one of our mirrors:
 
                http://uqbar.ncifcrf.gov/tmmec
                http://tlon.ncifcrf.gov/tmmec



                                              The Editors

                             ----------------


From atiller@vishnu.msicam.co.uk  Tue Apr 15 10:36:22 1997
Message-Id: <2.2.32.19970415135113.0069c344@vishnu.msicam.co.uk>
X-Sender: andrew_tiller@vishnu.msicam.co.uk
X-Mailer: Windows Eudora Pro Version 2.2 (32)
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Tue, 15 Apr 1997 14:51:13 +0100
To: chemistry@www.ccl.net
From: Andrew Tiller <atiller@vishnu.msicam.co.uk>
Subject: New WebLab Viewer 2.0 Release
Cc: matth@msi.com, ajay@msi.com



Dear Colleague

This is just to let you know that the latest version of MSI's free chemical
visualization software, WebLab Viewer, is now available for download from
our Web and ftp sites.  Details can be found at the following URL:

        http://www.msi.com/weblab/


WebLab Viewer 2.0 has many new enhancements, including:

 - Sketch import from IsisDraw and ChemDraw with automatic
   2D->3D conversion
 - High resolution graphics for PowerMac as well as Windows/NT
 - In-place activation of WebLab Viewer in other applications (e.g.
   Word, Excel, PowerPoint) now extended to the PowerMac platform
 - Selectable / displayable amino acid residues
 - Hydrogen bond and bump monitor display
 - Color surfaces by property (e.g. charge)
 - Improved crystal and symmetry display styles
 - Side-by-side stereo and full screen views
 - Add / remove hydrogens

 ... and much more!

WebLab Viewer 2.0 runs on Windows 95, NT and PowerMacintosh platforms.  We
hope to announce an SGI Unix version in the near future.  

We hope you enjoy using WebLab Viewer 2.0, and would be glad to receive your
comments.

Sincerely





Andy Tiller - Director, Market Development
Matt Hahn - Director, Desktop Development


*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
* Andrew Tiller, PhD. Director, Market Development                      *
*                     Molecular Simulations Inc                         *
*                     The Quorum, Barnwell Road, Cambridge, CB5 8RE, UK *
*                     Tel: +44 1223 413 300      Fax: +44 1223 413 301  *
*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*


From John_Beckerle@quickmail.clemson.edu  Tue Apr 15 11:59:06 1997
Message-ID: <n1351019155.99623@quickmail.clemson.edu>
Date: 15 Apr 1997 11:11:34 -0500
From: "John Beckerle" <John_Beckerle@quickmail.clemson.edu>
Subject: Summary- "Best" Multiprocessor
To: "List CCL" <CHEMISTRY@www.ccl.net>,
        "Georg Ostertag" <ostertag@biochem.mpg.de>,
        "Mr HG Kruger" <KRUGER@che.und.ac.za>,
        "Park, Tae-Yun" <tp@elptrs7.rug.ac.be>, "Richard Walsh" <rbw@msc.edu>,
        ShiYi.Yue@arcm.ca.astra.com, szilagyi@indy.mars.vein.hu
Cc: "John Beckerle" <beckerj@CLEMSON.EDU>,
        "Gary Berger" <gary.berger@ces.clemson.edu>,
        "Rick Jarvis" <jpjrv@math.clemson.edu>,
        "James Leylek" <jleylek@CLEMSON.EDU>,
        "Wayne Madison" <wayne@cs.clemson.edu>, "Ed Page" <epage@CLEMSON.EDU>,
        "Chris Duckenfield" <Chris_Duckenfield@quickmail.clemson.edu>
X-Mailer: Mail*Link SMTP-QM 3.0.2


Here is the belated summary of the responses to my query about the "best"
multiprocessor platform for a broad range of computational chemistry
applications.  The executive summary is that under real world circumstances,
the respondents were happiest with the offerings from SGI, i.e. the
PowerChallenge.  Thanks to all who responded.  I have included in the summary
some of the give and take I had with the responders.  

My original question:

> We are seriously considering investing in a multiprocessor compute server.
> The candidates include the IBM SP2, SGI Origin2000, Sun Ultra HPC, DEC
> Alphaserver and other similar machines.  We are probably looking at something
> on the order of 8 processors and 2GB of memory, although expandability and
> upgradability are important.  The system is supposed to serve a variety of
> engineering and scientific high performance computing needs, not just
> computational chemistry.
> 
> I would like to poll the collective experience of CCL as to what is the best
> platform, and even more importantly, which platforms should be avoided for
> computational chemistry applications.  I am interested in hearing about real
> experience with software availability, incompatibilities, and performance
> surprises (including effective use of parallelism), both good and bad.  We are
> interested in the full range of computational chemistry applications including
> SCF and MD. 
> 
> I would like to stress that I am looking for good and bad experiences with
> effectively using any of these multiprocessor machines and the roadblocks
> others have encountered.  I am not so interested in relatively minor
> differences in performance of the individual CPUs, benchmarks, etc.  I want to
> know if they can be used effectively without miracles.
> 
> I am already familiar with the extensive evaluation of workstation performance
> by Martyn Guest, as well as Milan Hodoscek's comparison for parallel MD
> speedups.
> 
> As I don't subscribe to CCL at this time, please respond directly to:
> beckerj@clemson.edu
===============================================================
From Dr. Eric V. Patterson:
I have experience running a variety of computational chemistry programs 
(Gaussian, MOLCAS, ACES II, etc.) on both an IBM SP2 system and an SGI 
Power Challenge, the forerunner to the Origin2000.  Forgive me if I'm 
covering familiar ground, but the main difference between these machines 
is the way they handle parallel operation.  The SP2 is a series of 
individual nodes (essentially RS/6000 machine "guts") with individual disk 
and memory (although the disk can be shared).  The nodes communicate via a 
very fast network.  The SGI is a shared memory machine, where the processors 
access the same disks and the same memory, and each processor is just a 
different chip on a card.
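Patterson's distinction can be seen in a toy sketch. The code below is purely illustrative (it is not SP2 or SGI code): the first half sums an array the shared-memory way, with workers writing into one common structure, and the second half does it message-passing style, with each worker holding private data and explicitly sending its result to a collector.

```python
# Toy contrast of the two parallel models described above.
# Shared memory: workers write into one common 'partial' list,
# as processors on the Power Challenge share one address space.
# Message passing: workers keep private data and "send" results
# over a channel, as SP2 nodes do over their fast network.
import threading
import queue

data = list(range(1, 101))  # 1..100, so the total is 5050

# --- shared-memory style ---
partial = [0, 0]
def shared_worker(idx, chunk):
    partial[idx] = sum(chunk)  # result lands directly in shared state

threads = [threading.Thread(target=shared_worker,
                            args=(i, data[i * 50:(i + 1) * 50]))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
shared_total = sum(partial)

# --- message-passing style ---
inbox = queue.Queue()
def mp_worker(chunk):
    inbox.put(sum(chunk))  # explicit "send" to the collecting master

workers = [threading.Thread(target=mp_worker,
                            args=(data[i * 50:(i + 1) * 50],))
           for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
mp_total = inbox.get() + inbox.get()

print(shared_total, mp_total)  # 5050 5050
```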

I prefer the SGI.  The basic reason is that it is one single machine with 
several processors.  The processors have few problems communicating with 
each other, and it is easier to run a standard queueing system such as 
NQS.  With the SP2, you must run LoadLeveler as your queueing system, 
which presents a unique set of hardships.  Plus, in the case of Gaussian 
at least, you need different Fortran libraries (LINDA) in order for the 
compiled program to work with LoadLeveler.  This may be true for other 
programs as well.

As far as other factors go, both machines are good performers.  If a 
program, chemical or otherwise, has been well written, it will 
parallelize well on either machine.  Also, both are fairly easily 
expanded.  For the SP2, you add more nodes until your rack is full, 
then can buy a new rack.  For the SGI, you can add more processor 
cards until you have no more space.  Adding disk and memory is no more 
difficult.  So, my preference for the SGI is simply from an "ease of use" 
consideration.

I hope this helps.  

Beckerle responds:
> I know G94 can be set up to run on the SP2 with Linda, and that Gaussian
> supports that configuration.  What about other software packages?  

Patterson:
The other packages I'm familiar with on the SP2 are GAMESS and MOLCAS.  
GAMESS is coded to handle message passing in a different way (I don't 
recall which way), so it does not require Linda.  It performs fairly well 
in parallel up to about 8 nodes.  MOLCAS does not run in parallel.  I am not 
familiar enough with any other scientific applications to know what they are 
capable of.
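The "fairly well up to about 8 nodes" pattern is what Amdahl's law predicts whenever part of a job stays serial. The sketch below is not a GAMESS measurement; the 5% serial fraction is an assumed illustrative number.

```python
# Amdahl's law: with serial fraction s, the best possible speedup
# on n processors is 1 / (s + (1 - s) / n).
def speedup(serial_fraction, n):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With an assumed 5% serial fraction, returns diminish quickly:
# roughly 5.9x on 8 processors but only ~9.1x on 16.
for n in (1, 2, 4, 8, 16, 32):
    print(n, round(speedup(0.05, n), 2))
```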

Beckerle:
> In practice, have you found that there are straightforward instructions and/or
> support to get things running on the SP2?  Or, do you tend to just run most
> calculations on the PowerChallenge instead? Another good measure:  How much
> idle time do you have on the SP2 as compared to the PowerChallenge? 

Patterson:
Once LoadLeveler is running, using the SP2 is no more difficult than any 
other queue-based machine.  However, I feel that LoadLeveler is less 
stable.  I think the reason for this is that you are dealing, in essence, 
with a cluster of networked machines.  I have had jobs crash, or seen the 
whole queueing system die, due to interruptions in the network.  The 
common examples occur when the LoadLeveler master node becomes 
unreachable, or when the disk that contains the program code becomes 
unreachable.  These problems happen a little too often for my taste.  
Because of that, jobs on the SP2 are often run in serial mode, and the 
SP2 remains more idle than the SGI.  

Getting LoadLeveler running seems to be a stumbling block for many people.  
I have never had to set it up from scratch, but I have talked with the 
people who have.  It seems to be a chore.  I have set up a new SGI, and 
it was straightforward.  The OS on the SGI handles all of the parallel 
issues transparently, while the SP2 requires the operator to install 
message passing tools.  So, if you have ever set up a typical 
workstation, you can probably set up the Origin system with little 
effort.  Even if you haven't set up a workstation before, the SGI 
interface is a bit nicer than IBM's, so it tends to be more comfortable 
for people.

If you have other questions, or I haven't addressed certain key points, 
let me know.  I'm happy to share my experiences.

Eric
Dr. Eric V. Patterson		
Postdoctoral Associate 

Department of Chemistry		voice:	(612) 624-1535
University of Minnesota		FAX:	(612) 626-7541
207 Pleasant St. SE		email:	patter@pollux.chem.umn.edu
Minneapolis, MN 55455		WWW:	http://pollux.chem.umn.edu/~patter

================================================================
From Frank Herrmann:
I personally don't like the SP2 philosophy. You have to log in on every
single processor or use a queueing system to have access to the whole
machine. I prefer shared memory machines like the Convex/HP SPP1 or the
SGI Power Challenge. You don't have to worry about which processor you
are on, whether some of them are down, or which processor is already
busy, and so on - it's like being on a single-processor machine. The
Convex and SGI are also much faster, at least with my program (a C
program with very little memory consumption and low input/output
activity - not the typical Gaussian job profile). In particular, the SGI
Power Challenge with R10000 processors is more than 5 times faster
than the SP2. But maybe there are faster SP2s, because the R10000
machine is brand new ...

---------------------------------------------------------------------
           Frank Herrmann, Computer Scientist, PhD Student            
   Institute of Parallel and Distributed High-Performance Systems    
                   (IPVR) University of Stuttgart                     
                      Breitwiesenstrasse 20-22                        
                    D-70565 Stuttgart  (Germany)                      
           Tel: (49) 711-7816-358, FAX: (49) 711-7816-250            
          email: Frank.Herrmann@informatik.uni-stuttgart.de         
http://www.informatik.uni-stuttgart.de/ipvr/bv/personen/herrmann.html

=================================================================
From Dr. Jack Miller:
We have had very good luck and very good performance from multiprocessor
SGI systems, starting back with the old 340's, then with the Challenge series,
and we are just preparing to buy one of the new Origin systems. They have been
real workhorses and were the best bang for the buck that ran our software
requirements at the time we went with SGI six years ago.

Six years ago IBM could win on a single job benchmark, but as soon as we
went to multiple jobs SGI won hands down --- the OS was very much more
efficient at swapping jobs to share the multiple processors, whereas the IBM
thrashed about in a multi-job environment. We used our own programs that
our theoreticians wrote for a Cray, and various packages such as Gaussian
that we knew we would have in our job mix. The results on a real job mix
were very different from what the raw performance figures based on SPECmarks
etc. would have suggested.

Jack Martin Miller
Professor of Chemistry
Adjunct Professor of Computer Science
Brock University,
St. Catharines, Ontario, Canada, L2S 3A1.

Phone (905) 688 5550, ext 3402
FAX   (905) 682 9020
e-mail jmiller@sandcastle.cosc.brocku.ca
http://chemiris.labs.brocku.ca/~chemweb/faculty/miller/
===================================================================
From Rick Venable:
Several groups have projects well underway to assemble their own parallel
systems using "off the shelf" components from the PC marketplace:

dual processor PentiumPro w. motherboard
128 MB RAM, 2 GB disk
multiport 100baseT fast ethernet
case, power supply, keyboard, monitor, display adapter
Linux
etc.

The commodity status of PC components and recent performance gains mean
one can shop around a bit and assemble a dedicated parallel system for
ca. $20,000 that outperforms commercial multiprocessor machines costing 5
to 10 times more.  Don't buy a service contract, just some extra parts. 
Of course, this approach does require one or two more technical
individuals on the scene, but it also represents a radical
price/performance breakthrough.  A group here at NIH is developing one such
system, and will eventually have a WWW page with a shopping list and a
recipe.  Milan's parallel CHARMM timings should now include results for
one- and two-processor PentiumPro machines.
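The price/performance claim is easy to sanity-check with arithmetic. In the sketch below, the $20,000 figure comes from the text; the 5x cost multiplier and the assumption that the commercial machine is twice as fast overall are hypothetical placeholders, not benchmarks.

```python
# Back-of-the-envelope performance-per-dollar comparison.
cluster_cost = 20_000                 # self-assembled cluster (from the post)
commercial_cost = 5 * cluster_cost    # "5 to 10 times more"; take the low end

# Generously assume the commercial machine is twice as fast overall.
cluster_perf, commercial_perf = 1.0, 2.0

cluster_ratio = cluster_perf / cluster_cost
commercial_ratio = commercial_perf / commercial_cost

# Even then the cluster wins on performance per dollar by 2.5x.
print(cluster_ratio / commercial_ratio)  # 2.5
```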

At NIH, they call the project LoBoS (Lots of Boxes on Shelves); the name
is also an homage to early innovators in this area, who coined the term
"Beowulf" to describe the general concept.  For more information, contact
the project leader, Eric Billings at billings@helix.nih.gov

On 5 Apr 1997, John Beckerle wrote:
> But in this case I don't want to turn the hardware and software into the
> research project.  This system is supposed to support a variety of users and
> applications from different departments.  If you consider the real cost of the
> expertise required to set up and maintain a system and to port software to it,
> and debug the system stuff, and rewrite the code to get better parallelism,
> and ....  Well it adds up to a lot of personnel cost and time.   Before too
> long, you will be able to pick up the ready to run parallelized software for
> whatever you want to do from the same place as the list of components for the
> hardware.  But I would still rather pay an expert to set it all up.  

I don't think the expertise required will be much beyond that needed by
any competent Unix sysadmin, especially someone who can add memory or
replace a disk, and handle TCP/IP configuration.  You should be able to
leverage the results of groups like the one at NIH so that it is not a
research project.  With regard to performance tuning, in my experience
that's often necessary for any change of platform, especially
multiprocessor systems, and in some cases a change of OS.  For any
parallel codes that use widely available parallel libraries such as PVM, I
expect very little effort would be required.
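The "very little effort" point is that such libraries reduce the parallel part to farming independent tasks out and collecting results. A minimal sketch of that style, using Python's standard library as a stand-in for PVM (`energy_term` is a made-up placeholder, not a real chemistry routine):

```python
# Task farming: a master maps independent work items over a worker
# pool and gathers the results, the same pattern PVM/MPI codes use.
from concurrent.futures import ThreadPoolExecutor

def energy_term(i):
    # placeholder for one independent unit of work
    return i * i

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(energy_term, [1, 2, 3, 4]))

print(results)  # [1, 4, 9, 16]
```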

If your need for a multiprocessor system is more to support lots of
single-threaded jobs, and/or you have mostly code which is not explicitly
parallelized and you require a compiler which will multi-thread
compute-intensive inner loops, then I'd advise against the self-assembled
parallel PentiumPro approach.  For more dedicated use, e.g. only computational
chemistry with packages such as CHARMM and/or GAMESS, however, the low
cost will give groups who adopt this approach a major advantage over their
scientific rivals in terms of computational power. 

Rick Venable                  =====\     |=|    "Eschew Obfuscation"
FDA/CBER Biophysics Lab       |____/     |=|
Bethesda, MD  U.S.A.          |   \    / |=|  ( Not an official statement or
rvenable@deimos.cber.nih.gov  |    \  /  |=|    position of the FDA; for that,
http://nmr1.cber.nih.gov/           \/   |=|    see   http://www.fda.gov  )
======================================================================



From ccl@www.ccl.net  Tue Apr 15 12:36:24 1997
Date: Tue, 15 Apr 97 19:26:57 +0200
From: Herbert Homeier t4720 <herbert.homeier@chemie.uni-regensburg.de>
Message-Id: <9704151726.AA13198@rchs1.chemie.uni-regensburg.de>
To: CHEMISTRY@ccl.net, chem-comp@mailbase.ac.uk,
        molecular-dynamics-news@mailbase.ac.uk,
        spectroscopy-group@mailbase.ac.uk, phocet-l@uva.nl
Subject: 1997, 09: (de) Elektronen- und Vibrationsübergänge in Metallkomplexen - Theorie und optische Spektren
Reply-To: joachim@theochem.uni-duesseldorf.de



Dear colleague, 

The following announcement may be of interest to you. It is also available online at
URL: http://www.chemie.uni-regensburg.de/pub/elmau3/elmau/elmau.html
I apologize if you receive this message several times, as it has been sent to
several mailing lists.

Yours sincerely

Herbert Homeier
--------------------------------------------------------------
Priv.-Doz. Dr. Herbert H. H. Homeier
Institut fuer Physikalische und Theoretische Chemie
Universitaet Regensburg, D-93040 Regensburg, Germany
Phone: +49-941-943 4720  FAX: +49-941-943 4719/+49-941-943 2305
email: herbert.homeier@na-net.ornl.gov
WWW: http://www.chemie.uni-regensburg.de/~hoh05008
----------------------------------------------------------------------


           Electronic and Vibrational Transitions in Metal Complexes
                      - Theory and Optical Spectra -
                Third Workshop: 28 September to 2 October 1997
                        Schloß Elmau, Upper Bavaria

      Prof. Dr. Hartmut Yersin        Priv.-Doz. Dr. Joachim Degen
      Universität Regensburg          Heinrich-Heine-
      Institut für Physikalische      Universität Düsseldorf
      und Theoretische Chemie         Institut für Theoretische Chemie
      D-93040 Regensburg              D-40225 Düsseldorf
      Fax: 0941 943 4488              Fax: 0211 81 13466
      Phone: 0941 943 4464            Phone: 0211 81 13208

Metal complexes are increasingly being discussed in connection with
photochemical and photophysical applications. Examples include photoresist
technology in chip manufacturing, optical switches and memories, the
photochemical harvesting of solar energy, new photovoltaic systems,
antitumor therapy, supramolecular systems, as well as heterogeneous
catalysis and high-temperature superconductivity.

Before broad technological use is possible, however, essential questions
remain to be resolved, concerning in particular the properties of the
electronic states involved, since these largely determine, for example, the
photochemical behavior. Recently, however, methods of spectroscopy, quantum
theory, and numerical techniques have begun to yield insights on whose basis
a targeted preparation of substances with the desired properties becomes
possible. Such approaches have so far not received the attention they
deserve.

The first two workshops - in 1993 with 25 and in 1994 with over 30
participants, with strong international attendance - proved so successful
that, in light of the latest developments on both the theoretical and the
experimental side, a further meeting is planned for 1997. Its aim is to
create a scientific forum in which the questions raised above can be
discussed. In particular, this forum is intended to offer opportunities for
individual contact between internationally recognized scientists from
Germany and abroad and diploma and doctoral students. At large conferences
this is hardly achievable.

Schloß Elmau lies very quietly at an altitude of 1000 m at the foot of the
Wettersteinspitze, about 20 km from Garmisch-Partenkirchen. The choice of
venue will undoubtedly contribute to a congenial atmosphere.

The conference fee is DM 80, or DM 40 for students. Accommodation with full
board costs DM 165 per person per night in a double room, or DM 195 in a
single room. Diploma and doctoral students receive a subsidy of DM 50 per
night from the Freundeskreis der Elmau e.V.

Note: Participants from foreign countries can probably obtain some financial
support.

The number of (active) participants is restricted to 30-40 persons.

                                  Program

 Sunday, 28 Sept. 1997:                      Arrival (from 4 p.m.); 6:30 p.m. dinner, welcome evening
 Monday, 29 Sept. to Wednesday, 1 Oct. 1997: Lectures and poster presentations
 Thursday, 2 Oct. 1997:                      Lectures, lunch, departure

Please send the enclosed registration form as soon as possible, and no later
than 30 June 1997 (deadline), to one of the addresses given:

      Prof. Dr. Hartmut Yersin        Priv.-Doz. Dr. Joachim Degen
      Universität Regensburg          Heinrich-Heine-
      Institut für Physikalische      Universität Düsseldorf
      und Theoretische Chemie         Institut für Theoretische Chemie
      D-93040 Regensburg              D-40225 Düsseldorf
      Fax: 0941 943 4488              Fax: 0211 81 13466
      Phone: 0941 943 4464            Phone: 0211 81 13208

Please enclose with your registration an abstract (one page maximum) in
English. Please ensure good technical quality.



Registration

Registration for the third workshop at Schloß Elmau, 28 September to 2 October 1997

Registration deadline: 30 June 1997.

The number of (active) participants is restricted to 30-40 persons.

First name, surname

Address

Arrival on:

Departure on:

By (train/car):

Single/double room (if sharing, with whom):

Title:

Poster / short talk

Date, signature







-------------------------------- end of message ------------------------------------

