From mathieu@ripault.cea.fr  Fri Aug  7 04:57:17 1998
Received: from nenuphar.saclay.cea.fr (nenuphar.saclay.cea.fr [132.166.192.7])
        by www.ccl.net (8.8.3/8.8.6/OSC/CCL 1.0) with ESMTP id EAA24751
        Fri, 7 Aug 1998 04:57:14 -0400 (EDT)
Received: from muguet.saclay.cea.fr (muguet.saclay.cea.fr [132.166.192.6]) 
	by nenuphar.saclay.cea.fr (8.8.8/CEAnet-relay-4.1.1) with ESMTP id KAA23293
	for <chemistry@www.ccl.net>; Fri, 7 Aug 1998 10:57:38 +0200 (MET DST)
Received: from cerbere.limeil.cea.fr  (cerbere.bruyeres.cea.fr [132.165.76.1]) by muguet.saclay.cea.fr
       (8.8.8/CEAnet-relay-4.1.1) with SMTP id KAA05020
        for <chemistry@www.ccl.net>; Fri, 7 Aug 1998 10:56:35 +0200 (MET DST)
Received: by cerbere.limeil.cea.fr (CEANET-1.1+)
	id AA13451; Fri, 7 Aug 1998 10:57:12 +0200
Received: from indre.ripault.cea.fr(132.165.32.1) by cerbere via smap 
	id sma013449; Fri Aug  7 10:57:05 1998
Received: from ripault.cea.fr (manse.ripault.cea.fr [132.165.32.9]) by ripault.cea.fr. (8.8.7/8.7.3) with ESMTP id KAA05999 for <chemistry@www.ccl.net>; Fri, 7 Aug 1998 10:55:34 +0200
Received: from manse.ripault.cea.fr (localhost [127.0.0.1])
	by ripault.cea.fr (8.8.7/8.8.7) with SMTP id JAA00939
	for <chemistry@www.ccl.net>; Fri, 7 Aug 1998 09:57:34 +0200
Sender: mathieu@ripault.cea.fr
Message-Id: <35CAB36D.79040C47@ripault.cea.fr>
Date: Fri, 07 Aug 1998 07:57:33 +0000
From: Didier MATHIEU <mathieu@ripault.cea.fr>
Organization: CEA - Le Ripault
X-Mailer: Mozilla 3.01 (X11; I; Linux 2.0.32 i586)
Mime-Version: 1.0
To: chemistry@www.ccl.net
Subject: XMol for Linux ?
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Dear all,
According to CCL's archives, the XMol graphics program does not appear
to be available for Linux... Does anybody know whether it can be made to run
under Linux, or know of another program for displaying atomic trajectories?
Thank you
-- 
Didier MATHIEU
CEA - Le Ripault, BP 16
37260 Monts (France)
Tel. 33(0)2.47.34.41.85

From ib@oc30.uni-paderborn.de  Fri Aug  7 05:21:57 1998
Received: from oc30.uni-paderborn.de (oc30.uni-paderborn.de [131.234.240.90])
        by www.ccl.net (8.8.3/8.8.6/OSC/CCL 1.0) with SMTP id FAA24895
        Fri, 7 Aug 1998 05:21:55 -0400 (EDT)
Received: by oc30.uni-paderborn.de (951211.SGI.8.6.12.PATCH1502/940406.SGI.AUTO)
	 id LAA06652; Fri, 7 Aug 1998 11:19:52 +0200
From: "Ingo Brunberg" <ib@oc30.uni-paderborn.de>
Message-Id: <9808071119.ZM6650@oc30.uni-paderborn.de>
Date: Fri, 7 Aug 1998 11:19:47 -0600
In-Reply-To: "Douglas E. Stack" <destack@unomaha.edu>
        "CCL:G:Syntax for Cube option in Gaussian94" (Aug  6,  8:42am)
References: <01BDC116.1F6EAC70@zinc.unomaha.edu>
X-Mailer: Z-Mail (3.2.2 10apr95 MediaMail)
To: "Douglas E. Stack" <destack@unomaha.edu>,
        "'CCL'" <CHEMISTRY@www.ccl.net>
Subject: Re: CCL:G:Syntax for Cube option in Gaussian94
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii


On Aug 6,  8:42am, Douglas E. Stack wrote:
> Subject: CCL:G:Syntax for Cube option in Gaussian94
> I'm having problems using the cube properties keyword in Gaussian94.  I'd
> like to evaluate both the HOMO and the LUMO molecular orbitals.  The manual
> is not too clear on this keyword.  It states that the orbitals option should
> be followed by the cube filename, immediately followed by a list of
> orbitals.  I've tried the following with no luck.
>
> Cube=orbitals filename HOMO LUMO
> Cube=(orbitals,filename, HOMO,LUMO)
> Cube=orbitals filename
> HOMO LUMO
>
> Cube=orbitals
> filename HOMO LUMO
>
> Can anyone help me with the correct syntax for this keyword?  Thanks!
>
>
>
> Douglas E. Stack
> Assistant Professor
> Department of Chemistry
> University of Nebraska at Omaha
> Omaha, NE 68182-0109
> (402) 554-3647
> (402) 544-3888 (fax)
> destack@unomaha.edu

Hi,
you only have to specify cube=orbitals in the route section.  Then, at the end
of the input file, below your blank-line-terminated geometry specification, you
must give the filename and the orbitals to evaluate, like this:

filename
HOMO LUMO
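
(For reference, here is a minimal sketch of a complete input file laid out as
described above.  The molecule, basis set and cube file name are only
placeholders; the essential points are the Cube=Orbitals keyword in the route
section and the file name plus orbital list after the blank-line-terminated
geometry.)

%chk=water.chk
# HF/6-31G* Cube=Orbitals

HOMO and LUMO cubes for water (placeholder geometry)

0 1
O   0.000000   0.000000   0.119262
H   0.000000   0.763239  -0.477047
H   0.000000  -0.763239  -0.477047

water.cube
HOMO LUMO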

From bausch@chem.vill.edu  Fri Aug  7 14:54:59 1998
Received: from rs6chem.chem.vill.edu (rs6chem.chem.vill.edu [153.104.73.1])
        by www.ccl.net (8.8.3/8.8.6/OSC/CCL 1.0) with ESMTP id OAA29913
        Fri, 7 Aug 1998 14:54:56 -0400 (EDT)
Received: from [153.104.73.18] (bausch.chem.vill.edu [153.104.73.18])
	by rs6chem.chem.vill.edu (8.8.5/8.8.5) with ESMTP id OAA16748
	for <chemistry@www.ccl.net>; Fri, 7 Aug 1998 14:54:57 -0400
Message-Id: <l03130304b1f0fb84ddc1@[153.104.73.18]>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Fri, 7 Aug 1998 14:57:59 -0400
To: chemistry@www.ccl.net
From: "Joseph W. Bausch" <bausch@chem.vill.edu>
Subject: summary: Seeking experiences with DQS 


Last Monday I posted the following note to the CCL, and below it are the
responses that I received.  In a nutshell, I exported a /home area on one of
the boxes via NFS to all the rest, and built the G94 environment variables
into each user's .cshrc so that scratch files are written locally and the G94
code is read locally (similar to what Mike Frisch suggested to me in reply #3
below).
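
For completeness, a rough sketch of the NFS piece (host names here are
placeholders, and how the NFS daemons get restarted varies with the
distribution): on the box holding /home (call it node0), /etc/exports gets a
line such as

/home   node1(rw) node2(rw) node3(rw)

and, after restarting or HUPing the NFS daemons there, each of the other
boxes mounts it with

mount -t nfs node0:/home /home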

My post from last Monday:

Hi, I've recently put together a handful of Linux boxes for our department
to serve as compute machines, primarily for running Gaussian.  I have also
got the Distributed Queueing System (DQS) out of Florida State University up
and running, in the hope that it can help with managing job submissions.
I am hoping there are others in the CCL world who have done this as well
and can pass along their comments/experiences, in particular in the
following area:

- It was relatively straightforward to set up a complex so that people can
submit a G94 job to it and the job gets sent to the least busy computer in
our group of Linux machines.  However, what I don't know how to handle is
checkpoint files and scratch files.  If a person submits a job that goes to
computer A and runs successfully, then wants to run another job using the
same checkpoint file, how do you handle this with DQS?  If the new job gets
submitted to, say, computer B, the checkpoint file isn't there, but instead
is on computer A.

I guess what I really want to have is the following:  have all users just
have to log into one machine, submit their Gaussian jobs from there, then
somehow have their output and checkpoint files (when a job is complete) be
copied back onto that one computer, with the ability to submit another job
to the complex that might require the use of the checkpoint file.  Right
now I don't have the computers in the complex sharing any hard drives,
except for the files needed to run DQS, and this is done via NFS.

Any advice/comments would be appreciated.  I'll summarize if there is
sufficient interest.

Joe
bausch@chem.vill.edu
--------------------

#1:

From: gaussian.com!fox@lorentzian.com (Doug Fox)

  Joe,

   When I run Gaussian workshops, one of the easiest ways to organize
things is to designate one machine as the home server; that is, all accounts
have their home directory on an NFS-mounted partition.  By default the log
and checkpoint file are put in the current working directory, and with a
little effort you can get the scratch file name to point to local disk.  That
way, no matter where people sit, the log and checkpoint files end up in their
home directory.  It works better if this machine is either more powerful or
less loaded than the Gaussian nodes, but it does not need to be very powerful
for file serving alone.

#2:

From: Drake Diedrich <Drake.Diedrich@anu.edu.au>

   Enclosed is the README for the Debian DQS package.  It has some
suggestions towards the end about NFS automounters and such.
DQS works best if 'pwd' returns the same pathname on all machines in the
cluster.  Static NFS mounts and autofs are two ways to do this.

dqs for DEBIAN
----------------------

   The package installs itself as both a client and a server for cell
"Local".  No queues are created on this cell.  DQS generates many error and
warning messages.  Most of these (adding hosts, creating files) can be
ignored.
   To create and enable a queue, type
	qconf -aq
   This command will start up a vi on a standard queue configuration.
Edit the hostname and queuename, and whatever other parameters you wish.
Save and exit from vi, and qmod will attempt to add the queue.
	qmod -e <queue name>
Enables the just-created queue.
	qstat -f
will list the status of all queues in the cell.  If the cell is in an
UNKNOWN state, the qmaster and dqs_execd aren't speaking to each other.
Check your networking and hostname, make sure you can ping `hostname`.
Verify that hostname is in /etc/dqs/resolve_file.  Shutdown and restart dqs.
	/etc/init.d/dqs stop ; /etc/init.d/dqs start
Watch dqs_execd, as it sometimes survives a kill.
   If the qconf or qmod fails with an alarm timeout, try fiddling with the
ALARM* parameters in /etc/dqs/conf_file.  The current values aren't quite
right for most systems.  If you come up with good ones, let me know.

   Use qsub to submit a job.  For instance:
	qsub
	env
	^D
will run a job that prints out the environment in a batch job.  The
output will go to a set of files starting with STDIN in your home
directory.
   If you have more than one machine, you'll want to change the default
setup.  Choose one machine as the cell master, and whatever name you want
for the cell (a domain or hostname is conventional). Edit
/etc/dqs/resolve_file, and distribute a copy to each client.  Restart dqs on
all cell nodes.  A qmaster daemon should start on the cell master, and a
dqs_execd should start on every machine.  You'll need to add queues on the
master initially, in order for the qmaster to build up a trust list.
   DQS requires three tcp/ip ports,  610,611,612.  If these conflict with
existing port numbers edit /etc/services, or choose different existing names in
/etc/dqs/conf_file.  All machines in a cell must use the same port numbers.
DQS is not going to obtain official IANA port numbers, as DQS 4 is under
development and will use a different protocol.  Copying /etc/dqs/* and
/etc/services to all machines in your cell is the easiest way to keep
everything consistent.
   Sharing user home directories across the cell is recommended; it will
simplify writing DQS jobs.  Static NFS mounts may be the simplest.
Automounts from multiple hosts are what I use, but there are some
difficulties.  amd does not automatically mount files accessed in the /amd
directory, so the -cwd option often fails. autofs should resolve this
difficulty, but has only recently been released in a stable kernel (2.0.31).
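   For the static-mount route, each client would typically carry an
/etc/fstab line of roughly this form (a sketch only; substitute your own
file server for fs1):
	fs1:/home	/home	nfs	rw,hard,intr	0 0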
   There are serious security implications to using DQS.  Running dqs
on a cell is essentially the same as adding all machines in the cell
to /etc/hosts.equiv.  uid 1015 on one host will be trusted to be uid
1015 on all the others in the cell.  Make sure these really are the
same user.  Root on any machine can become any user they wish without
authentication.  Consider installing bios passwords.
   Use of NIS is recommended, as it isn't any less secure than DQS, and
should reduce the need to manage accounts on all machines in the cell.  You
may need to edit /etc/nsswitch.conf.  Putting your cell behind a firewall or
off the internet entirely is also a good idea.
   Moving the cell master: kill the qmaster.  Copy the
/var/spool/qmaster/hostname directory from the old master to the new master,
renaming the hostname component.  Edit the resolve_file on all nodes in the
cell.  Restart the qmaster.  Long running jobs can survive, but if a
dqs_execd dies, any jobs on that host will die.  Try not to damage them.
The job id restarts at #1, so don't expect low-numbered jobs to survive a
transition.
  Parallel jobs are supported.  For instance, to submit a 3 node pvmpov job:

qsub -par PVM -master `hostname` -l qty.eq.2,linux
	povray -i /usr/doc/povray/povscn/level2/skyvase.pov \
	+v1 +ft -x +a0.300 +r3 -q9 -mv2.0 -w640 -h480 -d +N
^D


Drake Diedrich <Drake.Diedrich@anu.edu.au>, Tue,  1 Jul 1997 17:31:56 +1000

--
Dr. Drake Diedrich, Research Officer - Computing, (02)6279-8302
John Curtin School of Medical Research, Australian National University 0200
Replies to other than Drake.Diedrich@anu.edu.au will be routed off-planet

#3

From: frisch@lorentzian.com (Mike Frisch)

One easy solution is to do the following:  Make each user have the same
login directory on every machine (i.e., pick one machine to hold the
non-scratch user files and NFS-mount its disk(s) on all the other machines).
Then every Gaussian job will write its output and checkpoint files there, and
look for its checkpoint file there, regardless of where it is run.

Have each user's .cshrc or .profile set up the name of the Gaussian scratch
directory based on the host name, so that each Gaussian job gets local disk
for scratch files on the machine on which it happens to run.  For example,
here we have a directory /host-name/s0/scratch (and maybe /host-name/s1/scratch,
etc., if there is more than one scratch disk) on each machine.  Thus if the
user files are on the node named M1 in directory /userfiles, then every other
machine will NFS-mount M1:/userfiles as /userfiles.  The machine named, say,
M27 would have a local filesystem /M27/s0/scratch, and the user's .cshrc file
(of which there will only be one, in M1's /userfiles filesystem) will have a
line such as

setenv GAUSS_SCRDIR /`hostname`/s0/scratch

which sets the right local scratch directory for whatever machine is being
used.

Similarly, you could either keep one copy of the Gaussian executables in the
NFS-mounted /userfiles system (slow, but economical of disk space), or have a
filesystem which includes the host name for a local copy of the executables
on each machine (probably the better choice unless the machines are very
short of disk).  Then each user's .cshrc or .login would look something like

setenv GAUSS_SCRDIR /`hostname`/s0/scratch
setenv g94root /`hostname`/s0/programs
source $g94root/g94/bsd/g94.login

(here, for example, the local copy of g94 for machine named M27 would be in
/M27/s0/programs/g94).
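
(A small sketch of the one-time, per-machine setup this implies: on the
machine named M27 one would create the host-named local areas, e.g.

mkdir -p /M27/s0/scratch /M27/s0/programs

before installing g94 there and pointing the .cshrc variables at them.)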

The basic idea is that every file system is either (a) NFS-mounted, with the
same name everywhere, or (b) local to a particular machine, with the name of
the machine as part of its path name, so that it can easily be identified.

Mike Frisch

#4

From: ross@cgl.ucsf.EDU

PS - I think the PBS scheduling pkg may do the file copying
automatically. See http://science.nas.nasa.gov/Software/PBS/

Bill Ross

#5

From: Matt Challacombe T-12 <mchalla@t12.lanl.gov>

Please summarize!

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Matt Challacombe
Los Alamos National Laboratory    http://www.t12.lanl.gov/~mchalla/
Theoretical Division              email: mchalla@t12.lanl.gov
Group T-12, Mail Stop B268        phone:   (505) 665-5905
Los Alamos, New Mexico  87545     fax:     (505) 665-3909
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

#6

From: Dmitry Khoroshun <khoroshun@terra.chem.emory.edu>

Hello!

We have an operational cluster, check out
http://terra.chem.emory.edu

Honestly, it was really strange to read your letter.  You seem to be
familiar with NFS, i.e. with mounting remote filesystems under Unix.
It is usual practice to have a checkpoint filesystem, say /checkpoint,
physically present on a server and mounted via NFS over the network.
This has nothing to do with DQS.
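
(A Gaussian job can then point its checkpoint file at that shared area
through its %chk line, e.g.

%chk=/checkpoint/myjob.chk

where the job name is just a placeholder.)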

Scratch files are of course another issue, and are usually placed in a
scratch directory on the computer that runs the job.

> I guess what I really want to have is the following:  have all users just
> have to log into one machine, submit their Gaussian jobs from there, then
> somehow have their output and checkpoint files (when a job is complete) be
> copied back onto that one computer, with the ability to submit another job

Again, the /home directory on a cluster is usually the same on all machines.

> now I don't have the computers in the complex sharing any hard drives,

That's too bad; change this aspect of the situation.  You have to share
the drives.

Sincerely,
Dmitry Khoroshun

#7

From: "Wilson, Bruce E" <bewilson@eastman.com>

I have this sort of thing set up here, using a somewhat heterogeneous system.
I'm using Codine, which is the commercial software derived from DQS, so
the basics are the same.  What I have done is as follows:

1) In the login process, there is a variable called hstnme that gets defined
from the hostname executable.  I then have something like
  if ( -e .cshrc_$hstnme ) then
    source .cshrc_$hstnme
  endif
in the .cshrc and .login files for users (we're exclusively csh here, but the
same sort of thing can be made to work for other shells).

2) The users all use the same disk area for their home directory on all
machines, with the above mechanism used to handle the machine-specific things
for different machines and different architectures.

3) Everybody uses /g94tmp as the location for Gaussian scratch space.  On
each machine /g94tmp is linked to an appropriate local scratch space area.
Each machine also has /usr/local set up as the Gaussian root, with things
handled appropriately so that machines of the same type all share a single
copy of the executable (using soft links and NFS volumes).

4) Submitted jobs are set up to run in the current working directory of the
submission.

5) The following script is used to submit Gaussian jobs to the queue system
(g94 is the resource that's associated with the Gaussian queue complex).
mancha 4% more subgauss
#!/bin/csh
#
# Bruce E. Wilson, Chemicals Research, Eastman Chemical Company
# Command file to submit a Gaussian job to Codine, with some niceties.
#
while ( $#argv )
  if ( -f $argv[1] ) then
    set runfile=$argv[1]:r.run
    echo "cd $cwd" > $runfile
    echo "setenv g94root /usr/local" >> $runfile
    echo "source $g94root/g94/bsd/g94.login" >> $runfile
    echo "setenv GAUSS_SCRDIR /g94tmp" >> $runfile
    echo "time $g94root/g94/g94 < $argv[1] > $argv[1]:r.log" >> $runfile
    echo "tail -15 $argv[1]:r.log" >> $runfile
    qsub -l g94 -N $argv[1]:r -j y -o $argv[1]:r.nqslog >&! $argv[1]:r.sublog $runfile
    cat $argv[1]:r.sublog
  else
    echo "^^^^^^^^^^^^^^^^^^^^^^^^^"
    echo "ERROR: $argv[1] does not appear to be a text file"
  endif
  shift
end
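
Typical usage would be something like

mancha 5% subgauss mywater.com

where mywater.com stands in for an actual Gaussian input file; the script
generates and submits mywater.run, and leaves the submission record in
mywater.sublog, the queue output in mywater.nqslog and the Gaussian output
in mywater.log.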

Hope this is useful

======================================================
Bruce E. Wilson (bewilson@eastman.com)
Eastman Chemical Company, Chemicals Research Division
Lincoln St, B-150B, Box 1972
Kingsport, TN  37662-5150, USA
Office: (423) 229-8886; FAX: (423) 229-4558

the end!

Joe
bausch@chem.vill.edu
--------------------




