From owner-chemistry@ccl.net Tue Aug 7 01:08:01 2007
From: "Ross Walker ross-*-rosswalker.co.uk"
To: CCL
Subject: CCL: Intel quad core processors
Message-Id: <-34888-070807010534-20736-qBz0ys9oMKOkHPBYessrUg]*[server.ccl.net>
X-Original-From: "Ross Walker"
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="iso-8859-1"
Date: Mon, 6 Aug 2007 21:04:59 -0700
MIME-Version: 1.0

Sent to CCL by: "Ross Walker" [ross|-|rosswalker.co.uk]

Dear Andrew,

> Has anyone tried Intel quad core processors? Opinions?

The Intel quad-core chips are hopelessly short on memory bandwidth, so performance suffers. Given a cluster of these machines hooked up with InfiniBand, you get more throughput (for AMBER 9 PMEMD) running 96 CPUs as 16 nodes x 6 CPUs (i.e. leaving 2 CPUs idle on each machine) than you do running 128 CPUs as 16 nodes x 8 CPUs. At the supercomputer centers you get charged for all 128 CPUs anyway, so previously there was never any benefit to this, but now you actually get more ns per service unit at 16 x 6 than at 16 x 8.

Basically the chip design has been done on the cheap in order to get to "quad" core as fast as possible, to please the PR junkies, and Intel hasn't actually stopped to think about performance. Things are not likely to improve until Intel's CSI gets released. So in terms of "performance" I would write off the Clovertown chips.

NCSA's new system "Abe" has Clovertown chips in it as 2 x quad-core nodes (E5345 @ 2.33 GHz). Here are some numbers for AMBER 9 PMEMD running the FactorIX benchmark (91K atoms); note the other cores are left idle while running this benchmark:

Ncpu  Throughput (ps/day)  Speedup
  2         134.9            2.00
  4         260.3            3.86
  5         289.2            4.29
  6         328.0            4.86
  7         344.4            5.11
  8         366.4            5.43

So it looks like PMEMD runs out of steam at around 4 or 5 CPUs on this machine and the scaling falls off; however, this is entirely a function of the poor chip design.
For example, if you run an 8-processor job across two nodes hooked up with InfiniBand, so you are using 4 CPUs per 8-way node, you get:

 4x2        488.93            7.25

So going non-local, albeit leaving 4 cores per node idle, vastly improves the performance. This of course makes working out price/performance very difficult. The simplest metric is likely to compare the 8-way Clovertown boxes against 4-way Opterons.

On a single node running 8 x 1-CPU jobs I think you'd see the same sort of behaviour, i.e. as you go above 4 jobs the performance of each job would begin to drop. Basically these are essentially 4- or 5-CPU nodes with 3 extra little heating units attached, so Intel can make its contribution to global warming.

If you want to try things out for yourself, and you're in US academia, you can sign up for a TeraGrid-wide roaming development account of 30,000 SUs just by submitting an abstract; see: http://www.sdsc.edu/us/allocations/

This will let you run on all the NSF machines, so you can compare, say, TACC Lonestar (Xeon 5100 series 2.66 GHz dual-core x 2 = 4-way SMP) against NCSA Abe (E5345 @ 2.33 GHz Clovertown x 2 = 8-way SMP).

Short summary: if someone like Dell will sell you the dual quad-core machine for 5/8ths of the list price, then it is probably a good deal.

All the best
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross .. rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not be read every day, and should not be used for urgent or sensitive issues.
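[Editor's note: the speedup column in Ross's table can be reproduced directly from the quoted throughputs. A minimal Python sketch; the 67.45 ps/day single-core baseline is inferred from the 2-CPU row (134.9 / 2, given its stated speedup of 2.00) rather than taken from a measured 1-CPU run:]

```python
# Recompute speedup and parallel efficiency from the FactorIX
# throughputs quoted above (ps/day, AMBER 9 PMEMD on NCSA Abe).
throughput = {2: 134.9, 4: 260.3, 5: 289.2, 6: 328.0, 7: 344.4, 8: 366.4}

# Single-core baseline inferred from the 2-CPU row (speedup 2.00).
baseline = throughput[2] / 2.0  # 67.45 ps/day

for ncpu in sorted(throughput):
    speedup = throughput[ncpu] / baseline
    efficiency = speedup / ncpu
    print(f"{ncpu} cpus: speedup {speedup:.2f}, efficiency {efficiency:.0%}")

# The two-node 4x2 run (488.93 ps/day) on the same 8 cores:
print(f"4x2: speedup {488.93 / baseline:.2f}")  # 7.25, vs 5.43 packed on one node
```

[By this arithmetic, per-core efficiency on a single node falls from roughly 96% at 4 cores to below 70% at 8, which is the scaling fall-off Ross attributes to memory bandwidth.]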
From owner-chemistry@ccl.net Tue Aug 7 01:42:01 2007
From: "Syed Tarique Moin tarisyed]![yahoo.com"
To: CCL
Subject: CCL:G: Manual and tutorials for gaussian 98
Message-Id: <-34889-070804101230-23146-3u75LiEG5MHT0i7ESU46DA===server.ccl.net>
X-Original-From: "Syed Tarique Moin"
Date: Sat, 4 Aug 2007 10:12:26 -0400

Sent to CCL by: "Syed Tarique Moin" [tarisyed%yahoo.com]

Hello,

Can anyone send me the PDF of the Gaussian 98 manual, as well as a link to tutorials for Gaussian 98?

Regards

From owner-chemistry@ccl.net Tue Aug 7 10:53:00 2007
From: "Shobe, David David.Shobe**sud-chemie.com"
To: CCL
Subject: CCL:G: Manual and tutorials for gaussian 98
Message-Id: <-34890-070807104602-12581-HpAarxX1s+5x3Snd3ApjBg.:.server.ccl.net>
X-Original-From: "Shobe, David"
Content-class: urn:content-classes:message
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"
Date: Tue, 7 Aug 2007 16:45:16 +0200
MIME-Version: 1.0

Sent to CCL by: "Shobe, David" [David.Shobe- -sud-chemie.com]

Syed,

The G98 manual is downloadable at the last hyperlink on this page: http://www.gaussian.com/tech_top_level.htm

I imagine you can buy "Exploring Chemistry with Electronic Structure Methods" (if that's what you meant by the tutorial) from an online bookstore.

Regards,
--David Shobe

-----Original Message-----
> From: owner-chemistry]_[ccl.net [mailto:owner-chemistry]_[ccl.net]
Sent: Saturday, August 04, 2007 10:12 AM
To: Shobe, David
Subject: CCL:G: Manual and tutorials for gaussian 98

Sent to CCL by: "Syed Tarique Moin" [tarisyed%yahoo.com]
Hello, Can anyone send me the PDF of the Gaussian 98 manual, as well as a link to tutorials for Gaussian 98? Regards

This e-mail message may contain confidential and / or privileged information.
If you are not an addressee or otherwise authorized to receive this message, you should not use, copy, disclose or take any action based on this e-mail or any information contained in the message. If you have received this material in error, please advise the sender immediately by reply e-mail and delete this message. Thank you.

From owner-chemistry@ccl.net Tue Aug 7 12:29:01 2007
From: "Telhat Ozdogan telhatoz,omu.edu.tr"
To: CCL
Subject: CCL:G: Help on using Gaussian 2003
Message-Id: <-34891-070806093210-19360-Z5C0ryyueHrFojM1tx8Jrg^server.ccl.net>
X-Original-From: "Telhat Ozdogan"
Date: Mon, 6 Aug 2007 09:32:03 -0400

Sent to CCL by: "Telhat Ozdogan" [telhatoz#omu.edu.tr]

Can anybody tell me whether a molecule can be separated into parts for a calculation in Gaussian 2003?

From owner-chemistry@ccl.net Tue Aug 7 13:20:00 2007
From: "Christoph Weber weber+/-scripps.edu"
To: CCL
Subject: CCL:G: Intel quad core processors
Message-Id: <-34892-070806204236-22815-mv3ZBuBP7WDoYcWrkCQ2MA,+,server.ccl.net>
X-Original-From: Christoph Weber
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=windows-1252; format=flowed
Date: Mon, 06 Aug 2007 16:59:08 -0700
MIME-Version: 1.0

Sent to CCL by: Christoph Weber [weber(!)scripps.edu]

Very well put, Rick! We came to the same conclusions, for the same reasons. Personally, I can say that the Clovertown CPUs yielded our best Gaussian benchmarks ever.

Christoph

Rick Venable venabler*_*nhlbi.nih.gov wrote:
> I don't have numbers, and may not be able to provide them, but a couple of
> scientists in our lab ran some benchmarks with both MD (CHARMM) and ab
> initio codes (Q-Chem, GAMESS). The results led to choosing the Intel
> 'Clovertown' processors for our new set of cluster nodes over dual-core
> Opteron. I was told the QM codes in particular performed well, which
> influenced the choice. Price/performance and available hardware
> supporting a high-bandwidth interconnect (InfiniBand in this case) were
> factors in the decision as well.
> The AMD Barcelona processor was not yet available, and we had to meet
> budget deadlines ...
>
> --
> Rick Venable 29/500
> Membrane Biophysics Section
> NIH/NHLBI Lab. of Computational Biology
> Bethesda, MD 20892-8014 U.S.A.
> (301) 496-1905 venabler AT nhlbi*nih*gov

--
| Dr. Christoph Weber             Sen. Applications Specialist
| Research Computing, TPC21       858-784-9869 (phone)
| The Scripps Research Institute  858-784-9301 (FAX)
| La Jolla CA 92037-1027          weber at scripps dot edu
| http://www.scripps.edu/~weber/

Anything that is produced by evolution is bound to be a bit of a mess. -- Sidney Brenner

From owner-chemistry@ccl.net Tue Aug 7 15:46:01 2007
From: "Alex A. Granovsky gran]=[classic.chem.msu.su"
To: CCL
Subject: CCL: Intel quad core processors
Message-Id: <-34893-070807153650-20267-klLzgQ8t/R86nwJL4UpC3w+/-server.ccl.net>
X-Original-From: "Alex A. Granovsky"
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="ISO-8859-1"
Date: Tue, 7 Aug 2007 22:24:48 +0400
MIME-Version: 1.0

Sent to CCL by: "Alex A. Granovsky" [gran|a|classic.chem.msu.su]

Hi,

while the memory bandwidth problem is indeed an open issue, it is not of such critical importance for well-written applications. For PC GAMESS, we have benchmark results for different types of workloads on different hardware, including Intel's four-core Kentsfield- and Clovertown-based systems, summarized in the "Performance" section of the PC GAMESS homepage at MSU (http://classic.chem.msu.su/gran/gamess). These results can be compared with those for AMD's processors. We also have interesting data for Tigerton-based systems (four quad-core Xeons) but cannot publish them before the official processor launch.

Best regards,
Alex Granovsky

----- Original Message -----
> From: "Ross Walker ross-*-rosswalker.co.uk" To: "Granovsky, Alex, A.
" Sent: Tuesday, August 07, 2007 8:04 AM
Subject: CCL: Intel quad core processors

> [Ross Walker's "Intel quad core processors" message, quoted in full; trimmed]

From owner-chemistry@ccl.net Tue Aug 7 16:24:01 2007
From: "Dipesh Risal drisal+/-accelrys.com"
To: CCL
Subject: CCL: New Science from Accelrys at the 2007 ACS Fall Meeting in Boston
Message-Id: <-34894-070807155733-26487-UeiMiEo8EMuEkxIrnAwoQA]-[server.ccl.net>
X-Original-From: Dipesh Risal
Content-Type: text/plain; charset="US-ASCII"
Date: Tue, 7 Aug 2007 12:57:05 -0700
MIME-Version: 1.0

Sent to CCL by: Dipesh Risal [drisal:-:accelrys.com]
Dear Colleagues,

Please join us over lunch at the ACS Meeting in Boston as we present new science from Accelrys, including a rational approach to the treatment of receptor flexibility during docking. Stop by the Accelrys Booth (Booth #819-821) to continue the conversation.

New Solutions for Receptor-Flexible Docking and vHTS
Instructor(s): Dr. Dipesh Risal
Date: Wednesday, August 22, 12:00 - 2:30 PM
Room: BCEC, room 102A

Pre-register at http://acsboston.expoplanner.com/plworkshopregister.wcs?entryid=300017

Full list of Talks, Posters and Workshops: http://www.accelrys.com/events/conferences/acs/fall07.html

Sincerely,

Dipesh Risal, Ph.D.
Product Manager, Life Sciences
Accelrys, Inc.
10188 Telesis Court, Suite 100
San Diego, CA 92121, U.S.A.
Tel: +1 (858) 799 5224, Cell: +1 (858) 414 2702
Fax: +1 (858) 799 5100
email: drisal|accelrys.com
http://www.accelrys.com