CCL: Science code manifesto



Having just reviewed an area of sterol/lipid interactions in 2010, it is clear to me that experimental design is key to obtaining consistent results in that field. It is no different in computational chemistry. It is important to understand how programs perform their calculations, so that discrepancies can be identified and explained. Authors should do this in the Introduction and Discussion sections of their manuscripts, or at least in review articles. However, in the rush to publish, and given the pressure to commercialize, these details are often overlooked. Having a manifesto is a nice idea, but it might be more appropriate to start by having code writers explain the calculation and the code employed in a supplementary appendix available for electronic download.

On Mon, Oct 17, 2011 at 12:37 PM, Dr W.R. Pitt wrp24*_*cam.ac.uk <owner-chemistry : ccl.net> wrote:

Sent to CCL by: "Dr W.R. Pitt" [wrp24!=!cam.ac.uk]
I have two points to add to the discussion:

1/ As far as I can tell, the Science Code Manifesto does not insist on the use of open-source software. In the discussion page it states: "Use of languages, libraries, systems, and tools which are widely available is strongly recommended." The data analysis script should be made available, but the manifesto seems to leave room for that script to be written using a commercial toolkit. I use commercial software as well as free software. Sometimes the former is the only way (within the time I have available) of doing the job I have in mind. From my point of view, it is better to do science using commercial software and publish it than not to do it at all.

2/ I do agree that the analysis scripts should be made public. However, I'm not sure it is necessary for a referee to be able to re-run the analysis script and check every last detail. The wider peer community would have the opportunity to do this. A natural consequence would be multiple versions of the results in a paper, derived from bug fixes made in response to reader input. Much more rigorous testing of work prior to publication should result, which can only be a good thing.

Relevant articles:
Peer reviewers swamped, so extras get the heave-ho http://www.timeshighereducation.co.uk/story.asp?storyCode=413390&sectioncode=26

Report from the Publishing Open Data Working Group meeting http://blogs.openaccesscentral.com/blogs/bmcblog/entry/report_from_the_publishing_open






On Oct 17 2011, Jordi Villà i Freixa jordi.villafreixa/agmail.com wrote:

Allow me to be blunt here.

In my opinion this is a rather absurd discussion. Having the ability to make code transparent for others to reuse and build on is obviously far better for science than having black boxes. The point is not to give the referee (who is, by definition, a busy person) the ability to rerun the calculations, but to give it to the scientific community as a whole. It may already be time to rein in the often over-critical referee system and move to a more open-science model. Code repositories have been extremely useful in the past, and I think being concerned about the ability of the scientific community to improve our results is far more positive than believing that we, and only we, can run a code and understand its results.

Having a foot in entrepreneurship myself, in addition to academia, I think that closing off or selling code has no future in a world with tens of thousands of developers ready to produce better implementations of each new idea. What makes sense is understanding the possible uses of each technology, not locking it away in a useless effort at overprotection.

So, the need for the manifesto is clear: it starts opening our eyes to a wider
conception of science, far from egotistic and closed-minded uses of it. The
system should follow, one day or another.



2011/10/17 Pedro Silva pedros^ufp.edu.pt <owner-chemistry(!)ccl.net>


Sent to CCL by: Pedro Silva [pedros(_)ufp.edu.pt]
2011/10/17 João Brandão jbrandao+/-ualg.pt <owner-chemistry^-^ccl.net>:
> Sorry, but I disagree.
>
> "but if there are real concerns about the work, it is necessary that > the scientific community can look seriously at the code to see exactly > what the program
is
> doing."
>
> In my opinion:
> If anyone has real concerns about any work, the best way for science
> development is to write (or use) a different code and compare the
> results.
>

The problem is: if the results do not agree with each other, how can
you adjudicate between the competing claims without access to the
code?







