Open access isn’t as open as you think, especially when there are corporate interests involved
Cambridge computer scientists have established a new gold standard for open research, aiming to make scientific results more robust and reliable
A group of Cambridge computer scientists have set a new gold standard for openness and reproducibility in research by sharing more than 200GB of data and 20,000 lines of code behind their latest results – an unprecedented degree of openness for a peer-reviewed publication. The researchers hope that this standard will be adopted by other fields, increasing the reliability of research results, especially for publicly funded work.
The researchers are presenting their results at a talk today (4 May) at the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI) in Oakland, California.
In recent years there’s been a great deal of discussion about so-called ‘open access’ publications – the idea that research publications, particularly those funded by public money, should be made publicly available.
Computer science has embraced open access more than many disciplines, with some publishers licensing publications so that authors can also deposit them in open archives. However, as more and more corporations publish their research in academic journals, and as academics find themselves in a ‘publish or perish’ culture, the reliability of research results has come into question.
“Open access isn’t as open as you think, especially when there are corporate interests involved,” said Matthew Grosvenor, a PhD student from the University’s Computer Laboratory, and the paper’s lead author. “Due to commercial sensitivities, corporations are reluctant to make their code and data sets available when they publish in peer-reviewed journals. But without the code or data sets, the results are unverifiable – we can’t know whether we’re recreating the same experiment when we try to reproduce it.”
Beyond computer science, a number of high-profile incidents of errors, fraud or misconduct have called quality standards in research into question. This has thrown the issue of reproducibility – that a result can be reliably repeated given the same conditions – into the spotlight.
“If a result cannot be reliably repeated, then how can we trust it?” said Grosvenor. “If you try to reproduce other people’s work from the paper alone, you often end up with different numbers. Unless you have access to everything, it’s useless to call a piece of research open source. It’s either open source or it’s not – you can’t open source just a little bit.”
With their most recent publication, Grosvenor and his colleagues have gone several steps beyond typical open access standards – setting a new gold standard for open and reproducible research. All of the experimental figures and tables in the award-winning final version of their paper, which describes a new method of making data centres more efficient, are clickable, linking to the data and code behind them.