Originally posted on the Open Science Collaboration by Denny Borsboom
This train won’t stop anytime soon.
That’s what I kept thinking during the two-day sessions in Charlottesville, where a diverse array of scientific stakeholders worked hard to reach agreement on new journal standards for open and transparent scientific reporting. The proposed standards are intended to specify practices for authors, reviewers, and editors to follow in order to achieve higher levels of openness than currently exist. The leading idea is that a journal, funding agency, or professional organization could take these standards off the shelf and adopt them in their policy. So that when, say, The Journal for Previously Hard To Get Data decides to move to a more open data practice, they don’t have to puzzle over how to implement this, but may instead just copy the data-sharing guideline out of the new standards and post it on their website.
The organizers1 of the sessions, which were presided over by Brian Nosek of the Center for Open Science, had approached methodologists, funding agencies, journal editors, and representatives of professional organizations to gather a broad set of perspectives on what open science means and how it should be institutionalized. As a result, the meeting felt almost like a political summit. It included high officials from professional organizations like the American Psychological Association (APA) and the Association for Psychological Science (APS), programme directors from the National Institutes of Health (NIH) and the National Science Foundation (NSF), editors of a wide variety of psychological, political, economic, and general science journals (including Science and Nature), and a loose collection of open science enthusiasts and methodologists (that would be me).
The organizers had placed their bets adventurously. When you have such a broad array of stakeholders, you’ve got a lot of different interests, which are not necessarily easy to align – it would have been much easier to accomplish this kind of task with a group of, say, progressive methodologists. But of course if the resulting standards succeed, and are immediately endorsed by important journals, funders, and professional organizations, then the impact is such that it might change the scientific landscape forever. The organizers were clearly going for the big fish. And although one can never be sure with this sort of thing, I think the fish went for the bait, even though it isn’t caught just yet.
Before the meeting, subcommittees had been tasked to come up with white papers and proposed standards on five topics: Open standards for data-sharing, reporting of analyses, reporting on research design, replication, and preregistration. During the meeting, these standards were discussed, revised, discussed, and revised again. I don’t want to go into the details of exactly what the standards will contain, as they are still being revised at this time, but I think the committees have succeeded in formulating a menu of standards that are digestible for both progressive and somewhat more conservative stakeholders. If so, in the near future we can expect broad changes to take place on these five topics.
For me, one of the most interesting features of the meeting was that it involved such diverse perspectives. For instance, when you talk about data-sharing, what exactly are the data you want people to share? In psychology, we’re typically just talking about a 100 kB spreadsheet, but what if the data involve terabytes of neural recordings? And what about anthropological research, in which the data may involve actual physical artifacts? The definition of data is an issue that seems trivial from a monodisciplinary perspective, but that might well explode in your face if it is transported to the interdisciplinary realm. Similarly, halfway through the meeting, the people involved in clinical trials turned out to have a totally different understanding of preregistration as compared to the psychologists in the room. It was fascinating to see how fields slice up their research reality differently, and how they wrestle with different issues under the same header (and with the same issue under different ones).
Despite these differences, however, I felt that we all did have a clear target on the horizon, and I am confident that the new standards will be endorsed by many, if not all, of the stakeholders present at the meeting. Of course, it is a great advantage that leading journals like Science and Nature endorse higher standards of openness, and that funders like NIH and NSF are moving too. I sometimes have to pinch myself to make sure I am not dreaming, but there is real evidence that, after all these years, actual change is finally taking place: see NIH’s endorsement of open science in preclinical research, the Royal Society’s new guidelines which make open data mandatory, and the joint editorial on these issues that was simultaneously published by Science and Nature earlier this week. This effectively means that we live in a new world already.
Perhaps the most important thing about this meeting was that it demonstrates how important openness and transparency have become. There are very few topics that could bring the agendas of so many leaders in the field into alignment, so that they can all get together at the same time and place to spend two days sweating over a set of journal standards. Today, open science is such a topic.
This train won’t stop anytime soon.
1The meeting was organized by Brian Nosek (Center for Open Science), Edward Miguel (University of California, Berkeley), Donald Green (Columbia University), and Marcia McNutt (Editor-in-Chief of Science Magazine); funded by the Laura and John Arnold Foundation; and sponsored by the Center for Open Science, the Berkeley Initiative for Transparency in the Social Sciences, and Science Magazine.