It is now more than a year since we, master's students at the Descartes Center for History and Philosophy of Science and the Humanities at Utrecht University, started the Journal of Trial and Error: a platform that encourages researchers to publish and discuss failure. In other words, we enable scientists to publish results that did not meet their initial aims. We believe that communicating these results can benefit the scientific community as a whole, because it challenges the dominant understandings of ‘success’ and ‘failure.’ Our journal is not directed at experimental scientists alone: we also aim to invite ‘commentators’ of science—historians, anthropologists, sociologists, philosophers, and scientists themselves—to discuss ‘trial and error’ in science. This Shells and Pebbles post is specifically addressed to historians of science and knowledge. After a brief outline of the history of the project, we will introduce our manifesto. With this post, we hope to receive feedback and concrete suggestions on the format of the journal and the way we aim to facilitate reflection.
We started the project because, as junior historians and philosophers of science, we felt insecure. We are taught that pure science, based on the scientific method, doesn’t exist: ‘facts’ and ‘objectivity’ are problematic and historically contingent notions. After all, scientific knowledge can best be understood as constructed in social, political, cultural, and institutional contexts. This message inspired and intrigued us, but constructivism felt (overly) destructive in its implicit and sometimes explicit relativist take on present-day scientific practice. Yet historians and philosophers of science did not seem to turn their most important message—that science is done by humans—into a constructive dialogue with scientists themselves. Call it naïve, call it optimistic: we wanted to learn how we could help science, instead of only criticizing it.
Process: trial and error
With that in mind, we curiously attended an ‘Open Science’ colloquium in October 2018 at the Utrecht University academic hospital, all of us having been HPS master's students for just two months. As enthusiastic historians-and-philosophers-in-the-making, we wanted to learn about current problems in science so that we could solve them. The colloquium showed us that ‘Open Science’ is more than merely open access: it aims to reduce work pressure, to reinvent evaluation criteria, and to reflect critically on existing business models for disseminating knowledge. Moreover, ‘Open Science’ seemed to be a call for more honest, transparent, and altruistic science. The advocates for ‘Open Science’ (natural scientists, for the most part) present this movement as the only way to do proper science: communal, altruistic, and constructive.
In that regard, Open Science stands for a new attitude that could solve two major problems scientists are struggling with. The first is the ‘replication crisis’. Since scientists validate their results in terms of replicability, the present-day situation of unreplicable experiments is considered a serious problem. The ‘positive publication bias’ is the second problem targeted by the Open Science movement. When scientists are confronted with failed research, they have two options at hand: not publishing, or framing the results as productive by, for example, adding ad-hoc hypotheses in a potentially inadequate manner. The underlying fixation on quantity is widely considered a serious threat to scientific quality.
Having a combined background in chemistry, neuroscience, psychology, and history, we were surprised by the extent to which these problems seem to be caused by an oppressive self-image of a groundbreaking, problem-solving, and successful science that denies the existence of trial and error. Academic career structures, notions of ‘success’, and science communication are built on this quite powerful image. It seemed that pressing concerns like the publication bias and the replication crisis are caused by denying the social, cultural, political, and institutional factors, and the philosophical issues, that historians and philosophers of science investigate.
What if—we asked with a beer in our hands after the colloquium—we turned what historians and philosophers of science already know into practice? What if we brought the day-to-day realities of research back into publishing infrastructures and science communication? What if we encouraged scientists to publish failure? Our first step was to present the idea of a Journal of Failed Experiments at the Descartes Christmas Colloquium 2018. This proposal was part thought experiment, part practical joke.
To our disappointment, the response amongst the historians and philosophers present was rather mild. We were expecting critical comments but met with passive encouragement instead. Those present told us that it was a good idea, but nobody wanted to be an active part of it. Only four fellow students actually signed up for our newsletter.
To our surprise, the response from scientists was very different. After we presented our idea at several conferences, scientists approached us with frustrating experiences of their own and believed that the journal could really help to turn these experiences into constructive results. But they asked critical and cynical questions as well: who would publish in a journal about failure? How would publishing failure be productive and meaningful for the scientific community as a whole? Could anyone falsify negative results? These issues posed interesting questions from a variety of angles—from the more epistemological to the more practical or sociological.
Result: JOTE
In response to those questions, we changed our name to the Journal of Trial and Error, focusing not so much on the result as on the process that yielded it. In addition, we turned to our own community, those who reflect on science. What if historians, philosophers, sociologists, and anthropologists reflected on trial-and-error papers from their unique perspectives? We could critically reflect on the question ‘what went wrong?’ and even problematize that question itself. With our journal, we hope to turn our insecurity regarding the most effective way to contribute to science into a concrete solution to some of the problems the open science movement is addressing.
Our ideas and ambitions culminated in the manifesto that you find below. With this piece, we position our project within the current problems and solutions in scientific practice. In addition, we believe that publications on trial and error benefit from constructive reflection from historians, philosophers, anthropologists, and sociologists. We would like to invite you, the readers of Shells and Pebbles, to discuss with us how historians could contribute to our journal.
One year later, our initiative has received much attention: we have been invited to speak at several conferences across Europe, we are embedded in the Open Science movement, and—as a CrossRef member that can issue DOIs—we have established a digital infrastructure to host our journal. Several publishing houses have shown interest in our project. Promising manuscripts have been submitted. We know that the Journal of Trial and Error is here to stay. Our next step is to make constructive reflection happen. And for that reflection, we need you.
The Manifesto for Trial and Error in Science
We state that …
Trial and Error is the elementary process in Science by which knowledge is acquired. We differentiate between two types of scientific Trial and Error processes:
Methodological errors in a practical sense, driving improvement in the understanding and application of techniques. These errors are understood here in a broad sense: errors that go beyond the learning of the individual researcher and have an impact at the scale of the scientific community.
Conceptual flaws, arising from hypotheses being confronted with conflicting observations. When the initial hypotheses are inappropriate in the face of empirical evidence, scientists improve or reject theoretical frameworks by developing alternative theses aimed at increasing empirical adequacy. Not only hits (positive results), but also misses (negative results) are key to scientific progress.
We identify three core problems in today’s Science. Namely, …
… a public image of Science based on breakthrough discoveries, fascinating images, and clear results. This reputation comes at a cost. Scientists themselves, as well as philosophers, sociologists, and historians of science, have increasingly been highlighting the importance of science in the making. A more faithful picture of Science, one of practices and the fine-tuning of methodologies, seems to be at odds with the unrealistic public image of big-discovery Science.
… a gap between what is published and what is researched. We know that the positive publication bias pressures scientists to conceal methodological mistakes and discard research containing negative findings, threatening proper interpretation. In the face of failed research—outcomes of Science that do not meet the initial aims of the individual researchers—scientists have two options at hand: not publishing, or framing the results as productive by, for example, adding ad-hoc hypotheses in a potentially inadequate manner. This point is a consequence of the expectations of big-discovery Science and the publish-or-perish politics of this Science.
… a replication crisis. Since scientists validate their results in terms of replicability, the present-day situation of unreplicable experiments is a serious problem. Debate on this replication crisis has focused on the misuse of statistics by scientists, on methodological carelessness, or on theoretical inappropriateness. Only a few venues are attentive to the potential harm.
We stand in the context of …
… a call for democratizing Science. Society rightfully demands that results be made accessible to both the public and fellow scientists. Yet individual researchers and citizens have to pay large amounts, often of public money, to get access to mostly publicly funded research results. We need to rethink how Science is communicated through traditional publishing channels.
… a need for dialogue. We see a highly specialized academic community that tackles and reflects on social and intellectual challenges, frequently in unproductive ways. Because of the scattered organization of university departments and faculties, a constructive dialogue between the different tribes of cutting-edge Science is missing. In the context of the problems mentioned earlier—a harmful public image of Science, the publication bias, the replication crisis, and inaccessible Science—this lack of communication is all the more urgent to address. In the face of these multifaceted problems, we need useful solutions for the future of Science.
Therefore, we propose…
A journal serving as a platform for Trial and Error in Science. We want to publish (1) methodological errors which yield productive conclusions for the scientific community at large, and (2) conceptual errors in the form of negative results. In addition, our initiative aims to create a platform to openly talk about failure. That does not mean that we want to publish sloppy science. Rather, we believe that in talking about errors, scientists can learn about the dos and don’ts of their methods and concepts. Moreover, because negative results are highly informative, this would help alleviate the issue of publication bias and reframe the replication crisis. Young researchers are the hope for change in Science; we therefore take their work and ideas seriously. We aim to publish high-quality work by early-career scientists, peer-reviewed and edited by more senior scholars. For every published article, a subject specialist or a philosopher, historian, anthropologist, or sociologist of science will be invited to reflect, answering and problematizing the question “what went wrong?”. This combination aims to ensure novelty and quality in our journal.
So that …
… both society and members of the scientific community appreciate scientific endeavors in a more realistic and productive way. By establishing a forum for failure, we aim to do justice to the difficulties of empiricism. We, at the Journal of Trial and Error, acknowledge Science’s struggles in practice as crucial elements in the generation and dissemination of knowledge.
… the gap between what is researched and what is published will be closed. In response to the false dichotomy between publishing breakthroughs and publishing nothing, we aim to provide a platform for publishing mistakes without fear or shame. We claim that erring in an experiment is compatible with being a contributing scientist, if we rethink what failure means. We already know that Trial and Error is productive in scientific practice; we are now exploring what productive means in scientific publishing.
… the replication crisis is understood in its complexity. Our project aims to provide a common ground for reflection on one of the hallmarks of Science: replicability. Both empirical scientists and humanities scholars of science have long thought about what it means to show (in)comparable, (in)compatible, or (un)identical results. Our journal offers a place to exchange such varied views.
… users of scientific results get unrestricted access to relevant scientific content. In the age of Open Science, we share the movement’s optimism about freely sharing articles and results, and wish to extend it to sharing data, methods, and errors.
… methodological pluralism is concrete and constructively focused, thereby helping scholars to err in a productive way, so they can trial enriching solutions for social and intellectual challenges.
Martijn van der Meer
Max Bautista Perpinyà
Stefan Gaillard
Nayra Hammann
Jobke Visser
Davide Cavalieri
Sean Devine
Maura Burke
Featured image: Christiana Couceiro, ‘Scientist’ (Getty Images)