Open science refers to a set of principles and practices that promote transparency in science. The core idea is to make it easy for others to access, verify, and build on scientific research, for example, by reanalyzing data, critiquing methods, running replication studies, or identifying errors or sources of bias. Examples of open science include preregistration; sharing of data, research materials, and analysis scripts; disclosure of funding and conflicts of interest; and open access to research articles. Proponents usually acknowledge that transparency needs to be weighed against other ethical priorities such as participant privacy—a key guiding principle is that research should be as “open as possible and as closed as necessary.”
While open science is a relatively modern term, transparency is a longstanding scientific principle (Wootton, 2016). During the early Scientific Revolution, visionaries like Francis Bacon and Robert Boyle criticized the clandestine research practices of alchemists and advocated for greater transparency and collaboration, including practices like communal witnessing, in which experiments were performed in front of an audience. The principles of transparency and independent verification are embodied in the motto of one of the oldest scientific institutions, the Royal Society: nullius in verba (take no one’s word for it).
As science has expanded and evolved, transparency has often been neglected, with occasional exceptions. For example, in medicine, concerns about publication bias (Simes, 1986) spurred advocacy for the preregistration of clinical trials, a practice that is now a legal requirement in many jurisdictions. Open science received limited attention in the cognitive sciences until the 2010s, when mounting concerns about research quality led to calls for greater transparency (Ioannidis et al., 2014; Munafò et al., 2017). This renewed focus on open science has been supported by new online infrastructure, such as preprint servers, study registries, and repositories for storing data, materials, and analysis scripts. Initial bottom-up advocacy for open science from community groups such as the Peer Reviewers Openness Initiative and the Society for the Improvement of Psychological Science has been reinforced by top-down policy initiatives from major scientific organizations (Nosek et al., 2015). In 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) identified open science as a global priority.
Preregistration refers to archiving a research plan (e.g., aims/hypotheses, methods, and analyses) in a public registry, typically before any data have been collected. Preregistration is intended to reduce the risk of bias by encouraging researchers to make decisions independently of the data (Hardwicke & Wagenmakers, 2023). When research decisions, such as how to define outliers, are made with knowledge of the data, those decisions can be skewed in favor of the researchers' preferred outcome, creating a risk of bias. A public preregistration also enables readers to identify which decisions were preplanned (decreasing the risk of bias) and which were not (increasing the risk of bias), allowing them to calibrate their confidence in the study results accordingly. Preregistration templates have been developed for a variety of cognitive science domains, such as quantitative psychology (Bosnjak et al., 2022), cognitive modeling (Crüwell & Evans, 2021), and experimental linguistics (Roettger, 2021) [see Bayesian Models of Cognition].
Sharing data can facilitate the detection of error or fraud, synthesis and aggregation of evidence, and novel discovery. Anonymization or other precautions are sometimes necessary to ensure data are shared responsibly (Meyer, 2018). Sharing materials such as stimuli, survey instruments, software, and detailed methods ensures that research can be thoroughly evaluated. It also increases efficiency by facilitating reuse and enables independent verification, for example, through replication studies. Analysis scripts document, usually in computational code, how data were filtered, transformed, analyzed, and visualized. Sharing analysis scripts enables independent researchers to verify whether the reported results can be obtained by repeating the original analysis with the original data (computational reproducibility).
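To illustrate, a shared analysis script makes every processing decision explicit and rerunnable. The following minimal sketch uses hypothetical reaction-time data and assumed exclusion cutoffs (not drawn from any actual study); it shows how filtering, transformation, and analysis steps are documented in code:

```python
# Minimal sketch of a shareable analysis script (hypothetical data and
# cutoffs). Each step is explicit, so an independent researcher can rerun
# the script and check the reported results (computational reproducibility).
from statistics import mean, stdev

# Hypothetical raw reaction times in milliseconds.
raw_rts = [312, 455, 290, 2900, 510, 387, 640, 33, 402, 498]

# Filtering: exclude implausible reaction times (cutoffs are assumptions
# that would normally be justified, and ideally preregistered).
filtered = [rt for rt in raw_rts if 100 <= rt <= 2000]

# Transformation: convert milliseconds to seconds.
rts_sec = [rt / 1000 for rt in filtered]

# Analysis: descriptive statistics on the cleaned data.
summary = {"n": len(rts_sec), "mean": mean(rts_sec), "sd": stdev(rts_sec)}
print(summary)
```

Because the exclusion rule is written into the script rather than described loosely in prose, a reader can see exactly which observations were dropped and test how sensitive the results are to that choice.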
Open access refers to making research articles freely available to readers. Traditionally, most scientific publications have been accessible only to scientists based at institutions that can afford journal subscription fees. Open access advocates argue that this practice is unfair, as most research is funded by the public; moreover, making research accessible to all would maximize its benefits to society. Increasingly, researchers share preprints: freely accessible research articles stored on preprint servers such as PsyArXiv (Moshontz et al., 2021). Some journals also make some or all of their articles open access, although the authors often have to pay a fee.
Despite its scientific benefits, open science can increase pressures (e.g., workload or financial) on individual scientists, which may exacerbate existing inequalities (Bahlai et al., 2019). Additionally, it is not always clear how to balance transparency with competing legal, ethical, or practical constraints. For example, it has been argued that open science could facilitate harassment in controversial areas such as climate science (Lewandowsky & Bishop, 2016) and create challenges for qualitative researchers (Pownall et al., 2023). As the adoption of open science practices has increased, meta-research has surfaced a number of implementation problems. For example, open datasets are often inadequately documented (Towse et al., 2020), and researchers often fail to disclose deviations from preregistered plans (Claesen et al., 2021).
Meta-research (research on research), or meta-science, is a growing discipline that uses theoretical and empirical approaches to study and improve scientific research (Hardwicke et al., 2020). Open science and meta-research are frequently in dialogue, with meta-research empirically evaluating the effectiveness and possible side effects of open science reforms and open science providing data and content for meta-researchers to evaluate (e.g., datasets, preregistrations, study materials).
Transparency is a core principle of science (children are often taught in school that scientists “show their work”) and is necessary for scientific self-correction. Thus, it seems logical that public trust in science would depend on the scientific community’s demonstrated commitment to open science. However, evidence regarding the link between open science and public trust in science is mixed (e.g., Rosman et al., 2022).
Open science does not prevent honest errors, misconduct, or poor research design—transparently reported science can still be wrong. However, open science practices make it easier to detect and correct errors, biases, and misrepresentations. Open science is thus one of several mechanisms for improving research quality.
Frank, M. C., Braginsky, M., Cachia, J., Coles, N., Hardwicke, T. E., Hawkins, R., Mathur, M., & Williams, R. (2024). Experimentology: An open science approach to experimental psychology methods. MIT Press. https://experimentology.io
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), Article 0021. https://doi.org/10.1038/s41562-016-0021