Way back in 2008, Ben Shneiderman published an article in Science titled “Science 2.0” in which he argued:
Science 1.0 heroes such as Galileo, Newton, and Einstein produced key equations that describe the relationships among gravity, electricity, magnetism, and light. By contrast, Science 2.0 leaders are studying trust, empathy, responsibility, and privacy.
In his view, the collaborative and connective tools made available by digital information and communication technologies (ICT) make the interactions between people a much richer area of inquiry than the interactions between particles and waves.
Shneiderman was followed just two months later by an article by Mitch Waldrop in Scientific American with exactly the same “Science 2.0” title.
It’s hard to say exactly how much has changed in the practice of science since these two articles appeared five years ago. Certainly not reward structures! For example, tenure and promotion standards in most university science and engineering departments still fail to acknowledge anything but peer-reviewed, archival publications as evidence of productivity.
In fact, the articles themselves downplay the potential for Science 2.0 to represent a Kuhnian paradigm shift. Shneiderman’s article suggests that the principal difference between Science 1.0 and Science 2.0 is the subject of study — i.e., from particles to people. According to Shneiderman, the process of science, including “hypothesis testing, predictive models, and the need for validity, replicability, and generalizability” will remain the same, even if the setting shifts from the laboratory to the internet.
Waldrop’s article claims that, despite the “2.0” designation, science has always been a social process, demanding discourse and open debate at least since the days of Galileo, and so the only thing that has really changed is that digital technologies accelerate the pace of those debates.
If these caveats are to be taken seriously, then “Science 2.0” doesn’t seem like much of an upgrade. Nonetheless, I think that description underestimates the power of science to reinvent itself in a digital age.
Here are three important things that are changing faster than universities, funding agencies, and scientific societies (as the governing institutions of science) seem able to keep up with:
- Peer review. In Science 1.0, peer review moves slowly and linearly through the bottleneck of journal editors who are increasingly hard-pressed to enlist the volunteer assistance of qualified reviewers. The intended outcome is a reliable, positive, original archival publication. This process solved a particular technological problem in the Time Before The Internet (TBTI), which was that printing and storing paper was expensive. For example, this expense effectively excluded negative results from publication. Science 1.0 conserved publication and library resources only for unique papers that had been vetted by experts. But peer review in Science 2.0 can be very different. First, it does not necessarily take place before publication. It might, but Science 2.0 includes post-publication review, in which the reviews themselves become available to the larger science community as part of a meta-science conversation. Second, Science 2.0 need not suffer from the editor bottleneck. Authors and reviewers (who can be any reader, really) can be connected directly. Third, publications need never be archived in an immutable sense. They can be continuously updated to correct errors, add data, and update arguments so that the best version of the publication is always the version most readily available. Lastly, the embodiment of scientific knowledge need not be confined to paper. It might include software, databases, videos, or even physical artefacts reproducible by 3D printers.
- Citizen Scientist. In his most recent book, Antifragile, Nassim Nicholas Taleb observed that an extraordinary number of scientific discoveries were evidently made by English vicars during the earliest stages of the Industrial Revolution. At that time, science was largely confined to educated men with the sinecure of a parish and the free time to pursue scholarly inquiry as a hobby. It may have been the success of the Industrial Revolution itself that created the wealth necessary to support a class of specialized science professionals, but as Science 1.0 evolved along these lines, it became inaccessible to the hobbyists who helped invent it. Science 2.0 can reconnect science to people who lack professional resources or credentials, but can nevertheless contribute. Examples abound. One of the most celebrated is the 16-year-old cancer scientist Jack Andraka, interviewed here lauding the advantages of open access to journal articles:
- Cost. The inexorable trend in Science 1.0 is towards more expensive laboratories, more expensive instruments, more expensive libraries and more expensive scientists. In its earliest days, the economics of Science 1.0 rewarded specialization of intellectual labor by reducing redundancy and allowing productivity gains. But now, Science 1.0 costs continue to climb a curve of diminishing returns, even though digital technologies reduce some marginal costs (e.g., postage) within the old Science 1.0 paradigm. In truth, the specialization model of Science 1.0 has created new problems in research governance, collaboration and translation. By contrast, Science 2.0 allows unprecedented access to knowledge from outside a specialized field. Science 2.0 is no longer governed by the insistence on specialization characteristic of traditional science, because the research skills now in short supply are integrative, rather than reductive.
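The peer-review contrast in the first bullet can be sketched as a toy data structure — a minimal, hypothetical model of a Science 2.0 publication that is versioned rather than immutable and carries open post-publication reviews (none of the names here refer to any real platform’s API):

```python
class Publication:
    """A hypothetical Science 2.0 publication: mutable, versioned, openly reviewed."""

    def __init__(self, title, body):
        self.title = title
        self.versions = [body]   # every revision is kept; the latest is current
        self.reviews = []        # post-publication reviews, readable by anyone

    @property
    def current(self):
        # The best (most recent) version is always the one most readily available.
        return self.versions[-1]

    def revise(self, new_body):
        # Correct errors, add data, or update arguments after publication.
        self.versions.append(new_body)

    def review(self, reviewer, comment):
        # Any reader can review directly -- no editor bottleneck.
        self.reviews.append((reviewer, comment))


paper = Publication("Negative result: X does not cause Y", "v1 text")
paper.review("any-reader", "Figure 2 needs error bars")
paper.revise("v2 text with error bars")
print(paper.current)        # readers see the corrected version
print(len(paper.reviews))   # reviews remain part of the public record
```

Note how the Science 1.0 assumptions — a single frozen version, reviews hidden before publication — simply have no place in this shape of record.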
No doubt the decentralization of science that accompanies the upgrade to Science 2.0 is threatening to established institutions that currently represent the locus of scientific power. Although scientists themselves typically profess to be apolitical in the practice of their profession, the enterprise of science itself is nevertheless a political powerhouse. For example, it is still science writ large that establishes the rules for determining Truth. To the extent that science maintains high barriers to entry that include expensive infrastructure, byzantine and specialized bureaucracies that codify standards of legitimacy, and ever larger, more centralized and complicated institutions, science as an enterprise maintains its grip on the political authority now necessary to secure government funding and other resources.
But it is exactly these trends towards centralization, standardization, codification, exclusion and complication that make science all the more vulnerable to catastrophic collapse. Software engineers use the term “bloatware” to refer to programs or features that suck up computing resources, create conflicts, and otherwise serve to render the computer useless. As more and more “fixes”, “features” and “expansion packs” get added to a program or an operating system, the software demands more memory, more central processing, and more graphics power from the computer, until eventually the entire computer crashes. Joseph Tainter, writing in The Collapse of Complex Societies, claims social institutions work in the same way. As they become increasingly complicated, ineffective, centralized and expensive, they demand increasingly large external subsidies. Eventually, when resources are exhausted, the entire enterprise, even the entire society, collapses.
If that’s the future of Science 1.0, then I’m ready to upgrade now.