The Ends of Science

Will Freudenheim | Imran Sekalala | Darren Zhu

Introduction: Scientific Knowledge Production

The rise of science imbued with artificial intelligence heralds a profound shift in the foundations of scientific inquiry. This shift raises questions about the knowability of scientific findings generated through these tools, particularly as they can operate at levels of scale and complexity beyond human comprehension.

Historically, the scientific method was built on human observation, hypothesis generation, and experimental testing. Today, machine learning can scrutinize vast quantities of data and extract hypotheses at unparalleled scale. However, this is often an opaque process, producing results that can be profound yet indecipherable.

Consider the mysterious, brilliant Move 37 by the reinforcement learning program AlphaGo in its win against world champion Lee Sedol, juxtaposed against the discovery that amateur players can consistently trick these same artificial Go agents into making fatal errors. We face a similar conundrum when considering synthetic knowledge production: how do we differentiate seemingly ingenious outcomes from potential follies?

In grappling with this epistemological upheaval, we are confronted with the possibility that we are witnessing the symbolic “Ends of Science.” Yet these “Ends of Science” do not suggest a cessation of scientific discovery. Instead, they indicate a shift in our understanding of the scientific method, hinting that our established practices may be morphing into a more expansive approach to knowledge production.

Two symptoms of this inflection point are already visible in the current state of science: the increasing rate at which papers are published, as well as the increasing hyper-specialization of knowledge.

Metascience research — which studies how the domain of science operates — finds that both of these factors may already be contributing to an era of scientific stagnation. As the sheer quantity of research has increased, science has become more difficult to process, leading to less disruptive, less groundbreaking research.

In tandem, science has become hyper-specialized, as researchers carve out micro-niches at the edge of the knowledge ecosystem. At its most extreme, a research field can become so esoteric that the burden of evaluating the legitimacy of a new finding overwhelms its community.

This phenomenon is epitomized by Shinichi Mochizuki’s Inter-universal Teichmüller Theory — a 600-page proof developed over twenty years claiming to resolve the abc conjecture in number theory. The complexity of Mochizuki’s proof made it largely incomprehensible to his fellow mathematicians, with novel concepts and terminology that required several conferences to interpret. As number theorist Jordan Ellenberg put it: “You feel a bit like you might be reading a paper from the future, or from outer space.”

Epistemic Overhangs and Underhangs

Like the Mochizuki moment, new findings made using neural networks can be challenging to evaluate, which may lead to gaps between scientific discovery and scientific verification.

We can categorize these as epistemic overhangs and epistemic underhangs, drawing from the concept of capability overhangs in artificial intelligence, where the latent capacities of AI exceed the realized applications.

Likewise, epistemic overhangs occur when new theories outpace our applications or methods of verification, leaving empirical gaps between the theory and our baseline of knowledge. Mochizuki’s proof of the abc conjecture can be characterized as a kind of epistemic overhang.

Conversely, epistemic underhangs occur when empirical findings lack underlying causal mechanisms, leading to theoretical gaps. In this case, we can build applications from these discoveries, but we do not know how or why they work. Medicines developed prior to molecular biology, such as acetaminophen, often fit this category: despite vast clinical use, we still do not fully understand their mechanisms of action.

It might be tempting to believe that these epistemic gaps are only temporary issues that should be self-resolving; however, these gaps often expand to fracture scientific communities. The discourse around “COVID conspiracies” reveals this: the inability to empirically determine the origins of SARS-CoV-2 became an epistemic overhang, one that mutated into scientific stigma around the lab leak theory. Meanwhile, the repurposing of ivermectin as a treatment without known antiviral mechanisms created an epistemic underhang, which intensified into pharmaceutical skepticism.

These epistemic overhangs and underhangs stand to be accelerated by artificial intelligence, which can drive both the overproduction of research findings and their growing complexity.

Just as the invention of the microscope in the late 1600s and the development of randomized controlled trials in the early 1900s transformed scientific knowledge production, we will need new tools and methods that draw generalizable, understandable explanations out of machine learning models.

Model Superorganism

To help resolve these epistemic overhangs and underhangs produced by artificial intelligence, let us turn to biology, where model organisms have emerged as a powerful framework for working with non-human intelligences. By studying model organisms, scientists can generalize their findings to understand more fundamental biological processes.

Mendel studied peas to determine the principles of genetic inheritance; yeast served as a model for eukaryotic cell biology; fruit flies provided the foundations for developmental biology. And mice remain the default model for mammalian physiology and disease.

We can consider the fusion of science with artificial intelligence as a kind of model superorganism.

For this superorganism to be as useful as its biological counterparts, we need tools that help dissect causal mechanisms within and across AI models.

Much as we can engineer yeast to resemble cancerous states, we can train machine learning models on data from cancerous cells. But to translate findings from yeast, we must manipulate it genetically and examine it with microscopy. Likewise, to understand how artificial intelligence produces new scientific discoveries as a model superorganism, we need a kind of microscope for examining AI models.

Mechanistic interpretability is a new field developing tools to open up and visualize neural networks. Research led by Chris Olah helped explain how computer vision models work by identifying the specific artificial neurons responsible for detecting curves in images.

In fact, these curve detectors from Olah’s work have been found to explain how V4 neurons in the primate visual cortex detect curves as well. In other words, computer vision is now helping us to understand animal vision at the neuron level. Thus, mechanistic interpretability offers a technique for using AI as a model superorganism to generate not only new findings, but also new explanations.
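To make the technique concrete, here is a minimal sketch of feature visualization via activation maximization, the family of methods behind the curve-detector work: start from noise and ascend the gradient of a chosen unit’s activation with respect to the input image. The layer and channel below are illustrative assumptions, not the actual curve-detector units identified in InceptionV1.

```python
# Minimal feature-visualization sketch: activation maximization.
# The hooked layer and channel index are illustrative choices, not
# the curve-detector units identified in Olah's work.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

captured = {}
def hook(module, inputs, output):
    captured["act"] = output

model.inception4a.register_forward_hook(hook)

# Start from noise and optimize the image itself.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

channel = 7  # hypothetical unit index
for step in range(256):
    optimizer.zero_grad()
    model(img)
    # Ascend the gradient of this channel's mean activation.
    loss = -captured["act"][0, channel].mean()
    loss.backward()
    optimizer.step()

# `img` now approximates a stimulus that excites the chosen unit.
```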

Sometimes these explanations can seem counterintuitive to our understanding of a phenomenon. One such example was a transformer model trained by AI alignment researchers to do modular addition, a simple arithmetic operation. When reverse-engineering this model, researchers found it had learned the operation using identities from trigonometry.
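Sketched in our own notation, the reported mechanism works roughly as follows: the network embeds each input as a rotation at a few learned frequencies w_k = 2πk/p, composes rotations using the angle-addition identities, and scores each candidate answer c by how well it unwinds the combined rotation.

```latex
% Angle-addition identities the network exploits:
\[ \cos\big(w_k(a+b)\big) = \cos(w_k a)\cos(w_k b) - \sin(w_k a)\sin(w_k b) \]
\[ \sin\big(w_k(a+b)\big) = \sin(w_k a)\cos(w_k b) + \cos(w_k a)\sin(w_k b) \]
% The logit for a candidate answer c sums over learned frequencies
% and peaks exactly when c = (a + b) mod p:
\[ \mathrm{logit}(c) \;\propto\; \sum_k \cos\big(w_k(a + b - c)\big) \]
```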

Mechanistic interpretability presents the opportunity to explore causal mechanisms. We could elucidate new processes for how proteins fold by generating “explanations” from AlphaFold 2. Probing its transformer architecture could let us infer which parts of a protein sequence the model attends to most when determining its structure.
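As a hedged illustration of what such probing might look like, the sketch below derives a crude per-residue importance score from an attention tensor. The tensor here is a random stand-in with toy dimensions, not output from the actual AlphaFold 2 codebase.

```python
# Sketch: rank residues by how much attention they attract, given a
# (hypothetical) attention tensor of shape (layers, heads, N, N).
import numpy as np

rng = np.random.default_rng(0)
layers, heads, N = 4, 8, 128               # toy dimensions
attn = rng.random((layers, heads, N, N))
attn /= attn.sum(axis=-1, keepdims=True)   # normalize rows, like softmax

# Average over layers and heads, then sum the attention each residue
# *receives* across all query positions.
importance = attn.mean(axis=(0, 1)).sum(axis=0)  # shape: (N,)

top_residues = np.argsort(importance)[::-1][:10]
print("Most-attended residue positions:", top_residues)
```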

In fact, we may be able to coax these model superorganisms to explain themselves directly. Proof-of-concept research has used large language models, such as GPT-4, to automatically write and score explanations for the behavior of neuron activations within another language model. Using such tools, we can imagine training and prompting a machinic Mochizuki to explain the tacit knowledge behind Inter-universal Teichmüller theory in far more comprehensible and scalable ways.
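The shape of that explain-then-score loop can be sketched as follows, with the two language-model calls stubbed out as hypothetical placeholder functions; only the scoring step is implemented concretely.

```python
# Sketch of an explain-then-score loop in the spirit of Bills et al.
# Both LLM calls below are hypothetical stubs, not a real API.
import numpy as np

def explain_neuron(tokens, activations):
    """Hypothetical explainer-LLM call: propose a natural-language
    explanation from a neuron's top-activating tokens."""
    return "fires on tokens related to curvature"  # placeholder

def simulate_activations(explanation, tokens):
    """Hypothetical simulator-LLM call: predict activations from the
    explanation alone."""
    return np.random.default_rng(0).random(len(tokens))  # placeholder

def score(true_acts, simulated_acts):
    # Score an explanation by how well the simulated activations
    # correlate with the neuron's recorded activations.
    return np.corrcoef(true_acts, simulated_acts)[0, 1]

tokens = ["the", "arc", "bends", "gently"]
true_acts = np.array([0.1, 0.9, 0.8, 0.3])  # toy recorded activations
explanation = explain_neuron(tokens, true_acts)
simulated = simulate_activations(explanation, tokens)
print(explanation, "| score:", round(score(true_acts, simulated), 3))
```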

The combination of sensor networks to collect data and neural networks to process it — be it metagenomic, bioacoustic, or hyperspectral — presents an opportunity to conduct research at planetary scale.

To prevent epistemic overhangs and underhangs from fracturing scientific inquiry, we can adopt a fundamentally new kind of scientific method: first, train a scientific foundation model on large datasets of observations; then, interrogate that model with tools from mechanistic interpretability.
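A toy sketch of this two-step loop, heavily simplified: train a small model on synthetic observations generated by a hidden rule, then interrogate its parameters to recover the rule. In any real setting, the second step would use interpretability tooling rather than direct weight inspection.

```python
# Toy version of the proposed method: (1) train on observations,
# (2) interrogate the trained model for the mechanism it learned.
import torch
import torch.nn as nn

# Step 1: observations generated by a hidden rule, y = 3*x0 - 2*x1.
X = torch.randn(1024, 2)
y = 3 * X[:, 0] - 2 * X[:, 1]

model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(X).squeeze() - y) ** 2).mean()
    loss.backward()
    opt.step()

# Step 2: for a linear model the learned "mechanism" is legible in the
# weights themselves; they should approximate [[3, -2]].
print("Recovered mechanism:", model.weight.data.numpy())
```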

If we consider the ends of science as these processes by which a planet comes to understand itself, the means of science may indeed take this form of a model superorganism.

In The Origin of Species, Charles Darwin concludes: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”

The ‘ends of science’ are the pursuit of these endless forms.

References

Jumper et al., “Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 2021.

Zeng et al., “Structural Analysis of the Sulfolobus solfataricus TF55β Chaperonin by Cryo-Electron Microscopy.” Acta Crystallographica Section F 2021.

Lisanza et al., “Joint Generation of Protein Sequence and Structure with RoseTTAFold Sequence Space Diffusion.” bioRxiv 2023.

Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature 2016.

Wang et al., “Adversarial Policies Beat Superhuman Go AIs.” arXiv 2022.

Bloom et al., “Are Ideas Getting Harder to Find?” American Economic Review 2020.

Park et al., “Papers and Patents Are Becoming Less Disruptive Over Time.” Nature 2023.

Mochizuki S., “Inter-universal Teichmüller Theory I–IV.” 2012.

Drahl C., “How Does Acetaminophen Work? Researchers Still Aren’t Sure.” Chemical & Engineering News 2014.

Bloom et al., “Investigate the Origins of COVID-19.” Science 2021.

Caly et al., “The FDA-approved drug ivermectin inhibits the replication of SARS-CoV-2 in vitro.” Antiviral Research 2020.

Hooke R., “Micrographia.” 1665.

Fisher R.A., “The Arrangement of Field Experiments.” Journal of the Ministry of Agriculture 1926.

Ankeny and Leonelli, “Model Organisms.” The Philosophy of Biology 2021.

Botstein et al., “Yeast as a Model Organism.” Science 1997.

Cammarata et al., “Curve Detectors.” Distill 2020.

Willeke et al., “Deep learning-driven characterization of single cell tuning in primate visual area V4 unveils topological organization.” bioRxiv 2023.

Nanda et al., “Progress measures for grokking via mechanistic interpretability.” arXiv 2023.

Bills et al., “Language models can explain neurons in language models.” OpenAI 2023.


July 26, 2023