Since Jane Goodall’s death at 91, tributes have poured in for the scientist who reshaped our understanding of chimpanzees and, by extension, ourselves. Goodall’s life is a case study in why steady, protected funding for fundamental research is not a luxury but a public good.
At first, Jane Goodall’s long-term observations seemed impractical: years of quietly watching chimpanzees in the forest. Yet in 1960 she observed them stripping leaves from twigs and using grass stems to fish for termites, a finding she reported in Nature in 1964 that overturned the notion that only humans make tools and forced science to redraw the boundary between humans and other animals. Her work helped launch modern comparative cognition and opened the door to recognizing planning, empathy, and cultural transmission in nonhuman species. Today those insights inform how we think about brain evolution, moral status, and even what “intelligence” means in designing AI systems.
Yet recent academic funding trends increasingly favor applied research: projects that promise measurable, near-term impact. In medicine, applied research often means clinical trials of new drugs that can quickly benefit patients, such as those with cancer.
This work is essential.
But shifting most resources toward short-horizon, translational projects usually comes at the expense of open-ended inquiry, or what we call fundamental science: curiosity-driven work without obvious short-term applications. Today, the institutions that have long supported such work, including the National Science Foundation, face stark cuts and growing pressure to justify the research they fund by showing how it can be immediately applied to solve a problem.
History shows that sustained investment in basic science pays off in outsized and often unpredictable ways.
Fields like theoretical physics and basic ecology have produced breakthroughs that gave us lasers, fiber-optic communications, and the ecological models that underpin modern conservation policy.
In other quantitative fields, some of the most revolutionary breakthroughs of our time began as arcane, curiosity-driven research. The CRISPR–Cas9 gene-editing system, awarded the 2020 Nobel Prize in Chemistry, emerged from basic studies of how bacteria defend themselves against viruses, with no obvious connection to human disease. Albert Einstein’s theories of special and general relativity, which were purely theoretical in the early 20th century, now underpin the Global Positioning System (GPS), which only achieves its meter-level accuracy because relativity corrections are built into satellite clocks. Mid-century work in semiconductor physics produced the transistor, the elemental switch at the heart of modern computing, which was recognized with the 1956 Nobel Prize in Physics.
Perhaps my favorite example of serendipitous discovery is the World Wide Web, which began as a lab tool for physicists at CERN—the European particle physics laboratory in Geneva. In the late 1980s, Tim Berners-Lee created a system to help physicists share documents and data across institutions.
What started as a niche solution for high-energy physics teams quietly became the operating system of the modern world.
Today, it shapes how we create, distribute, and access information, transforming economies, democracies, education, and nearly every field of human endeavor touched by data. The web was not born in a tech startup; it was born at a particle physics laboratory.
In contrast, the modern funding landscape is increasingly structured around short-term deliverables and tightly scripted milestones. This culture of micromanagement leaves little room for the curiosity-driven detours that so often lead to breakthroughs. Penicillin, for instance, was born not from a planned experiment but from Alexander Fleming’s chance observation of mold killing bacteria on a Petri dish, an accident that launched the antibiotic era. When we design funding systems that leave no space for such moments, we ignore the compounding nature of discovery and quietly select against the next transformative breakthrough.
Paradigm shifts cannot be scheduled.
To be sure, applied research and commercialization matter; they deserve robust funding. But raiding fundamental science to feed near-term translation eventually dries up the pipeline itself. The lesson of Goodall’s career is not either/or but both/and.
Science needs a firewall that protects open-ended inquiry from shifting political winds and quarterly metrics. We can take meaningful steps to protect the kind of exploration that made Goodall’s work, and so many other fundamental discoveries, possible.
These steps include committing to multi-year, investigator-led funding that prioritizes people and ideas over narrowly defined projects, offering researchers the stability and freedom to follow questions as they evolve. They include replicating successful models like the European Research Council’s frontier research grants, which support bold questions and researchers, not just predefined projects, over five-year horizons. And they include investing more consistently in long-horizon field programs in areas like ecology, astronomy, and basic biomedicine, with funding that covers not only operations but also critical infrastructure such as data stewardship, long-term cohorts, and researcher training.
We should also broaden how we measure impact: recognizing the value of new methods, open datasets, and the development of future scientists, not just patents, citations, or short-term outcomes.
Most importantly, we can do a better job of telling the story: systematically tracking and sharing “basic-to-breakthrough” arcs, so the public sees how today’s obscure questions, whether posed at a particle physics lab or in a forest in Africa, can reshape tomorrow’s foundations in technology, ethics, and entire fields of knowledge.


