Introduction
For decades I have been fascinated by “how things work.” By “things” I mean above all our body, our brain, and our nervous system—but pursuing that curiosity quickly leads to the question of how all of this came to be.[1] From there it is only a small step to evolutionary theory, the emergence of the human sciences, and eventually Buddhist philosophy and psychology.
Over the years I have collected and studied a fair amount of reading material on these themes,[2] but also on science in general. What struck me repeatedly while reading about scientific developments was how painfully slow the acceptance of new insights can be—especially when those insights connect different disciplines or domains.
In the current wave of negativity surrounding the polyvagal theory I see a parallel, and that is why I felt it was time to examine a number of past scientific discoveries and their journey toward acceptance in this essay. What can these stories teach us about how science deals with criticism?[3]

A scientist who poisoned himself
In 1984, the Australian physician Barry Marshall did something his colleagues considered completely insane. He took a petri dish in which he had cultured a bacterium, mixed the contents with a small amount of liquid, and drank it. Not just any bacterium, but a micro-organism he claimed caused stomach ulcers.
The medical establishment at the time was absolutely convinced of its wisdom: stomach ulcers were caused by stress, too much coffee, spicy food, and a hectic lifestyle. Everyone knew that. Treatment consisted of sedatives, antacids, and advice to slow down. That a bacterium might cause stomach ulcers? Impossible. The stomach was far too acidic for bacteria to survive in. That was basic biology, and Marshall apparently was not clever enough to grasp it.
A few days after his self-experiment, Marshall began to feel terrible. He suffered from bloating, his breath became foul, and he was vomiting in the mornings. When he underwent an endoscopy, his stomach lining was found to be severely inflamed. The bacterium he had swallowed—later named Helicobacter pylori—had struck. Marshall had proved what he set out to prove: this bacterium caused inflammation of the stomach lining, and from that inflammation a peptic ulcer could develop.
It would take nearly ten more years before the medical world took him seriously. In 1994, a major conference in the United States finally concluded that he was right: stomach ulcers were indeed caused by a bacterium, and they could be cured with antibiotics. In 2005, Marshall and his colleague Robin Warren received the Nobel Prize for their discovery.
Why did it take so long? Why did a scientist have to literally make himself ill to be heard? And why is this pattern—a scientist discovers something, is ridiculed, and years later is vindicated—so persistently recurring throughout history?

The man who moved continents
If you look at a world map, you notice something remarkable. The eastern coastline of South America fits suspiciously well into the western coastline of Africa—as if someone had pulled two puzzle pieces apart. The German scientist Alfred Wegener made the same observation in 1912, but he went further than merely looking. He gathered evidence from several corners of science.
Wegener was, in fact, a meteorologist, someone whose work concerned weather and climate, but his interests were broad. Reading articles on fossils, he encountered something puzzling: the same ancient plant and animal species were found in both South America and Africa. How was that possible? Those creatures could not have swum across the ocean. There was more: traces of ancient glaciers appeared in places that are now tropically warm, and rock formations on either side of the ocean resembled each other strikingly.
Wegener conceived an explanation that at the time seemed utterly nonsensical: the continents had once been joined. He called this supercontinent Pangaea, meaning “all lands.” Roughly 300 million years ago, this mega-continent had broken apart, and ever since, the various pieces had been drifting slowly away from each other. South America and Africa had literally drifted apart.
The geological community’s reaction was devastating. Wegener was not a real geologist—what could he possibly know about it? Moreover, he could not explain how continents might move across the hard ocean floor. The prominent British geophysicist Harold Jeffreys calculated that it was physically impossible. The forces Wegener proposed—a kind of pull from the Earth’s rotation—were far too weak to displace enormous landmasses.
At a 1926 conference in New York, Wegener was publicly ridiculed. Speakers were sarcastic and sometimes outright insulting. A geologist who later recalled that period said, “I once asked one of my professors why he never spoke about continental drift. He answered scornfully that he might consider it if I could prove that a force existed capable of moving continents. The idea was complete nonsense, I was told.”
Wegener died in 1930 during an expedition in Greenland, aged fifty. He never knew he was right. It would not be until the 1960s that his ideas were taken seriously.
In the 1950s, an American cartographer, Marie Tharp, began working with data from the ocean floor. Tharp was not permitted to join the research ships herself—women were not welcome on board—so she worked in the office with the measurements sent back from sea. From those dry numbers she constructed detailed maps of what lay beneath the water.
There she discovered something remarkable: running straight through the Atlantic Ocean was an enormous mountain range, the Mid-Atlantic Ridge. More striking still was a deep cleft running along the center of that range, a rift valley. When she told her colleague Bruce Heezen, he was skeptical—it seemed as though she wanted to breathe new life into Wegener’s old fantasies. But Tharp was right.
That mountain range turned out to be the place where new ocean floor was being created. Molten rock welled up, solidified, and pushed the older ocean floor aside. Other scientists found magnetic patterns in the rocks that confirmed it. Gradually the realization grew: the ocean floor is continuously renewed. New crust appears along underwater mountain ridges; old crust disappears into deep trenches. And those moving plates carry the continents with them.
By the mid-1960s, plate tectonics—the refined version of Wegener’s continental drift—was generally accepted. But Wegener himself had died thirty years earlier, ignored and mocked by the scientific establishment. And Marie Tharp, whose maps had provided the crucial evidence, received recognition only decades later. Her name was often absent from the publications her work had underpinned.
Darwin and the Earth that was too young
Charles Darwin worked on his famous book On the Origin of Species for two decades before its publication in 1859. The idea was simple yet revolutionary: species change over time. Through natural selection, better-adapted variants survive and pass their traits on to their offspring. Over millions of years, this leads to enormous diversity and the emergence of new species.
Within twenty years, most scientists were convinced that evolution was a fact.[4] Species had not been created in immutable form but developed over time. Yet there was one colossal problem to which Darwin had no answer: the time required.
William Thomson (later Lord Kelvin), one of the most respected physicists of his era, had calculated the age of the Earth. Starting from a molten fireball that gradually cooled, he concluded the Earth was between 24 and 400 million years old. That sounds impressively ancient—certainly far older than biblical accounts suggest—yet it left far too little time for Darwin’s gradual evolution. Darwin needed billions of years, not millions.
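For the curious, here is a minimal sketch of how Kelvin could arrive at such a figure, based on his one-dimensional cooling argument; the input values below are my illustrative assumptions, not Kelvin’s exact figures. A body that starts at a uniform temperature $T_0$ and cools by conduction shows, after a time $t$, a surface temperature gradient of

$$\left.\frac{\partial T}{\partial z}\right|_{z=0} = \frac{T_0}{\sqrt{\pi \kappa t}}, \qquad \text{so} \qquad t = \frac{T_0^{2}}{\pi \kappa \,(\partial T/\partial z)^{2}}.$$

Taking an initial temperature of roughly $T_0 \approx 3900\,^{\circ}\mathrm{C}$, a thermal diffusivity of $\kappa \approx 1.2 \times 10^{-6}\,\mathrm{m^2/s}$, and the measured geothermal gradient of about $36\,^{\circ}\mathrm{C}$ per kilometre yields $t \approx 10^{8}$ years. The arithmetic is internally sound; what Kelvin lacked, as we will see, was knowledge of a hidden heat source that keeps the gradient steep far longer.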
Darwin was keenly aware of the problem. In 1869 he wrote to his colleague Alfred Wallace, “Thomson’s views on the recent age of the world have been one of my sorest troubles.” He believed he was right but could not substantiate it with hard figures.
There were further difficulties. How were traits passed from parents to children? Darwin did not know. Genetics did not yet exist.[5] There were also great gaps in the fossil record. Where were all the transitional forms? Critics such as the engineer Fleeming Jenkin argued that variations had a natural ceiling: you could selectively breed larger cattle, but growth eventually stopped. How then could you move from one species to an entirely different one?
Interestingly, biologists did accept evolution, but often not Darwin’s mechanism of natural selection. In France, Lamarck’s theory remained more popular well into the twentieth century. Lamarck held that organisms passed on traits acquired during their lifetime to their offspring: a giraffe that repeatedly stretched its neck to reach the highest leaves would produce offspring with longer necks. This theory also sat more comfortably with the moral climate of the time—you could improve yourself, and that improvement would be inherited.
The breakthrough came only in the 1930s: the integration of Mendelian genetics with Darwinian natural selection. The discovery of mutations and heredity via genes solved the problem of inheritance. Radioactive dating showed that the Earth was billions, not millions, of years old. Lord Kelvin had simply not accounted for radioactivity, a phenomenon unknown in his day.[6] The “Modern Synthesis,” or Neo-Darwinism, integrated genetics with evolutionary theory. Only then—seventy years after Darwin’s publication—was natural selection fully accepted.
Germs in a world of miasmas
Well into the nineteenth century, physicians and scientists believed that diseases were caused by “miasmas.” Miasma comes from Greek and literally means “pollution” or “contamination.” A kind of toxic vapor, it was thought, rose from rotting material. If you came near sewers, swamps, or corpses, you inhaled that poisonous air and fell ill.[7] The Black Death, the plague that killed millions across Europe, was explained by such noxious emanations.
Against that backdrop, in the 1860s a French chemist named Louis Pasteur proposed a radical theory. Diseases were not caused by bad air but by microscopically small organisms: bacteria. These germs were so tiny they could only be seen through a microscope, yet they were everywhere.[8] And when they entered the body, they could cause illness.
The resistance was immense. Rudolf Virchow, a celebrated German pathologist, mocked Pasteur. He is said to have declared that he had never looked through a microscope and had no intention of doing so. His argument against germ theory sounded logical: “If microbes were responsible for diseases, and they are everywhere, we would all be ill.” What Virchow did not understand was that the immune system can fight off most germs and that disease results only under specific circumstances.
Across Europe and also in the United States, Pasteur’s theory was vigorously contested by medical professionals who did not want to accept the change. This was a fundamental shift in how medicine was practiced. If diseases came from germs, then instruments had to be sterilized, hands had to be washed, and patients had to be isolated—a completely different proposition from simply ensuring fresh air.
Pasteur himself was not a physician but a chemist. He had begun by studying wine fermentation and discovered that micro-organisms were responsible for the process. He then applied that knowledge to diseases in silkworms and later to infectious diseases in humans and animals. He was, in short, an outsider to medicine who solved a medical problem from an entirely different discipline.
Other scientists added pieces to the puzzle. The Hungarian physician Ignaz Semmelweis showed that puerperal fever occurred far less often when doctors washed their hands. The English physician John Snow traced a cholera outbreak in London to a specific water pump contaminated with sewage. The English surgeon Joseph Lister began sterilizing instruments with carbolic acid, and suddenly far more patients survived operations.
By the end of the nineteenth century, most scientists were convinced. The German physician Robert Koch formulated criteria in 1884 for establishing whether a specific bacterium causes a specific disease. He identified the bacteria responsible for anthrax, tuberculosis, and cholera. Germ theory became a cornerstone of modern medicine. The journey from miasmas to microbes had taken roughly thirty years—thirty years during which physicians like Semmelweis suffered mental breakdowns because no one believed them, patients died of infections that could have been prevented, and a new truth fought against old certainties.

Proteins that cause infection
In 1982, the neurologist Stanley Prusiner made a claim that overturned much of what biologists and physicians knew about infection. He had isolated the infectious agent responsible for scrapie, a disease in sheep. It turned out to be neither a virus nor a bacterium. It was a protein—an ordinary protein, without DNA or RNA.
That was impossible. The central dogma of molecular biology states that information flows from DNA to RNA to protein. That is how organisms reproduce, how life “works.” A protein without genetic material that can replicate itself? This contradicted everything that was known.
Prusiner named this infectious protein a “prion,” derived from “proteinaceous infectious particle.” His idea was that a normal protein found in everyone’s body sometimes assumes the wrong shape. That misfolded form is stable and can force other, normal proteins to adopt the same wrong shape—a kind of domino effect. These misfolded proteins accumulate in the brain and cause fatal diseases such as BSE (mad cow disease) and, in humans, variant Creutzfeldt–Jakob disease.
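To make that domino metaphor slightly more concrete, here is a minimal toy model of my own, not Prusiner’s mathematics. If $M(t)$ is the number of misfolded proteins in a pool of $N$ molecules, and each misfolded molecule converts normal ones on contact at rate $k$, the process can be caricatured as

$$\frac{dM}{dt} = k\,M\,(N - M),$$

the logistic equation: growth is exponential at first, because every convert becomes a converter in its own right, and it slows only once the pool of normal protein is exhausted. That runaway character is what makes a self-templating protein so dangerous even though it carries no genetic information.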
The scientific world reacted with enormous skepticism. Prusiner was contradicted and ridiculed. For two decades he endured the mockery of his colleagues. Even prominent scientists such as David Baltimore, himself a Nobel laureate, were among the doubters. Some scientists persisted in claiming that an undiscovered virus must be involved. A protein alone simply could not transmit disease.
But Prusiner held on. He identified the specific protein and demonstrated that it occurred in two forms: a normal variant present in everyone and a disease-causing form folded differently. He conducted experiments showing that prions were resistant to treatments that normally kill viruses and bacteria—such as radiation that damages DNA—indicating that this was not a virus but a protein, albeit one extraordinarily stably folded and resistant to most methods that ordinarily break proteins down.
In the 1990s the BSE epidemic broke out in Great Britain. Thousands of cattle fell ill and had to be destroyed. Worse still, people who had eaten infected meat developed a variant of Creutzfeldt-Jakob disease. Suddenly this was no longer a theoretical discussion but a real public health emergency.
The tide turned. More and more scientists concluded that Prusiner might well be right. In 1997, fifteen years after his first publication, he received the Nobel Prize.
What these stories share
When you place all these examples side by side, several patterns emerge.
First, not one of these scientists could tell the complete story at the time of their first publication. Marshall had not conducted randomized trials. Wegener did not know how continents moved. Darwin did not understand how traits were inherited. Pasteur could not yet determine which specific bacterium caused which disease. Prusiner could not explain every detail of prion replication.
Yet the absence of a complete mechanism proved to be no reason to reject the entire theory. Later research and the growth of knowledge filled the gaps. For Wegener, it was oceanography that provided the key. For Darwin it was genetics. For Pasteur it was microbiology. For Marshall it was clinical trials with antibiotics. For Prusiner it was the structural biology of proteins.
Second, all of these theories ran directly counter to established convictions. Continents did not move—everyone knew that. Species had been created immutably. Diseases came from bad air. Stomach ulcers came from stress. Infectious diseases came from viruses or bacteria, always with DNA or RNA. These ideas formed the paradigms of their time—the worldview on which scientists based their work, the firmly anchored scientific consensus.
Third, these scientists were often not taken seriously because they were “outsiders.” Wegener was a meteorologist, not a geologist. Darwin was a naturalist without a formal academic position—essentially an amateur—not an established scholar at a university. Pasteur was a chemist, not a physician. Marshall was a gastroenterologist, not a microbiologist. Prusiner was a neurologist, not a molecular biologist or biochemist. That they came from another field was used against them. They could not possibly grasp the finer points of the discipline.
But it was precisely that outsider status that gave them an advantage: they saw connections that specialists overlooked. Wegener combined geography, paleontology, and climatology. Darwin brought biology and geology together. Pasteur applied chemical knowledge to medical problems. Marshall linked clinical observations with microbiology. Prusiner united neurology and biochemistry. They looked over the fence of their discipline and saw a larger picture.
It is inevitable that you become an outsider when you publish a theory spanning multiple fields. That also explains a significant part of the fierce resistance: each discipline judges a cross-domain theory at the points where it is weakest by that discipline’s standards.
The problem of crossing domains
That last point—the cross-disciplinary character—deserves closer examination because it explains much of the resistance.
Scientific disciplines are like countries, each with its own language, culture, and laws, written or unwritten. Geologists employ different methods than biologists. Anatomists look at the world differently than psychologists. Every discipline has its own standards for what counts as evidence, its own journals, and its own experts who determine what is publishable.
If you develop a theory that fits within a single field, your work is assessed by experts in that domain. They examine your methods, your data, and your reasoning. If it is sound, they accept it. If it is weak, they reject it. That system works reasonably well.
The paradox is that precisely this apparent “weakness”—not being fully at home in a given field—is what creates space for out-of-the-box thinking. A specialist would never ask certain questions because they do not fit within the paradigm of that field. An outsider asks exactly those uncomfortable questions.
Geologists looked at Wegener and saw that the mechanism did not work: the forces were too weak. That was true. But they did not see the strength of the paleontological and climatological evidence, because that was not their area of expertise.
Biologists looked at Darwin and saw that the mechanism of inheritance was missing and that the timescale seemed wrong. Also true. But they missed the geographical and anatomical patterns that Darwin found so compelling.
With Pasteur it was clinical medicine. He was not a physician; he had not treated patients. That was accurate. Medical practitioners looked down on this chemist who thought he could tell them how diseases worked. But they did not see the strength of his experimental evidence.
With Marshall it was the absence of large clinical studies. One self-experiment does not make science, critics said. Where were the randomized trials? Yet his microbiological observations were correct, and when the trials came, they confirmed his hypothesis.
With Prusiner, it was the apparent impossibility of proteins without nucleic acid. That contradicted molecular biology. Critics kept searching for hidden viruses. Yet his biochemical work was meticulous and careful.
And today? Anatomists look at the polyvagal theory and see that the anatomical details do not match the theory exactly as described. Porges suggests that different vagal systems operate independently, but anatomists observe that the nerve fibers that slow the heart originate from different nuclei in the brainstem and work together rather than as separate entities. That too may be true. But anatomists do not necessarily see the clinical value, the therapeutic applications, or the explanatory power for trauma responses—because that is not their area of expertise.
The problem is that no one surveys the complete picture. Each discipline judges the work on the aspects that concern it and that it understands. And when you work across domains, there is almost always a discipline where your work is vulnerable—perhaps because you do indeed make errors in that field, perhaps because you are not familiar with the latest insights there, or perhaps simply because you, as a generalist, do not have the depth of knowledge of a specialist.
This is why cross-disciplinary theories attract so much criticism. Every discipline sees the weak spots within its domain. And that criticism can be entirely valid. Darwin genuinely did not know genetics. Wegener genuinely lacked a mechanism. Those points of criticism were legitimate.
But what critics often miss is the synthetic power of a theory—its capacity to bring together different puzzle pieces from different fields into a new picture. That is precisely where the strength of a cross-domain theory lies, and it is precisely what specialists do not always see.
The role of time and technology
In virtually all of these examples, time played an important role. It took decades before the theories were accepted—sometimes because new generations of scientists grew up unencumbered by the old prejudices, but often because new technologies produced evidence that had previously been impossible to gather.
Wegener’s theory only gathered momentum when sonar techniques could map the ocean floor, when scientists used magnetometers to measure magnetic patterns in the seabed, and when seismographs could trace earthquakes across the globe. None of those techniques existed in Wegener’s time.
Darwin’s theory was confirmed when genetics developed, when microscopes became powerful enough to observe chromosomes, and when radioactive dating revealed the true age of the Earth. Darwin did not have those tools.
Germ theory benefited from better microscopes, from techniques for culturing bacteria, and from the discovery of viruses as microscopes grew still more powerful.
Marshall’s work was confirmed when clinical trials showed that stomach ulcers disappeared once the bacterium had been eradicated with antibiotics.
Prusiner’s work gained support when proteins could be purified and their structure determined at a molecular level.
Occasionally an idea is simply ahead of its time. The tools to prove it do not yet exist. The framework to understand it is not yet in place. That is why a theory may have to wait decades before the rest of science catches up.
Not every criticized theory is rehabilitated
There is another important aspect that belongs in this essay. For every theory that survived criticism and was eventually accepted, many others were criticized and rightly rejected. Those stories deserve attention too, because they show that criticism can be well founded.
Consider the phlogiston theory of the eighteenth century. Scientists believed that combustible materials contained a substance released during burning—phlogiston. This theory had adherents for decades, was defended by prominent chemists, and seemed to offer an elegant explanation for a range of phenomena: why things burn, why fire goes out in an enclosed space, and why metals rust. But there was a problem: metals became heavier when they burned, not lighter. The theory required increasingly elaborate adjustments to explain this, such as “negative phlogiston.” In the end, Lavoisier’s oxygen theory made the entire construction superfluous.
Or consider cold fusion in 1989. Two scientists claimed to have achieved nuclear fusion at room temperature—which, if true, would have solved the energy crisis. The press was euphoric. Laboratories across the world attempted to reproduce the results. A few claimed success. But gradually it became clear that it did not work. The original experiments had rested on measurement errors and overconfidence.
Or polywater in the 1960s and ‘70s. A Russian scientist believed he had discovered a new form of water with bizarre properties. International labs studied the phenomenon; publications, conferences, and discussions followed. It turned out to be nothing more than contaminated water.
Or vitalism—the idea that living organisms contain a special “life force” that makes them fundamentally different from dead matter. Popular in the nineteenth century and enjoying considerable scientific support, vitalism was gradually undermined as biochemistry and molecular biology demonstrated that life processes are simply complex chemical reactions. No special force required.
These theories shared something with the examples that eventually proved successful: they were defended by eminent scientists, they had supporters and opponents, they attracted fierce criticism while their proponents held firm, and they promised revolutionary insights. Yet they were mistaken.
What makes the difference?
This is a meaningful question—particularly when we later look more closely at the polyvagal theory. If you are in the middle of such a scientific struggle, how do you know whether you are a Darwin or a defender of phlogiston? In hindsight it is easy to see, but during the process?
Patterns can be distinguished, though they offer no guarantees. Phlogiston required ever more complicated assumptions to explain observations. Cold fusion could not be consistently reproduced. Polywater vanished the moment more careful work was done. Vitalism became increasingly redundant as science advanced. These theories were not merely rejected because of new insights; above all, they eroded because they failed to deliver on their promises.
The theories that did survive—evolution, plate tectonics, germ theory, prions—had a different dynamic. They began rough and incomplete but became more robust as more research accumulated. They predicted things that were later confirmed. They opened fruitful lines of inquiry. They converged with evidence from an ever-growing number of disciplines, rather than diverging.
Darwin predicted transitional forms that were found decades later. Wegener’s continental drift received support from oceanography, seismology, and paleomagnetism—entirely different fields that independently arrived at the same conclusion. Pasteur’s germ theory led to antisepsis, vaccination, and antibiotics—practical applications that worked. Prusiner’s prions explained an ever-growing number of diseases as molecular biology advanced.
The difference lies not in the quantity of criticism nor in how long a theory withstands resistance. The difference lies in what happens as the investigation continues. Do the problems grow, or do they resolve? Do explanations become more complicated or simpler? Do different lines of research converge on the same conclusion or point in different directions?
For cross-domain theories, there is something more. The theories that survived—evolution, plate tectonics, and germ theory—ultimately became stronger through interdisciplinary research. Biologists collaborated with geologists, chemists with physicians, and neurologists with biochemists; the disciplines reinforced each other and filled each other’s gaps. With the theories that disappeared, such as vitalism or the ether theory, the opposite happened: as disciplines developed, they rendered the theory increasingly redundant.
This may be the most important distinction. A viable cross-domain theory builds bridges between disciplines that remain standing even as details are adjusted. It opens new fields of inquiry, poses new questions, and leads to productive collaborations. Flawed cross-domain theories merely fill temporary gaps in knowledge and disappear once those gaps are filled by other means.
But these too are criteria that can mainly be applied in retrospect. During the process itself, the distinction is often not sharp. That is precisely why scientific skepticism is so valuable: it ensures that only theories capable of withstanding continuous scrutiny survive.
The polyvagal theory in perspective
Against this background, it is interesting to look at the polyvagal theory. This theory, developed by Stephen Porges—who worked on it from the 1970s and formally introduced it in 1994—seeks to explain how our autonomic nervous system responds to safety and danger and how this influences social behavior and emotion.
The theory combines neurobiology, evolutionary biology, psychology, psychiatry, and trauma therapy. It is a cross-domain theory par excellence. And, like the historical examples, it attracts criticism from multiple directions.
Anatomists and neuroscientists point to anatomical details that do not hold up. The “ventral vagal complex,” as Porges describes it, is said not to exist as a separate anatomical entity. Claims about unique mammalian innovations in the function of the vagus nerve are disputed. The details about which fibers travel where do not always appear to match what anatomy reveals.
That criticism is important and must be taken seriously. If the anatomical foundation of a theory does not hold, that is a real problem.
Yet at the same time, therapists, psychologists, and other professionals find great value in the theory. The concepts of neuroception (the unconscious detection of safety or danger), of a hierarchy in stress responses, and of the connection between autonomic regulation and social behavior prove to be clinically very useful. Trauma treatment has been influenced by them. The understanding of autism, anxiety disorders, and PTSD has benefited from them.
A tension thus arises between science and practice. Different disciplines see different things. Anatomists see anatomical errors. Therapists see therapeutic value. Both observations can be true simultaneously.
We genuinely do not know how this story will end. The polyvagal theory is not yet complete—Porges himself does not dispute this. He is willing to adapt his theory, responds to criticism, and refines his claims. The scientific discussion is ongoing. And that is the difference from our historical examples: we look at those with the wisdom of hindsight. We know how their stories ended. With the polyvagal theory, we are still in the midst of the process.
Perhaps this theory will follow the same path as evolutionary theory and plate tectonics: a large core of truth that is initially rough and imprecise but is gradually refined and accepted. Perhaps it will emerge, fifty years hence, that Porges was essentially right, even if details had to be adjusted. The theory is already opening fruitful lines of inquiry, gaining support from multiple therapeutic traditions, and demonstrating practical value—all characteristics that the successful theories displayed as well. Perhaps a refined, more complete theory of the autonomic nervous system will be developed that integrates certain insights from Porges.
We simply do not know. The history of science does not teach us that every cross-domain theory that attracts criticism will ultimately be vindicated. It teaches us that such theories must be assessed in a particular way and that the process of acceptance or rejection can be complex and lengthy.
What we do know is that Stephen W. Porges has published more than 400 articles on the polyvagal theory (PVT).[9] These are not “unfounded” scientific claims. His work is a beautiful transdisciplinary synthesis, one that remains incomplete—and Porges himself is aware of that.
As Porges recently wrote:[10]
Polyvagal theory emerged from my efforts to bridge psychological processes and autonomic function, drawing on insights from neurophysiology, neuroanatomy, clinical medicine, and the study of brain–body connections across disciplines. Developing this theory illuminated a fundamental challenge in science today: disciplinary silos often restrict collaboration and the integration of knowledge, as specialized methods and language can inhibit the exchange of ideas. When research remains isolated, advancing collective understanding becomes more difficult. This study examines the development of PVT and articulates its core principles in light of interdisciplinary engagement—particularly with colleagues unfamiliar with the theory's foundational literature. Bridging such gaps requires not only sharing knowledge but also cultivating openness to new perspectives, intellectual flexibility, and a spirit of curiosity about ideas that challenge established assumptions.
What can we learn from this?
An important lesson seems to be that science always contains a tension between specialization and synthesis—and we need both. We need experts who know every detail of their field, who can spot errors, and who maintain standards. Without them, science would descend into speculation and fantasy.
But we also need generalists who draw connections between disciplines, who ask new questions, and who dare to think outside the box. Without them, science would become mired in ever-narrower specializations that no longer communicate with each other—leading ultimately, in the extreme case, to a specialist who knows everything about nothing.
What we see is that these two types of scientists often clash. The specialist sees the errors, the imprecisions, and the lack of depth. The generalist sees the new connections, the synthetic power, the larger picture. And both are partly right.
A second lesson is that time and patience matter. Wegener’s theory needed fifty years. Darwin’s vision needed seventy. Genuine insight and profound knowledge apparently take time. New generations must graduate without the old prejudices. New techniques must be developed. Puzzle pieces from different fields must come together.
A third lesson is that criticism is valuable, even when it ultimately proves unfounded. The criticism of Darwin forced evolutionary biologists to become better: to develop genetics and to study the fossil evidence more carefully. The criticism of Wegener forced geologists to investigate the ocean floor, where the missing mechanism was eventually found. Criticism sharpens theories, and by doing so may help them survive.
And despite all these lessons, we must be cautious with historical analogies. That Wegener was right does not mean that every scientist who attracts criticism will be vindicated. That Marshall won a Nobel Prize after years of ridicule does not mean that every rejected theory will be rehabilitated. Phlogiston, cold fusion, polywater, and vitalism remind us that criticized theories may be criticized with good reason.
A messy process
Some ideas come too early and die before their time arrives. Some scientists devote their lives to a theory that turns out to be wrong. Semmelweis ended his days in a psychiatric institution, broken because no one would take his handwashing hypothesis seriously.
Science beyond domain boundaries is not easy—neither for the theory nor for the scientist. It regularly invites criticism; it requires a particular way of evaluating, one that looks beyond the weak spots in individual fields and pays attention to the synthetic power of the whole.
But we must understand that we cannot use the past to justify the present. That Wegener was ultimately proved right says nothing about whether a current theory will be vindicated. Every theory must be judged on its merits—with nuanced criticism and attention to both weaknesses and strengths.
The stories of these scientists teach us above all: be open to new ideas, but also be critical. Do not dismiss too quickly, but do not accept too readily either. Pay attention to the criteria that distinguish successful theories from failures: convergence of evidence, predictive power, fruitful research programs, and practical applications that work. And recognize that science is a process unfolding over decades, with much uncertainty and few guarantees.
In conclusion
Can we say something meaningful about the polyvagal theory? I think so.
If we look at the criteria that distinguish successful theories from failures, we may recognize certain patterns in the polyvagal theory. It opens fruitful lines of inquiry: from the microbiome-gut-brain axis to parasympathetic biofeedback, from research into social signals to studies of body-based interventions. It is gaining support from different disciplines: not only psychotherapy, but also education, perinatal care, addiction care, and autism support are applying its insights.
There are practical applications that demonstrably work: trauma-sensitive care in hospitals and mental health settings, regulation interventions in education, and support for developmental trauma. Porges’s model serves as a physiological explanation for the success of various body-based trauma therapies, including Somatic Experiencing®.
And the theory is becoming more robust as research accumulates: Porges continues to respond to criticism and continues to publish—which is how science is supposed to work. It is precisely the pattern we saw with Darwin, Wegener, and many of the other scientists mentioned here: a rough beginning that is gradually refined, not a rigid construction that collapses at the first headwind.
The criticism that exists certainly contains elements that still need to be examined, and I will be writing about those in the period ahead. But that is, in my view, no reason to throw the baby out with the bathwater.
Something for a future essay: the manner in which the polyvagal theory is being criticized does not deserve a prize for elegance—it is more reminiscent of the heated debates of Darwin’s era than of a nuanced scientific exchange.
The question is not whether every anatomical detail is correct—Darwin did not know genetics; Wegener had no explanatory mechanism. The question is whether the theory is fruitful enough to advance our understanding and whether it brings different disciplines together in a way that proves durable. Of that I am convinced. The polyvagal theory has given me more insight into the connection between body, emotion, and behavior than any theory I have encountered in the past several decades.
Whether in fifty years Porges’s theory will look exactly as it does today? Probably not. But the same was true of Darwin’s theory of evolution.
If you found this article worth reading but do not (yet) feel like taking out a paid subscription, you can always treat me to a cappuccino!
1. Theodosius Dobzhansky was a Ukrainian-American evolutionary biologist who in the 1960s made a celebrated pronouncement: “Nothing in biology makes sense except in the light of evolution.” A statement that resonated with me immediately and further kindled my interest.
2. See the photograph at the top of the article …
3. Full disclosure: As co-chair of the Polyvagal Institute Netherlands, I am an outspoken “proponent” of the polyvagal theory. I find it a beautiful theory, one that is enriching and that I believe deserves to have significant consequences for how we organise our society. I have written about this on relaxmore.net on several occasions. At the same time, I am a “truth-seeker,” so it is not my intention to suggest with this article that the polyvagal theory will sort itself out without further research or discussion. On the contrary: if we truly intend to let a theory influence the organisation of our society, that theory had better be well-founded. I have every confidence that things will work out well — but there is still work to be done.
4. Darwin was not, of course, the first or only person writing about evolution at the time; others — including his grandfather Erasmus Darwin — had already done considerable groundwork. But that is quite another story …
5. Ironically, during Darwin’s own lifetime the Austrian monk Gregor Mendel had already published his work on heredity — and it appears that Darwin even had a copy on his bookshelf, though he never got around to reading it.
6. The decay of radioactive elements in the Earth’s crust (uranium, thorium, potassium-40) continuously generates new heat — something entirely unknown in Kelvin’s time (1860–1890).
7. Interestingly, the word “miasma” is still used in English in a figurative sense to mean “a stifling or corrupting influence,” as in “a miasma of corruption.”
8. Bacteria had already been observed in the seventeenth century, but their role in disease was not understood. Antonie van Leeuwenhoek was the first to see bacteria through his microscope in 1676, calling them “diertjes” (“little animals”) or animalcules.