Virologist Shi Zheng-li, left, works with her colleague in 2017 at the Wuhan Institute of Virology, where some believe COVID-19 originated. (Photo via Getty Images)

Is Gain-of-Function Research a ‘Risk Worth Taking’? Or ‘Insanity’?

A lab leak in Wuhan may have led to the outbreak of Covid. So what were scientists doing there? And why?

For years, our officials—from Anthony Fauci on down—maintained that Covid-19 probably originated from a natural pathway in Wuhan, China. Bats in a wet market, some said.

But according to a new report, the U.S. Energy Department believes the virus likely escaped from the Wuhan Institute of Virology before going on to kill millions around the world. The FBI and a number of prominent scientists also think Covid may have leaked from the lab, where researchers were allegedly conducting “gain-of-function” experiments on coronaviruses, potentially making them deadlier. 

Before the winter of 2020, most people had never heard of this type of research, which goes on in labs around the world. But over the past few years, experts have made the compelling argument that it is so risky it could endanger humanity.

And yet, the Biden administration remains supportive of gain-of-function research “to help prevent future pandemics” as long as it’s done safely and with transparency, according to John Kirby, National Security Council coordinator for strategic communications.

But can any of this research ever be done safely or transparently? And is it worth the risks?

As a longtime science writer, I have investigated much of the evidence—or lack thereof—behind health officials’ claims during the pandemic. This article won’t tell you how Covid-19 started (although, on balance, I’d put my money on a lab leak). 

But I can tell you what I’ve learned about gain-of-function research—what it does, where it’s happening, why it’s happening, and why I don’t feel any safer knowing the answers. 

What is gain-of-function research?

“Gain-of-function” is an umbrella term for any experiment where scientists manipulate organisms to give them new properties. That is, they “gain functions.” Genetically engineering types of corn, for example, could be considered gain-of-function research in the broadest sense.

But what’s alarming people, including many distinguished scientists, is research involving pathogens, known as gain-of-function research of concern, or GOFROC (not kidding). These experiments make pathogens more virulent and/or more transmissible. 

Simply put, this research allows scientists to manipulate viruses to make them even deadlier, on the theory that if nature produces the same virus, we will be ready to fight it.

A subset of this work is referred to more precisely as ePPP, or enhanced Potential Pandemic Pathogen research, and there are still other terms that describe the process in slightly different ways. The varying terms, and the somewhat subjective interpretation of whether an experiment meets their criteria, have led to disagreement as to what gain-of-function research of concern truly is. As a result, officials and scientists have plausible deniability about what kind of work is actually being conducted. 

Back in 2014, the U.S. government became so worried about GOFROC that it put a pause on certain research that could possibly fall under that category. The pause lasted for several years, until a new framework, known as P3CO (Potential Pandemic Pathogen Care and Oversight), was developed; it requires a government review of the risks and benefits of any ePPP research before funding is awarded. (For the purposes of consistency, we’ll stick with the term GOFROC to describe this research from here on.)

Has gain-of-function research of concern ever benefited humankind?

If there is evidence of a direct benefit, it’s certainly not obvious. 

A lengthy 2014 editorial in the American Society for Microbiology journal mBio claimed that GOFROC experiments are of “epistemological value,” citing a number of theoretical benefits without giving any concrete links to vaccines or therapeutics. 

A 2021 piece by the biosafety director of Colorado State University was equally modest in its claims of benefit, stating that: “Gain-of-function experiments may help researchers test scientific theories, develop new technologies and find treatments for infectious diseases.”

Vincent Racaniello is a professor of microbiology and immunology at Columbia University, and a vocal proponent of GOFROC. He believes many of the opponents of this research are unreasonable.

“Almost like arguing with an anti-vaxxer,” Racaniello said. 

When I asked him for an example of how this type of research has specifically led to a direct benefit, he told me that, many years ago, he and his colleagues gave a human gene to mice that enabled them to be infected with polio—and now the mice, instead of primates, are used to test polio vaccines.

While testing vaccines on mice rather than monkeys is certainly a positive development, this example did not sound like it met the criteria for GOFROC, because it was not enhancing a pathogen (rather, it created humanized mice to test vaccines on them). When I emailed Racaniello for clarification on this, I did not get a reply. 

When did scientists first voice concerns about gain-of-function research?

The first modern GOFROC controversy happened in 2011, when Dutch virologist Ron Fouchier and his colleagues manipulated the avian flu (A/H5N1) so it could be transmitted through the air between ferrets. Around the same time, a different research team, headed by virologist Yoshihiro Kawaoka at the University of Wisconsin–Madison, was conducting related experiments with H5N1 on ferrets. (It is worth noting that both of the H5N1 ferret studies were funded by Fauci’s former agency, the National Institute of Allergy and Infectious Diseases, or NIAID. More on this later.)

When word got out about the two men’s research, scientists were outraged. 

Here’s why: H5N1, a virus deadlier than smallpox, is extremely lethal in humans. By some estimates it has killed around 60 percent of those it was known to infect. But H5N1, by its nature, does not easily transmit between people. Nearly all known cases are among those who have had direct contact with birds or bird products. 

Fouchier conducted this experiment under the premise that H5N1 could, at some point, mutate in nature to be easily passed between humans. And so, he purposefully engineered it to be as communicable as the seasonal flu among ferrets, mammals much like us, in the hope that the work could aid in creating a vaccine before such a mutation ever happened naturally. 

At the end of 2011, after the research was conducted, Fauci coauthored an opinion piece with Francis Collins, the head of the National Institutes of Health (NIH) at the time, saying the research was “a risk worth taking.”

Critics said Fouchier’s and Kawaoka’s results shouldn’t be published, out of fears this would give bioterrorists a “recipe” for creating a bioweapon. The research was published anyway, in 2012. But this event led many to raise an even bigger and likelier concern: a future laboratory accident that would allow a Frankenvirus like this one to escape.

The new super-virus, of course, did not escape, nor do we know whether it could jump from ferrets to humans, or how deadly it would be if it did. But that’s the point made by so many critics of GOFROC: scientists don’t know how experiments will turn out. That’s literally the purpose of experimentation. 

This all seems completely crazy!

Many distinguished scientists agree with you, including Dr. Laura Kahn, a biodefense expert, physician, and global security scholar who recently left Princeton University after twenty years to found an organization that promotes interdisciplinary collaboration for public health.

Kahn told me that, as far as she knows, GOFROC “has not directly led to vaccines or therapeutics.” Rather, scientists use it to “predict pandemics” like “we use satellites to predict hurricanes.” 

But, she added, we don’t “seed the clouds to create a hurricane to study it. We observe. We don’t create the hurricane.” Purposefully creating dangerous viruses so you can potentially develop vaccines, she said, is “insanity.” 

Kevin M. Esvelt, an evolutionary and ecological engineer at MIT, agrees. As he wrote in a 2021 opinion piece: “I implore every scientist, funder and nation working in this field: Please stop.” 

Marc Lipsitch, an infectious diseases epidemiologist at Harvard’s T. H. Chan School of Public Health, has also written extensively about the risks of GOFROC. “Ethical principles,” he wrote in one of his many pieces on the topic, dictate that experiments should be allowed only “if they provide humanitarian benefits commensurate with the risk, and if these benefits cannot be achieved by less risky means.”

Yet another vocal opponent is Dr. Richard Ebright, a molecular biologist at the Waksman Institute of Microbiology at Rutgers University, whose recent testimony before Congress offers an extremely detailed critique of GOFROC. And Bryce Nickels, a professor of genetics, also at the Waksman Institute, is so concerned about the risks he recently cofounded, with Ebright, a nonprofit called Biosafety Now to raise awareness about the issue. 

When I called Nickels to discuss GOFROC, he told me: “I’m talking with you because I believe your life is in danger.” 

The coronavirus, which some believe was created in a lab rather than in nature. (Photo via Getty Images)

How likely is a lab leak?

A 1995 epidemic of Venezuelan equine encephalitis virus is believed to have come from a lab. A 2003 case of SARS was traced back to a Singapore lab. Multiple cases of SARS in 2004 in Beijing likely originated in a lab, a World Health Organization report found. A 2007 outbreak of foot-and-mouth disease was believed to have come from a UK lab. In 2002, cases of West Nile virus were acquired in labs. In 1979, more than sixty people living downwind from a Soviet military facility died from anthrax that escaped from the building. In 2019, a lab leak of aerosolized Brucella in China infected more than 10,000 people. 

“The record of laboratory accidents and accidental infections in the most secure and highly scrutinized government labs shows that such accidents are inevitable,” Lipsitch wrote in one of his many papers on the topic.

You’re scaring me!

Those instances are just a sample. The American Biological Safety Association maintains a database of hundreds of case studies of laboratory-acquired infections, in case you really want to panic. To be clear, this is not a list of leaks specifically related to GOFROC, but the point is that lab accidents happen with surprising regularity. 

Can any of this research be done safely?

Racaniello acknowledges that some of this research carries risks, but said that when it’s done with appropriate precautions, such as in a biosafety level 4 (BSL-4) lab, the highest level of containment, where technicians work in moon suits and access to the facility is carefully controlled, it’s worth doing.

Unfortunately, as Lipsitch has written, accidents still occur even at the highest safety levels. Moreover, Nickels told me there is a temptation for scientists to work at lower safety levels, because fewer restrictions make research easier to conduct. Some of the coronavirus research conducted in Wuhan was done at BSL-2, a level that a number of experts believe is far too low.

Where is most of this GOFROC research being done, who is funding it, and how much of the funding comes from the U.S. government?

There is no clear answer to this. When scientists apply for grants from the NIH, there is no checkbox for GOFROC or ePPP. So there is no comprehensive list of where the U.S.-funded work is happening. 

And there is even greater uncertainty around research not funded by the NIH, conducted in public and private labs around the world. 

Members of the World Health Organization investigate the origins of COVID-19 at the Wuhan Institute of Virology on February 3, 2021. (Hector Retamal via Getty Images)

Did we fund any of this gain-of-function research in the Wuhan lab? 

Though still a matter of dispute, an objective review of the facts suggests the answer is yes. 

The National Institutes of Health, through NIAID, then under Anthony Fauci’s direction, gave grants totaling more than $3.7 million to a nonprofit organization called EcoHealth Alliance specifically to study the risk of bat coronavirus emergence. EcoHealth funneled approximately $800,000 of those grants, from 2014 through 2020, to the Wuhan Institute of Virology and the Wuhan University School of Public Health. 

EcoHealth’s grant applications and proposals state that they planned to test “interspecies transmission” of coronaviruses. (Right here might be a good moment for the proverbial record scratch. But let’s keep going.) In a grant titled “Understanding the Risk of Bat Coronavirus Emergence,” EcoHealth said they planned to create chimeric viruses and test their ability to infect human cells in culture and in lab animals. 

To summarize: scientists at the Wuhan lab, whose specific work on coronaviruses was funded by the U.S. government, took juiced-up viruses from bats and infected mutant mice engineered to have human receptors for SARS viruses to see what would happen. 

So why did Fauci say the U.S. didn’t fund the research in Wuhan?

Fauci testified before Congress that NIH and NIAID did not fund gain-of-function research at the Wuhan Institute of Virology. But this is where slippery definitions come into play. 

In November, Fauci admitted in a hearing that “gain of function is a very nebulous term,” described “ePPP” as the more precise term, and declined to characterize the research in Wuhan as gain-of-function. Senator Rand Paul shot back: “So what you’re doing is defining away gain-of-function. You’re simply saying it doesn’t exist because you’ve changed the definition on the NIH website. . . . What you’ve done is changed the definition on your website to try to cover your ass.”

Whether an experiment actually qualifies as gain-of-function, or more specifically as ePPP, is somewhat subjective. And those who conduct or fund this type of research have an incentive to say particular experiments don’t qualify, and to play down the risks in order to sidestep criticism. 

But by any layperson’s definition, it sure seems that the work NIAID funded in Wuhan met the definition of gain-of-function research of concern. Ebright, Nickels, and other experts say there is no doubt the experiments done in Wuhan were GOFROC. 

What other gain-of-function research is being done right now?

Recent research conducted at NEIDL, Boston University’s BSL-4 lab, and published in 2023 involved scientists inserting the Omicron variant’s spike gene into the ancestral strain of SARS-CoV-2. This created a chimeric virus that killed 80 percent of the mice it infected. While less virulent than the original strain, which killed 100 percent of the mice, the hybrid virus evaded the immune system more easily than the strains of SARS-CoV-2 currently circulating. This, at least by a superficial interpretation, means it gained function.

A press release from NEIDL and BU said the research “may provide some answers” to questions such as why some variants are weaker than others and why Omicron spreads fast yet makes people less sick. The study’s senior author, Mohsan Saeed, said the research provides an “exciting new concept for future vaccines and therapeutics—if we know how to weaken the virus, we can better fight it.”

BU also claims that this research was not gain-of-function research. 

But many experts adamantly disagree. While the BU experiments were likely far less dangerous, and perhaps more useful, than the research done by Fouchier back in 2011, Lipsitch wrote that, despite BU’s denials, “these are unquestionably gain-of-function experiments.”

Nickels believes there was no clear benefit to these experiments. He said most of these experiments happen so scientists can get their work published in prestigious journals, which, in academia, function as professional currency. 

Is anything being done to rein in gain-of-function research?

Some argue that the P3CO framework, which gives specific criteria for research involving potential pandemic pathogens, is evidence the government has put funding-related safeguards into place. But as Richard Ebright, among others, has argued, much research that could be considered ePPP was never flagged by the P3CO process. Rules and procedures like these are only effective if they are robust enough and applied with sufficient force. 

Recently, the National Science Advisory Board for Biosecurity met to discuss potential new oversight measures. Among its findings: the definition of ePPP is too narrow, greater transparency in the review process is needed, and there should be oversight of ePPP research regardless of the funding source.

Frankly, the more I learn about this type of research—the bald obfuscations from government officials and the willingness of scientists to disregard the risks—the worse I feel about what may happen next. 


David Zweig is the author of Invisibles and the forthcoming book, An Abundance of Caution. Read his piece about how Twitter rigged the Covid debate here.

