Nobody likes the idea of experimenting on animals. It seems like the very definition of inhumanity, especially when you consider the growing evidence that animals have awareness just like ours. But there's no doubt that the human race has gained incalculable benefits from scientific testing on animals. Most scientists don't want to rule out animal testing, because we just don't have any decent alternatives.


Until now. Technology is finally coming up with solutions that could eliminate the practice altogether.

Putting an end to animal experimentation is more than just a matter of ethics. A growing number of scientists and clinicians are challenging the use of animal models on medical and scientific grounds. A 2006 study in JAMA concluded that "patients and physicians should remain cautious about extrapolating the findings of prominent animal research to the care of human disease," and that "even high-quality animal studies will replicate poorly in human clinical research."

Two years ago, independent studies published in PLOS showed that animal trials with positive results are far more likely to get published, and that of roughly 500 stroke treatments that worked in animal models, only two were verified to work in humans.


Making matters worse, mice are used in nearly 60% of all experiments. As Slate's Daniel Engber argues, mice are among the most unreliable test subjects when it comes to approximating human biological processes. But most scientists are reluctant to move away from this tried-and-true model, mostly because mice are cheap, docile, and well suited to genetic engineering experiments. They're also denied many of the legal protections afforded to other lab animals. Still, Engber points out, "It's not at all clear that the rise of the mouse — and the million research papers that resulted from it — has produced a revolution in public health."

Given these problems, combined with the overarching ethics question, it's clear that something better has to come along. Thankfully, the process of replacing animal models is well underway, an effort that began over 50 years ago.


The three R's of animal testing

Back in 1959, English scientists William Russell and Rex Burch conducted a study to see how animals were being treated at the hands of research scientists. To make their assessments, they looked at the degree of "humaneness" or "inhumaneness" that was afforded to the animals during testing. By analyzing the work being done by scientists in this way, Russell and Burch sought to create a set of guidelines that could be used to reduce the amount of suffering inflicted on laboratory animals.


To that end, they proposed the three R's of animal testing: Reduction, Refinement, and Replacement.

By practicing reduction, scientists were asked to acquire high-quality data using the smallest possible number of animals. Experiments needed to be designed so that they could continue to yield valuable results, while minimizing (if not eliminating) the need for endless repetition of the same tests. Consequently, scientists were told to work more closely with statisticians (to better understand the required level of statistical significance) and to refer to previous studies that had essentially performed the same tests.
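To make the reduction idea concrete, here's a minimal sketch of the kind of sample-size calculation such a statistical consultation involves. The function name, the standard normal approximation, and the default significance and power thresholds are illustrative choices, not anything prescribed by Russell and Burch:

```python
import math
from statistics import NormalDist

def min_animals_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate animals needed per group for a two-group comparison.

    effect_size: the expected difference between groups, in units of
    the standard deviation (Cohen's d). Uses the standard normal
    approximation for a two-sided test.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # threshold for significance
    z_power = z.inv_cdf(power)          # threshold for desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# A large expected effect needs far fewer animals than a small one:
# detecting a one-standard-deviation difference takes about 16 animals
# per group, while a half-standard-deviation difference takes about 63.
```

The point is the one Russell and Burch were making: deciding the required sample size before an experiment, rather than repeating it until a result appears, directly reduces the number of animals used.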

Refinement was simply the idea that more humane approaches were required. It was a call to reduce the severity of distress, pain, and fear experienced by many lab animals.


More significant, however, was the suggestion that scientists replace their lab animals with non-sentient organisms: microorganisms, metazoan parasites, and certain plants. The less cognitively sophisticated the organism, it was thought, the less capacity it had to experience emotional, physical, and psychological distress.

Since the publication of Russell and Burch's guidelines, a number of scientists and bioethicists have put these policies into practice. But now, as more sophisticated tools emerge, scientists have been given entirely new options for testing, options that will enable them to honor the "R" of replacement.

Technological alternatives

Most of these new alternatives are emerging from the fields of biotechnology, high-resolution scanning, and computer science.


Take research laboratory CeeTox, for example. They're using human cell-based in vitro (lab-grown) models to predict the toxicity of drugs, chemicals, cosmetics, and consumer products, tests that are replacing the need to pump potentially hazardous chemicals into animals' stomachs, lungs, and eyes. Likewise, biotech firm Hurel has developed a lab-grown human liver that can be used to study how chemicals are broken down.

There's also MatTek's in vitro 3D human skin tissue, which is being used by the National Cancer Institute, the U.S. military, private companies, and a number of universities. Their engineered skin is proving to be an excellent substitute for the real thing, allowing scientists to conduct burn research and to test cosmetics, radiation exposure, and so on.


The development of non-invasive brain scanning techniques is also enabling scientists to work on human test subjects. Technologies such as MRI, fMRI, EEG, PET, and CT are replacing the need to perform vivisections on the brains of rats, cats, and monkeys.

Likewise, the practice of microdosing, where volunteers are given extremely small one-time drug doses, is allowing researchers to work ethically with humans.

There's also the tremendous potential of computer models, which is very likely where the future of drug testing and other scientific research lies. This is a revolution that's already well underway.


The first heart models were developed 13 years ago, kickstarting the development of simulated lungs, the musculoskeletal system, the digestive system, skin, kidneys, the lymphatic system, and even the brain.

Today, computer simulations are being used to test the efficacy of new asthma medications, though laws still require that all new drugs be verified in animal and human tests before licensing. Models are also being used to simulate human metabolism in an effort to predict plaque build-up and cardiovascular risk. These same systems are being used to evaluate drug toxicity, tests that would normally have involved the use of lab animals. And as we reported a few months ago, new computer simulations can even help scientists predict the negative side effects of drugs. All this is just the tip of the iceberg.


This said, not everyone agrees that computer simulations are the way to go. Some people feel that simulations can never truly paint an accurate picture of what they're trying to model, that it's a classic case of "garbage in, garbage out." The basic reasoning is that scientists can't possibly simulate something they don't truly understand. Consequently, if their models are off even slightly, the entire simulation will diverge dramatically from reality.
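That divergence worry can be illustrated with a toy model. The exponential-growth equation and the 1% error figure here are hypothetical, chosen only to show how a small parameter error compounds over a simulation:

```python
# Toy model: dx/dt = rate * x, integrated with simple Euler steps.
def simulate(rate, steps=200, x0=1.0, dt=0.1):
    x = x0
    for _ in range(steps):
        x += rate * x * dt  # one Euler step forward in time
    return x

true_run = simulate(0.500)  # the "real" growth rate
off_run = simulate(0.505)   # the same model, rate off by just 1%

# After 20 simulated time units, the 1% parameter error has compounded
# into roughly a 10% error in the final outcome.
rel_error = abs(off_run - true_run) / true_run
```

Whether real biological models amplify small errors this way is exactly what's in dispute; the sketch just shows why the concern is plausible.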

But even though these problems are real, they're not necessarily intractable, nor are they deal breakers. It may well turn out that the margin of error in computer simulations will be comparable to, or better than, the current margin of error in animal models. And given the rate of technological advance in both biotechnology and information technology, it's even conceivable that we'll be able to simulate the intricate complexity of organisms with extreme accuracy. At that point, animal experimentation won't even seem like a sensible option.


Other sources: Stiftung Forschung 3R, New Scientist, PETA. Images: Vit Kovalcik/Shutterstock, Everett Collection/Shutterstock, MatTek, James King-Holmes/Science Photo Library.