How Reliable is Scientific Research?

The scientific method was developed as a means to understand objective reality, or more simply stated, to discover the truth about the universe we inhabit. Commonly, it is regarded to consist of four steps:

1. Observe a phenomenon.

2. Conceive an explanation for the phenomenon. This is the hypothesis.

3. Make a prediction about a future event, based on the hypothesis.

4. Do an experiment to see if the prediction is verified.

If the prediction is borne out by experiment, there is now some confidence that the hypothesis is true.

Of course, it is possible that the experiment gave the results it did by chance, or because it was done incorrectly, or because it was designed in such a way that it gave the results the experimenter wanted to see. So another step can be added to the scientific method to correct for this:

5. Repeat the experiment multiple times, with multiple researchers, and compare the results of all the experiments.

Each time a new experiment produces results that agree with previous experiments, the level of confidence that the hypothesis is correct becomes greater. When a hypothesis has been confirmed many times by multiple people, scientists call it a theory, and accept it as the best existing explanation of the original phenomenon. The theory is then used to generate new hypotheses, which are tested by more experiments. It is this iterative process that brings us ever closer to the truth.
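The way repeated confirmation builds confidence can be made concrete with Bayes’ rule. The sketch below is purely illustrative – the starting prior and the likelihoods (how often a confirming result appears when the hypothesis is true versus false) are assumed numbers, not measurements:

```python
def update(prior, p_confirm_if_true=0.8, p_confirm_if_false=0.2):
    """One Bayesian update after an experiment confirms the prediction."""
    numerator = p_confirm_if_true * prior
    return numerator / (numerator + p_confirm_if_false * (1 - prior))

confidence = 0.5  # start undecided about the hypothesis
for trial in range(5):
    confidence = update(confidence)
    print(f"after confirmation {trial + 1}: {confidence:.3f}")
# confidence climbs: 0.800, 0.941, 0.985, 0.996, 0.999
```

Each independent confirmation multiplies the odds in the hypothesis’s favor, which is exactly the iterative convergence described above.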

Peer review is another process that science relies on to ensure that reported research has been done properly. When a paper is submitted to a scientific journal for publication, it is independently reviewed by two or three other scientists who work in the field that the paper addresses. Each reviewer prepares a list of comments and criticisms that are sent to the journal’s editors and to the paper’s authors. If errors or deficiencies are noted, the authors are expected to correct them before the paper is published. Of course, the authors can always withdraw the paper from consideration, and submit it to another journal with different reviewers.

A recent article in The Economist claimed that most published findings are incorrect, and that science as a whole is not as self-correcting as most scientists would like to think. The article refers back to a paper published in 2005 by John Ioannidis, which has become one of the most widely cited scientific papers in history and provides a persuasive statistical argument for this claim. Moreover, when scientists have tried to reproduce the work of others, failure occurs more often than success, providing empirical confirmation of Ioannidis’ claims. The Economist gives several reasons for this, a few of which I will expand upon.

Biomedical studies and clinical trials are large, expensive and complicated affairs. Their complexity alone means that they cannot be reproduced exactly, not to mention that financial backing is difficult to obtain for repetition of work that has already been done. To remedy this lack of replication, the data from such studies is subjected to statistical analysis, which supposedly allows extrapolation of the results to the population at large. This is a mathematically sound assumption if the correct statistical procedures are used, and the experiment is designed with these statistical procedures in mind. However, statistics is a discipline that few scientists fully understand. Often, a statistician is not consulted as the experiment is being designed, and inadequate statistical procedures are applied to the data, giving questionable results.
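Ioannidis’ statistical argument can be illustrated with a few lines of arithmetic. Given the statistical power of a study, its significance threshold, and the fraction of tested hypotheses that are actually true, the probability that a “significant” finding is real follows directly. The numbers below are illustrative assumptions, not figures from the paper:

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """P(hypothesis is true | the study reports a significant result)."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is actually true:
print(positive_predictive_value(0.10))  # about 0.64
# In a speculative field where only 1 in 100 is true:
print(positive_predictive_value(0.01))  # about 0.14
```

Under the second assumption, most published positive findings would indeed be wrong, even if every experiment were flawlessly executed.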

Inadequate peer review can also be a problem. Reviewers consider the design of the experiment and the results, but they rarely check the data analysis, because doing so could take many months. Reviewers are often not experts in statistics either, so they may assume that the experiment was correctly designed for the statistical procedures that were applied, and focus their review on the results of those procedures rather than their suitability. John Bohannon, a science writer, submitted a totally spurious paper, containing serious scientific flaws that any competent reviewer should have spotted easily, to numerous open-access scientific journals. These journals are largely not among the most prestigious, and some charge a fee for publication. In many cases, Bohannon found that a journal simply rubber-stamped the paper with no review. He also found that 70% of the journals that did review the paper accepted it for publication. Reviews that pointed out the paper’s numerous scientific flaws made up only 12% (36/304) of those received, and sixteen journals accepted the paper despite poor reviews. To be fair, many scientists do realize that not all journals are equally reputable, and consider the source when deciding how much credence to give particular results. Bohannon’s sting has also been criticized because, among other things, he did not submit to any subscription-based journals, which are generally considered more reputable.

Science writers tend to focus on the results of single studies, selectively reporting results that are perceived to be of popular interest. Most journals are reluctant to accept negative results (i.e., studies that did not verify the hypothesis being tested). Medical journals tend to focus on higher-profile studies, and these are the ones that most doctors read.

It is vital that scientific results be trustworthy if they are to drive national policy, as well as the future allocation of funds to pay for ever more expensive research. At least a portion of the American public is perceived as anti-science, and publicizing that most scientific studies are inaccurate only serves to ingrain that attitude more deeply. The good news is that the scientific community has recognized the problem and is implementing corrective measures. For example, Nature, one of the most prestigious scientific journals, has developed a checklist of factors that must be reported in every manuscript submitted to the journal, in an effort to improve reproducibility. Other journals will doubtless follow suit. If all scientists cannot become experts in statistics, journals must at least insist that the statistical analysis of data be justified as appropriate to the experimental design, employing reviewers who are experts for this purpose. In this era of Big Data, it is becoming ever more difficult to conduct multiple independent data analyses, but journals must insist that all of the data for a study be publicly available and the analysis procedures rigorously described. Finally, scientists must change their attitudes, favoring veracity of results over quick and prodigious publication.

The scientific method was conceived as a means for humanity to discover the truth about the universe. It is still the best method we have to accomplish that goal. We must preserve its integrity at all costs.



Big Data for Big Problems

In order to do science, a scientist must collect information about the phenomena being studied. In the days before computers, information collection was laborious, and in many instances a scientist had too little information about the phenomenon under consideration to conduct a sound analysis. Often, data collection consumed much more time and energy than the analysis itself. With the advent of computers and automated data collection, the situation has reversed: in many instances, collecting the data consumes much less time than analyzing it. “Big data” is a catchphrase coined to describe this situation.

So what is big data? The answer to this question, like the answer to so many others in science, is: it depends. In general, big data is an amount of data that strains the processing capacity and equipment available to a researcher. The problem arises because data collection equipment is more efficient than data processing equipment, and the data collected is unstructured. It is the scientist’s job, with the aid of the data processing equipment, to sort through all of the unstructured data to identify a subset that is germane to the problem under consideration. When that process consumes inordinate amounts of time, money and other resources, big data has become a big problem.

For example, the Sloan Digital Sky Survey collected more data in the first few weeks of operation than had ever been collected in astronomy before, in an effort to map the observable universe. It has produced a digital image of the sky composed of more than one trillion pixels, containing over 500 million objects. New telescopes that will collect even more data are in the planning stages.

Another example of a project that produces huge amounts of data is the Large Hadron Collider (LHC). Scientists hope that data produced by this massive particle accelerator will answer many fundamental questions in physics. The LHC produces over 600 million collisions per second, during runs lasting as long as 10 hours. About 100 of these collisions each second will be of interest to scientists. The collider is expected to produce about 15 petabytes of data per year.
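The numbers in the paragraph above imply a staggering filtering problem, which a few lines of arithmetic make plain (the figures come from the text; the variable names are my own):

```python
collisions_per_sec = 600e6   # collisions produced per second
kept_per_sec = 100           # collisions of scientific interest per second
run_seconds = 10 * 3600      # one 10-hour run

total = collisions_per_sec * run_seconds
kept = kept_per_sec * run_seconds
print(f"collisions per run: {total:.2e}")        # 2.16e+13
print(f"fraction retained: {kept / total:.1e}")  # 1.7e-07
```

In other words, only about one collision in six million is worth keeping, and deciding which ones, in real time, is itself a major engineering feat.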

So why collect so much data? For some questions, it is necessary if one wants a scientifically sound evaluation of a problem. For example, the U.S. Environmental Protection Agency (EPA) is tasked with monitoring potentially harmful environmental chemicals. In the U.S., there are more than 85,000 commercial chemicals; of these, over 2,500 are produced in quantities exceeding one million pounds per year, and each year 2,000 more commercial chemicals are introduced. It is humanly impossible to test all of these chemicals rigorously for every possible adverse health effect. Prohibiting or restricting the production of any chemical has economic consequences for all who produce and use the substance, so it is not wise to do so based on limited information. To approach this problem, the Tox21 program was established, which will use robotic technology to test 10,000 environmental chemicals in numerous biochemical assays, to assess their potential to disrupt biochemical reactions necessary to life. As in the examples given above, only a fraction of the data generated by this endeavor will be relevant, and it is the task of the scientists involved to use modern bioinformatics technology to separate the wheat from the chaff. The goal of the project is to identify which substances may be hazardous, and then subject these to more intense scrutiny.

As our technology evolves, our capacity for data collection will only increase, so big data will continue to present big challenges. Those who can rise to meet these challenges will become the leaders in this new, exciting world.


A (Very!) Brief History of Genetic Engineering

Genetic engineering has its origins in plant and animal breeding. Sometime in antiquity, someone observed that like produces like – on average, if one breeds two strong, healthy brown cows, one gets more strong, healthy brown cows. Change one of the parents to a white cow and you can expect white cows, and maybe even varicolored cows, among the offspring. Make such crosses consistently, periodically going back to a common ancestor to reinforce certain traits, and soon you will have a line that breeds true; that is, one that produces individuals of consistent appearance, or phenotype.

The 19th century Augustinian monk, Gregor Mendel, codified these principles through his experiments with pea plants, founding the new science of genetics. His discoveries made it possible for humans to deliberately mold plants and animals in their own vision by selective breeding, revolutionizing agriculture in the process.

The other vital 19th century contribution to modern genetic engineering methods was made by Charles Darwin, who discovered the principle of natural selection. In its simplest form, this principle states that the organisms best suited to their environment will predominate over those less suited, and that the genes conferring this suitability will proliferate.

In the middle of the 20th century, James D. Watson, Francis Crick, Maurice Wilkins and Rosalind Franklin discovered the structure of deoxyribonucleic acid (DNA), the genetic material, and proposed a mechanism for its replication. Their mechanism was based on the fact that DNA is composed of four building blocks, called nucleotides, identified as adenine (A), thymine (T), guanine (G) and cytosine (C). These nucleotides form long chains, or strands, which associate with each other in a double helix. In that helix, A is always found across from T, and G is always found across from C, providing a code that allows either strand to be exactly duplicated from the nucleotide sequence of its partner. Along with nuclear fission, this was arguably the most important scientific discovery of the century, for which Watson, Crick and Wilkins won the Nobel Prize. This discovery enabled legions of biologists to elaborate, in biochemical terms, on the mechanisms of genetic change elucidated by Mendel.
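The base-pairing rule described above is simple enough to express in a few lines. This sketch (the function name is my own) rebuilds one strand from the sequence of its partner, which is precisely the property that makes faithful replication possible:

```python
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand: str) -> str:
    """Derive the complementary strand from one strand's sequence.
    The partner runs antiparallel, so the result is also reversed."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(partner_strand("ATGC"))  # GCAT
```

Applying the function twice returns the original sequence – each strand fully determines the other.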

Darwin’s theory proposed that genetic change in a population occurs via a process of mutation and selection; that is, a random change in the structure of an organism’s DNA occurs and confers a trait that makes that organism more suited to its environment. Scientists verified this theory by identifying bacterial mutants that were resistant to various antimicrobial substances, and selecting them away from sensitive strains by exposing them to the toxic substance. They also identified particular chemicals, called mutagens, which increased the frequency of the appearance of these resistant strains.

Scientists learned that they could introduce foreign DNA into plant and animal cells by a process called transfection, and that the introduced DNA would integrate into the organism’s DNA, conferring a heritable genetic trait. This process came to be known as genetic engineering.

The next important discovery that allowed modern genetic engineering techniques to be developed was that of enzymes called restriction endonucleases, which cleave DNA strands at specific nucleotide sequences. This led to the technology of so-called shotgun cloning, in which restriction fragments from a donor could be introduced into the cells of a recipient, where they would combine with its DNA. The introduced DNA sequence would be perpetuated by subsequent cell division. As always, particular clones with desired genetic traits could be isolated by selection. This technique even allowed the DNAs of different species to be combined in a single organism.

This technology was viewed with apprehension by many, essentially because the introduced genetic material was random. While a specific trait in the recipient could be selected for, it was difficult or impossible to know what other traits may have been introduced along with it. If a genetically altered organism were released, inadvertently or deliberately, into the environment, where its proliferation could no longer be controlled, many feared potentially catastrophic consequences. Apprehension was so strong that top scientists in the field called for a voluntary moratorium on certain types of experiments until the practical and ethical implications of the work could be fully considered. Even today, some types of research are controversial – see my post on Scientific Ethics and the Flu for one example.

Modern advances in genetic engineering technology are decreasing this apprehension because they provide much more control over the process. One example is Precision Biosciences’ Directed Nuclease Editor (DNE) technology. Standard restriction endonucleases recognize very short nucleotide sequences (about 5-6 bases), which results in many short DNA fragments when a large piece of DNA is cleaved. The heart of DNE technology is so-called “meganucleases”, which recognize much longer sequences. This allows specific genes to be excised and purified from the donor DNA, provided that the DNA sequences flanking a particular gene are known, eliminating the introduction of nonspecific DNA fragments into the recipient. This technology has many applications – I recently wrote an article for a local newspaper on just one of them.
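The practical difference between a standard 6-base cutter and a much longer meganuclease site is easy to quantify. Assuming random DNA with all four nucleotides equally frequent (a simplification, and the 18-base site length is an illustrative figure), a site of length n is expected about once every 4^n bases:

```python
def expected_spacing(site_length: int) -> int:
    """Average number of bases between recognition sites, assuming
    random DNA with equal nucleotide frequencies."""
    return 4 ** site_length

print(expected_spacing(6))             # 4096 -- fragments everywhere
print(f"{expected_spacing(18):.1e}")   # 6.9e+10 -- longer than most genomes
```

An 18-base site is therefore effectively unique even in a genome of billions of bases, which is what allows a single gene to be targeted cleanly.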

The future of this technology is exciting! It essentially allows humans to customize organisms for a specific purpose, or to a specific environment. It has tremendous implications for revolutionizing manufacturing and healthcare, eliminating hunger throughout the world, and bringing new industries into impoverished areas. The possibilities are limited only by our imaginations.



The Demise of Natural History – Or is it?

Carl Linnaeus (1707 – 1778) spent his life on one magnum opus – the Systema Naturae, a classification of all living things. In the process of formulating his classification system, he attempted to enumerate and describe every characteristic of many diverse individuals so he could group like with like, revealing astounding insights into how organisms are related to each other. His efforts established the field of taxonomy, which comprises the description, identification, nomenclature, and classification of organisms.

In order to do his work, Linnaeus needed access to as many specimens as he could get. These were provided by naturalists – people who traveled the world collecting and cataloging every conceivable organism: plants, animals, insects, etc. The profession of naturalist was considered refined and useful, and many well-off people took it up as a hobby. Several of Linnaeus’ students acted as naturalists on subsequent voyages of exploration, including the circumnavigations by Captain Cook. Charles Darwin’s participation as a naturalist on the famous Voyage of the Beagle and his subsequent publication of Origin of Species was responsible for a paradigm shift in modern biology.

While it has been estimated that there are some 8.7 million species of plants and animals on earth, only a fraction of that number, some 1.2 million, have been identified. Today’s naturalists certainly have their work cut out for them – it’s estimated that it would require nearly 500 years to catalog all of earth’s plants and animals, if it’s even possible. While some funding is still available for such efforts, it certainly isn’t plentiful, compared to that available for other endeavors. So it looks as if natural history, as people like Darwin and John James Audubon knew it, is dead in our modern age. Or is it?
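The 500-year figure follows from simple arithmetic. The species totals are the estimates quoted above; the description rate of 15,000 new species per year is my assumption, chosen to be consistent with that figure:

```python
estimated_species = 8.7e6   # estimated species on earth
described_species = 1.2e6   # species identified so far
rate_per_year = 15_000      # assumed new descriptions per year

remaining = estimated_species - described_species
years = remaining / rate_per_year
print(f"about {years:.0f} years to describe the rest")  # about 500 years
```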

Like many other scientific endeavors, the study of natural history has changed names and methodologies. Now it’s called molecular phylogeny, or phylogenomics. It’s characterized by the determination of the DNA sequences of genes, or entire genomes, from various organisms – this article about snakes in a recent issue of Science is but one example of this growing effort. The scientists doing this work are the contemporary equivalents of the 18th and 19th century naturalists who scoured the globe to provide specimens for the taxonomists.

The National Institutes of Health (NIH) has provided a central resource for this information called GenBank, which is now in its 25th year. However, this modern version of natural history is beset with a problem common to many areas of science these days – too much data being gathered too quickly. While GenBank is a significant and useful effort, no one data repository can serve the needs of all researchers. Multiple repositories, which store different kinds of data in different ways, are a necessity. But with multiple repositories a different challenge arises – how can the data present in the various repositories be integrated? Such an effort will require unprecedented international cooperation and sharing of both data and data processing resources.

The desired end result is a classification system like Systema Naturae, but based on molecular similarity rather than physical characteristics. Just as Linnaeus’ system provided new insights into the relatedness of organisms, this system will provide novel insights into the similarities and differences of the molecular processes that constitute life itself. The potential for new discoveries arising from this is immense, running the gamut from revolutionary treatments for diseases, to targeted genetic engineering of crops and other organisms adapted to specific environments. It’s an exciting time to be involved in a very old endeavor!



On Near-Earth Objects

A little less than a year ago, on 15 February 2013, a meteor approximately 20 meters in diameter exploded in an airburst about 60 kilometers from the Russian city of Chelyabinsk. An article in last week’s issue of Science provides an overview of the incident and a summary of its effects. Even though the energy of the explosion was some 25 times as powerful as the atomic bomb dropped on the Japanese city of Hiroshima near the end of World War II, the damage on the ground was fortunately minimal, mostly confined to broken windows and other superficial damage to buildings caused by the shockwave. There were nearly 1,200 injuries; most were minor burns or eye injuries from the fireball produced as the meteor burned up in the atmosphere, and a smaller number of people received cuts and abrasions from flying glass or debris. It could have been much, much worse: the Chelyabinsk area has a population of over 3 million people.

How much worse? A survey of the damage from a similar event that occurred in 1908, at Tunguska, in Russia, is a good barometer. A meteor approximately twice the size of the one that exploded over Chelyabinsk produced an explosion some 185 times as powerful as the Hiroshima bomb. Trees were felled by the explosion over an area of more than 800 square miles, and a man reported being burned by the fireball some 40 miles from the epicenter of the blast. Again, fortunately, Tunguska is a remote area, so injuries were few and damage to buildings was confined to a few villages. Only two deaths were reported as a result of the explosion. Had this event occurred over an area as populous as Chelyabinsk, the consequences are too terrible to contemplate.
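To put the two events on a common scale: taking the Hiroshima bomb at roughly 16 kilotons of TNT (an assumed figure – published estimates vary between about 13 and 18 kilotons) and applying the multipliers quoted above:

```python
HIROSHIMA_KT = 16  # assumed yield in kilotons of TNT

chelyabinsk_kt = 25 * HIROSHIMA_KT       # ~400 kilotons
tunguska_mt = 185 * HIROSHIMA_KT / 1000  # ~3 megatons
print(chelyabinsk_kt)         # 400
print(round(tunguska_mt, 1))  # 3.0
```

Even the smaller Chelyabinsk blast was comparable in yield to a large strategic nuclear warhead.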

While thousands of extraterrestrial objects enter Earth’s atmosphere each year, most simply burn up, causing no discernible effects on the surface. Even some that result in significant explosions go unnoticed, because they occur in very remote areas. A few make it all the way to Earth’s surface – for instance, it is estimated that some 4-6 tons of material from the Chelyabinsk strike did so, though not all in a single chunk. The larger the object, the less frequently one strikes the Earth, so significant impacts are mercifully rare. However, rare does not mean nonexistent.

In 1998, the NASA Near-Earth Object (NEO) Program Office was established to monitor potentially hazardous near-Earth objects. In 2005, the U.S. Congress finally provided specific funding for this activity, with the goal of cataloging 90% of potentially hazardous NEOs by 2020. Previously undiscovered NEOs (some of significant size) are discovered continually; however, none with a significant probability of a present-day impact has been found so far. The largest object discovered to date with a significant probability of hitting the Earth is 99942 Apophis, a 330-meter-wide asteroid. In 2004, it was assigned a 2.7% probability of colliding with Earth in 2029, but this was subsequently reduced to near zero after new information was gathered.

What actions would be taken if a NEO of destructive size, with a high probability of impact, were discovered? While this is a favorite theme for science fiction writers and Hollywood, actual alternatives are few. The option we are most capable of carrying out is simply to crash a spacecraft into the object, hoping to change its course enough to avert a collision with Earth. If the object is too big for that, the next best approach is nuclear explosion(s) on or below the surface, either to divert the object or to break it up into pieces that will have a less catastrophic effect if they strike. This seems to be the most effective approach with today’s technology. Installation of a device that would provide a “slow push” of constant acceleration in a safe direction has also been proposed, but this is the most expensive and least feasible approach using current technology.

Long warning periods (decades, at least) are generally required for the success of any scheme. NASA warns that 30-80 percent of potentially hazardous NEOs are too far away for launch systems that we currently have, or those that we plan to have in the foreseeable future, to reach.

Technology is a double-edged sword. Many see the invention of nuclear weapons as the beginning of the end for humanity. However, should a potentially lethal NEO target our planet, they could be our salvation. As current treaties prohibit the deployment of nuclear weapons in outer space, international cooperation is an absolute requirement for this option. One can only hope that it would be easily secured.



The Rise of the Superbugs

Until the discovery of the sulfa drugs and the antibiotic penicillin in the 1930s, a simple bacterial infection could easily be a life-threatening illness. Even the most trivial cut or scrape could be cause for alarm, should an infection occur. Soldiers feared infectious disease and wound infection more than death from battle trauma, because many more deaths resulted from the former. After the advent of the new wonder drugs, many thought that infectious disease would soon be a thing of the past. Then drug resistance began to rear its ugly head.

Drug resistance is a consequence of Darwin’s principle of natural selection. Bacteria that are resistant to antimicrobial agents are already present in the environment; exposure of a population to a particular agent results in proliferation of the resistant strains, because as the more prevalent sensitive strains die, more nutrients become available to the resistant bugs. Put another way, “everything is everywhere, but the environment selects”.

Drug resistance arises through spontaneous mutation; that is, a random change in the DNA of a particular organism. As the mutated cell divides, a small clone of drug-resistant organisms arises. That clone will likely be only a tiny fraction of the total population of bacteria present in a particular environment. However, if selective pressure is brought to bear (for example, a particular drug enters the environment), that small clone will multiply at the expense of everything else and become the dominant strain.
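A toy simulation shows how quickly selection can invert a population. The growth and kill rates below are invented for illustration – real values vary widely between organisms and drugs:

```python
def grow(sensitive, resistant, generations, drug_present):
    """Both strains double each generation; the drug kills 90% of the
    sensitive cells per generation and none of the resistant ones."""
    for _ in range(generations):
        sensitive *= 2
        resistant *= 2
        if drug_present:
            sensitive *= 0.1
    return sensitive, resistant

# Start with one resistant cell in a million.
s, r = grow(1_000_000, 1, generations=10, drug_present=True)
print(f"resistant fraction: {r / (s + r):.2%}")  # 99.99%
```

Without the drug, the resistant clone stays at one in a million; with it, ten generations suffice to make it essentially the entire population.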

To make matters worse, drug resistance can be passed around. The instructions for drug resistance are often contained on small, circular pieces of DNA called plasmids. These can be transferred between two bacteria of different species; for example, a species that is not infectious to humans can transfer the resistance to one that is.

As more and more drugs are released in the environment, more and more drug resistant organisms arise. Genes for drug resistance can also accumulate on a single plasmid, so by a single transfer of genetic material, one previously sensitive organism can become resistant to multiple drugs. This results in the so-called superbugs.

One superbug that has caused considerable trouble is MRSA – short for methicillin-resistant Staphylococcus aureus. S. aureus is a common inhabitant of human skin, and its presence is usually innocuous; however, it can cause infections if it invades the soft tissue beneath broken skin. These infections can become life-threatening if they move into the bloodstream. Since S. aureus is found on humans, MRSA can be prevalent in a hospital setting. Even though MRSA is resistant to multiple drugs, antimicrobial treatments for it do exist. However, the best strategy is prevention, by means of proper hygiene and good disinfection practices.

Another group of superbugs that the CDC has designated as an urgent threat is the CRE (carbapenem-resistant Enterobacteriaceae) bacteria. This group contains multiple species, including E. coli, a common inhabitant of the human gut, and Klebsiella pneumoniae, a human pathogen. The CDC has documented more than 9,000 infections in hospitals, with a mortality rate approaching 50%.

Sexually transmitted diseases have been a scourge of humanity since antiquity. One of the most common causes, a bug called Neisseria gonorrhoeae, has also been designated an urgent threat by the CDC, because strains resistant to numerous antimicrobial drugs are becoming prevalent. Untreated gonorrhea infections can cause infertility, and can be passed on to newborns.

Some scientists think that reducing the amount of antibiotics in the environment might slow the rise of superbugs. Doctors are cautioned not to prescribe antibiotics to patients unless necessary – for example, antibiotics are useless against viral infections and should not be used unless bacterial complications are likely. However, the majority of antibiotic use is veterinary. Animals raised in crowded conditions are more susceptible to disease than free-range animals, and routine administration of antibiotics in feed helps reduce infection and improves profits. However, it also creates environmental conditions favorable to the development of antibiotic resistance. Regulatory agencies are concerned about this problem, and new regulations prohibiting prophylactic antibiotic use in animals are being considered.

It is almost a foregone conclusion that we will see another global pandemic. Most experts think that this is likely to be viral, in which case, antibiotic resistance will not be a factor. But historically, some of the worst plagues were bacterial, and the death tolls they racked up in the pre-antibiotic era were staggering. However, the strength of humanity has always been its adaptability. A combination of prudent preventive measures and new technology will likely be our salvation.



Concerning Zombies

I must admit I wrestled with the idea of including the topic of zombies in a science blog. I have always disparaged the public’s penchant for lending the same credence to supernatural and pseudo-scientific ideas as it does to the results of scientific investigation, and I have no desire to encourage that tendency with this blog. However, when I discovered that no less a scientific authority than the U.S. Centers for Disease Control and Prevention (CDC) has a website addressing methods for surviving the coming zombie apocalypse, I reconsidered. What’s good enough for that august organization surely must be good enough for me!

The working definition of a zombie is a reanimated human being. That obviously supposes that the creature was once dead, and is now alive, or at least undead, if you prefer. Zombies are generally depicted as mindless eating machines with an inexplicable taste for human flesh, although some traditions attribute rudimentary intelligence to them, or nefarious intentions imparted by the evil sorcerer who controls them.

It is generally agreed that the current zombie craze started in 1968 with the release of George Romero’s Night of the Living Dead, although cinematic zombies have been around far longer. The redoubtable Bela Lugosi starred in White Zombie as early as 1932, and Howard Phillips Lovecraft published his classic story Herbert West – Reanimator ten years before that. At least one academician regards the current zombie fad as a symptom of societal illness, while others see it as a coping strategy for our deepest fears. However, it is a fact that humans have believed since antiquity that the dead can return as monsters to plague the living – Haitian and African zombie legends and the vampire legends of Eastern Europe are well-known examples. Whatever the cause, it is undeniable that the current zombie craze is big business; one estimate puts its contribution to the economy as high as 5 billion dollars.

It is interesting to consider whether there is any scientific basis for the existence of zombies. While the definition of death has changed over the years with the advancement of medical technology, it is obvious that a condition characterized by the irreversible cessation of all biological functions exists. Nevertheless, modern technology allows recovery from many conditions that would have been mortal not so many years ago. Cases of spontaneous recovery from seemingly fatal conditions were apparently common enough in Victorian times that the fear of premature burial became pervasive, even leading to the manufacture of coffins that could be opened from the inside.

Numerous toxins and diseases also exist that could create the appearance of a dead man returning to wreak havoc on the living. Not all cases of intoxication or infection with these agents produce identical symptoms, but a particular symptom complex that would give the appearance of reanimation can certainly be imagined. A person living in a primitive society, having no information to the contrary, might well attribute such symptoms to supernatural causes.

Ergot is a fungus that grows on grain crops when climatic conditions are favorable. Ergot poisoning of populations that consumed grain from the same contaminated source has been documented, and implicated in apparent outbreaks of lycanthropy in Europe, as well as in the accusations of witchcraft in Salem, Massachusetts, in the 1600s. The symptoms of ergotism are both neurological and dermal (it causes gangrene), so it is easy to see how a case of ergot poisoning could evoke the image of a rotting dead thing with a shuffling gait.

There are several modern drugs that could induce a similar state. One of the scariest is a narcotic known as Krokodil (desomorphine), a street drug in Russia and Eastern Europe that has recently appeared in America. The preferred route of administration for Krokodil is subcutaneous (i.e., skin-popping). Since its manufacture is clandestine, purification is minimal, and the final product sold to addicts is generally heavily contaminated with toxic and even corrosive substances, which cause necrosis of the skin surrounding the injection sites. A committed user could easily resemble a decomposing, undead creature with minimal awareness. Another opioid, known as black tar heroin, can have similar effects. Like Krokodil, it can contain dangerous impurities, and it has been documented to destroy veins when injected intravenously, which leads many users to prefer subcutaneous injection. However, subcutaneous use can predispose users to necrotizing skin infections.

Scopolamine is another illicit drug being used as a weapon by criminals, because intoxication essentially causes a loss of conscious choice. Much anecdotal evidence suggests that it is employed on a significant scale in South America to cause victims to give money, sexual favors and other things to perpetrators who surreptitiously dose them. It isn’t documented to have dermal effects, but it does induce a state of mindlessness, allowing the kind of control over its victims ascribed to the juju man who creates a zombie. Other so-called date-rape drugs (e.g., flunitrazepam (Rohypnol), GHB (gamma-hydroxybutyric acid), and ketamine) are used voluntarily as street drugs or administered surreptitiously to induce similar effects.

Several diseases also produce symptoms similar to zombie behavior. In theory, any disease that causes delirium could put a sufferer into a zombie-like state. Mental or neurological conditions, such as somnambulism or catatonia, can produce similar outcomes. An accompanying skin disorder would complete the picture. Indeed, infectious disease is the vehicle favored by fiction writers to explain the occurrence of a zombie plague, although no communicable disease has yet been documented to reproducibly produce such a set of symptoms in a large population of infected individuals. So, regardless of the CDC’s trepidations, it doesn’t look as if a widespread zombie plague is on the horizon anytime soon.

Still, how can it hurt to be prepared?


Photo credit: Kevin Conor Keller / CC BY-NC-ND