Mars and the Future of Humanity

Mars. The Red Planet. It has fascinated humans since antiquity. It was identified so long ago that there is no record of its discovery, and it is easy to observe, appearing periodically as a bright morning or evening star even when the sky is too light to see most other stars. References to Mars can be found in the writings of the ancient Egyptians, Greeks, and Romans. Galileo Galilei was the first person to observe Mars telescopically, in the 17th century. In 1906, the famed astronomer Percival Lowell published a book entitled Mars and Its Canals that fueled popular fascination with the planet and led to speculation that it was inhabited. Not long afterwards, Edgar Rice Burroughs published Under the Moons of Mars, the first installment of his famous Barsoom series, and introduced the world to the redoubtable John Carter, the ravishing Martian princess Dejah Thoris, multi-armed green men, and the other fantastic inhabitants of that dying world.

Exploration of Mars has been part of NASA’s mission since the agency was first created. Indeed, many considered the Apollo moon missions a prelude to a manned mission to Mars, possibly as early as the 1980s. Unfortunately, after the Apollo 11 moon landing, public support for expensive manned space exploration gradually waned, and the Apollo program was cancelled after the flight of Apollo 17. But it appears that NASA has not given up entirely on a manned mission to Mars.

Such projects are always dependent on political considerations when they are funded by public money. However, there is another effort to put humans on Mars that is privately funded. The Mars One program is a nonprofit project with the goal of establishing a permanent human settlement on Mars by 2025, beating NASA by at least five years. They have already begun an astronaut selection program, and are planning the construction of a facsimile of the Martian base on Earth, for use in training and for publicity.

Ethical considerations concerning a manned mission to Mars, and space exploration in general, abound. Astronauts will face potentially dangerous conditions both in transit and after landing on other planets. Some people consider the harvesting of natural resources from other planets, or the changing of another planet’s environment to suit human needs, to be unethical. Mars One plans to finance its project in part by selling broadcasting rights, and at least one observer has wondered whether such a broadcast will celebrate the triumph of humanity or simply publicize a tragedy.

Then there is the larger question of the future of humanity as a species. Altruists tell us that it is the responsibility of those of us living now to exercise good stewardship of the planet, so that those who come after us will be able to live here too. But it is an undeniable fact that planets, like people, have a lifespan, with a beginning and an end. The Second Law of Thermodynamics tells us that environmental degradation of the Earth is inevitable, whether from natural processes or human activity – the only question is how long it will take for the Earth to become uninhabitable for humankind. When that day comes, the survival of Homo sapiens (or whatever species we have evolved into) will depend on whether some of us have managed to leave the third rock from the sun. And that may be the greatest ethical concern of all.

Mars

Photo credit: gunnsteinlye / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)


A Mission of Gravity

One of the seminal novels to come out of the Golden Age of science fiction was a book by Hal Clement entitled Mission of Gravity. It tells of the adventures of some inhabitants of the strange world Mesklin, whose gravity varies from 3g (three times Earth’s gravity) at its equator to roughly 700g at its poles. The novel proved fascinating to many readers, and has remained in print more or less continuously since its publication in 1953.
Perhaps one reason for the novel’s enduring popularity is that gravity is the one of the four fundamental forces of nature that we are most familiar with, yet it is arguably the strangest. Three of these forces – the weak force, electromagnetism, and the strong force – have been unified (i.e., shown to have a common mathematical basis), but so far gravity has stubbornly resisted all efforts at unification.
Classical mechanics, Isaac Newton’s theory of physics, assumed that gravity acted at a distance – an instantaneous force that was not mediated by anything, simply a property of the universe. Albert Einstein’s General Theory of Relativity contradicted that view. Einstein described gravity as a warping of spacetime created by massive bodies, a warping that in turn acts on those bodies. He predicted that disturbances in this warping would propagate as waves traveling at the speed of light, transporting gravitational radiation in a manner analogous to electromagnetic waves. However, gravitational waves are incredibly weak and could not be detected by existing instrumentation.
Another difference between gravity and the other three fundamental forces is that quantum mechanical descriptions of the weak force, electromagnetism, and the strong force have been developed. Quantum mechanics is the branch of physics that describes the behavior of systems at extremely small scales. Each of the three forces has been found to be propagated by a specific messenger particle. Physicists have hypothesized an analogous messenger particle for gravity, dubbed the graviton, but, like gravitational waves, the graviton has never been detected. A quantum theory of gravity likewise remains elusive.
Things changed radically on Monday, March 17, 2014, when scientists involved with the BICEP2 project announced the first direct evidence of primordial gravitational waves. BICEP2 is essentially a telescope designed to detect changes in the polarization of light. The light – or, more exactly, the electromagnetic radiation – that BICEP2 detects is the Cosmic Microwave Background (CMB), the radiative signature left by the Big Bang. Using BICEP2, the scientists mapped the pattern of polarization in the CMB, and concluded that primordial gravitational waves were the only plausible explanation for that pattern.
The BICEP2 result strongly suggests that gravity must be quantum mechanical, because the polarization patterns BICEP2 observed would have to originate from quantum fluctuations in spacetime itself.
BICEP2 looked at only a small portion of the sky – a sector about 15° by 60°. Scientists expect that even larger gravitational wave patterns exist, but seeing them will require an instrument that can observe the entire sky – a spacecraft rather than an earthbound telescope. Scientists hope that, if these larger patterns are present, they may finally be able to test competing theories of quantum gravity, such as string theory and loop quantum gravity.
You can watch the BICEP2 press conference here.

BICEP2


Are We Lucky?

“I’d rather be lucky than smart!” This hackneyed adage is usually delivered sarcastically, in response to someone who has just won the lottery, gotten the big promotion, or left the casino with a pile of cash. The implication is that, if one is lucky, working hard and taking risks are unnecessary – just wait, and good fortune will come along. If one is unlucky, then industry and risk-taking are all for naught. It’s a philosophy that can lead to nihilism and despair quite quickly.

So, what is luck? It is undeniable that good things seem to happen for no perceivable reason, as do adverse events. For millennia, people have attributed such things to supernatural causes, and have attempted to control them by propitiating gods or by seeking talismans that would grant them that nebulous quality called luck. Others believe that the universe is fundamentally random, and that luck is largely a matter of perception. Of course, it’s easier to take the latter view if one is lucky…

The concept of luck has been a fruitful subject for science fiction. Larry Niven explored it in his Known Space stories, in which an alien race actually tried to breed lucky humans by providing incentives for lucky people to mate with each other. J.K. Rowling, the fantastically successful author of the Harry Potter novels, created a magical potion that conferred good fortune on the imbiber, leading one to wonder if she consumed some herself.

A good definition of luck is the occurrence of a preponderance of improbable events over a short time frame; whether that luck is good or bad is a value judgment based on the observer’s perceptions. This definition assumes that the events in question are statistically independent – that is, the occurrence of one event does not affect the probability of a subsequent event. Misperception of statistical independence can lead to the Gambler’s Fallacy, which has caused much misery over the ages.

However, probability theory offers a theorem called the Law of Large Numbers. Simply stated, it says that, given a very large number of trials, the average number of observed occurrences of a particular result approaches the expected value of those occurrences. Consider an event with a probability of occurrence of one in a million. This means that in one million observations you should, on average, observe the event one time; the expected value in this case is one. Is it possible that the event will not be observed at all? Certainly! It is also possible that the event will be observed more than once. As the number of observations increases, so does the expected number of occurrences, and as the number of observations grows without bound, observing the event at least once becomes a virtual certainty.
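
To see the Law of Large Numbers in action, here is a minimal Python sketch; the one-in-a-million probability and the trial counts are arbitrary illustrative choices, not data from any real experiment.

```python
import numpy as np

# Probability of the rare event: one in a million (illustrative choice).
p_event = 1e-6
rng = np.random.default_rng(42)

# As the number of independent trials grows, the observed frequency
# approaches the true probability (the Law of Large Numbers), while the
# expected *count* of occurrences (n * p) keeps growing.
for n in (1_000_000, 10_000_000, 100_000_000):
    hits = rng.binomial(n, p_event)  # number of times the rare event occurred
    print(f"{n:>11,d} trials: {hits:3d} occurrences, "
          f"expected {n * p_event:,.0f}, "
          f"observed frequency {hits / n:.2e}")
```

Run it a few times with different seeds and you will see the counts bounce around the expected value for small trial counts, then settle toward it as the number of trials grows.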

The geologist David Waltham has written a book called Lucky Planet, which will be published later this year. The thesis of Lucky Planet is that the Earth, due to a highly improbable run of favorable climatic conditions, experienced conditions amenable to the genesis of intelligent life. This supports the Rare Earth Hypothesis, a possible explanation of why we have not observed any other intelligent life in our universe.

In 2009, NASA launched the Kepler space telescope, whose mission was to search for extrasolar planets in the habitable zones of their parent stars. To date, NASA scientists have confirmed 961 planets discovered by Kepler, and have extrapolated those results to estimate that the number of earthlike planets in the Milky Way galaxy likely runs into the tens of billions. Estimates of the number of galaxies in the observable universe run into the hundreds of billions.
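
As a back-of-the-envelope version of that math, here is a minimal Python sketch; the round numbers are assumptions loosely based on the estimates above, and applying the Milky Way figure to every galaxy is itself a large assumption.

```python
# Deliberately round, assumed figures: "tens of billions" of earthlike
# planets per galaxy and "hundreds of billions" of galaxies.
earthlike_per_galaxy = 20e9   # assume ~20 billion, as an order of magnitude
galaxies = 200e9              # assume ~200 billion galaxies

earthlike_planets = earthlike_per_galaxy * galaxies
print(f"Roughly {earthlike_planets:.0e} earthlike planets "
      "in the observable universe")   # about 4e+21
```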

So, while the earth may indeed be a lucky planet, it doesn’t seem likely we’re alone in the universe. Do the math.

Roulette wheel

Håkan Dahlström / Foter / CC BY


Metabolomics, Elephants and Blind Men

A couple of weeks ago I attended the NIH Eastern Regional Comprehensive Metabolomics Resource Core (RCMRC) Symposium at the Research Triangle Institute, in Research Triangle Park, North Carolina. This was an all-day event focused on the fascinating field of metabolomics.

Metabolomics is the study of metabolism via the analysis of the low molecular weight compounds present in organs, tissues, cells, or body fluids. These molecules are metabolites – products of the biochemical pathways that constitute the processes of life. The suffix -omics refers to a branch of biology that studies a biological system in its entirety; for example, genomics is the study of the entire genome, and proteomics is the study of all of the proteins in an organism. Indeed, some have even attempted to extend the use of -omics outside of biology. Metabolomics attempts to study metabolism as a whole by analyzing metabolic profiles, as characterized by all of the metabolites present in a particular system. It is thus one of the so-called “big data” sciences, because of the massive amounts of data generated in some metabolomics studies.

The concept behind metabolomics is simple. Take a biological sample from an organism, get as much of it into solution as you can, then identify as many of the small molecules it contains as possible, using some automated analytical method. What exactly is a small molecule? There is some discussion around that point, but the term is generally agreed to refer to chemicals with a molecular weight of 1 kilodalton or less. The analysis creates a metabolic snapshot of the sample in a particular state, at a particular moment. If you change the state in some identifiable way – for example, by exposing the organism to a toxic substance – and create another snapshot, you can then examine the differences between the two states. From these differences you can infer the effect of the exposure on the metabolism of the organism as a whole.

Of course, the process really isn’t simple. To account for biological variability, multiple samples from the same state must be analyzed. Difficulties arise in solubilizing the sample. Different analytical methods identify different subsets of metabolites. And the number of metabolites identified will be large – at least hundreds, and possibly thousands, of substances from each sample – so a computer is necessary to characterize the differences.
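
As a rough illustration of the kind of comparison such software performs, here is a minimal Python sketch, assuming the raw spectra have already been reduced to a table of metabolite intensities; the metabolite names and replicate values are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

# Invented, already-normalized replicate intensities: rows are metabolites,
# columns are replicate samples from each state.
metabolites = ["glucose", "lactate", "glutamate", "citrate"]
control = np.array([
    [10.1, 9.8, 10.3],   # glucose
    [5.2, 5.0, 5.4],     # lactate
    [7.7, 7.9, 7.6],     # glutamate
    [3.1, 3.0, 3.2],     # citrate
])
exposed = np.array([
    [9.9, 10.0, 10.2],
    [8.1, 7.8, 8.4],
    [7.8, 7.7, 7.9],
    [1.9, 2.1, 2.0],
])

# For each metabolite, compute the log2 fold change between states and a
# two-sample t-test p-value; large changes with small p-values are the
# candidates for follow-up.
for name, ctrl, expo in zip(metabolites, control, exposed):
    log2_fc = np.log2(expo.mean() / ctrl.mean())
    t_stat, p_value = ttest_ind(expo, ctrl)
    print(f"{name:10s} log2 fold change = {log2_fc:+.2f}, p = {p_value:.3f}")
```

In a real study, thousands of metabolites would be compared at once, with corrections for multiple testing, but the underlying idea is the same: look for the features that differ between the two snapshots.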

One of the most common analytical methods used to analyze samples for metabolomics studies is an old one: nuclear magnetic resonance (NMR) spectroscopy. This method, developed in the 1940s and 50s, takes advantage of the fact that an atomic nucleus, which behaves like a tiny spinning magnet, absorbs and re-emits electromagnetic radiation in a magnetic field at a resonance frequency specific to the strength of that field and to the nucleus’s local chemical environment. NMR patterns for a great many chemicals have been identified and cataloged in the ensuing years, making the method well suited to automated analysis by computer.

Metabolomics studies have proved important in many scientific disciplines. In medicine, for example, specific disease states, such as cancer, can be identified by the changes they cause in the metabolic profile of body fluids. This allows reproducible, and sometimes earlier, diagnosis, and earlier initiation of treatment, which can mean a greater chance of success. Metabolomic profiles can be used to differentiate groups of people according to their diets, or to indicate exposure to an environmental toxicant. These findings can be correlated with the prevalence of particular conditions (e.g., obesity, diabetes) to help understand the effects of diet or chemical exposure on those conditions. Metabolomics studies can also help identify effective new drugs, by determining which compounds reverse the specific changes in metabolic profiles that are indicative of disease states. Metabolomics has also been used to detect product adulteration in botanicals such as ginseng and herbal extracts, by analysis of the small molecule profiles of the products.

Metabolomics is another example of the technological advances made possible by interfacing computers with standard analytical techniques. It allows scientists to investigate a biological process in its entirety, rather than focusing on a small part and then trying to infer what the whole is like, as in the story of the blind men and the elephant. It will contribute greatly to our understanding of life, as well as to human health and well-being.

 

Elephant


Non-prescription Pain Relievers

One of my favorite stories in American literature is a chapter in Mark Twain’s iconic classic, Tom Sawyer, entitled The Cat and the Pain Killer. It’s about Tom’s solution to a problem – his Aunt Polly has discovered a new medicine called Pain Killer, with which she has taken to dosing him on a daily basis, and the stuff tastes simply horrible. So Tom conceives a plan. He pretends to be fond of it, and makes a nuisance of himself asking for it, until his aunt tells him just to help himself. He does so with gusto, except that he pours the medicine down a crack in the floor instead of down his throat. All goes well, until one day when the family cat happens by as Tom is dosing the crack.

The cat, thinking everything that humans have must be good, meows plaintively until Tom decides to give him a dose. After receiving it, the hapless animal goes on a tear about the house, doubtless brought on by the terrible taste and the considerable alcoholic content of the Pain Killer. Tom’s Aunt Polly forces him to confess what he did, and castigates him for cruelty to animals, after which Tom points out that what is cruel to a cat might just be cruel to a boy as well. His aunt relents, and Tom gets dosed with the Pain Killer no more.

While the story is poignant, it also illustrates well the state of the pharmaceutical industry in the 19th century. Drugs were unregulated, so anyone was free to mix up a concoction to treat any ailment and hawk it to the public. Brews such as Pain Killer were common, and probably effective too, as they most likely contained liberal amounts of alcohol and opiates. Too-frequent use for minor ailments like headaches or muscle pain from overexertion was likely to lead to a far more serious problem: addiction. Non-addictive drugs to suppress pain simply did not exist.

In 1897, a German chemist named Felix Hoffmann, working for Bayer, synthesized what was to become the first wonder drug of the new century: acetylsalicylic acid, also known as aspirin. In addition to being quite an effective pain reliever, aspirin also had antipyretic (fever-reducing) properties. Many years after its invention, aspirin was also found to be effective in the treatment of, and protection against, certain kinds of heart attacks, because of its anti-clotting and anti-inflammatory properties. However, aspirin was not a perfect drug – its side effects included stomach upset and a tendency to cause stomach ulcers with chronic use. It was also implicated in a rare but potentially fatal disorder called Reye’s syndrome, which typically occurs in children recovering from a viral illness such as chicken pox or flu; the strong association with aspirin use has caused aspirin to fall out of favor as the go-to drug for relief of minor pains.

Around the same time as the invention of aspirin, a class of chemicals known as aniline derivatives was being investigated for antipyretic effects. In 1893, a German physician, Joseph von Mering, tested one such derivative, paracetamol, in patients and found it effective. However, it was not until the 1950s that the chemical was effectively marketed. In addition to its effectiveness as a fever reducer, paracetamol was also found efficacious as a pain reliever. In the United States, paracetamol is called acetaminophen; the most famous brand name of this drug is Tylenol.

Many doctors and patients prefer acetaminophen for pain relief because it does not irritate the stomach as other pain relievers do. It also has few interactions with other drugs, making it a popular choice for patients on a multi-drug regimen. However, acetaminophen can cause serious, even fatal, liver failure if taken at too high a dose or in combination with alcohol. Moreover, the therapeutic dose of acetaminophen (the dose that has a beneficial effect) is quite close to the toxic dose, at which liver damage occurs. As little as 25% over the recommended maximum dose, taken over a few days, can cause liver failure. This is a problem because many nonprescription formulations contain acetaminophen, such as multi-symptom cold relievers and headache powders. Many people do not read labels carefully, and end up taking a dangerous dose of acetaminophen when using two such products simultaneously. Acetaminophen is also included in formulations of more powerful pain relievers such as Percocet and Vicodin, so adding extra acetaminophen when taking drugs like these is also dangerous.
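
To illustrate how quickly acetaminophen from multiple products can add up, here is a minimal Python sketch; the product names, per-dose amounts, and the 4,000 mg daily ceiling are assumptions made for the sake of the arithmetic, not dosing guidance.

```python
# Illustrative only: the figures below are assumptions, not medical advice.
DAILY_CEILING_MG = 4000  # a commonly cited daily maximum for healthy adults

# A hypothetical day of overlapping products:
# (product, acetaminophen per dose in mg, number of doses taken)
doses = [
    ("extra-strength headache tablets", 500, 6),
    ("multi-symptom cold relief", 325, 4),
    ("combination prescription pain reliever", 325, 2),
]

total_mg = sum(mg_per_dose * n for _, mg_per_dose, n in doses)
for name, mg, n in doses:
    print(f"  {name}: {n} doses x {mg} mg = {n * mg} mg")
print(f"Total acetaminophen: {total_mg} mg "
      f"({total_mg / DAILY_CEILING_MG:.0%} of a {DAILY_CEILING_MG} mg ceiling)")
```

In this made-up example the total comes to 4,950 mg, well over the assumed ceiling, without the person ever taking more than the labeled dose of any single product.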

Another class of non-prescription pain killers is the non-steroidal anti-inflammatory drugs, or NSAIDs. Examples include ibuprofen (brand names Motrin and Advil) and naproxen sodium (Aleve). Aspirin is also considered an NSAID, but is usually placed in a class by itself. The principal side effects of these drugs are gastrointestinal, and regular use has been associated with a higher risk of heart attack or stroke.

All of these non-prescription pain killers have a similar mechanism of action – inhibition of the production of prostaglandins, which are substances produced in the body that promote inflammation.

All of these drugs have their adherents, and all appear to be comparably safe when used in moderation. They are surely one of the blessings of modern technology. In times past, the chronic pain that accompanied advancing age offered only the choice between constant suffering and addiction. Because of these drugs, we have a far better choice today.

Headache

Photo credit: Brandon Koger / Foter / CC BY-NC-SA


Deadliest Poison or Wonder Drug?

“The dose makes the poison.” This maxim is credited to Paracelsus, who is known as the father of toxicology. Another way of saying the same thing is, “Everything is poisonous. It just depends on the dose.”

The lethality of a substance is indicated by the dose required to kill. Toxicologists measure this by the LD50; that is, the dose required to kill 50% of the animals that the substance is tested in. A substance can be administered to the test animals in various ways (e.g., injected, ingested, inhaled), but we won’t worry about that for this discussion. The LD50 is given in milligrams per kilogram of body weight. So if a substance has an LD50 of 5 mg/kg in mice, it would take, on average, 0.1 mg to kill a 20 g mouse. That is a very small amount (0.1 mg is about 0.000004 oz), so this substance is a very deadly poison.

Which is the deadliest poison? Another way to frame the question is: which poison will kill at the lowest dose? There are several candidates for this dubious honor, but a strong contender is botulinum neurotoxin (BoNT), a substance produced by the bacterium Clostridium botulinum. The LD50 of BoNT is estimated at around 1 to 2 nanograms per kilogram in humans; a nanogram is 0.000001 milligram.
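
To make these numbers concrete, here is a minimal Python sketch of the LD50 arithmetic described above; the 70 kg human body weight is an illustrative assumption.

```python
def lethal_dose_mg(ld50_mg_per_kg: float, body_weight_kg: float) -> float:
    """Average lethal dose in milligrams, given an LD50 in mg/kg."""
    return ld50_mg_per_kg * body_weight_kg

# The 5 mg/kg example from above, for a 20 g (0.02 kg) mouse.
print(lethal_dose_mg(5.0, 0.02))   # 0.1 mg

# BoNT at roughly 1 ng/kg (0.000001 mg/kg), for a 70 kg adult (assumed weight).
print(lethal_dose_mg(1e-6, 70))    # 0.00007 mg, i.e. about 70 nanograms
```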

Botulism is well known to most people as a foodborne illness. Because C. botulinum spores are virtually everywhere, and the bacterium thrives in an environment without oxygen, cases of BoNT poisoning can arise from improper canning procedures. BoNT poisoning is a risk any time food is stored and allowed to spoil in a low-oxygen environment – other foods implicated in botulism outbreaks include smoked meats and fish, foil-wrapped baked potatoes, and cream cheese. Thankfully, foodborne botulism is very rare in the United States, but less so in developing countries.

Botulinum toxin is a neurotoxin – that is, it exerts its toxic effect on nerve cells. Specifically, BoNT irreversibly blocks the release of a chemical messenger, acetylcholine, from nerve endings. Acetylcholine is what triggers muscular contraction; in its absence, a muscle remains in a permanently relaxed state. A single BoNT molecule acts catalytically, disabling many of the proteins a nerve ending needs to release acetylcholine, which accounts in part for its extreme toxicity. The overt sign of BoNT poisoning is a descending flaccid paralysis – muscle relaxation that begins with the facial muscles and progresses downward through the body. When the paralysis reaches the diaphragm, the afflicted person can no longer breathe, and will die without medical intervention. With prompt diagnosis and treatment, the mortality rate for botulism is fairly low (about 5%); without medical intervention, almost three out of four of those poisoned will die.

However, BoNT is also well known for another reason. Marketed as Botox Cosmetic® by Allergan, it is famous as an anti-wrinkle treatment used by millions worldwide. The reason such a deadly toxin can have such a benign effect is simple: the dose. A very dilute formulation of BoNT, which tends to stay localized in the area where it is injected, relaxes muscles just enough to make wrinkles go away. The effect is not permanent, but it is long-lasting (about three to four months). And while cosmetic use may be BoNT’s most lucrative application, it is not its most important.

Before BoNT was approved by the FDA for cosmetic use, it was approved for other, more serious conditions. It is a palliative for muscle spasms of many types, and is effective in treating eye ailments (blepharospasm, or uncontrolled eyelid twitching, and strabismus, or crossed eyes), migraine headaches, and even hyperhidrosis (excessive underarm sweating).

But BoNT is being used for even more than these approved conditions. Off-label use occurs when a doctor prescribes an approved drug for a condition for which it has not been approved, but for which the prescribing physician deems it effective. This practice is perfectly legal. Allergan has filed patents covering more than 90 such uses. Off-label uses include conditions such as enlarged prostate, oily skin, sinus and other non-migraine headaches, fibromyalgia, and nocturnal teeth grinding and jaw clenching.

Is the term “wonder drug” too strong for a substance with so many applications? Regardless, BoNT is a wonderful illustration of Paracelsus’ maxim that “The dose makes the poison.”

Botox

AJC1 / Foter / CC BY-NC


The Future of Toxicology

Last week, I was privileged to attend the Future Tox II conference in Chapel Hill, NC. This conference, sponsored by the Society of Toxicology, focused on recent advances in in vitro and in silico toxicology, as directed toward improvements in predictive toxicology. The two-day conference drew more than 200 scientists from 13 countries in North and South America, Europe, and Asia. Twenty-two talks by invited experts were supplemented by 50 posters presenting results from ongoing research projects, providing a snapshot of the current state of the science.

To understand the scope of Future Tox II, a brief look at the history of toxicology is helpful. Toxicology is the science of poisons, concerned with all aspects of toxic substances – where they occur, why they are toxic, and what symptoms they produce. Predictive toxicology seeks to determine how much of a particular substance one must be exposed to, and for how long, to cause a particular toxic effect. In the 20th century, predictive toxicology became extremely important as governments established agencies to monitor and regulate toxic substances in foods, drugs, cosmetics, and the environment as a whole, in order to improve public health and safety. Examples of these agencies in the U.S. are the Food and Drug Administration, the Environmental Protection Agency, and the Consumer Product Safety Commission. Analogous agencies exist in other industrialized countries worldwide.

Historically, the toxic effects of substances were largely determined by testing them on animals (in vivo methods), then attempting to extrapolate the findings to human beings. While this system was imperfect, it was substantially better than simply monitoring deleterious effects in the population and trying to identify the responsible toxic substances after the fact. However, animal testing has many drawbacks. It is time-consuming and expensive, and results obtained in animals do not always extrapolate well to humans. Additionally, some people consider animal testing unethical. For these and other reasons, toxicologists are actively seeking non-animal methods for toxicity testing.

Non-animal methods fall into two broad classes: in vitro and in silico methods. In vitro methods use cells or other components derived from animals, rather than live animals or their organs, as test subjects. In silico methods use computer analysis of existing data to arrive at conclusions. An advantage of in vitro methods is that biological material derived from humans, instead of other species, can be used for experiments. Similarly, in silico methods can use data derived from humans instead of animals. A disadvantage of both methodologies is that they operate at a lower level of biological organization than whole-animal studies, so an extrapolation problem remains. However, some in silico models do attempt to represent higher biological levels such as tissues and organs, or specific environments for ecotoxicological studies.

Toxicologists are currently employing in vitro and in silico methods to define adverse outcome pathways (AOPs) for diverse toxic effects. An AOP links a temporal series of events to an observed toxic effect of a particular substance, beginning with a unique molecular initiating event. Once the AOP for a particular substance has been worked out, toxicologists have a mechanistic explanation for its toxic effect. The ultimate goal is to use that mechanistic explanation to predict how much of a substance an organism must be exposed to in order to produce the toxic effect, which in turn allows the toxicologist to predict allowable levels of the substance in foods, products, or the environment. Furthermore, once the AOP for a particular toxic effect has been defined, it is reasonable to assume that any chemical that can be linked to its molecular initiating event will cause the same effect, so in theory, groups of related chemicals can be defined without the need for laboratory testing. This is extremely important, because the number of potentially toxic chemicals entering the environment is so large that it is impossible to test each one individually.
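
As a rough sketch of the idea (the pathway and chemicals below are generic, hypothetical examples, not an established AOP), an AOP can be represented in code as a molecular initiating event, an ordered chain of key events, and an adverse outcome, with chemicals linked to the initiating event inheriting the predicted outcome:

```python
from dataclasses import dataclass, field

@dataclass
class AdverseOutcomePathway:
    """A simplified AOP: a molecular initiating event (MIE) followed by an
    ordered chain of key events leading to an adverse outcome."""
    molecular_initiating_event: str
    key_events: list[str]
    adverse_outcome: str
    linked_chemicals: set[str] = field(default_factory=set)

    def link_chemical(self, chemical: str) -> None:
        """Record that a chemical has been shown to trigger the MIE."""
        self.linked_chemicals.add(chemical)

    def predicted_outcome(self, chemical: str) -> str | None:
        """If the chemical is linked to the MIE, predict the same outcome."""
        return self.adverse_outcome if chemical in self.linked_chemicals else None

# A generic, hypothetical pathway for illustration only.
aop = AdverseOutcomePathway(
    molecular_initiating_event="binding to receptor X",
    key_events=["altered gene expression", "cell injury", "organ dysfunction"],
    adverse_outcome="organ toxicity",
)
aop.link_chemical("chemical A")
print(aop.predicted_outcome("chemical A"))  # "organ toxicity"
print(aop.predicted_outcome("chemical B"))  # None – no link established
```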

The task these scientists have set for themselves is daunting. A thorough understanding of AOPs – and of the biochemical pathways they perturb, which are ultimately the building blocks of living systems – will translate into improved human health and safety, as well as a more complete understanding of life itself. Future Tox III, which will take place in about two years, is already in the planning stages. It is through continuous, organized, and sustained efforts like this that science will continue to ensure that humanity’s future is a bright one.

Test tubes

Photo credit: proteinbiochemist / Foter / CC BY-NC
