Zika and Risk

As the Zika virus spreads north from South America, Central America, and the Caribbean, the list of public health recommendations and scientific unknowns continues to grow. Zika is not new; it was first identified in 1947 in Uganda, and although scientists have found consistent evidence of antibodies in primates since then, few documented cases were reported in humans until recently. Current statistics are grim: the virus has now been confirmed in over thirty countries in the region, with hundreds, perhaps thousands, of additional cases likely in the coming months as mosquito season peaks in the Northern Hemisphere.

Although Zika has been linked to a number of health issues, including fever, joint pain, and Guillain-Barré syndrome, most adults who are infected will have mild symptoms, if any, and no lasting effects. The risks for pregnant women, however, are more severe. The virus has been found to cause microcephaly, a condition in which babies are born with abnormally small heads, leading to brain damage and developmental issues. Mounting fears of a virus for which no vaccine or cure exists are prompting increasingly dire warnings from public health agencies, including the World Health Organization, which recommends that pregnant women avoid traveling to areas of ongoing Zika transmission. Officials in a number of affected countries have advised women to postpone pregnancy for a period of months or years; in El Salvador, health ministers have told women not to get pregnant until 2018.

While we know that Zika causes microcephaly, a deeper understanding of the ways in which the virus works is severely lacking. Take the following catalog of unknowns from the website of the Centers for Disease Control and Prevention:

"If a pregnant woman is exposed

  • We don't know how likely she is to get Zika.

If a pregnant woman is infected

  • We don't know how the virus will affect her or her pregnancy
  • We don't know how likely it is that Zika will pass to her fetus.
  • We don't know if the fetus is infected, if the fetus will develop birth defects.
  • We don't know when in pregnancy the infection might cause harm to the fetus.
  • We don't know whether her baby will have birth defects."

While I’m by no means trying to minimize the implications of having a baby that tests positive for Zika or a child with microcephaly, I find that the uncompromising public health recommendations around the virus’s transmission reflect less the absolute risk to a pregnant woman (which we lack the information to determine conclusively) than the inadequacy of what medicine can offer in the event of infection. The anxiety surrounding the virus is understandable: pregnancy is already a 40-week state of perpetual uncertainty that entails a constant balancing of inputs against outcomes. As with alcohol, which pregnant American women are advised to avoid entirely, and caffeine, which they are urged to limit, there is no established safe level of exposure to Zika; one must assume that a developing fetus is at risk, even if the mechanism of infection is not fully understood.

I realize that the calculation of risk will be different for women who travel to areas of active Zika transmission and those who reside there. I’m also aware that birth control and abortion are not available to most women in a number of affected countries, including Brazil, and sexual violence and coercion mean that many women are not fully in control of their sexuality. Zika may not be a new disease, but it is a newly emerging threat, and millions of women who are pregnant, thinking of becoming pregnant, or simply of childbearing age will have to weigh questions of risk and responsibility as they make essential decisions about travel and reproduction.

Football and the Risk of Concussion

I’ve been thinking a lot about risk lately. In medicine and public health, it’s an idea that’s always present, usually invoked in the service of disease prevention. Over the years, the way the concept of risk has been framed has changed as the major causes of mortality have shifted from infectious to chronic disease. In the eighteenth and nineteenth centuries, an epidemic of cholera or yellow fever might have been seen as a way to separate acceptable citizens from unacceptable ones, a division premised on some combination of ethnicity, race, religion, class, and moral principles. More recently, public health recommendations have focused on lifestyle practices that can reduce our risk of developing cancer, heart disease, and other chronic illnesses with multifactorial causes.


I’m interested in how we experience risk and how this shapes the decisions we make about what to eat, where to live, the types of behaviors we engage in and the situations we’re comfortable with. How does each of us choose to respond to a series of unknowns about, for instance, the dangers of genetically modified food, the possible link between cellphone radiation and cancer, or the relationship between pesticides and hormonal imbalances? If Alzheimer’s disease runs in your family, what do you do to decrease the chances you’ll develop it? If you’re diagnosed with a precancerous condition that may or may not become invasive, do you remove the suspicious cells immediately or wait to see if they spread? What does it mean for our bodies to be constantly at risk, under threat from sources both known and unknown that we cannot see or regulate?

In the upcoming months, I’ll be exploring these ideas and more in a series of essays on risk. My premise is that the ways in which we choose to deal with risk are fundamentally about control, and are aimed at preserving the illusion that we have command over disease outcomes in a world ruled by randomness and unpredictability. Cancer screenings, lifestyle habits, and the other behaviors we adopt to stay healthy are an attempt to reduce our risk, to make the uncertain certain, to bring what’s unknown into the realm of the foreseeable. As a way of managing the future, this approach assumes a linearity of outcomes: if I engage in x behavior, then I will prevent y disease. It assumes that illness can be reduced to a series of inputs and corresponding outputs, and that wellness is more than a game of chance or a spin of a roulette wheel. The boundaries of what we consider reasonable measures to embrace for the sake of our health will differ for each of us based on our individual tolerance for ambiguity and what we consider an acceptable level of risk. As I delve into an investigation of the relationship between risk and health, the underlying question I’ll be concerned with is this: what level of uncertainty can each of us live with, and how does it affect our behavior?

So here goes, my first essay in an ongoing series on risk.

With the Denver Broncos’ 24-10 victory over the Carolina Panthers in Super Bowl 50, the 2015 football season came to its much-hyped conclusion. I didn’t watch the game, but I have been following closely any public health news involving the National Football League. Just days before the Super Bowl, the family of Ken Stabler, a former NFL quarterback, announced that he had suffered from chronic traumatic encephalopathy (CTE), a degenerative brain disease that can trigger memory loss, erratic behavior, aggression, depression, and poor impulse control. The most prominent quarterback to be diagnosed so far, Stabler joins Junior Seau, Frank Gifford, Mike Webster, and over 100 former players found to have the disease, which is caused by repeated brain trauma and can only be confirmed after death by a physical examination of the brain. Retired NFL players suffer from numerous chronic injuries that affect their physical and mental well-being: in addition to multiple concussions, there are torn ligaments, dislocated joints, and repeated broken bones that can no longer be managed effectively by cortisone injections and off-the-field treatments. Many athletes end up addicted to painkillers; some, like Seau, commit suicide or die from drug overdoses, isolated from family and friends. One particularly moving article in the New York Times profiled Willie Wood, a 79-year-old former safety for the Green Bay Packers who was part of the most memorable play of Super Bowl I, yet can no longer recall that he was in the NFL. And incidents of domestic abuse against the partners and spouses of players continue to make headlines, including the unforgettable video of Ray Rice knocking his then-fiancée unconscious in an elevator at an Atlantic City casino.

Despite these controversies, football remains enormously popular in the United States. Revenue for the NFL was $11 billion in 2014, and league commissioner Roger Goodell pocketed $34 million in compensation that year. The NFL has managed to spin the concussion issue in a way that paints the league as highly concerned about player safety. Goodell touts the 39 safety-related rules he has implemented during his tenure, and the settlement last fall in a class-action lawsuit brought by former players set up a compensation fund to cover certain medical expenses for retired athletes (although some criticized the deal because it doesn’t address symptoms of CTE in those who are still alive). Increasing awareness of the danger of concussions has prompted discussions about how to make the game safer for young athletes. One approach that’s been floated is to have players scrimmage and run drills without helmets and protective padding, forcing them to treat each other gently in practice while saving the vigorous tackles for game day. The Ivy League just agreed to eliminate full-contact hitting from all practices during the regular season, a policy that Dartmouth College adopted in 2010. And earlier this week, the NFL’s top health and safety official finally acknowledged the link between football and CTE after years of equivocating on the subject.

But controlled violence is such a central aspect of football that I wonder how much the sport and its culture can be altered without changing its underlying appeal. Would football be a profoundly different thing with the adoption of protocols that reduce the likelihood of concussions and other injuries? How much room is there for change within the game that football has become? Players continue to get bigger and stronger, putting up impressive stats at younger and younger ages. My friend’s nephew, a standout high school player in Texas and a Division I college prospect, was 6’2” and weighed 220 pounds when he reported as a freshman for pre-season training—numbers that I imagine will only expand as he continues to train, and ones that players around him will have to match in order to remain competitive.

With mounting knowledge of the link between football and degenerative brain disease, I’m interested in the level of risk that’s acceptable in a sport where serious acute and chronic injuries are increasingly the norm. In a recent CNN town hall, Florida senator Marco Rubio asserted that football teaches kids important life lessons about teamwork and fair play, and pointed out that there are risks inherent in plenty of activities we engage in, such as driving a car. True enough, but driving a car is an essential part of daily life for many of us, which means that we have little choice but to assume the associated risks. Football is voluntary. I realize that for some, football is less than completely voluntary, from children who face parental pressure to professional athletes who feel compelled to remain in the game because they’re supporting families or facing limited options outside the sport. Still, playing football is a risk assumed by choice to a greater degree than driving or riding in a car, the dominant form of transportation in our suburbanized communities. And if risk reduction is about attempting to control for uncertainty, then the accumulating evidence about CTE and other severe injuries is sure to change the calculus of how parents and players assess participation in a sport where lifelong mental and physical disabilities are not just possible, but probable.

Johnson & Johnson’s Baby Powder: Harmless Household Product or Lethal Carcinogen?

On Monday night, a jury in St. Louis awarded $72 million to the family of a woman who died of ovarian cancer after using Johnson & Johnson’s baby powder and other talcum-based products for years. The verdict came after a three-week trial in which lawyers for the plaintiff, an Alabama woman named Jacqueline Fox, argued that Johnson & Johnson had known of the dangers of talcum powder since the 1980s and concealed the risks. The corporation’s lawyers countered by saying the safety of talcum powder was supported by decades of scientific evidence and there was no direct proof of causation between its products and Fox’s cancer.

Fox used Johnson & Johnson’s baby powder and another talc-based product called Shower to Shower for 35 years. “It just became second nature, like brushing your teeth,” her son said. “It’s a household name.” The company has come under fire in recent years from consumer safety groups for the use of questionable ingredients in its products, including formaldehyde and 1,4-dioxane, both of which are considered likely carcinogens. Fox’s case was the first to reach a monetary award among some 1,200 lawsuits pending nationally against the company.

The case bears a notable resemblance to the lawsuits against the tobacco companies, with attorneys for the plaintiff and the defendant each taking a page from their respective side’s playbook. Fox’s lawyers claimed that Johnson & Johnson’s own medical consultants warned in internal documents of the risk of ovarian cancer from hygienic talc use, just as tobacco companies knew for decades that smoking caused lung cancer but sought to suppress the evidence. And the pharmaceutical giant responded as the tobacco industry did in the numerous lawsuits it faced in the 1980s and 1990s: by creating doubt about the mechanism of cancer causation and upholding the safety of its products.

I find this case uniquely disturbing because the image of Johnson & Johnson’s baby powder as a household product that evokes a sense of comfort and protection is so at odds with the jury’s finding that it caused or contributed to a case of fatal ovarian cancer. The company appears to be right in claiming that the scientific evidence is inconclusive: some studies have shown a slightly increased risk of ovarian cancer among women who use products containing talcum powder, while others have found no link. It’s important to note that until the 1970s some talcum-based products were contaminated with asbestos, so people who used them before that time may have been exposed to a known carcinogen. Still, the research is unsettled enough that the American Cancer Society advises people who are concerned about talcum powder to avoid using it “[u]ntil more information is available.”

Without examining the trial transcripts or interviewing the jurors, it’s impossible to know for sure what factors influenced the verdict. I imagine the tobacco settlements have irrevocably changed the environment surrounding these types of lawsuits—that there’s a sizable segment of the American public which is understandably suspicious of large corporations trying to conceal research about the health risks of their products. I suspect there’s also an element of causation and blame at work here, about wanting to assign responsibility for a disease that remains, for the most part, perplexing and impenetrable. We all make choices that affect our health on a daily basis, from what kind of shampoo to use to what to eat for lunch, and we want assurance that the repercussions of the decisions we make with good intentions will be in our best interest. But as the unprecedented $72 million verdict shows, we have an immense uneasiness about the dangers lurking behind the most benign-seeming household products. And we fear that those products, rather than benefiting us, will instead do us harm.

Shooting the Moon on Cancer

During his final State of the Union address on January 12th, President Obama announced a “moonshot” to cure cancer and appointed Vice President Biden to lead it. The announcement is reminiscent of Nixon’s 1971 War on Cancer, which was envisioned as an all-out effort to eradicate the disease by marshaling the kinds of resources and scientific knowledge that two years earlier had sent a man to the moon. By most measures we’re much better off now than we were four decades ago: cancer treatments have improved drastically, people are living longer after diagnosis, and mortality rates have been falling since their peak in the early 1990s. But as anyone who has been touched by cancer can attest—and in the United States, that’s nearly all of us—the war is far from over.

Biden, whose son died of brain cancer last year, outlined a plan that’s essentially twofold: to increase public and private resources in the fight against cancer, and to promote cooperation among the various individuals, organizations and institutions working in cancer research. The initiative will likely lead to increased funding for the National Institutes of Health, a prospect that has many scientists giddy with anticipation. But the complexities of cancer, which are now much clearer than in 1971, underscore the multiple challenges confronting us. On NPR, one researcher described how the same form of cancer can act differently in different people because of the immense number of genetic distinctions between us. And in the New York Times, Gina Kolata and Gardiner Harris pointed out that the moonshot reflects an outmoded view of cancer as one disease rather than hundreds, and the idea of discovering a single cure is therefore “misleading and outdated.”

Nixon’s initiative signaled a faith in the certainty of scientific progress against a disease that many regarded with dread. In polls, articles, and letters, Americans at the time debated whether they’d want to be told of a cancer diagnosis and worried about being in close contact with cancer patients. The disease’s many unknowns generated fears of contracting it, desperation about the pain and debilitation associated with it, and plenty of unorthodox cures (this was, after all, the era of laetrile and Krebiozen).

Much has changed in the intervening decades, and if you were diagnosed with a form of cancer today, you’d undoubtedly have a better prognosis than in 1971. But one aspect of cancer in American culture has not changed, and that’s the mystique surrounding the disease. Cancer is not the biggest cause of death in the US—heart disease takes top honors—but it remains the most feared. It occupies an outsize place in the landscape of health and wellness, suffering and death. As such, it demands a bold approach. Winning the “war on cancer” would necessitate breaking down the disease into types and subtypes: not cancer, but cancers; not cancer, but ductal carcinoma in situ and acute myeloid leukemia and retinoblastoma. But this would dilute its power as a singular cultural force, an adversary that (the thinking goes) with a massive coordinated input of resources could be vanquished once and for all.

Photo credit: Cecil Fox, National Cancer Institute

Biden’s moonshot doesn’t just reproduce an outmoded idea of cancer; it is dependent upon it. It also promotes research and therapies in search of a cure at the expense of prevention, which he fails to mention a single time. Innovative new treatments that showcase scientific advancement are flashy and exciting, unlike lifestyle recommendations around nutrition or exercise, or widespread public health efforts to reduce the presence of environmental carcinogens. There’s also the issue of how to measure progress with a goal as elusive as curing cancer. Is it in decreased incidence or mortality rates? In lowering the number of new diagnoses to zero? Perhaps the moonshot should instead focus on reducing human suffering associated with cancers by emphasizing prevention and addressing inequalities that affect health and health outcomes. It’s an objective that’s unquestionably less spectacular than curing cancer, but certainly more achievable.


Sources:

James T. Patterson, The Dread Disease: Cancer and Modern American Culture. Harvard University Press, 1987.