Johnson & Johnson’s Baby Powder: Harmless Household Product or Lethal Carcinogen?

On Monday night, a jury in St. Louis awarded $72 million to the family of a woman who died of ovarian cancer after using Johnson & Johnson’s baby powder and other talc-based products for years. The verdict came after a three-week trial in which lawyers for the plaintiff, an Alabama woman named Jacqueline Fox, argued that Johnson & Johnson had known of the dangers of talcum powder since the 1980s and concealed the risks. The corporation’s lawyers countered that the safety of talcum powder was supported by decades of scientific evidence and that there was no direct proof its products had caused Fox’s cancer.

Fox used Johnson & Johnson’s baby powder and another talc-based product called Shower to Shower for 35 years. “It just became second nature, like brushing your teeth,” her son said. “It’s a household name.” The company has come under fire in recent years from consumer safety groups for the use of questionable ingredients in its products, including formaldehyde and 1,4-dioxane, both of which are considered likely carcinogens. Fox’s case was the first to reach a monetary award among some 1,200 lawsuits pending nationally against the company.

The case bears a notable resemblance to the lawsuits against the tobacco companies, with attorneys for the plaintiff and the defendant each taking a page from their respective side’s playbook. Fox’s lawyers claimed that Johnson & Johnson’s own medical consultants warned in internal documents of the risk of ovarian cancer from hygienic talc use, just as tobacco companies knew for decades that smoking caused lung cancer but sought to suppress the evidence. And the pharmaceutical giant responded as the tobacco industry did in the numerous lawsuits it faced in the 1980s and 1990s: by creating doubt about the mechanism of cancer causation and upholding the safety of its products.

I find this case uniquely disturbing because the image of Johnson & Johnson’s baby powder as a household product that evokes a sense of comfort and protection is so at odds with the jury’s finding that it caused or contributed to a case of fatal ovarian cancer. The company appears to be right in claiming that the scientific evidence is inconclusive: some studies have shown a slightly increased risk of ovarian cancer among women who use products containing talcum powder, while others have found no link. It’s worth noting that until the 1970s talc-based products contained asbestos, so people who used them before that time were exposed to a known carcinogen. Still, the research is unsettled enough that the American Cancer Society advises people who are concerned about talcum powder to avoid using it “[u]ntil more information is available.”

Without examining the trial transcripts or interviewing the jurors, it’s impossible to know for sure what factors influenced the verdict. I imagine the tobacco settlements have irrevocably changed the environment surrounding these types of lawsuits—that there’s a sizable segment of the American public that is understandably suspicious of large corporations concealing research about the health risks of their products. I suspect there’s also an element of causation and blame at work here, a desire to assign responsibility for a disease that remains, for the most part, perplexing and impenetrable. We all make choices that affect our health on a daily basis, from what kind of shampoo to use to what to eat for lunch, and we want assurance that the decisions we make in good faith won’t work against us. But as the unprecedented $72 million verdict shows, we harbor an immense uneasiness about the dangers lurking behind the most benign-seeming household products. And we fear that those products, rather than benefiting us, will instead do us harm.

Shooting the Moon on Cancer

During his final State of the Union address on January 12th, President Obama announced a “moonshot” to cure cancer and appointed Vice President Biden to lead it. The announcement is reminiscent of Nixon’s 1971 War on Cancer, which was envisioned as an all-out effort to eradicate the disease by marshaling the kinds of resources and scientific knowledge that two years earlier had sent a man to the moon. By most measures we’re much better off now than we were four decades ago: cancer treatments have improved drastically, people are living longer after diagnosis, and mortality rates have been falling since their peak in the early 1990s. But as anyone who has been touched by cancer can attest—and in the United States, that’s nearly all of us—the war is far from over.

Biden, whose son died of brain cancer last year, outlined a plan that’s essentially twofold: to increase public and private resources in the fight against cancer, and to promote cooperation among the various individuals, organizations and institutions working in cancer research. The initiative will likely lead to increased funding for the National Institutes of Health, a prospect that has many scientists giddy with anticipation. But the complexities of cancer, which are now much clearer than in 1971, underscore the multiple challenges confronting us. On NPR, one researcher described how the same form of cancer can act differently in different people because of the immense number of genetic distinctions between us. And in the New York Times, Gina Kolata and Gardiner Harris pointed out that the moonshot reflects an outmoded view of cancer as one disease rather than hundreds, and the idea of discovering a single cure is therefore “misleading and outdated.”

Nixon’s initiative signaled a faith in the certainty of scientific progress to combat a disease that many regarded with dread. In polls, articles, and letters, Americans at the time debated whether they’d want to be told of a cancer diagnosis and worried about being in close contact with cancer patients. The disease’s many unknowns generated fears of contracting it, desperation about the pain and debilitation associated with it, and plenty of unorthodox cures (this was, after all, the era of laetrile and Krebiozen).

Much has changed in the intervening decades, and if you’re diagnosed with cancer today, you’ll almost certainly have a better prognosis than you would have had in 1971. But one aspect of cancer in American culture has not changed, and that’s the mystique surrounding the disease. Cancer is not the biggest cause of death in the US—heart disease takes top honors—but it remains the most feared. It occupies an outsize place in the landscape of health and wellness, suffering and death. As such, it demands a bold approach. Winning the “war on cancer” would necessitate breaking down the disease into types and subtypes: not cancer but cancers; not cancer but ductal carcinoma in situ and acute myeloid leukemia and retinoblastoma. But this would dilute its power as a singular cultural force, an adversary that (the thinking goes) could be vanquished once and for all with a massive coordinated input of resources.

Photo credit: Cecil Fox, National Cancer Institute

Biden’s moonshot doesn’t just reproduce an outmoded idea of cancer; it depends upon it. It also promotes research and therapies in search of a cure at the expense of prevention, which he failed to mention even once. Innovative new treatments that showcase scientific advancement are flashy and exciting, unlike lifestyle recommendations around nutrition or exercise, or widespread public health efforts to reduce the presence of environmental carcinogens. There’s also the issue of how to measure progress toward a goal as elusive as curing cancer. Is it in decreased incidence or mortality rates? In lowering the number of new diagnoses to zero? Perhaps the moonshot should instead focus on reducing the human suffering associated with cancers by emphasizing prevention and addressing inequalities that affect health and health outcomes. It’s an objective that’s unquestionably less spectacular than curing cancer, but certainly more achievable.


Sources:

James T. Patterson, The Dread Disease: Cancer and Modern American Culture. Harvard University Press, 1987.


History on Screen: The Knick, William Halsted, and Breast Cancer Surgery

Recently I watched the first episode of The Knick, a new series on Cinemax that revolves around the goings-on at a fictitious hospital in turn-of-the-century New York. It stars Clive Owen as Dr. John Thackery, a brilliant and arrogant surgeon who treats his coworkers contemptuously but earns their grudging respect because he’s so darn good at his job. I’ve read that the show draws on the collections and expertise of Stanley Burns, who runs the Burns Archive of historical photographs. As a medical historian, I suppose it’s an occupational inevitability that I would view The Knick with an eye toward accuracy. Mercifully, I found the show’s depiction of the state of medicine and public health at the time to be largely appropriate: the overcrowded tenements, the immigrant mother with incurable tuberculosis, the post-surgical infections that physicians were powerless to treat in an age before antibiotics. I was a bit surprised by one scene in which Thackery and his colleagues operate in an open surgical theater, their sleeves rolled up and street clothes covered by sterile aprons as they dig their ungloved hands into a patient; while not strictly anachronistic, these practices were certainly on their way out in 1900. But overall, I was gratified to see that the show’s producers seem to be taking the medical history side of things seriously, even if they inject a hefty dose—or overdose—of drama.

A temperamental genius, Thackery thrives on difficult situations that call for quick thinking and improvisation. He pioneers innovative techniques, often in the midst of demanding surgeries, and invents a new type of clamp when he can’t find one to suit his needs. He is also a drug addict who patronizes opium dens and injects himself with liquid cocaine on his way to work. The character appears to be based on William Stewart Halsted, an American surgeon known for all of these qualities, right down to the drug addiction. Born in 1852 to a Puritan family from Long Island, he attended Andover and Yale, where he was an indifferent student, and the College of Physicians and Surgeons, where he excelled. After additional training in Europe, he returned to the US to begin his surgical career, first in New York City, then at Johns Hopkins Medical School. In addition to performing one of the first blood transfusions and being among the first to insist on an aseptic surgical environment, he was famously a cocaine addict, having begun experimenting with the drug as an anesthetic. His colleagues covered for his erratic behavior, looking the other way when he arrived late for operations or missed work for days or weeks at a time. Twice he was shipped off to the nineteenth-century version of rehab, where doctors countered his cocaine addiction by dosing him with morphine. Although Halsted remained an addict all his life, he managed his condition well enough that by the time he died in 1922 he was considered one of the country’s preeminent surgeons and the founder of modern surgery.

Halsted pioneered another modern innovation, as well: the overtreatment of breast cancer. In the late nineteenth century, women often waited until the disease had reached an advanced stage before seeking medical treatment. As historian Robert A. Aronowitz writes, clinicians “generally estimated the size of women’s breast tumors on their initial visit as being the size of one or another bird egg.” When cancer was this far along, the prognosis was poor: more than 60 percent of patients experienced a local recurrence after surgery, according to figures compiled by Halsted.

In the 1880s, Halsted began working on a way to address these recurrences. Like his contemporaries, he assumed that cancer started as a local disease and spread outward in a logical, orderly fashion, invading the closest lymph nodes first before dispersing to outlying tissues. In this view, recurrences were the result of a surgeon acting too conservatively, not removing enough tissue and leaving cancerous cells behind. The procedure he developed, which would become known as the Halsted radical mastectomy, removed the entire breast, underarm lymph nodes, and both chest muscles en bloc, or in one piece, without cutting into the tumor at all. Halsted claimed astonishing success with his operation, reporting in 1895 a local recurrence rate of six percent. Several years later, he compiled additional data that, while less impressive than his earlier results, still outshone what other surgeons were accomplishing with less extensive operations: 52 percent of his patients lived three years without a local or regional recurrence.

By 1915, the Halsted radical mastectomy had become the standard operation for breast cancer in all stages, early to late. Physicians in subsequent decades would push Halsted’s procedure even further, going to ever more extreme lengths in pursuit of cancerous cells. At Memorial Sloan-Kettering Hospital in New York, George T. Pack and his student, Jerome Urban, spent the 1950s promoting the superradical mastectomy, a five-hour procedure in which the surgeon removed the breast, underarm lymph nodes, chest muscles, several ribs, and part of the sternum before pulling the remaining breast over the hole in the chest wall and suturing the entire thing closed. Other surgeons performed bilateral oophorectomies on women with breast cancer, removing both ovaries in an attempt to cut off the estrogen that fed some tumors. While neither of these procedures became a widely utilized treatment for the disease, they illustrate the increasingly militarized mindset of cancer doctors who saw their mission in heroic terms and considered a woman’s state of mind following the loss of a breast, and perhaps several other body parts, to be, at best, a negligible consideration.

The Halsted radical mastectomy was on its way out by the late 1970s; within a few years, it would account for less than five percent of breast cancer surgeries. The demise of Halsted’s eponymous operation had several causes. First, data from cancer survivors showed that the procedure was no more effective at reducing mortality than simple mastectomy, or mastectomy combined with radiation. Second, the radical mastectomy was highly disfiguring, leaving women with a deformed chest where the breast had been, hollow areas beneath the clavicle and underarm, and lymphedema, or swelling of the arm following the removal of lymph nodes. As the women’s health movement expanded in the 1970s, patients grew more vocal about insisting on less disabling treatments, such as lumpectomies and simple mastectomies.

William Stewart Halsted

Halsted’s life and the state of surgery, medicine and public health at the turn of the twentieth century are a rich source of material for a television series, with the built-in drama of epidemic diseases, inadequate treatments, and high mortality rates. But Halsted’s legacy is complicated. He pushed his field forward and introduced innovations, such as surgical gloves, that led to better and safer conditions for patients. But he also became the standard-bearer for an aggressive approach to breast cancer that in many cases resulted in overtreatment. The Halsted radical mastectomy undoubtedly prevented thousands of women from dying of breast cancer, but for others with small tumors or less advanced disease it was surely excessive. And hidden behind the statistics of the number of lives saved were actual women who had to live with the physical and emotional scars of a deforming surgery. The figure of the heroic doctor may still be with us, but the mutilated bodies left behind have been forgotten.


Sources:

Robert A. Aronowitz, Unnatural History: Breast Cancer and American Society. Cambridge University Press, 2007.

Barron H. Lerner, The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America. Oxford University Press, 2001.

Howard Markel, An Anatomy of Addiction: Sigmund Freud, William Halsted, and the Miracle Drug Cocaine. Pantheon, 2011.

James S. Olson, Bathsheba’s Breast: Women, Cancer & History. Johns Hopkins University Press, 2002.

What's in a Name?


Last week, the World Health Organization issued guidelines for naming new human infectious diseases. Concerned about the potential for disease names to negatively impact regions, economies, and people, the organization urged those who report on emerging diseases to adopt designations that are “scientifically sound and socially acceptable.” “This may seem like a trivial issue to some,” said Dr. Keiji Fukuda, Assistant Director-General for Health Security, “but disease names really do matter to the people who are directly affected. We’ve seen certain disease names provoke a backlash against members of particular religious or ethnic communities, create unjustified barriers to travel, commerce and trade, and trigger needless slaughtering of food animals. This can have serious consequences for people’s lives and livelihoods.”

According to the new guidelines, the following should be avoided: geographic locations (Lyme disease, Middle East Respiratory Syndrome, Rocky Mountain Spotted Fever, Spanish influenza, Japanese encephalitis); people’s names (Creutzfeldt-Jakob disease, Lou Gehrig’s disease, Alzheimer’s); animal species (swine flu, monkeypox); references to an industry or occupation (Legionnaires’ disease); and terms that incite undue fear (fatal, unknown, epidemic).

Instead, the WHO recommends generic descriptions based on the primary symptoms (respiratory disease, neurologic syndrome, watery diarrhea); affected groups (infant, juvenile, adult); seasonality (winter, summer); the name of the pathogen, if known (influenza, salmonella); and an “arbitrary identifier” (alpha, beta, a, b, I, II, III, 1, 2, 3).

Stigmatization caused by disease names is a legitimate concern; the way an appellation is chosen can have very real consequences for a community. It can alter perceptions of who is susceptible, which in turn can affect how doctors make their diagnoses and devise plans for treatment. It can shape social attitudes toward both patients and those who remain disease-free, and it can influence decisions about research and funding. When AIDS first emerged in the United States in the early 1980s, it was named GRID, or Gay-Related Immune Deficiency, a measure of the extent to which it was associated with gay men. While gay and bisexual men remain the group most severely affected by HIV today, the disease’s original name undoubtedly shaped public perceptions of who was—and wasn’t—at risk.

But stigmatization can also happen apart from the process of naming a disease, a matter that the WHO guidelines would do nothing to address. In 2003, an outbreak of SARS (Severe Acute Respiratory Syndrome) in China, Vietnam and Hong Kong led to widespread stigmatization of Asian American communities as people avoided Chinatowns, Asian restaurants and supermarkets, and sometimes Asians themselves. The 1983 classification of Haitians as a high-risk group for HIV by the Centers for Disease Control and Prevention prompted a backlash against people of Haitian descent, and from 1991 to 1994 the US government quarantined nearly 300 HIV-positive Haitian refugees at Guantanamo Bay, Cuba. And then there are the diseases that have been renamed in an attempt to destigmatize them, although their new monikers would be considered unsuitable under the WHO guidelines. Leprosy, for example, is often referred to as Hansen’s disease, particularly in Hawaii, where the contagious, highly disfiguring illness devastated families and led to the establishment of disease settlements on the islands.

I’m not in favor of stigmatization, but as someone who studies the history and sociology of illness, I can’t help but wonder if something will be lost if the WHO’s recommendations are widely adopted. A disease name can influence its place in the public consciousness; it can simultaneously bring to mind a particular location or person and a constellation of symptoms. A single word, poetic in its succinctness, can suggest a range of images and associations—biological, psychological, political, and cultural. Would Ebola have the same resonance if it were called viral hemorrhagic fever? How much of our perception of Lou Gehrig’s disease, also known as amyotrophic lateral sclerosis, involves our knowledge of the tragic physical decline of the once formidable Yankees slugger?

There are, of course, plenty of evocative diseases that don’t contain a geographic location or person’s name: polio, for instance, or cholera. But the WHO guidelines all but guarantee that the names for emerging diseases, while scientifically accurate and non-stigmatizing, will be cumbersome, clunky designations that do little to capture the public imagination. After all, who remembers the great A(H1N1)pdm09 pandemic of 2009?