Our Diseases, Our Selves

Over the past few weeks, I’ve been following coverage of the Institute of Medicine’s recent recommendation of a new name and new diagnostic criteria for chronic fatigue syndrome. In a 250+ page report, the IOM, the health arm of the National Academy of Sciences, proposed that the disease be renamed “systemic exertion intolerance disease.” This would link it more closely with its central feature while distancing it from a designation many patients see as both demeaning and dismissive of the serious impairment that can accompany the condition. It’s a move that has been applauded by a number of advocates and researchers who study the disease, although others caution that more work is needed to develop a definitive test, as well as medications that can effectively treat it.


The disorder, which is also called myalgic encephalomyelitis (ME) and often abbreviated ME/CFS, is characterized by persistent fatigue lasting for more than six months, muscle and joint pain, unrefreshing sleep, and post-exertional malaise. Estimates of the number of affected Americans vary widely, from 836,000 to as many as 2.5 million.* I was struck by the divergence of these numbers, as well as by the following statistic, which might explain why: the disease goes undiagnosed in an estimated 84 to 91 percent of patients. This could be the result of physicians’ lack of familiarity with ME/CFS, doubt about the seriousness of symptoms, or a belief that the patient is making up or exaggerating the extent of the illness. But regardless of your perspective on the disease, that’s an alarming rate of underdiagnosis.

As I’ve been perusing the responses from patients and comments from the public debating the nature of the disorder, I’ve noticed that reactions to the IOM recommendations tend to fall into one of two camps. One group is sympathetic to the disease and its sufferers, urging compassion, education, and continued research; not surprisingly, this group seems to consist mainly of patients with ME/CFS, people who have friends or relatives with it, and physicians who treat them. The second group sees patients as malingerers who are overstating their symptoms to get special consideration; they blame our modern lifestyle for inducing widespread fatigue in our population and point to the lack of a conclusive diagnostic test as evidence that the disease doesn’t exist.

All of this brings me to the following question, which I think is relevant not just to the current discussion but to the entire enterprise of Western medicine: what makes a disease “real”? When are diseases worthy of sympathy and concern, insurance reimbursement, research money, and pharmaceutical therapies, and when are they considered to exist only within a patient’s imagination? Few people in the twenty-first century would dispute, for instance, that pneumonia, malaria, and yellow fever are caused by particular pathogens, their presence in the body detectable through various tests of blood and fluids. But what about conditions for which we have not yet identified a specific pathology? Does the lack of a clear mechanism for the causation of a disease mean that those who are affected by it are suffering any less? Are a patient’s perceived symptoms enough for an ailment to be considered “real”?

I’m distinguishing here between “disease,” which is a pathological condition that produces a particular set of markers of suboptimal health in an individual, and “illness,” which is the patient’s experience of that disease. Naming a disease confers legitimacy; being diagnosed with it assigns validity to a patient’s suffering, gives him a disease identity, and connects him with a community of the afflicted. And if naming a disease confers a degree of legitimacy, then outlining a biological mechanism for it bestows even more. Disorders with an identifiable pathology are “real,” while all others are suspect. But this process is subject to revision. As the history of medicine shows us, a number of conditions are now considered real that were once thought to be caused by a lack of morality and self-control, among them alcoholism and addiction. Others, including hysteria, chlorosis, neurasthenia, and homosexuality, were once classified as diseases but are no longer recognized as such.

“Disease,” as Charles Rosenberg reminds us, “does not exist until we have agreed that it does, by perceiving, naming, and responding to it.” It always occurs within a social context and makes little sense outside the cultural environment in which it is embedded. That is why, to varying degrees, what physicians are responding to is a patient’s subjective assessment of how she is experiencing disease: the level of pain, the physical disability, the fatigue, the fever, the extent to which an ailment is interfering with her life.

To say that diseases can exist independently of us is to misunderstand their fundamental nature as human concepts and social actors. They are not mere biological events, but are made legible and assigned meaning through our system of fears, morals, and values. Whether the proposed name change from chronic fatigue syndrome to systemic exertion intolerance disease will lead to greater acceptance of the disorder and those who suffer from it remains to be seen. But it has drawn attention to the process by which we define and name diseases. The ways in which we explain their causation and assign responsibility and blame set forth standards for acceptable behavior and delineate the boundaries of what we consider normal. Our relationship with disease reveals how we understand ourselves as a society. All diseases are therefore both not real and real: not real in the sense that they wouldn’t exist without us, and real because we have agreed that they do.


*By way of comparison, about 5 million Americans are currently living with Alzheimer’s and about 1.2 million have HIV.


Sources:

Charles E. Rosenberg and Janet Golden, eds., Framing Disease: Studies in Cultural History. New Brunswick, NJ: Rutgers University Press, 1992.


The Coerciveness of Public Health

This morning I awoke to the news that Chris Christie, the governor of New Jersey, thinks parents should have a choice about whether to vaccinate their children. He has since backtracked on his statement and affirmed his support for vaccination. But as a measles outbreak spreads across California, Arizona, and twelve other states, it’s exposing the tension between personal autonomy and community well-being that’s an ever-present part of the doctrine of public health.

[Image: needle]

The current measles outbreak most likely started when a single infected individual visited Disneyland over the holidays, exposing thousands of vacationers to a highly communicable disease that the CDC declared eliminated from the U.S. in 2000. At another time—say, ten years ago—the outbreak might have been contained to a handful of cases. But as numerous media outlets have reported, immunization rates have been dropping in recent years, particularly in wealthy enclaves where parents still believe the debunked link between vaccines and autism, aim for a toxin-free lifestyle, or distrust Big Pharma and the vaccine industrial complex.

I am young enough to have benefited from the scientific advances that led to widespread immunization in the 1970s, and old enough to have parents who both had measles as children and can recall the dread surrounding polio when they were growing up. Vaccines are a clear example of how public health is supposed to work. One of the unambiguous successes of twentieth-century public health, they have transformed ailments such as pertussis, diphtheria, and chickenpox from fearsome childhood afflictions that could cause lifelong complications, and even death, into preventable diseases.

The basic premise of public health is the prevention of disease, and public health guidelines have led to increased life expectancy and decreased incidence of communicable illnesses, as well as some chronic ones. Yet public health regulations have always had to balance individual civil liberties with public safety. People are free to make their own choices, as long as they don’t infringe on the public good. For the most part you’re still allowed to smoke in your own home (although your neighbors could sue you for it), but you can’t subject me to your secondhand smoke in restaurants, bars, or office buildings.

I believe in handwashing, USDA inspections, the use of seatbelts, and the pasteurization of milk. I believe in quarantines when they are based on the best available information and are applied evenly. (A quarantine that isolates all travelers from West Africa who have symptoms of Ebola would be reasonable; one that singles out black Africans from anywhere on the continent regardless of health status would not.) In short, I am in favor of a coercive public health apparatus. The problem with the current measles outbreak is that enforcement has become too lax, with too many states allowing parents to opt out of immunizing their children because of ill-conceived beliefs that are incompatible with the public good.

Every parent spends a lifetime making choices about how to raise their child, from environment and lifestyle to moral and ethical guidance. But some choices have a greater capacity to impact the lives of others. If you want to let your child run around with scissors, watch R-rated movies, and eat nothing but pork rinds all day, you can. If you want to home-school your child because you want greater control over the curriculum he or she is being taught, you’re free to do that, too. And if you want to keep your child from getting vaccinated against communicable diseases, then the state won’t step in to force you. Opting out of vaccinations might not make you a bad parent any more than raising a fried-snack fiend might. But unless you’re planning to spend your days in physical isolation from every other human on the planet, it does make you a bad member of the public.

Your Beard Is Full of Tuberculosis

[Image: Victorian beard]

On a crowded L train to Williamsburg one recent evening, I clasped my hand around the subway pole and scanned the multitude of hipster men surrounding me. As I studied the slim trousers ending just above sockless ankles, the plaid shirts encasing concave torsos, and the array of earnest tote bags, I spotted several men with full beards. Apparently these unfortunate hipsters were not aware that the style was on its way out (so 2013!), or maybe they were trying to get maximum benefit from their facial hair transplants. Certainly they were not East Asian, for who has ever met an East Asian man with the ability to grow a thick beard?

The embrace of facial hair by the hipster crowd has a historical precedent in the Victorian era, when full beards served as a symbol of masculinity and a stylistic corollary to the elaborate outfits and ornate home furnishings favored by fashionable contemporaries. Women clad themselves in long dresses with full skirts, bustles, and bodices, their hats topped with flowers, feathers, and, occasionally, entire stuffed birds. Men’s sartorial fashion was somewhat less extravagant, featuring neckties and waistcoats in rich fabrics like silk and brocade. At home, overstuffed sofas and armchairs, heavy drapes, and wall-to-wall carpets filled Victorian parlors. The preference for opulence even extended into bathrooms, which often contained luxuriant carpets and drapes, as well as ornamental wood cabinetry.

But in all of those folds of fabric and lush decorations lurked a hidden danger: germs. At the turn of the twentieth century, the leading causes of death were infectious and communicable diseases, especially tuberculosis, pneumonia, influenza, and diarrheal illnesses. Tuberculosis was particularly feared for the slow, painful death it induced in its victims; it consumed the body from the inside out, provoking a graveyard cough or “death rattle” in its final stages, when the patient’s gaunt appearance indicated that the end was near. In 1900, the disease was responsible for one of every ten American deaths overall, and one in four deaths among young adults. Physicians had been able to diagnose tuberculosis accurately since 1882, when German bacteriologist Robert Koch identified the microorganism responsible for causing it. But this knowledge did nothing to improve a patient’s prognosis, for no cure existed. It wouldn’t be until after World War II, when antibiotics came into general use, that sufferers would finally have an effective remedy.

Koch’s discovery prefaced a new science of bacteriology. Toward the end of the nineteenth century, the lessons of the laboratory began to reach into American homes and public spaces, changing individual behaviors and cultural preferences. Spitting was a particular target of public health authorities. Tuberculosis-laden sputum could travel from the street into the home on women’s trailing skirts; once inside, it dried into deadly dust that imperiled vulnerable infants and children. In cities across the nation, concerned citizens urged women to shorten their hemlines to avoid dragging germs around on their clothing. “Don’t ever spit on any floor, be hopeful and cheerful, keep the window open,” read one pamphlet. The common communion cup, once a familiar sight in Protestant churches, disappeared as it became implicated in the spread of disease. Hoteliers instituted a practice of wrapping woolen blankets in extra-long sheets that were folded over on top; when a hotel guest departed, the sheet was laundered to remove any tubercular germs exhaled during slumber. Homeowners ripped out the overstuffed upholstery and heavy fabrics of their Victorian-era interiors, replacing them with metal and glass. In bathrooms, white porcelain tiles and the white china toilet supplanted carpeted walls and floors. Preferences shifted to materials that could be cleaned of dust and disinfected, slick surfaces where germs would be unable to gain a foothold.

The stripped-down, modern aesthetic extended to personal style, as well. Women’s hemlines grew shorter, their silhouettes more streamlined. Men began to shed their full beards and moustaches in favor of a clean-shaven look. In 1903, an editorial in Harper’s Weekly commented on the “passing of the beard,” noting that “the theory of science is that the beard is infected with the germs of tuberculosis.” Writing in the same magazine four years later, an observer remarked upon the “revolt against the whisker” that “has run like wild-fire over the land.” By the 1920s, the elaborate fashions of the Victorian era were nowhere in evidence. Picture, for instance, a flapper-era female wearing a cropped hairstyle and a calf-length shift. Or the neatly trimmed moustaches of Teddy Roosevelt and William Howard Taft, the last two presidents to sport facial hair in their official portraits.

A century ago, men with full beards would have felt cultural pressure to shave to protect themselves and their families from the dangerous germs concealed within. It’s a sign of how much our understanding of bacteriology has changed that today’s hipsters harbor no such worries; indeed, few are probably even aware of the historical precedent of disease-laden facial hair. I was never a fan of the look to begin with, and now I can’t help thinking back to earlier fears of contagion whenever I see these beards. But short of a tuberculosis epidemic, which of course I don’t wish for, I’ll have to hope for some other imperative that will bring about a contemporary “revolt against the whisker.”


Sources:

Nancy Tomes, The Gospel of Germs. Cambridge, MA: Harvard University Press, 1998.

HPV Testing and the History of the Pap Smear

Several weeks ago, the U.S. Food and Drug Administration approved the Cobas HPV test as a primary screening method for cervical cancer. As the first alternative to the familiar Pap smear ever to be green-lighted by the agency, the test is big news. If gynecologists and other health practitioners adopt the FDA’s recommendations, the shift could change women’s experience of and relationship to cancer screening, a process we undergo throughout our adult lives. The HPV test probably won’t replace the Pap smear anytime soon, but it could pose a challenge to the diagnostic’s sixty-year standing as the undisputed first-line defense against cervical cancer.

The Cobas HPV test, manufactured by Roche, works by detecting fourteen high-risk strains of the human papillomavirus, including HPV 16 and HPV 18, the pair responsible for 70% of all cervical cancers. (The Centers for Disease Control and Prevention estimates that 90% of cervical cancers are caused by a strain of HPV.) If a patient tests positive for HPV 16 or 18, the new FDA guidelines recommend a colposcopy to check for cervical cell abnormalities. If she tests positive for one of the other twelve high-risk HPV strains, the recommended follow-up is a Pap smear to determine the need for a colposcopy. But critics fear that the new guidelines will lead to overtesting and unnecessary procedures, especially in younger women, many of whom have HPV but will clear the virus on their own within a year or two. Biopsies and colposcopies are more invasive, painful, and expensive than Pap testing, and might increase the risk of problems with fertility and pre-term labor down the road.
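For readers who like to see the triage flow laid out explicitly, here is a minimal sketch in Python of the decision logic as described in the paragraph above. The function name and return labels are illustrative assumptions for this post, not part of the FDA guidance or of the Roche test itself.

```python
def recommended_follow_up(positive_hpv16_or_18: bool, positive_other_high_risk: bool) -> str:
    """Sketch of the screening flow described above; not a clinical tool."""
    if positive_hpv16_or_18:
        # HPV 16 or 18 detected: go straight to colposcopy to check for
        # cervical cell abnormalities.
        return "colposcopy"
    if positive_other_high_risk:
        # One of the other twelve high-risk strains detected: a Pap smear
        # determines whether a colposcopy is needed.
        return "Pap smear"
    # No high-risk HPV detected: return to routine screening.
    return "routine screening"

print(recommended_follow_up(True, False))    # colposcopy
print(recommended_follow_up(False, True))    # Pap smear
print(recommended_follow_up(False, False))   # routine screening
```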

[Image: George Papanicolaou]

When George Papanicolaou began the experiments in the 1920s that would lead to the development of his namesake test, cervical cancer was among the most widespread cancers in women and, by many accounts, the most common. It was also deadly. With no routine method to detect early-stage cancers, many patients weren’t diagnosed until the disease had already metastasized. Even for those who heeded the symptoms of irregular bleeding and discharge, medicine offered little by way of treatment or cure. As Joseph Colt Bloodgood, a prominent surgeon at Johns Hopkins Hospital, grimly observed in 1931, cervical cancer “is today predominantly a hopeless disease.”

Papanicolaou, a Greek-born zoologist and physician, spent his days studying the menstrual cycle of guinea pigs at Cornell University Medical College in New York City. Using a nasal speculum and a cotton swab, he extracted and examined cervical cells from the diminutive animals. Eventually he extended his work to “human females,” using his wife, Mary, as a research subject. He discovered that his technique allowed for the identification of abnormal, precancerous cells shed by the cervix. After a few false starts (his first presentation of the work, at a eugenics conference in 1928, was panned by attendees), he went back to the lab, spending another decade on swabs and slides. By 1941 he had gotten his ducks in a row, and with a collaborator he published his results in a persuasive paper that was quickly embraced by colleagues. Thus was born the Pap smear.

The Pap smear is not an infallible diagnostic. It can’t distinguish between cells that will become invasive and those that will never spread outside the cervix. Results can be ambiguous and slides are sometimes misread. Nonetheless, the Pap smear was a breakthrough at the time because it detected precancerous changes in cervical cells. It upended the customary timeline of cervical cancer, pushing the clock back by enabling diagnosis of the disease at a stage when lesions could be treated with relative ease and success. Since its introduction, it has contributed to a remarkable reduction in American mortality from cervical cancer, from 44 per 100,000 in 1947 to 2.4 per 100,000 in 2010, a roughly eighteenfold decrease in just over sixty years.
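For anyone checking the arithmetic behind that “eighteenfold” figure, here is a back-of-the-envelope sketch using the two rates quoted above (deaths per 100,000 women); the variable names are just labels for this example.

```python
# Back-of-the-envelope check of the cervical cancer mortality rates quoted above.
rate_1947 = 44.0   # deaths per 100,000 women in 1947
rate_2010 = 2.4    # deaths per 100,000 women in 2010
print(rate_1947 / rate_2010)   # ~18.3, i.e., a roughly eighteenfold decrease
```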

When women in the U.S. die from cervical cancer today, it’s generally because they have never had a Pap test, haven’t had one within the past five years, or didn’t follow up on abnormal results with appropriate treatment. The problem isn’t with the test itself; it’s with uneven access to screening and follow-up care. These are issues of class, geographic location, insurance status, and health literacy that the HPV test will do nothing to address. The Pap smear may not be perfect, but when utilized correctly it does a pretty good job of detecting cervical cancer. The FDA’s approval of the Cobas HPV test as a first-line defense and its new cervical cancer screening guidelines have the potential to subject millions of women to decades of invasive, expensive procedures, upending six decades of established practice for a protocol with no clear gains in effectiveness. And that is a very big deal.


Sources:

Siddhartha Mukherjee, The Emperor of All Maladies: A Biography of Cancer. New York: Scribner, 2010.

Ilana Löwy, A Woman’s Disease: The History of Cervical Cancer. New York: Oxford University Press, 2011.

Monica J. Casper and Adele E. Clarke, “Making the Pap Smear into the ‘Right Tool’ for the Job: Cervical Cancer Screening in the USA, circa 1940-95,” Social Studies of Science 28 (1998): 255-90.

Joseph Colt Bloodgood, “Responsibility of the Medical Profession for Cancer Education, with Special Reference to Cancer of the Cervix,” American Journal of Cancer 15 (1931): 1577-85.

Statistics on cervical cancer from the National Cancer Institute at http://seer.cancer.gov/statfacts/html/cervix.html.