Pathologizing Childhood Behavior

Several weeks ago the New York Times published a disturbing front-page story on the use of psychiatric medications in very young children. The article, by Alan Schwarz, describes a sharp uptick in the number of prescriptions for antipsychotics and antidepressants to address violent or withdrawn behavior in children under the age of two. I’ve written on Schwarz’s superb prior reporting on the increasing prevalence of psychiatric diagnoses in children and the aggressive role of pharmaceutical companies in promoting medications to treat them. But his latest work reveals an alarming new trend in addressing behavioral disorders in children, encapsulating much of what’s wrong with the American healthcare system and our contemporary attitudes toward illness.

The risks of using psychiatric medications such as Haldol and Prozac on neurologically developing brains are not known, because the experiments have never been done in children—and won’t be, for ethical reasons. In adults, antipsychotics are generally used to treat symptoms of schizophrenia and can have long-term, debilitating side effects. These range from feelings of numbness and a lack of emotion to a condition called tardive dyskinesia, which is characterized by involuntary, repetitive movements, usually facial twitching, and is often irreversible.

While children as young as eighteen months or two years are obviously not ideal candidates for cognitive behavioral therapy, which can be extremely effective in addressing behavioral disorders in adults, there are still ways to attend to the underlying issues and attempt to determine what's causing them. As one of the experts quoted in the article notes, however, this takes time and money at all levels, as well as patience. The system of health insurance reimbursement in the United States favors shorter physician visits over longer ones, making it faster and thus more profitable to write a prescription than to address a patient’s issues in a lengthier, more wide-ranging way. It’s far easier to medicate away a symptom than it is to address its source, especially for overworked, stressed-out parents and for physicians who are not necessarily rewarded financially for emphasizing a social rather than a biomedical approach to the treatment of behavioral disorders.

Finally, there’s the idea that physicians are more likely to prescribe something for a particular condition if a medication to address its symptoms is readily available. This makes intuitive sense: if a patient has high blood pressure or high cholesterol, then prescribing an antihypertensive or a statin would presumably follow. Similarly, a person with signs of depression might receive a prescription for an antidepressant, as someone who suffers from migraine headaches could benefit from a drug that addresses the condition’s multiple symptoms. But the very existence of a medication to treat an illness can contribute to perceptions of that illness’s prevalence. In some instances, medication can create illness; in others, it can make it more visible. Take, for example, menopause and erectile dysfunction. Until recently, both were considered ordinary consequences of aging. Then hormone replacement therapy and Viagra emerged as pharmaceutical remedies for each condition, medicalizing them and rendering them abnormal. (Recommendations for hormone replacement therapy in post-menopausal women changed abruptly in 2002 when the Women’s Health Initiative study found that the standard regimen increased a woman’s risk of heart disease and breast cancer.) And what’s abnormal must be made normal, whether the deviation is physiological, hormonal, or numerical. But behavioral disorders are harder to define, and therefore the threshold of who needs treatment will vary.

I’m not suggesting that doctors stop prescribing psychiatric medications to children altogether, as experts agree that antianxiety drugs such as Klonopin are an appropriate way to treat seizures in young patients; although the long-term side effects are unknown, the consequences of leaving seizures untreated are clearly worse. But it’s the increasing use of these medications for an ever-expanding list of behavioral disorders that’s of concern, both in what it indicates about our contracting sense of normal childhood conduct and in the reluctance of physicians to take a more expansive approach to addressing it. We should embrace a broader, more forgiving view of what it means to be a child and work to ensure that our healthcare system considers psychiatric care in a comprehensive way. Using counseling and social support instead of instinctively reaching for a prescription pad may be a more time-consuming and expensive way to treat behavioral disorders, but it’s one that poses fewer unknown, long-term risks to very young brains.

 

 

History on Screen: The Knick, William Halsted, and Breast Cancer Surgery

Recently I watched the first episode of The Knick, a new series on Cinemax that revolves around the goings-on at a fictitious hospital in turn-of-the-century New York. It stars Clive Owen as Dr. John Thackery, a brilliant and arrogant surgeon who treats his coworkers contemptuously but earns their grudging respect because he’s so darn good at his job. I’ve read that the show draws on the collections and expertise of Stanley Burns, who runs the Burns Archive of historical photographs. As a medical historian, I suppose it’s an occupational inevitability that I would view The Knick with an eye toward accuracy. Mercifully, I found the show’s depiction of the state of medicine and public health at the time to be largely appropriate: the overcrowded tenements, the immigrant mother with incurable tuberculosis, the post-surgical infections that physicians were powerless to treat in an age before antibiotics. I was a bit surprised by one scene in which Thackery and his colleagues operate in an open surgical theater, their sleeves rolled up and street clothes covered by sterile aprons as they dig their ungloved hands into a patient; while not strictly anachronistic, these practices were certainly on their way out in 1900. But overall, I was gratified to see that the show’s producers seem to be taking the medical history side of things seriously, even if they inject a hefty dose—or overdose—of drama.

A temperamental genius, Thackery thrives on difficult situations that call for quick thinking and improvisation. He pioneers innovative techniques, often in the midst of demanding surgeries, and invents a new type of clamp when he can’t find one to suit his needs. He is also a drug addict who patronizes opium dens and injects himself with liquid cocaine on his way to work. The character appears to be based on William Stewart Halsted, an American surgeon known for all of these qualities, right down to the drug addiction. Born in 1852 to a Puritan family from Long Island, he attended Andover and Yale, where he was an indifferent student, and the College of Physicians and Surgeons, where he excelled. After additional training in Europe, he returned to the US to begin his surgical career, first in New York City, then at Johns Hopkins Medical School. In addition to performing one of the first blood transfusions and being among the first to insist on an aseptic surgical environment, he was famously a cocaine addict, having earlier begun experimenting with the drug as an anesthetic. His colleagues covered for his erratic behavior, looking the other way when he arrived late for operations or missed work for days or weeks at a time. Twice he was shipped off to the nineteenth-century version of rehab, where doctors countered his cocaine addiction by dosing him with morphine. Although Halsted remained an addict all his life, he managed it well enough that by the time he died in 1922 he was considered one of the country’s preeminent surgeons and the founder of modern surgery.

Halsted pioneered another modern innovation, as well: the overtreatment of breast cancer. In the late nineteenth century, women often waited until the disease had reached an advanced stage before seeking medical treatment. As historian Robert A. Aronowitz writes, clinicians “generally estimated the size of women’s breast tumors on their initial visit as being the size of one or another bird egg.” When cancer was this far along, the prognosis was poor: more than 60 percent of patients experienced a local recurrence after surgery, according to figures compiled by Halsted.

In the 1880s, Halsted began working on a way to address these recurrences. Like his contemporaries, he assumed that cancer started as a local disease and spread outward in a logical, orderly fashion, invading the closest lymph nodes first before dispersing to outlying tissues. Recurrences were the result of a surgeon acting too conservatively by not removing enough tissue and leaving cancerous cells behind. The procedure he developed, which would become known as the Halsted radical mastectomy, removed the entire breast, underarm lymph nodes, and both chest muscles en bloc, or in one piece, without cutting into the tumor at all. Halsted claimed astonishing success with his operation, reporting in 1895 a local recurrence rate of six percent. Several years later, he compiled additional data that, while less impressive than his earlier results, still outshone what other surgeons were accomplishing with less extensive operations: 52 percent of his patients lived three years without a local or regional recurrence.

By 1915, the Halsted radical mastectomy had become the standard operation for breast cancer in all stages, early to late. Physicians in subsequent decades would push Halsted’s procedure even further, going to ever more extreme lengths in pursuit of cancerous cells. At Memorial Sloan-Kettering Hospital in New York, George T. Pack and his student, Jerome Urban, spent the 1950s promoting the superradical mastectomy, a five-hour procedure in which the surgeon removed the breast, underarm lymph nodes, chest muscles, several ribs, and part of the sternum before pulling the remaining breast over the hole in the chest wall and suturing the entire thing closed. Other surgeons performed bilateral oophorectomies on women with breast cancer, removing both ovaries in an attempt to cut off the estrogen that fed some tumors. While neither of these procedures became a widely used treatment for the disease, they illustrate the increasingly militarized mindset of cancer doctors who saw their mission in heroic terms and regarded a woman’s state of mind following the loss of a breast, and perhaps several other body parts, as, at best, a negligible concern.

The Halsted radical mastectomy was on its way out by the late 1970s; within a few years, it would account for less than five percent of breast cancer surgeries. The demise of Halsted’s eponymous operation had several causes. First, long-term survival data showed that the procedure was no more effective at reducing mortality than simple mastectomy or mastectomy combined with radiation. Second, the radical mastectomy was highly disfiguring, leaving women with a deformed chest where the breast had been, hollow areas beneath the clavicle and underarm, and lymphedema, or swelling of the arm following the removal of lymph nodes. As the women’s health movement expanded in the 1970s, patients grew more vocal about insisting on less disabling treatments, such as lumpectomies and simple mastectomies.

William Stewart Halsted


Halsted’s life and the state of surgery, medicine and public health at the turn of the twentieth century are a rich source of material for a television series, with the built-in drama of epidemic diseases, inadequate treatments, and high mortality rates. But Halsted’s legacy is complicated. He pushed his field forward and introduced innovations, such as surgical gloves, that led to better and safer conditions for patients. Yet he also became the standard-bearer for an aggressive approach to breast cancer that in many cases resulted in overtreatment. The Halsted radical mastectomy undoubtedly prevented thousands of women from dying of breast cancer, but for others with small tumors or less advanced disease it was surely excessive. And hidden behind the statistics of the number of lives saved were actual women who had to live with the physical and emotional scars of a deforming surgery. The figure of the heroic doctor may still be with us, but the mutilated bodies left behind have been forgotten.


Sources:

Robert A. Aronowitz, Unnatural History: Breast Cancer and American Society. Cambridge University Press, 2007.

Barron H. Lerner, The Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth-Century America. Oxford University Press, 2001.

Howard Markel, An Anatomy of Addiction: Sigmund Freud, William Halsted, and the Miracle Drug Cocaine. Pantheon, 2011.

James S. Olson, Bathsheba’s Breast: Women, Cancer & History. Johns Hopkins University Press, 2002.

What’s in a Name?


Last week, the World Health Organization issued guidelines for naming new human infectious diseases. Concerned about the potential for disease names to negatively impact regions, economies, and people, the organization urged those who report on emerging diseases to adopt designations that are “scientifically sound and socially acceptable.” “This may seem like a trivial issue to some,” said Dr. Keiji Fukuda, Assistant Director-General for Health Security, “but disease names really do matter to the people who are directly affected. We’ve seen certain disease names provoke a backlash against members of particular religious or ethnic communities, create unjustified barriers to travel, commerce and trade, and trigger needless slaughtering of food animals. This can have serious consequences for people’s lives and livelihoods.”

According to the new guidelines, the following should be avoided: geographic locations (Lyme disease, Middle East Respiratory Syndrome, Rocky Mountain Spotted Fever, Spanish influenza, Japanese encephalitis); people’s names (Creutzfeldt-Jakob disease, Lou Gehrig’s disease, Alzheimer’s); animal species (swine flu, monkeypox); references to an industry or occupation (Legionnaires’ disease); and terms that incite undue fear (fatal, unknown, epidemic).

Instead, the WHO recommends generic descriptions based on the primary symptoms (respiratory disease, neurologic syndrome, watery diarrhea); affected groups (infant, juvenile, adult); seasonality (winter, summer); the name of the pathogen, if known (influenza, salmonella); and an “arbitrary identifier” (alpha, beta, a, b, I, II, III, 1, 2, 3).

Stigmatization caused by disease names is a legitimate concern, as we’ve seen that a name can have very real consequences for a community. It can alter perceptions of who is susceptible, which in turn can affect how doctors make their diagnoses and devise plans for treatment. It can shape social attitudes toward both patients and those who remain disease-free, and it can influence decisions about research and funding. When AIDS first emerged in the United States in the early 1980s, it was named GRID, or Gay-Related Immune Deficiency, a measure of the extent to which it was associated with gay men. While gay and bisexual men remain the group most severely affected by HIV today, the disease’s original name undoubtedly shaped public perceptions of who was—and wasn’t—at risk.

But stigmatization can also happen apart from the process of naming a disease, a matter that the WHO guidelines would do nothing to address. In 2003, an outbreak of SARS (Severe Acute Respiratory Syndrome) in China, Vietnam and Hong Kong led to widespread stigmatization of Asian American communities as people avoided Chinatowns, Asian restaurants and supermarkets, and sometimes Asians themselves. The 1983 classification of Haitians as a high-risk group for HIV by the Centers for Disease Control and Prevention prompted a backlash against people of Haitian descent, and from 1991 to 1994 the US government quarantined nearly 300 HIV-positive Haitian refugees at Guantanamo Bay, Cuba. And then there are the diseases that have been renamed in an attempt to destigmatize them, although their new monikers would be considered unsuitable under the WHO guidelines. Leprosy, for example, is often referred to as Hansen’s disease, particularly in Hawaii, where the contagious, highly disfiguring illness devastated families and led to the establishment of disease settlements on the islands.

I’m not in favor of stigmatization, but as someone who studies the history and sociology of illness, I can’t help but wonder if something will be lost if the WHO’s recommendations are widely adopted. A disease name can influence its place in the public consciousness; it can simultaneously bring to mind a particular location or person and a constellation of symptoms. A single word, poetic in its succinctness, can suggest a range of images and associations—biological, psychological, political, and cultural. Would Ebola have the same resonance if it were called viral hemorrhagic fever? How much of our perception of Lou Gehrig’s disease, also known as amyotrophic lateral sclerosis, involves our knowledge of the tragic physical decline of the once formidable Yankees slugger?

There are, of course, plenty of evocative disease names that contain no geographic location or person’s name: polio, for instance, or cholera. But the WHO guidelines all but guarantee that the names for emerging diseases, while scientifically accurate and non-stigmatizing, will be cumbersome, clunky designations that do little to capture the public imagination. After all, who remembers the great A(H1N1)pdm09 pandemic of 2009?

Of Placebos and Pain

As the New York Times reported last week, a recent study in the BMJ found that acetaminophen is no more effective than a placebo at treating spinal pain and osteoarthritis of the hip and knee. For those who rely on acetaminophen for pain relief, this may not come as much of a surprise. Until recently, I was one of them. Because my mother suspected I had a childhood allergy to aspirin, I didn’t take it or any other NSAIDs until several years ago, when I strained my back and decided to test her theory by dosing myself with ibuprofen. To my great relief, I didn’t die. And I was surprised to discover that unlike acetaminophen, which generally dulled but didn’t eliminate my pain, ibuprofen actually alleviated my discomfort, albeit temporarily. Perhaps my mother was wrong about my allergy, or maybe I outgrew it. Either way, my newfound ability to take NSAIDs without fear of an allergic reaction allows me to reap the benefits of a medication that can offer genuine respite from pain, rather than merely rounding out its sharp edges.

But back to the study. Just because researchers determined that acetaminophen is no more effective than a placebo in addressing certain types of pain doesn’t necessarily mean that it’s ineffective. A better-designed investigation might have added another analgesic to the mix, comparing the pain-relief capabilities of acetaminophen and a placebo not just to each other, but to one or more additional medications: say, ibuprofen, or aspirin, or naproxen. That would have enabled the researchers to rank pain relievers on a scale of efficacy and determine whether the original results were due to the placebo effect (i.e., both acetaminophen and a placebo were effective), or to the shortcomings of acetaminophen (i.e., both acetaminophen and a placebo were useless).


In any case, what I find noteworthy is not the possibility that acetaminophen might not work, but that a placebo could be effective. One of the foremost issues with the treatment and management of pain—and a major dilemma for physicians—is the lack of an objective scale for measuring it. Pain is the most common reason for visits to the emergency room, where patients are asked to rate their pain on a scale of 0 to 10, with 0 indicating the absence of pain and 10 designating unbearable agony. Pain is always subjective, and it exists only to the extent that a patient perceives it in mind and body. This makes it both challenging and complicated to address, as the experience of pain is always personal, always cultural, and always political.

The issue of pain—and who is qualified to judge its presence and degree—unmasks the question of whose pain is believable, and therefore whose pain matters. As historian Keith Wailoo has written, approaches toward pain management disclose biases of race, gender and class: people of color are treated for pain less aggressively than whites, while women are more likely than men both to complain of pain and to have their assertions downplayed by physicians. Pharmacies in predominantly nonwhite neighborhoods are less likely to stock opioid painkillers, while doctors hesitate to prescribe pain medication for chronic diseases and remain on the lookout for patients arriving at their offices displaying “drug-seeking behavior.”

Whether the pain of women and people of color is undertreated because these groups experience it differently or because doctors are inclined to interpret their reports differently, the disparity underscores the extent to which both pain and its treatment always occur in a social context. (In one rather unsubtle example, a researcher at the University of Texas found that Asians report lower pain levels due to their stoicism and desire to be seen as good patients.) Pain, as scholar David B. Morris has written, is never merely a biochemical process but emerges “only at the intersection of bodies, minds, and cultures.” Since pain is always subjective, all physicians can do is treat the patient’s perception of it. And if the mind plays such an essential role in how we perceive pain, then it can be enlisted in alleviating our suffering, whether by opioid, NSAID, acetaminophen, or placebo. If we think something can work, then we open up the possibility for its success.

Conversely, I suppose there’s a chance that the BMJ study could produce a reverse-placebo effect, in which this new evidence that acetaminophen does not relieve pain will render it ineffective if you choose to take it. If that happens, then you have my sympathy and I urge you to blame the scientists.

 

Sources:

David B. Morris, The Culture of Pain. University of California Press, 1991.

Keith Wailoo, Pain: A Political History. Johns Hopkins University Press, 2014.