Is Hysteria Real? Brain Images Say Yes
By ERIKA KINETZ, The New York Times, September 26, 2006

The 19th-century French neurologist Jean-Martin Charcot, shown lecturing on hysteria, helped lay the groundwork for contemporary research.
Hysteria is a 4,000-year-old diagnosis that has been applied to no mean parade of witches, saints and, of course, Anna O.
But over the last 50 years, the word has been spoken less and less. The disappearance of hysteria has been heralded at least since the 1960’s. What had been a Victorian catch-all splintered into many different diagnoses. Hysteria seemed to be a vanished 19th-century extravagance useful for literary analysis but surely out of place in the serious reaches of contemporary science.
The word itself seems murky, more than a little misogynistic and all too indebted to the theorizing of the now-unfashionable Freud. More than one doctor has called it “the diagnosis that dare not speak its name.”
Nor has brain science paid the diagnosis much attention. For much of the 20th century, the search for a neurological basis for hysteria was ignored. The growth of the ability to capture images of the brain in action has begun to change that situation.
Functional neuroimaging technologies like single photon emission computerized tomography, or SPECT, and positron emission tomography, or PET, now enable scientists to monitor changes in brain activity. And although the brain mechanisms behind hysterical illness are still not fully understood, new studies have started to bring the mind back into the body, by identifying the physical evidence of one of the most elusive, controversial and enduring illnesses.
Despite its period of invisibility, hysteria never vanished — or at least that is what many doctors say.
“People who say it is vanished need to come and work in some tertiary hospitals where they will see plenty of patients,” Kasia Kozlowska, a psychiatrist at the Children’s Hospital at Westmead in Sydney, Australia, and the author of a 2005 review of the subject in The Harvard Review of Psychiatry, wrote in an e-mail message.
But it did change its name. In 1980, with the publication of the third edition of its Diagnostic and Statistical Manual of Mental Disorders, the American Psychiatric Association officially changed the diagnosis of “hysterical neurosis, conversion type” to “conversion disorder.”
“Hysteria, to me, has always been a pejorative term, because of its association with women,” said Dr. William E. Narrow, the associate director of the research division of the American Psychiatric Association. “I think the fact we got rid of that word is a good thing.”
Unofficially, a host of inoffensive synonyms for “hysterical” have appeared: functional, nonorganic, psychogenic, medically unexplained.
“Medically unexplained” and “functional” encompass a broader swath of distress than just conversion disorder — by some accounts, patients with medically unexplained symptoms account for up to 40 percent of all primary care consultations. But clinicians seeking to avoid the wrath of patients who do not appreciate being told that their debilitating seizures are hysterical in origin also use these blander terms.
Throughout that cloud of shifting nomenclature, people have kept getting sick. “The symptoms themselves have never changed,” said Patrik Vuilleumier, a neurologist at the University of Geneva. “They are still common in practice.”
Common, perhaps. Well studied, no. There is still no consensus on how conversion disorder should be classified, and not all physicians agree on diagnostic criteria. The epidemiology is hazy; one commonly cited statistic is that conversion disorder accounts for 1 percent to 4 percent of all diagnoses in Western hospitals. In addition, patients have heterogeneous symptoms that affect any number of voluntary sensory or motor functions, like blindness, paralysis or seizures.
The two things all patients have in common are, first, that they are not faking the illness and, second, that despite extensive testing, doctors can find nothing medically wrong with them. The scientific studies that have been conducted on conversion disorder generally have small sample sizes and methodological differences, complicating the comparison of results from different scientific teams and making general conclusions difficult.
“It’s one of those woolly areas, and it has this pejorative association,” said Peter W. Halligan, a professor of neuropsychology at Cardiff University in Wales and the director of Cardiff’s new brain imaging center. “Some people say, ‘That’s a Freudian throwback, let’s go into real science.’ ”
Hysteria actually predates Freud. The word itself derives from “hystera,” Greek for uterus, and ancient doctors attributed a number of female maladies to a starved or misplaced womb. Hippocrates built on the uterine theory; marriage was among his recommended treatments.
Then came the saints, the shamans and the demon-possessed. In the 17th century, hysteria was said to be the second most common disease, after fever. In the 19th century, the French neurologists Jean-Martin Charcot and Pierre Janet laid the groundwork for contemporary approaches to the disease. Then Charcot’s student, a young neurologist named Sigmund Freud, radically changed the landscape and, some argue, popularized hysteria.
Freud’s innovation was to explain why hysterics swooned and seized. He coined the term “conversion” to describe the mechanism by which unresolved, unconscious conflict might be transformed into symbolic physical symptoms. His fundamental insight — that the body might be playing out the dramas of the mind — has yet to be supplanted.
“Scores of European doctors for generations had thought hysteria was something wrong with the physical body: an unhappy uterus, nerves that were too thin, black bile from the liver,” said Mark S. Micale, an associate professor at the University of Illinois at Urbana-Champaign and the author of “Approaching Hysteria” (Princeton University Press, 1994). “Something somatic rooted in the body is giving rise to fits, spells of crying, strange aches and pains. Freud reverses that direction of causality. He says what the cases on his couch in Vienna are about is something in the psyche or the mind being expressed physically in the body.”
For neuroscientists now, there is no such division between the physical brain and the mind. The techniques allow scientists to see disruptions in brain function, which lets them sketch a physical map of what might be going on in the minds of modern-day hysterics. Many questions remain unanswered, but the results have begun to suggest ways in which emotional structures in the brain might modulate the function of normal sensory and motor neural circuits.
In the last decade, a number of brain imaging studies have been done on patients suffering from hysterical paralysis. Patients with hysterical paralysis have healthy nerves and muscles. Their problem is not structural but functional: something has apparently gone wrong in the higher reaches of the human mind that govern the conception of movement and the will to move. The dumb actors in this dance are fine; it’s the brilliant but complex director that has a problem.
Movement is the product of a multistage process. There is initiation (“I want to move my arm”); then planning, in which the muscles prepare for coordinated action; and finally execution, in which you actually move your arm. In theory, paralysis could result from a malfunction at any stage of this process. (Charcot had a similar idea back in the 1890’s.)
In a 1997 paper published in the journal Cognition, Dr. Halligan, of Cardiff, and John C. Marshall and their colleagues analyzed the brain function of a woman who was paralyzed on the left side of her body. First they spent large amounts of money on tests to ensure that she had no identifiable organic lesion.
When the woman tried to move her “paralyzed leg,” her primary motor cortex was not activated as it should have been; instead her right orbitofrontal and right anterior cingulate cortex, parts of the brain that have been associated with action and emotion, were activated. They reasoned that these emotional areas of the brain were responsible for suppressing movement in her paralyzed leg.
“The patient willed her leg to move,” Dr. Halligan said. “But that act of willing triggered this primitive orbitofrontal area and activated the anterior cingulate to countermand the instruction to move the leg. She was willing it, but the leg would not move.”
Subsequent studies have bolstered the notion that parts of the brain involved in emotion may be activated inappropriately in patients with conversion disorder and may inhibit the normal functioning of brain circuitry responsible for movement, sensation and sight.
Such imaging studies may one day be useful as diagnostic tools. Conversion disorder has long been a troubling diagnosis because it hinges on negative proof: if nothing else is wrong with you, maybe you’ve got it.
This has led to some obvious problems. For one thing, it means hysteria has been a dumping ground for the unexplained. A number of diseases, including epilepsy and syphilis, once classified as hysterical, have with time and advancing technology acquired biomedical explanations.
Such specious history makes patients skeptical of the diagnosis, even though the rates of misdiagnosis have gone down. (One widely cited 1965 study reported that over half of the patients who received a diagnosis of conversion disorder would later be found to have a neurological disease; more recent studies put the rate of misdiagnosis between 4 percent and 10 percent.)
“It helps to have some information from functional imaging to support the diagnosis,” Dr. Vuilleumier said. “That helps make the treatment and the diagnosis in the same language. The patient is coming to you with bodily language. The patient is not saying, ‘I’m afraid.’ It’s ‘I’m paralyzed.’ If you can go to the patient with bodily language, it helps.”
Such physical evidence might help hack away at prejudice among medical practitioners too. “Hysterical patients take a bad rap in the medical profession,” said Deborah N. Black, an assistant professor of neurology at the University of Vermont.
“We don’t like them,” Dr. Black said. “Somewhere deep down inside, we really think they’re faking it. When we see a patient with improbable neurological signs, the impulse is to say: ‘Come on, get off it. Sure you can move that leg.’ The other reason we don’t like them is they don’t get better, and when we can’t do well by them we don’t like them.”
The embodiment of distress is common across cultures, and the suffering tend to find acceptable manifestations for their pain. The “jinn” (evil spirits) in Oman are thought to cause convulsions. In Nigeria and India, common somatic symptoms include hot, peppery sensations in the head, hands or feet. Among Caribbean women, “ataque de nervios” — headache, trembling, palpitations, upset stomach — is a common complaint. One study of British veterans found that over the course of the 20th century, post-traumatic disorders did not disappear, but rather changed form: the gut replaced the heart as the most common locus of weakness.
Both its persistence and its pervasiveness suggest that hysteria may be derived from an instinctual response to threat. Total shutdown, in the form of paralysis, for example, is not an entirely untoward or unheard of response to an untenable situation. (Think of deer in the headlights.)
But the broadest consensus within the scientific community does not pertain to what is known about hysteria, but instead to how much remains unknown. “We’re only at the beginning,” Dr. Halligan said.
Subway Sleuth Clears Dinosaur of Cannibalism
By JOHN NOBLE WILFORD, The New York Times, September 26, 2006
A graduate student in paleontology was standing on the downtown subway platform at the American Museum of Natural History stop. He idly inspected the bronze cast on the wall of one of the museum’s dinosaurs.
The student, Sterling J. Nesbitt, was surprised to see what looked like crocodile bones that had presumably been the dinosaur’s last feast. This set in motion a re-examination of two specimens on display in the museum’s Hall of Dinosaurs, and wiped clean a dinosaur’s reputation that had been besmirched by suspicions of cannibalism.
Museum paleontologists found that the exhibited predatory dinosaur, Coelophysis bauri, had in fact not eaten one of its own.
“Our research shows that the evidence for cannibalism in Coelophysis is nonexistent,” Mr. Nesbitt said in an interview, “and the evidence for cannibalism in other dinosaurs is quite thin.”
In one of the two suspected specimens, the bones of the dinosaur’s last meal, lying inside the skeleton where its stomach would have been, were not those of a juvenile Coelophysis or any other dinosaur. The review showed the bones were of a small crocodile.
“The femur was the key,” Mr. Nesbitt said. “I knew this wasn’t a dinosaur that had been eaten.”
The re-examination of the second fossil specimen disputed the assumption that the bones in the stomach area of a Coelophysis were of a cannibalized young dinosaur. The scientists said the bones were outside the dinosaur’s ribcage, and from an animal perhaps too large to have been eaten whole.
The verdict of not-guilty of cannibalism was delivered last week in the journal Biology Letters of the Royal Society of London. The lead author is Mr. Nesbitt, a doctoral student at Columbia and the museum.
The two Coelophysis specimens were excavated in New Mexico in 1947 by a dinosaur-hunting party from the museum. The bones that appeared to lie within the dinosaurs’ bodies led Edwin H. Colbert, a museum paleontologist in those days, to assume these animals had been cannibals.
Mark A. Norell, the museum’s curator of paleontology and a co-author of the report, said the new research was a reminder that it is not always necessary to go all the way to the Gobi Desert to learn something new about dinosaurs.
For his part, Mr. Nesbitt said: “It’s pretty lucky. It was an example of serendipity.” And, he added, it helped to be alert while waiting for the subway train.
Mixed Report on U.S. Nanotechnology Effort
By BARNABY J. FEDER, The New York Times, September 25, 2006
The United States continues to lead the world in nanotechnology research, but the impact of the federal government’s multibillion-dollar investment in the field and shortcomings in the effort are impossible to quantify, according to a lengthy assessment for Congress of the National Nanotechnology Initiative.
The National Research Council’s hopeful but guarded analysis fulfilled a requirement in a 2003 law that the initiative be reviewed every three years. This first report concluded that coordination between the many arms of government involved in nanotechnology had improved since the adoption of the law, which transformed the research effort begun under President Bill Clinton into a permanent program with an annual budget topping $1 billion.
But the report cautioned that too little money is being invested in understanding the potential health and environmental impacts of manipulating matter on such a small scale.
Nanotechnology refers to a rapidly expanding range of devices and industrial processes that manipulate atoms and relatively small clusters of molecules — materials measuring one to 100 nanometers, or billionths of a meter. At such small dimensions, conventional materials can develop valuable behaviors, like unusual strength, electrical conductivity or invisibility to the naked eye, and can be recombined with other materials to form novel drugs, foods and devices.
It is widely assumed that nanotechnology will have a huge economic impact in the decades to come. But there is also concern that the novel materials will also bring new safety risks that could take decades to be fully understood.
The National Research Council report said that because nanotechnology was an “enabling technology,” the research spending could logically be compared to early investments in computing and communications technology, in which the impact took 20 to 40 years to become apparent. But the report warned that as things stand now, the government had neither the types of data nor the management structure it needed to accurately assess what it was getting for its nanotechnology spending. It said the 50-member panel of public and private nanotechnology experts set up to advise the government’s technology managers is too broad and too busy elsewhere to provide much help in setting priorities and should be replaced with a smaller, more dedicated group.
The report also urged the program’s managers to enlist the Department of Labor and the Department of Education in a more coordinated effort to get students and workers the training needed to cope with technology that bridges disciplines like biology, physics and materials engineering.
The finding that safety research is not receiving adequate financial support echoed a report last week by a panel of experts assigned by President Bush’s National Science and Technology Council. In testimony on that report last Thursday before the House Science Committee, some experts said that the less than $40 million being spent on such research each year is not only too little but that the effort has been an incoherent reflection of the interests of the many individual researchers supported by various government agencies.
In a reflection of the challenges ahead, one leading expert told the committee that any centralized effort by the government to try to focus such research on the toughest questions could be fruitless.
“I have to tell you that this area is so complex that I don’t know of any person or a small group of people who would be smart enough to be able to identify all the risks, set the priorities, and lay out a so-called game plan,” said Arden L. Bement Jr., director of the National Science Foundation. “The situation changes day by day, and so there has to be more of a soccer approach to this rather than an American football approach.”
A Conversation With Paul Greengard: He Turned His Nobel Into a Prize for Women
By CLAUDIA DREIFUS, The New York Times, September 26, 2006
When the neuroscientist Paul Greengard was named one of three winners of the 2000 Nobel Prize in Physiology or Medicine, he decided to use his award — almost $400,000 — to finance something new: the Pearl Meister Greengard Prize.
This honor, named for Dr. Greengard’s mother, would give an annual $50,000 prize to an outstanding female biomedical researcher. Of the 184 medical Nobelists, only 7 have been women.
“I hoped to bring more attention to the work of brilliant women scientists,” Dr. Greengard recently explained at his laboratory at Rockefeller University in New York. “Perhaps this will bring them further recognition and even a Nobel.”
Dr. Greengard’s Nobel Prize, which was shared with Eric R. Kandel of Columbia University and Arvid Carlsson of Gothenburg University in Sweden, recognized his discoveries of how nerve cells communicate with one another.
This year’s Nobel winners will be announced next week.
Q. Why create the Pearl Meister Greengard Prize?
A. There were two factors. One was the observation that there was still discrimination against women in science, even at the highest levels. On a personal level, I wanted to create something in honor of my mother, Pearl Meister, who died giving birth to me.
Q. Had your mother been a scientist?
A. She was a secretary until she married. I’m told she was an extremely bright woman. I didn’t even know of her existence until I was 20. Thirteen months after my birth, my father, who was Jewish, married an Episcopalian who kept me from knowing we were related to anyone named Meister.
I don’t have a single photograph of my mother. When I married, my wife, Ursula, put a picture of a woman we thought was Pearl Meister above our mantelpiece. Ten years later, we discovered this was someone else’s mother. Since there’s not a shred of physical evidence that my mother ever existed, I wanted to do something to make her less abstract.
Rockefeller University will be awarding the third annual Pearl Meister Greengard Prize in November. It will go, this year, to a British biologist, Mary Lyon.
Q. With such a painful childhood, did you become a neuroscientist to help relieve emotional suffering?
A. No. After attending college on the G.I. Bill in the late 1940’s, I wanted to do graduate work in physics. I was good at math and physics. But at that time, the only physics fellowships came from the Atomic Energy Commission. This was right after the A-bombing of Japan.
I didn’t want to spend my life contributing to the development of more atomic weaponry. So when the parents of my college roommate, two physicians, told me of the nascent field of biophysics, which used math and physics to solve biological problems, that appealed. I began studying electrical signaling in nerve cells. I became convinced that biochemistry played the critical role in how nerve cells communicated with each other. With time, I came to think that nerve transmitters — those chemicals that communicate from one nerve cell to another — produced their effects through a cascade of reactions that resulted in a physiological response. In the 1950’s and 1960’s, this was a really radical idea. For a long time, I had the field to myself. I didn’t have to worry about picking up Nature and finding my work scooped by another researcher.
Q. Is it true that this work eventually led to Prozac?
A. Research I did in the 1970’s provided the underlying science for the Prozac-type drugs. It turned out that Prozac and similar drugs work, in part, by increasing levels of the neurotransmitter serotonin, which is widely believed to cause an antidepressant action in brain cells.
Q. Recently, your laboratory here at Rockefeller University announced the discovery of a new cell protein, p11. Why do you think this an important finding?
A. This p11 protein moves the serotonin receptors from the interior of the brain cell to its surface so that they can be seen by the serotonin. Our lab data, and some studies with post-mortem brain tissue, show that p11 levels appear to be a predictor of whether or not an individual is depressed.
Until now, when making antidepressants, we’ve been focused on changing serotonin levels in brain cells. Maybe we can try to increase the p11 levels? We need to find out how p11 levels are controlled. This could lead to a whole new class of antidepressants.
Q. I’ve heard it said that while the discovery is interesting, it doesn’t take brain research into any new direction. What’s your answer?
A. I disagree. This is the first example of a protein, the level of which has been found to correlate with a neurological or psychiatric disorder.
Q. You are 80 years old and your laboratory is still coming up with new findings. What does that mean?
A. It says that I’m a genetic freak. [Laughs.] No, it means that modern science has changed. It used to be that the big medical discoveries were made by people in their 30’s and 40’s. But in those days, the scientist was a kind of sole investigator working alone, testing ideas.
Today, the exciting developments come out of interdisciplinary working groups, where participants can be of any age. I don’t know for sure, but I suspect that the leaders of teams making discoveries now are a lot older than they used to be.
And that’s good. It’s a tragedy for society to spend decades training people and then depriving them of work at some arbitrary age.
Q. Earlier, you said that one reason you set up this prize for female biologists was that you had witnessed much discrimination. What have you seen?
A. Nothing here at Rockefeller University, which is a good place for women. But I’ve seen instances of bias, big and small, at other institutions. I’ve seen women kept from academic committees, for instance, because they were female.
Q. In a recent article in Nature, the Stanford neurobiologist Ben Barres complained that male scientists rarely speak out against antiwoman bias when they see it. Would you agree?
A. Whenever I’ve seen it, I’ve spoken up.
One of the most outrageous things I ever saw was at an Ivy League university. A faculty couple were divorcing. The husband told his male colleagues it upset him to see his ex when she went to the ladies’ room, near his laboratory. So this female scientist was ordered to take this circuitous route to the washroom — up a set of stairs, over a hallway and down another staircase — to protect the husband’s sensibilities. I said, “If you don’t change this, I will report it and we’ll all lose our grants.”
Q. Was it difficult to organize this prize?
A. Easier than one would think. With tax incentives, in some brackets, it can end up costing about 20 percent of the value of the donation.
Three years ago, after we announced the first award, my wife and I received several hundred congratulatory messages. Many female scientists wrote and said: “I’ve suffered discrimination. This means so much to me.”
Well, it meant a lot to me, too.
By ERIKA KINETZ, The New York Times, September 26, 2006

The 19th-century French neurologist Jean-Martin Charcot, shown lecturing on hysteria, helped lay the groundwork for contemporary research.
Hysteria is a 4,000-year-old diagnosis that has been applied to no mean parade of witches, saints and, of course, Anna O.
But over the last 50 years, the word has been spoken less and less. The disappearance of hysteria has been heralded at least since the 1960’s. What had been a Victorian catch-all splintered into many different diagnoses. Hysteria seemed to be a vanished 19th-century extravagance useful for literary analysis but surely out of place in the serious reaches of contemporary science.
The word itself seems murky, more than a little misogynistic and all too indebted to the theorizing of the now-unfashionable Freud. More than one doctor has called it “the diagnosis that dare not speak its name.”
Nor has brain science paid the diagnosis much attention. For much of the 20th century, the search for a neurological basis for hysteria was ignored. The growth of the ability to capture images of the brain in action has begun to change that situation.
Functional neuroimaging technologies like single photon emission computerized tomography, or SPECT, and positron emission tomography, or PET, now enable scientists to monitor changes in brain activity. And although the brain mechanisms behind hysterical illness are still not fully understood, new studies have started to bring the mind back into the body, by identifying the physical evidence of one of the most elusive, controversial and enduring illnesses.
Despite its period of invisibility, hysteria never vanished — or at least that is what many doctors say.
“People who say it is vanished need to come and work in some tertiary hospitals where they will see plenty of patients,” Kasia Kozlowska, a psychiatrist at the Children’s Hospital at Westmead in Sydney, Australia, and the author of a 2005 review of the subject in The Harvard Review of Psychiatry, wrote in an e-mail message.
But it did change its name. In 1980, with the publication of the third edition of its Diagnostic and Statistical Manual of Mental Disorders, the American Psychiatric Association officially changed the diagnosis of “hysterical neurosis, conversion type” to “conversion disorder.”
“Hysteria, to me, has always been a pejorative term, because of its association with women,” said Dr. William E. Narrow, the associate director of the research division of the American Psychiatric Association. “I think the fact we got rid of that word is a good thing.”
Unofficially, a host of inoffensive synonyms for “hysterical” have appeared: functional, nonorganic, psychogenic, medically unexplained.
“Medically unexplained” and “functional” encompass a broader swath of distress than just conversion disorder — by some accounts, patients with medically unexplained symptoms account for up to 40 percent of all primary care consultations. But clinicians seeking to avoid the wrath of patients who do not appreciate being told that their debilitating seizures are hysterical in origin also use these blander terms.
Throughout that cloud of shifting nomenclature, people have kept getting sick. “The symptoms themselves have never changed,” said Patrik Vuilleumier, a neurologist at the University of Geneva. “They are still common in practice.”
Common, perhaps. Well studied, no. There is still no consensus on how conversion disorder should be classified, and not all physicians agree on diagnostic criteria. The epidemiology is hazy; one commonly cited statistic is that conversion disorder accounts for 1 percent to 4 percent of all diagnoses in Western hospitals. In addition, patients have heterogeneous symptoms that affect any number of voluntary sensory or motor functions, like blindness, paralysis or seizures.
The two things all patients have in common are, first, that they are not faking the illness and, second, that despite extensive testing, doctors can find nothing medically wrong with them. The scientific studies that have been conducted on conversion disorder generally have small sample sizes and methodological differences, complicating the comparison of results from different scientific teams and making general conclusions difficult.
“It’s one of those woolly areas, and it has this pejorative association,” said Peter W. Halligan, a professor of neuropsychology at Cardiff University in Wales and the director of Cardiff’s new brain imaging center. “Some people say, ‘That’s a Freudian throwback, let’s go into real science.’ ”
Hysteria actually predates Freud. The word itself derives from “hystera,” Greek for uterus, and ancient doctors attributed a number of female maladies to a starved or misplaced womb. Hippocrates built on the uterine theory; marriage was among his recommended treatments.
Then came the saints, the shamans and the demon-possessed. In the 17th century, hysteria was said to be the second most common disease, after fever. In the 19th century, the French neurologists Jean-Martin Charcot and Pierre Janet laid the groundwork for contemporary approaches to the disease. Then Charcot’s student, a young neurologist named Sigmund Freud, radically changed the landscape and, some argue, popularized hysteria.
Freud’s innovation was to explain why hysterics swooned and seized. He coined the term “conversion” to describe the mechanism by which unresolved, unconscious conflict might be transformed into symbolic physical symptoms. His fundamental insight — that the body might be playing out the dramas of the mind — has yet to be supplanted.
“Scores of European doctors for generations had thought hysteria was something wrong with the physical body: an unhappy uterus, nerves that were too thin, black bile from the liver,” said Mark S. Micale, an associate professor at the University of Illinois at Urbana-Champaign and the author of “Approaching Hysteria” (Princeton University Press, 1994). “Something somatic rooted in the body is giving rise to fits, spells of crying, strange aches and pains. Freud reverses that direction of causality. He says what the cases on his couch in Vienna are about is something in the psyche or the mind being expressed physically in the body.”
For neuroscientists now, there is no such division between the physical brain and the mind. The techniques allow scientists to see disruptions in brain function, which lets them sketch a physical map of what might be going on in the minds of modern-day hysterics. Many questions remain unanswered, but the results have begun to suggest ways in which emotional structures in the brain might modulate the function of normal sensory and motor neural circuits.
In the last decade, a number of brain imaging studies have been done on patients suffering from hysterical paralysis. Patients with hysterical paralysis have healthy nerves and muscles. Their problem is not structural but functional: something has apparently gone wrong in the higher reaches of the human mind that govern the conception of movement and the will to move. The dumb actors in this dance are fine; it’s the brilliant but complex director that has a problem.
Movement is the product of a multistage process. There is initiation (“I want to move my arm”); then planning, in which the muscles prepare for coordinated action; and finally execution, in which you actually move your arm. In theory, paralysis could result from a malfunction at any stage of this process. (Charcot had a similar idea back in the 1890’s.)
In a 1997 paper published in the journal Cognition, Peter W. Halligan of Cardiff University, John C. Marshall and their colleagues analyzed the brain function of a woman who was paralyzed on the left side of her body. First they ran extensive tests to ensure that she had no identifiable organic lesion.
When the woman tried to move her “paralyzed leg,” her primary motor cortex was not activated as it should have been; instead her right orbitofrontal and right anterior cingulate cortex, parts of the brain that have been associated with action and emotion, were activated. They reasoned that these emotional areas of the brain were responsible for suppressing movement in her paralyzed leg.
“The patient willed her leg to move,” Dr. Halligan said. “But that act of willing triggered this primitive orbitofrontal area and activated the anterior cingulate to countermand the instruction to move the leg. She was willing it, but the leg would not move.”
Subsequent studies have bolstered the notion that parts of the brain involved in emotion may be activated inappropriately in patients with conversion disorder and may inhibit the normal functioning of brain circuitry responsible for movement, sensation and sight.
Such imaging studies may one day be useful as diagnostic tools. Conversion disorder has long been a troubling diagnosis because it hinges on negative proof: if nothing else is wrong with you, maybe you’ve got it.
This has led to some obvious problems. For one thing, it means hysteria has been a dumping ground for the unexplained. A number of diseases, including epilepsy and syphilis, once classified as hysterical, have with time and advancing technology acquired biomedical explanations.
Such specious history makes patients skeptical of the diagnosis, even though the rates of misdiagnosis have gone down. (One widely cited 1965 study reported that over half of the patients who received a diagnosis of conversion disorder would later be found to have a neurological disease; more recent studies put the rate of misdiagnosis between 4 percent and 10 percent.)
“It helps to have some information from functional imaging to support the diagnosis,” Dr. Vuilleumier said. “That helps make the treatment and the diagnosis in the same language. The patient is coming to you with bodily language. The patient is not saying, ‘I’m afraid.’ It’s ‘I’m paralyzed.’ If you can go to the patient with bodily language, it helps.”
Such physical evidence might help hack away at prejudice among medical practitioners too. “Hysterical patients take a bad rap in the medical profession,” said Deborah N. Black, an assistant professor of neurology at the University of Vermont.
“We don’t like them,” Dr. Black said. “Somewhere deep down inside, we really think they’re faking it. When we see a patient with improbable neurological signs, the impulse is to say: ‘Come on, get off it. Sure you can move that leg.’ The other reason we don’t like them is they don’t get better, and when we can’t do well by them we don’t like them.”
The embodiment of distress is common across cultures, and the suffering tend to find acceptable manifestations for their pain. The “jinn” (evil spirits) in Oman are thought to cause convulsions. In Nigeria and India, common somatic symptoms include hot, peppery sensations in the head, hands or feet. Among Caribbean women, “ataque de nervios” — headache, trembling, palpitations, upset stomach — is a common complaint. One study of British veterans found that over the course of the 20th century, post-traumatic disorders did not disappear, but rather changed form: the gut replaced the heart as the most common locus of weakness.
Both its persistence and its pervasiveness suggest that hysteria may be derived from an instinctual response to threat. Total shutdown, in the form of paralysis, for example, is not an entirely untoward or unheard-of response to an untenable situation. (Think of deer in the headlights.)
But the broadest consensus within the scientific community does not pertain to what is known about hysteria, but instead to how much remains unknown. “We’re only at the beginning,” Dr. Halligan said.
Subway Sleuth Clears Dinosaur of Cannibalism
By JOHN NOBLE WILFORD, The New York Times, September 26, 2006
A graduate student in paleontology was standing on the downtown subway platform at the American Museum of Natural History stop. He idly inspected the bronze cast on the wall of one of the museum’s dinosaurs.
The student, Sterling J. Nesbitt, was surprised to see what looked like crocodile bones that had presumably been the dinosaur’s last feast. This set in motion a re-examination of two specimens on display in the museum’s Hall of Dinosaurs, and wiped clean a dinosaur’s reputation that had been besmirched by suspicions of cannibalism.
Museum paleontologists found that the exhibited predatory dinosaur, Coelophysis bauri, had in fact not eaten one of its own.
“Our research shows that the evidence for cannibalism in Coelophysis is nonexistent,” Mr. Nesbitt said in an interview, “and the evidence for cannibalism in other dinosaurs is quite thin.”
In one of the two suspected specimens, the bones of the dinosaur’s last meal, lying inside the skeleton where its stomach would have been, were not those of a juvenile Coelophysis or any other dinosaur. The review showed the bones were of a small crocodile.
“The femur was the key,” Mr. Nesbitt said. “I knew this wasn’t a dinosaur that had been eaten.”
The re-examination of the second fossil specimen disputed the assumption that the bones in the stomach area of a Coelophysis were of a cannibalized young dinosaur. The scientists said the bones were outside the dinosaur’s ribcage, and from an animal perhaps too large to have been eaten whole.
The verdict of not guilty of cannibalism was delivered last week in the journal Biology Letters of the Royal Society of London. The lead author is Mr. Nesbitt, a doctoral student at Columbia and the museum.
The two Coelophysis specimens were excavated in New Mexico in 1947 by a dinosaur-hunting party from the museum. The bones that appeared to lie within the dinosaurs’ bodies led Edwin H. Colbert, a museum paleontologist in those days, to assume these animals had been cannibals.
Mark A. Norell, the museum’s curator of paleontology and a co-author of the report, said the new research was a reminder that it is not always necessary to go all the way to the Gobi Desert to learn something new about dinosaurs.
For his part, Mr. Nesbitt said: “It’s pretty lucky. It was an example of serendipity.” And, he added, it helped to be alert while waiting for the subway train.
Mixed Report on U.S. Nanotechnology Effort
By BARNABY J. FEDER, The New York Times, September 25, 2006
The United States continues to lead the world in nanotechnology research, but the impact of the federal government’s multibillion-dollar investment in the field and shortcomings in the effort are impossible to quantify, according to a lengthy assessment for Congress of the National Nanotechnology Initiative.
The National Research Council’s hopeful but guarded analysis fulfilled a requirement in a 2003 law that the initiative be reviewed every three years. This first report concluded that coordination between the many arms of government involved in nanotechnology had improved since the adoption of the law, which transformed the research effort begun under President Bill Clinton into a permanent program with an annual budget topping $1 billion.
But the report cautioned that too little money is being invested in understanding the potential health and environmental impacts of manipulating matter on such a small scale.
Nanotechnology refers to a rapidly expanding range of devices and industrial processes that manipulate atoms and relatively small clusters of molecules — materials measuring one to 100 nanometers, or billionths of a meter. At such small dimensions, conventional materials can develop valuable behaviors, like unusual strength, electrical conductivity or invisibility to the naked eye, and can be recombined with other materials to form novel drugs, foods and devices.
It is widely assumed that nanotechnology will have a huge economic impact in the decades to come. But there is also concern that the novel materials will also bring new safety risks that could take decades to be fully understood.
The National Research Council report said that because nanotechnology was an “enabling technology,” the research spending could logically be compared to early investments in computing and communications technology, in which the impact took 20 to 40 years to become apparent. But the report warned that as things stand now, the government had neither the types of data nor the management structure it needed to accurately assess what it was getting for its nanotechnology spending. It said the 50-member panel of public and private nanotechnology experts set up to advise the government’s technology managers was too broad and too busy elsewhere to provide much help in setting priorities and should be replaced with a smaller, more dedicated group.
The report also urged the program’s managers to enlist the Department of Labor and the Department of Education in a more coordinated effort to get students and workers the training needed to cope with technology that bridges disciplines like biology, physics and materials engineering.
The finding that safety research is not receiving adequate financial support echoed a report last week by a panel of experts assigned by President Bush’s National Science and Technology Council. In testimony on that report last Thursday before the House Science Committee, some experts said not only that the less than $40 million being spent on such research each year is too little, but that the effort has been an incoherent reflection of the interests of the many individual researchers supported by various government agencies.
In a reflection of the challenges ahead, one leading expert told the committee that any centralized effort by the government to try to focus such research on the toughest questions could be fruitless.
“I have to tell you that this area is so complex that I don’t know of any person or a small group of people who would be smart enough to be able to identify all the risks, set the priorities, and lay out a so-called game plan,” said Arden L. Bement Jr., director of the National Science Foundation. “The situation changes day by day, and so there has to be more of a soccer approach to this rather than an American football approach.”
A Conversation With Paul Greengard: He Turned His Nobel Into a Prize for Women
By CLAUDIA DREIFUS, The New York Times, September 26, 2006
When the neuroscientist Paul Greengard was named one of three winners of the 2000 Nobel Prize in Physiology or Medicine, he decided to use his award — almost $400,000 — to finance something new: the Pearl Meister Greengard Prize.
This honor, named for Dr. Greengard’s mother, would give an annual $50,000 prize to an outstanding female biomedical researcher. Of the 184 medical Nobelists, only 7 have been women.
“I hoped to bring more attention to the work of brilliant women scientists,” Dr. Greengard recently explained at his laboratory at Rockefeller University in New York. “Perhaps this will bring them further recognition and even a Nobel.”
Dr. Greengard’s Nobel Prize, which was shared with Eric R. Kandel of Columbia University and Arvid Carlsson of Gothenburg University in Sweden, recognized his discoveries of how nerve cells communicate with one another.
This year’s Nobel winners will be announced next week.
Q. Why create the Pearl Meister Greengard Prize?
A. There were two factors. One was the observation that there was still discrimination against women in science, even at the highest levels. On a personal level, I wanted to create something in honor of my mother, Pearl Meister, who died giving birth to me.
Q. Had your mother been a scientist?
A. She was a secretary until she married. I’m told she was an extremely bright woman. I didn’t even know of her existence until I was 20. Thirteen months after my birth, my father, who was Jewish, married an Episcopalian who kept me from knowing we were related to anyone named Meister.
I don’t have a single photograph of my mother. When I married, my wife, Ursula, put a picture of a woman we thought was Pearl Meister above our mantelpiece. Ten years later, we discovered this was someone else’s mother. Since there’s not a shred of physical evidence that my mother ever existed, I wanted to do something to make her less abstract.
Rockefeller University will be awarding the third annual Pearl Meister Greengard Prize in November. It will go, this year, to a British biologist, Mary Lyon.
Q. With such a painful childhood, did you become a neuroscientist to help relieve emotional suffering?
A. No. After attending college on the G.I. Bill in the late 1940’s, I wanted to do graduate work in physics. I was good at math and physics. But at that time, the only physics fellowships came from the Atomic Energy Commission. This was right after the A-bombing of Japan.
I didn’t want to spend my life contributing to the development of more atomic weaponry. So when the parents of my college roommate, two physicians, told me of the nascent field of biophysics, which used math and physics to solve biological problems, that appealed. I began studying electrical signaling in nerve cells. I became convinced that biochemistry played the critical role in how nerve cells communicated with each other. With time, I came to think that nerve transmitters — those chemicals that communicate from one nerve cell to another — produced their effects through a cascade of reactions that resulted in a physiological response. In the 1950’s and 1960’s, this was a really radical idea. For a long time, I had the field to myself. I didn’t have to worry about picking up Nature and finding my work scooped by another researcher.
Q. Is it true that this work eventually led to Prozac?
A. Research I did in the 1970’s provided the underlying science for the Prozac-type drugs. It turned out that Prozac and similar drugs work, in part, by increasing levels of the neurotransmitter serotonin, which is widely believed to cause an antidepressant action in brain cells.
Q. Recently, your laboratory here at Rockefeller University announced the discovery of a new cell protein, p11. Why do you think this is an important finding?
A. This p11 protein moves the serotonin receptors from the interior of the brain cell to its surface so that they can be seen by the serotonin. Our lab data, and some studies with post-mortem brain tissue, show that p11 levels appear to be a predictor of whether or not an individual is depressed.
Until now, when making antidepressants, we’ve been focused on changing serotonin levels in brain cells. Maybe we can try to increase the p11 levels? We need to find out how p11 levels are controlled. This could lead to a whole new class of antidepressants.
Q. I’ve heard it said that while the discovery is interesting, it doesn’t take brain research into any new direction. What’s your answer?
A. I disagree. This is the first example of a protein, the level of which has been found to correlate with a neurological or psychiatric disorder.
Q. You are 80 years old and your laboratory is still coming up with new findings. What does that mean?
A. It says that I’m a genetic freak. [Laughs.] No, it means that modern science has changed. It used to be that the big medical discoveries were made by people in their 30’s and 40’s. But in those days, the scientist was a kind of sole investigator working alone, testing ideas.
Today, the exciting developments come out of interdisciplinary working groups, where participants can be of any age. I don’t know for sure, but I suspect that the leaders of teams making discoveries now are a lot older than they used to be.
And that’s good. It’s a tragedy for society to spend decades training people and then depriving them of work at some arbitrary age.
Q. Earlier, you said that one reason you set up this prize for female biologists was that you had witnessed much discrimination. What have you seen?
A. Nothing here at Rockefeller University, which is a good place for women. But I’ve seen instances of bias, big and small, at other institutions. I’ve seen women kept from academic committees, for instance, because they were female.
Q. In a recent article in Nature, the Stanford neurobiologist Ben Barres complained that male scientists rarely speak out against antiwoman bias when they see it. Would you agree?
A. Whenever I’ve seen it, I’ve spoken up.
One of the most outrageous things I ever saw was at an Ivy League university. A faculty couple were divorcing. The husband told his male colleagues it upset him to see his ex when she went to the ladies’ room, near his laboratory. So this female scientist was ordered to take this circuitous route to the washroom — up a set of stairs, over a hallway and down another staircase — to protect the husband’s sensibilities. I said, “If you don’t change this, I will report it and we’ll all lose our grants.”
Q. Was it difficult to organize this prize?
A. Easier than one would think. With tax incentives, in some brackets, it can end up costing about 20 percent of the value of the donation.
Three years ago, after we announced the first award, my wife and I received several hundred congratulatory messages. Many female scientists wrote and said: “I’ve suffered discrimination. This means so much to me.”
Well, it meant a lot to me, too.