Arts and Letters Daily
Jan. 27th, 2006 08:33 am
Arts and Letters Daily Highlights:
China beat Columbus to it, perhaps
The Economist
An ancient map that strongly suggests Chinese seamen were first round the world

The brave seamen whose great voyages of exploration opened up the world are iconic figures in European history. Columbus found the New World in 1492; Dias discovered the Cape of Good Hope in 1488; and Magellan set off to circumnavigate the world in 1519. However, there is one difficulty with this confident assertion of European mastery: it may not be true.
It seems more likely that the world and all its continents were discovered by a Chinese admiral named Zheng He, whose fleets roamed the oceans between 1405 and 1435. His exploits, which are well documented in Chinese historical records, were written about in a book which appeared in China around 1418 called “The Marvellous Visions of the Star Raft”.
Next week, in Beijing and London, fresh and dramatic evidence is to be revealed to bolster Zheng He's case. It is a copy, made in 1763, of a map, dated 1418, which contains notes that substantially match the descriptions in the book. “It will revolutionise our thinking about 15th-century world history,” says Gunnar Thompson, a student of ancient maps and early explorers.
The map will be unveiled in Beijing on January 16th and at the National Maritime Museum in Greenwich a day later. Six Chinese characters in the upper right-hand corner of the map say this is a “general chart of the integrated world”. In the lower left-hand corner is a note that says the chart was drawn by Mo Yi Tong, imitating a world chart made in 1418 which showed the barbarians paying tribute to the Ming emperor, Zhu Di. The copyist distinguishes what he took from the original from what he added himself.
The map was bought for about $500 from a small Shanghai dealer in 2001 by Liu Gang, one of the most eminent commercial lawyers in China, who collects maps and paintings. Mr Liu says he knew it was significant, but thought it might be a modern fake. He showed his acquisition to five experienced collectors, who agreed that the traces of vermin on the bamboo paper it is written on, and the de-pigmentation of ink and colours, indicated that the map was more than 100 years old.
Mr Liu was unsure of its meaning, and asked specialists in ancient Chinese history for their advice, but none, he says, was forthcoming. Then, last autumn, he read “1421: The Year China Discovered the World”, a book written in 2003 by Gavin Menzies, in which the author makes the controversial claim that Zheng He circumnavigated the world, discovering America on the way. Mr Menzies, who is a former submariner in the Royal Navy and a merchant banker, is an amateur historian and his theory met with little approval from professionals. But it struck a chord: his book became a bestseller and his 1421 website is very popular. In any event, his arguments convinced Mr Liu that his map was a relic of Zheng He's earlier voyages.
The detail on the copy of the map is remarkable. The outlines of Africa, Europe and the Americas are instantly recognisable. It shows the Nile with two sources. The north-west passage appears to be free of ice. But the inaccuracies, also, are glaring. California is shown as an island; the British Isles do not appear at all. The distance from the Red Sea to the Mediterranean is ten times greater than it ought to be. Australia is in the wrong place (though cartographers no longer doubt that Australia and New Zealand were discovered by Chinese seamen centuries before Captain Cook arrived on the scene).
The commentary on the map, which seems to have been drawn from the original, is written in clear Chinese characters which can still be easily read. Of the west coast of America, the map says: “The skin of the race in this area is black-red, and feathers are wrapped around their heads and waists.” Of the Australians, it reports: “The skin of the aborigine is also black. All of them are naked and wearing bone articles around their waists.”
But this remarkable precision, rather than the errors, is what critics of the Menzies theory are likely to use to question the authenticity of the 1418 map. Mr Menzies and his followers are naturally extremely keen to establish that the 1763 copy is not a forgery and that it faithfully represents the 1418 original. This would lend weighty support to their thesis: that China had indeed discovered America by (if not actually in) 1421. Mass spectrometry analysis to date the copied map is under way at Waikato University in New Zealand, and the results will be announced in February. But even if affirmative, this analysis is of limited importance since it can do no more than date the copyist's paper and inks.
Five academic experts on ancient charts note that the 1418 map puts together information that was available piecemeal in China from earlier nautical maps, going back to the 13th century and Kublai Khan, who was no mean explorer himself. They believe it is authentic.
The map makes good estimates of the latitude and longitude of much of the world, and recognises that the earth is round. “The Chinese were almost certainly aware of longitude before Zheng He set sail,” says Robert Cribbs of California State University. They certainly assumed the world was round. “The format of the map is totally consistent with the level of knowledge that we should expect of royal Chinese geographers following the voyages of Zheng He,” says Mr Thompson.
Moreover, some of the errors in the 1418 map soon turned up in European maps, the most striking being California drawn as an island. The Portuguese are aware of a world map drawn before 1420 by a cartographer named Albertin di Virga, which showed Africa and the Americas. Since no Portuguese seamen had yet discovered those places, the most obvious source for the information seems to be European copies of Chinese maps.
But this is certainly not a unanimous view among the experts, with many of the fiercest critics in China itself. Wang Tai-Peng, a scholarly journalist in Vancouver who does not doubt that the Chinese explored the world early in the 15th century (he has written about a visit by Chinese ambassadors to Florence in 1433), doubts whether Zheng He's ships landed in North America. Mr Wang also claims that Zheng He's navigation maps were drawn in a totally different Chinese map-making tradition. “Until the 1418 map is scientifically authenticated, we still have to take it with a grain of salt,” he says.
Most forgeries are driven by a commercial imperative, especially when the market for ancient maps is booming, as it is now. The Library of Congress recently paid $10m for a copy of a 1507 world map by Martin Waldseemüller, a German cartographer. But Mr Liu says he is not a seller: “The map is part of my life,” he claims.
The consequences of the discovery of this map could be considerable. If it does indeed prove to be the first map of the world, “the history of New World discovery will have to be rewritten,” claims Mr Menzies. How much does this matter? Showing that the world was first explored by Chinese rather than European seamen would be a major piece of historical revisionism. But there is more to history than that. It is no less interesting that the Chinese, having discovered the extent of the world, did not exploit it, politically or commercially. After all, Columbus's discovery of America led to exploitation and then development by Europeans which, 500 years later, made the United States more powerful than China had ever been.
Copyright © 2006 The Economist Newspaper and The Economist Group. All rights reserved.
Cantonese Is Losing Its Voice
Speakers of the spicy tongue that can make words of love sound like a fight are having to learn its linguistic kin, the mellower Mandarin.
By David Pierson, Los Angeles Times, January 3, 2006
Carson Hom's family has run a thriving fortune cookie and almond cookie company in Los Angeles County for 35 years.
And for much of that time, it was a business that required two languages: Cantonese, to communicate with employees and the Chinese restaurants that bought the cookies, and English, to deal with health inspectors, suppliers and accountants.
But when Hom, 30, decided to start his own food import company, he learned that this bilingualism wasn't enough anymore.
"I can't communicate," said Hom, whose parents are from Hong Kong. "Everyone around used to speak Cantonese. Now everyone is speaking Mandarin."
Cantonese, a sharp, cackling dialect full of slang and exaggerated expressions, was never the dominant language of China. But it came to dominate the Chinatowns of North America because the first immigrants came from the Cantonese-speaking southern province of Guangdong, where China first opened its ports to foreigners centuries ago.
It is also the chief language of Hong Kong, the vital trading and financial center that became China's link to the West.
But over the last three decades, waves of Mandarin-speaking mainland Chinese and Taiwanese immigrants have diluted the influence of both the Cantonese language and the pioneering Cantonese families who ran Chinatowns for years.
The surging Chinese economy today has challenged Cantonese further. Because Mandarin is China's official language, entrepreneurs like Hom have been forced to adapt, often learning the hard way that business can't be done with Cantonese alone.
Many Cantonese speakers are racing to learn Mandarin any way they can — by watching Chinese soap operas, attending schools, paying for expensive immersion courses and even making more Mandarin-speaking friends. This is no cinch. Although Cantonese and Mandarin share the same written language, they are spoken as differently as English and French.
At the same time, few people are learning Cantonese. San Jose State University and New York University offer classes, but they are almost alone among colleges with established Cantonese communities. The language is not taught at USC, UCLA, Pasadena City College, San Francisco State or Queens College in New York, to name a few.
With the changes, some are lamenting — in ways they can do only in Cantonese — the end of an era. Mandarin is now the vernacular of choice, and they say it doesn't come close to the colorful and brash banter of Cantonese.
"You might be saying, 'I love you' to your girlfriend in Cantonese, but it will still sound like you're fighting," said Howard Lee, a talk show host on Cantonese language KMRB-AM (1430). "It's just our tone. We always sound like we're in a shouting match. Mandarin is so mellow. Cantonese is strong and edgy."
Cantonese is said to be closer than Mandarin to ancient Chinese. It is also more complicated. Mandarin has four tones, so a character can be pronounced four ways with four different meanings. Cantonese has nine tones.
Beginning in the 1950s, the Chinese government tried to make Mandarin the national language in an effort to bridge the myriad dialects across the country. Since then, the government has been working to simplify the language, renamed Putonghua, and give it a proletarian spin. To die-hard Cantonese, no fans of the Communist government, this is one more reason to look down on Mandarin.
Many say it is far more difficult to learn Cantonese than Mandarin because the former does not always adhere to rules and formulas. Image-rich slang litters the lexicon and can leave anyone ignorant of the vernacular out of touch.
"You have to really listen to people if you want to learn Cantonese," said Gary Tai, who teaches the language at New York University and is also a principal at a Chinese school in Staten Island. "You have to watch movies and listen to songs. You can't learn the slang from books."
Popular phrases include the slang for getting a parking ticket, which in Cantonese is "I ate beef jerky," probably because Chinese beef jerky is thin and rectangular, like a parking ticket. And teo bao (literally "too full") describes someone who is uber-trendy, so hip he or she is going to explode.
Many sayings are coined by movie stars on screen. Telling someone to chill out, comedian Stephen Chow says: "Drink a cup of tea and eat a bun."
Then there are the curse words, and what an abundance there is.
A four-syllable obscenity well known in the Cantonese community punctuates the end of many a sentence.
"I think we all agree that curse words in Cantonese just sound better," said Lee, the radio host. "It's so much more of a direct hit on the nail. In Mandarin, they sound so polite."
His colleague, news broadcaster Vivian Lee, chimed in to clarify that the curse words were not vindictive.
"It's not that Cantonese people are less educated. They're very well educated. The language is just cute and funny. It doesn't hurt anyone," said Lee, who does the news show on the station five days a week. "The Italians need body language. We don't need that at all. We have adjectives."
To stress a point or to twist a sentence into a question, Cantonese speakers need only add a dramatic ahhhhhhh or laaaaaaa at the end.
Something simple like "Let's go" becomes "C'mon, let's get a move on!" when it's capped with laaaaa.
By comparison, with Mandarin from China, what you see is what you get. The written form has been simplified by the Chinese government so that characters require fewer strokes. It is considered calmer and more melodic.
Take the popular Cantonese expression chi-seen, which means your wires have short-circuited. It is used, often affectionately, to call someone or something crazy. The Mandarin equivalent comes off to Cantonese people sounding like "You have a brain malfunction that has rendered your behavior unusual."
The calm tones of Mandarin are heard more and more around Southern California's Chinese community.
Even quintessential Hong Kong-style restaurants, including wonton noodle shops, now have waitresses who speak Mandarin, albeit badly, so they can take orders. Elected officials in Los Angeles County, even native Cantonese speakers, are holding news conferences in Mandarin.
Some Cantonese speakers feel besieged.
Cheryl Li, a 19-year-old Pasadena City College student whose parents are from Hong Kong, is studying to become an occupational therapist and volunteers at the Garfield Medical Center in Monterey Park, where most of the patients are Chinese.
Recently, she was asking patients, in Mandarin, what they wanted to eat. When one man thought her accent was off, he said, "Stupid second-generation Chinese American doesn't speak Mandarin."
Li responded angrily, "No! I was born here. But I understand enough."
"We're in the minority," she added, reflecting on the incident. "I'm scared Cantonese is going to be a lost language."
Still, Li is studying Mandarin.
There are places where Cantonese is protected and cherished.
At a cavernous Chinese seafood restaurant in Monterey Park, members of the Hong Kong Schools Alumni Federation gathered in a back room to munch on stir-fried scallops, pork offal soup and spare ribs.
It was a regular monthly meeting of the group and a sanctuary for Hong Kong Chinese people who take comfort eating and joking with fellow Cantonese speakers.
"I just can't express myself as freely in Mandarin," said Victor Law, an accountant who left Hong Kong to attend college in the U.S. 34 years ago. "That's why we have this association. I feel like we're the last of a dying breed."
For Law, it's not just the language but many Cantonese traditions that are on the decline. He says it's now hard to find a mah-jongg game that uses Hong Kong rules instead of Taiwanese rules, a distinction concerning how many tiles are used.
"I'm not ready to be a dinosaur," said Amy Yeung, president of the alumni group.
To the trained ear, it was instantly apparent that this was a gathering of Cantonese speakers. The room was deafeningly loud with everyone talking. Even serious discussions were punctuated with wisecracks.
When Yeung announced that members could get seats and walk the red carpet at an Asian film festival, the room erupted in unison in the most common way a Cantonese person expresses astonishment.
Waaaaaaaaaaaaaaaaaaah!
Near the end of the night, Yeung had important news. A mother in Hong Kong called to say she was moved to tears by a scholarship the federation had given to her daughter to attend the Massachusetts Institute of Technology.
"She told me to tell you all, 'Thank you from the bottom of my heart. I didn't know there were such good people in the world,' " Yeung said.
The room fell silent for a moment. Sensing the awkwardness and, God forbid, self-congratulatory tone of the story, Law blurted, "Does she know how to cook?"
Everyone laughed and another successful meeting came to an end.
The alumni association can afford to lament. Many of them speak Mandarin already. But many Cantonese speakers are finding out now that they have to learn Mandarin or risk being left behind in business or even within their families.
To learn Mandarin, Joyce Fong sits in her favorite black leather massage chair in front of her living room TV and goes through Chinese soap operas on DVD. Some are about ancient Chinese dynasties. Others focus on the story of a single mother. And a few are South Korean programs dubbed into Mandarin.
The 67-year-old retiree says she has to pick up the language if she hopes to be able to communicate with her 9- and 5-year-old grandsons in China.
The boys had been living with their parents in the Bay Area, but the family decided to move to China a year ago so that Fong's son, Gregory, could take a job at a university and also raise his children immersed in Chinese culture.
Although the grandchildren will also speak English, they will primarily use Mandarin at school, Fong said.
"I want to encourage them. I tell them, 'Grandma is trying to learn Mandarin too,' " said Fong, who immigrated to the U.S. from Hong Kong 53 years ago and is socially involved in L.A. Chinatown through her family association.
Walnut City Councilman Joaquin Lim grew up in Hong Kong and immigrated to the U.S. in the 1960s. For decades in California, he found he could get by with English and Cantonese.
But that changed when he decided to get into politics a decade ago.
Running for the school board in his suburban community, Lim quickly realized that most of his Chinese constituents in the eastern San Gabriel Valley were newcomers who didn't speak Cantonese.
So Lim had his Mandarin-speaking friends speak to him in their mother tongue. He watched movies in Mandarin and listened to Mandarin songs. By the time he ran for City Council in 1995, he felt comfortable enough with the language to campaign door-to-door and talk to Mandarin-speaking residents.
But there's always room for improvement — as Mandarin speakers are quick to remind him when he gives speeches. A few months ago, he was speaking to the Chinese language media at a news conference announcing a task force to improve health standards in Chinese restaurants.
As he spoke in Mandarin, fellow task force member Anthony Wong interrupted him in mid-sentence to correct his grammar.
The ethnic Chinese reporters chuckled, acknowledging that his Mandarin was a work in progress.
Lim recently spoke at a graduation ceremony at Cal Poly Pomona for government officials from central China who took a four-week course in American administrative practices.
Lim thought it went well. But the leader of the Chinese delegation had a slightly more reserved review: "It's much better than most Cantonese-speaking people."
Copyright 2006 Los Angeles Times
When Darwin Meets Dickens
By Nick Gillespie, TCS Daily, 29 Dec 2005
One of the subtexts of this year's Modern Language Association conference -- and, truth be told, of most contemporary discussions of literary and cultural studies -- is the sense that lit-crit is in a prolonged lull. There's no question that a huge amount of interesting work is being done -- scholars of 17th-century British and Colonial American literature, for instance, are bringing to light all sorts of manuscripts and movements that are quietly revising our understanding of liberal political theory and gender roles -- and that certain fields -- postcolonial studies, say, and composition and rhetoric -- are hotter than others. But it's been years -- decades even -- since a major new way of thinking about literature has really taken the academic world by storm.
If lit-crit is always something of a roller-coaster ride, the car has been stuck at the top of the first big hill for a while now, waiting for some type of rollicking approach to kick in and get the blood pumping again. What's the next big thing going to be? The next first-order critical paradigm that -- like New Criticism in the 1940s and '50s; cultural studies in the '60s; French post-structural theory in the '70s; and New Historicism in the '80s -- really rocks faculty lounges?
It was with this question in mind that I attended yesterday's panel on "Cognition, Emotion, and Sexuality," which was arranged by the discussion group on Cognitive Approaches to Literature and moderated by Nancy Easterlin of the University of New Orleans. Scholars working in this area use developments in cognitive psychology, neurophysiology, evolutionary psychology, and related fields to figure out not only how we process literature but, to borrow the title of a forthcoming book in the field, Why We Read Fiction.
Although there are important differences, cognitive approaches often overlap with evolutionary approaches, or what The New York Times earlier this year dubbed "The Literary Darwinists"; those latter critics, to quote the Times:
“...read books in search of innate patterns of human behavior: child bearing and rearing, efforts to acquire resources (money, property, influence) and competition and cooperation within families and communities. They say that it's impossible to fully appreciate and understand a literary text unless you keep in mind that humans behave in certain universal ways and do so because those behaviors are hard-wired into us. For them, the most effective and truest works of literature are those that reference or exemplify these basic facts.”
Both cognitive and evolutionary approaches to lit-crit have been gaining recognition and adherents over the past decade or so. Cognitive critics are less interested in recurring plots or specific themes in literature, but they share with the Darwinists an interest in using scientific advances to help explore the universally observed human tendency toward creative expression, or what the fascinating anthropologist Ellen Dissanayake called in Homo Aestheticus: Where Art Comes From and Why, “making special.”
This unironic -- though hardly uncritical -- interest in science represents a clear break with much of what might be called the postmodern orthodoxy, which views science less as a pure source of knowledge and more as a means of controlling and regulating discourse and power. The postmodern view has contributed to a keener appreciation of how appeals to science are often self-interested and obfuscating. In this, it was anticipated in many ways by libertarian analyses such as F.A. Hayek's The Counter-Revolution of Science: Studies on the Abuse of Reason (1952) and Thomas Szasz's The Myth of Mental Illness, which exposed a hidden agenda of social control behind the helper rhetoric of the medical establishment and, not coincidentally, appeared the same year as Michel Foucault's The Birth of the Clinic.
At the same time, the postmodern view of science as simply one discourse among many could be taken to pathetic and self-defeating extremes, as the Sokal Hoax, in which physicist Alan Sokal published a secret parody in a leading pomo journal, illustrated. Indeed, the status of science -- and perhaps especially evolution and theories of human cognition that proceed from it -- in literary studies is curious. On the one hand, a belief in evolution as opposed to creationism or Intelligent Design is considered by most scholars a sign of cosmopolitan sophistication and a clear point of difference with religious fundamentalists. On the other hand, there are elements of biological determinism implicit in evolution that cut against various left-wing agendas -- and against the postmodern assertions that all stories are equally (in)valid.
Yet if evolution is real in any sense of the word, it must have a profound effect on what we do as human beings when it comes to art and culture.
Which brings us back to the "Cognition, Emotion, and Sexuality" panel, which sought, pace most literary theory of the past few decades, to explore universal processes by which human beings produce and consume literature. That alone makes the cognitive approach a significant break with the status quo.
The first presenter was Alan Palmer, an independent scholar based in London and the author of the award-winning Fictional Minds. For Palmer, how we process fiction is effectively hardwired, though not without cultural emphases that depend on social and historical context; it also functions as a place where we can understand more clearly how we process the "real" world. After summarizing recent cognitive work that suggests "our ways of knowing the world are bound up in how we feel the world...that cognition and emotion are inseparable," he noted that the basic way we read stories is by attributing intentions, motives, and emotions to characters. "Narrative," he argued, "is in essence the description of fictional mental networks," in which characters impute and test meanings about the world.
He led the session through a close reading of a passage from Thomas Pynchon's The Crying of Lot 49. The section in question was filled with discrepant emotions popping up even in the same short phrases. For instance, the female protagonist Oedipa Maas at one point hears in the voice of her husband "something between annoyance and agony." Palmer -- whose argument was incredibly complex and is hard to reproduce -- mapped out the ways in which both the character and the reader made sense of those distinct emotional states of mind. The result was a reading that, beyond digging deep into Pynchon, also helped make explicit the "folk psychology" Palmer says readers bring to texts -- and how we settle on meanings in the wake of unfamiliar emotional juxtapositions. As the panel's respondent, University of Connecticut's Elizabeth Hart, helpfully summarized, Palmer's reading greatly "complexified the passage" and was "richly descriptive" of the dynamics at play.
The second paper, by Auburn's Donald R. Wehrs, argued that infantile sexual experiences based around either the satisfaction of basic wants by mothers or proximity to maternal figures grounded the metaphors used by various philosophers of religious experience. Drawing on work that argues that consciousness emerges from the body's monitoring itself in relation to objects outside of it, Wehrs sketched a metaphoric continuum of images of religious fulfillment with St. Augustine at one end and Emmanuel Levinas at the other; he also briefly located the preacher Jonathan Edwards and Ralph Waldo Emerson on the continuum. As Hart the respondent noted, Wehrs showed that there's "an emotional underwebbing to the history of ideas." That is, a set of diverse philosophers expressed a "common cognitive ground rooted in infantile erotic experience rather than practical reasoning."
Augustine, says Wehrs, conflates the divine and human and locates the origin of love and religious ecstasy with the stilling of appetite or desire. In essence, peace is understood as the absence of bad appetites, which accords with one basic infantile erotic or physical response to wants. Levinas, on the other hand, also draws on infantile experience but focuses not on ingestion but on proximity to the mother. Both of these reactions are basic cognitive realities that all humans experience as infants; together, they create a range of possible metaphors that recur in religious discussions. On the one hand, Augustine talks of being one with God (and the mother), of an inviolate bond that shows up in somewhat attenuated form in Jonathan Edwards's imagery of being penetrated by God. On the other, Levinas stresses proximity to the Other, which mirrors infantile cognitive experience of closeness with the mother. This understanding, he said, is also reflected in Emerson's metaphors of resting and laying in Nature.
Will cognitive approaches become the next big thing in lit-crit? Or bio-criticism of the Darwinian brand? That probably won't happen, even as these approaches will, I think, continue to gain in reputation and standing. More to the point, as I argued in a 1998 article, these scholars who are linking Darwin and Dickens have helped challenge an intellectual orthodoxy that, however exciting it once was, seems pretty well played out. In his tour de force Mimesis and the Human Animal: On the Biogenetic Foundations of Literary Representation (1996), Temple's Robert Storey -- one of Nancy Easterlin's doctoral advisors -- warns:
“If [literary theory] continues on its present course, its reputation as a laughingstock among the scientific disciplines will come to be all but irreversible. Given the current state of scientific knowledge, it is still possible for literary theory to recover both seriousness and integrity and to be restored to legitimacy in the world at large.”
Ten years out, Storey's warning seems less pressing. The lure of the most arch forms of anti-scientific postmodernism has subsided, partly because of their own excesses and partly because of challenges such as Storey's. As important, the work being done by the cognitive scholars and others suggests that literature and science can both gain from ongoing collaboration.
Nick Gillespie is Editor-in-Chief of Reason.
The Battle to Stop Bird Flu
The pandemic has hit New Mexico. Inside the Los Alamos weapons lab, massive computer simulations are unleashing disease and tracking its course, 6 billion people at a time.
By Thomas Goetz, Wired
On a cold January day in 1976, Private David Lewis came down with the flu. Struck with the classic symptoms - headache, sore throat, fever - Lewis was told to go to his barracks at Fort Dix, New Jersey, and get some rest. Instead, he went on a march with other grunts, collapsed, and, after being rushed to the base hospital, died on February 4. He was the first - and, as it would turn out, the only - fatality of the great swine flu epidemic of 1976.
Lewis' death came just as health officials were starting to worry about an influenza outbreak in the US. The best science at the time held that flu epidemics erupted in once-a-decade cycles; since the last epidemic had occurred in 1968, the next one should be on the near horizon. As an article in The New York Times put it just days before Lewis fell ill: "Somewhere, in skies or fields or kitchens, the molecules of the next pandemic wait."
At Fort Dix, a few other soldiers developed flu symptoms. When lab tests revealed that perhaps 500 on the base had caught the virus, officials at the Centers for Disease Control and Prevention faced a quandary. Was this the epidemic they'd feared, in which case they should call for mass inoculation? Or should they play the odds, hoping the disease would go away as often happens?
They had little information to go on: the outbreak in New Jersey, isolated cases from Minnesota to Mississippi, and a flu virus that looked suspiciously like the strain that killed half a million Americans in 1918. Estimates for the chance of an epidemic ranged from 2 to 35 percent. Indeed, there was much the scientists didn't know about influenza, period. Flu viruses hadn't been isolated until the 1930s, and they are moody, fast-mutating pathogens. "The speed with which [mutation] can happen," the Times wrote, "is mystifying." When a strain was identified, there was no telling how virulent it was. At the time, the best computer models were in Russia, where health authorities were doing a fair job predicting the spread of flu from city to city. But those models took advantage of the Soviets' penchant for tracking the movements of their citizens; in the US, where travel was open, it was impossible to create such a forecast.
So on March 24, 1976, President Gerald Ford convened a "blue-ribbon panel" of experts from the CDC, the Food and Drug Administration, and the National Institutes of Health. After a few hours, Ford emerged with Jonas Salk, the doctor behind the polio vaccine, by his side and announced a plan "to inoculate every man, woman, and child in the United States." It was to be the largest immunization drive in US history.
The inoculations began on October 1. As of mid-December, 20 percent of the population had received a shot. But by then, it had become clear that an epidemic was not, in fact, at hand. Lewis remained the only fatality - unless you count the 32 other people who died from the vaccine. Soon the program, the last major inoculation effort in the US, was canceled.
The 1976 swine flu scare has become enshrined as "the epidemic that never was," one of the great fiascoes of our national health care system. But in truth, government officials performed well enough. In just a few months, they went from isolating a strange new flu virus to delivering a vaccine to every American who wanted one. The problem was, all they had were blunt instruments: crude mathematical models, rough estimates of infection rates, and a vaccine that often packed too strong a punch. They were fairly well equipped to react to a worst-case scenario - they just weren't equipped to determine if one was imminent. Forced to guess, they chose "to risk money rather than lives," as Theodore Cooper, an assistant secretary of Health, Education, and Welfare, said at the time. "Better to be safe than sorry."
All of which raises a question: With the specter of an actual flu epidemic looming, are we any better equipped today? H5N1, the strain of avian influenza currently festering in Asia, has yet to pull off the mutation that would customize it for human-to-human transmission. But we know it's an especially lethal virus; most health experts expect it will make that jump soon enough. So the task for experts is to devise a plan that pinpoints how the virus might spread through the US population - a plan that draws more from the Soviet approach to disease forecasting than from the CDC's approach in 1976.
Thirty years on, a new science of epidemiology is at hand. It's based on sophisticated computer models that can get ahead of a virus and, in a sometimes dazzling demonstration of computer science, provide exacting prescriptions for health care policy rather than best guesses. It's an approach pioneered not by physicians but by physicists. And it owes a lot to the nuclear bomb.
In 1992, the US announced a moratorium on nuclear testing. The move meant that the Pentagon could not use underground test explosions to "certify" its arsenal of weapons - to establish that its nuclear stockpile would work when called upon and be safe until that day. That forced the guardians of the stockpile - the nuclear scientists at Los Alamos National Laboratory in New Mexico - to devise new ways to do their jobs. And that meant massive supercomputer simulations.
Computer simulations have a long history at Los Alamos. They were first deployed at the lab during the Manhattan Project in the 1940s to model nuclear explosions - among the first computer models ever attempted. Early on, they were a coarse tool and no substitute for physical experiments; the physicist Richard Feynman, who worked at the lab in its earliest years, called them "a disease" that would lead scientists into computerized daydreams tangential to the task at hand. But over the next 50 years they became an important instrument at Los Alamos, indispensable to the study of nuclear fusion and rocket propulsion. The 1992 moratorium simply codified that role, making computer models the only game in town. Since then, the lab has built one of the world's largest supercomputing facilities, amassing a total of 85 teraflops of processing power.
These tools are now being used in research that goes far beyond weapons work. Among the lab's 6,000 scientists, you'll find astrophysicists modeling white dwarf stars, chemical engineers replicating the effects of Florida hurricanes, geologists modeling the Earth's core, and biologists constructing microbial genomes. "All science is simulation these days," says Stephen Lee, the deputy division leader of computational sciences at Los Alamos.
The most promising application of sim science to real-world policy targets epidemic disease. A mile from the main compound at Los Alamos, in a grade school turned research lab, half a dozen physicists and computer scientists (and one mathematical biologist) are grinding out disease like pepper from a mill. This is EpiSims, an ambitious computer-simulation project that has released anthrax in Houston, sown the bubonic plague in Chicago, and, most recently, spread the flu in Los Angeles.
In 2000, EpiSims let loose smallpox in Portland, Oregon. Programmers started by creating a computer model of the city that's accurate down to the individual high school, traffic light, and citizen. In EpiSims, as in life, people go about their daily business. So on Tuesday morning, John Doe leaves his apartment in the Pearl District at 6:45, stops at Starbucks at 7:08, gets to his office parking lot at 7:45, greets his colleagues in the elevator at 7:49, and is at his desk checking email by 8:02. There are three-quarters of a million John Does in EpiSims' Portland, and just as many Jane Does, each with their own routines and encounters. This is the secret of EpiSims: its insatiable appetite for minutiae. EpiSims is the closest we've come to a huge city living inside a computer - or more specifically, several hundred computers. James Smith, who runs the EpiSims project at Los Alamos, describes his tools as "giant data fusion engines." With the scientists' sophisticated algorithms and the lab's supercomputer clusters, a one-year simulation takes about 300 parallel processors and less than 24 hours to run.
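The core mechanic, stripped of the minutiae, fits in a few dozen lines of Python. The sketch below is a toy, not EpiSims: every location, probability, and duration is an invented placeholder, and all agents share five generic venues instead of having their own homes and offices. What it keeps is the essential loop of agents following fixed itineraries, with infection passing between agents who occupy the same place in the same time slot.

```python
import random

# Toy EpiSims-style model: agents with fixed daily itineraries, and
# infection through co-location. All parameters are invented.
random.seed(42)

N_AGENTS = 1000
LOCATIONS = ["home", "transit", "office", "school", "store"]
P_TRANSMIT = 0.02        # per-contact infection probability (invented)
INFECTIOUS_DAYS = 5      # days an agent stays infectious (invented)

# Each agent repeats the same four-stop daily itinerary, like the John
# and Jane Does above.
agents = [
    {"itinerary": [random.choice(LOCATIONS) for _ in range(4)],
     "state": "S",       # S = susceptible, I = infectious, R = recovered
     "days_sick": 0}
    for _ in range(N_AGENTS)
]
agents[0]["state"] = "I"          # seed a single infection

for day in range(60):
    for slot in range(4):
        # Group agents by where their itinerary puts them this slot.
        rooms = {}
        for a in agents:
            rooms.setdefault(a["itinerary"][slot], []).append(a)
        # Every infectious occupant is one independent chance to infect
        # each susceptible occupant of the same room.
        for occupants in rooms.values():
            n_inf = sum(a["state"] == "I" for a in occupants)
            if n_inf == 0:
                continue
            p_escape = (1.0 - P_TRANSMIT) ** n_inf
            for a in occupants:
                if a["state"] == "S" and random.random() > p_escape:
                    a["state"] = "I"
    # Advance the disease clock at the end of each day.
    for a in agents:
        if a["state"] == "I":
            a["days_sick"] += 1
            if a["days_sick"] >= INFECTIOUS_DAYS:
                a["state"] = "R"
    print(f"day {day:2d}: {sum(a['state'] != 'S' for a in agents)} ever infected")
```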
Smallpox is an opportunistic virus, eager to take advantage of incidental encounters. It spreads through the respiratory system and incubates for as long as 10 days before the onset of fluish symptoms - coughing, fever, stomachache. Only days later do victims develop a pustular rash - the pox. It is vicious; in an untreated smallpox epidemic, 30 percent of those infected will die.
In the 2000 smallpox sim, the EpiSims team tracked the virus as it climbed toward its 30 percent fatality rate not all at once, but person by person: schoolteachers and shop clerks first, then office workers and hospital staff. As smallpox leapt from one unwitting victim to another, the EpiSims team watched disease ooze out of schools and shopping malls, erupt in downtown office buildings, and take root in neighborhoods. Within 90 days, Portland was teeming with smallpox. The epidemic was at hand.
But simulating the spread of disease is only half the job. EpiSims also had to evaluate how officials should respond. So, researchers rebooted the sim and Portland was once again alive and disease free. And this time the city had a plan of action. Four days after the first sign of virus, the authorities closed the schools, kicked off a mass vaccination program, and generally shut the city down. And with it, the disease: In 100 days, it had run its course. That sim was followed by another with a slightly different response strategy, and then another. EpiSims eventually ran through hundreds of smallpox models, sometimes vaccinating only exposed individuals, other times targeting the so-called superspreaders, individuals who transmit more than their share of disease, sometimes putting the entire city in quarantine. With every tweak, the disease would peter out or gain steam accordingly.
The EpiSims smallpox models led to a handful of contrarian conclusions about epidemic disease. The first: "The superspreader hypothesis isn't necessarily true," Smith says. This rule holds that in any population, the more social individuals - the hubs - are the principal conduits for spreading disease. Shatter the network by inoculating or removing these hubs, the theory goes, and you'll stand a better chance of knocking out the disease. But EpiSims has shown that we're all more popular than we might think. Even the most reclusive of us runs to Walgreens for toothpaste or drops by Boston Chicken for takeout. For a highly communicable disease like smallpox or influenza, these incidental interactions spread disease just as well as extended encounters. So chasing after the hubs can mean chasing after 80 percent of the population - a huge waste of time and energy. Better simply to inoculate the entire city.
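That finding can be checked in miniature on a toy contact graph. In the sketch below, a hypothetical stand-in for the EpiSims network with every number invented, each person gets a few regular contacts plus one incidental contact, which flattens the degree distribution; vaccinating the best-connected tenth of the population is then compared with vaccinating a random tenth.

```python
import random

# Toy test of the superspreader idea: compare vaccinating the
# best-connected 10% against a random 10%. Graph and rates invented.
random.seed(1)

N = 5000
P_TRANSMIT = 0.25        # per-contact transmission probability (invented)
BUDGET = N // 10         # vaccine doses available (invented)

contacts = {i: set() for i in range(N)}
def link(i, j):
    if i != j:
        contacts[i].add(j)
        contacts[j].add(i)

for i in range(N):
    for _ in range(3):              # regular contacts (family, coworkers)
        link(i, random.randrange(N))
    link(i, random.randrange(N))    # one incidental contact (the toothpaste run)

def outbreak_size(vaccinated):
    """Percolation-style outbreak from a single unvaccinated seed."""
    seed = next(i for i in range(N) if i not in vaccinated)
    infected, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for i in frontier:
            for j in contacts[i]:
                if (j not in infected and j not in vaccinated
                        and random.random() < P_TRANSMIT):
                    infected.add(j)
                    nxt.append(j)
        frontier = nxt
    return len(infected)

by_degree = sorted(range(N), key=lambda i: len(contacts[i]), reverse=True)
print("hub-targeted:", outbreak_size(set(by_degree[:BUDGET])), "infected")
print("random:      ", outbreak_size(set(random.sample(range(N), BUDGET))), "infected")
```

Over repeated runs on a graph this uniform, the two strategies tend to come out close, which is the intuition behind Smith's point: hub targeting only pays off when the degree distribution has genuine outliers, and incidental contacts erase them.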
A second revelation: With a lethal pathogen like smallpox, response time is all. As the delay stretches from 4 to 7 to 10 days before officials move into action, EpiSims found that the outbreak becomes increasingly lethal. It turns out that, in the ticking moments after an epidemic strikes, when health officials act is more important than what they actually do. Start with inoculation. Or quarantine. Or school closings. It doesn't matter. What does matter is reducing the time between first outbreak and first response. At the same time, EpiSims warns against overreacting to a less-lethal disease - as in 1976, when standard health measures would have sufficed. (How to tell the difference? Run a simulation.)
These sorts of precise, real-world conclusions are the payoff of the EpiSims approach. They are, to use Smith's term, "actionable" - worthy of consideration not just by scientists but by policymakers. Such relevance has made EpiSims a darling at Los Alamos and an integral component of a Department of Homeland Security project called Nisac (for National Infrastructure Simulation and Analysis Center), an effort to model a range of disasters and plot recovery strategies. Born in 2000 as a tiny $500,000 joint project between Los Alamos and Sandia, its sister lab in New Mexico, Nisac got a $20 million infusion after September 11, 2001, and a mandate to measure how the nation would fare after another deliberate attack, be it a dirty nuke or a bioweapon like smallpox.
More recently, as attention has turned to DHS's responsibility for acts of God as well as acts of terrorists, EpiSims has begun assessing the threat of avian influenza. With these new simulations, Smith's team is adding even more granularity. They're modeling the health care system down to the hospital bed, to see what happens if flu victims flood hospitals, fill the beds, and then spill back into their homes. They're taking into account slight behavior changes, so if people start wearing surgical masks, SARS-style, disease transmissions in the sim will fall off according to the masks' particulates-per-million filtration rate. The results go to the DHS and straight up the chain, helping inform the ultimate question that looms behind all of Nisac's work. "What do we tell the President?" says DV Rao, who directs the lab's Decision Applications division. At these highest levels, this sort of predictive science is an entirely new and unfamiliar decision-making tool. "I don't know what they make of it now," says Rao. "But in a year, hopefully they're going to say, 'All right. Tell us what we should be doing.'"
On a long flight to Maui in February 2003, Tim Germann, a physical chemist at Los Alamos, was reading Richard Preston's Demon in the Freezer. A vivid account of what's at stake if the last samples of smallpox escape US or Russian labs, Preston's warning struck Germann as real enough. But the scope of the danger was unquantified and apparently unquantifiable. Germann also had in his bag a copy of Science that included a piece coauthored by Emory University biostatistician Ira Longini. ("It was a long flight," Germann says.) Longini was investigating different vaccination strategies in the event of a smallpox outbreak. But his simulation sample totaled just 2,000 people - not large enough to extrapolate his conclusions to a larger population.
The reading made Germann wonder: Sure, a national outbreak of smallpox would be bad. But how bad, and how likely? And who would be at risk? Then Germann realized that he had a way of finding out. His day job involved computational materials science, specifically, how metal atoms - copper and iron - would react under stress or shock. In an epidemic, Germann thought, people might behave like the atoms in his simulations. "Atoms have short-range interactions," Germann explains. "Even though we're doing millions or billions of them, every one just moves in its local neighborhood. People work in the same way." So, just as a cooling metal slows down atoms, a quarantine slows down people. By bolstering these physics models with experimental data on things like how viruses circulate from children to adults, he could conceivably model the entire US population, or even the entire global population.
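The analogy is concrete enough to run. The lattice toy below is not Germann's code, and every number in it is invented; it simply gives each person short-range interactions with four neighbors, the way atoms in a molecular dynamics simulation interact only within a local cutoff, and lets a quarantine factor scale contact the way cooling scales atomic motion.

```python
import random

# Lattice toy of the atoms-to-people analogy: short-range interactions
# only, with a quarantine factor scaling contact the way cooling scales
# atomic motion. All numbers are invented.
random.seed(7)

SIZE = 60                 # population lives on a SIZE x SIZE grid
P_TRANSMIT = 0.3          # per-neighbor, per-step transmission (invented)
INFECTIOUS_STEPS = 4      # steps an individual stays infectious (invented)

def run(quarantine):
    """quarantine scales transmission: 1.0 = none, 0.0 = total lockdown."""
    state = [["S"] * SIZE for _ in range(SIZE)]
    clock = [[0] * SIZE for _ in range(SIZE)]
    state[SIZE // 2][SIZE // 2] = "I"       # seed the center
    p = P_TRANSMIT * quarantine
    for _ in range(300):
        newly_infected = []
        for x in range(SIZE):
            for y in range(SIZE):
                if state[x][y] != "I":
                    continue
                # Short-range interactions: the four nearest neighbors.
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if (0 <= nx < SIZE and 0 <= ny < SIZE
                            and state[nx][ny] == "S" and random.random() < p):
                        newly_infected.append((nx, ny))
        for x, y in newly_infected:
            state[x][y] = "I"
        for x in range(SIZE):
            for y in range(SIZE):
                if state[x][y] == "I":
                    clock[x][y] += 1
                    if clock[x][y] >= INFECTIOUS_STEPS:
                        state[x][y] = "R"
    return sum(cell != "S" for row in state for cell in row)

for q in (1.0, 0.5, 0.25):
    print(f"quarantine factor {q}: {run(q)} of {SIZE * SIZE} ever infected")
```

Dial the factor down far enough and the outbreak doesn't just slow; it falls below the lattice's percolation threshold and dies out near the seed, the kind of qualitative transition a physicist would look for.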
Germann cranked up the simulation, adjusted the software, and added the parameters Longini had used to model his smallpox outbreak. That marked the birth of what would be called EpiCast - a combination of epidemic and forecast. "It's basically still the same code," Germann says. "We use it one day for running atoms and the next for people." As it turned out, the model showed that smallpox may not be the cataclysm many imagine. Because of the long lag time between successive generations of an outbreak (two weeks or more), and the tell-tale symptoms, smallpox would be quickly identified. That, plus the stockpiling, post-9/11, of large quantities of vaccine, means that "it should be possible to contain" an outbreak after the first few waves, Germann says.
The flu, by contrast, has a very short generation time (days instead of weeks) and generic symptoms. What's more, it's nearly impossible to stockpile a vaccine because the virus is so quick to mutate. Add the fact that people can be infected and contagious without knowing it, and you've got one vexing virus. So Germann called Longini and described how his molecular models could be adapted to epidemics. "I thought it was a bit preposterous," recalls Longini, who has been modeling epidemics for 30 years, most recently with a National Institutes of Health research program investigating the risk of pandemic influenza. "When Tim told me he could model the whole country or the whole planet, 6 billion people, it sounded very impressive. But I wondered if it was really possible."
So Germann set to work creating flu scenarios to augment Longini's NIH work. With nearly 300 million agents representing every man, woman, and child in the US, EpiCast doesn't bother to track minute-by-minute behaviors as EpiSims does. Instead, Germann puts his computing power to work detailing how slightly different parameters - various antivirals or different isolation policies, for instance - have slightly different national repercussions. So far, the project has run about 200 simulations of an avian flu epidemic, models that have helped Longini's group reach provocative conclusions that fall along two lines: how a nationwide outbreak might take hold, and what policies would best combat it.
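A sweep of that shape is easy to mock up if the 300 million agents are swapped for a deterministic SIR model, a deliberate oversimplification; the reproduction numbers, intervention factors, and infectious period below are invented placeholders, not Longini's or Germann's parameters.

```python
# Toy national-scale parameter sweep: a deterministic SIR model stands
# in for EpiCast's agent simulation. All parameters are invented.

def attack_rate(r0, transmission_scale, days=365):
    """Fraction of the population ultimately infected, given an
    intervention that scales transmission (antivirals, isolation)."""
    n = 300e6                       # roughly the US population
    s, i, r = n - 1.0, 1.0, 0.0
    gamma = 1.0 / 4.0               # ~4-day infectious period (invented)
    beta = r0 * gamma * transmission_scale
    for _ in range(days):           # one Euler step per day
        new_infections = beta * s * i / n
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return r / n

for r0 in (1.6, 1.9, 2.2):          # candidate pandemic strains (invented)
    for scale in (1.0, 0.8, 0.6):   # 1.0 = no intervention
        print(f"R0 {r0}, transmission x{scale}: "
              f"{attack_rate(r0, scale):.0%} infected")
```

Each printed line plays the role of one of those 200 simulations: the same model with slightly different knobs, and a national attack rate out the other end.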
EpiCast reveals that, in contrast with flu epidemics of decades past, an outbreak today won't progress "like a wave across the country," spreading from town to town and state to state. Instead, no matter where it erupts - Seattle, Chicago, Miami - it will swiftly blanket the nation. "It starts in Chicago one day," Germann says, "and a couple of weeks later it's everywhere at once." Thank the airlines. Even though disease has piggybacked on air travel for decades, we generally had only isolated outbreaks of low-transmission viruses - like when SARS leapt from Hong Kong to Canada in 2003 but failed to spread beyond Toronto. In an epidemic of a highly communicable disease, the airlines' hub network would effectively seed every metropolitan area in the country within a month or two - and then reseed them, repeatedly.
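A toy metapopulation model makes the airline effect visible. Give each city ordinary local SIR dynamics, then let a small fraction of infectious people travel each day as a crude stand-in for the hub network; the cities, sizes, and rates below are all invented.

```python
# Toy metapopulation model: local SIR in each city, coupled by a small
# daily flow of infectious travelers. All numbers are invented, and the
# deterministic model treats fractional people as a continuum approximation.

CITIES = {"Seattle": 3e6, "Chicago": 9e6, "Miami": 5e6, "Denver": 2e6}
R0 = 1.8                 # invented pandemic-flu reproduction number
GAMMA = 0.25             # recovery rate: ~4-day infectious period
TRAVEL = 1e-4            # fraction of infectious people flying per day

# sir[city] = [susceptible, infectious, recovered]; seed Chicago.
sir = {c: [n, 0.0, 0.0] for c, n in CITIES.items()}
sir["Chicago"] = [CITIES["Chicago"] - 1.0, 1.0, 0.0]

total_pop = sum(CITIES.values())
for day in range(1, 181):
    # Infectious travelers leave home and land in other cities in
    # proportion to population, a crude stand-in for the hub network.
    exported = {c: sir[c][1] * TRAVEL for c in CITIES}
    pool = sum(exported.values())
    for c, n in CITIES.items():
        sir[c][1] += pool * (n / total_pop) - exported[c]
    # One day of local SIR dynamics inside each city.
    for c, n in CITIES.items():
        s, i, r = sir[c]
        new_inf = R0 * GAMMA * s * i / n
        sir[c] = [s - new_inf, i + new_inf - GAMMA * i, r + GAMMA * i]
    if day % 30 == 0:
        status = ", ".join(f"{c} {sir[c][2] / n:.0%} recovered"
                           for c, n in CITIES.items())
        print(f"day {day}: {status}")
```

Even at this tiny travel rate, every city is seeded within weeks, and because each local epidemic then grows at the same exponential rate, the curves end up nearly synchronized: everywhere at once, not a wave.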
EpiCast showed that local intervention measures can have some impact: Close the schools, enforce a quarantine, and the disease will slow down. That buys the federal government time to develop and mass-produce a vaccine. But Germann quickly adds a caveat: Acting locally may not be enough. In a worst-case outbreak, without a viable vaccine, "the disease will climb, and eventually go exponential. And once it's on the exponential curve, it's very difficult to contain." Cue Richard Preston.
In November, the Department of Health and Human Services released its pandemic influenza plan. The report offers a thorough and frank assessment of the havoc a full-fledged pandemic would wreak. The nation, the report says, "will be severely taxed, if not overwhelmed." Disease will break out repeatedly, for as long as a year. Hospitals will run out of beds and vaccines. Doctors and nurses will be overworked to the point of exhaustion. Mass fatalities will overwhelm mortuaries and morgues with bodies. Before it has exhausted itself, the report estimates, the disease could spread to as many as 90 million Americans, hospitalizing 10 million and killing almost 2 million.
The report also sketches out how the federal government should respond in such a scenario. In effect, officials face what Bruce Gellin, director of HHS's National Vaccine Program Office, describes as a reverse Hurricane Katrina: Rather than an all-out response focused on one particular region, a flu epidemic would force the government to ration its resources to serve the entire nation. How best to do that - tactically, quickly, and effectively - is now the focus of EpiCast's work.
After the HHS plan was released, Germann and Longini were called to Washington for a strategy session with officials from the NIH, DHS, and the White House. Plenty of Los Alamos scientists, starting with Oppenheimer and Feynman, have made the trek to the corridors of the Capitol. But those trips were concerned with fighting wars, not disease. During the HHS meeting, the officials talked about how to apply EpiCast to the problem at hand. Germann explained the power of the tool. If HHS wants to know where to stockpile antivirals, EpiCast can pinpoint optimal locations. If the government wants to slow down the spread of disease, EpiCast can suggest whether to screen airline passengers by body temperature - and determine just how high a fever is too high to fly. If the first outbreak is in, say, Los Angeles, "do you send doctors from around the country to the West Coast," Germann says, "or keep them where they are because it'll be everywhere in a few weeks?"
Germann assured the group he could help. Then he returned to Los Alamos. Every question means a new sim, and every sim helps answer questions that are otherwise unanswerable.
Thomas Goetz (thomas@wiredmag.com) is Wired's deputy editor.
At the same time, few people are learning Cantonese. San Jose State University and New York University offer classes, but they are almost alone among colleges with established Cantonese communities. The language is not taught at USC, UCLA, Pasadena City College, San Francisco State or Queens College in New York, to name a few.
With the changes, some are lamenting — in ways they can do only in Cantonese — the end of an era. Mandarin is now the vernacular of choice, and they say it doesn't come close to the colorful and brash banter of Cantonese.
"You might be saying, 'I love you' to your girlfriend in Cantonese, but it will still sound like you're fighting," said Howard Lee, a talk show host on Cantonese language KMRB-AM (1430). "It's just our tone. We always sound like we're in a shouting match. Mandarin is so mellow. Cantonese is strong and edgy."
Cantonese is said to be closer than Mandarin to ancient Chinese. It is also more complicated. Mandarin has four tones, so a character can be pronounced four ways, with four different meanings. Cantonese has nine tones.
Beginning in the 1950s, the Chinese government tried to make Mandarin the national language in an effort to bridge the myriad dialects across the country. Since then, the government has been working to simplify the language, renamed Putonghua, and give it a proletarian spin. To die-hard Cantonese, no fans of the Communist government, this is one more reason to look down on Mandarin.
Many say it is far more difficult to learn Cantonese than Mandarin because the former does not always adhere to rules and formulas. Image-rich slang litters the lexicon and can leave anyone ignorant of the vernacular out of touch.
"You have to really listen to people if you want to learn Cantonese," said Gary Tai, who teaches the language at New York University and is also a principal at a Chinese school in Staten Island. "You have to watch movies and listen to songs. You can't learn the slang from books."
Popular phrases include the slang for getting a parking ticket, which in Cantonese is "I ate beef jerky," probably because Chinese beef jerky is thin and rectangular, like a parking ticket. And teo bao (literally "too full") describes someone who is uber-trendy, so hip he or she is going to explode.
Many sayings are coined by movie stars on screen. Telling someone to chill out, comedian Stephen Chow says: "Drink a cup of tea and eat a bun."
Then there are the curse words, and what an abundance there is.
A four-syllable obscenity well known in the Cantonese community punctuates the end of many a sentence.
"I think we all agree that curse words in Cantonese just sound better," said Lee, the radio host. "It's so much more of a direct hit on the nail. In Mandarin, they sound so polite."
His colleague, news broadcaster Vivian Lee, chimed in to clarify that the curse words were not vindictive.
"It's not that Cantonese people are less educated. They're very well educated. The language is just cute and funny. It doesn't hurt anyone," said Lee, who does the news show on the station five days a week. "The Italians need body language. We don't need that at all. We have adjectives."
To stress a point or to twist a sentence into a question, Cantonese speakers need only add a dramatic ahhhhhhh or laaaaaaa at the end.
Something simple like "Let's go" becomes "C'mon, let's get a move on!" when it's capped with laaaaa.
By comparison, with Mandarin from China, what you see is what you get. The written form has been simplified by the Chinese government so that characters require fewer strokes. It is considered calmer and more melodic.
Take the popular Cantonese expression chi-seen, which means your wires have short-circuited. It is used, often affectionately, to call someone or something crazy. The Mandarin equivalent comes off to Cantonese people sounding like "You have a brain malfunction that has rendered your behavior unusual."
The calm tones of Mandarin are heard more and more around Southern California's Chinese community.
Even quintessential Hong Kong-style restaurants, including wonton noodle shops, now have waitresses who speak Mandarin, albeit badly, so they can take orders. Elected officials in Los Angeles County, even native Cantonese speakers, are holding news conferences in Mandarin.
Some Cantonese speakers feel besieged.
Cheryl Li, a 19-year-old Pasadena City College student whose parents are from Hong Kong, is studying to become an occupational therapist and volunteers at the Garfield Medical Center in Monterey Park, where most of the patients are Chinese.
Recently, she was asking patients, in Mandarin, what they wanted to eat. When one man thought her accent was off, he said, "Stupid second-generation Chinese American doesn't speak Mandarin."
Li responded angrily, "No! I was born here. But I understand enough."
"We're in the minority," she added, reflecting on the incident. "I'm scared Cantonese is going to be a lost language."
Still, Li is studying Mandarin.
There are places where Cantonese is protected and cherished.
At a cavernous Chinese seafood restaurant in Monterey Park, members of the Hong Kong Schools Alumni Federation gathered in a back room to munch on stir-fried scallops, pork offal soup and spare ribs.
It was a regular monthly meeting of the group and a sanctuary for Hong Kong Chinese people who take comfort eating and joking with fellow Cantonese speakers.
"I just can't express myself as freely in Mandarin," said Victor Law, an accountant who left Hong Kong to attend college in the U.S. 34 years ago. "That's why we have this association. I feel like we're the last of a dying breed."
For Law, it's not just the language but many Cantonese traditions that are on the decline. He says it's now hard to find a mah-jongg game that uses Hong Kong rules instead of Taiwanese rules, a distinction concerning how many tiles are used.
"I'm not ready to be a dinosaur," said Amy Yeung, president of the alumni group.
To the trained ear, it was instantly apparent that this was a gathering of Cantonese speakers. The room was deafeningly loud with everyone talking. Even serious discussions were punctuated with wisecracks.
When Yeung announced that members could get seats and walk the red carpet at an Asian film festival, the room erupted in unison in the most common way a Cantonese person expresses astonishment.
Waaaaaaaaaaaaaaaaaaah!
Near the end of the night, Yeung had important news. A mother in Hong Kong called to say she was moved to tears by a scholarship the federation had given to her daughter to attend the Massachusetts Institute of Technology.
"She told me to tell you all, 'Thank you from the bottom of my heart. I didn't know there were such good people in the world,' " Yeung said.
The room fell silent for a moment. Sensing the awkwardness and, God forbid, self-congratulatory tone of the story, Law blurted, "Does she know how to cook?"
Everyone laughed and another successful meeting came to an end.
The alumni association can afford to lament. Many of them speak Mandarin already. But many Cantonese speakers are finding out now that they have to learn Mandarin or risk being left behind in business or even within their families.
To learn Mandarin, Joyce Fong sits in her favorite black leather massage chair in front of her living room TV and goes through Chinese soap operas on DVD. Some are about ancient Chinese dynasties. Others focus on the story of a single mother. And a few are South Korean programs dubbed into Mandarin.
The 67-year-old retiree says she has to pick up the language if she hopes to be able to communicate with her 9- and 5-year-old grandsons in China.
The boys had been living with their parents in the Bay Area, but the family decided to move to China a year ago so that Fong's son, Gregory, could take a job at a university and also raise his children immersed in Chinese culture.
Although the grandchildren will also speak English, they will primarily use Mandarin at school, Fong said.
"I want to encourage them. I tell them, 'Grandma is trying to learn Mandarin too,' " said Fong, who immigrated to the U.S. from Hong Kong 53 years ago and is socially involved in L.A. Chinatown through her family association.
Walnut City Councilman Joaquin Lim grew up in Hong Kong and immigrated to the U.S. in the 1960s. For decades in California, he found he could get by with English and Cantonese.
But that changed when he decided to get into politics a decade ago.
Running for the school board in his suburban community, Lim quickly realized that most of his Chinese constituents in the eastern San Gabriel Valley were newcomers who didn't speak Cantonese.
So Lim had his Mandarin friends speak to him in their mother tongue. He watched movies in Mandarin and listened to Mandarin songs. By the time he ran for City Council in 1995, he felt comfortable enough with the language to campaign door-to-door and talk to Mandarin-speaking residents.
But there's always room for improvement — as Mandarin speakers are quick to remind him when he gives speeches. A few months ago, he was speaking to the Chinese language media at a news conference announcing a task force to improve health standards in Chinese restaurants.
As he spoke in Mandarin, fellow task force member Anthony Wong interrupted him in mid-sentence to correct his grammar.
The ethnic Chinese reporters chuckled, acknowledging that his Mandarin was a work in progress.
Lim recently spoke at a graduation ceremony in Cal Poly Pomona for government officials from central China who took a four-week course in American administrative practices.
Lim thought it went well. But the leader of the Chinese delegation had a slightly more reserved review: "It's much better than most Cantonese-speaking people."
Copyright 2006 Los Angeles Times
When Darwin Meets Dickens
By Nick Gillespie, TCS Daily, 29 Dec 2005
One of the subtexts of this year's Modern Language Association conference -- and, truth be told, of most contemporary discussions of literary and cultural studies -- is the sense that lit-crit is in a prolonged lull. There's no question that a huge amount of interesting work is being done -- scholars of 17th-century British and Colonial American literature, for instance, are bringing to light all sorts of manuscripts and movements that are quietly revising our understanding of liberal political theory and gender roles -- and that certain fields -- postcolonial studies, say, and composition and rhetoric -- are hotter than others. But it's been years -- decades even -- since a major new way of thinking about literature has really taken the academic world by storm.
If lit-crit is always something of a roller-coaster ride, the car has been stuck at the top of the first big hill for a while now, waiting for some type of rollicking approach to kick in and get the blood pumping again. What's the next big thing going to be? The next first-order critical paradigm that -- like New Criticism in the 1940s and '50s; cultural studies in the '60s; French post-structural theory in the '70s, and New Historicism in the '80s -- really rocks faculty lounges?
It was with this question in mind that I attended yesterday's panel on "Cognition, Emotion, and Sexuality," which was arranged by the discussion group on Cognitive Approaches to Literature and moderated by Nancy Easterlin of the University of New Orleans. Scholars working in this area use developments in cognitive psychology, neurophysiology, evolutionary psychology, and related fields to figure out not only how we process literature but, to borrow the title of a forthcoming book in the field, Why We Read Fiction.
Although there are important differences, cognitive approaches often overlap with evolutionary approaches, or what The New York Times earlier this year dubbed "The Literary Darwinists"; those latter critics, to quote the Times:
“...read books in search of innate patterns of human behavior: child bearing and rearing, efforts to acquire resources (money, property, influence) and competition and cooperation within families and communities. They say that it's impossible to fully appreciate and understand a literary text unless you keep in mind that humans behave in certain universal ways and do so because those behaviors are hard-wired into us. For them, the most effective and truest works of literature are those that reference or exemplify these basic facts.”
Both cognitive and evolutionary approaches to lit-crit have been gaining recognition and adherents over the past decade or so. Cognitive critics are less interested in recurring plots or specific themes in literature, but they share with the Darwinists an interest in using scientific advances to help explore the universally observed human tendency toward creative expression, or what the fascinating anthropologist Ellen Dissanayake called in Homo Aestheticus: Where Art Comes From and Why, “making special.”
This unironic -- though hardly uncritical -- interest in science represents a clear break with much of what might be called the postmodern orthodoxy, which views science less as a pure source of knowledge and more as a means of controlling and regulating discourse and power. The postmodern view has contributed to a keener appreciation of how appeals to science are often self-interested and obfuscating. In this, it was anticipated in many ways by libertarian analyses such as F.A. Hayek's The Counter-Revolution of Science: Studies on the Abuse of Reason (1952) and Thomas Szasz's The Myth of Mental Illness, which exposed a hidden agenda of social control behind the helper rhetoric of the medical establishment and, not uncoincidentally, appeared the same year as Michel Foucault's The Birth of the Clinic.
At the same time, the postmodern view of science as simply one discourse among many could be taken to pathetic and self-defeating extremes, as the Sokal Hoax, in which physicist Alan Sokal got a leading pomo journal to publish a deliberate parody, illustrated. Indeed, the status of science -- and perhaps especially evolution and theories of human cognition that proceed from it -- in literary studies is curious. On the one hand, a belief in evolution as opposed to creationism or Intelligent Design is considered by most scholars a sign of cosmopolitan sophistication and a clear point of difference with religious fundamentalists. On the other hand, there are elements of biological determinism implicit in evolution that cut against various left-wing agendas -- and against the postmodern assertions that all stories are equally (in)valid.
Yet if evolution is real in any sense of the word, it must have a profound effect on what we do as human beings when it comes to art and culture.
Which brings us back to the "Cognition, Emotion, and Sexuality" panel, which sought, pace most literary theory of the past few decades, to explore universal processes by which human beings produce and consume literature. That alone makes the cognitive approach a significant break with the status quo.
The first presenter was Alan Palmer, an independent scholar based in London and the author of the award-winning Fictional Minds. For Palmer, how we process fiction is effectively hardwired, though not without cultural emphases that depend on social and historical context; it also functions as a place where we can understand more clearly how we process the "real" world. After summarizing recent cognitive work that suggests "our ways of knowing the world are bound up in how we feel the world...that cognition and emotion are inseparable," he noted that the basic way we read stories is by attributing intentions, motives, and emotions to characters. "Narrative," he argued, "is in essence the description of fictional mental networks," in which characters impute and test meanings about the world.
He led the session through a close reading of a passage from Thomas Pynchon's The Crying of Lot 49. The section in question was filled with discrepant emotions popping up even in the same short phrases. For instance, the female protagonist Oedipa Maas at one point hears in the voice of her husband "something between annoyance and agony." Palmer -- whose argument is intricate and hard to reproduce here -- mapped out the ways in which both the character and the reader made sense of those distinct emotional states of mind. The result was a reading that, beyond digging deep into Pynchon, also helped make explicit the "folk psychology" Palmer says readers bring to texts -- and how we settle on meanings in the wake of unfamiliar emotional juxtapositions. As the panel's respondent, University of Connecticut's Elizabeth Hart, helpfully summarized, Palmer's reading greatly "complexified the passage" and was "richly descriptive" of the dynamics at play.
The second paper, by Auburn's Donald R. Wehrs, argued that infantile sexual experiences based around either the satisfaction of basic wants by mothers or proximity to maternal figures grounded the metaphors used by various philosophers of religious experience. Drawing on work that argues that consciousness emerges from the body's monitoring itself in relation to objects outside of it, Wehrs sketched a metaphoric continuum of images of religious fulfillment with St. Augustine at one end and Emmanuel Levinas at the other; he also briefly located the preacher Jonathan Edwards and Ralph Waldo Emerson on the continuum. As Hart the respondent noted, Wehrs showed that there's "an emotional underwebbing to the history of ideas." That is, a set of diverse philosophers expressed a "common cognitive ground rooted in infantile erotic experience rather than practical reasoning."
Augustine, says Wehrs, conflates the divine and human and locates the origin of love and religious ecstasy with the stilling of appetite or desire. In essence, peace is understood as the absence of bad appetites, which accords with one basic infantile erotic or physical response to wants. Levinas, on the other hand, also draws on infantile experience but focuses not on ingestion but on proximity to the mother. Both of these reactions are basic cognitive realities that all humans experience as infants; together, they create a range of possible metaphors that recur in religious discussions. On the one hand, Augustine talks of being one with God (and the mother), of an inviolate bond that shows up in somewhat attenuated form in Jonathan Edwards's imagery of being penetrated by God. On the other, Levinas stresses proximity to the Other, which mirrors infantile cognitive experience of closeness with the mother. This understanding, he said, is also reflected in Emerson's metaphors of resting and laying in Nature.
Will cognitive approaches become the next big thing in lit-crit? Or bio-criticism of the Darwinian brand? That probably won't happen, even as these approaches will, I think, continue to gain in reputation and standing. More to the point, as I argued in a 1998 article, these scholars who are linking Darwin and Dickens have helped challenge an intellectual orthodoxy that, however exciting it once was, seems pretty well played out. In his tour de force Mimesis and the Human Animal: On the Biogenetic Foundations of Literary Representation (1996), Temple's Robert Storey -- one of Nancy Easterlin's doctoral advisors -- warns:
“If [literary theory] continues on its present course, its reputation as a laughingstock among the scientific disciplines will come to be all but irreversible. Given the current state of scientific knowledge, it is still possible for literary theory to recover both seriousness and integrity and to be restored to legitimacy in the world at large.”
Ten years out, Storey's warning seems less pressing. The lure of the most arch forms of anti-scientific postmodernism has subsided, partly because of their own excesses and partly because of challenges such as Storey's. As important, the work being done by the cognitive scholars and others suggests that literature and science can both gain from ongoing collaboration.
Nick Gillespie is Editor-in-Chief of Reason.
The Battle to Stop Bird Flu
The pandemic has hit New Mexico. Inside the Los Alamos weapons lab, massive computer simulations are unleashing disease and tracking its course, 6 billion people at a time.
By Thomas Goetz, Wired
On a cold January day in 1976, Private David Lewis came down with the flu. Struck with the classic symptoms - headache, sore throat, fever - Lewis was told to go to his barracks at Fort Dix, New Jersey, and get some rest. Instead, he went on a march with other grunts, collapsed, and, after being rushed to the base hospital, died on February 4. He was the first - and, as it would turn out, the only - fatality of the great swine flu epidemic of 1976.
Lewis' death came just as health officials were starting to worry about an influenza outbreak in the US. The best science at the time held that flu epidemics erupted in once-a-decade cycles; since the last epidemic had occurred in 1968, the next one should be on the near horizon. As an article in The New York Times put it just days before Lewis fell ill: "Somewhere, in skies or fields or kitchens, the molecules of the next pandemic wait."
At Fort Dix, a few other soldiers developed flu symptoms. When lab tests revealed that perhaps 500 on the base had caught the virus, officials at the Centers for Disease Control and Prevention faced a quandary. Was this the epidemic they'd feared, in which case they should call for mass inoculation? Or should they play the odds, hoping the disease would go away as often happens?
They had little information to go on: the outbreak in New Jersey, isolated cases from Minnesota to Mississippi, and a flu virus that looked suspiciously like the strain that killed half a million Americans in 1918. Estimates for the chance of an epidemic ranged from 2 to 35 percent. Indeed, there was much the scientists didn't know about influenza, period. Flu viruses hadn't been isolated until the 1930s, and they are moody, fast-mutating pathogens. "The speed with which [mutation] can happen," the Times wrote, "is mystifying." When a strain was identified, there was no telling how virulent it was. At the time, the best computer models were in Russia, where health authorities were doing a fair job predicting the spread of flu from city to city. But those models took advantage of the Soviets' penchant for tracking the movements of their citizens; in the US, where travel was open, it was impossible to create such a forecast.
So on March 24, 1976, President Gerald Ford convened a "blue-ribbon panel" of experts from the CDC, the Food and Drug Administration, and the National Institutes of Health. After a few hours, Ford emerged with Jonas Salk, the doctor behind the polio vaccine, by his side and announced a plan "to inoculate every man, woman, and child in the United States." It was to be the largest immunization drive in US history.
The inoculations began on October 1. As of mid-December, 20 percent of the population had received a shot. But by then, it had become clear that an epidemic was not, in fact, at hand. Lewis remained the only fatality - unless you count the 32 other people who died from the vaccine. Soon the program, the last major inoculation effort in the US, was canceled.
The 1976 swine flu scare has become enshrined as "the epidemic that never was," one of the great fiascoes of our national health care system. But in truth, government officials performed well enough. In just a few months, they went from isolating a strange new flu virus to delivering a vaccine to every American who wanted one. The problem was, all they had were blunt instruments: crude mathematical models, rough estimates of infection rates, and a vaccine that often packed too strong a punch. They were fairly well equipped to react to a worst-case scenario - they just weren't equipped to determine if one was imminent. Forced to guess, they chose "to risk money rather than lives," as Theodore Cooper, an assistant secretary of Health, Education, and Welfare, said at the time. "Better to be safe than sorry."
All of which raises a question: With the specter of an actual flu epidemic looming, are we any better equipped today? H5N1, the strain of avian influenza currently festering in Asia, has yet to pull off the mutation that would customize it for human-to-human transmission. But we know it's an especially lethal virus; most health experts expect it will make that jump soon enough. So the task for experts is to devise a plan that pinpoints how the virus might spread through the US population - a plan that draws more from the Soviet approach to disease forecasting than from the CDC's approach in 1976.
Thirty years on, a new science of epidemiology is at hand. It's based on sophisticated computer models that can get ahead of a virus and, in a sometimes dazzling demonstration of computer science, provide exacting prescriptions for health care policy rather than best guesses. It's an approach pioneered not by physicians but by physicists. And it owes a lot to the nuclear bomb.
In 1992, the US announced a moratorium on nuclear testing. The move meant that the Pentagon could not use underground test explosions to "certify" its arsenal of weapons - to establish that its nuclear stockpile would work when called upon and be safe until that day. That forced the guardians of the stockpile - the nuclear scientists at Los Alamos National Laboratory in New Mexico - to devise new ways to do their jobs. And that meant massive supercomputer simulations.
Computer simulations have a long history at Los Alamos. They were first deployed at the lab during the Manhattan Project in the 1940s to model nuclear explosions - among the first computer models ever attempted. Early on, they were a coarse tool and no substitute for physical experiments; the physicist Richard Feynman, who worked at the lab in its earliest years, called them "a disease" that would lead scientists into computerized daydreams tangential to the task at hand. But over the next 50 years they became an important instrument at Los Alamos, indispensable to the study of nuclear fusion and rocket propulsion. The 1992 moratorium simply codified that role, making computer models the only game in town. Since then, the lab has built one of the world's largest supercomputing facilities, amassing a total of 85 teraflops of processing power.
These tools are now being used in research that goes far beyond weapons work. Among the lab's 6,000 scientists, you'll find astrophysicists modeling white dwarf stars, chemical engineers replicating the effects of Florida hurricanes, geologists modeling the Earth's core, and biologists constructing microbial genomes. "All science is simulation these days," says Stephen Lee, the deputy division leader of computational sciences at Los Alamos.
The most promising application of sim science to real-world policy targets epidemic disease. A mile from the main compound at Los Alamos, in a grade school turned research lab, half a dozen physicists and computer scientists (and one mathematical biologist) are grinding out disease like pepper from a mill. This is EpiSims, an ambitious computer-simulation project that has released anthrax in Houston, sown the bubonic plague in Chicago, and, most recently, spread the flu in Los Angeles.
In 2000, EpiSims let loose smallpox in Portland, Oregon. Programmers started by creating a computer model of the city that's accurate down to the individual high school, traffic light, and citizen. In EpiSims, as in life, people go about their daily business. So on Tuesday morning, John Doe leaves his apartment in the Pearl District at 6:45, stops at Starbucks at 7:08, gets to his office parking lot at 7:45, greets his colleagues in the elevator at 7:49, and is at his desk checking email by 8:02. There are three-quarters of a million John Does in EpiSims' Portland, and just as many Jane Does, each with their own routines and encounters. This is the secret of EpiSims: its insatiable appetite for minutiae. EpiSims is the closest we've come to a huge city living inside a computer - or more specifically, several hundred computers. James Smith, who runs the EpiSims project at Los Alamos, describes his tools as "giant data fusion engines." Tapping the scientists' sophisticated computing algorithms and the lab's supercomputer clusters, it takes about 300 parallel processors and less than 24 hours to run a one-year simulation.
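EpiSims itself fuses census, transportation, and activity data across hundreds of processors, but the core mechanic is simple to sketch: agents follow fixed daily schedules, and transmission can occur whenever an infectious agent and a susceptible one occupy the same place in the same time slot. Here is a minimal toy version in Python; the location pool, transmission probability, and infectious period are all assumptions for illustration, not EpiSims parameters.

```python
import random

# Toy sketch of an EpiSims-style activity model (illustrative only).
# Each agent follows a fixed daily schedule of places; disease can pass
# whenever an infectious agent shares a place with a susceptible one
# during the same time slot.

random.seed(1)

N_AGENTS = 10_000
N_PLACES = 2_000          # pool of homes, offices, stores, etc. (assumed)
SLOTS_PER_DAY = 6         # one place per four-hour slot
P_TRANSMIT = 0.02         # per-contact transmission probability (assumed)
INFECTIOUS_DAYS = 5       # assumed infectious period

class Agent:
    def __init__(self):
        self.schedule = [random.randrange(N_PLACES) for _ in range(SLOTS_PER_DAY)]
        self.state = "S"  # S(usceptible), E(xposed), I(nfectious), R(ecovered)
        self.days_infectious = 0

agents = [Agent() for _ in range(N_AGENTS)]
agents[0].state = "I"     # seed a single infection

for day in range(120):
    for slot in range(SLOTS_PER_DAY):
        # Group agents by where they are in this time slot.
        rooms = {}
        for a in agents:
            rooms.setdefault(a.schedule[slot], []).append(a)
        # Each infectious occupant independently exposes susceptibles.
        for occupants in rooms.values():
            n_inf = sum(1 for a in occupants if a.state == "I")
            if n_inf == 0:
                continue
            p_escape = (1 - P_TRANSMIT) ** n_inf
            for a in occupants:
                if a.state == "S" and random.random() > p_escape:
                    a.state = "E"     # becomes infectious tomorrow
    # End-of-day disease progression.
    for a in agents:
        if a.state == "E":
            a.state = "I"
        elif a.state == "I":
            a.days_infectious += 1
            if a.days_infectious >= INFECTIOUS_DAYS:
                a.state = "R"
    if day % 10 == 0:
        print(day, sum(a.state == "I" for a in agents), "infectious")
```

Scaled up by five or six orders of magnitude, with real schedules in place of random ones, this is the kind of loop that keeps 300 processors busy for a day.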
Smallpox is an opportunistic virus, eager to take advantage of incidental encounters. It spreads through the respiratory system and incubates for as long as 10 days before the onset of fluish symptoms - coughing, fever, stomachache. Only days later do victims develop a pustular rash - the pox. It is vicious; in an untreated smallpox epidemic, 30 percent of those infected will die.
In the 2000 smallpox sim, the EpiSims team tracked the virus as it climbed toward its 30 percent fatality rate not all at once, but person by person: schoolteachers and shop clerks first, then office workers and hospital staff. As smallpox leapt from one unwitting victim to another, the EpiSims team watched disease ooze out of schools and shopping malls, erupt in downtown office buildings, and take root in neighborhoods. Within 90 days, Portland was teeming with smallpox. The epidemic was at hand.
But simulating the spread of disease is only half the job. EpiSims also had to evaluate how officials should respond. So, researchers rebooted the sim and Portland was once again alive and disease free. And this time the city had a plan of action. Four days after the first sign of virus, the authorities closed the schools, kicked off a mass vaccination program, and generally shut the city down. And with it, the disease: In 100 days, it had run its course. That sim was followed by another with a slightly different response strategy, and then another. EpiSims eventually ran through hundreds of smallpox models, sometimes vaccinating only exposed individuals, other times targeting the so-called superspreaders, individuals who transmit more than their share of disease, sometimes putting the entire city in quarantine. With every tweak, the disease would peter out or gain steam accordingly.
The EpiSims smallpox models led to a handful of contrarian conclusions about epidemic disease. The first: "The superspreader hypothesis isn't necessarily true," Smith says. This rule holds that in any population, the more social individuals - the hubs - are the principal conduits for spreading disease. Shatter the network by inoculating or removing these hubs, the theory goes, and you'll stand a better chance of knocking out the disease. But EpiSims has shown that we're all more popular than we might think. Even the most reclusive of us runs to Walgreens for toothpaste or drops by Boston Chicken for takeout. For a highly communicable disease like smallpox or influenza, these incidental interactions spread disease just as well as extended encounters. So chasing after the hubs can mean chasing after 80 percent of the population - a huge waste of time and energy. Better simply to inoculate the entire city.
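The network argument behind the superspreader rule can be stated in a few lines of code: on a stylized scale-free contact graph, vaccinating the highest-degree hubs should beat random vaccination. The sketch below reproduces that textbook baseline (graph, dose budget, and transmission probability all assumed); EpiSims' finding is that real urban contact data, thick with incidental encounters, behaves less like this idealized graph and more like a uniformly well-mixed crowd.

```python
import random
import networkx as nx

# Baseline "superspreader" experiment on a stylized scale-free contact
# network (parameters assumed for illustration, not EpiSims data):
# compare an outbreak after vaccinating the top-degree hubs against one
# after vaccinating the same number of people at random.

def outbreak_size(g, vaccinated, p=0.3, seed=0):
    """Size of a simple percolation-style outbreak from one seed case."""
    rng = random.Random(seed)
    start = next(n for n in g if n not in vaccinated)
    infected, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in g[node]:
            if nbr not in infected and nbr not in vaccinated and rng.random() < p:
                infected.add(nbr)
                frontier.append(nbr)
    return len(infected)

g = nx.barabasi_albert_graph(10_000, 3, seed=1)
budget = 500                                   # vaccine doses (assumed)

hubs = set(sorted(g, key=g.degree, reverse=True)[:budget])
randoms = set(random.Random(2).sample(list(g), budget))

print("hub-targeted doses :", outbreak_size(g, hubs))
print("random doses       :", outbreak_size(g, randoms))
```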
A second revelation: With a lethal pathogen like smallpox, response time is all. As the delay stretches from 4 to 7 to 10 days before officials move into action, EpiSims found that the outbreak becomes increasingly lethal. It turns out that, in the ticking moments after an epidemic strikes, when health officials act is more important than what they actually do. Start with inoculation. Or quarantine. Or school closings. It doesn't matter. What does matter is reducing the time between first outbreak and first response. At the same time, EpiSims warns against overreacting to a less-lethal disease - as in 1976, when standard health measures would have sufficed. (How to tell the difference? Run a simulation.)
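That conclusion, that response timing dominates response type, can be reproduced even in a toy compartmental model. The sketch below runs the same deterministic SIR outbreak three times, cutting the contact rate by a fixed amount once the response starts on day 4, 7, or 10; every parameter here is an assumption for illustration, not an EpiSims value.

```python
# Toy reproduction of the timing result: identical SIR outbreaks in
# which an intervention cuts the contact rate by 70% once it starts.
# Only the start day differs. All parameters are assumed.

def outbreak(response_day, pop=100_000, beta=0.6, gamma=0.25, cut=0.7):
    s, i, r = pop - 10.0, 10.0, 0.0
    for day in range(365):
        b = beta * (1 - cut) if day >= response_day else beta
        new_inf = b * i * s / pop          # new infections today
        new_rec = gamma * i                # recoveries today
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        if i < 1:
            break
    return round(r + i)                    # total ever infected

for delay in (4, 7, 10):
    print(f"response on day {delay:2d}: {outbreak(delay):7d} total infections")
```

Whatever the intervention actually is, modeling it as the same fixed cut in transmission makes the point: only the start day moves, and the total caseload climbs steeply with it.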
These sorts of precise, real-world conclusions are the payoff of the EpiSims approach. They are, to use Smith's term, "actionable" - worthy of consideration not just by scientists but by policymakers. Such relevance has made EpiSims a darling at Los Alamos and an integral component of a Department of Homeland Security project called Nisac (for National Infrastructure Simulation and Analysis Center), an effort to model a range of disasters and plot recovery strategies. Born in 2000 as a tiny $500,000 joint project between Los Alamos and Sandia, its sister lab in New Mexico, Nisac got a $20 million infusion after September 11, 2001, and a mandate to measure how the nation would fare after another deliberate attack, be it a dirty nuke or a bioweapon like smallpox.
More recently, as attention has turned to DHS's responsibility for acts of God as well as acts of terrorists, EpiSims has begun assessing the threat of avian influenza. With these new simulations, Smith's team is adding even more granularity. They're modeling the health care system down to the hospital bed, to see what happens if flu victims flood hospitals, fill the beds, and then spill back into their homes. They're taking into account slight behavior changes, so if people start wearing surgical masks, SARS-style, disease transmissions in the sim will fall off according to the masks' particulates-per-million filtration rate. The results go to the DHS and straight up the chain, helping inform the ultimate question that looms behind all of Nisac's work. "What do we tell the President?" says DV Rao, who directs the lab's Decision Applications division. At these highest levels, this sort of predictive science is an entirely new and unfamiliar decision-making tool. "I don't know what they make of it now," says Rao. "But in a year, hopefully they're going to say, 'All right. Tell us what we should be doing.'"
On a long flight to Maui in February 2003, Tim Germann, a physical chemist at Los Alamos, was reading Richard Preston's The Demon in the Freezer. The book, a vivid account of what's at stake if the last samples of smallpox escape US or Russian labs, struck Germann as real enough. But the scope of the danger was unquantified and apparently unquantifiable. Germann also had in his bag a copy of Science that included a piece coauthored by Emory University biostatistician Ira Longini. ("It was a long flight," Germann says.) Longini was investigating different vaccination strategies in the event of a smallpox outbreak. But his simulation sample totaled just 2,000 people - not large enough to extrapolate his conclusions to a larger population.
The reading made Germann wonder: Sure, a national outbreak of smallpox would be bad. But how bad, and how likely? And who would be at risk? Then Germann realized that he had a way of finding out. His day job involved computational materials science, specifically, how metal atoms - copper and iron - would react under stress or shock. In an epidemic, Germann thought, people might behave like the atoms in his simulations. "Atoms have short-range interactions," Germann explains. "Even though we're doing millions or billions of them, every one just moves in its local neighborhood. People work in the same way." So, just as a cooling metal slows down atoms, a quarantine slows down people. By bolstering these physics models with experimental data on things like how viruses circulate from children to adults, he could conceivably model the entire US population, or even the entire global population.
Germann cranked up the simulation, adjusted the software, and added the parameters Longini had used to model his smallpox outbreak. That marked the birth of what would be called EpiCast - a combination of epidemic and forecast. "It's basically still the same code," Germann says. "We use it one day for running atoms and the next for people." As it turned out, the model showed that smallpox may not be the cataclysm many imagine. Because of the long lag time between successive generations of an outbreak (two weeks or more), and the tell-tale symptoms, smallpox would be quickly identified. That, plus the stockpiling, post-9/11, of large quantities of vaccine, means that "it should be possible to contain" an outbreak after the first few waves, Germann says.
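The claim that the same code runs atoms one day and people the next is less strange than it sounds: both reduce to a loop over entities that interact only with their spatial neighbors, with the pairwise rule swapped out. A minimal sketch of that shared kernel follows; the grid scheme and every number in it are assumptions, not the Los Alamos implementation.

```python
import random
from collections import defaultdict

# Sketch of the shared short-range-interaction kernel Germann describes:
# entities on a grid interact only with others in their own or adjacent
# cells. The pairwise rule is pluggable -- a force law for atoms one
# day, a transmission rule for people the next. Illustrative only.

def neighbor_pairs(positions, cell=1.0):
    """Yield index pairs of entities within one grid cell of each other."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(idx)
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j:       # emit each unordered pair once
                            yield i, j

# "Running people": infection hops between nearby individuals.
rng = random.Random(0)
positions = [(rng.uniform(0, 50), rng.uniform(0, 50)) for _ in range(5_000)]
infected = {0}
for step in range(30):
    for i, j in neighbor_pairs(positions):
        # Transmit when exactly one of the pair is infected.
        if (i in infected) != (j in infected) and rng.random() < 0.05:
            infected.update((i, j))
print(f"infected after 30 steps: {len(infected)}")
```

Swap the infection test for a force calculation and the same neighbor loop is a crude molecular-dynamics step, which is the reuse Germann is describing.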
The flu, by contrast, has a very short generation time (days instead of weeks) and generic symptoms. What's more, it's nearly impossible to stockpile a vaccine because the virus is so quick to mutate. Add the fact that people can be infected and contagious without knowing it, and you've got one vexing virus. So Germann called Longini and described how his molecular models could be adapted to epidemics. "I thought it was a bit preposterous," recalls Longini, who has been modeling epidemics for 30 years, most recently with a National Institutes of Health research program investigating the risk of pandemic influenza. "When Tim told me he could model the whole country or the whole planet, 6 billion people, it sounded very impressive. But I wondered if it was really possible."
So Germann set to work creating flu scenarios to augment Longini's NIH work. With nearly 300 million agents representing every man, woman, and child in the US, EpiCast doesn't bother to track minute-by-minute behaviors as EpiSims does. Instead, Germann puts his computing power to work detailing how slightly different parameters - various antivirals or different isolation policies, for instance - have slightly different national repercussions. So far, the project has run about 200 simulations of an avian flu epidemic, models that have helped Longini's group reach provocative conclusions that fall along two lines: how a nationwide outbreak might take hold, and what policies would best combat it.
EpiCast reveals that, in contrast with flu epidemics of decades past, an outbreak today won't progress "like a wave across the country," spreading from town to town and state to state. Instead, no matter where it erupts - Seattle, Chicago, Miami - it will swiftly blanket the nation. "It starts in Chicago one day," Germann says, "and a couple of weeks later it's everywhere at once." Thank the airlines. Even though disease has piggybacked on air travel for decades, we generally had only isolated outbreaks of low-transmission viruses - like when SARS leapt from Hong Kong to Canada in 2003 but failed to spread beyond Toronto. In an epidemic of a highly communicable disease, the airlines' hub network would effectively seed every metropolitan area in the country within a month or two - and then reseed them, repeatedly.
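The airline effect drops out of even a crude metapopulation model: give each city its own SIR dynamics, let a small fraction of infectious people fly between any pair of cities each day, and first cases appear everywhere within weeks of each other rather than in a slow geographic wave. The sketch below is illustrative only; the city list, travel fraction, and disease parameters are all assumed, not EpiCast values.

```python
# Crude hub-coupled metapopulation sketch: local SIR dynamics per city,
# plus a small daily exchange of infectious travelers between every
# pair of cities. All numbers are assumed for illustration.
# (Travel by susceptibles is ignored for brevity.)

CITIES = ["Chicago", "Seattle", "Miami", "Denver", "Boston", "Phoenix"]
POP = 1_000_000             # identical city sizes, for simplicity
BETA, GAMMA = 0.5, 0.25     # daily transmission and recovery rates
TRAVEL = 0.002              # fraction of infectious people flying per day

S = {c: float(POP) for c in CITIES}
I = {c: 0.0 for c in CITIES}
I["Chicago"], S["Chicago"] = 100.0, POP - 100.0
first_case = {}

for day in range(1, 121):
    # Local dynamics within each city.
    for c in CITIES:
        new_inf = BETA * I[c] * S[c] / POP
        S[c] -= new_inf
        I[c] += new_inf - GAMMA * I[c]
    # Air travel: each city's outbound infectious split among the rest.
    out = {c: I[c] * TRAVEL for c in CITIES}
    for c in CITIES:
        inbound = sum(out[o] for o in CITIES if o != c) / (len(CITIES) - 1)
        I[c] += inbound - out[c]
    for c in CITIES:
        if c not in first_case and I[c] >= 1:
            first_case[c] = day

for c, d in sorted(first_case.items(), key=lambda kv: kv[1]):
    print(f"{c:8s} first case on day {d}")
```

In a run like this the gap between the first and last cities to record a case is a matter of days, which mirrors the "everywhere at once" pattern Germann describes.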
EpiCast showed that local intervention measures can have some impact: Close the schools, enforce a quarantine, and the disease will slow down. That buys the federal government time to develop and mass-produce a vaccine. But Germann quickly adds a caveat: Acting locally may not be enough. In a worst-case outbreak, without a viable vaccine, "the disease will climb, and eventually go exponential. And once it's on the exponential curve, it's very difficult to contain." Cue Richard Preston.
In November, the Department of Health and Human Services released its pandemic influenza plan. The report offers a thorough and frank assessment of the havoc a full-fledged pandemic would wreak. The nation, the report says, "will be severely taxed, if not overwhelmed." Disease will break out repeatedly, for as long as a year. Hospitals will run out of beds and vaccines. Doctors and nurses will be overworked to the point of exhaustion. Mass fatalities will overwhelm mortuaries and morgues with bodies. Before it has exhausted itself, the report estimates, the disease could spread to as many as 90 million Americans, hospitalizing 10 million and killing almost 2 million.
The report also sketches out how the federal government should respond in such a scenario. In effect, officials face what Bruce Gellin, director of HHS's National Vaccine Program Office, describes as a reverse Hurricane Katrina: Rather than an all-out response focused on one particular region, a flu epidemic would force the government to ration its resources to serve the entire nation. How best to do that - tactically, quickly, and effectively - is now the focus of EpiCast's work.
After the HHS plan was released, Germann and Longini were called to Washington for a strategy session with officials from the NIH, DHS, and the White House. Plenty of Los Alamos scientists, starting with Oppenheimer and Feynman, have made the trek to the corridors of the Capitol. But those trips were concerned with fighting wars, not disease. During the HHS meeting, the officials talked about how to apply EpiCast to the problem at hand. Germann explained the power of the tool. If HHS wants to know where to stockpile antivirals, EpiCast can pinpoint optimal locations. If the government wants to slow down the spread of disease, EpiCast can suggest whether to screen airline passengers by body temperature - and determine just how high a fever is too high to fly. If the first outbreak is in, say, Los Angeles, "do you send doctors from around the country to the West Coast," Germann says, "or keep them where they are because it'll be everywhere in a few weeks?"
Germann assured the group he could help. Then he returned to Los Alamos. Every question means a new sim, and every sim helps answer questions that are otherwise unanswerable.
Thomas Goetz (thomas@wiredmag.com) is Wired's deputy editor.
no subject
Date: 2006-01-27 03:18 pm (UTC)
although cantonese is full of slang (the WAAAAHHH! made me laugh out loud cuz it's so true) those of us who speak cantonese learn to read and write in standard chinese, which is how mandarin is spoken, so we can usually carry on basic conversation in mandarin, although our tones may be a bit off.
but because it's so colloquial, it's definitely more difficult for someone who speaks mandarin to learn cantonese. which probably also contributes to the "dying breed" comment (although i still hear it all the time during dim sum!).
jangrl
no subject
Date: 2006-01-28 04:18 am (UTC)
But - regardless of whether or not the map is authentic, the reaction to it and any other suggestion that America was explored/settled from the East is very telling of deeper prejudices.