Sam Anderson takes to the pages of the New York Times to praise Roland Barthes, “the man who essentially created cultural criticism,” from the systematic analysis of novelistic structure to the TV recap:
Instead of constructing multivolume monuments of systematic thought, Barthes wrote short books built out of fragments. He was less interested in traditional coherence than in what he called jouissance: joy, surprise, adventure, pleasure — tantric orgasms of critical insight rolling from fragment to fragment…
His critical metabolism ran unusually high: he would flit from subject to subject, defining new fields of interest (semiology, narratology) only to abandon them and leave others to do the busywork. He treated canonical French works with such unorthodox flair it drove conservative professors crazy…
In his inaugural lecture at the Collège de France — a sort of mission statement for the most prestigious academic post in the country — Barthes announced that he aspired above all to “forget” and to “unlearn” and proposed, as a kind of motto, “no power, a little knowledge, a little wisdom and as much flavor as possible.”
I have a hard time giving up knowledge so easily — and really, Barthes did too. (I think it’s mostly the pretense to knowledge, the use of knowledge as a cudgel, that he saw as the problem.)
The part I probably love best and most fully endorse is the section on what a critic is supposed to do:
“Mythologies” is often an angry book, and what angered Barthes more than anything was “common sense,” which he identified as the philosophy of the bourgeoisie, a mode of thought that systematically pretends that complex things are simple, that puzzling things are obvious, that local things are universal — in short, that cultural fantasies shaped by all the dirty contingencies of power and money and history are in fact just the natural order of the universe. The critic’s job, in Barthes’s view, was not to revel in these common-sensical myths but to expose them as fraudulent. The critic had to side with history, not with culture. And history, Barthes insisted, “is not a good bourgeois.”
The pairing of these things, the genuine jouissance and the relentless critical awareness, the ruthless crusade against the conventionally obvious, is what makes it all work.
Never just a cheerleader. Never just a killjoy. Something beyond either. And listing always in favor of flavor.
PS: Mythologies was just published in a terrific new edition/translation which is like twice as long as the bowdlerized version we’ve had in English for forty years. That’s the occasion for the essay.
My books are better thought about than read. They’re insanely dull and unreadable; I mean, do you really want to sit down and read a year’s worth of weather reports or a transcription of the 1010 WINS traffic reports “on the ones” (every ten minutes) over the course of a twenty-four-hour period? I don’t. But they’re wonderful to talk about and think about, to dip in and out of, to hold, to have on your shelf. In fact, I say that I don’t have a readership, I have a thinkership. I guess this is why what I do is called “conceptual writing.” The idea is much more important than the product…
My favorite books on my shelf are the ones that I can’t read, like Finnegans Wake, The Making of Americans, Boswell’s Life of Johnson, or The Arcades Project. I love the idea that these books exist. I love their size and scope; I adore their ambition; I love to pick them up, open them at random, and always be surprised; I love the fact that I will never know them. They’ll never go out of style; they’re timeless; they’re always new to me. I wanted to write books just like these. I think you hit it just right when you spoke of reference books. I never wanted my books to be mistaken for poetry or fiction books; I wanted to write reference books. But instead of referring to something, they refer to nothing. I think of them as ’pataphysical reference books.
For more on pataphysics (which I don’t think really needs that apostrophe), aka “the science of imaginary solutions,” read this.
I also found this fascinating, especially coming from the man who wrote “If It Doesn’t Exist on the Internet, It Doesn’t Exist” (back in 2005):
I’ve made a move in the Luddite direction recently by trying to remove UbuWeb from Google. I want the site to be more underground, more word-of-mouth. The only way you’ll be able to find it is if someone links to it or tells you about it, just like music used to be before MTV. But you’ll still find UbuWeb on all the bad search engines that no one uses: AltaVista, Dogpile, and Yahoo! Again, everyone wants to rush toward the center: they even write books about how to get your Google ranking higher. We’re headed in the opposite direction. We want to get off Google.
But actually, even if you go back to that 2005 essay, it has this gorgeous coda, under the subhed “The New Radicalism”:
In concluding, I’m going to drop a real secret on you. Used to be that if you wanted to be subversive and radical, you’d publish on the web, bypassing all those arcane publishing structures at no cost. Everyone would know about your work at lightning speed; you’d be established and garner credibility in a flash, with an adoring worldwide readership.
Shhhh… the new radicalism is paper. Right. Publish it on a printed page and no one will ever know about it. It’s the perfect vehicle for terrorists, plagiarists, and for subversive thoughts in general. In closing, if you don’t want it to exist — and there are many reasons to want to keep things private — keep it off the web.
Something to think about, when you’re too busy not reading.
It’s harder than you might think to use Google Ngrams to actually chart trends in cultural history — or do “culturomics,” as the Science article authors would have it — because of well-known problems with the data set.
Here, Matthew Battles tries (on more or less a lark) to see some history play out, Bethany Nowviskie spots a trend (maybe true, maybe false), and Sarah Werner flags the problem.
Aw, man — that fhit Seriously Pucks.
You know what would actually be pretty cool, though? If it were easier to go one level deeper and use Ngrams to do Google Instant Regression. You could graph trends against well-known noise (other s-words misread as f) AND other trends — or instantly find similar graphs.
Say the curve of the graph for the f-word in the 1860s turns out to be similar to those for other words and phrases — like “ass”* or “confederacy”* — then you could correlate language with other language, individual words with stock phrases, and even (using language as an index/proxy) extralinguistic cultural trends or historical events.
Single-variable analysis just doesn’t tell you very much, even on a data set as problematic as print/language. You need systematic data, and better comparison and control capacity between variables, before you can start to do real science.
(* Ignore for the purposes of this example ascribing contemporary historical meanings to these two ambiguous terms.)
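To make that “instant regression” idea concrete, here’s a minimal sketch of the manual version in Python: take two yearly frequency series over the same range, restrict them to the 1860s, and compute a Pearson correlation. The raw Google Books ngram counts are downloadable, but the file name and column names below (“ngrams_1850_1880.csv”, “year”, “f-word”, “ass”) are invented for illustration; this is just what the comparison step would look like, not anything the Ngram Viewer actually offers.

```python
import csv
from math import sqrt

def load_series(path, column):
    """Read one word's yearly frequencies from a CSV.
    Hypothetical layout: a 'year' column plus one column per word."""
    years, freqs = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            years.append(int(row["year"]))
            freqs.append(float(row[column]))
    return years, freqs

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical export; compare the two series over the 1860s only.
years, f_word = load_series("ngrams_1850_1880.csv", "f-word")
_, donkey_word = load_series("ngrams_1850_1880.csv", "ass")

decade = [i for i, y in enumerate(years) if 1860 <= y <= 1869]
r = pearson([f_word[i] for i in decade], [donkey_word[i] for i in decade])
print(f"Pearson r for the 1860s: {r:.3f}")
```

Note that this only captures linear co-movement between two curves; a version that took the data set’s problems seriously would first regress both series against a known noise proxy (like the long-s/f misreads) before claiming any cultural signal.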
Listen to it!
I heard King’s “I Have a Dream” on the radio this afternoon. Despite the grandeur of the visuals of the March on Washington, and the power of the text, I think that radio is the best way to experience it. I am amazed, as a writer, teacher, poet, and speaker, at the range of King’s elocutionary instrument.
He doesn’t just use every sonorous rhetorical tool in the book. He makes words rhyme which shouldn’t. He finds transitory consonants and bends them to fit his alliterative schemes. He has the most versatile spondaic foot I’ve ever heard, so much so it could pass for iambic. (Try to find a genuinely unstressed syllable — or unstressed thought — in the way King says “We Will Not Be Satisfied.”)
And he matches and varies his pitch to highlight his parallelisms of matter and mind, in his voice and in the air; a small, thickly built man, speaking from the roots of the trees, from the center of the earth, knowing that the extension of his own gravity stretches like a column from the molten core to the orbit of the moon. He is a single still point with the granted power to bend straight the crooked lines of history.
The Middle East is anxious about what’s perceived as a decline in Arabic:
[C]alls to forestall the language’s demise are accompanied by cautionary tales about parents who encourage their children to learn other “more useful” languages like English and French, only to find that they can scarcely recite the Arabic alphabet when they get to university. Meanwhile, teachers across the region warn about the rise of “Facebook Arabic,” a transliterated form of the language based on the Latin script. Exemplifying their concerns are the oratorical fumbles of some of the region’s younger political leaders like Saad Hariri, the prime minister of Lebanon, whose shambling inaugural address to the Lebanese parliament provoked much local tittering. Not everyone is amused: Fi’l Amr, a language-advocacy group, has launched a campaign to raise awareness about Arabic’s critical condition by staging mock crime scenes around Beirut depicting “murdered” Arabic letters, surrounded by yellow police tape that reads: “Don’t kill your language.”
Really, though, it’s not actually Arabic that’s suffering, but a particular grapholect, fusha, the Modern Standard Arabic that closely resembles the classical Arabic of the Koran. And fusha has always been more of an imagined commonality binding together the Arab world than a reality.
In a very basic sense, there is no such thing as Arabic; or, at least, there is no single language that all Arabs speak, read, write, and understand. Instead, Arabic is, like English and many other languages, a constellation of various national dialects, regional vernaculars, and social registers bearing different degrees of resemblance to one another. What sets it apart from a language like English is its diglossic nature, whereby the language of literature and formal address (newscasts, political speeches, religious sermons, and so forth) is markedly different, on multiple structural levels, from the language of everyday speech.
You can overstate this, but it’s a little bit like 19th-century Western Europeans watching literacy numbers boom while wringing their hands over the fate of Latin.
As recently as 1970, three out of four Arabs over the age of 15 were illiterate, according to Unesco. Two decades earlier, illiteracy among women was close to 90 per cent. Even in a country like contemporary Egypt – which has long prided itself, as the old saying goes, on reading the books that Iraq writes and Lebanon publishes – less than two-thirds of the population can read. To speak, therefore, of helping restore Arabic to its former glory, or of helping it to “reemerge as a dynamic and vibrant language” as the government of the UAE has recently committed itself to do, is to ignore the reality that Arabic – both in its classical and modern standard incarnation – has never had as many users as it does today. Even taking into consideration the sway that English holds in the private and educational sectors of various countries in the region, or the important position that French occupies in France’s former colonies, it is impossible to pinpoint another moment in the history of the Arab world when so many people could communicate (with varying degrees of ability) in fusha.
This article I’m quoting was written by my friend Elias Muhanna, who blogs about Lebanese politics as Qifa Nabki, and published in The National, then picked up by The Economist. Whoo-hoo! Comp Lit PhDs FTW!
[Michael] Steele is representative of a fascinating but little noted development on the right: the rise of buckrakers who are exploiting the party’s anarchic confusion and divisions to cash in for their own private gain. In this cause, Steele is emulating no one if not Sarah Palin, whose hunger for celebrity and money outstrips even his own.
I think it was either Daniel Larison or Andrew Sullivan last year who noted that conservative pundits’ power and profit tends to go up whenever the Republican Party does worse. If you control the White House and/or Congress, you don’t really need a radio host or non-office-holding former candidate as a spokesman. But buckraking isn’t limited to the right — as Ezra Klein, Glenn Greenwald, and Wonkette point out, it’s kind of hard to see Harold Ford’s Senate run in NY as anything but an attempt at self-promotion. (Ford probably won’t win the primary nomination, but his place as a guy who gets interviewed on cable news is safe for years.)
In fact, “buckraker” is probably best reserved for pseudo-journalists, i.e., pundits who act as political hacks — but for their own benefit, over and above that of their media network or political party. That (as Word Wizard points out) is closer to the origin of the term, both as a variation on “muckraker” and its (probable) coinage by Jacob Weisberg in The New Republic, all the way back in 1986, in an essay called “The buckrakers: Washington journalism enters a new era”:
[M]ark February 1985 as the start of the next era. That was when Patrick J. Buchanan went to work at the White House and his financial disclosure statement revealed, to widespread astonishment and envy, that he had made $400,000 as a journalist in 1984. This included $60,000 for his syndicated column, $25,000 for his weekly appearance on ‘The McLaughlin Group,’ $94,000 for Cable News Network’s ‘Crossfire,’ $81,000 for a radio show, and more than $135,000 for 37 speeches. Welcome to the era of the buckraker.
But Buchanan was always a marginal figure, a good interview but someone on the outside of the Republican party and national politics. What seems significant is that Michael Steele has essentially refashioned the national chairmanship of his party — which (national political figures like Howard Dean aside) used to be a pretty low-profile, behind-the-scenes support job — into a me-out-front, full-time media position. Steele has no incentive to promote the party at the expense of himself.
Serving as party chair is always a short-term gig. As Buchanan and now Palin and Ford have shown, buckraking is forever.
We sampled six cheeses, drank wine and champagne, and learned that cheese was invented in Mesopotamia around 3000 B.C., when travelers carrying milk around in the sun in dried-out sheep stomachs noticed that it had begun to curdle and become delicious (this story sounded suspiciously Wiki to me, and indeed here it is, given as one possible explanation).
“This story sounded suspiciously wiki.” The obvious colloquial analogue would be “the story seemed fishy.” But note the distinction. A “fishy” story, like a “fish story,” is a farfetched story that is probably a lie or exaggeration that in some way redounds to the teller’s benefit. A “wiki” story, on the other hand, is a story, perhaps farfetched, that is probably backed up by no authority other than a Wikipedia article, or perhaps just a random web site. The only advantage it yields to the user is that one appears knowledgeable while having done only the absolute minimum amount of research.
While a fishy story is pseudo-reportage, a wiki story is usually either pseudo-scientific or pseudo-historical. Otherwise, wiki-ness is characterized by unverifiable details, back-of-the-envelope calculations, and/or conclusions that seem wildly incommensurate with the so-called facts presented.
Have folks heard this phrase in the wild? Is it unfair to Wikipedia, or to those who use it as a research source? Do we already have a better word to describe this phenomenon? (And: this phenomenon is all too real, and deserves a name, doesn’t it?)
Today’s a day for thinking about brains, plasticity, and renewal. At least in the pages of the New York Times.
First up is Barbara Strauch, who writes on new neuroscientific research into middle-aged brains:
Over the past several years, scientists have looked deeper into how brains age and confirmed that they continue to develop through and beyond middle age.
Many long-held views, including the one that 40 percent of brain cells are lost, have been overturned. What is stuffed into your head may not have vanished but has simply been squirreled away in the folds of your neurons.
One explanation for how this occurs comes from Deborah M. Burke, a professor of psychology at Pomona College in California. Dr. Burke has done research on “tots,” those tip-of-the-tongue times when you know something but can’t quite call it to mind. Dr. Burke’s research shows that such incidents increase in part because neural connections, which receive, process and transmit information, can weaken with disuse or age.
But she also finds that if you are primed with sounds that are close to those you’re trying to remember — say someone talks about cherry pits as you try to recall Brad Pitt’s name — suddenly the lost name will pop into mind. The similarity in sounds can jump-start a limp brain connection. (It also sometimes works to silently run through the alphabet until landing on the first letter of the wayward word.)
That’s a wonderful technique, all the more so because it sounds like something Cicero might have invented.
We are born with a highly structured brain. But those brains are also transformed by our experiences, especially our early experiences. More than any other animal, we humans constantly reshape our environment. We also have an exceptionally long childhood and especially plastic young brains. Each new generation of children grows up in the new environment its parents have created, and each generation of brains becomes wired in a different way. The human mind can change radically in just a few generations.
These changes are especially vivid for 21st-century readers. At this very moment, if you are under 30, you are much more likely to be moving your eyes across a screen than a page. And you may be simultaneously clicking a hyperlink to the last “Colbert Report,” I.M.-ing with friends and Skyping with your sweetheart.
We are seeing a new generation of plastic baby brains reshaped by the new digital environment. Boomer hippies listened to Pink Floyd as they struggled to create interactive computer graphics. Their Generation Y children grew up with those graphics as second nature, as much a part of their early experience as language or print. There is every reason to think that their brains will be as strikingly different as the reading brain is from the illiterate one.
Should this inspire grief, or hope? Socrates feared that reading would undermine interactive dialogue. And, of course, he was right, reading is different from talking. The ancient media of speech and song and theater were radically reshaped by writing, though they were never entirely supplanted, a comfort perhaps to those of us who still thrill to the smell of a library.
But the dance through time between old brains and new ones, parents and children, tradition and innovation, is itself a deep part of human nature, perhaps the deepest part. It has its tragic side. Orpheus watched the beloved dead slide irretrievably into the past. We parents have to watch our children glide irretrievably into a future we can never reach ourselves. But, surely, in the end, the story of the reading, learning, hyperlinking, endlessly rewiring brain is more hopeful than sad.
Put these two together, and you get a picture that’s even more hopeful. Our brains aren’t just plastic over the span of human evolution or historical epochs, but over individual lives. It might be easier and feel more natural for children, whose brains seem to us to be nothing but plasticity. But we don’t just have a long childhood — to a certain extent, our childhood never ends.
Human beings are among the only species that evolved to thrive in any kind of climate and terrain on the planet. (Seriously; underwater is the only real exception.) Compared to that, summoning the plasticity required to engage with any new kind of media is a piece of cake.