
August 16, 2009

Classifying Very Small Objects

Tim says,

I love this SO VERY MUCH.


Comments (0) | Permasnark | Posted: 7:54 AM

August 14, 2009


The Future of Analphabetic Writing

A link, and then a long digression (or several).

Andrew Robinson at the Oxford University Press blog writes about attempts at universal languages:

In the mid-1970s, with increasing international travel, the American Institute of Graphic Arts cooperated with the United States Department of Transportation to design a set of symbols for airports and other travel facilities that would be clear both to travellers in a hurry and those without a command of English. They invented 34 iconic symbols. The design committee made a significant observation: “We are convinced that the effectiveness of symbols is strictly limited. They are most effective when they represent a service or concession that can be represented by an object, such as a bus or bar glass. They are much less effective when used to represent a process or activity, such as Ticket Purchase…"...

Many scholars of writing today have an increasing respect for the intelligence behind ancient scripts. Down with the monolithic ‘triumph of the alphabet’, they say, and up with Chinese characters, Egyptian hieroglyphs, and Mayan glyphs, with their hybrid mixtures of pictographic, logographic and phonetic signs. Their conviction has in turn nurtured a new awareness of writing systems as being enmeshed within societies, rather than viewing them somewhat aridly as different kinds of technical solution to the problem of efficient visual representation of a particular language.

It's weird how the alphabet, as a sort of half-technology, lies in between the fully functional/universal/superficial pictographic language and the deep cultural contextualism of ideogrammic writing. It's a hybrid, a language of traders bumping against poets, where letters that used to name things (aleph = ox, bet = house in Phoenician) morph into pure sound (alpha, beta = meaningless in Greek).


When I was a kid, I was fascinated by Morse code. My brothers and I had a set of walkie-talkies that included a code on the handsets with the dots and dashes for each letter of the alphabet, and we tried to beep and boop out messages to each other, never getting much farther than "S.O.S." For me, it was the beginning of the digital dream - reducing information to a single variation between two elements.

But my ear was off. I couldn't turn long and short sounds into letters in my brain. Later, I did a science report on the telegraph, and was dumbfounded to learn that skilled telegraph operators COULD translate the code on the fly by ear - that it was faster for them than reading printouts of long and short lines (only useful, really, for receiving messages without an operator at the terminal).

Then, I saw The Hunt For Red October, watching Sean Connery and Scott Glenn trade messages back and forth optically, reading flashes of light through periscopes. That was my first real inkling that Morse code could be something that was read in real time, like watching a stock ticker that only flashed one letter - less than one letter - at any given moment.

To this day, I still don't know what to make of Morse code. It's a digital code that's based on the alphabet, but seems to go way beyond the alphabet. And what are you doing when you're interpreting Morse code on the fly, whether by eye or ear? Are you reading? Speaking?
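That "single variation between two elements" is easy to make concrete. Here's a toy sketch in Python (the letter table is abridged and the function name is my own invention, just for illustration):

```python
# A toy Morse encoder: every letter reduces to a pattern of just two
# elements, short (".") and long ("-"). Table abridged to a few letters;
# the real code covers A-Z, digits, and punctuation.
MORSE = {
    "E": ".", "T": "-", "S": "...", "O": "---",
    "A": ".-", "N": "-.", "I": "..", "M": "--",
}

def encode(text):
    """Spell out a message letter by letter, space-separated."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode("SOS"))  # -> ... --- ...
```

The telegraph operators' trick, of course, was running this table in the other direction, by ear, faster than anyone could read it off paper.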


Sign language poses some of the same problems. Some signs are what we might call iconic or pictographic - they look like or have some connection to the things they refer to. But a lot of them, in American Sign Language at least, depend on writing or spelling out words, sometimes just the first letters of words.

The first principle of writing seems to be that it is language made visual and visible; the first principle of speech, that it is language made aural and oral. But there are visual forms of language that don't bear much of a resemblance to writing, and auditory communications (like listening to Morse code) that are essentially dependent on writing.


The nineteenth century was all about reducing the quality of information - the richness of its readability - for quantitative transmission. In addition to the telegraph, there's also shorthand, well documented by Leah Price in this essay in the LRB. It's still pretty amazing that people actually read whole novels in shorthand:

Pen pals in Africa and Australia found one another through the classified pages of shorthand magazines that juxtaposed new material with reprints of published fiction: Robinson Crusoe, Around the World in Eighty Days, all the Sherlock Holmes stories and even an unabridged run of the Strand Magazine. The depositories of copyright libraries are littered with Victorian shorthand editions of A Christmas Carol, Aesop’s fables, English-Welsh and English-Hindi dictionaries, the Old and New Testaments, and biographies of Calvin and Galileo. Pitman’s Shorthand Weekly (later called the Phonetic Journal) featured ‘serials and short stories by well-known authors; miscellaneous articles; illustrated jokes and anecdotes; and prize competitions’. On 17 August 1901, it offered a prize for the best biography of Isaac Pitman by a colonial subscriber. Submissions, naturally, were accepted only in shorthand.

More important still might be the turn Price traces (which is the turn EVERYONE finds in the history of office and business culture in this period) from the not-quite-but-nearly-aristocratic culture of "men of letters" to the technical world, where women operated the machines of language more often than men:

You can still read every syllable from the first International Shorthand Congress and Jubilee of Phonography, thanks to transcripts produced by ‘an army of phonographers . . . not at all concerned with the economic rewards of shorthand, important as these are, but only with the service – personal, social – even professional – which one Pitmanite can render another in any part of the world.’ One delegate described shorthand as a ‘bond of brotherhood’. Like the open-source movement a century and a half later, Pitmanism was idealistic, distributed and male.

And then everything changed. The American Civil War and, later, the First World War removed men from the workforce; the commercialisation of the typewriter and the invention of the phonograph upped the demand for white-collar labour. Women’s delicate hands began to look like the right tools for turning speech into shorthand, or manuscript into typescript, or one copy into many. By 1901, the shorthand transcript of a Midlands stenographers’ club records a speaker arguing that ‘it seemed degrading for a strong, healthy man to be occupied all day long in using the pen upon what was little more than copying words.’ Advertisements for ‘wrist exercisers’ seemed to hint that a man who hunched over a desk all day would not stay strong and healthy for long.

As stenography fell into the hands of girls and hypochondriacs, its ethos changed from identitarian to utilitarian, from voluntaristic to vocational. By 1901, the Phonetic Journal was complaining that ‘the great majority of young girls study simply for the proficiency which will enable them to enter business.’ Isaac Pitman outlived the ‘brotherhood of the pen’. The metaphor was unlucky: while he continued to tinker with the system his brother Benn realised that ordinary users were tired of endless refinements, and froze the US version of the system at its 1852 release. By the time of Isaac’s death, there was a new threat from Gregg’s 1888 system, which cornered the American market by billing itself as user-friendly, and more specifically as a friend to the ladies. Gregg was to Pitman as Windows is to Linux, or Pilates to yoga: a technique stripped of the ideological baggage that had originally impelled its spread.

Shorthand on its face is an intermediate recording technology between (spoken) voice and (alphabetic) text; but any language can take on a life of its own. In fact, it's hard to know where the line between speaking ends and writing begins.


This gets confusing in contemporary software, too. I always get Google Talk confused with Google Voice, and not just because everyone I know calls Google Talk by the old name of Google Chat, or "gchat." What am I going to do, "voice" someone? There's also the difference between "voice recognition" and "speech recognition."

A friend of mine pointed out that when we're speaking, we almost always use the word "talk" to refer to speech; it's only when we're writing that we call it speech. Maybe the important distinction isn't whether language is auditory or visual, but whether it's recorded or ephemeral. Your voice mail, your speeches, your mail - those are records; your "talking" isn't, even if you keep a transcript.


Talking happens in real time, and to talk, you need a voice, even if it's not produced in the throat. Roger Ebert recently discussed his search for a way to communicate in real time with friends, family, and business partners:

Soon after my second surgery, when it became apparent I wouldn't be able to speak, I of course started writing notes. This got the message across, but was too time-consuming for communications of any length. And notes were unbearably frustrating for a facile speaker like me, accustomed to dancing with the flow of the conversation. There is a point when a zinger is perfectly timed, and a point when it is pointless.

There is a ground rule in the treatment of those who cannot speak; their written notes must take precedence. This was not happening. Something would be said, I would begin writing a comment, and someone else would speak. Then someone else would speak. I would finish my note, and hand it to a person who was speaking. They would hold it, finish, and be responded to by someone else. When my note was finally read, I would hear, What's this about? Or I don't know what that means. I would point to the right (the past), to suggest I was responding to something said earlier. They wouldn't know what that meant, either.

God knows my wife tried to help out, but people...are people. Who knows how patient I would be? One on one, conversations-by-note went all right. Business meetings were a torture. I am a quick and I daresay witty speaker. Now I came across as the village idiot. I sensed confusion, impatience and condescension. I ended up having conversations with myself, just sitting there.


Some of the most moving writings I've ever read are the "conversation slips" Franz Kafka wrote at the end of his life, when he was dying of tuberculosis and could no longer eat, drink, or speak. One recurring theme: he continually asks those around him to water the flowers in the room, often while also joking self-deprecatingly about his own inability to drink:

That cannot be, that a dying man drinks.

Do you have a moment? Then lightly spray the peonies.

Mineral water - once for fun I could

Fear again and again.

A bird was in the room.

Put your hand on my forehead for a moment to give me strength.

Ebert finally opted for the canned OS X voice on his laptop -- that solved the near-real-time speech problem -- but he's still searching for a solution that will give him back the full range of his instrument, in all of its analphabetic tonalities -- and that's what a voice is, ultimately: an instrument to play, even if it's played with the alphabetic keys of the keyboard.


At the other end of the spectrum, let's consider language with no voice at all -- bar codes. Matthew Battles has a couple of good posts on bar codes at "The Urge of the Letter" (one, two):

[T]he barcode is a printed thing, meant for “reading” not by human minds, but by computers. Nonetheless, I’ve often wondered if a time will come when barcodes are legible, when we will read them as easily as any other typeface. In a sense that time has arrived: the iPhone and other mobile operating systems now offer applications that will “read” a photo of a barcode and instantly deliver product information to the user’s device—spectacles for a consumer consciousness, delivering into the magisterium of reading and writing an information transaction until quite recently restricted to machines.
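What the machine "reads" is mostly arithmetic. As a sketch (my illustration, not from Battles's posts; the function name is made up), here's the check digit a scanner verifies on an EAN-13 barcode:

```python
def ean13_check_digit(first12):
    """Check digit for the first 12 digits of an EAN-13 barcode:
    weight the digits 1, 3, 1, 3, ... from the left, sum them,
    and return whatever rounds the total up to a multiple of 10."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

print(ean13_check_digit("400638133393"))  # -> 1
```

A scanner that reads a stripe whose thirteenth digit doesn't satisfy this sum simply rejects the scan - the closest thing a barcode has to spelling.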


An aside for a short prediction: in ten years, Kindles (and other handheld readers) will come with a stylus, not to write, but to scan barcodes as well as alphabetic text, and display data (or metadata) on-screen. Think about it! Your "reading machine" will actually be able to read things, not just show you text!

Posted August 14, 2009 at 9:09 | Comments (5) | Permasnark
File under: Books, Writing & Such, Language

August 1, 2009


The Bard, Or What You Will

John McWhorter's exhortation to perform Shakespeare in modern-language adaptations caught my eye a while back. His case is that Shakespeare's language is more-or-less unrecognizable to us; we misunderstand most of what we pick up; and (I think this is probably uncontroversial) full-length, 100% faithful readings of the longest versions of the texts are a chore.

Original Shakespeare should occupy the place original Chaucer does today: engaged by scholars and hard-core aficionados. However, to require intensive and largely unfeasible decoding in full three-hour live performances is to condemn us to ignorance of something that makes life worth living. As Liddell put it, for a people to genuinely possess, rather than merely genuflect to, a literature, its words "must convey expression not to one man only, but to thousands."

Maybe I'm an outlier, but I think I'm so conditioned by my professional position and highly personal Shakespeare fetish that it's almost unimaginable to me to go to a Shakespeare play and try to comprehend the action and language as if I'm hearing it for the first time. Do people actually do this? Should they?

When I see Shakespeare, it's more like going to a Bloomsday reading. I'm quite consciously seeing an adaptation/interpretation of texts that I have read and (usually) know quite well. My attitude is generally, "let's see how they do this." Again, maybe I'm in the minority on this. But I'm also probably squarely in the middle of the target audience for live Shakespeare.

I actually DON'T think that there's much of a market for middle-of-the-road contemporary-language Shakespeare. When people want the Bard, they want the real stuff, and feel cheated if they think they're getting anything less. Even if they don't understand the language. ESPECIALLY when they don't understand it.

But I think you could generate more interest from everyone if you avoided intelligibility for intelligibility's sake and offered a more stylized take on Shakespeare's language. McWhorter's counterexample to Shakespeare is August Wilson, and Wilson's language is NOT plain-language. It's often not even contemporary. If you wanted an August Wilson take on Shakespeare, you'd really be looking for something completely different.

My own preference for clever updates of Shakespeare - again, I'm a history freak - would be for lots and lots of adaptations that don't just port his text into the present, but into lots of different periods, including mishmashes of multiple times and places. (This is actually what Shakespeare does.) Do Julius Caesar during the American Civil War; give us a Prohibition-era Twelfth Night (I actually saw an adaptation like this in London). Put Shakespeare in masks, just any mask but our own.

Posted August 1, 2009 at 3:11 | Comments (25) | Permasnark
File under: Books, Writing & Such, Language

July 24, 2009


Towards A Theory of Secondary Literacy

There's a great scene in Star Trek IV - yes, the one where the crew travels back in time to save whales - where Scotty, the engineer, tries to control a Macintosh by talking to it. When McCoy hands him the mouse, he speaks into it, in a sweetly coaxing voice: "Hello, computer!" When he's told to use the keyboard ("How quaint!"), he irritably cracks his knuckles -- and hunts and pecks at Warp 1 to pull up the specs for "transparent aluminum."

As recently as 2000, it seemed inevitable that any minute now, we were going to be able to turn in our quaint keyboards and start controlling computers with our voice. Our computers were going to become just like our telephones, or even better, like our secretaries. But while voice and speech recognition and commands have gotten a lot better, generally the trend has been in the other direction - instead of talking to our computers, we're typing on our phones.

(Which is arguably the hidden message of Scotty and the Mac - even somebody with the most powerful voice-controlled computer in the galaxy can touch-type like a champ. He probably only talks to the computer so his hands are free to text his friends while he's engineering! "brb - needed on away team" -- "anyone know how to recrystallize dilithium" -- That's why he's so inventive! He's crowdsourcing!)

The return to speech, in all of its immediacy, after centuries of the technological dominance of writing, seemed inevitable. The phonograph, film, radio, and television all seemed to point towards a future dominated by communications technology where writing and reading played an increasingly diminished role. I think the most important development, though, was probably the telephone. Ordinary speech, conversation, in real-time, where space itself appeared to vanish. It created a paradigm not just for media theorists and imaginative futurists but for ordinary people to imagine tomorrow.

This was Marshall McLuhan's "global village" - a media and politics where the limitations of speech across place and time were virtually eliminated. Walter Ong called it "secondary orality" - we were seeing a return to a culture dominated by oral communication that wasn't QUITE like the primary orality of nonliterate cultures - it was mediated by writing, by print, and by the technologies and media of the new orality themselves.

Towards the end of his life, in the mid-1990s, Ong gave an interview where he tried to explain how he thought his theory of secondary orality was being misapplied to electronic communications:

“When I first used the term ‘secondary orality,’ I was thinking of the kind of orality you get on radio and television, where oral performance produces effects somewhat like those of ‘primary orality,’ the orality using the unprocessed human voice, particularly in addressing groups, but where the creation of orality is of a new sort. Orality here is produced by technology. Radio and television are ‘secondary’ in the sense that they are technologically powered, demanding the use of writing and other technologies in designing and manufacturing the machines which reproduce voice. They are thus unlike primary orality, which uses no tools or technology at all. Radio and television provide technologized orality. This is what I originally referred to by the term ‘secondary orality.’

I have also heard the term ‘secondary orality’ lately applied by some to other sorts of electronic verbalization which are really not oral at all—to the Internet and similar computerized creations for text. There is a reason for this usage of the term. In nontechnologized oral interchange, as we have noted earlier, there is no perceptible interval between the utterance of the speaker and the hearer’s reception of what is uttered. Oral communication is all immediate, in the present. Writing, chirographic or typed, on the other hand, comes out of the past. Even if you write a memo to yourself, when you refer to it, it’s a memo which you wrote a few minutes ago, or maybe two weeks ago. But on a computer network, the recipient can receive what is communicated with no such interval. Although it is not exactly the same as oral communication, the network message from one person to another or others is very rapid and can in effect be in the present. Computerized communication can thus suggest the immediate experience of direct sound. I believe that is why computerized verbalization has been assimilated to secondary ‘orality,’ even when it comes not in oral-aural format but through the eye, and thus is not directly oral at all. Here textualized verbal exchange registers psychologically as having the temporal immediacy of oral exchange. To handle [page break] such technologizing of the textualized word, I have tried occasionally to introduce the term ‘secondary literacy.’ We are not considering here the production of sounded words on the computer, which of course are even more readily assimilated to ‘secondary orality’” (80-81).

This is where most of the futurists got it wrong - the impact of radio, television, and the telephone wasn't going to fall solely or even primarily on more and more speech, but, for technical or cultural or who-knows-exactly-what reasons, on writing! We didn't give up writing - we put it in our pockets, took it outside, blended it with sound, pictures, and video, and sent it over radio waves so we could "talk" to our friends in real-time. And we used those same radio waves to download books and newspapers and everything else to our screens so we would have something to talk about.

This is the thing about literacy today that needs, above all, not to be misunderstood. Both the people who say that reading and writing have declined and the people who say they're stronger than ever are right, and wrong. It's not a return to the word, unchanged. It's a literacy transformed by the existence of the electronic media that it initially seemed to have nothing in common with. It's also transformed by all the textual forms - mail, the newspaper, the book, the bulletin board, etc. It's not purely one thing or another.

This reminds me of one of my favorite Jacques Derrida quotes, from his essay "The Book to Come":

What we are dealing with are never replacements that put an end to what they replace but rather, if I might use this word today, restructurations in which the oldest form survives, and even survives endlessly, coexisting with the new form and even coming to terms with the new economy --- which is also a calculation in terms of the market as well as in terms of storage, capital, and reserves.

I doubt that "secondary literacy" will catch on, because it sounds like something that middle school English teachers do. But that's too bad - because it's actually a pretty good term to describe the world we live in.

Posted July 24, 2009 at 5:06 | Comments (2) | Permasnark
File under: Books, Writing & Such, Language, Media Galaxy, Object Culture, Technosnark

July 15, 2009

What Fun To Wreck [Language]

Tim says,

Conceptual writer Kenny Goldsmith introduces a new issue of Poetry devoted to probably the most divisive avant-garde writing in generations:

Our immersive digital environment demands new responses from writers. What does it mean to be a poet in the Internet age? These two movements, Flarf and Conceptual Writing, each formed over the past five years, are direct investigations to that end. And as different as they are, they have surprisingly come up with a set of similar solutions. Identity, for one, is up for grabs. Why use your own words when you can express yourself just as well by using someone else’s? And if your identity is not your own, then sincerity must be tossed out as well. Materiality, too, comes to the fore: the quantity of words seems to have more bearing on a poem than what they mean. Disposability, fluidity, and recycling: there’s a sense that these words aren’t meant for forever. Today they’re glued to a page but tomorrow they could re-emerge as a Facebook meme. Fusing the avant-garde impulses of the last century with the technologies of the present, these strategies propose an expanded field for twenty-first-century poetry. This new writing is not bound exclusively between pages of a book; it continually morphs from printed page to web page, from gallery space to science lab, from social spaces of poetry readings to social spaces of blogs. It is a poetics of flux, celebrating instability and uncertainty.
Comments (2) | Permasnark | Posted: 9:27 AM

July 11, 2009


I had never heard of this disorder before:

In hyperlexia, a child spontaneously and precociously masters single-word reading. It can be viewed as a superability, that is, word recognition ability far above expected levels... Hyperlexic children are often fascinated by letters and numbers. They are extremely good at decoding language and thus often become very early readers. Some hyperlexic children learn to spell long words (such as elephant) before they are two and learn to read whole sentences before they turn three. An fMRI study of a single child showed that hyperlexia may be the neurological opposite of dyslexia.

Often, hyperlexic children will have a precocious ability to read but will learn to speak only by rote and heavy repetition, and may also have difficulty learning the rules of language from examples or from trial and error, which may result in social problems... Their language may develop using echolalia, often repeating words and sentences. Often, the child has a large vocabulary and can identify many objects and pictures, but cannot put their language skills to good use. Spontaneous language is lacking and their pragmatic speech is delayed. Hyperlexic children often struggle with Who? What? Where? Why? and How? questions... Social skills often lag tremendously. Hyperlexic children often have far less interest in playing with other children than do their peers.

The thing is, this absolutely and precisely describes me in childhood, especially before the age of 5 or 6. (This is also the typical age when hyperlexic children begin to learn how to interact with others.) It also describes my son - which is how my wife found the description and forwarded it to me.

You walk around your entire life with these stories, these tics, and the entire time, your quirks are really symptoms. It's a little strange.

Posted July 11, 2009 at 8:17 | Comments (0) | Permasnark
File under: Books, Writing & Such, Braiiins, Language, Learnin', Science, Self-Disclosure

July 9, 2009


Language and the New Liberal Arts

So I'm sitting here, working on making a plain-vanilla hypertext version of New Liberal Arts so folks can read it on their phones, Kindles, whatever, and cleaning up all the extra cruft to make it work -- you can just cut-and-paste from the PDF, it'll be easy, Robin says, forgetting that it's set in opposing faces that sometimes get out of order, that the all-cap fonts turn into gibberish, and that there's a freaking secret message in the thing --

And, maybe just naturally, or maybe as a function of what I'm doing, I am totally blown away - again - by Diana Kimball's "Coding and Decoding" and Rachel Leow's "Translation."

Seriously. Just check them out. They're so elegant and complementary - Rachel's is about a kind of patient mastery and deep connection to other human beings past and present, Diana's about ambient awareness of linguistic symbols that we discover but whose deciphering is always going to be incomplete. Originally, I was going to write a separate NLA entry for "Languages" - when I first read these two, months ago, I realized that I had nothing I wanted to add.

Posted July 9, 2009 at 6:00 | Comments (1) | Permasnark
File under: Language, New Liberal Arts, Recommended

A Treasure-House of Language

I don't have a lot of criteria for friendship, but the one characteristic I think is invariant is a love of and care for language. If you don't take pleasure or find intellectual satisfaction in how words are strung together - maybe even especially written words - then you and I are quickly going to run out of things to say to or do with each other.

So that said, I think a good index of both your wordnerdery and the likelihood of the two of us becoming and remaining fast friends is your excitement in reading about the new Historical Thesaurus of the Oxford English Dictionary, which will be published - in two glorious volumes! - this fall:

The Historical Thesaurus of the Oxford English Dictionary, published by Oxford University Press, is the culmination of 44 years of painstaking work by scholars at the University of Glasgow.

It not only groups words with similar meanings but does so in chronological order according to their history - with the oldest first and most recent last. According to its publisher, the OED, it's the largest thesaurus in the world and the first historical thesaurus in any language.

With 800,000 meanings, 600,000 words and more than 230,000 categories and sub categories, it's twice as big as Roget's version.

And if that doesn't have him turning in his grave, it also contains almost every word in English from Old English to the present day, or 2003 to be precise - the cut-off date for the new dictionary.

· The largest thesaurus resource in the world, covering more than 920,000 words and meanings based on the Oxford English Dictionary
· The very first historical thesaurus to be compiled for any of the world's languages from medieval times through the present
· Synonyms listed with dates of first recorded use in English, in chronological order, with earliest synonyms first
· For obsolete words, the Thesaurus also includes last recorded use of word
· Uses a thematic system of classification
· Comprehensive index enables complete cross-referencing of nearly one million words and meanings
· Contains a comprehensive sense inventory of Old English
· Includes a free fold-out color chart which shows the top levels of the classification structure
· Made up of two volumes: the main text, comprising numbered sections for semantic categories, and the index, comprising a full A-Z look-up of nearly one million lexical items

Sweet mercy. Bless you marvelous pedants and this magnificent thing you have made.

Posted July 9, 2009 at 5:48 | Comments (2) | Permasnark
File under: Books, Writing & Such, Language, Learnin'

July 4, 2009


Evolution 2.0 (and 3.0 beta)

This is kind of a cool idea. Let's say that evolution writ large is only accidentally about the preservation, transmission, and development of living species, but essentially about the preservation, transmission, and development of information. On this view, organisms are just a means to an end, particularly well-adapted couriers for all of this chemical data.

If that's the case, then maybe there isn't anything particularly special about the specific form of that data (i.e. DNA) or the way it's been transmitted in humans (sexual reproduction). That's just one way of doing things - in nonconscious, nonverbal, or nonhistorical species, genetic transmission, instinct, inherited traditions are the only means you've got. But once modern humans arrive on the scene, with all their increasingly sophisticated means of representing information, then Evolution 1.0, internal transmission of information, isn't the only game in town -- you've also got Evolution 2.0, characterized by the external transmission of information.

Once you reframe evolution in this way, then you can say that our species' rate of evolution "over the last ten thousand years, and particularly... over the last three hundred" is actually off the charts.

So the guy who's arguing this is a physicist named Stephen Hawking. (Maybe you've heard of him - he's awfully smart, and was part of Al Gore's Vice Presidential Action Rangers.) He also says that our tinkering with evolution ain't over:

[W]e are now entering a new phase, of what Hawking calls "self designed evolution," in which we will be able to change and improve our DNA. "At first," he continues "these changes will be confined to the repair of genetic defects, like cystic fibrosis, and muscular dystrophy. These are controlled by single genes, and so are fairly easy to identify, and correct. Other qualities, such as intelligence, are probably controlled by a large number of genes. It will be much more difficult to find them, and work out the relations between them. Nevertheless, I am sure that during the next century, people will discover how to modify both intelligence, and instincts like aggression."

If the human race manages to redesign itself, to reduce or eliminate the risk of self-destruction, we will probably reach out to the stars and colonize other planets. But this will be done, Hawking believes, with intelligent machines based on mechanical and electronic components, rather than macromolecules, which could eventually replace DNA based life, just as DNA may have replaced an earlier form of life.

I can't decide if this is totally anthropocentric, or exactly the opposite. But it's kind of exciting, isn't it? I'm evolving the species right now, just by typing this! And so are you, by reading it! And so are Google's nanobots, by recording all of it in their fifteenth-gen flash brains!

Posted July 4, 2009 at 5:59 | Comments (1) | Permasnark
File under: Books, Writing & Such, Language, Science, Technosnark

July 1, 2009

Nav's thoughts: Yeah, I was wondering about this too after Kottke posted it, especially since the undercurrent of... >>

Language Is A Technology That Restructures Language

Lera Boroditsky has a super-interesting essay at Edge on her work empirically testing the proposition that language structures thought. (Blërg - resisting urge to... blockquote.... sigh.)

So Boroditsky's got some clever tests, including asking speakers/writers of different languages to arrange pictures chronologically (speakers of Romance languages tend to arrange chronology from left to right, Hebrew speakers from right to left, and, fascinatingly, the Kuuk Thaayorre in Australia do it from east to west), and testing which adjectives speakers of languages with gendered nouns assign to those nouns - Germans think keys (masculine in German) are hard and jagged and bridges (feminine) are slender and beautiful, where Spanish speakers (whose gender assignments flip those nouns) correspondingly flip the associations.

But... okay, look. I believe in this thesis. But the tests to my mind are not conclusive evidence. Here's why.

You can't get into a person's head.

Is it that simple? It is.

Because (stay with me) all of these tests don't show that speakers of different language think differently, but that they represent thought differently. The way we write changes the way we talk, and the way we represent thought in space. The way we talk also changes the way we write. And the way we talk changes the way we talk. You don't have any evidence - at least, any evidence that doesn't assume the premise - that Germans actually THINK bridges are more graceful or beautiful than Spaniards do - just that they're more likely to use adjectives with feminine associations with feminine nouns. What this suggests immediately is that language is a complex and interconnected system where terms and kinds group together, and small linguistic changes actually trigger a series of different linguistic associations and values. It DOESN'T immediately prove that language structures thought - understood as something independent from its representation.

Because if language is the vocal and visual representation of concepts, then ALL of Boroditsky's tests are instances of language. Language structures language. And once you assume unproblematically that language directly represents thought, then you naturally discover that thought and language are inseparable. Which is what was to be shown. But this is logically a tautology - even if its empirical specifics of how that tautology manifests itself are fascinating.

Let me reframe this, then. What I think these experiments show is that in moments where we may think we are simply registering our pure and unmediated experience of the world, we're really on auto-pilot - language is in fact doing our "thinking" for us. But this kind of not-quite-thinking doesn't automatically deserve to be called "thought" at all.

Posted July 1, 2009 at 11:11 | Comments (3) | Permasnark
File under: Braiiins, Language, Science

May 29, 2009

In Praise of Post-

Music critic Simon Reynolds praises music's moments of in-between:

It rankles a bit that the late '80s are now treated as a mere prequel to grunge. The recently aired Seven Ages of Rock on VH1 Classic was a marked improvement on earlier TV histories of rock, which tended to jump straight from Sex Pistols to Nirvana. But its episode on U.S. alternative rock nonetheless presented groups like the Pixies, Dinosaur Jr., and Sonic Youth as preparing the ground for Nirvana. That's not how it felt at the time: Sonic Youth and the rest seemed fully formed significances in their own right, creative forces of monstrous power, time-defining in their own way (albeit through their refusal of the mainstream). My Melody Maker comrade David Stubbs wrote an end-of-year oration proclaiming 1988—annum of Surfer Rosa, Daydream Nation, My Bloody Valentine's Isn't Anything—to be the greatest year for rock music. Ever!

We actually believed this, and our fervor was infectious, striking an inspirational, Obama-like chord with young readers heartily sick of the idea that rock's capacity for renewal had been exhausted in the '60s or the punk mid-'70s. Yet that period will never truly be written into conventional history (despite efforts like Michael Azerrad's Our Band Could Be Your Life) because it doesn't have a name. It's too diverse, and it's not easily characterized. For instance, the groups were "underground," except that by 1988 most of them—Husker Du, Throwing Muses, Sonic Youth, Butthole Surfers—had already signed, or soon were to sign, to majors. Finally, it'll never get fairly written into history because, damn it, grunge did happen.

As I've gotten older, I've come to like 80s alternative music better than the stuff I grew up with in the 90s - although now (with almost two decades' distance), the 90s looks better, and just plain different, from the radio I remember. (I didn't listen to Belle and Sebastian, Neutral Milk Hotel, or Smog in the 90s. I do now.)

The weird thing is that to be a precursor is a recipe for big sales but also diminished significance in your own right. The 80s are full of bands that influenced Nirvana who don't really sound like Nirvana, who don't sound ANYTHING like the rest of what passed for grunge, who actually don't make a lot of sense in that context.

But to be post- is a kind of liberation -- one has a sense of being reflective, developing, moving beyond something else, a continuity with that history but also a break. So the coolest thing to be is post-punk. It's so cool that the first half of this decade saw dozens of bands who were post-post-punk.

So Reynolds identifies two strains of in-between music to go along with 80s post-punk: post-disco and post-psychedelic. I'm convinced that these typologies totally work; I might be more invested in the post-psychedelia bands he lists than the post-disco ones, but it all sounds interesting. And in this case, naming is claiming: giving these bands and their sound a name actually gives you a context to talk about them, one that might be misleading (in which case, time to toss it out) but which might be a way to call more attention to things that would otherwise go unnoticed.

He also includes this nice postscript (har har) on post-rock and post-metal:

There are some other "post-" genres out there, but to my mind, they describe something quite different from the above. Take post-rock, a term that mysteriously emerged in the early '90s to describe experimental guitar bands that increasingly abandoned guitars altogether. (Oh, OK, it was me who came up with that one.)
Posted May 29, 2009 at 6:14 | Comments (0) | Permasnark
File under: Language, Music, Radio, Television

May 27, 2009

Howard Weaver's thoughts: So much for the typing skills I have honed for more than 40 years ... That We,, is meant ... >>

The Negative Dialectics of Whiteness

Ta-Nehisi Coates:

The idea is that Latinos have a dual experience that whites don't have and that, all things being equal, they'll be able to pull from that experience and see things that whites don't. The problem with this reasoning is it implicitly accepts the logic (made for years by white racists) that there is something essential and unifying running through all white people, everywhere. But White--as we know it--is a word so big that, as a descriptor of experience, it almost doesn't exist.

Indeed, its claims are preposterous. It seeks to lump the miner in Eastern Kentucky, the Upper West Side Jew, the yuppie in Seattle, the Irish Catholic in South Boston, the hipster in Brooklyn, the Cuban-American in Florida, or even the Mexican-American in California all together, and erase the richness of their experience, by marking the bag "White." This is a lie--and another example of how a frame invented (and for decades endorsed) by whites is, at the end of the day, bad for whites. White racism, in this country, was invented to erase the humanity and individuality of blacks. But for it to work it must, necessarily, erase the humanity of whites, too.

TNC of course makes the further (and necessary) point that these are all fictions that become socially real.

P.S.: I realize the "negative dialectics" reference is probably too insidery for 98% of readers. It's the term Theodor Adorno used as the title of one of his books. Hegel defined identity as "the identity of identity and nonidentity" - the idea being that any concept or act of identification glosses over differences and unifies things that are like in some ways but unlike in others. For Adorno, negative dialectics explores "the nonidentity of identity and nonidentity," i.e., disintegrating all of that.

Cf. the kind of weird quasi-discourse on whether Judge Sotomayor will or will not be the first "Hispanic" justice on the Supreme Court - the idea being that Justice Cardozo (whose ancestors, Portuguese Jews, emigrated to New York state in the eighteenth century) would qualify. If you try to pursue a purist/universalist idea of racial identity to the end, you start to focus on definitional descriptors (biological and/or cultural ancestry on the Iberian peninsula) that just wipe out all differences. "Hispanic" in this context may be as much of a lie-word -- that is to say, as powerful a concept -- as "white."

Posted May 27, 2009 at 6:33 | Comments (2) | Permasnark
File under: Language, Society/Culture

May 26, 2009

Faking It In Translation

Suzanne Menghraj loved Pierre Bayard's How to Talk About Books You Haven’t Read so much that she read it twice. She wanted to read Bayard's 2000 book Comment améliorer les oeuvres ratées (How to Improve Failed Works). But it hadn't been translated, and her French was shaky at best. So she decided to bang it out herself anyways:

I came very close to failing French several times over the eight years I studied the language. This does not make me proud. But it does make me want to explore my persistent lack of facility with a language whose structure and habits I understand only well enough to catch a word here, a sense or mood there (let’s say I “skim” French). And so, a good French-English dictionary in hand, I read “Hélas!” (literally, “Alas!”), the introduction to Comment améliorer les oeuvres ratées and was as taken with the iconoclastic ambitions expressed in it as I am with those expressed in How to Talk About Books You Haven’t Read—so taken that I decided to give translation of “Hélas!” a shot.

My own speaking French is terrible, and my reading French is so slow that I've read more than a few books with the original in one hand and a translation in the other, jotting notes with a pen between my teeth when I can't be bothered to put either book down. (I'm telling you - this is the only way to read Proust.)

And my German's probably about the same as Menghraj's French. I was astonished when I switched from philosophy to comparative literature, because suddenly everyone around me was fluent as hell - they were born in Austria, they spent every summer in Paris, they didn't just like to dick around with Kant or Baudelaire.

But I still think that my ambient awareness of, my ability to skim four or five different languages, has really helped me do a lot of things I otherwise wouldn't be able to do. I say, let's have more people half-assing it in languages not their own.

Language is like cooking, or sex: if you get all hung up on being really, really good, not only won't it be fun, you're probably never going to get around to doing it at all.

Via Willing Davidson at The Book Bench.

Posted May 26, 2009 at 10:11 | Comments (0) | Permasnark
File under: Books, Writing & Such, Language, Worldsnark

Sonority in Translation

Marvelous profile of Svetlana Geier, translator of Dostoyevsky into German:

Svetlana Ivanov was 18 years old when the Germans marched into Kiev (she acquired the name Geier later from her husband, a violinist). Although these events were the prelude to great suffering for countless subjects of the Soviet Union, it was a time of great promise for the young woman. Like others willing to work for the Germans for a one-year period, she was eligible to receive a scholarship to go to Germany. Having received private lessons in French and German from childhood, she was able to work as an interpreter for a Dortmund construction firm that was erecting a bridge across the Dnieper River.

Svetlana and her mother – who came from a family of tsarist officers - were victims of Stalinism. Svetlana Geier still recalls watching as a small child while her grandmother cut up family photos into tiny pieces with manicuring scissors: under the Communist regime, their possession could have been dangerous. Her father, a plant breeding expert, was interned during the purges of 1938. He remained in prison for 18 months, was interrogated and abused, but nonetheless eventually released. The following year, he died from the after-effects of imprisonment. Still ostracized even after his release, he spent his final months in a dacha outside of town, cared for by his daughter.

In the eyes of the young interpreter’s countrymen, her work for the Germans had discredited her: "As far as they were concerned, I was a collaborator." After Stalingrad, she could easily imagine what awaited her under Soviet rule. She took advantage of an offer to enter the German Reich with her mother, somewhat starry-eyed, and still hoping to receive a scholarship. That she, a "worker from the east" (her automatic classification in Nazi Germany) actually received it - one of two Humboldt scholarships reserved for "talented foreigners" - borders on the miraculous. Playing benevolent roles in her lengthy and stirring account of these events are a generous entrepreneur, an alert secretary, and a pair of good-natured assistants at the Ministry for the Occupied Eastern Territories...

Now, a year before the end of World War II, Svetlana Ivanov began her literary studies. She recalls the very first lecture she heard, Walter Rehm's "The Essence of the Tragic," which she attended in the company of her fellow students, all of them men with war injuries. She still has her notes.

I'm reminded, more than a little ironically, of the line the rabbi speaks at the beginning of Tony Kushner's Angels in America: "You can never make that crossing that she made, for such Great Voyages in this world do not any more exist. But every day of your lives the miles that voyage between that place and this one you cross. Every day. You understand me? In you that journey is."

I really like this description of her translation method:

Svetlana Geier’s method, if one can call it that, is an acoustic one. She immerses herself in the text until she has absorbed it completely, is able to hear its unique tenor, or as she says, "its melody." Then she induces it to resound in German, and this again takes place acoustically, for Geier dictates her translations. They ring out aloud before ever becoming fixed on paper. Her Dostoevsky translations have received extraordinary praise for this "sonorous" character in particular. Finally, it is said, the divergent voices of Dostoevsky’s protagonists have become distinguishable.

Geier's last translation - of a book by Dostoevsky that I haven't read, Podrostok; Geier's title, Ein grüner Junge, brings the German closer to Constance Garnett's A Raw Youth - also sounds fascinating. But I've already excerpted this short article to death, so you should click on it if you, you know, actually want to know something about her/FD's book.

Posted May 26, 2009 at 9:09 | Comments (0) | Permasnark
File under: Books, Writing & Such, Language, Recommended, Worldsnark
John's thoughts: Tim- I like the intellectual conservation of energy point; I think it dovetails nicely wi... >>

The New Socialism is the New Humanism

We loooove Kevin Kelly around here at Snarkmarket. Robin tipped me off to his stuff and he's since joined Atul Gawande, Roger Ebert, Virginia Heffernan, Clay Shirky, Michael Pollan, Clive Thompson, Gina Trapani, Jason Kottke, Ben Vershbow, Hilzoy, Paul Krugman, Sy Hersh, and Scott Horton (among others) in the Gore-Gladwell Snarkfantastic Hall of Fame. Dude should have his own tag up in here.

But I think there's a rare misstep (or rather, misnaming) in his new Wired essay, "The New Socialism: Global Collectivist Society Is Coming Online." It's right there in the title. That S-word. Socialism.

Now, don't get me wrong. I like socialism where socialism makes sense. Almost everyone agrees that it makes sense to have a socialized police and military. I like socialized (or partially socialized) education, and I think it makes a lot of sense to have socialized health insurance, as part of a broad social safety net that helps keep people safe, capable, knowledgeable, working. Socialism gets no bad rap from me.

I know Kelly is using the word socialism as a provocation. And he takes pains to say that the new socialism, like the new snow, is neither cold nor wet:

We're not talking about your grandfather's socialism. In fact, there is a long list of past movements this new socialism is not. It is not class warfare. It is not anti-American; indeed, digital socialism may be the newest American innovation. While old-school socialism was an arm of the state, digital socialism is socialism without the state. This new brand of socialism currently operates in the realm of culture and economics, rather than government—for now...

Instead of gathering on collective farms, we gather in collective worlds. Instead of state factories, we have desktop factories connected to virtual co-ops. Instead of sharing drill bits, picks, and shovels, we share apps, scripts, and APIs. Instead of faceless politburos, we have faceless meritocracies, where the only thing that matters is getting things done. Instead of national production, we have peer production. Instead of government rations and subsidies, we have a bounty of free goods.

But I think of socialism as something very specific. It's something where a group of citizens pools its resources as part of a democratic (and at least partially technocratic) administration of benefits to everyone. This could be part of a nation-state or a co-op grocery store. And maybe this is too Hobbesian, but I think about it largely as motivated by a defense against something bad. Maybe there's some kind of general surplus-economy I'm missing where we can just socialize good things without risk. That'd be nice.

When masses of people who own the means of production work toward a common goal and share their products in common, when they contribute labor without wages and enjoy the fruits free of charge, it's not unreasonable to call that socialism.

But I'll put this out as an axiom: if there's no risk of something genuinely bad, no cost but opportunity cost, if all we're doing is passing good things around to each other, then that, my friend, is not socialism.

This is a weird paradox: what we're seeing emerge in the digital sphere is TOO altruistic to be socialism! There isn't enough material benefit back to the individual. It's not cynical enough! It solves no collective action problems! And again, it's totally individualistic (yet totally compatible with collectivities), voluntarist (yet totally compatible with owning one's own labor and being compensated for it), anti-statist (yet totally compatible with the state). It's too pure in its intentions and impure in its structure.

Kelly, though, says, we've got no choice. We've got to call this collectivism, even if it's collective individualism, socialism:

I recognize that the word socialism is bound to make many readers twitch. It carries tremendous cultural baggage, as do the related terms communal, communitarian, and collective. I use socialism because technically it is the best word to indicate a range of technologies that rely for their power on social interactions. Broadly, collective action is what Web sites and Net-connected apps generate when they harness input from the global audience. Of course, there's rhetorical danger in lumping so many types of organization under such an inflammatory heading. But there are no unsoiled terms available, so we might as well redeem this one.

In fact, we have a word, a very old word, that precisely describes this impulse to band together into small groups, set collective criteria for excellence, and try to collect and disseminate the best, most useful, most edifying, most relevant bodies of knowledge as widely and as cheaply as possible, for the greatest possible benefit to the individual's self-cultivation and to the preservation and enrichment of the culture as a whole.

And that word is humanism.

... Read more ....
Posted May 26, 2009 at 6:53 | Comments (6) | Permasnark
File under: Braiiins, Language, New Liberal Arts

May 18, 2009

Tim's thoughts: I love everything about Lilburne's story, especially his hedge: "in these very words OR to this e... >>

A Messe Of Pottage

So there's this huge political money scandal in the UK. The Telegraph's Simon Heffer says, let's get Puritanical -- as in the real Puritans:

An unfinished miniature portrait of Oliver Cr... (Image via Wikipedia)

What is now needed is the Cromwellian touch, for I do not believe Parliament's standing has been lower since Oliver dismissed the Rump in April 1653. Mr Cameron should sack from his front bench all those exposed in unacceptable use of taxpayers' money. Central Office should ask chairmen of constituency parties whose MPs have behaved disgracefully to consider whether the chances of the seat being held at the next election would be helped by the selection of a new, financially untainted candidate. To take this swift action now would secure Mr Cameron's moral advantage; it would greatly damage the Prime Minister and the Labour Party; it would put pressure on Mr Brown to do precisely the same.

Heffer even busts out one of my favorite Cromwell stories:

However, we all know what Mr Brown should do, and again Cromwell provides us with our lead. Remember the words he uttered to the Rump, in his anger at its failure to consolidate the new England after the second civil war: "It is high time for me to put an end to your sitting in this place, which you have dishonoured by your contempt for all virtue, and defiled by your practice of every vice; ye are a factious crew, and enemies to all good government; ye are a pack of mercenary wretches, and would like Esau sell your country for a mess of pottage... Is there a single virtue now remaining amongst you? Is there one vice you do not possess? Ye have no more religion than my horse; gold is your god; which of you have not bartered your conscience for bribes?... Ye are grown intolerably odious to the whole nation; ye were deputed here by the people to get grievances redress'd, and are yourselves gone... In the name of God, go!"

The trouble is, this is EVERYBODY's favorite Cromwell speech, and he probably never said most of it. Mercurius Politicus has got the goods:

The earliest record I can find of it is in Thomas Mortimer’s The British Plutarch (1816), which gives this source for it:

The following piece said to have been found lately among some papers which formerly belonged to Oliver Cromwell is supposed to be a copy of the very words addressed by him to the members of the Long Parliament when he turned them out of the House. It was communicated to the Annual Register for 1767 by a person who signed his name T Ireton and said the paper was marked with the following words Spoken by Oliver Cromwell when he put an end to the Long Parliament.

I've had a look through the Annual Register on ECCO but can’t trace the original source. It's true that various letters and other Cromwelliana were turning up during the eighteenth century and onwards into the nineteenth, but a few things make the speech seem too good to be true. The fact that it purports to be a direct transcript, when it's unlikely anyone would have been recording it verbatim, is one. The reference to T Ireton is another -- perhaps an attempt to suggest authenticity by implying a descendant of Henry Ireton had got hold of the speech, but of course Ireton had died in 1651. So without wanting to be a spoilsport, the version of the speech being quoted in the press may not be what it purports to be.

I would look myself to confirm or refute MP's findings, but an injection my dissertation advisor gave me when I kept on doing research on "blood and treasure" instead of writing about Ezra Pound means that when I look at EEBO or ECCO for more than fifteen minutes at a stretch, my eyes begin to bleed.

For the record though, my all-time favorite Cromwell story involves another speech he purportedly gave, this time about torturing (probably) the Levellers (which Leveller John Lilburne somehow managed to overhear AND get to the printer while he was still in prison):

Lt. General Cromwell (I am sure of it) very loud, thumping his fist upon the Council table, til it rang again, and heard him speak in these very words or to this effect; I tell you, Sir, you have no other way to deal with these men, but to break them in pieces; and thumping upon the Council table again, he said, Sir, let me tell you that which is true, if you do not break them, they will break you; yea and bring all the guilt of the blood and treasure shed and spent in this kingdom upon your head and shoulders; and frustrate and make void all that work, that with so many years' industry, toil and pains you have done, and so render you to all rational men in the world as the most contemptiblest generation of silly, low-spirited men in the earth, to be broken and routed by such a despicable, contemptible generation of men as they are; and therefore, Sir, I tell you again, you are necessitated to break them.

Cromwell certainly did have a way of speaking his mind.

(Via Mercurius Politicus.)

Posted May 18, 2009 at 6:53 | Comments (2) | Permasnark
File under: Books, Writing & Such, Language, Learnin', Snarkpolitik, Worldsnark

Now That's What I Call "Inventio"

Tim says,

James Fallows, "On eloquence vs. prettiness":

[Obama's] eloquence is different from what I think of as rhetorical prettiness -- words and phrases that catch your notice as you hear them, and that often can be quoted, remembered, and referred to long afterwards. "Ask not..." from John F. Kennedy. "Blood, toil, tears, and sweat" from Winston Churchill. "Only thing we have to fear is fear itself" from FDR. "I have a dream," from Martin Luther King. Or, to show that memorable language does not necessarily mean elevated thought, "segregation today, segregation tomorrow, segregation forever!" from the early George C. Wallace.

At rare moments in history, language that goes beyond prettiness to beauty is matched with original, serious, difficult thought to produce the political oratory equivalent of Shakespeare. By acclamation Lincoln's Second Inaugural Address is the paramount American achievement of this sort: "With malice toward none, with charity for all, with firmness in the right as God gives us to see the right..."

The reason to distinguish eloquence of thought from prettiness of expression is that the former tells you something important about the speaker, while the latter may or may not do so. Hired assistants can add a fancy phrase, much as gag writers can supply a joke. Not even his greatest admirers considered George W. Bush naturally expressive, but in his most impressive moment, soon after the 9/11 attacks, he delivered a speech full of artful writerly phrases, eg: "Whether we bring our enemies to justice or bring justice to our enemies, justice will be done." Good for him, and good for his staff.

Rhetorical polish, that is, can be a staff-enhanced virtue. The eloquence that comes from original thought is much harder to hire, or to fake. This is the sort of eloquence we've seen from Obama often enough to begin to expect.

(Sorry for the long quote, but I wanted to include all of Fallows's examples.)

Also --

Inventio is the system or method used for the discovery of arguments in Western rhetoric and comes from the Latin word, meaning "invention" or "discovery". Inventio is the central, indispensable canon of rhetoric, and traditionally means a systematic search for arguments (Glenn and Goldthwaite 151).

Inventio comes from the Latin invenire, meaning "to find" or "to come upon". The same Latin root later gave us the English word inventor. Invenire is derived from the Greek heuriskein, also meaning "to find out" or "discover" (cf. eureka, "I have found it").

Comments (0) | Permasnark | Posted: 4:45 AM

May 5, 2009

tim's thoughts: I def posted it first - I assumed you were being cleverly indirect. I REALLY want to kno... >>

Luxurious Artisanal Bibliophile Dada

Sweet Gods of Heaven and Earth! It's the best bookporn post ever!

[Image: Xu Bing.jpg]

That's Xu Bing's Tian Shu (Book From Heaven). Rachel Leow writes:

To make [the four hundred books and fifty-foot scrolls], Xu painstakingly carved Chinese characters into square woodblocks, in just the way his ancient printing predecessors would have done, had them typeset and printed, and the printed pages mounted and bound into books and scrolls.

The result is a truly spectacular display of bookmanship — volumes fit for an emperor’s library. Yet, there’s the astonishing, Borgesian catch: Out of the three or four thousand Chinese characters used in these volumes and scrolls, not a single one of them is a real Chinese character.

They are made up of recognizable radicals and typical atomic components of Chinese characters, but Xu laboured to ensure that while they all retain the unmistakable look of Chinese script, they are all, so to speak, nonsense. They do not exist in any dictionary, and do not mean anything. Chinese speakers and non-Chinese speakers alike approach the books with the same sense of wonder at their beauty, and the same sense of incomprehension at their content — though, for Chinese readers, the frustrated impulse to read might detract somewhat from their aesthetic enjoyment of the art piece. I’ve heard that some Chinese readers have spent days attempting to locate a character they can read — to no avail. It’s a piece of art whose meaning is to be found in its meaninglessness.

I want to GO to there.

Instead, you should go to a historian's craft to check out more images of Xu Bing's two books (there is also a Book of Earth) and read every gorgeous word Rachel's written about them. These are the internet equivalent of being touched by "beautiful calligraphy, by brushstroked words on fine paper, by sensuous lines of scripts that dance provocatively on the page, inviting comprehension." You wish you wrote this well this early in the morning. (It's still early in Malaysia, right?)

It's a completely different tradition, but I'm reminded of Augustine's theory of signs. For Augustine, a sign (whether a word or a symbol) is a tool, an instrument. Signification shows that things aren't used for their own sake, but in reference to something else. This chain of signification and interpretation goes all the way up to God, who is the only thing completely sufficient in Himself, the only thing that CAN'T be made to signify anything else. Since everything else is deficient, anything that isn't God, whether a word, a gesture, a picture, a dream, stars, a person, an animal, a tree, etc., can be turned into a sign. In fact, it HAS to be taken as a sign, at least in this broad sense of an instrument pointing to another purpose; otherwise, you're performing a kind of idolatry, taking a deficient thing to be self-sufficient and giving to it what you should reserve for God alone. At this point, Augustine's semiotics dovetails with his ethics; we shouldn't delight too much in food for its own sake, sex for its own sake -- in short, in pleasure, except insofar as it helps us to serve a godly and natural purpose.

So there is a universal potential for signification. Any worldly thing can be a sign. But to refuse signification, to delight in the pleasure of the letter itself, is an act of rebellion.

Rachel's language may be more instrumental than Xu Bing's, but it's no less of a pleasure to read.

Posted May 5, 2009 at 2:27 | Comments (2) | Permasnark
File under: Beauty, Books, Writing & Such, Language, Object Culture, Worldsnark

April 30, 2009

Nom De Whatever

Tim says,

Intriguing aside in this Slate article by Huan Hsu on office workers in China adopting English names:

In the United States, people tend to view names and identities as absolute things—which explains why I agonized over deciding on an English name—but in China, identities are more amorphous. My friend Sophie flits amongst her Chinese name, English name, MSN screen name, nicknames she uses with her friends, and diminutives that her parents call her. "They're all me," she says. "A name is just a dai hao." Dai hao, or code name, can also refer to a stock's ticker symbol.

h/t: Saheli

Comments (2) | Permasnark | Posted: 10:49 AM

April 29, 2009

Gay History vs. Queer Studies

Larry Kramer at Yale:

It took a long time for Yale to accept Kramer money. After a number of years of trying to get Yale to accept mine for gay professorships or to let me raise funds for a gay student center, (both offers declined), my extraordinary straight brother Arthur offered Yale $1 million to set up the Larry Kramer Initiative for Lesbian and Gay Studies and Yale accepted it. My good friend and a member of the Yale Corporation, Calvin Trillin, managed to convince President Levin that I was a pussycat. The year was 2001.

Five years later, in 2006, Yale closed down LKI, as it had come to be called. Yale removed its director, Jonathan David Katz. All references to LKI were expunged from Web sites and answering machines and directories and syllabuses. One day LKI was just no longer here.

When this happened I thought my heart would break.

I wanted gay history to be taught. I wanted gay history to be about who we are, and who we were, by name, and from the beginning of our history, which is the same as the beginning of everyone else’s history.

This is a great speech, even though it's peppered with the occasional, um, surprising claim ("George Washington was gay, and that his relationships with Alexander Hamilton and the Marquis de Lafayette were homosexual... his feelings for Hamilton led to a government and a country that became Hamiltonian rather than Jeffersonian") and a tirade against queer studies that feels misplaced and, at times, childish:

It seems as if everything is queer this and queer that... Just as a point of information, I would like to proclaim with great pride: I am not queer! And neither are you. When will we stop using this adolescent and demeaning word to identify ourselves? Like our history that is not taught, using this word will continue to guarantee that we are not taken seriously in the world.

Just like dressing "in drag," "acting" transgendered, or not wanting to let other people define your identities for you guarantees that you won't be taken seriously in the world. Oh, it matters so much to be taken seriously.

In particular, it seems foolish to blame scholars of literature, anthropology, or communication for doing what they do, rather than the history and politics departments that refuse to give gay history a foothold.

Folks care about the words they use, and are chilly towards "homosexual," not because they refuse to grant that same-sex desire/partnering/sex have always been around, but because 1) lots of people's sense of their gender/sexuality doesn't fall under what we'd just call "gay" or "homosexual," not least because 2) to pick an example, if you were born an anatomical woman but think of yourself as a man attracted to women, you wouldn't think of your attraction as "same-sex," and 3) people finally get to define the words for themselves! "Homosexuality" is a medical word; "sodomy" is religious; "queer" is social. They all have different valences, but the last offers a flexibility that for many, many people, is highly desirable.

Now, I absolutely agree that Eve K. Sedgwick doesn't do what George Chauncey does, and that we need about a hundred more Chaunceys a hundred times more than we need a hundred more Sedgwicks. But gosh, Larry, don't bash folks for not being serious because you don't like the name. Bash the institution for taking your money and not supporting what you wanted to do.

Also, pick up Epistemology of the Closet sometime and give it a read. I think you'd find that this marvelous turn of phrase you use (wait for the end) echoed nicely there:

Franklin Pierce, who became one of America's worst presidents, and Nathaniel Hawthorne, who became one of our greatest writers, as roommates at Bowdoin College had interactions that changed them both forever and, indeed, served as the wellspring for what Hawthorne came to write about. Pierce was gay. And Hawthorne? Herman Melville certainly wanted him to be.
Posted April 29, 2009 at 2:43 | Comments (0) | Permasnark
File under: Fairy-Tale Marriage, Language, Learnin'

April 20, 2009

Robin's thoughts: I know this isn't exactly the same thing, but it made my similarly-tickled brain bone resonate a ... >>

William James, You've Got It Goin' On

Jonah Lehrer tickles my brain-bone:

This reminds me of that great William James quote: "We ought," he wrote, "to say a feeling of and, a feeling of if, a feeling of but, and a feeling of by, quite as readily as we say a feeling of blue, or a feeling of cold." What is James talking about? He's pointing out that language creates the illusion of transparency. We pretend that we're just describing the "substantive parts" of the world - those nouns we match together with adjectives and verbs in neat sentences - but this substance is inevitably shaded by "transitive" mental processes we aren't aware of, such as gendered nouns and quirks of grammar. In other words, language is a constraint on thought, a concrete riverbed for the stream of consciousness.

Posted April 20, 2009 at 9:51 | Comments (1) | Permasnark
File under: Braiiins, Language

April 15, 2009

Matt's thoughts: My favorite Geoffrey Pullum takedown remains <a href=" >>

Anti-Strunkites, Pt. 2

Michael Leddy pokes holes in Geoffrey Pullum's critique of Strunk and White, particularly Pullum's characterization of S/W's guidance as free-floating, contentless maxims:

Pullum says that "many" of Strunk and White's recommendations are "useless," citing "Omit needless words" as an example. On its own, this advice is no more helpful than telling a musician to avoid playing wrong notes. But "Omit needless words" doesn't appear on its own; it's accompanied by sixteen examples of how to improve cumbersome phrasing (e.g., "the fact that") and a demonstration of how six choppy sentences can be revised into one...

Pullum's summing up — "Following the platitudinous style recommendations of Elements would make your writing better if you knew how to follow them" — seems to forget that The Elements of Style is, after all, a book, with examples and explanations to help the reader to put its recommendations into practice.

He also points out, as I did, that Pullum too often switches his targets.

Key takeaways for me from the Pullum: as illustrations of writing to avoid, S/W too often invent sentences that NO ONE trained in comp would write, rather than tougher cases; and the evidence of S/W "don'ts" in the writings of master contemporary stylists of English literature strongly suggests that these usages are in fact perfectly grammatical/appropriate.

Posted April 15, 2009 at 5:48 | Comments (1) | Permasnark
File under: Books, Writing & Such, Language, Learnin'

April 11, 2009

Howard Weaver's thoughts: Speaking of "telegram style" evokes memories of the days when journalists often filed via telegra... >>



Several generations of college students learned their grammar from the uninformed bossiness of Strunk and White, and the result is a nation of educated people who know they feel vaguely anxious and insecure whenever they write "however" or "than me" or "was" or "which," but can't tell you why. The land of the free in the grip of The Elements of Style.

So I won't be spending the month of April toasting 50 years of the overopinionated and underinformed little book that put so many people in this unhappy state of grammatical angst. I've spent too much of my scholarly life studying English grammar in a serious way. English syntax is a deep and interesting subject. It is much too important to be reduced to a bunch of trivial don't-do-this prescriptions by a pair of idiosyncratic bumblers who can't even tell when they've broken their own misbegotten rules.

That sounds like standard-issue Chronicle of Higher Ed blunderbussery, but the author, Geoffrey K. Pullum, knows what he's talking about -- he's a linguist, and co-wrote The Cambridge Grammar of the English Language -- and the bulk of the essay is a startlingly comprehensive, point-by-point, and erudite take-down of Strunk and White.

... Read more ....
Posted April 11, 2009 at 2:34 | Comments (5) | Permasnark
File under: Books, Writing & Such, Language, Learnin'

April 10, 2009

Thousand-Dollar Steampunk Idea

Tim says,

Teletwitter (or "Twittergraph"): A multiplatform twitter client that pounds out received tweets like an oldtimey telegraph/teletype machine. Morse code optional. Also sheds punctuation, formats in telegram style, & replaces period with STOP

Comments (1) | Permasnark | Posted: 11:16 AM

April 5, 2009


The Economist just published a magazine article on the relationship between poverty, stress, and memory in childhood development. It's a powerful thesis, and breathtaking in its scope. But Mark Liberman at Language Log has an equally powerful takedown that walks back some of the big conclusions the article suggests.

Basically, the differences found in the research are much smaller, statistically, than you'd think. As debunkings go, this is ho-hum. But I'm much more intrigued by Liberman's Whorfian idea about why we get confused when we start to talk about statistical variation among groups:

This is presumably because a significant proportion of [The Economist's] readers would be baffled by talk of effect sizes or percentiles, while the proportion who are bothered by vague talk about generic differences is minuscule. Such things are not effectively taught or widely learned, even among quantitatively-minded intellectuals. But I also think that there's a linguistic aspect. If Benjamin Lee Whorf were alive, he might argue that our whole society is intellectually hamstrung by the way that English -- like all the other languages of the world -- tends to make us think about the evaluation and comparison of the properties of members of groups. And, I think, he might be right.

The easy and natural ways of talking about group comparisons express differences in terms of properties of the groups involved, or in terms of properties of imaginary generic or average group members: "the working memories of children who have been raised in poverty have smaller capacities than those of middle-class children"; "Those who had spent their whole lives in poverty could hold an average of 8.5 items in their memory at any time. Those brought up in a middle-class family could manage 9.4." Writers and speakers may know what's really going on, at least with half of their brains, but readers and listeners are fooled into thinking that they understand these generic statements, even though in the absence of information about the comparison of distributions rather than the comparison of average values, they're left completely unable to put that understanding to any valid use.

This situation ought to be just as puzzling, at least to members of a more advanced civilization, as the Pirahã's ignorance of numbers is to us.
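Liberman's distinction between comparing averages and comparing distributions can be made concrete with a quick sketch. The group means (8.5 and 9.4 items) come from the article quoted above; the within-group standard deviation is an assumed, illustrative figure, since the post doesn't report one. With any plausible spread, the two distributions overlap heavily:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mu_poor, mu_mid = 8.5, 9.4  # group means reported in the article
sigma = 2.0                 # ASSUMED within-group standard deviation

# Probability that a randomly chosen child from the poverty group holds
# MORE items in memory than a randomly chosen middle-class child.
# The difference X - Y is Normal(mu_poor - mu_mid, sigma * sqrt(2)).
p = normal_cdf((mu_poor - mu_mid) / (sigma * math.sqrt(2)))
print(round(p, 3))  # ≈ 0.375
```

So under that assumed spread, the "smaller-memory" group wins the head-to-head comparison more than a third of the time -- which is exactly the information the generic statement "poor children have smaller working memories" throws away.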

This kind of cognilinguistatistical analysis just strikes me as so powerful, and so complementary across all of its various parts, that I wish there were some kind of new program devoted to all of the different highly technical ways we have to make sense of stuff: linguistics, statistics, model-building, cryptography, semiotics, paleography, information sciences. I'd double-major in that and philosophy in a heartbeat -- then start a think-tank devoted to high-end hermeneutics.

Also, I would wear all-black tailored suits and a sharp fedora, and ride around in the Batmobile. It would be so, so sweet.

Posted April 5, 2009 at 3:28 | Comments (0) | Permasnark
File under: Braiiins, Language, Learnin', New Liberal Arts

April 3, 2009

Peter's thoughts: Your troubles are over?... >>

Some Of That Information I Actually Need

Joshua Schachter lists several reasons why shortened URLs (those mini-links provided by TinyURL and its children), despite their convenience in some circumstances, are actually pretty bad. And I agree.

Some of Schachter's reasons are technical, related to DNS servers and the code they're written in, and others are more counterfactual -- like what happens when a company goes out of business, and all of those links go dead?

But eventually, under the rubric of "usability issues," he gets around to the big one for me: "The clicker can't even tell by hovering where a link will take them, which is bad form."

I don't know about you, but when I'm browsing the web, I hover over links as if each one were a suspect public toilet -- only touching down when I'm sure I know what I'm getting into. I take clicking through VERY seriously. Hovering over a link to get a peek at the URL may not always be perfect information, but to me, it's essential. TinyURLs don't let you do that. You're going to the middle of nowhere. This bothers me, every time.

In response to Schachter, Jason lists what he'd like to change about the way Twitter uses shortened URLs:

With respect to Twitter, I would like to see two things happen:

1) That they automatically unshorten all URLs except when the 140 character limit is necessary in SMS messages.

2) In cases where shortening is necessary, Twitter should automatically use a shortener of their own.

That way, users know what they're getting and as long as Twitter is around, those links stay alive.

Very reasonable ideas, all of these. In general, it seems like Twitter's going to have to create its own rhetoric of linking as powerful as the "@username" designation for links to Twitter users. Maybe an "%sitename" HTML tag in lieu of a shortened URL? Not sure.

Posted April 3, 2009 at 3:13 | Comments (5) | Permasnark
File under: Language, Technosnark

March 30, 2009

Grade Distortion

Tim says,

Tim Harford at the Financial Times finds le mot juste -- not grade inflation, but grade distortion:

Grade distortion is a serious affair. Students and their teachers are forced to switch to grey market transactions denominated in alternative currencies: the letter of recommendation, for example. Like most alternative currencies, these are a hassle.

Grade distortions, like price distortions, destroy information and oblige people to look in strange places for some signal amid the noise. Students are judged not on their strongest subjects – A grade, of course – but on whether they also picked up A grades in their weakest. When excellence cannot be displayed, plaudits go instead to those who deliver pat answers without stumbling – politicians in training, presumably.

Via Lone Gunman.

Comments (1) | Permasnark | Posted: 3:24 PM

March 29, 2009

Tim's thoughts: No, Robin -- I think you're right! It's true! Most people don't know how compensation works, and ... >>

The Bonus Armies

Brian Tierney at the PRSA International Conference (image by hyku via Flickr)

Hilzoy figures out why folks are so p-oed about executive bonuses. It's not totally about the douchebags who ran AIG into the ground (even if they were hard-working, profitable, probably actually fairly competent douchebags). It's about the douchebags who ran the Philadelphia Inquirer and Philadelphia Daily News into the ground (and hundreds of other major businesses like them).

Philadelphia Media Holdings CEO Brian Tierney and two of his underlings all got raises and bonuses just before the company declared bankruptcy and just after the papers' unions voted to give back raises to help keep the company solvent. They still laid off hundreds of people and even stiffed the government by failing to turn over the payroll taxes, insurance premiums, and union dues they collected from their employees.

On top of that, Tierney went batshit crazy:

According to Newspaper Guild representative Bill Ross, Tierney once shook up a management meeting by barking "I will not lose my f*cking house over this!" And Ross says a couple of people emerged from a private meeting with the CEO claiming that he'd spoken to them, in his 12th-floor office, with a baseball bat in his hands. Ross also adds that in January, Tierney took to patrolling the parking garage, watching to see what time employees were arriving to work and asking managers about those who were late. "That’s what I'm getting calls about now," says Ross. "He’s walking around the parking garage. If he gets hit by a car, it'll be his own fault."

You know, I live in this town, and I never heard any of these stories until now. Obviously, my two local newspapers didn't report them.

I think Hilzoy has a real point, though; outrage over lucrative bonuses paid to executives of companies in trouble IS in part a transference of anger coming from other places. But it's not just general anger about the economy or plebeian ressentiment. It's anger about this, about looting the store while pleading empty pockets.

Posted March 29, 2009 at 2:54 | Comments (4) | Permasnark
File under: Cities, Language, Snarkpolitik

March 26, 2009

English Has A New Preposition

Tim says,

Guess what it is!

Comments (2) | Permasnark | Posted: 5:20 AM

March 24, 2009

Hidden Heroes of the Cold War's End

Historian of Europe Karl Schlögel on the molecular movements of history:

The grand moments with which history usually preoccupies itself are inconceivable without the molecular events that make them possible. And the Europeans who make a career out of standing and speaking for Europe are nothing at all without the unknown Europeans whose stories are never told. We all know the stages of Brussels, Strasbourg, Paris or Maastricht, upon which "Europe's representatives" play their parts. It's not enough that we're kept up to date on all their entrances and declarations. It's always the same names, the same faces, the same gestures. In 1949, a group of townspeople from Aachen, Europeans of the first hour, created the Charlemagne Prize for "persons who have advanced the ideas of European understanding in political, economic, and spiritual relations."

In the list of those honoured since 1950, one more or less finds all the great Europeans, from Count Coudenhove-Kalergi to Vaclav Havel, from Jean Monnet to the Euro. One can extrapolate this line and list easily and without a great deal of imagination. But one could also award the prize to people who were indispensable to the Europe that has evolved since 1989. There are more than a few claimants for these honours: the transportation ministers and the engineers who built the bridges, streets, and rails that paved the way to a new Europe and brought Europeans closer to one another. The shippers and logistics experts who have made careers out of shortening distances and creating a sense of proximity should also be eligible. Nor should one leave out the transportation companies and founders of discount airlines who have radically altered the map of Europe in our heads. Now, we not only know where Palermo is, but also Tallinn; not only Lisbon, but also Riga and Odessa. They have established lines of transit between the Rhein-Main area and Galicia, between Warsaw and the English Midlands, between Lviv and Naples. The discount airlines have made Berlin a neighbour of Moscow and contributed to an increase in cosmopolitanism. Krakow now has a connection to Dublin.

Entire economies can no longer function without this flow of traffic. The renovation of apartments, the care for pensioners and for the infirm in cities - even those located far from the border - now lie in the hands of personnel crossing over our borders. The Aachen Prize Committee could easily get an idea of the eligibility of candidates by looking at their timetables, price lists, and bookkeeping methods. They would determine that there's not a place in Europe that can't be looked up. Every act of research would become a joyous virtual journey to the New Europe.

One of the arguments that Schlögel makes is that the fall of the Berlin Wall mattered less than the mid-1980s institution of an express train line between Moscow and West Berlin, connecting the Communist states to the allied "island" in West Berlin, enabling all sorts of traffic of black-market goods, ideas, and people across what had seemed like impermeable borders. "To this day, there is no memorial for the anonymous black marketeers of Patrice Lumumba University at the Zoological Garden railway station. Instead, a freedom memorial is being planned for the exact spot where absolutely nothing happened."

I really like this idea that a city is not only a place, or a set of people, but also a mental/kinetic map of all the places, people, and things connected to that place -- a perpetually unexhausted, evolving set of possibilities.

Posted March 24, 2009 at 11:09 | Comments (0) | Permasnark
File under: Cities, Language, Snarkpolitik
Tim's thoughts: I ripped off the title from <a href=" >>

The Letter Kills, But the Phoneme Gives Life

We've got language on the brain lately here at Snarkmarket, so Ron Silliman's link to a talk abstract by linguist Bob Port at Berkeley caught my eye.

Most of it's written in linguistese, but the main idea is that when we're talking, we're not manipulating a storehouse of meaningful sounds that we're carrying around in our heads, but kicking around each other's speech in a way that approximates but can't be reduced to these fixed categories. But we think that that's what we're doing, because when we learn how to read (matching symbols to sounds), that is kind of what we're doing, even if it isn't when we speak.

Here's the kicker. To explain/summarize this idea, Port writes: "All alphabets are a recent technology for low-bitrate representation of language."

Let me explain why I like this.

Language is one of our oldest technologies, and probably the most important. It's inevitable that we use other technologies to try to understand how it works. One of our other really old, really important technologies is writing, which is, in its own way, an heroic and powerful attempt to understand and functionalize how language works.

But writing is too powerful; not only does it change the way that the whole field of language works, it "restructures thought," as Father Ong would say, not least by making the whole field of language look a little more like writing.

Alphabetic writing isn't the only communication technology that affects how we see language; clay tablets, books and scrolls, dictionaries, the telegraph, file cabinets, and computer programming all give us different metaphors for thinking about how signs and communication work. But we've got a richer set of storage and communication technologies than ever before, which means we have a broader set of metaphors. We've got more metaphorical memory and processing power, kids!

Which means that we don't have to think of an alphabet as a permanent stone etching, an engraving on the heart, of what a linguistic sound looks like. We can think about it as a low-res copy, a functional representation, that flows in and out of our memory, gets remixed and mashed up and commented on and tagged by friends -- an evolving document.

I think it's a mistake to spend too much time dwelling on whether our current technology just introduces new distortions, because it inevitably does. It's just that asking language (which is what we're talking about) to give you something else is to ask language (even written language) to do something it does not really do. And that itself is three-quarters of the insight.
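Port's "low-bitrate" framing can even be made concrete with a back-of-the-envelope comparison. The speaking rate and audio format below are assumed, illustrative figures (nothing from Port's abstract), but the gap they expose doesn't depend on the exact numbers:

```python
import math

# Crude upper bound on alphabetic information: 26 letters plus a space,
# treated as equally likely, gives log2(27) bits per character.
bits_per_char = math.log2(27)

chars_per_sec = 15  # ASSUMED rough transcription rate for ordinary speech
text_bitrate = bits_per_char * chars_per_sec  # on the order of 70 bits/second

# Telephone-quality audio of the same speech: 8000 samples/sec at 8 bits each.
audio_bitrate = 8000 * 8  # 64,000 bits/second

print(round(text_bitrate), audio_bitrate)
```

The transcript costs roughly a thousandth of the audio, because it throws away pitch, timing, and voice quality and keeps only the categorical skeleton -- the sense in which an alphabet is a lossy, low-bitrate codec for speech.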

Posted March 24, 2009 at 6:03 | Comments (3) | Permasnark
File under: Language

March 18, 2009

Tim's thoughts: Is this boring? Could I punch this up a little or a lot more? Because I think this story ... >>

Blood and Treasure: Genealogy and Contexts

About three years ago, Robin noticed a strange phrase making the rounds in political talk about the costs of the Iraq war: "blood and treasure." I'd noticed it too, and when he posted about it, I started to do some digging into its origins. I thought that it would be a nice tidy little search, make for a fun thread and discussion, and we'd figure out that it came from Washington, maybe, or Lincoln, or Clausewitz.

As it turned out, I spent the better part of a year trying to find out where "blood and treasure" came from. I exhausted databases. I learned languages. I asked everyone I knew about it. I gave lectures on it. I contemplated scrapping my planned dissertation to write about it instead, and when that seemed like a bad idea, I contemplated leaving graduate school to write a book about it instead.

"Blood and treasure" is in its own way the key to all mythologies. Tracing the phrase traces the history of human thought about violence, whether in politics, history, religion, philosophy, or literature. I wanted to share here a fraction of what I have found so far.

... Read more ....

March 12, 2009

The Chinese Written Character as a Medium for Typing

Tim says,

James Fallows on technology, tradition, and the simplification of Chinese written characters:

Increasingly, Chinese people don't actually have to write (rite? right?) out these characters by hand. More and more, they key them in with mobile phones or at computers. And when they do that, it's just as easy to 'write' a traditional-style, complex, information-dense character as a streamlined new one. (Reason: you key in clues about the character, either its pronunciation or its root form, and then click to choose the one you want.) So -- according to current arguments -- the technology of computers and mobile phones could actually revive an important, quasi-antique style of writing.
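The selection mechanism Fallows describes -- key in a clue about the character, then click to choose -- can be sketched as a toy lookup table. The pinyin-to-character lists here are illustrative stand-ins, not a real input-method dictionary:

```python
# Toy sketch of a pinyin input method: the user types a pronunciation
# and picks from the candidate characters it maps to. Typing "ma" costs
# the same two keystrokes whether the chosen character is simple or
# stroke-dense -- which is Fallows's point about complexity being free.
CANDIDATES = {
    "ma": ["妈", "马", "吗", "骂"],
    "shu": ["书", "树", "数", "输"],
}

def lookup(pinyin):
    """Return the candidate characters for a typed pronunciation."""
    return CANDIDATES.get(pinyin, [])

print(lookup("ma"))  # the user clicks to choose, e.g. 马 ("horse")
```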

Hmm -- Fallows is definitely one-up on me, since he reads Chinese and I don't, but I wonder whether other considerations (e.g. screen size and the corresponding size of characters) might still exert pressure toward some kind of simplification of the character form. A lot of that information-density just turns into noise if it has to be packed into a tiny space.

Alternatively, kids (it's always kids, at first) might start using "abbreviations" that minimize the number of keystrokes required to type useful phrases -- maybe by not choosing the precisely "correct" character but an approximation of it (the root or a related pronunciation or whatever), like our "lol," "brb," "btw," etc.

In short, technology rarely has a purely stabilizing effect on tradition -- it might help block a particular chirographic attempt at reform/revolution, but only to displace it in favor of its own matrix. (And yes, I just quoted Spock from The Wrath of Khan.)

Comments (0) | Permasnark | Posted: 5:07 AM

March 9, 2009


Tim says,

Don't get dizzy now: Jason Kottke picks up on a word I kind of made up in response to one of his posts and runs with it:

Retronovation n. The conscious process of mining the past to produce methods, ideas, or products which seem novel to the modern mind. Some recent examples include Pepsi Throwback's use of real sugar, Pepsi Natural's glass bottle, and General Mills' introduction of old packaging for some of their cereals. In general, the local & natural food and farming thing that's big right now is all about retronovation...time tested methods that have been reintroduced to make food that is closer to what people used to eat. (I'm sure there are non-food examples as well, but I can't think of any.)

No sooner does Jason oh-so-gently throw down the gauntlet than Waxy, who almost certainly meant nothing of the kind, answers the question by linking to an amazing post about a transcript of a story conference between George Lucas, Steven Spielberg, and Lawrence Kasdan about Raiders of the Lost Ark:

(Key: G = George; S = Steven; L = Larry)

G — The thing with this is, we want to make a very believable character. We want him to be extremely good at what he does, as is the Clint Eastwood character or the James Bond character. James Bond and the man with no name were very good at what they did. They were very fast with a gun. They were very slick, they were very professional. They were Supermen.

S — Like Mifune.

G — Yes, like Mifune. He's a real professional. He's really good. And that is the key to the whole thing. That's something you don't see that much anymore.

Mining 1930s throwaway serials and 60s genre films to create the blueprint for 1980s blockbusters = retronovation, definitely.

But while we're on the subject, let me say a little about the word itself. I write a lot of things super-fast. But I toiled over this word. "Retrovation"? I asked. "Retrinnovation"? It was Mayostard/Mustardayonnaise all over again. "Retronovation" is the clear winner, not only because it sounds better, but because it's etymologically correct: retro + nova => "backwards new." (Or, "return to begin.") Also, hats off to Jason for omitting the hyphen (i.e. "retro-novation"). Fie on the hyphen! The hyphen is only there to draw attention. In fact, I've retronovatively changed the word in my original post to scrap the hyphen I put there. Vive retronovation! Old is the new now!

Comments (1) | Permasnark | Posted: 6:25 PM
spacer image
spacer image