
Welcome! You're looking at an archived Snarkmarket entry. We've got a fresh look—and more new ideas every day—on the front page.

August 14, 2009


The Future of Analphabetic Writing

A link, and then a long digression (or several).

Andrew Robinson at the Oxford University Press blog writes about attempts at universal languages:

In the mid-1970s, with increasing international travel, the American Institute of Graphic Arts cooperated with the United States Department of Transportation to design a set of symbols for airports and other travel facilities that would be clear both to travellers in a hurry and those without a command of English. They invented 34 iconic symbols. The design committee made a significant observation: “We are convinced that the effectiveness of symbols is strictly limited. They are most effective when they represent a service or concession that can be represented by an object, such as a bus or bar glass. They are much less effective when used to represent a process or activity, such as Ticket Purchase…”…

Many scholars of writing today have an increasing respect for the intelligence behind ancient scripts. Down with the monolithic ‘triumph of the alphabet’, they say, and up with Chinese characters, Egyptian hieroglyphs, and Mayan glyphs, with their hybrid mixtures of pictographic, logographic and phonetic signs. Their conviction has in turn nurtured a new awareness of writing systems as being enmeshed within societies, rather than viewing them somewhat aridly as different kinds of technical solution to the problem of efficient visual representation of a particular language.

It’s weird how the alphabet, as a sort of half-technology, lies in between the fully functional/universal/superficial pictographic language and the deep cultural contextualism of ideogrammic writing. It’s a hybrid, a language of traders bumping against poets, where letters that used to name things (aleph = ox, bet = house in Phoenician) morph into pure sound (alpha, beta = meaningless in Greek).


When I was a kid, I was fascinated by Morse code. My brothers and I had a set of walkie-talkies that included a code on the handsets with the dots and dashes for each letter of the alphabet, and we tried to beep and boop out messages to each other, never getting much farther than “S.O.S.” For me, it was the beginning of the digital dream - reducing information to a single variation between two elements.
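That two-element reduction is easy to see in a few lines of code. Here’s a minimal Python toy (the table and the `encode` helper are my own illustration, not anything from the post):

```python
# Every letter reduced to a sequence of just two elements: dot and dash.
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text):
    """Turn a message into dots and dashes, one space between letters."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode("SOS"))  # prints: ... --- ...
```

The whole alphabet collapses into timing: short, long, and the silences between them.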

But my ear was off. I couldn’t turn long and short sounds into letters in my brain. Later, I did a science report on the telegraph, and was dumbfounded to learn that skilled telegraph operators COULD translate this text on the fly by ear - that it was faster for them than reading printouts of long and short lines (only useful, really, for receiving messages without an operator at the terminal).

Then I saw The Hunt for Red October, watching Sean Connery and Scott Glenn trade messages back and forth optically, reading flashes of light through periscopes. That was my first real inkling that Morse code could be read in real time, like watching a stock ticker that only flashed one letter - less than one letter - at any given moment.

To this day, I still don’t know what to make of Morse code. It’s a digital code that’s based on the alphabet, but seems to go way beyond the alphabet. And what are you doing when you’re interpreting Morse code on the fly, whether by eye or ear? Are you reading? Speaking?


Sign language poses some of the same problems. Some signs are what we might call iconic or pictographic - they look like or have some connection to the things they refer to. But a lot of them, in American Sign Language at least, depend on writing or spelling out words, sometimes just the first letters of words.

The first principle of writing seems to be that it is language made visual and visible; the first principle of speech, that it’s aural and oral. But there are visual forms of language that don’t bear much resemblance to writing, and auditory communications (like listening to Morse code) that are essentially dependent on writing.


The nineteenth century was all about reducing the quality of information - the richness of its readability - for quantitative transmission. In addition to the telegraph, there’s also shorthand, well documented by Leah Price in this essay in the LRB. It’s still pretty amazing that people actually read whole novels in shorthand:

Pen pals in Africa and Australia found one another through the classified pages of shorthand magazines that juxtaposed new material with reprints of published fiction: Robinson Crusoe, Around the World in Eighty Days, all the Sherlock Holmes stories and even an unabridged run of the Strand Magazine. The depositories of copyright libraries are littered with Victorian shorthand editions of A Christmas Carol, Aesop’s fables, English-Welsh and English-Hindi dictionaries, the Old and New Testaments, and biographies of Calvin and Galileo. Pitman’s Shorthand Weekly (later called the Phonetic Journal) featured ‘serials and short stories by well-known authors; miscellaneous articles; illustrated jokes and anecdotes; and prize competitions’. On 17 August 1901, it offered a prize for the best biography of Isaac Pitman by a colonial subscriber. Submissions, naturally, were accepted only in shorthand.

More important still might be the turn Price traces (which is the turn EVERYONE finds in the history of office and business culture in this period) from the not-quite-but-nearly-aristocratic culture of “men of letters” to the technical world, where women operated the machines of language more often than men:

You can still read every syllable from the first International Shorthand Congress and Jubilee of Phonography, thanks to transcripts produced by ‘an army of phonographers . . . not at all concerned with the economic rewards of shorthand, important as these are, but only with the service – personal, social – even professional – which one Pitmanite can render another in any part of the world.’ One delegate described shorthand as a ‘bond of brotherhood’. Like the open-source movement a century and a half later, Pitmanism was idealistic, distributed and male.

And then everything changed. The American Civil War and, later, the First World War removed men from the workforce; the commercialisation of the typewriter and the invention of the phonograph upped the demand for white-collar labour. Women’s delicate hands began to look like the right tools for turning speech into shorthand, or manuscript into typescript, or one copy into many. By 1901, the shorthand transcript of a Midlands stenographers’ club records a speaker arguing that ‘it seemed degrading for a strong, healthy man to be occupied all day long in using the pen upon what was little more than copying words.’ Advertisements for ‘wrist exercisers’ seemed to hint that a man who hunched over a desk all day would not stay strong and healthy for long.

As stenography fell into the hands of girls and hypochondriacs, its ethos changed from identitarian to utilitarian, from voluntaristic to vocational. By 1901, the Phonetic Journal was complaining that ‘the great majority of young girls study simply for the proficiency which will enable them to enter business.’ Isaac Pitman outlived the ‘brotherhood of the pen’. The metaphor was unlucky: while he continued to tinker with the system his brother Benn realised that ordinary users were tired of endless refinements, and froze the US version of the system at its 1852 release. By the time of Isaac’s death, there was a new threat from Gregg’s 1888 system, which cornered the American market by billing itself as user-friendly, and more specifically as a friend to the ladies. Gregg was to Pitman as Windows is to Linux, or Pilates to yoga: a technique stripped of the ideological baggage that had originally impelled its spread.

Shorthand on its face is an intermediate recording technology between (spoken) voice and (alphabetic) text; but any language can take on a life of its own. In fact, it’s hard to know where speaking ends and writing begins.


This gets confusing in contemporary software, too. I always get Google Talk confused with Google Voice, and not just because everyone I know calls Google Talk by the old name of Google Chat, or “gchat.” What am I going to do, “voice” someone? There’s also the difference between “voice recognition” and “speech recognition.”

A friend of mine pointed out that when we’re speaking, we almost always use the word “talk” to refer to speech; it’s only when we’re writing that we call it speech. Maybe the important distinction isn’t whether language is auditory or visual, but whether it’s recorded or ephemeral. Your voice, speech, mail is a record; your “talking” isn’t, even if you keep a transcript.


Talking happens in real time, and to talk, you need a voice, even if it’s not produced in the throat. Roger Ebert recently discussed his search for a way to communicate in real time to friends, family, and business partners:

Soon after my second surgery, when it became apparent I wouldn’t be able to speak, I of course started writing notes. This got the message across, but was too time-consuming for communications of any length. And notes were unbearably frustrating for a facile speaker like me, accustomed to dancing with the flow of the conversation. There is a point when a zinger is perfectly timed, and a point when it is pointless.

There is a ground rule in the treatment of those who cannot speak; their written notes must take precedence. This was not happening. Something would be said, I would begin writing a comment, and someone else would speak. Then someone else would speak. I would finish my note, and hand it to a person who was speaking. They would hold it, finish, and be responded to by someone else. When my note was finally read, I would hear, What’s this about? Or I don’t know what that means. I would point to right (the past), to suggest I was responding to something said earlier. They wouldn’t know what that meant, either.

God knows my wife tried to help out, but people…are people. Who knows how patient I would be? One on one, conversations-by-note went all right. Business meetings were a torture. I am a quick and I daresay witty speaker. Now I came across as the village idiot. I sensed confusion, impatience and condescension. I ended up having conversations with myself, just sitting there.


Some of the most moving writings I’ve ever read are the “conversation slips” Franz Kafka wrote at the end of his life, when he was dying of tuberculosis and could no longer eat, drink, or speak. One recurring theme: he continually asks those around him to water the flowers in the room, often while also remarking self-deprecatingly on his own inability to drink:

That cannot be, that a dying man drinks.

Do you have a moment? Then lightly spray the peonies.

Mineral water - once for fun I could

Fear again and again.

A bird was in the room.

Put your hand on my forehead for a moment to give me strength.

Ebert finally opted for the canned OS X voice on his laptop — that solved the near-real-time speech problem — but he’s still searching for a solution that will give him back the full range of his instrument, in all of its analphabetic tonalities — and that’s what a voice is, ultimately, an instrument to play, even if it’s played with the alphabetic keys of the keyboard.


At the other end of the spectrum, let’s consider language with no voice at all — bar codes. Matthew Battles has a couple of good posts on bar codes at “The Urge of the Letter” (one, two):

[T]he barcode is a printed thing, meant for “reading” not by human minds, but by computers. Nonetheless, I’ve often wondered if a time will come when barcodes are legible, when we will read them as easily as any other typeface. In a sense that time has arrived: the iPhone and other mobile operating systems now offer applications that will “read” a photo of a barcode and instantly deliver product information to the user’s device—spectacles for a consumer consciousness, delivering into the magisterium of reading and writing an information transaction until quite recently restricted to machines.
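The “reading” a scanner does is humbler than it looks: mostly arithmetic. As a rough sketch (my own illustration, not anything from Battles’s post), here’s the check-digit rule behind EAN-13, the scheme printed on most retail barcodes:

```python
def ean13_check_digit(first12):
    """Compute the EAN-13 check digit from the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) count once; digits in
    even positions count three times. The check digit brings the
    total up to a multiple of ten.
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

print(ean13_check_digit("400638133393"))  # prints: 1
```

A machine “reads” the stripes, does this sum, and rejects the scan if the last digit doesn’t match; no meaning, just parity.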


An aside for a short prediction: in ten years, Kindles (and other handheld readers) will come with a stylus, not to write, but to scan barcodes as well as alphabetic text, and display data (or metadata) on-screen. Think about it! Your “reading machine” will actually be able to read things, not just show you text!

Posted August 14, 2009 at 9:09 | Comments (5) | Permasnark
File under: Books, Writing & Such, Language


Great post! Wow.

It's amazing how we find ways to 'humanize' and include analphabetic info in all sorts of alphabets. Think of telegraph operators having a specific 'hand' (I think that's what they called it) -- a subtle personal rhythm, eccentric emphases or whatever, that made them instantly recognizable to other operators.

Or think of handwriting -- it's almost like the alphabet component is the carrier wave and the swooshy, swoopy, stroke-y component is the REAL signal. Ha!

So it's a shame when alphabets don't allow this -- the super-mechanical ones, like a synthesized voice or a barcode.

Although, is that really true? Maybe *any* alphabet can be abused, modified, stylized -- if only you're clever enough.

What's a television doing in the physical world without people to read it, right? I'll hold off on the Heidegger, but there is all sorts of good lit theory type stuff for this discussion:

I'm all about the Heidegger, and even the Althusser, but at the moment I confess I'm more interested in cognitive/brain stuff when it comes to interpreting language, simply because I don't know it as well. Like what parts of the brain, what evolutionary adaptations, have we hotwired and put to work understanding written text? Speech, certainly -- but maybe also facial recognition? A certain kind of iconic imagination associated with religion? Our sense of cultural markers that work to patrol group boundaries?

It's funny that you mention rhythm, Robin - one of the things Oliver Sacks says about music is that rhythm is something both deep in the brain - people who suffer musical loss because of a brain injury NEVER lose a sense of rhythm - and deeply human, such that other species don't seem to understand it. So rhythm is, somehow, the first language - neither visual nor strictly auditory but somehow tactile.

Think about soothing a baby to sleep in the dark; the regular rhythm, the universal tonalities, the first music, the original poetry (b/c it's the poetry of origin). That's where humanity begins.

Maybe the important distinction isn’t whether language is auditory or visual, but whether it’s recorded or ephemeral. Your voice, speech, mail is a record; your “talking” isn’t, even if you keep a transcript.

Another way to put this: writing is speech divorced from time. It is removed from the moment and circumstance of its utterance. As writing becomes more tied to a particular moment, more real-time, it becomes more like speech. The chat message or the mute's note make fullest sense only at the moment they are conceived; this is what Ebert finds so frustrating.

Most writing, instead, is relatively timeless; not only can I make sense of it days, weeks, even centuries after its conception, but I can experience it in random-access fashion, jumping from point to point rather than moving linearly from beginning to end. Like the Tralfamadorians in Slaughterhouse-Five, I can see the entire time span of the text at once.

Speech isn't just ephemeral; it is bound to time by its rhythm, its cadence, its rise and fall. I'm fascinated by the evolution of musical notation; one of the earliest systems developed among monks recording the way that they chanted biblical texts in medieval Europe (see: neumes). The system allowed recording of the relative pitch and duration of chanted syllables, and was perhaps more an aid to memory than an exact science. But by recording these aspects of chant, and also of speech, they were made visible: the speed and pitch at which something is said are key components of its meaning, and are almost always lost in translation to writing.

Nowadays our writing technologies have come far. With markup and styling languages we can do more with text, more easily and cheaply, than ever before. Can we now devise new systems to represent the auditory and temporal characteristics of talking in textual form?

Posted by: Matt Penniman on August 14, 2009 at 10:49 PM

As writing becomes more tied to a particular moment, more real-time, it becomes more like speech. The chat message or the mute's note make fullest sense only at the moment they are conceived; this is what Ebert finds so frustrating.

This is pretty much exactly what I (and Walter Ong) mean by secondary literacy -- writing and reading transformed by electronic media. Just as speech is transformed by phonograph, radio, TV, telephone, etc. - becoming a time- and space-shiftable recording.

The other thing that's happening is that "real-time" and "timeless" are giving way to a kind of relative (a)synchrony; the time of the blog post, text message, digital chat, twitter reply, Tivo recording are a lot fuzzier than the time of the conversation or the book. Which is not to say that the time of the newspaper, letter, or telegraph weren't screwing around with this opposition, 'cause they totally were. (Did I just use "time" as a plural dative noun? Yes, I did.)
