

July 24, 2009


Towards A Theory of Secondary Literacy

There’s a great scene in Star Trek IV - yes, the one where the crew travels back in time to save whales - where Scotty, the engineer, tries to control a Macintosh by talking to it. When McCoy hands him the mouse, he speaks into it, in a sweetly coaxing voice: “Hello, computer!” When he’s told to use the keyboard (“How quaint!”), he irritably cracks his knuckles — and hunts-and-pecks at Warp 1 to pull up the specs for “transparent aluminum.”

As recently as 2000, it seemed inevitable that any minute now, we were going to be able to turn in our quaint keyboards and start controlling computers with our voices. Our computers were going to become just like our telephones, or even better, like our secretaries. But while speech recognition and voice commands have gotten a lot better, the trend has generally run in the other direction - instead of talking to our computers, we’re typing on our phones.

(Which is arguably the hidden message of Scotty and the Mac - even somebody with the most powerful voice-controlled computer in the galaxy can touch-type like a champ. He probably only talks to the computer so his hands are free to text his friends while he’s engineering! “brb - needed on away team” — “anyone know how to recrystallize dilithium” — That’s why he’s so inventive! He’s crowdsourcing!)

The return to speech, in all of its immediacy, after centuries of the technological dominance of writing, seemed inevitable. The phonograph, film, radio, and television all seemed to point towards a future dominated by communications technology where writing and reading played an increasingly diminished role. I think the most important development, though, was probably the telephone. Ordinary speech, conversation, in real-time, where space itself appeared to vanish. It created a paradigm not just for media theorists and imaginative futurists but for ordinary people to imagine tomorrow.

This was Marshall McLuhan’s “global village” - a world of media and politics in which the limitations of speech across place and time were virtually eliminated. Walter Ong called it “secondary orality” - we were seeing a return to a culture dominated by oral communication that wasn’t QUITE like the primary orality of nonliterate cultures - it was mediated by writing, by print, and by the technologies and media of the new orality themselves.

Towards the end of his life, in the mid-1990s, Ong gave an interview where he tried to explain how he thought his theory of secondary orality was being misapplied to electronic communications:

“When I first used the term ‘secondary orality,’ I was thinking of the kind of orality you get on radio and television, where oral performance produces effects somewhat like those of ‘primary orality,’ the orality using the unprocessed human voice, particularly in addressing groups, but where the creation of orality is of a new sort. Orality here is produced by technology. Radio and television are ‘secondary’ in the sense that they are technologically powered, demanding the use of writing and other technologies in designing and manufacturing the machines which reproduce voice. They are thus unlike primary orality, which uses no tools or technology at all. Radio and television provide technologized orality. This is what I originally referred to by the term ‘secondary orality.’

I have also heard the term ‘secondary orality’ lately applied by some to other sorts of electronic verbalization which are really not oral at all—to the Internet and similar computerized creations for text. There is a reason for this usage of the term. In nontechnologized oral interchange, as we have noted earlier, there is no perceptible interval between the utterance of the speaker and the hearer’s reception of what is uttered. Oral communication is all immediate, in the present. Writing, chirographic or typed, on the other hand, comes out of the past. Even if you write a memo to yourself, when you refer to it, it’s a memo which you wrote a few minutes ago, or maybe two weeks ago. But on a computer network, the recipient can receive what is communicated with no such interval. Although it is not exactly the same as oral communication, the network message from one person to another or others is very rapid and can in effect be in the present. Computerized communication can thus suggest the immediate experience of direct sound. I believe that is why computerized verbalization has been assimilated to secondary ‘orality,’ even when it comes not in oral-aural format but through the eye, and thus is not directly oral at all. Here textualized verbal exchange registers psychologically as having the temporal immediacy of oral exchange. To handle [page break] such technologizing of the textualized word, I have tried occasionally to introduce the term ‘secondary literacy.’ We are not considering here the production of sounded words on the computer, which of course are even more readily assimilated to ‘secondary orality’” (80-81).

This is where most of the futurists got it wrong - the impact of radio, television, and the telephone wasn’t going to fall solely or even primarily on more and more speech but, for technical or cultural or who-knows-exactly-what reasons, on writing! We didn’t give up writing - we put it in our pockets, took it outside, blended it with sound, pictures, and video, and sent it over radio waves so we could “talk” to our friends in real-time. And we used those same radio waves to download books and newspapers and everything else to our screens so we would have something to talk about.

This is the thing about literacy today that needs, above all, not to be misunderstood. The people who say that reading and writing have declined and the people who say they are stronger than ever are both right, and both wrong. It’s not a return to the word, unchanged. It’s a literacy transformed by the existence of the electronic media it initially had nothing in common with. It’s also transformed by all the older textual forms - mail, the newspaper, the book, the bulletin board, etc. It’s not purely one thing or another.

This reminds me of one of my favorite Jacques Derrida quotes, from his essay “The Book to Come”:

What we are dealing with are never replacements that put an end to what they replace but rather, if I might use this word today, restructurations in which the oldest form survives, and even survives endlessly, coexisting with the new form and even coming to terms with the new economy — which is also a calculation in terms of the market as well as in terms of storage, capital, and reserves.

I doubt that “secondary literacy” will catch on, because it sounds like something that middle school English teachers do. But that’s too bad - because it’s actually a pretty good term to describe the world we live in.

Tim
Posted July 24, 2009 at 5:06 | Comments (2) | Permasnark
File under: Books, Writing & Such, Language, Media Galaxy, Object Culture, Technosnark

Comments

Here's a factor I've rarely seen discussed in conversations about "post-textual literacy" and the like: I can read about 10 times faster than I can listen. Almost everybody can read three or four times faster.

Doesn't this suggest an efficiency in information transfer that will dominate for a long time?

Moreover, reading and typing are both relatively inconspicuous. You can do them easily in public, and while distracted.

The need to be able to get information while distracted, to perceive signal through noise, has also meant that real-time interaction is overrated. This was already indicated by the answering machine, the VCR, and TiVo. We want our interactions to be swift and responsive, but asynchronous. We don't want to wait. Real-time is giving way to my-time.

There's also something about the way the audio channel, through TV, radio, etc., has become dominated by entertainment. We just don't get actionable information that way anymore.

Because it demands attention (McLuhan would say it's a "hot medium"), listening to sound is *frustrating*. It's frustrating to navigate a sound menu over the telephone, or to listen to ALL of your messages all of the way through to find the one you need. Google Voice has generated huge buzz basically through its promise to turn speech into text - visual voicemail, transcripts of calls.

But talking isn't frustrating. Nobody gets excited about listening to their computer. People get excited about talking to their computer, and having their computer understand them. But for the most part, this hasn't materialized. My Blackberry's got voice search capacity for Google - I never use it. I have expensive speech recognition software on my laptop - I *rarely* use it. There's something cultural and technical holding us back here - I don't think speech entry is clearly superior to typing, but we're all typists now.
