This week, I finished reading a wonderful book – God Says No, by James Hannaham. The protagonist, Gary Gray, has this endearingly earnest, not-too-bright, surprisingly perceptive and doomed sense about him that really made me want to root for him throughout. Gary’s an overweight black guy attending a Christian college in Central Florida; he gets his girlfriend pregnant just as he realizes he has to question his sexuality. These two events catalyze a series of fairly significant catastrophes in Gary’s life, and through each one, I wanted Gary to succeed, to attain what he wanted.
Spoiler ahead.
Last week, a bunch of digital humanists got together at the Center for History and New Media to make a new tech tool that the broader academic AND nonacademic communities could use for their work. The catch: they had to conceive, design, and ship the thing in JUST one week. And as anyone who’s spent any time hanging out with university people knows, the pace there is usually pretty glacial compared to the commercial world, even/especially for tech.
They called it “One Week | One Tool,” with a great subhead in the deck: “A Digital Humanities Barn-Raising.” You can see the sweet team that put it together here. And CHNM has a great track record with these kinds of projects: Zotero and Omeka alone are free, open-source, world-class products that have made life in the research and curatorial fields way easier.
Now about half these people working on One Week | One Tool are in my Twitter feed, and they spend a lot of time talking to each other, so as this was unfolding, I read about this event non-stop. People wrote blog posts about it. Folks (especially those of us who were on the periphery) made puns and cracked jokes. It was an ongoing communal broadcast that you could follow on the #oneweek hashtag if you wanted the full dish. It was very, very similar to the excitement around the 48HrMag (now Longshot Magazine) project when it was first announced, albeit within a slightly smaller, maybe more homogeneous community.
But, importantly, with all that information circulating, nobody said anything about what the tool actually was. There were even enigmatic teaser tweets, like “Just used the new #oneweek tool for the first time; works great!” It wasn’t LeBron-taking-his-talents-to-South-Beach suspense, but at a certain point, more and more people were waiting to find out what the heck the thing was. They even launched a video stream to make the announcement. I don’t know if they tried to get ESPN to donate some time and sell commercials for charity, but who knows?
And sure enough, it became big news. Everybody who’d been following it live-tweeted the news once they’d gotten it. (Some people even begged their Tweeple to post it, since they couldn’t watch the video broadcast.) It got written up in ReadWriteWeb, the Chronicle of Higher Education, and the Atlantic, among other big-for-DH venues.
And they put together a great open-source tool: Anthologize, a WordPress plugin that helps you take online content like blog posts and collect, edit, design, and format them into a book — for either digital or print. Solid software, with obvious utility for lots of people, not just academics. (Although part of me quietly wonders if the CHNM’s last big project, “Hacking the Academy,” motivated the choice, since that explicitly was an effort to turn a whole bunch of scattered blog posts — again, all written and/or curated in one week — into a book.)
Now this is the part on Snarkmarket where, usually, I would try to explain what I think all of this means — for you, for us, for media, for journalism, for education, for the children. And this time, I’m deliriously happy, because I think we’ve already done it. I can just take two posts+comment threads from the Snarkmarket archive and blockquote the hell out of them. (And as everyone knows, me and blockquotes are totally BFFs.)
Here are some highlights from Robin’s still-uber-potent “The future of media? Bet on events“:
So far we’ve got this TED/Phoot Camp media-making workshop spear-gun. Now, bolt on deadly additions from Iron Chef and the Long Now Foundation’s debates. Now we’ve got a laser sword, a media product that is:

* Live. It’s an event that happens at a specific time and place in the real world. It’s something you can buy a ticket for—or follow on Twitter.
* Generative. Something new gets created. The event doesn’t have to produce a series of luminous photo essays; the point is simply that contributors aren’t operating in playback mode. They’re thinking on their feet, collaborating on their feet, creating on their feet. There’s risk involved! And that’s one of the most compelling reasons to follow along.
* Publishable. The result of all that generation ought, ideally, to be something you can publish on the web, something that people can happily discover two weeks or two years after the event is over.
* Performative. The event has an audience—either live or online, and ideally both. The event’s structure and products are carefully considered and well-crafted. I love the BarCamp model; this is not a BarCamp.
* Serial. It doesn’t just happen once, and it doesn’t just happen once a year. Ideally it happens… what? Once a month? It’s a pattern: you focus sharply on the event, but then the media that you produce flares out onto the web to grow your audience and pull them in—to focus on the next event. Focus, flare.
I wrote this in the comments:
I like positioning the generative-web-event as being somewhere between a seminar, a TV show, and a magazine.
Like a seminar, or workshop: it’s brainy, and collaborative, aimed at creating knowledge, not just reciting it;
Like a TV show: it’s live! It’s happening now! Or, rather — it was happening then. We’re going to show you something that’s going to gain and capture your attention;
Like a magazine: you’re not capturing a random viewer, who is just trying to tune in to whatever catches their attention at that moment. You’re connecting with subscribers, and trying to gain and hold their attention. Too much of the web, of social media, is like flicking through the channels, with too much of the bad aspects of that and not enough of the good.
And Shamptonian asks:
Regardless of the tools, methods and processes involved, I keep wrestling with the existential question of “what is the ultimate purpose of this media?”
Are we generating it:
1. For profit?
2. For attention?
3. For education?
4. For helping humanity?
5. For the evolution of civilization?

I have no answers 🙂 I think I’m just growing weary of having to assign purpose to art, and of the increasing belief that the forms of [artistic] media (poetry, literature, painting, photography, video, etc.) are less meaningful, less marketable, less ‘social’, if they do not have a broader intent.
Actually, that whole comment thread is one of my favorites ever: it features a goodly chunk of the all-time Snarkmatrix comment all-stars, and we talk about the awesomeness of the Snarkmarket ampersand, the non-value of farts in windowless rooms, and even spawned what’s still my favorite mass-culture media idea, “Lego Hamlet.” Read it, or read it again.
Now, Robin started out his events post thinking about events for profit, but clearly, as Anthologize proves, you can also get a lot of mileage out of events that look to educate and help humanity AND — maybe most importantly — generate attention. Here’s Robin again:
A specter is haunting the internet, and I think it’s even scarier than the challenge of getting people to pay money. It’s the challenge of getting them to pay attention. I think it’s only going to get worse—which is to say, better, because we as internet users and blog readers and tweet slingers will have more cool, weird, interesting stuff to look at all the time, and it will just keep coming faster and getting cooler and fragments and—ack!
So what kinds of cultural objects historically have gotten people to pay attention? Well, I wrote about this last month:
The way our culture works, depending on what field you’re operating in, certain kinds of objects (or in some cases, events) generate more cultural focus than others. Shirky gives an example from painting: “Anyone can be a painter, but the question is then, ‘Have you ever had a show; have you ever had a solo show?’ People are always looking for these high-cost signals from other people that this is worthwhile.” In music, maybe it used to be an album; in comedy, it might be an hour-long album or TV special; I’m sure you can think of others in different media. It’s a high-cost object that broadcasts its significance. It’s not a thing; it’s a work…
It’s no surprise, then, that the Big Digital Shake-Up in the way cultural objects are produced, consumed, sold, disseminated, re-disseminated, etc. is shifting our concepts of both authorship and the work in many genres and media. What are the new significant objects in the fields that interest you? Pomplamoose makes music videos; Robin wrote a novella, but at least part of that “work” included the blog and community created by it; and Andrew Sullivan somehow manages to be the “author” of both the book The Conservative Soul and the blog The Daily Dish, even when it switches from Time to The Atlantic, even when someone else is guest-writing it. And while it takes writing a book to get on Fresh Air, to really get people on blogs talking about your book, it helps to have a few blog posts, reviews, and interviews about it, so there’s something besides the Amazon page to link to.
I put forward a guess at the end of that post, which is a partial answer to that question. One new kind of media that’s starting to function as a work is a blog. Not, in most cases, a blog post — but a blog. If the New York Times decides, “hey, we’re going to start and host a blog all about parenting” — that blog becomes a Work. It produces ongoing cultural focus, and not just because it’s in the New York Times. Some posts get more attention than others, especially if they cross over into a long-form venue, but writing that blog, sticking with it, being its author, creates focus, readership, and a long accumulation of content. And I’m sure Lisa Belkin (who already wrote a book about parenting) will get another book out of it.
But the other new, emergent work, which might be more radical, is the generative web event. 48HrMag, One Week | One Tool, Robin’s novellas, and maybe even the New Liberal Arts (especially if we put together another edition) are all ancestral species of this new thing — the children of TED and Phoot Camp and Long Now and Iron Chef, and the parents of whatever’s going to come next.
David B Hart on “the metaphysical meaning of baseball“:
I know there are those who will accuse me of exaggeration when I say this, but, until baseball appeared, humans were a sad and benighted lot, lost in the labyrinth of matter, dimly and achingly aware of something incandescently beautiful and unattainable, something infinitely desirable shining up above in the empyrean of the ideas; but, throughout most of the history of the race, no culture was able to produce more than a shadowy sketch of whatever glorious mystery prompted those nameless longings.
Note that this isn’t just a sportswriter losing himself in lofty/weepy rhetoric or a satirist engaging in arch irony. Hart’s a serious writer on religion, and First Things (where this essay appears) is all about phrases like “sub specie aeternitatis.*” So they mean every word.
This essay did remind me, briefly, of Stephen Jay Gould’s confession of faith in his essay on the relationship between science and religion, “Nonoverlapping Magisteria”:
I am not, personally, a believer or a religious man in any sense of institutional commitment or practice. But I have enormous respect for religion, and the subject has always fascinated me, beyond almost all others (with a few exceptions, like evolution, paleontology, and baseball).
Substitute mathematics and poetry for evolution and paleontology and I am right there**. In fact, for me, at least in childhood, Catholicism and baseball are inseparable, beautiful, impractical dreams.
* This is a phrase of Spinoza’s (I don’t know if he invented it, but he used it a lot) that means “from the point of view of eternity.”
** Okay, I really like film and basketball, too.
My life was insane in August 2006. I moved twice and generally tried to piece my life and relationships together after a huge falling-out with my wife’s family, where we ended up moving out of a house they owned. For most of August, I sublet a bug-infested studio apartment without air conditioning or even working windows that had the questionable virtues of being on a bus line and across from a 7-11. I remember cleaning the kitchen, which had at one point harbored rats, top-to-bottom with industrial strength oven cleaner, which filled the house with toxic fumes but ate through the layers of grime and filth that had accrued over the years. Still, I stacked chairs in front of the kitchen and never used it once in the five weeks we were there.
As a consequence, I don’t really remember what the heck was going on at Snarkmarket then; but that’s what archives are for! Here are a handful of posts that caught my fancy while trolling through the stacks:
This last point reminds me of this Economist article on “the unemployment netroots” — basically, young, highly-skilled, politically-active folks who have the incentives and abilities to get organized in a way that the long-term unemployed (for various reasons) have never been able to do.
Highly-engaged older people have long made it a point to be politically active as members of a semi-solid bloc; maybe young people, who’ve been disproportionately hurt by the Great Recession (I wrote a half-joking post about this called “The Coming Age Wars“) could pull it off. After all, in all of human history, the greatest revolutionary force has always been the idle, disaffected young.
My favorite Bertrand Russell book is Introduction to Mathematical Philosophy, not least for the perspective shift he pulls off just in the title. He elaborates on it like this (I’ll paraphrase): This isn’t philosophy of mathematics, where we’ll sit around and ask deep, open-ended, metaphysical questions about whether or not numbers really exist, or if they’re just in our heads. It’s mathematical philosophy, where we’re going to try to think about philosophy (including the philosophy of logic and mathematics) like mathematicians would, using mathematicians’ tools.
Here’s an example of how this works. There’s a famous proof of the existence of God by St Anselm, called the Ontological Argument. Let’s say God is just our idea of the most perfect thing possible. Everything that could be good, God is: he’s all-knowing, all-powerful, all-good. Well, then this most perfect thing possible would have to exist, because something that exists is better than something that doesn’t — so an idea of a God who doesn’t exist wouldn’t really be completely perfect, would it?
Kant had already said that this proof was baloney, because “existence” wasn’t a predicate like goodness or knowledge. But Russell and analytic philosophy took it one step further. In math and formal logic, existence isn’t a predicate — it’s a quantifier. Like in the sentence, “For every natural number, there is a larger natural number.” We’re not making deep existence claims here, just singling out an element in a system.
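Russell’s move can be put in a single line of notation (my gloss of the natural-number example above, not a formula of Russell’s own):

```latex
% "For every natural number, there is a larger natural number,"
% written in the predicate-logic notation Russell helped standardize:
\forall n \in \mathbb{N} \;\; \exists m \in \mathbb{N} \;\; (m > n)
% Existence appears only as the quantifier \exists binding a variable,
% never as a predicate E(x) asserted of an individual thing --
% Kant's objection to Anselm, formalized.
```

Once “exists” is a quantifier rather than a property, the Ontological Argument can’t even be stated, let alone proved.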
So if we can come up with a model that’s foundationally and structurally sound, and works, let’s use it. What looked like an impossible problem wasn’t a problem after all; we’d just gotten twisted up in the way we talked about it.
So philosophy of mathematics => mathematical philosophy. Change of grammar => change of perspective.
You can imagine all kinds of variations on this. For instance:
This gets you three totally different approaches.
Sometimes, we get lots of ambiguity because we can’t pull this reversal off. For instance, “digital history” means both the history of digital technology (usually done using recognizably traditional historical methods) AND using digital tools to do historical research.
What else could we switch around so we could see things differently?
Images of Russia, during World War II and today:
At Geekosystem, Robert Quigley writes:
Russian photographer Sergey Larenkov is a master of a technique called, alternatively, perspective-matching photography or the fancier computational rephotography, which consists of precisely matching the points-of-view of vintage and modern photographs and exploring what happens where they merge. Since last year, Larenkov has been assembling a series of such photos on World War II…
Some Photoshop whizzes have criticized Larenkov’s work on the grounds that the mergers are too jarring in their contrasts and could be executed with greater smoothness on his part, but, in the absence of an explanation of his work, I think that’s kind of the point: It clearly takes a great deal of patience and technical aptitude to create these photos, and the harshness of imposing war and its devastation on pristine modern European cities works better when it’s not too slick.
Browsing through Larenkov’s gallery, I found the work pretty uneven, but uneven in a way that’s actually revealing. Some of the images just set photos of groups of people from WWII against contemporary backgrounds, or vice versa. It looks sort of like one of those kitschy sepia-tinted photos of your family dressed in old-timey clothes you might get at a theme park. Overwhelmingly, the best images, like those above, blend outdated or obliterated buildings and vehicles into the existing cityscape. It’s the materiel, not the men, that matters.
Partly, this is because vintage photos of destroyed cities are just so compelling. This is an underappreciated contribution of Matthew Brady and the other photographers of the American Civil War. They kicked off a new kind of photorealist aesthetic focusing on machines and the worlds destroyed by them. All those strange geometries and fragmented buildings then funnel into the first waves of photographic abstraction. Here are some pictures of Charleston and Fort Sumter (after the allies retook the fort, bombarding it with heavy guns):
Like me, Ta-Nehisi Coates is fascinated by the way that the Civil War is a war driven by and brought to bear on “stuff” (human beings in the form of slaves and soldiers being just the most visible, contested, and precious kind of stuff). He quotes the historian Daniel Walker Howe:
While the growing of cotton came to dominate economic life in the Lower South, the manufacture of cotton textiles was fueling the industrial revolution on both sides of the Atlantic… During the immediate postwar years of 1816 to 1820, cotton constituted 39 percent of U.S. exports; twenty years later the proportion had increased to 59 percent, and the value of the cotton sold overseas in 1836 exceeded $71 million. By giving the United States its leading export staple, the workers in the cotton fields enabled the country not only to buy manufactured goods from Europe but also to pay interest on its foreign debt and continue to import more capital to invest in transportation and industry. Much of Atlantic civilization in the nineteenth century was built on the back of the enslaved field hand.
Neil deGrasse Tyson likes to point out how it’s a mathematical certainty that the air we breathe and the water we drink passed through the lungs and kidneys (respectively) of everyone who ever lived. Likewise, in these Civil War photos, both the destroyed Southern buildings (one of them a US army fort) and the Northern cannons that destroyed them result from the profits of American slavery. Americans like to think about victories in World War II without thinking about the cities and people destroyed in Russia and dozens of other countries (including Japan, Italy, and Germany) that stand behind that war — in no small part because we don’t have to live with them, to walk down those streets, to feel those ghosts. But we’re haunted all the same.
Bob Stein, founder of the Institute for the Future of the Book, talks about working for Alan Kay, starting the Criterion Collection and Voyager on laserdisc, Hypercard e-books, and interactive CD-ROMs — essentially, the whole prehistory of where we are now with just about all digital media:
The book was always fundamental to me. One of the things I really liked was that the original logo for Criterion, which we designed in 1984, was a book turning into a disc. It was central. When I was writing the paper for Britannica, I felt like I had to relate the idea of interactive media to books, and I was really wrestling with the question “What is a book?” What’s essential about a book? What happens when you move that essence into some other medium? And I just woke up one day and realized that if I thought about a book not in terms of its physical properties—ink on paper—but in terms of the way it’s used, that a book was the one medium where the user was in control of the sequence and the pace at which they accessed the material. I started calling books “user-driven media,” in contrast to movies, television, and radio, which were producer-driven. You were in control of a book, but with these other media you weren’t; you just sat in a chair and they happened to you. I realized that once microprocessors got into the mix, what we considered producer-driven was going to be transformed into something user-driven. And that, of course, is what you have today, whether it’s TiVo or the DVD.
And how did DVDs get commentary tracks? Let Bob tell you:
You have to understand how much of this stuff is accidental. I knew the guy who was the curator of films at the LA County Museum of Art, and I brought him to New York to oversee color correction. He’s telling us all these amazing stories, particularly about King Kong, because it’s his favorite film. Someone said, “Gee, we’ve got this extra sound track on the LaserDisc, why don’t you tell these stories?” He was horrified at the idea, but we promised we’d get him superstoned if he did, and he gave this amazing discussion about the making of King Kong, which we released as the second sound track…
We had people driving to our home, where our offices were, by the second day, and begging for copies. It was Los Angeles, it was the film industry—and finally someone had done something serious with film. Film was suddenly being treated in a published form, like literature. But this still wasn’t mainstream. Citizen Kane was three discs and cost $125. It cost us $40 to manufacture. The most LaserDiscs we ever sold was about twenty thousand copies of Blade Runner.
I don’t usually squee with delight, but: Squeee!
There are many invented scenes, places, characters, and events I love in my friend and colleague’s novella Annabel Scheme, but my favorite invention is probably the fictional MMORPG “World of Jesus.” An online VR game set in Palestine at the time of Christ.
Here’s why I’m writing about it. Read Write Web has a short write-up of virtual ancient worlds, mostly created by libraries, museums, and universities:
When the first immersive 3D games came out, I asked a programmer if he knew of anyone who had used that technology to create a Virtual Ancient Rome or Virtual Ancient Athens. I loved the idea of walking around in a place whose current face was changed out of all recognition from its golden age. He shook his head. Creating virtual worlds was way too time consuming and required too much specialist knowledge and so was too expensive. A virtual Rome wouldn’t create the profit that Doom did.
Fast forward a decade and the programming necessary becomes easier to do and the number of people who know how to do it has increased substantially. The costs involved in creating a virtual world have decreased at the same time that academic and scholarly institutions have become much more willing to invest in it.
There are terrific settings here: Rome, Athens, Tenochtitlan, and Beijing’s Forbidden City. But — and I think this is surprising — no Jerusalem. No World of Jesus.
For those who haven’t read the book, on its face, the game’s name sounds like a clever zinger, like something that would be the punchline to a joke on Futurama or at a relatively hip Bible Camp. But what I think Annabel Scheme does particularly well is pushing past surface details and cute references to dwell within its two worlds, the technological and the spiritual, taking both of them seriously.
I can’t think of any better manifestation of that than “World of Jesus.” The character who plays the game believes in this world and his place in it: his religious faith and his technological faith are one and the same, turning a mechanical ritual into treasures in heaven.
And so we believe in it, because it’s a reflexive, self-allegorizing move too: for the reader, the fictional San Francisco of Scheme and Hu is just as much a virtual world, with its own enticements, traps, rules and ways to break them, as “World of Jesus” is for them. Dreams within dreams, virtualized virtuality.
It helps that Robin brings some of his most evocative and affecting writing in this chapter, too, as his AI narrator Hu becomes “embodied” for the first time in the world of the game:
The first thing I noticed was the light.
My eyes opened in a small, simple house with wooden shutters, and the light was peeking in through the cracks, picking up motes of dust in the air. I’d never seen anything like it. Are there motes in the real world? Scheme’s earrings didn’t show motes.
In World of Jesus, you could choose between looking over your character’s shoulder or through its eyes. I saw myself from behind, then spun around: I’d chosen the girl in silk.
Then I switched to see through my own eyes. All I ever did was look over Scheme’s shoulder. I wanted a new perspective.
The door opened automatically. Outside, the sun beamed in blue-gold through a scrim of tall cedars and fell in wide bars on a dusty, stone-paved street. Everything looked… mildly medieval. I had a feeling that this Jerusalem was not historically accurate.
I lifted my eyes to the sky, and it felt like my heart was going to jump out of my chest. It was probably just my eight processors all seizing up at once; I wasn’t built for this. Grail servers are optimized to process gobs of text, not 3D graphics, so the carefully-crafted World of Jesus was a new exertion.
I didn’t care. That sky. It was the most beautiful thing I had ever seen. White curls and wisps dotted the glowing blue bowl. I couldn’t do anything except stand and stare.
A voice crackled: “Hu, is that you?”
I turned. It was a woman in a simple gray tunic, with red hair just like Scheme’s.
“Yes, it’s me,” I said—and realized that I spoke like everyone else.
This is what literature is: taking a machine (our own literacy) built for processing text and making it render images instead. Characters, actions, an entire world — a virtual gamespace, by way of the alphabet.
Let me tell you something: I think that if a game company were to make it, and do it well, “World of Jesus” would be a smash hit. If you wanted to get your Warcraft on, you could play as a centurion and slash-and-hack Persian armies and crucify dissidents. Or you could be a Jewish rebel fighting to overthrow the Romans. Maybe you’re a female disciple, fighting to retain women’s leadership roles after Christ’s death. Or you’re a regular person: a tax collector, a fisherman, a falafel merchant. An online RPG that doesn’t necessarily have to be about how many people you can kill. (See: “A four-year-old plays Grand Theft Auto.”)
Many faiths, many ages, many games within games. Or if you wanted to play in story mode: what a story!
So, last night, I finally met my illustrious co-blogger Matt Thompson for dinner at a DC restaurant. We didn’t get a picture — I had to limp/run out of the restaurant to catch a late-night train — but 1) Robin wasn’t there and 2) we weren’t wearing our black paisley vests either, so maybe it’s for the best.
Taking Robin’s place as our guest/facilitator/cultural psychoanalyst was longtime friend of the Snark Rachel Leow, whose blog a historian’s craft you should know. Here are some of the things we collectively figured out:
In a post yesterday, I offhandedly referred to “giving up TV.” But like giving up Facebook, very few of us have actually given up TV. What’s happened instead is that (like with Facebook), TV has become a problem.
Sure — historically, TV has probably lost whatever monopoly it had on our total cognitive-surplus, staring-at-screen time. It also may have lost a fair degree of its cognitive priority. For instance, when I recently needed to cut some money from my monthly household budget, I dropped my cable TV, switched the internet to DSL, and kept my phone’s data plan — not the decision I would have made three years ago.
But I probably watch more TV than ever now. It’s just coming in the form of DVDs, video games and Netflix streaming on my Wii, and catching up via Hulu, The Daily Show, etc. on my computer. But — wait. See what I just did there? I just ran together everything I do on the big, stationary screen that sits in my living room (called a television) and the short-to-medium form video originally broadcast for that screen, but which I can’t watch there (called television). And both big, stationary screens that we watch from 6-10 feet away and short-to-medium form broadcast video seem to have a pretty firm lock on our psyches and social practice. They’re powerful, versatile, and fun.
One of the things I loved from the Steve Jobs/Bill Gates joint appearance at D5 a few years ago — a really illuminating talk that I periodically return to, that holds up well and has new resonances now — is how they analyze the natural form factors for digital media. And it sort of divides pretty cleanly, with Jobs (big hit then: iPhone) focusing more on smaller forms and Gates (big hit then: XBox) on bigger ones. Gates, I think, doesn’t get enough credit for his vision here:
Walt: What’s your device in five years that you rely on the most?
Bill: I don’t think you’ll have one device. I think you’ll have a full-screen device that you can carry around and you’ll do dramatically more reading off of that.
Kara: Light.
Bill: Yeah. I mean, I believe in the tablet form factor. I think you’ll have voice. I think you’ll have ink. You’ll have some way of having a hardware keyboard and some settings for that. And then you’ll have the device that fits in your pocket, which the whole notion of how much function should you combine in there, you know, there’s navigation computers, there’s media, there’s phone. Technology is letting us put more things in there, but then again, you really want to tune it so people know what they expect. So there’s quite a bit of experimentation in that pocket-size device. But I think those are natural form factors and that we’ll have the evolution of the portable machine. And the evolution of the phone will both be extremely high volume, complementary–that is, if you own one, you’re more likely to own the other.
Kara: And then at home, you’d have a setup that they all plug into?
Bill: Well, home, you’ll have your living room, which is your 10-foot experience, and that’s connected up to the Internet and there you’ll have gaming and entertainment and there’s a lot of experimentation in terms of what content looks like in that world. And then in your den, you’ll have something a lot like you have at your desk at work. You know, the view is that every horizontal and vertical surface will have a projector so you can put information, you know, your desk can be a surface that you can sit and manipulate things.
That idea of “the 10-foot experience” is really powerful to me — even though my living room and TV set are clearly a lot smaller than Bill Gates’s. And the whole point of it is that it’s heterogeneous and versatile — not just in terms of the kinds of machines and platforms that run on them, but in terms of the use of the space itself.
And here’s Jobs, equally visionary, if not more so. (Apologies again for the long blockquote; I like the banter.)
Walt: So what’s your five-year outlook at the devices you’ll carry?
Steve: You know, it’s interesting. The PC has proved to be very resilient because, as Bill said earlier, I mean, the death of the PC has been predicted every few years.
Walt: And here when you’re saying PC, you mean personal computer in general, not just Windows PCs?
Steve: I mean, personal computer in general.
Walt: Yeah, OK.
Steve: And, you know, there was the age of productivity, if you will, you know, the spreadsheets and word processors and that kind of got the whole industry moving. And it kind of plateaued for a while and was getting a little stale and then the Internet came along and everybody needed more powerful computers to get on the Internet, browsers came along, and it was this whole Internet age that came along, access to the Internet. And then some number of years ago, you could start to see that the PC that was taken for granted, things had kind of plateaued a little bit, innovation-wise, at least. And then I think this whole notion of the PC–we called it the digital hub, but you can call it anything you want, sort of the multimedia center of the house, started to take off with digital cameras and digital camcorders and sharing things over the Internet and kind of needing a repository for all that stuff and it was reborn again as sort of the hub of your digital life.
And you can sort of see that there’s something starting again. It’s not clear exactly what it is, but it will be the PC maybe used a little more tightly coupled with some back-end Internet services and some things like that. And, of course, PCs are going mobile in an ever greater degree.
So I think the PC is going to continue. This general purpose device is going to continue to be with us and morph with us, whether it’s a tablet or a notebook or, you know, a big curved desktop that you have at your house or whatever it might be. So I think that’ll be something that most people have, at least in this society. In others, maybe not, but certainly in this one.
But then there’s an explosion that’s starting to happen in what you call post-PC devices, right? You can call the iPod one of them. There’s a lot of things that are not…
Walt: You can get into trouble for using that term. I want you to know that.
Steve: What?
Walt: I’m kidding. Post-PC devices.
Steve: Why?
Walt: People write letters to the editor, they complain about it. Anyway, go ahead.
Steve: Okay. Well, anyway, I think there’s just a category of devices that aren’t as general purpose, that are really more focused on specific functions, whether they’re phones or iPods or Zunes or what have you. And I think that category of devices is going to continue to be very innovative and we’re going to see lots of them.
Kara: Give me an example of what that would be.
Steve: Well, an iPod as a post-PC…
Kara: Well, yeah.
Steve: A phone as a post-PC device.
Walt: Is the iPhone and some of these other smart phones–and I know you believe that the iPhone is much better than these other smart phones at the moment, but are these things–aren’t they really just computers in a different form factor? I mean, when we use the word phone, it sounds like…
Steve: We’re getting to the point where everything’s a computer in a different form factor. So what, right? So what if it’s built with a computer inside it? It doesn’t matter. It’s, what is it? How do you use it? You know, how does the consumer approach it? And so who cares what’s inside it anymore?
And that seems to be where we stand right now when it comes to TV: caught between all of the different services and hardware devices competing for that 10-foot experience, and the emergent category of post-PC, video-capable handheld devices — tablets, phones, game consoles — with the screen of your laptop or desktop PC in the middle.
There are a couple of things from Jobs’s appearance at this year’s conference, D8, that follow up on this exchange. The first, which was better publicized, was Jobs’s comparison of post-PC devices like the iPhone and iPad to cars, and of traditional laptop and desktop PCs to trucks. The analogy runs like this: just as in the early 1900s most vehicles were trucks, and then smaller cars emerged that were better tailored to urban and suburban living, so smaller, post-PC devices like the iPad weren’t going to eliminate traditional PCs, but would gradually replace them as the dominant form of consumer computing. It’s a powerful, provocative idea; 2007 Jobs was clearly more skeptical of it, more inclined to think that the PC was going to morph into something else.
The other is Jobs’s discussion of the balkanization of the television business — that is, the business of getting content to those screens, not the content providers as such: the multiplicity of set-top boxes and the lack of genuinely national providers or international standards has prevented any company, from Apple to Google to TiVo, however technologically sophisticated, from rolling out a clear go-to-market strategy. This, I think, does seem to explain why, despite all of the local innovations in DVRs, net-connected game consoles, streaming content, and so forth, TV still seems to be forever putting the pieces together.
Finally, there’s the whole consumption/production imbroglio that similarly washed over the iPad. Is the TV space “merely” a space for consumption? Is that a bad thing? Or could there be new and emergent ways to create, contribute, share, and connect there, too?
What do you think? What’s next for TV?