Archive for January, 2011
John Battelle posted a nice rumination on EPIC 2014 today—how cool is that? It’s still amazing to realize that, back in 2004, this flickering Flash video from two 24-year-olds in St. Petersburg (well, maybe Matt was 23) made it all the way out here to San Francisco and played on screens like his. (Remember, this was before YouTube. The propagation of video across the internet was still a shaky thing.)
But I do want to add one twist. In his post, Battelle grades EPIC 2014 as a forecast by checking its predictions against reality. A snarky commenter calls him out:
This may not be the best example of long-term prediction. The most important statement in the video is the last one — “perhaps there was another way” — which reveals it to be just another desperate propaganda tool by the people who are scared by the prospect of the New York Times turning into a print-only “newsletter for the elite and the elderly.”
Now, I don’t know about “desperate,” but, truth be told, it was definitely a propaganda tool. Matt and I made EPIC 2014 because we’d already given one presentation about the future of news—a slide show made in PowerPoint, filled with graphs and data points and earnest bulleted exhortations—and it was a total clunker. It put people to sleep. So EPIC 2014 was our second try, and I think its most distinguishing characteristic was not that it was a future forecast but that it was a story. It was a fable, actually!—populated by the broad, archetypal characters that the form demands.
And grading it as a story, I (not-very-humbly) give it an A, because thanks to good luck and good timing (and great narration) it spread fast and far—from John Battelle’s desk to Rupert Murdoch’s and beyond—and it sent chills down a few spines along the way. It made people gasp, it made people laugh (yes, the name “Googlezon” is supposed to be funny) and it bent a few careers off in new directions.
I wouldn’t trade any of that, ever, for the cold consolation of being right about the future.
This is part of the week-long Food for Thinkers carnival of posts hosted by Nicola Twilley over at GOOD Food HQ that all answer this question: “What does—or could, or even should—it mean to write about food today?”
If you’re somebody who’s interested in communicating big ideas, food is one of the most powerful tools you’ve got. It’s a universal, irresistible hook—the most common denominator. You can get people to think about economics, sociology, physiology, psychogeography (?!)—anything you want, really—if you simply connect it back to food. Not food policy!—I mean the real, tactile experience of food in the field and food on the plate.
And I think one way you can make it even more powerful is by combining it with another irresistible hook: fiction.
Back in October, in a paean to the great new sci-fi novel Windup Girl, I made this prediction (which was actually a pitch):
Science fiction is never really about the future—instead, it’s an interesting way to talk about the present. For decades, the genre huddled in the shade of the space race and the Cold War, because those were the dramas of the day. And it was in science fiction, I think, that we actually talked about them most honestly—about both our highest hopes (e.g. Star Trek) and our deepest fears (e.g. The Terminator—really a Cold War movie, and barely about robots at all).
So what’s present now? I think the next few really great works of science fiction—including, maybe, the next great science fiction movie—are going to be about food.
But actually, this is not such a leap from where we are now. I mean, I’ve been thinking about it, and: I’m not sure fiction even functions without food.
The places where we eat and drink are powerful junction points. In fiction, they’re the places where characters come together to share information and make new connections. Think of Monk’s from Seinfeld and Central Perk from Friends. Think of Rick’s American Cafe and the Prancing Pony. Think of the Mos Eisley Cantina!
There’s a reason every adventure starts in a tavern.
But we can take this further, right? Today, in 2011, food isn’t just part of the background; it’s right up front, in sharp focus. More people are operating as petit-gourmands than ever before, at least here in the U.S. and Europe. We think, every day, about our food’s composition and its origin. We look at labels. We ask for options. We feel waves of angst and dread. We are uncertain.
This is the perfect environment for fiction.
I mean, just think about all the drama up and down the food supply chain:
- The hustle and charm of a street food cart.
- The bustle (often bloody) of a working kitchen.
- The loyalties and betrayals of a winemaking dynasty.
- The real-life meathook horror of a factory farm.
- The rivalry and romance of a great farmer’s market.
- The secrets of a mysterious cheese shop.
What do you mean, there are no mysterious cheese shops? There ought to be mysterious cheese shops!
These people and places are story factories—just like hospitals, law firms, and all the other institutions that support long-running serial dramas. Every day, there’s a new crisis in the kitchen. Every week, some new stranger shows up at the farmer’s market.
But let me get tactical here. I’m going to talk about books, because it’s the medium I know best, and it’s where I see an opportunity—one that might make this a bit more concrete:
Right now, one of the big problems with books is that there are fewer and fewer credible places to sell them. The big chains are struggling; indies are an asterisk. Amazon is a titan, of course, but there are some other quiet giants, too: places like Wal-Mart and Costco where thrillers and self-help books pile up four pallets deep. These are places where people mostly buy things other than books—so perhaps they constitute a new frontier.
But chew on this: there are more farmer’s markets than Whole Foods stores in the United States. So what if you set up a stand next to the radish-monger and sold books at the farmer’s market? What if it wasn’t the same pulpy selection you get at Wal-Mart—the latest Lee Child and James Patterson—but instead an inventory specifically concocted to tickle the brains and tug the heart-strings of farmer’s market true believers?
Then, what about selling books at fancy food stores, wineries, and (yes) mysterious cheese shops? Don’t people have enough cookbooks already? Couldn’t those stores stock a little rack of cheap Food Cart Boys thrillers and sell them as impulse buys?
Maybe there’s another format that would work even better. Maybe it’s actually a rack of audio books, and you can play one in the kitchen while you make something great out of that dino kale and that mysterious cheese.
I think the market is ripe. Everybody’s wondering: okay, first vampires, then zombies… what’s next? What’s the next wave? I think it’s food: tales of weird sci-fi food, tales of illicit criminal food, tales of food and love.
I want the next wave to be food, because I think those could be amazing stories, and because I think they’re worth telling.
If Michael Pollan is right and one of the things that hurts us here in the U.S. is our lack of a coherent, deep-rooted food culture… well, maybe we need to start building that culture, at long last. But I don’t think we can do that with policy papers or New York Times Magazine articles, no matter how smart and wise they are. I think you need to do it with fiction—in every format, from books to TV to movies to video games.
But mostly books: books sold in new places, reaching new audiences, carrying new intellectual payloads.
The next boy wizard will enroll in a magical cooking school.
The next Jason Bourne will be pursued by a sinister agribusiness giant and/or the Tuna Yakuza.
The next Girl with the Dragon Tattoo will be a girl with a street food cart.
Read more Food for Thinkers posts over here at GOOD Food HQ!
This is from the introduction* to Steven Johnson’s Interface Culture, a book from 1997 that I hadn’t previously read:
A few final observations, and warnings, about the pages that follow. The first should be a comfort to readers who have tired of the recent bombast emanating from both the digital elite and their neo-Luddite critics. I have tried to keep this book as free of dogma and polemic as possible, emphasizing both the tremendous intellectual liberation of the modern interface and the darker, more sinister implications of that same technology.
From its outset this book has been conceived as a kind of secular response to the twin religions of techno-boosterism and techno-phobia. On the most elemental level, I see it as a book of connections, a book of links — one in which desktop metaphors cohabit with Gothic cathedrals, and hypertext links rub shoulders with Victorian novels. Like the illuminations of McLuhan’s electric speed, the commingling of traditional culture and its digital descendants should be seen as a cause for celebration, and not outrage.
This is likely to trouble extremists on both sides of the spectrum.** The neo-Luddites want you to imagine the computer as a betrayal of the book’s slower, more concentrated intelligence; the techno-utopians want you to renounce your ties to the fixed limits of traditional media. Both sides are selling a revolution — it’s just that they can’t agree on whether it’s a good thing. This book is about the continuities more than the radical breaks, the legacies more than the disavowals.
For that reason, the most controversial thing about this book may be the case it makes for its own existence. This book is both an argument for a new type of criticism and a working example of that criticism going about its business.***
* I added some extra paragraph breaks to the excerpt to make it read more like a blog post.
** Compare my “Bookfuturist Manifesto,” from The Atlantic.com, August 2010.
*** I pretty much want to be Steven Johnson right now.
Remember when I said Gawker Media ought to invest in bespoke, quick-turn design and illustration to transform posts into more than just blobs of text?
Well, they totally did!
To be clear, I am not taking any credit for this. All hail Sam Spratt, Wendy MacNaughton, and the rest of the crew that’s done great (fast!) work on Gawker blogs lately. Also hail Nieman Lab’s Greg T. Spielberg, who, in usual Lab style, took something the rest of us sort of half-noticed in our web peripheral vision and brought it into crisp useful focus.
I will however take credit when, in the year 2014, Gizmodo merges with Amazon to form GIZMODOZON.
Like Robin, I love the counter-conventional wisdom John Herrman brings to “I Just Want A Dumb TV.” And I really like Frank Chimero’s distinction between “steadfast,” long-enduring, simple tools and “hot-swap” components of a system that you can change on the fly.
But I want to pivot from this taxonomy of “dumb” things to create a complementary taxonomy of “smart” ones. If the current crop of “smart” TVs somehow goes wrong, how does it do it? And is a “dumb” monitor the best alternative?
Applying “smart” and “dumb” to electronics has a long history, but for our purposes here, let’s look at the smartphone as one model of what a smart appliance looks like. That seems to be what the makers of smart TVs did, anyways. So let’s say, bare minimum, a “smart” appliance needs:
- A fairly versatile processor and operating system;
- The ability to connect to other devices on a local or global network;
- The ability to run some kind of secondary applications locally.
In short, it should slightly resemble a modern, networked computer. The problem with smart TVs is they work too much like smartphones and not enough like PCs.
See, smartphones are hypermobile, so you stuff a ton of capacity into the device because it’s going to have to do most things by itself. Phone, games, maps, email, the web, etc. — everything that can be jammed into those little screens.
Television screens, on the other hand, are antimobile. Like desktop PCs, they stay in one place, and you hook other things up to them: cable boxes, game systems, Blu-ray players, and (wirelessly) remote controls.
With a smart TV, there are two directions you can go to make the device “smarter”: you can try to make it super self-sufficient, doing more and more on one piece of hardware, or you can make it better and better at talking to other devices.
There are good aesthetic reasons to do the first one: you can cut cords and clutter and save some money and electricity. Also, it’s wired in with software, not hardware. It’s not like you’ve got this crummy, outdated VCR built into the box; you can (in principle) update your OS and get a whole new set of applications and capabilities.
Still, the second way of making a TV smart seems better to me. Forget connecting my TV to the web; I want to connect my TV to my phone, my laptop, my refrigerator, my alarm clock, my media players (etc etc etc). But do it all wirelessly, over a local network. Make it easier for me to get my media — wherever it comes from — up on the biggest screen in my house. I can’t do that with a totally dumb TV, but I can’t do that easily with current-generation smart TVs either.
This is why I guess I’m more interested in “two-screen” approaches to television, where you’re using an iPad (or something) to browse channels and read about programs and tweet about what you’re watching and otherwise interact with and control what’s on your screen. Because the lesson of “hot-swapping” is that good parts that talk to each other well make the whole more than the sum of its parts.
Over on Gizmodo, John Herrman takes TV manufacturers to task for pitching all these widget-enabled internet-connected “smart TVs.” He says:
So, here’s the idea: Just buy dumb TVs. Buy TVs with perfect pictures, nice speakers and an attractive finish. Let set top boxes or Blu-ray players or Apple TVs take care of all the amazing connectivity and content afforded to us by today’s best internet TVs. Spend money on what you know you’ll still want in a few years—a good screen—and let your A/V cabinet host the changing cast of disposable accessories. […]
And TV manufacturers: Don’t just make more dumb TVs. Make them dumber.
I love the exhortation: Make them dumber! Yes, we want stuff that’s even dumber and more durable and more flexible. We want stuff we can plug into other stuff forever.
It does seem true that in the places where requirements are clear—this must make a good picture—and interfaces consistent, things you buy can actually find their footing and hold steady in the swirl of the shiny new.
I’d love a directory of these steadfast components. I feel like my Samsung TV (very dumb) might be a candidate. The 24″ Dell LCD I’ve had at home for five years would definitely go in that directory—I think these Dell monitors are widely recognized as the, like, basic black t-shirts of computer components at this point.
But what else? And what about other domains? Certainly, a good cast-iron frying pan is a kitchen component. There’s probably some classic kind of shoe that, thanks to its timelessness and durability, has reached component status (I do not know what it is). And there are probably some components in here, right?
We can’t expect stability and durability in every domain yet. There’s not going to be a component-caliber tablet computer for quite a while, obviously. But where components are available… where things are dumb and durable… man, that’s the good stuff. That’s the stuff I find myself wanting more and more of.
What are your favorite components—either things you have or things you’d like to get?
Another update: Tim Maly goes deeper with “shearing layers.”
1. pull a Carmody, v.: to leave a comment that exceeds the original post in insight and value.
It’s always nice when three blogs in your “must read” folder happily converge. First, Jason Kottke pulls a couple of super-tight paragraphs from a Chronicle of Higher Ed article by Clancy Martin, philosophy professor and onetime salesman of luxury jewelry, about how he plied his former trade:
The jewelry business — like many other businesses, especially those that depend on selling — lends itself to lies. It’s hard to make money selling used Rolexes as what they are, but if you clean one up and make it look new, suddenly there’s a little profit in the deal. Grading diamonds is a subjective business, and the better a diamond looks to you when you’re grading it, the more money it’s worth — as long as you can convince your customer that it’s the grade you’re selling it as. Here’s an easy, effective way to do that: First lie to yourself about what grade the diamond is; then you can sincerely tell your customer “the truth” about what it’s worth.
As I would tell my salespeople: If you want to be an expert deceiver, master the art of self-deception. People will believe you when they see that you yourself are deeply convinced. It sounds difficult to do, but in fact it’s easy — we are already experts at lying to ourselves. We believe just what we want to believe. And the customer will help in this process, because she or he wants the diamond — where else can I get such a good deal on such a high-quality stone? — to be of a certain size and quality. At the same time, he or she does not want to pay the price that the actual diamond, were it what you claimed it to be, would cost. The transaction is a collaboration of lies and self-deceptions.
This structure is so neat that it has to be generalizable, right? Look no further than politics, says Jamelle Bouie (filling in for Ta-Nehisi Coates). In “Why Is Stanley Kurtz Calling Obama a Socialist?”, he writes that whether or not calling Obama a socialist started out as a scare tactic, conservative commentators like Kurtz actually believe it now. He pulls a quote from Slacktivist’s Fred Clark on the problem of bearing false witness:
What may start out as a well-intentioned choice to “fight dirty” for a righteous cause gradually forces the bearers of false witness to behave as though their false testimony were true. This is treacherous — behaving in accord with unreality is never effective, wise or safe. Ultimately, the bearers of false witness come to believe their own lies. They come to be trapped in their own fantasy world, no longer willing or able to separate reality from unreality. Once the bearers of false witness are that far gone it may be too late to set them free from their self-constructed prisons.
What’s nice about pairing these two observations is that Martin’s take on self-deception in selling jewelry is binary, a pas de deux with two agents, both deceiving themselves and letting themselves be deceived. Bouie and Clark don’t really go there, but the implication is clear: in politics, the audience is ready to be convinced/deceived because it is already convincing/deceiving itself.
There’s no more dangerous position to be in, truth-wise, than to think you’re getting it figured out, that you see things other people don’t, that you’re getting over on someone. That’s how confidence games work, because that’s how confidence works. And almost nobody’s immune, as Jonah Lehrer points out, quoting Richard Feynman on selective reporting in science. He refers to a famous 1909 experiment which sought to measure the charge of the electron:
Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.
It’s all little lies and adjustments, all the way down. Where else can I get such a good deal on such a high-quality stone?
So I found this bit from The Stone’s valedictory post just totally delightful:
Let me finish, as we began The Stone last May, with a reference to Socrates. Socrates had a friend called Simon. He was a sandal-maker. According to legend, he let Socrates use his house for discussions when the conduct of such discussions was not allowed in the agora. Simon’s house was just outside the boundary (horos) of the agora. That boundary was defined by fascinating stone markers, about three feet high, one of which declares “Horos eimi tes agoras.” I am the boundary of the agora. Again, according to legend, Simon was imprisoned after Socrates was arrested, though later released. Not much more is known about him.
Hanging out in the sandal-maker’s house—I love that.
I love these playful renderings of classic fashion drawn from old photos (which are also included). Simple concept; great execution.