The banyan tree

Here’s another post-constellational metaphor for a mode of thinking and writing, from Sarah Vowell’s interview with The Onion A/V Club, with reference to her new book on Hawaiian history Unfamiliar Fishes:

AVC: Your writing also features these tangents that circulate back around to the main thesis and manage to fit well in the context of the larger topic. When you’re writing, do you craft these tangents consciously, or do they come about naturally in the way you write?

SV: Both. When I went to Hawaii, I had never seen a banyan tree before. A banyan tree is this tree that starts with one trunk, and then when the branches branch off, little tendrils sprout off the branches and eventually grow down to the ground and take root and become another trunk, and more and more branches and tendrils develop off of that, so each banyan tree becomes its own monster-looking forest. And when I first saw one of those trees, I thought, “That is how I think.” Little thoughts just sprout off and drip down and take root, and then they end up supporting more and more tendrils of thought, until it all coheres into one thing, but it’s still rickety-looking and spooky. I like to think that my tangents have a point. I do love a tangent. I think part of it is inherent within the discipline of non-fiction.

I always found that when I was a college student and researching my papers always the night before—and this was before the Internet—I’d be in the library and I’d find one thing, and see something else and want to follow that, which now is how the Internet has taught us to think, to click on link after link after link. But there is something inherent in research that fosters that way of thinking, and then there’s this other interesting thing, and that builds and builds. When I’m writing, I have all these index cards, and I sit on my living-room rug and move them around until they make sense. When I’m talking, it’s just the unedited me. Anyway, there are just sometimes asides, some of them are just about the joy of fact. I find facts fun, and sometimes I’ll just put something in if I think it’s interesting, even though it’s not going anywhere.

You can think about the banyan tree as an associative style of writing, but also as a new kind of community, and way of writing in public — or better still, both at the same time. A matrix.

Also, let’s not forget to note “the joy of fact”! A greater phrase even than Ezra Pound’s “luminous detail.” I believe I need a T-shirt for this. Or to create a small shrine for a school of nonfiction writing, devoted to digging in the crates and extracting, not only facts, but their joy.

5 comments

The Two Writers

American Journalism Review has a new story about how The Atlantic’s Alexis Madrigal convinced Dan Sinker to out himself as @MayorEmanuel:

A month earlier, for example, a reporter for the NBC-owned television station in Chicago requested an interview. @MayorEmanuel told him to “just call the office: (312) FUC-KOFF.”

When Madrigal received no response, he tried a different tack: “I think it is incumbent on you to at least tell me to fuck off,” he wrote, also providing his e-mail address. “It’s the only time I’ve ever used the F-word in my Twitter feed,” Madrigal adds.

@MayorEmanuel brushed him off. But a short while later, Madrigal received an e-mail from an anonymous e-mail account. The subject line read, “OK, asshole.”

“There were two points in it,” Madrigal says. “One, if you tweet about this, it’s over before it even started. And two, you’re the journalist — you pitch me.”

Snarkmarket’s part of the story, too. There’s a link to The Two Mayors, and I got to talk to AJR’s Greg Masters about why I think Madrigal got the scoop. I’m particularly delighted I got quoted talking about one of my favorite movies, comparing Alexis’s approach to @MayorEmanuel to “W.W. Beauchamp sidling up to William Munny at the end of Unforgiven.”

[Warning: violent. Munny = Eastwood. Beauchamp = Saul Rubinek, in the glasses.]

Also, if you missed it, definitely check out Dan Sinker’s appearance on Colbert, where he is way more William Munny gentle father than William Munny/@MayorEmanuel murderous sonofabitch:

[Video: Dan Sinker on The Colbert Report, via colbertnation.com]
Comments

Simply the best

A couple of weeks ago, Aaron Bady, who blogs as zunguzungu, tweeted something that made me stop and think:

All blogs should have a “best of” page: http://tinyurl.com/4ckf7m8

Lots of blogs have auto-generated “top posts” or tags or “about” posts that act as introductions to the site. But how do you distinguish what’s churning (or churned a long time ago) from what’s really hung on as valuable? What are the exemplars? If your blog had a portfolio, what would it look like? And how would you decide what went into it?

For instance, before starting this post, I went through our analytics to find our highest-traffic posts, assuming that even if it’s an imperfect metric (I think it misses some hits and spreads out traffic to some of the older posts that got new URLs unevenly), it’ll help some of the best stuff rise to the top.

And it turns out that Snarkmarket’s highest-traffic single post is Robin’s “Stock and flow,” which is a little over a year old. It’s not only a good candidate for the blog’s “best of” page; it actually illustrates the concept of a “best of” very well.

On the other hand, one of the other top posts is “OMG!!11! Google LOL,” written by Matt in 2005. It’s no slouch — nice little post about Google’s then brand-spanking-new IM client. But I strongly suspect that the accidental Google juice of the title skewed this post’s numbers a little bit. At any rate, I wouldn’t pick it for the “best of.” Not when Matt’s “Towards Engagement” or “Free Book Idea: Too Big To Succeed” is sitting out there.

So this is an open call to the Snarkmatrix. What do you think are the site’s best posts? Which ones were the most important? Which are the smartest? The funniest? The strangest? The most relevant, six or seven years later? Which meant the most to you? If you had to say “here are ten posts you should read from Snarkmarket,” which would you pick?

Let ‘er rip in the comments below.

9 comments

Coming out

Yesterday, I gave a talk at the University of Maryland’s Institute for Technology in the Humanities (aka MITH) about changing the way humanities PhDs are educated. It was titled “Stop Being Polite and Start Getting Real: Professional Education for Professional Humanists.” The really wonderful (and super-speedy) folks at MITH just posted the audio of the talk on their website; it’s about an hour long, but if you’re interested in things like how PhD programs should be built more like AK-47s, please check the link above and give it a whirl.

A lot of the talk is based on my own experience having gotten a humanities PhD and not being able to find a tenure-track job or full-time employment doing other kinds of university work, and how I eventually wound up becoming a technology journalist. So at the end of the lecture, I talk about a lot of personal stuff, including my son being diagnosed with autism, the accident where I broke my arm & leg, waiting a year to go on the job market and getting walloped by the 2008 economic meltdown — all stuff I’ve talked about here before.

One thing, though, I haven’t talked about here — it’s the principal reason I cautioned the folks in the room who were live-tweeting the event to tweet this carefully. So I wanted to lay it on the table before some of you downloaded the podcast and were like, “what the what?”

I am a birth dad. I have an older son who was born and placed for adoption in 2003, during the spring of my first year in graduate school. He’s going to be eight years old in just a few weeks, and I love him more than anything.

We have an open adoption, which means that he knows that he’s adopted, that I’m his birth father (as it happens, his only father, because he was adopted by two women), and we see each other and exchange information and phone calls pretty regularly. We (he, me, Sylvia, Noah, his family) have a great, casual, very loving relationship. He’s just like me. I mean, just. Maybe better adjusted. And yes, he has red hair.

When I was 22, I was so terrified of both being a father and what the news of the adoption might mean that I told no one — including friends, family, and especially the people in my graduate program and at school. (This included my upstairs neighbor, which was tricky.) I’d just moved to Philadelphia. I felt completely intimidated and totally alone.

The only thing I did well was study and write and perform in my graduate seminars. So I threw myself into them and pretended it wasn’t happening. I even walked from the hospital downtown to attend classes just a day or so after he was born.

Over the years, as my relationship with my son has changed, grown more open, and made it clearer that we were always going to be a significant part of each other’s lives, I opened up to more and more people — friends, family, sympathetic acquaintances and strangers. (For instance, Robin knew before today, but Matt didn’t. At least, I don’t think he did. After all, he is a reporter.)

Before I told my parents and brothers and sister, my son’s adoptive moms compared it to coming out. You’re not ashamed. You know you have to affirm who you are. That doesn’t mean you have to fork it over to people when you first meet them or hand them your business card. It’s driving you crazy when you don’t tell the people close to you. At a certain point, the most crazy-making issue is addressing why you haven’t said something before now. But ultimately, it’s because you can’t ever be certain how people will react.

For those reasons, I’ve still been reluctant to say too much, especially on the open web. There are plenty of privacy issues that go way beyond me — I’ve really never wanted anybody in my family to be Googleable. Still, I gave a talk about it at the MLA a few years ago. If you were really determined to find out, it’s been findable. That’s a different thing, however, from stating it for everyone to see.

But since so much of my life now, so many of my friendships, happen online, and since I’m determined to not let fear or anxiety about what I do or don’t say control how I feel about the world, this seems like as good a time as any to tell a whole lot more people all at once.

As Jeff Mangum put it in Neutral Milk Hotel’s song “Ghost,” I’m resolved to “never be afraid / to watch the morning paper blow / into a hole / where no one can escape.” Or as xkcd put it in the comic “dreams” (this is actually the very last part of my talk), Fuck. That. Shit.

It’s an experience — one that’s always ongoing — that broke my heart and changed my life, irrevocably, for the better. Orders of magnitude better. It taught me who I was and is teaching me who I am. I can’t explain it any better than that.

10 comments

What is social information?

Just a little A+B=Hmm for the weekend. First, Freeman Dyson reviews James Gleick’s The Information: A History, a Theory, a Flood, which begins with a drum language once used by Kele speakers in the Congo:

Kele is a tonal language with two sharply distinct tones. Each syllable is either low or high. The drum language is spoken by a pair of drums with the same two tones. Each Kele word is spoken by the drums as a sequence of low and high beats. In passing from human Kele to drum language, all the information contained in vowels and consonants is lost. In a European language, the consonants and vowels contain all the information, and if this information were dropped there would be nothing left. But in a tonal language like Kele, some information is carried in the tones and survives the transition from human speaker to drums. The fraction of information that survives in a drum word is small, and the words spoken by the drums are correspondingly ambiguous. A single sequence of tones may have hundreds of meanings depending on the missing vowels and consonants. The drum language must resolve the ambiguity of the individual words by adding more words. When enough redundant words are added, the meaning of the message becomes unique…

The story of the drum language illustrates the central dogma of information theory. The central dogma says, “Meaning is irrelevant.” Information is independent of the meaning that it expresses, and of the language used to express it. Information is an abstract concept, which can be embodied equally well in human speech or in writing or in drumbeats. All that is needed to transfer information from one language to another is a coding system. A coding system may be simple or complicated. If the code is simple, as it is for the drum language with its two tones, a given amount of information requires a longer message. If the code is complicated, as it is for spoken language, the same amount of information can be conveyed in a shorter message.

Then there’s Devin Friedman’s “The Viral Me,” which looks skeptically but pretty honestly at both startup incubator Y Combinator and the broader sphere of social media. (This is a little older, but I’d have missed it if John Pavlus hadn’t tweeted about it today.)

One of YC’s big successes in the past year is a company called DailyBooth. It’s like Twitter—it’s a platform for communication, you can “follow” people, and people can “follow” you—but instead of typing 140 characters, you just take pictures of yourself. Here I am in my room in my pajamas. Here I am at Starbucks. Here I am in my new sweater. Here I am in my room again in my pajamas. (It seems like, as often as not, a DailyBooth picture is of someone in his bedroom in pajamas.) That’s the whole thing. There’s no pretext that you have information you need to get across or a really good joke. It’s a thingy that, you might argue, reduces the psychological physics of the social layer to its simplest equation: I’m alive right now; I’m a person; look at me.

DailyBooth is a good way to see one of the central paradoxes of the social layer. People engage in this stuff, I think, for the affirmation. To prove that they exist. But in effect, the collection and aggregation of all those photos, all those bits of unique self-expression from, literally, 500 million people (and Zuck says that a billion is basically a fait accompli) actually nullifies humanity. True, the smallest detail of your life might be amplified and spread instantly across what is the simplest and most effective distribution network ever invented. But more likely is that detail being almost instantly buried by the incredible volume of other people’s smallest details.

But why? At this point, it’s a cliché to say that adding too much information makes all the information we have meaningless. It’s the paradox of more that Malcolm Gladwell talks about in Blink: we will usually say that it’s better to have more information, but we don’t really believe it. We really believe in efficiency, in le mot juste, in having exactly what we need to know something in a limited amount of time without getting confused.

I won’t say this is a Western way of thinking, because that’s a cliché, too — but compared to the drum language, it’s a very alphabetic way to think. And we have to recognize that in social media, the system of information is not, or is not purely, alphabetic. It’s also an accumulation of photos, tones, pings, a message shuffling back and forth between stations with a simple transmission: “I am here,” waiting for the return signal, “I am here.” And if you haven’t learned to listen for that tonal information, if you haven’t guessed that redundancy might be the key to the meaning, then it might just seem like noise.

But (here is Freeman Dyson, paraphrasing Claude Shannon, the founder of information theory):

Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.

Two more things. First — isn’t it funny that in the months since Friedman’s article came out, we’ve had a string of revolutions in North Africa and the Middle East in which social media played a non-negligible part — where the general consensus seems to have become that social media became important precisely because citizens were able to signal to each other, in an extremely minimal way, that they knew things were bad, that the government was dishonest, that something needed to change? That, while some organizers were doubtlessly using a range of media to transmit very complex information back and forth to one another, masses of people were suddenly emboldened by that simple ping: “I’m alive right now; I’m a person; look at me”?

Second — I’m re-reading David Foster Wallace’s Infinite Jest, which I plowed through in college, and not very well, because somebody told me it was kind of like James Joyce but with more about mathematics, and I was all screw these kids playing tennis, I’m going to read some Raymond Carver. And ten years later, I’m just built to understand it so much better than I was then. Through the sheer force of biography alone, but for every other reason too.

Anyways, one of Wallace’s little linguistic tics, which kinda nagged me when I just wanted to get my Carver on, is that twice or more in his long sentences he’ll, like, repeat the same piece of information, usually just to clarify the referent of a pronoun or to specify who or what he’s talking about, but in a very ostentatious way, and frequently just for its own sake.

Here’s an example (perhaps not the best but the best I can find) about a tennis drill called “Side-to-Sides” (all emphasis mine, all footnotes dropped):

The cardiovascular finale is Side-to-Sides, conceived by van der Meer in the B.S. ’60s and demonic in its simplicity. Again split into fours on eight courts. For the top 18’s, prorector R. Dunkel at net with an armful of balls and more in a hopper beside him, hitting fungoes, one to the forehand corner and then one to the backhand corner and then farther out to the forehand corner and so on. And on. Hal Incandenza is expected at least to get a racquet on each ball; for Stice and Wayne the expectations are higher. A very unpleasant drill fatigue-wise, and for Hal also ankle-wise, what with all the stopping and reversing. Hal wears two bandages over a left ankle he shaves way more often than his upper lip. Over the bandages goes an Air-Stirrup inflatable ankle brace that’s very lightweight but looks a bit like a medieval torture-implement. It was in a stop-and-reverse move much like Side-to-Sides that Hal tore all the soft left-ankle tissue he then owned, at fifteen, in his ankle, at Atlanta’s Easter Bowl, in the third round, which he was losing anyway. Dunkel goes fairly easy on Hal, at least on the first two go-arounds, because of the ankle. Hal’s going to be seeded in at least the top 4 of the WhataBurger Inv. in a couple weeks, and woe to the prorector who lets Hal get hurt the way Hal let some of his Little Buddies get hurt yesterday.

So Wallace has already signaled that this is going to be a paragraph about repetition to exhaustion or even injury before he even does it. You could say he needs to keep clarifying and repeating these things because his sentences are so convoluted that otherwise you couldn’t follow them, but 1) his syntax is pretty clear and 2) it’s not like he’s a freak about specifying everything. He doesn’t even spell out “invitational,” let alone give any other proper noun the same first name + last name treatment he offers Hal Incandenza, who’s the main character in the story, Hal is, so we’re not likely to forget who’s being spoken about here. You could say from a literary standpoint that the repetition of the ankle mirrors the repetition of the drill, Hal’s pain in his ankle, and his and the prorector’s worry about the ankle. But it’s also just Wallace — who understands all of this, by the way, better than we do: communication, information, redundancy, efficiency, purity, the dangers of too much information, and especially the fear of being alone and the need to find connection with other human beings — creating a structure that allows him to ping his reader, saying “I am here”… and waiting for his reader to respond in kind, “I’m alive right now; I’m a person; look at me.”

6 comments

The Last Hours of @MayorEmanuel

As a follow-up to my earlier compilation, “The Two Mayors,” here is the stunning conclusion to the story of @MayorEmanuel. He won the election and, as predicted by Mayor Daley, vanished into a time vortex in order to save the multiverse.

I’ve also been boning up on my @MayorEmanuel backstory, and man, it is totally batshit in the best possible way. There are layers and layers to this thing that I couldn’t even guess at, and a few I’m probably still missing. In short, the anonymous author(s) of the thread have been building towards this science-fiction/comic-book resolution of the story for a while now, first planting the seeds months ago, then grinding them up like fine celery salt.

You can read a quick-and-dirty PDF of all of @MayorEmanuel’s tweets here, assembled by @najuu (h/t Carla Casilli). I’m not Storifying the whole thing, because 1) Twitter’s archives have a hard time going back that far in the Storify interface and 2) even if they did, I’m not stupid. But I would like to do my small part to gather the limbs of Osiris just here at the end. Enjoy.

Read more…

4 comments

The Two Mayors

Today, the city of Chicago elects its mayor. In other cities, there would be a primary vote, then another at the time of the general election in November. But given the scarcity of Chicago Republicans — it’s like 25 guys, and they’re all professors in three departments at U of C — the Democratic primary would effectively determine who would be mayor of the city anyways.

So, Chicago’s mayoral race is nonpartisan. And it’s at the end of February — which, in Chicago, is even more masochistic than it would be in cities with a more temperate climate.

Since Chicago’s longstanding Mayor Richard M. Daley announced he would not seek another term, Rahm Emanuel, former Chicago-area Congressman, Democratic Party powerhouse, and (until recently) Chief of Staff for President Obama, has sought to sew this thing up. There were some brief problems establishing his residency and right to run for office, but now it looks like he’s off to the races.

Since Emanuel announced he was running for office, he’s been joined by a delightfully funny and foul-mouthed shadow on Twitter calling himself @MayorEmanuel. Like Fake Steve Jobs before him, @MayorEmanuel combines a kind of exaggeration of the known qualities of the real Rahm Emanuel — profanity, intelligence, hyper-competitiveness — with a fully-realized, totally internal world of characters and events that has little to do with the real world and everything to do with the comic parallel universe @MayorEmanuel inhabits.

For instance, @MayorEmanuel’s “about” section on Twitter reads: “Your next motherfucking mayor. Get used to it, assholes.” The idea is that if we strip back the secrecy and public image to something so impolitic, so unlikely, we might arrive at something approximating the truth. But, despite my status as a one-time — and actually, I still hope future — Chicagoan, I haven’t been a regular reader of @MayorEmanuel. My friends retweet his funniest one-liners, and that’s good enough for me.

Yesterday, however, @MayorEmanuel outdid himself. He wrote an extended, meandering narrative of the day before the primary that took the whole parallel Rahm Emanuel thing to a different emotional, comic, cultural place entirely. It even features a great cameo by friend of the Snark Alexis Madrigal. The story is twisting, densely referential, far-ranging — and surprisingly, rather beautiful.

And so, once more using the magic of Storify, I’d like to share that story with you. I’ve added some annotations that I hope help explain what’s happening and aren’t too distracting.

In its original form, it has no title. I call it “The Two Mayors.” Read it after the jump.

Read more…
7 comments

Age of majority

Radiohead’s new album The King of Limbs dropped on Friday, prompting much love from the Twittersphere. Maybe too much. The British band hits a kind of sweet spot for the educated set: progressive contemporary music that’s equally accessible whether you’re into old-school prog/classic rock, 90s alternative, or 00s house. Still, some of the exchanges seemed a little, um, exuberant:

Still, I think music fans and cultural observers need to grapple with this a little: Radiohead’s first album, Pablo Honey, came out 18 years ago. Here’s another way to think about it: when that album came out, I was 13; now I’m 31. And from at least The Bends to the present, they’ve commanded the attention of the musical press and the rock audience as one of the top ten — or higher — bands at any given moment. You might have loved Radiohead, you might have been bored by them, you might have wished they’d gone back to an earlier style you liked better, but you always had to pay attention to them, and know where you stood. For 18 years. That’s an astonishing achievement.

Here are some comparisons. The Rolling Stones have obviously outdone everyone in the rock longevity department; even if they were sometimes a punchline, they’ve made solid music and have always been insanely profitable. But really, if you take the stretch from 1964’s The Rolling Stones to 1981’s Tattoo You — which is actually mostly a B-sides album of leftovers from 1978’s Some Girls — that’s only 17 years. If you just do their first album through Some Girls, it’s only 14 years. And that’s when the Stones basically stop evolving as a band and stop being a crucial signpost for popular music.

Very few other rock bands last that long. The Beatles didn’t. Talking Heads didn’t. The Pixies and The Velvet Underground obviously didn’t. The Who only had 13 years between their first album and Keith Moon’s overdose. When Bruce Springsteen had a hit with “Streets of Philadelphia” eighteen years after Born To Run, it was an amazing comeback. R.E.M. had about 20 years of fairly consistent attention between “Radio Free Europe” and Reveal, but that’s an unknown underground band on one end and a kind of boring washed-up band on the other with a peak in the middle.

The Flaming Lips are still pushing it. U2’s been going for about 30 years, although they’ve lost a lot of cred along the way that Radiohead hasn’t. Bob Dylan is a freak. But this is the level we’re talking about here: U2, Dylan, and Radiohead. It’s worth tipping your cap. And watching some videos.

Read more…
31 comments

Bookfuturism avant la lettre

This is from the introduction* to Steven Johnson’s Interface Culture, a book from 1997 that I hadn’t previously read:

A few final observations, and warnings, about the pages that follow. The first should be a comfort to readers who have tired of the recent bombast emanating from both the digital elite and their neo-Luddite critics. I have tried to keep this book as free of dogma and polemic as possible, emphasizing both the tremendous intellectual liberation of the modern interface and the darker, more sinister implications of that same technology.

From its outset this book has been conceived as a kind of secular response to the twin religions of techno-boosterism and techno-phobia. On the most elemental level, I see it as a book of connections, a book of links — one in which desktop metaphors cohabit with Gothic cathedrals, and hypertext links rub shoulders with Victorian novels. Like the illuminations of McLuhan’s electric speed, the commingling of traditional culture and its digital descendants should be seen as a cause for celebration, and not outrage.

This is likely to trouble extremists on both sides of the spectrum.** The neo-Luddites want you to imagine the computer as a betrayal of the book’s slower, more concentrated intelligence; the techno-utopians want you to renounce your ties to the fixed limits of traditional media. Both sides are selling a revolution — it’s just that they can’t agree on whether it’s a good thing. This book is about the continuities more than the radical breaks, the legacies more than the disavowals.

For that reason, the most controversial thing about this book may be the case it makes for its own existence. This book is both an argument for a new type of criticism and a working example of that criticism going about its business.***

Notes

* I added some extra paragraph breaks to the excerpt to make it read more like a blog post.
** Compare my “Bookfuturist Manifesto,” from The Atlantic.com, August 2010.
*** I pretty much want to be Steven Johnson right now.

3 comments

Multiple intelligences (or Why smart TVs should be more like PCs)

Like Robin, I love the counter-conventional wisdom John Herrman brings to “I Just Want A Dumb TV.” And I really like Frank Chimero’s distinction between “steadfast,” long-enduring, simple tools and “hot-swap” components of a system that you can change on the fly.

But I want to pivot from this taxonomy of “dumb” things to create a complementary taxonomy of “smart” ones. If the current crop of “smart” TVs goes wrong somehow, how does it do it? And is a “dumb” monitor the best alternative?

“Smart” and “dumb” applied to electronics/tech have a long history, but for our purposes here, let’s look at the smartphone as one model of what a smart appliance looks like. That seems to be what makers of smart TVs did, anyways. So let’s say, bare minimum, a “smart” appliance needs:

  1. A fairly versatile processor and operating system;
  2. The ability to connect to other devices on a local or global network;
  3. The ability to run some kind of secondary applications locally.

In short, it should slightly resemble a modern, networked computer. The problem with smart TVs is they work too much like smartphones and not enough like PCs.

See, smartphones are hypermobile, so you stuff a ton of capacity into the device because it’s going to have to do most things by itself. Phone, games, maps, email, the web, etc. — everything that can be jammed into those little screens.

Television screens, on the other hand, are antimobile. Like desktop PCs, they stay in one place, and you hook other things up to them: cable boxes, game systems, Blu-Ray players, and (wirelessly) remote controls.

With a smart TV, you can go in two directions to make the device “smarter.” You can try to make it super self-sufficient, doing more and more on one piece of hardware. Or you can make it better and better at talking to other devices.

There are good aesthetic reasons to do the first one: you can cut cords and clutter and save some money and electricity. Also, it’s wired in with software, not hardware. It’s not like you’ve got this crummy, outdated VCR built into the box; you can (in principle) update your OS and get a whole new set of applications and capabilities.

Still, the second way of making a TV smart seems better to me. Forget connecting my TV to the web; I want to connect my TV to my phone, my laptop, my refrigerator, my alarm clock, my media players (etc etc etc). But do it all wirelessly, over a local network. Make it easier for me to get my media — wherever it comes from — up on the biggest screen in my house. I can’t do that with a totally dumb TV, but I can’t do that easily with current-generation smart TVs either.

This is why I guess I’m more interested in “two-screen” approaches to television, where you’re using an iPad (or something) to browse channels and read about programs and tweet about what you’re watching and otherwise interact with and control what’s on your screen. Because the lesson of “hot-swapping” is that good parts that talk to each other well make the whole more than the sum of its parts.

One comment