How a sale is made

It’s always nice when three blogs in your “must read” folder happily converge. First, Jason Kottke pulls a couple of super-tight paragraphs from a Chronicle of Higher Ed article by Clancy Martin, philosophy professor and onetime salesman of luxury jewelry, about how he plied his former trade:

The jewelry business — like many other businesses, especially those that depend on selling — lends itself to lies. It’s hard to make money selling used Rolexes as what they are, but if you clean one up and make it look new, suddenly there’s a little profit in the deal. Grading diamonds is a subjective business, and the better a diamond looks to you when you’re grading it, the more money it’s worth — as long as you can convince your customer that it’s the grade you’re selling it as. Here’s an easy, effective way to do that: First lie to yourself about what grade the diamond is; then you can sincerely tell your customer “the truth” about what it’s worth.

As I would tell my salespeople: If you want to be an expert deceiver, master the art of self-deception. People will believe you when they see that you yourself are deeply convinced. It sounds difficult to do, but in fact it’s easy — we are already experts at lying to ourselves. We believe just what we want to believe. And the customer will help in this process, because she or he wants the diamond — where else can I get such a good deal on such a high-quality stone? — to be of a certain size and quality. At the same time, he or she does not want to pay the price that the actual diamond, were it what you claimed it to be, would cost. The transaction is a collaboration of lies and self-deceptions.

This structure is so neat that it has to be generalizable, right? Look no further than politics, says Jamelle Bouie (filling in for Ta-Nehisi Coates). In “Why Is Stanley Kurtz Calling Obama a Socialist?”, he writes that whether or not calling Obama a socialist started out as a scare tactic, conservative commentators like Kurtz actually believe it now. He pulls a quote from Slacktivist’s Fred Clark on the problem of bearing false witness:

What may start out as a well-intentioned choice to “fight dirty” for a righteous cause gradually forces the bearers of false witness to behave as though their false testimony were true. This is treacherous — behaving in accord with unreality is never effective, wise or safe. Ultimately, the bearers of false witness come to believe their own lies. They come to be trapped in their own fantasy world, no longer willing or able to separate reality from unreality. Once the bearers of false witness are that far gone it may be too late to set them free from their self-constructed prisons.

What’s nice about pairing these two observations is that Martin’s take on self-deception in selling jewelry is binary, a pas de deux with two agents, both deceiving themselves and letting themselves be deceived. Bouie and Clark don’t really go there, but the implication is clear: in politics, the audience is ready to be convinced/deceived because it is already convincing/deceiving itself.

There’s no more dangerous position to be in, truth-wise, than to think you’re getting it figured out, that you see things other people don’t, that you’re getting over on someone. That’s how confidence games work, because that’s how confidence works. And almost nobody’s immune, as Jonah Lehrer points out, quoting Richard Feynman on selective reporting in science. He refers to a famous 1909 experiment which sought to measure the charge of the electron:

Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.

It’s all little lies and adjustments, all the way down. Where else can I get such a good deal on such a high-quality stone?

8 comments

The Western 101, via Netflix Watch Instantly

I love Westerns. My allegiance to the genre has long been known on the Snarkmatrix. (I refer you to the comment threads on Exhibit A or Exhibit B.) So I am excited that people are excited by Joel and Ethan Coen’s new Western, True Grit.

And jeez, I hope I get a few hours by myself in the next week or so to see this movie. Parenting is a serious drag on your ability to partake of the cinema, which is one reason I’ve become such a devotee of Netflix Watch Instantly. I didn’t even get to catch the restored Metropolis when it came to town, and I had only A) waited months for it and B) written a chapter of my dissertation about its director. So I don’t know if True Grit is as good as everyone says it is. What I do know, what I know the hell out of, are Westerns, and Netflix. If you don’t know Westerns, that’s fine. So long as you’ve got a Netflix subscription and streaming internet, I’ve got your back.

You probably know that True Grit (2010) is an adaptation of the same Charles Portis novel (True Grit) that was previously adapted into a movie [True Grit (1969)] that won John Wayne a Best Actor Oscar for his portrayal of the eyepatched marshal Rooster Cogburn. It’s not a remake, you’ve heard intoned, it’s a more-faithful adaptation of the novel.

Fine. Who cares? At a certain point, remakes and adaptations stop being remakes and adaptations. Does anyone care that His Girl Friday was a gender-swapping adaptation of The Front Page, a terrific Ben Hecht and Charles MacArthur play which had already been made into a movie in 1931, and which was made into a movie again in 1974 with Billy Wilder directing and Walter Matthau and Jack Lemmon playing the Cary Grant and Rosalind Russell roles?

Okay, I do. But besides me, not really. Because His Girl Friday obliterated The Front Page in our movie-watching consciousness, even though the latter is the prototype of every fast-talking newspaper comedy from, shit, His Girl Friday to the Coen Brothers’ The Hudsucker Proxy. It’s been over forty years since True Grit (1969). It’s a good movie, but if you haven’t seen it, don’t sweat it too much.

You should, however, be sweating the Western. Because not least among their virtues is that Joel and Ethan Coen care and care deeply about genre. Virtually all of their movies are a loving pastiche of one genre form or another, whether playful (like Hudsucker’s newspaper comedy or The Big Lebowski’s skewed take on the hardboiled detective), not so playful (No Country For Old Men) or both somehow at once (Miller’s Crossing, Fargo). And the Western is fickle. You’ve got to contend with books, movies, radio, and TV, all with their own assumptions, all alternating giddy hats-and-badges-and-guns-and-horses entertainment and stone-serious edge-of-civilization Greek-tragedy-meets-American-origin-stories primal rites.

I’ll save you some time, though, by giving you just twelve links, briefly annotated.
Read more…

4 comments

Rooting for the home team

It’s a classic paradox of American democracy: citizens love America, hate Congress, but generally like their own district’s Congressman. (Until they don’t, and then they vote for someone else, whom they usually like.)

Josh Huder (via Ezra Klein) takes on the apparent paradox, armed with some good data and historical analysis.

Huder points out something even more paradoxical: Congressional approval takes a hit not just when there’s a scandal, or when there’s partisan gridlock in the face of a crisis, but even when Congress works together to pass major legislation:

By simply doing its job Congress can alienate large parts of its constituency. So while people like their legislators, they dislike when they get together with fellow members and legislate.

From this, Huder concludes that “disapproval is built into the institution’s DNA.” But let me come at this from a different angle: professional and/or college sports.

There’s almost an exact isomorphism here. Fans/constituents like/love their home teams (unless their performance suffers for an extended period of time, when they switch to “throw the bums out” mode), and LOVE the game itself. But nobody really likes the league. Who would say, “I love the MLB” or “I love the NCAA” — meaning the actual organizations themselves?

Never! The decisions of the league are always suspect. They’re aggregate, bureaucratic, necessary, and not the least bit fun. Even when leagues make the right decision, we discount it; they’re just “doing their job.” The only time they can really capture our attention is when they do something awful. And most of the time, they’re just tedious drags on our attention, easily taken for granted.

If it’s a structure, it doesn’t seem to be limited to politics. It’s a weird blend of local/pastime attachment, combined with contempt/misunderstanding for the actual structures that work. Because we don’t *want* to notice them at work at all, really.

One comment

Two observations on Lanier on Wikileaks

Robin set the table up (and h/t to Alexis for getting Lanier’s essay in circulation).

Here are two disjoint thoughts, slightly too long for tweets/comments:

  1. Part of Lanier’s critique of Wikileaks works astonishingly well as a critique of Google’s Ngrams, too. (I’m working up a longer post on this.) In particular, I’m thinking of this observation:

    A sufficiently copious flood of data creates an illusion of omniscience, and that illusion can make you stupid. Another way to put this is that a lot of information made available over the internet encourages players to think as if they had a God’s eye view, looking down on the whole system.

  2. I feel like we need a corollary to the ad Hitlerum/Godwin’s Law fallacy. I’m going to call it “the Gandhi principle.” Just as trotting out the Hitler analogy for everything you disagree with shuts down a conversation by overkill, so do comparisons with Mahatma Gandhi, Martin Luther King, Nelson Mandela, Jesus, and other secular and not-so-secular activist saints.

We’ve canonized these guys, to the point where 1) we think they did everything themselves, 2) they never used different strategies, 3) they never made mistakes, and 4) disagreeing with them then or now violates a deep moral law.

More importantly, in comparison, every other kind of activism is destined to fall short. Lanier’s essay, like Malcolm Gladwell’s earlier essay on digital activism, violates the Gandhi principle. (Hmm, maybe this should be the No-Gandhi Principle. Or it doesn’t violate the Gandhi Principle, but invokes it. Which is usually a bad thing. Still sorting this part out.) The point is, both ad Hitlerum and the Gandhi Principle opt for terminal purity over differential diagnosis. If you’re not bringing it MLK-style, you’re not really doing anything.

The irony is, Lanier’s essay is actually pretty strong at avoiding the terminal purity problem in other places — i.e., the assumption that if you agree with someone’s politics, you should agree with (or ignore) their tactics, or vice versa. At its best, it brings the nuance, rather than washing it out.

Google’s Ngrams is also subject to terminal purity arguments — either it’s exposing our fundamental cultural DNA, or it’s dicking around with badly-OCRed data, and it couldn’t possibly be anything in between. To which I say — oy.

4 comments

Sci-Fi Film History 101 (via Netflix Watch Instantly)

Here’s another Netflix list from Friend of the Snarkmatrix Matt Penniman! —RS

As a supplement to Tim’s list, I thought I might offer the following. It attempts to catalog the history of science fiction in film. More specifically: it features films that take a scientific possibility or question as their central premise.

20,000 Leagues Under the Sea (1916)
deep sea life

Metropolis (1927)
robotics, dehumanization

Gojira (1954)
mutation

The Fly (1958)
hybridization

La jetée (1962)
time travel

Planet of the Apes (1968)
evolution

Solaris (1972)
alien intelligence

Close Encounters of the Third Kind (1977)
alien intelligence

Mad Max (1979)
post-apocalypse society

Blade Runner (1982)
robotics

Aliens (1986)
biological weapons

Terminator 2: Judgment Day (1991)
robotics, time travel

Ghost in the Shell 2.0 (1995)
robotics, networked information

Bonus selections:

Robot Stories (2004)

Moon (2009)

One comment

Film History 101 (via Netflix Watch Instantly)

Robin is absolutely right: I like lists, I remember everything I’ve ever seen or read, and I’ve been making course syllabi for over a decade, so I’m often finding myself saying “If you really want to understand [topic], these are the [number of objects] you need to check out.” Half the fun is the constraint of it, especially since we all now know (or should know) that constraints = creativity.

So when Frank Chimero asked:

Looking to do some sort of survey on film history. Any sort of open curriculum out there like this that runs in tandem with Netflix Instant?

I quickly said, “I got this,” and got to work.

See, trying to choose over the set of every film ever made is ridiculously hard. Choosing over a well-defined subset is both easier and more useful.

Also, I knew I didn’t want to pick the best movies ever made, or my favorites, or even the most important. Again, that pressure, it’ll cripple you. I wanted to pick a smattering of films such that if you watched any sufficiently large subset of them, you’d know a lot more about movies than when you started.

This is actually a lot like trying to design a good class. You’re not always picking the very best examples of whatever it is you’re talking about, or even the things that you most want your students to know, although obviously both of those factor into it. It’s much more pragmatic. You’re trying to pick the elements that the class is most likely to learn something from, that will catalyze the most chemistry. It’s a difficult thing to sort, but after you’ve done it for a while, it’s like driving a car, playing a video game, or playing a sport — you just start to see the possibilities opening up.

Then I decided to add my own constraints. First, I decided that I wasn’t going to include any movies after the early 1970s. You can quibble about the dates, but basically, once you get to the Spielberg-Scorsese-Coppola-Woody Allen generation of filmmakers — guys who are older but still active and supremely influential today — movies are basically recognizable to us. Jaws or Goodfellas or Paris, Texas are fantastic, classic, crucial movies, but you don’t really have to put on your historical glasses to figure them out and enjoy them, even if they came out before you were of movie-going age. The special effects are crummier, but really, movie-making just hasn’t changed that much.

Also, I wasn’t going to spend more than a half-hour putting it together. I knew film history and Netflix’s catalog well enough to do it fast, fast, fast.

And so, this was the list I came up with. As it happened, it came to a nice round 33.

I made exactly one change between making up the list and posting it here, swapping out David Lynch’s Eraserhead for Jean-Luc Godard’s Breathless. I cheated a little with Eraserhead — it’s a late movie that was shot over a really, really long period of time in the 70s and came out towards the end of that decade. And Breathless isn’t Godard’s best movie, but it’s probably the most iconic, so it was an easy choice.

There are huge limitations to this list, mostly driven by the limitations of the catalog. Netflix’s selection of Asian and African movies, beyond a handful of auteurs like Akira Kurosawa, isn’t very good. There’s no classic-period Hitchcock. There’s no Citizen Kane. There aren’t any documentaries or animated films. And you could argue until you’re blue in the face about picking film X over film Y with certain directors or movements or national cinemas.

But you know what? You wouldn’t just learn something from watching these movies, or just picking five you haven’t seen before — you would actually have fun. Except maybe Birth of a Nation. Besides its famous pro-Ku Klux Klan POV, that sucker is a haul. Happy watching.

14 comments

I have mixed feelings about Facebook.

I’m not going to recount the long insomniac thought trail that led me here, but suffice it to say I ended up thinking about mission statements early this morning. Google’s came immediately to mind: To organize the world’s information, and make it universally accessible and useful. I’m not sure what Twitter’s mission statement might be, but a benign one didn’t take too long to present itself: To enable a layer of concise observations on top of the world. (Wordsmiths, have at that one.)

I got completely stuck trying to think of a mission for Facebook that didn’t sound like vaguely malevolent marketing b.s. To make everything about you public? To connect you with everyone you know?

When I read Zadie Smith’s essay as an indictment of Facebook – its values, its defaults, and its tendencies – rather than the “generation” it defines, her criticisms suddenly seem a lot more cogent to me. I realized that I actually am quite ambivalent about Facebook. I thought it was worth exploring why.

I was thinking about the ways social software has changed my experience of the world. The first world-altering technology my mind summoned was Google Maps (especially its mobile manifestation), and at the thought of it, all the pleasure centers of my brain instantly lit up. Google Maps, of course, has its problems, errors, frustrating defaults, troubling implications – but these seem so far outweighed by the delights and advantages it’s delivered over the years that I can unequivocally state I love this software.

I recently had an exchange with my friend Wes about whether Google Maps, by making it so difficult to lose your way, also made it difficult to stumble into serendipity. I walked away thinking that what Google Maps enabled – the expectation that I can just leave my house, walk or drive, and search for anything I could want as I go – enabled much more serendipity than it forestalled. It’s eliminated most of the difficulties that might have prevented me from wandering through neighborhoods in DC, running around San Francisco, road-tripping across New England. And it demands very little of me, and imposes very little upon me. (One imposition, for example: All the buildings I’ve lived in have been photographed on Street View. I’m happy to abide by this invasion of privacy, because without it, I wouldn’t have found the place I live in today.) For me, Google Maps is basically an unalloyed social good.

Google has been very prolific with these sorts of products – things that bring me overwhelming usefulness with much less tangible concern. Google Search itself is, of course, a masterpiece. News Search, Gmail, Reader, Docs, Chrome, Android, Voice – even failed experiments such as Wave – I find that these things have heightened what I expect software to do for me. They have made the Internet more useful, information more accessible, and generally, life more pleasurable.

I was trying to think of a Facebook product that ameliorated my life in some similar way, and the first thing to come to mind was Photos. Facebook Photos created for me the expectation that every snapshot, every captured moment, would be shared and tagged for later retrieval. At my fifth college reunion, I made a point of taking photos with every classmate I wanted to reconnect with on Facebook. When I go home and tag my photos, I told my buddies, it will remind you that we should catch up. And it worked like a charm! I reconnected with dozens of old friends on Facebook, and now I see their updates scrolling by regularly, each one producing a tinge of warmth and good feelings.

But the dark side of Facebook Photos almost immediately presented itself as well. For me, the service has replaced the notion of a photograph as a shared, treasured moment with the reality of a photograph as a public event. I realized all of a sudden that I can’t remember the last time I took a candid photo. Look through my photos, and even those moments you might call “candid” are actually posed. I can’t sit for a picture without expecting that the photo will be publicized. Not merely made public – my public Flickr stream never provoked this sense – publicized. And although this is merely a default, easily overridden, to do so often feels like an overreaction. To go to a friend’s photo of me and untag myself, or to make myself untaggable, feels like I’m basically negating the purpose of Facebook Photos. The product exists so these images might be publicized. And increasingly, Facebook seems to be what photos are for.

Of course that’s not true. I also suddenly realized that I’ve been quietly stowing away a secret cache of images on my phone – a shot of Bryan sleeping, our cat Otis in a grocery bag, an early-morning sunlit sky – that are quickly becoming the most treasured images I possess, the ones I return to again and again.

Perhaps Facebook Photos has made my private treasure trove more valuable.

I use Facebook Photos as an example first because it’s the part of the service that’s most significantly altered my experience of the world, but also because I think it reflects something about the software’s ethos. That dumb, relentless publicness of photos on Facebook doesn’t have to be the default. Photos, by default, could be accessible only to users tagged in a set, for example, not publicized to all my friends and their friends. I’m not even sure that’s an option. (My privacy settings allow most users to see only my photos, not photos I’m tagged in. But I’m not sure what that even means. When another friend shares a photo publicly, and I’m tagged in it, I’m fairly certain our friends see that information.)

Facebook engineered the photo-sharing system in such a way as to maximize exposure rather than, say, utility. For Facebook, possibly, exposure is utility.* I think that characterizes most of the choices that underpin Facebook’s products. With most of the other social software products I use – the Google suite, WordPress, Twitter, Flickr, Dropbox, etc. – I am constantly aware of and grateful for the many ways the software is serving me. With Facebook, I’m persistently reminded that I am always serving it – feeding an endless stream of information to the insatiable hive, creating the world’s most perfect consumer profile of myself.

I don’t trust Google for a second, but I value it immensely. I trust Facebook less, and I’m growing more ambivalent about its value.

I don’t think I want to give up Facebook. I value the connections it offers, however shallow they are. I enjoy looking at photos of my friends. I like knowing people’s birthdays.

But I am wary of it, its values and its defaults. How it’s changing my expectations and my experience of the world.

* Thought added post-publication.

7 comments

Now that's what I call local

Sorry; this snippet from Matt’s second-day liveblog/Twitter curation of the conversation at PubCamp blew my mind a little bit:

Matt Thompson: One of the most frequent issues NPR.org users have is not being able to find something on our website. The vast majority of the time, that’s because they heard something on their local programming and are searching for it in the national site. If we had shared authentication across the system, we could be able to recognize other stations users authenticate with and show them local content.

So simple, but so powerful.

You’ve got to fine-tune just how local you get to match user expectations, though:

Matt Thompson: Discussion turns to users qualms over things like the Open Graph, turning on WaPo.com, for example, and suddenly seeing your friends’ names all over the page. How does the Washington Post know who my friends are?

But we quickly come back to the simple-but-powerful stuff again:

Matt Thompson: I asked for my pony: a registration system that would just keep track of what I’d read on the site, then let me know when those stories were updated/corrected.

I think we almost need to bring it back to the user end and offer something like a hybrid between the “Private Browsing/Incognito” mode that’s started to get incorporated into web browsers and the browser extension FlashBlock, which disables Flash ads and videos except when you whitelist them.

Call it “SocialBlock” (which sounds way more fun than it actually is). I browse with my identity intact, carrying it with me, but can select which sites/services I offer it to. And it’s just a quick click to turn it on or off.
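To make the idea a little more concrete, here’s a minimal, purely hypothetical sketch (in TypeScript) of the whitelist logic something like “SocialBlock” might rest on. Everything here — the name, the functions, the example sites — is invented for illustration; it’s not a description of any real extension or API.

```typescript
// Hypothetical "SocialBlock" core: identity is only offered to sites the
// user has explicitly whitelisted; everywhere else, I browse as nobody.

const identityWhitelist = new Set<string>(); // hostnames allowed to see who I am

// The "quick click": flip whether a site gets my identity. Returns the new state.
function toggleSite(hostname: string): boolean {
  if (identityWhitelist.has(hostname)) {
    identityWhitelist.delete(hostname);
    return false;
  }
  identityWhitelist.add(hostname);
  return true;
}

// Ask before attaching any identity (social widgets, shared logins, and so on)
// to a page: the default is anonymous unless the host has been whitelisted.
function shouldSendIdentity(url: string): boolean {
  return identityWhitelist.has(new URL(url).hostname);
}

// Example: opt in to washingtonpost.com, stay anonymous everywhere else.
toggleSite("washingtonpost.com");
console.log(shouldSendIdentity("https://washingtonpost.com/story")); // true
console.log(shouldSendIdentity("https://example.com/"));             // false
```

The point of the sketch is just the default: anonymity unless I say otherwise, with the toggle living on my side of the browser rather than the site’s.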

One comment

Tweets from PubCamp 2010

I’m sitting in the dev lounge during the last of the day’s sessions at Public Media Camp, an unconference for folks interested in public media stuff.

Fair warning: This is not going to be your standard Matt Thompson Conference Liveblog, and will possibly not be interesting in any way. I’m trying out two things: (1) live curation of Twitter (which I haven’t really done), and (2) a Snarkmarket-customized CoverItLive template that will allegedly not require you to see the title page. I’ll be very excited if this latter thing is true. Update: Not true. Still have to click to see the liveblog. Darn it.

Comments

Blogger, Reporter, Author

I want to distinguish blogging from reporting, and bloggers from reporters. But more than that, I want to distinguish the first question from the second.

Blogging is pretty easy to define as an activity. It’s writing online in a serial form, collected together in a single database. It doesn’t matter whether you’re doing it as an amateur or professional, as an individual or in a group, under your own byline or a pseudonym, long-form or on Twitter.

Reporting is a little trickier, but it’s not too tough. You search for information, whether from people or records or other reports, you try to figure out what’s true, and you relay it to somebody else. Anyone can report. They assign reports to elementary school students. Or you can be Woodward-and-Bernsteining it up, using every trick you can think of to track down data from as many sources as possible.

Now, both of these are different from what it means to be a blogger or a reporter. The latter are a matter of identity, not activity. I’ll offer an analogy. If someone says, “I’m a writer,” we don’t assume that they mean that they’re literate and capable of writing letters, words, or sentences. We might not assume that they’re a professional writer, but we do assume that they identify with the act of writing as either a profession, vocation, or distinguished skill. They own their action; it helps define who they are.

Likewise, if someone calls themselves (or if someone else calls them) a reporter or blogger, they might be referring to their job or career, but they’re definitely referring to at least a partial aspect of their identity. And just like we have preconceptions about what it means to be a “writer” — a kind of Romantic origin myth, of genius and personality expressed through language — we have preconceptions about what it means to be a blogger or a reporter.

They’re not just preconceptions, though, but practices codified in institutions, ranging from the law to business and labor practices to the collective assumptions and mores of a group.

There are lots of ways you could trace and track this, but let me follow one thread that I think is particularly important: the idea of the author-function.

Traditionally (by which I mean according to the vagaries of recent collective memory), reporters who are not columnists have bylines, but are not seen as authors. Their authority instead accrues to their institution.

If we read a story written by a typical reporter, we might say “did you see ____ in the New York Times?” If other newspapers or journalistic outlets pick up the story, if they attribute it at all, they’ll say, “According to a report in the New York Times…” This is similar to medical and scientific research, where journalists will usually say, “scientists at MIT have discovered…”

Some people within this field are different. If Paul Krugman writes something interesting, I probably won’t say “the New York Times”; I’ll say “Paul Krugman.”

In fact, there’s a whole apparatus newspapers use in order to distinguish writers I’m supposed to care about and writers I’m not. A columnist’s byline will be bigger. Their picture might appear next to their column.* They might write at regular intervals and appear in special sections of the paper. This is true in print or online.

(*This was actually one of the principal ways authorship was established in the early modern period: including an illustration of the author. Think about the famous portraits of Shakespeare. Sometimes to be thrifty, printers would reuse and relabel woodcuts: engravings of René Descartes were particularly popular, so a lot of 17th-century authors’ pictures are actually Descartes.)

Blogs do basically the same thing. Quick: name me three bloggers besides Josh Marshall who write for Talking Points Memo. If you could do it, 1) you’re good, and 2) you probably know these people personally, or at least through the internet.

These guys and girls are bloggers, they’re reporters, they’re opinionated, they have strong voices, and some of them are better than others. But I don’t know what they look like; if they followed me on Twitter tomorrow, I probably wouldn’t recognize their names. Josh Marshall, the impresario, is an author of the blog in a way that his charges are not. Or to take another example, Jason Kottke — whose writing is nearly as ego-less as it can probably get in terms of style, but who still is the absolute author of his blog.

The Atlantic, for better or worse (I think better), took an approach to blogging that foregrounded authorship: names, photos, and columns. There are “channels” through which lots of different people write, and sometimes you pick their names and voices out of the stream, but they’re not Andrew Sullivan, Ta-Nehisi Coates, James Fallows, Megan McArdle, Jeffrey Goldberg, Alexis Madrigal, et al., or Ross Douthat, Matt Yglesias and co. before them.

Now all of these writers tackle different topics and work in different styles, but they’re all authors. Their blogs are written and held together through the force of their names and personalities. Sullivan has a team of researchers/assistants, Coates has a giant community of commenters, Alexis has a crew of rotating contributors. It doesn’t matter; it’s always their blog.

The one person who never quite fit into this scheme was Marc Ambinder. Early on, when the first group of bloggers came in, it made more sense. For one thing, almost all of them wrote about politics and culture. They each had a slightly different angle — different ages, different political positions, different training. Ambinder’s schtick was that he was a reporter. It seemed to make as much sense as anything else.

As time went on, the blogs became less and less about politics in a recognizable sense. Ta-Nehisi Coates started writing about the NFL, Jim Fallows increasingly about China and flying planes. And then the Atlantic started putting author pictures up, by the posts and on the front page.

I remember sometime not long ago seeing Ambinder’s most recent photo on TheAtlantic.com and saying to myself, “I know what Marc Ambinder looks like, and that’s not Marc Ambinder.” He wasn’t wearing his glasses. He’d lost a ton of weight — later I’d find out he’d had bariatric surgery. He found himself embroiled in long online arguments where he was called out by name about his politics, his sexuality, his relationships.

Here’s somebody who by dint of professional training and personal preference simply did not want to be on stage. He didn’t want people looking at him. He didn’t want to talk about himself. He couldn’t be a personality like Andrew Sullivan or Ta-Nehisi Coates, or even a classically-handsome TV anchor talking head WITH personality like Brian Williams or Anderson Cooper. He wanted to do his job, represent his profession and institution, and go home.

I’m sympathetic, because I find it just as hard to act the opposite way. By training and disposition, I’m a writer, not a reporter. I’ve had to learn repeatedly what it means to represent an institution rather than just my own ideas and sensibilities — that not every word that appears under my byline is going to be the word I chose. The vast majority of people I meet and interact with don’t care who I am or what I think, just the institution I write for.

That’s humbling, but it’s powerful, too. Sometimes, it’s appealing. One of the things I love about cities is the anonymity you can enjoy: I could be anybody and anybody could be me. If you identify with it and take it to its limit, adopting those values as yours, it’s almost impossible to turn around and do the other thing.

So far, we have lived in a world where most of the bloggers who have been successful have done so by being authors — by being taken seriously as distinct voices and personalities with particular obsessions and expertise about the world. And that colors — I won’t say distorts, but I almost mean that — our perception of what blogging is.

There are plenty of professional bloggers who don’t have that. (I read tech blogs every day, and couldn’t name you a single person who writes for Engadget right now.) They might conform to a different stereotype about bloggers. But that’s okay. I really did write snarky things about obscure gadgets in my basement while wearing pajama pants this morning. But I don’t act, write, think, or dress like that every day.

10 comments