The Western 101, via Netflix Watch Instantly

I love Westerns. My allegiance to the genre has long been known on the Snarkmatrix. (I refer you to the comment threads on Exhibit A or Exhibit B.) So I am excited that people are excited by Joel and Ethan Coen’s new Western, True Grit.

And jeez, I hope I get a few hours by myself in the next week or so to see this movie. Parenting is a serious drag on your ability to partake of the cinema, which is one reason I’ve become such a devotée of Netflix Watch Instantly. I didn’t even get to catch the restored Metropolis when it came to town, and I had only A) waited months for it and B) written a chapter of my dissertation about its director. So I don’t know if True Grit is as good as everyone says it is. What I do know, what I know the hell out of, are Westerns, and Netflix. If you don’t know Westerns, that’s fine. So long as you’ve got a Netflix subscription and streaming internet, I’ve got your back.

You probably know that True Grit (2010) is an adaptation of the same Charles Portis novel (True Grit) that was previously adapted into a movie [True Grit (1969)] that won John Wayne a Best Actor Oscar for his portrayal of the eyepatched marshal Rooster Cogburn. It’s not a remake, you’ve heard intoned, it’s a more-faithful adaptation of the novel.

Fine. Who cares? At a certain point, remakes and adaptations stop being remakes and adaptations. Does anyone care that His Girl Friday was a gender-swapping adaptation of The Front Page, a terrific Ben Hecht and Charles MacArthur play which had already been made into a movie in 1931, and which was made into a movie again in 1974 with Billy Wilder directing and Walter Matthau and Jack Lemmon playing the Cary Grant and Rosalind Russell roles?

Okay, I do. But besides me, not really. Because His Girl Friday obliterated The Front Page in our movie-watching consciousness, even though the latter is the prototype of every fast-talking newspaper comedy from, shit, His Girl Friday to the Coen Brothers’ The Hudsucker Proxy. It’s been over forty years since True Grit (1969). It’s a good movie, but if you haven’t seen it, don’t sweat it too much.

You should, however, be sweating the Western. Because not least among their virtues is that Joel and Ethan Coen care and care deeply about genre. Virtually all of their movies are a loving pastiche of one genre form or another, whether playful (like Hudsucker’s newspaper comedy or The Big Lebowski’s skewed take on the hardboiled detective), not so playful (No Country For Old Men) or both somehow at once (Miller’s Crossing, Fargo). And the Western is fickle. You’ve got to contend with books, movies, radio, and TV, all with their own assumptions, all alternating giddy hats-and-badges-and-guns-and-horses entertainment and stone-serious edge-of-civilization Greek-tragedy-meets-American-origin-stories primal rites.

I’ll save you some time, though, by giving you just twelve links, briefly annotated.


Rooting for the home team

It’s a classic paradox of American democracy: citizens love America, hate Congress, but generally like their own district’s Congressman. (Until they don’t, and then they vote for someone else, whom they usually like.)

Josh Huder (via Ezra Klein) takes on the apparent paradox, armed with some good data and historical analysis.

Huder points out something even more paradoxical: Congressional approval takes a hit not just when there’s a scandal, or when there’s partisan gridlock in the face of a crisis, but even when Congress works together to pass major legislation:

By simply doing its job Congress can alienate large parts of its constituency. So while people like their legislators, they dislike when they get together with fellow members and legislate.

From this, Huder concludes that “disapproval is built into the institution’s DNA.” But let me come at this from a different angle: professional and college sports.

There’s almost an exact isomorphism here. Fans/constituents like/love their home teams (unless their performance suffers for an extended period of time, when they switch to “throw the bums out” mode), and LOVE the game itself. But nobody really likes the league. Who would say, “I love the MLB” or “I love the NCAA” — meaning the actual organizations themselves?

Never! The decisions of the league are always suspect. They’re aggregate, bureaucratic, necessary, and not the least bit fun. Even when leagues make the right decision, we discount it; they’re just “doing their job.” The only time they can really capture our attention is when they do something awful. And most of the time, they’re just tedious drags on our attention, easily taken for granted.

If this is a structural pattern, it doesn’t seem to be limited to politics. It’s a weird blend of attachment to the local team and to the pastime itself, combined with contempt for, and misunderstanding of, the actual structures that make it all work. Because we don’t *want* to notice them at work at all, really.


Two observations on Lanier on Wikileaks

Robin set the table up (and h/t to Alexis for getting Lanier’s essay in circulation).

Here are two disjoint thoughts, slightly too long for tweets/comments:

  1. Part of Lanier’s critique of Wikileaks works astonishingly well as a critique of Google’s Ngrams, too. (I’m working up a longer post on this.) In particular, I’m thinking of this observation:

    A sufficiently copious flood of data creates an illusion of omniscience, and that illusion can make you stupid. Another way to put this is that a lot of information made available over the internet encourages players to think as if they had a God’s eye view, looking down on the whole system.

  2. I feel like we need a corollary to the Ad Hitlerem/Godwin’s Law fallacy. I’m going to call it “the Gandhi principle.” Just like trotting out the Hitler analogy for everything you disagree with shuts down a conversation by overkill, so do comparisons with Mahatma Gandhi, Martin Luther King, Nelson Mandela, Jesus, and other secular and not-so-secular activist saints.

We’ve canonized these guys, to the point where 1) we think they did everything themselves, 2) that they never varied their strategies, 3) that they never made mistakes, and 4) that disagreeing with them then or now violates a deep moral law.

More importantly, in comparison, every other kind of activism is destined to fall short. Lanier’s essay, like Malcolm Gladwell’s earlier essay on digital activism, violates the Gandhi principle. (Hmm, maybe this should be the No-Gandhi Principle. Or it doesn’t violate the Gandhi Principle, but invokes it. Which is usually a bad thing. Still sorting this part out.) The point is, both Ad Hitlerem and the Gandhi Principle opt for terminal purity over differential diagnosis. If you’re not bringing it MLK-style, you’re not really doing anything.

The irony is, Lanier’s essay is actually pretty strong at avoiding the terminal purity problem in other places — i.e., if you agree with someone’s politics, you should agree with (or ignore) their tactics, or vice versa. At its best, it brings the nuance, rather than washing it out.

Google’s Ngrams is also subject to terminal purity arguments — either it’s exposing our fundamental cultural DNA, or it’s dicking around with badly-OCRed data, and it couldn’t possibly be anything in between. To which I say — oy.


A hypothetical path to the Speakularity

Yesterday NiemanLab published some of my musings on the coming “Speakularity” – the moment when automatic speech transcription becomes fast, free and decent.

I probably should have underscored that I don’t see this moment happening in 2011, given that these musings were solicited as part of a NiemanLab series called “Predictions for Journalism 2011.” Instead, I think several things could converge next year that would bring the Speakularity a lot closer. This is pure hypothesis and conjecture, but I’m putting it out there because I think there’s a small chance that talking about these possibilities publicly might actually make them more likely.

First, let’s take a clear-eyed look at where we are, in the most optimistic scenario. Watch the first minute-and-a-half or so of this video interview with Clay Shirky. Make sure you turn closed-captioning on, and set it to transcribe the audio. Here’s my best rendering of some of Shirky’s comments alongside my best rendering of the auto-caption:

Manual transcript:

Well, they offered this penalty-free checking account to college students for the obvious reason students could run up an overdraft and not suffer. And so they got thousands of customers. And then when the students were spread around during the summer, they reneged on the deal. And so HSBC assumed they could change this policy and have the students not react because the students were just hopelessly dispersed. So a guy named Wes Streeting (sp?) puts up a page on Facebook, which HSBC had not been counting on. And the Facebook site became the source of such a large and prolonged protest among thousands and thousands of people that within a few weeks, HSBC had to back down again. So that was one of the early examples of a managed organization like a bank running into the fact that its users and its customers are not just atomized, disconnected people. They can actually come together and act as a group now, because we’ve got these platforms that allow us to coordinate with one another.

Auto transcript:

will they offer the penalty-free technique at the college students pretty obvious resistance could could %uh run a program not suffer as they got thousands of customers and then when the students were spread around during the summer they were spread over the summer the reneged on the day and to hsbc assumed that they could change this policy and have the students not react because the students were just hopeless experts so again in western parts of the page on face book which hsbc had not been counting on the face book site became the source of such a large and prolonged protest among thousands and thousands of people that within a few weeks hsbc had to back down again so that was one of the early examples are female issue organization like a bank running into the fact that it’s users are not just after its customers are not just adam eyes turned disconnected people they get actually come together and act as a group mail because we’ve got these platforms to laos to coordinate

Cringe-inducing, right? What little punctuation exists is in error (“it’s users”), there’s no capitalization, “atomized” has become “adam eyes,” “platforms that allow us” are now “platforms to laos,” and HSBC is suddenly an example of a “female issue organization,” whatever that means.
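Just how bad is it? One way to put a number on transcript quality is word error rate (WER): the word-level edit distance between a reference transcript and the machine’s hypothesis, divided by the reference length. A minimal sketch (the sample strings are the closing phrases of the two transcripts):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution

    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("they can actually come together and act as a group now",
          "they get actually come together and act as a group mail"))
# 2 substitutions out of 11 words, so roughly 0.18
```

That closing phrase is one of the cleaner stretches; run the full transcripts through and the rate climbs much higher.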

Now imagine, for a moment, that you’re a journalist. You click a button to send this video to Google Transcribe, where it appears in an interface somewhat resembling the New York Times’ DebateViewer. Highlight a passage in the text, and it will instantly loop the corresponding section of video, while you type in a more accurate transcription of the passage.

That advancement alone – quite achievable with existing technology – would speed our ability to transcribe a clip like this quite a bit. And it wouldn’t be much more of an encroachment than Google has already made into the field of automatic transcription. All of this, I suspect, could happen in 2011.
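The highlight-to-loop interaction is straightforward to sketch if you assume the captioning engine exposes word-level timestamps, which automatic transcription systems generally compute internally anyway. A toy version, with hypothetical timing data:

```python
def loop_region(words, start_idx, end_idx, pad=0.25):
    """Map a highlighted word range to a (start, end) video segment.

    `words` is a list of (word, start_sec, end_sec) tuples, the kind of
    word-level timing data a captioning engine could expose.
    `pad` adds a little audio context on either side of the selection.
    """
    start = max(0.0, words[start_idx][1] - pad)
    end = words[end_idx][2] + pad
    return (start, end)

# Hypothetical timings for a short clip:
words = [("platforms", 10.0, 10.6), ("that", 10.6, 10.8),
         ("allow", 10.8, 11.1), ("us", 11.1, 11.3)]
print(loop_region(words, 0, 3))  # loops from just before "platforms" to just after "us"
```

The player would seek to `start`, play to `end`, and repeat while the journalist types a correction into that slot.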

Now allow me a brief tangent. One of the predictions I considered submitting for NiemanLab’s series was that Facebook would unveil a dramatically enhanced Facebook Videos in 2011, integrating video into the core functionality of the site the way Photos have been, instead of making it an application. I suspect this would increase adoption, and we’d see more people getting tagged in videos. And Google might counter by adding social tagging capabilities to YouTube, the way they have with Picasa. This would mean that in some cases, Google would know who appeared in a video, and possibly know who was speaking.

Back to Google. This week, the Google Mobile team announced that they’ve built personalized voice recognition into Android. If you turn it on for your Android device, it’ll learn your voice, improving the accuracy of the software the way dictation programs such as Dragon do now.

Pair these ideas and fast-forward a bit. Google asks YouTube users whether they want to enable personalized voice recognition on videos they’re tagged in. If Google knows you’re speaking in a video, it uses what it knows about your voice to make your part of the transcription more accurate. (And hey, let’s throw in that they’ve enabled social tagging at the transcript level, so it can make educated guesses about who’s saying what in a video.)

A bit further on: Footage for most national news shows is regularly uploaded to YouTube, and this footage tends to feature a familiar blend of voices. If they were somewhat reliably tagged, and Google could begin learning their voices, automatic transcriptions for these shows could become decently accurate out of the box. That gets us to the democratized Daily Show scenario.

This is a bucketload of hypotheticals, and I’m highly pessimistic Google could make its various software layers work together this seamlessly anytime soon, but are you starting to see the path I’m drawing here?

And at this point, I’m talking about fairly mainstream applications. The launch of Google Transcribe alone would be a big step forward for journalists, driving down the costs of transcription for news applications a good amount.

Commenter Patrick at NiemanLab mentioned that the speech recognition industry will do everything in its power to prevent Google from releasing anything like Transcribe anytime soon. I agree, but I think speech transcription might be a smaller industry economically than GPS navigation,* and that didn’t prevent Google from solidly disrupting that universe with Google Navigate.

I’m stepping way out on a limb in all of this, it should be emphasized. I know very little about the technological or market realities of speech recognition. I think I know the news world well enough to know how valuable these things would be, and I think I have a sense of what might be feasible soon. But as Tim said on Twitter, “the Speakularity is a lot like the Singularity in that it’s a kind of ever-retreating target.”

The thing I’m surprised not many people have made hay with is the dystopian part of this vision. The Singularity has its gray goo, and the Speakularity has some pretty sinister implications as well. Does the vision I paint above up the creep factor for anyone?

* To make that guess, I’m extrapolating from the size of the call center recording systems market, which is projected to hit $1.24 billion by 2015. It’s only one segment of the industry, but I suspect it’s a hefty piece (15%? 20%?) of that pie. GPS, on the other hand, is slated to be a $70 billion market by 2013.
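Spelled out, that footnote’s extrapolation runs like this (the 15–20% share is the guess above, not data):

```python
call_center_2015 = 1.24e9  # projected call-center recording market, 2015
gps_2013 = 70e9            # projected GPS market, 2013

for share in (0.15, 0.20):
    total = call_center_2015 / share
    print(f"at {share:.0%} of the pie: ~${total / 1e9:.1f}B "
          f"({total / gps_2013:.1%} the size of GPS)")
```

Either way the whole speech-transcription industry lands around $6–8B, roughly a tenth the size of the GPS market Google already disrupted.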


Sci-Fi Film History 101 (via Netflix Watch Instantly)

Here’s another Netflix list from Friend of the Snarkmatrix Matt Penniman! —RS

As a supplement to Tim’s list, I thought I might offer the following. It attempts to catalog the history of science fiction in film. More specifically: it features films that take a scientific possibility or question as their central premise.

20,000 Leagues Under the Sea (1916)
deep sea life

Metropolis (1927)
robotics, dehumanization

Gojira (1954)

The Fly (1958)

La jetée (1961)
time travel

Planet of the Apes (1968)

Solaris (1972)
alien intelligence

Close Encounters of the Third Kind (1977)
alien intelligence

Mad Max (1979)
post-apocalypse society

Blade Runner (1982)

Aliens (1986)
biological weapons

Terminator 2: Judgment Day (1991)
robotics, time travel

Ghost in the Shell 2.0 (1995)
robotics, networked information

Bonus selections:

Robot Stories (2004)

Moon (2009)


Film History 101 (via Netflix Watch Instantly)

Robin is absolutely right: I like lists, I remember everything I’ve ever seen or read, and I’ve been making course syllabi for over a decade, so I’m often finding myself saying “If you really want to understand [topic], these are the [number of objects] you need to check out.” Half the fun is the constraint of it, especially since we all now know (or should know) that constraints = creativity.

So when Frank Chimero asked:

Looking to do some sort of survey on film history. Any sort of open curriculum out there like this that runs in tandem with Netflix Instant?

I quickly said, “I got this,” and got to work.

See, trying to choose over the set of every film ever made is ridiculously hard. Choosing over a well-defined subset is both easier and more useful.

Also, I knew I didn’t want to pick the best movies ever made, or my favorites, or even the most important. Again, that pressure, it’ll cripple you. I wanted to pick a smattering of films that if you watched any given, sufficiently large subset of them, you’d know a lot more about movies than when you started.

This is actually a lot like trying to design a good class. You’re not always picking the very best examples of whatever it is you’re talking about, or even the things that you most want your students to know, although obviously both of those factor into it. It’s much more pragmatic. You’re trying to pick the elements that the class is most likely to learn something from, that will catalyze the most chemistry. It’s a difficult thing to sort, but after you’ve done it for a while, it’s like driving a car, playing a video game, or playing a sport — you just start to see the possibilities opening up.

Then I decided to add my own constraints. First, I decided that I wasn’t going to include any movies after the early 1970s. You can quibble about the dates, but basically, once you get to the Spielberg-Scorsese-Coppola-Woody Allen generation of filmmakers — guys who are older but still active and supremely influential today — movies are basically recognizable to us. Jaws or Goodfellas or Paris, Texas are fantastic, classic, crucial movies, but you don’t really have to put on your historical glasses to figure them out and enjoy them, even if they came out before you were of movie-going age. The special effects are crummier, but really, movie-making just hasn’t changed that much.

Also, I wasn’t going to spend more than a half-hour putting it together. I knew film history and Netflix’s catalog well enough to do it fast, fast, fast.

And so, this was the list I came up with. As it happened, it came to a nice round 33.

I made exactly one change between making up the list and posting it here, swapping out David Lynch’s Eraserhead for Jean-Luc Godard’s Breathless. I cheated a little with Eraserhead — it’s a late movie that was shot over a really, really long period of time in the 70s and came out towards the end of that decade. And Breathless isn’t Godard’s best movie, but it’s probably the most iconic, so it was an easy choice.

There are huge limitations to this list, mostly driven by the limitations of the catalog. Netflix’s selection of Asian and African movies, beyond a handful of auteurs like Akira Kurosawa, isn’t very good. There’s no classic-period Hitchcock. There’s no Citizen Kane. There aren’t any documentaries or animated films. And you could argue until you’re blue in the face about picking film X over film Y with certain directors or movements or national cinemas.

But you know what? You wouldn’t just learn something from watching these movies, or just picking five you haven’t seen before — you would actually have fun. Except maybe Birth of a Nation. Besides its famous pro-Ku Klux Klan POV, that sucker is a haul. Happy watching.


I have mixed feelings about Facebook.

I’m not going to recount the long insomniac thought trail that led me here, but suffice it to say I ended up thinking about mission statements early this morning. Google’s came immediately to mind: To organize the world’s information and make it universally accessible and useful. I’m not sure what Twitter’s mission statement might be, but a benign one didn’t take too long to present itself: To enable a layer of concise observations on top of the world. (Wordsmiths, have at that one.)

I got completely stuck trying to think of a mission for Facebook that didn’t sound like vaguely malevolent marketing b.s. To make everything about you public? To connect you with everyone you know?

When I read Zadie Smith’s essay as an indictment of Facebook – its values, its defaults, and its tendencies – rather than the “generation” it defines, her criticisms suddenly seem a lot more cogent to me. I realized that I actually am quite ambivalent about Facebook. I thought it was worth exploring why.

I was thinking about the ways social software has changed my experience of the world. The first world-altering technology my mind summoned was Google Maps (especially its mobile manifestation), and at the thought of it, all the pleasure centers of my brain instantly lit up. Google Maps, of course, has its problems, errors, frustrating defaults, troubling implications – but these seem so far outweighed by the delights and advantages it’s delivered over the years that I can unequivocally state I love this software.

I recently had an exchange with my friend Wes about whether Google Maps, by making it so difficult to lose your way, also made it difficult to stumble into serendipity. I walked away thinking that what Google Maps enabled – the expectation that I can just leave my house, walk or drive, and search for anything I could want as I go – enabled much more serendipity than it forestalled. It’s eliminated most of the difficulties that might have prevented me from wandering through neighborhoods in DC, running around San Francisco, road-tripping across New England. And it demands very little of me, and imposes very little upon me. (One imposition, for example: All the buildings I’ve lived in have been photographed on Street View. I’m happy to abide by this invasion of privacy, because without it, I wouldn’t have found the place I live in today.) For me, Google Maps is basically an unalloyed social good.

Google has been very prolific with these sorts of products – things that bring me overwhelming usefulness with much less tangible concern. Google Search itself is, of course, a masterpiece. News Search, Gmail, Reader, Docs, Chrome, Android, Voice – even failed experiments such as Wave – I find that these things have heightened what I expect software to do for me. They have made the Internet more useful, information more accessible, and generally, life more pleasurable.

I was trying to think of a Facebook product that ameliorated my life in some similar way, and the first thing to come to mind was Photos. Facebook Photos created for me the expectation that every snapshot, every captured moment, would be shared and tagged for later retrieval. At my fifth college reunion, I made a point of taking photos with every classmate I wanted to reconnect with on Facebook. When I go home and tag my photos, I told my buddies, it will remind you that we should catch up. And it worked like a charm! I reconnected with dozens of old friends on Facebook, and now I see their updates scrolling by regularly, each one producing a tinge of warmth and good feelings.

But the dark side of Facebook Photos almost immediately presented itself as well. For me, the service has replaced the notion of a photograph as a shared, treasured moment with the reality of a photograph as a public event. I realized all of a sudden that I can’t remember the last time I took a candid photo. Look through my photos, and even those moments you might call “candid” are actually posed. I can’t sit for a picture without expecting that the photo will be publicized. Not merely made public – my public Flickr stream never provoked this sense – publicized. And although this is merely a default, easily overridden, to do so often feels like an overreaction. To go to a friend’s photo of me and untag myself, or to make myself untaggable, feels like I’m basically negating the purpose of Facebook Photos. The product exists so these images might be publicized. And increasingly, Facebook seems to be what photos are for.

Of course that’s not true. I also suddenly realized that I’ve been quietly stowing away a secret cache of images on my phone – a shot of Bryan sleeping, our cat Otis in a grocery bag, an early-morning sunlit sky – that are quickly becoming the most treasured images I possess, the ones I return to again and again.

Perhaps Facebook Photos has made my private treasure trove more valuable.

I use Facebook Photos as an example first because it’s the part of the service that’s most significantly altered my experience of the world, but also because I think it reflects something about the software’s ethos. That dumb, relentless publicness of photos on Facebook doesn’t have to be the default. Photos, by default, could be accessible only to users tagged in a set, for example, not publicized to all my friends and their friends. I’m not even sure that’s an option. (My privacy settings allow most users to see only my photos, not photos I’m tagged in. But I’m not sure what that even means. When another friend shares a photo publicly, and I’m tagged in it, I’m fairly certain our friends see that information.)

Facebook engineered the photo-sharing system in such a way as to maximize exposure rather than, say, utility. For Facebook, possibly, exposure is utility.* I think that characterizes most of the choices that underpin Facebook’s products. With most of the other social software products I use – the Google suite, WordPress, Twitter, Flickr, Dropbox, etc. – I am constantly aware of and grateful for the many ways the software is serving me. With Facebook, I’m persistently reminded that I am always serving it – feeding an endless stream of information to the insatiable hive, creating the world’s most perfect consumer profile of myself.

I don’t trust Google for a second, but I value it immensely. I trust Facebook less, and I’m growing more ambivalent about its value.

I don’t think I want to give up Facebook. I value the connections it offers, however shallow they are. I enjoy looking at photos of my friends. I like knowing people’s birthdays.

But I am wary of it, its values and its defaults. How it’s changing my expectations and my experience of the world.

* Thought added post-publication.


Now that's what I call local

Sorry; this snippet from Matt’s second-day liveblog/Twitter curation of the conversation at PubCamp blew my mind a little bit:

Matt Thompson: One of the most frequent issues users have is not being able to find something on our website. The vast majority of the time, that’s because they heard something on their local programming and are searching for it in the national site. If we had shared authentication across the system, we could be able to recognize other stations users authenticate with and show them local content.

So simple, but so powerful.

You’ve got to fine-tune just how local you get to match user expectations, though:

Matt Thompson: Discussion turns to users qualms over things like the Open Graph, turning on, for example, and suddenly seeing your friends’ names all over the page. How does the Washington Post know who my friends are?

But we quickly come back to the simple-but-powerful stuff again:

Matt Thompson: I asked for my pony: a registration system that would just keep track of what I’d read on the site, then let me know when those stories were updated/corrected.

I think we almost need to bring it back to the user end and offer something like a hybrid between the “Private Browsing/Incognito” mode that’s started to get incorporated into web browsers and the browser extension FlashBlock, which disables Flash ads and videos except when you whitelist them.

Call it “SocialBlock” (which sounds way more fun than it actually is). I browse with my identity intact, carrying it with me, but can select which sites/services I offer it to. And it’s just a quick click to turn it on or off.
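The decision rule behind a hypothetical SocialBlock is simple to sketch. Everything here (class name, method names) is invented for illustration; a real version would live inside a browser extension, not a Python class:

```python
class SocialBlock:
    """Sketch of the 'SocialBlock' idea: carry your identity with you,
    but only hand it to sites you have explicitly whitelisted."""

    def __init__(self, identity):
        self.identity = identity   # who you are, carried with you
        self.whitelist = set()     # sites allowed to see it
        self.enabled = True        # the quick on/off click

    def allow(self, site):
        self.whitelist.add(site)

    def identity_for(self, site):
        """What a site sees when it asks who you are."""
        if self.enabled and site not in self.whitelist:
            return None  # non-whitelisted sites get an anonymous visitor
        return self.identity

sb = SocialBlock("matt@example.com")
sb.allow("snarkmarket.com")
print(sb.identity_for("snarkmarket.com"))     # matt@example.com
print(sb.identity_for("washingtonpost.com"))  # None
```

Flip `enabled` off and every site sees you; leave it on and only the whitelist does, which is exactly the FlashBlock posture applied to identity.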


Tweets from PubCamp 2010

I’m sitting in the dev lounge during the last of the day’s sessions at Public Media Camp, an unconference for folks interested in public media stuff.

Fair warning: This is not going to be your standard Matt Thompson Conference Liveblog, and will possibly not be interesting in any way. I’m trying out two things: (1) live curation of Twitter (which I haven’t really done), and (2) a Snarkmarket-customized CoverItLive template that will allegedly not require you to see the title page. I’ll be very excited if this latter thing is true. Update: Not true. Still have to click to see the liveblog. Darn it.


Blogger, Reporter, Author

I want to distinguish blogging from reporting, and bloggers from reporters. But more than that, I want to distinguish the first question from the second.

Blogging is pretty easy to define as an activity. It’s writing online in a serial form, collected together in a single database. It doesn’t matter whether you’re doing it as an amateur or professional, as an individual or in a group, under your own byline or a pseudonym, long-form or on Twitter.

Reporting is a little trickier, but it’s not too tough. You search for information, whether from people or records or other reports, you try to figure out what’s true, and you relay it to somebody else. Anyone can report. They assign reports to elementary school students. Or you can be Woodward-and-Bernsteining it up, using every trick you can think of to track down data from as many sources as possible.

Now, both of these are different from what it means to be a blogger or a reporter. The latter are a matter of identity, not activity. I’ll offer an analogy. If someone says, “I’m a writer,” we don’t assume that they mean that they’re literate and capable of writing letters, words, or sentences. We might not assume that they’re a professional writer, but we do assume that they identify with the act of writing as either a profession, vocation, or distinguished skill. They own their action; it helps define who they are.

Likewise, if someone calls themselves (or if someone else calls them) a reporter or blogger, they might be referring to their job or career, but they’re definitely referring to at least a partial aspect of their identity. And just like we have preconceptions about what it means to be a “writer” — a kind of Romantic origin myth, of genius and personality expressed through language — we have preconceptions about what it means to be a blogger or a reporter.

They’re not just preconceptions, though, but practices codified in institutions, ranging from the law to business and labor practices to the collective assumptions and mores of a group.

There are lots of ways you could trace and track this, but let me follow one thread that I think is particularly important: the idea of the author-function.

Traditionally (by which I mean according to the vagaries of recent collective memory), reporters who are not columnists have bylines, but are not seen as authors. Their authority instead accrues to their institution.

If we read a story written by a typical reporter, we might say “did you see ____ in the New York Times?” If other newspapers or journalistic outlets pick up the story, if they attribute it at all, they’ll say, “According to a report in the New York Times…” This is similar to medical and scientific research, where journalists will usually say, “scientists at MIT have discovered…”

Some people within this field are different. If Paul Krugman writes something interesting, I probably won’t say “the New York Times”; I’ll say “Paul Krugman.”

In fact, there’s a whole apparatus newspapers use to distinguish the writers I’m supposed to care about from the writers I’m not. A columnist’s byline will be bigger. Their picture might appear next to their column.* They might write at regular intervals and appear in special sections of the paper. This is true in print and online.

(*This was actually one of the principal ways authorship was established in the early modern period: including an illustration of the author. Think about the famous portraits of Shakespeare. Sometimes to be thrifty, printers would reuse and relabel woodcuts: engravings of René Descartes were particularly popular, so a lot of 17th-century authors’ pictures are actually Descartes.)

Blogs do basically the same thing. Quick: name me three bloggers besides Josh Marshall who write for Talking Points Memo. If you could do it, 1) you’re good, and 2) you probably know these people personally, or at least through the internet.

These guys and girls are bloggers, they’re reporters, they’re opinionated, they have strong voices, and some of them are better than others. But I don’t know what they look like; if they followed me on Twitter tomorrow, I probably wouldn’t recognize their names. Josh Marshall, the impresario, is an author of the blog in a way that his charges are not. Or to take another example, Jason Kottke — whose writing is stylistically about as ego-less as blogging gets, but who is still the absolute author of his blog.

The Atlantic, for better or worse (I think better), took an approach to blogging that foregrounded authorship: names, photos, and columns. There are “channels” through which lots of different people write, and sometimes you pick their names and voices out of the stream, but they’re not Andrew Sullivan, Ta-Nehisi Coates, James Fallows, Megan McArdle, Jeffrey Goldberg, Alexis Madrigal, et al., or Ross Douthat, Matt Yglesias and co. before them.

Now all of these writers tackle different topics and work in different styles, but they’re all authors. Their blogs are written and held together through the force of their names and personalities. Sullivan has a team of researchers/assistants, Coates has a giant community of commenters, Alexis has a crew of rotating contributors. It doesn’t matter; it’s always their blog.

The one person who never quite fit into this scheme was Marc Ambinder. Early on, when the first group of bloggers came in, it made more sense. For one thing, almost all of them wrote about politics and culture. They each had a slightly different angle — different ages, different political positions, different training. Ambinder’s schtick was that he was a reporter. It seemed to make as much sense as anything else.

As time went on, the blogs became less and less about politics in a recognizable sense. Ta-Nehisi Coates started writing about the NFL, Jim Fallows increasingly about China and flying planes. And then the Atlantic started putting author pictures up, by the posts and on the front page.

I remember sometime not long ago seeing Ambinder’s most recent photo and saying to myself, “I know what Marc Ambinder looks like, and that’s not Marc Ambinder.” He wasn’t wearing his glasses. He’d lost a ton of weight — later I’d find out he’d had bariatric surgery. He found himself embroiled in long online arguments where he was called out by name about his politics, his sexuality, his relationships.

Here’s somebody who by dint of professional training and personal preference simply did not want to be on stage. He didn’t want people looking at him. He didn’t want to talk about himself. He couldn’t be a personality like Andrew Sullivan or Ta-Nehisi Coates, or even a classically-handsome TV anchor talking head WITH personality like Brian Williams or Anderson Cooper. He wanted to do his job, represent his profession and institution, and go home.

I’m sympathetic, because I find it just as hard to act the opposite way. By training and disposition, I’m a writer, not a reporter. I’ve had to learn repeatedly what it means to represent an institution rather than just my own ideas and sensibilities — that not every word that appears under my byline is going to be the word I chose. The vast majority of people I meet and interact with don’t care who I am or what I think, just the institution I write for.

That’s humbling, but it’s powerful, too. Sometimes, it’s appealing. One of the things I love about cities is the anonymity you can enjoy: I could be anybody and anybody could be me. If you identify with it and take it to its limit, adopting those values as yours, it’s almost impossible to turn around and do the other thing.

So far, we have lived in a world where most of the bloggers who have been successful have done so by being authors — by being taken seriously as distinct voices and personalities with particular obsessions and expertise about the world. And that colors — I won’t say distorts, but I almost mean that — our perception of what blogging is.

There are plenty of professional bloggers who don’t have that. (I read tech blogs every day, and couldn’t name you a single person who writes for Engadget right now.) They might conform to a different stereotype about bloggers. But that’s okay. I really did write snarky things about obscure gadgets in my basement while wearing pajama pants this morning. But I don’t act, write, think, or dress like that every day.