
Was Marc Ambinder actually a blogger?

Last week, Marc Ambinder reached the end of his tenure as a politics blogger for The Atlantic, and toasted the event with a thoughtful post on the nature of blogging. The central nugget:

Really good print journalism is ego-free. By that I do not mean that the writer has no skin in the game, or that the writer lacks a perspective, or even that the writer does not write from a perspective. What I mean is that the writer is able to let the story and the reporting process, to the highest possible extent, unfold without a reporter’s insecurities or parochial concerns intervening. Blogging is an ego-intensive process. Even in straight news stories, the format always requires you to put yourself into narrative. You are expected to not only have a point of view and reveal it, but be confident that it is the correct point of view. There is nothing wrong with this. As much as a writer can fabricate a detachment, or a “view from nowhere,” as Jay Rosen has put it, the writer can also fabricate a view from somewhere. You can’t really be a reporter without it. I don’t care whether people know how I feel about particular political issues; it’s no secret where I stand on gay marriage, or on the science of climate change, and I wouldn’t have it any other way. What I hope I will find refreshing about the change of formats is that I will no longer be compelled to turn every piece of prose into a personal, conclusive argument, to try and fit it into a coherent framework that belongs to a web-based personality called “Marc Ambinder” that people read because it’s “Marc Ambinder,” rather than because it’s good or interesting.

My esteemed coblogger tweeted some terrific observations about Ambinder’s post:

@mthomps @robinsloan Now you can blog and be a reporter in a different way from how Ambinder & The Atlantic think of those two things.

@mthomps @robinsloan But Ambinder’s (& others’) conception of “reporter” & Atlantic’s (& others’) conception of blogging are incompatible.


I expect when Tim has more than 140 characters, he’ll nod to the fact that The Atlantic’s website actually encompasses many different ideas of what blogging means – from Andrew Sullivan’s flood of commentless links and reader emails to Ta-Nehisi Coates’ rollicking salons to Ambinder’s own sparsely linked analyses. And beyond the bounds of The Atlantic there are so many other ideas, as many types of blogs as there are types of books, and maybe more – Waiter Rant to Romenesko to Muslims Wearing Things to this dude’s LiveJournal to BLDGBLOG.

That Ambinder’s essay doesn’t really acknowledge this – that it seems so curiously essentialist about a format that’s engendered so much diversity – disappoints me, because he’s such a thoughtful, subtle writer at his best. His sudden swerve into the passive voice – “You are expected to not only have a point of view” – briefly made me worry that he intends to become one of those print journalists who use the cloak of institutional voice to write weaselly, ridiculous phrases such as “Questions are being raised.”

It puzzles me that the same fellow who wrote that “a good story demolishes counterarguments” would casually drop the line, “Really good print journalism is ego-free.” “What I mean,” Ambinder says, “is that the writer is able to let the story and the reporting process, to the highest possible extent, unfold without a reporter’s insecurities or parochial concerns intervening.” I think I know what type of long-form journalism he’s referring to – there’s a wonderful genre of stories that make their case with a simple, sequential presentation of fact after unadorned fact. The Looming Tower. A Problem from Hell. David Grann’s stunning “Trial by Fire” in the New Yorker.

But there’s an equally excellent genre of journalism that foregrounds the author’s curiosities, concerns and assumptions – James Fallows’ immortal foretelling of the Iraq War, Atul Gawande’s investigation of expenditures in health care. This is ego-driven reporting, in the best possible way. For every Problem from Hell, there’s another Omnivore’s Dilemma. Far from demolishing counterarguments, Ambinder’s mention of “ego-free journalism” instantly summons to mind its opposite.

Likewise, his contention that “blogging is an ego-intensive process” has to grapple with the fact that some of the best blogging is just the reverse. It doesn’t square with examples such as Jim Romenesko, whose art is meticulously effacing himself from the world he covers, leaving a digest rich with voice and judgment so veiled you barely even notice someone’s behind it. In fact, contra Ambinder, I’ve found that one of the most difficult types of blogging to teach traditional reporters is this very trick of being a listener and reader first, suppressing the impulse to develop your own take until you’ve surveyed others and brought the best of them to your crowd. Devoid as it is of links, non-Web journalism often fosters a pride of ownership that can become insidious – a constant race to generate information that might not actually help us understand the world any better, but is (1) new and (2) yours. Unchecked, that leads inevitably to this.

In just the way Marc Ambinder’s post wasn’t necessarily an attack on blogging, this isn’t necessarily a defense of it, or an attack on traditional journalism. If Ambinder recast his musings on blogging in a slightly different way, I’d actually agree with him wholeheartedly. If, as I’ve been arguing in this post, the form is flexible enough to encompass so many approaches, that means every choice contributes to a blog’s unique identity. Perhaps more than any other publishing/broadcasting format, a blog is a manifestation of the choices and idiosyncrasies of its authors.

And I think this is what Ambinder’s experience reflects – his choices and his idiosyncrasies. He chose to blog about national politics – an extraordinarily crowded (and particularly solipsistic) field. To distinguish himself from the crowd, he chose to craft a persona known for its canny insider’s pose and behind-the-scenes insights. I think it was a terrific choice; I’ve enjoyed his Atlantic writing a lot. But there’s little essential about the format that compelled him to this choice.

The title of this post is, of course, facetious. (Although I’d kind of love it if the pointless “Who’s a journalist” debates gave way to pointless “Who’s a blogger” ones.) Of course Marc Ambinder was a blogger – he tended to a series of posts displayed on the Web in reverse-chronological order. Beyond that, there are common patterns and proven techniques, but very few rules. Print imposes more constraints, but some folks find a sort of freedom in that. I hope Marc Ambinder does, and I hope to read the product.


ONAmarket: Don't call it UGC!

ONAmarket: Rebooting the News

ONAmarket: Rethinking Online Comments

Snark by Snark er … ONAmarket

As the official Snarkmarket liveblogger, I’m always on the lookout for good stuff to liveblog. This morning’s event is the intro panel for the Online News Association conference in DC.


All the pieces matter: Monopoly and The Wire


The Poke is a UK satirical site, a little bit like Chicago’s The Onion. Thursday, they published a fake news article about a version of Monopoly — complete with a fully-imagined and -illustrated fake gameboard — based on the beloved HBO series The Wire:

“The Wire is all about corners,” says Hasbro spokesperson Jane McDougall, “and the Monopoly board is all about corners. It was a natural fit.” Based around the journey a young gangster might take through the fictionalised Baltimore of the show, players move from corner to stoop, past institutions featured in successive series like the school system and the stevedores union, acquiring real estate, money and power before ending up at the waterfront developments and City Hall itself.

There’s a classic scene in the first season of The Wire where D’Angelo (nephew of the drug boss Avon Barksdale and one of the series’s many unlikely protagonists) tries to teach two young dealers who work for him (Bodie and Wallace) how to play chess. Chess quickly turns into an elaborate metaphor to describe the violent realities and unreal ideals of the drug world they all live in.

But of course, it turns out not just to describe the drug world, but any world seen through the lens of The Wire. The two sides of the chess board could be one drug gang warring against another — Avon vs Marlo Stanfield. It could be the police detail trying to catch and trap the leader of one of the gangs. In the world of the police, too, pawns are expendable, and the people at the top fall under a completely different logic. (Every so often, a pawn will be transformed — like Prez, the hapless street cop who becomes first an invaluable decoder and data-miner and eventually, a middle school math teacher.)

But the single-plane, A vs B world of chess is really only an adequate metaphor for the narrow world of The Wire’s first season, the immediate objectives that eventually get unravelled. As Stringer Bell tries to tell his partner Avon, “there are games beyond the game.”

That’s the world Stringer tries to navigate. You begin with drugs, fighting for corners. Then you step back, build institutions – other people work for you. Eventually, you transcend the street level and become a power broker, directing traffic but never touching the street. Then you take your ill-gotten capital — your Monopoly money — and turn it into real capital, by investing in (get this) real estate, political connections, legitimate businesses. Stringer Bell’s dream is Michael Corleone’s dream (which was Joe Kennedy’s dream). Power into wealth and back into power again. But it’s all just business.

That’s where Monopoly comes in. Like chess, Monopoly is about controlling territory. Unlike chess, it’s not neofeudal combat, with handed-down traditions and ideologies of strategy and honor — the illusion that everything is perfectly under the player’s control, that all the pieces in the game are visible.

Monopoly is transparently about money and greed. It lays bare the multiple, adjacent worlds and the interlocking systems that tie them together. (In The Wire, the worlds adjacent to drugs and cops include the ports, politics, the schools, and the media.) You gain territory and choose how you build on it, but you also roll dice and overturn hidden cards that can send you in a completely different direction. It’s actually absurdly easy for players to cheat — especially if you let them control the bank. And every time you pass Go, the game — at least in part — starts over again.

The Wire is about a lot of things — the decline of the American city, the futility of the war on drugs, the corruption of our institutions. It’s also about the gap between our ideologies of how things ought to be as opposed to the way they actually are. “You want it to be one way,” drug kingpin Marlo tells a worn-out security guard who tries to stop him from shoplifting. “But it’s the other way.”

Overwhelmingly, that gap plays out in the field of work. The second season, about the blue-collar port workers, is transparently about work — but really, every season is about workers, bosses, money, promotions, recognition. The innovation of The Wire with respect to its representation of drug gangs and cops is to present them as the mundane, kind of screwed-up workplaces that they are.

And capitalism has always been screwed-up about work. On the one hand, we’ve got Weber: the Protestant idea that work has an ethical value, that everybody has a calling and that we prove ourselves through our success. On the other, we’ve got Marx: the only way the system works is by extracting value from its workers, and the more value it can extract for less investment, the better the people at the top make out. “Do more with less,” as the newspaper editor, mayor, and police bosses say over and over again.

I think this is how I finally came to terms with The Wire‘s last season, which added journalism to the mix. It’s about that disillusionment — the idea that the work of journalism has an intrinsic value, and the corruption of that through cost-cutting and self-serving behavior. And maybe that disillusionment is extra bitter for Simon, who couldn’t stand what capitalism did to his newspaper, his city, its employers, its politics. The gall is too thick.

Simon’s collaborator Ed Burns had a more reconciled view of it; he’d worked as a cop, as a teacher, then a screenwriter/producer, and seemed to find satisfaction in different parts of each of them. It’s Burns’s wisdom we get when Lester Freamon tells Jimmy McNulty — who (like Simon) unleashes his anger on anyone who tries to get between him and his work — “the job will not save you.”

A Wire-themed Monopoly board might have begun as a joke, but let me tell you, Hasbro: you should definitely think about it. I posted the link on Twitter, and it was picked up by Kottke and then by Slate, both of whom credited me. You wouldn’t believe the reaction people had to this. Just like the series itself, it struck a chord. Also, just think of all the quotes from the series you can use to talk trash while you play.


The Comedy Closer

Bill Murray is 60 years old today, which is a little bit unbelievable. The Beatles, Dylan, and The Stones can be in their 60s, and Woody Allen sometimes seems like he was ALWAYS in his 70s, but Bill Murray? 60? My parents aren’t even 60 yet, but Bill Murray is?

Maybe between movies, he gets in a spaceship that approaches the speed of light, so 60 earth years have passed, but he’s still really (let’s say) 48. He understands aging, all too well, because he’s seen it happen to the people around him at lightning speed, but he himself is only slowly, gently moving through middle age.

HiLobrow has a short but very fine appreciation, which makes me miss their daily HiLo Heroes birthday posts all the more. The site now runs only occasional pieces, averaging about one a week. I’m guessing it’s because the editorial load was too large to bear.

I know the editors, but I haven’t actually asked them why; I know that from my own occasional entry-writer’s perspective, it seemed like way too much work. But golly-gosh, these are still some of my favorite things to read on the web.

Here are some of my favorite Bill Murray clips. Watching them, you see that Murray’s real genius may be in his ability to react to those around him with sanity AND lunacy; like Woody Allen at his peak, he’s George and Gracie rolled into one. He’s such a generous comedic actor, he makes even ciphers like Andie MacDowell in Groundhog Day or Scarlett Johansson in Lost In Translation look great. And because your attention’s still on him, you don’t even notice he’s doing it.

On Twitter, I compared him to baseball closer Mariano Rivera. Murray — maybe especially as he’s gotten older — is the relief pitcher who finishes every game/scene. He makes everybody look better; you’re always talking about him, but somebody else usually gets the win. The starters set the table, and he just kills you a half-dozen different ways. Fastball = punchline, change-up = muted expression, curveball = unexpected character transformation, and a devastating fluttery cut fastball that’s a mixture of all three.


MSU Commencement Speech: May 3, 2001

On Twitter today — and I mean, like ten minutes ago — I got into a discussion with Matt Novak and Mat Honan about our memories of the Cold War. Matt was about six years old when it ended, Mat was 17, and I was 11, so we all had slightly different memories, but we each recall the atmosphere of fear and dread we had then.

Mat Honan pointed out that 9/11/2001 hadn’t scared him the way it had many others because he’d grown up in the shadow of nuclear war. The spectacle of the destruction of whole cities, whole nations, is of a different order of magnitude than three or four unconventional attacks on American cities. It just is. Maybe the latter is actually more frightening, because it’s more concrete, in the same way that falling out of a roller coaster scares us more than dying of heart disease. The first one, you can see.

I remembered that I’d been thinking a lot about nuclear war in 2000-2001 — mostly how the threat had been gently fading for ten years, like a fingerprint on glass — and that I’d mentioned it in my very unusual commencement speech that I gave to Michigan State’s College of Arts & Letters in May 2001.

I’d already gotten my BA in Mathematics in the fall, and was finishing my second/dual degree in Philosophy, starting an MA program in Math that everyone knew I’d never finish. (Hey, they gave me a job teaching algebra that spring and that summer!)

I knew I wanted to be a professor, but didn’t know in what; I wrote some awful applications to philosophy programs at Berkeley, Princeton, and Chicago explaining that I was interested in Greek philosophy, Nietzsche, formal logic, and John Locke, which I’m sure pegged me as someone who had no idea what they wanted to do and no clear research program to pursue, and that was probably right. I was still waiting for the official rejection slip from Berkeley, trying to make up my mind whether I was going to split to Chicago for their consolation-prize master’s program, stay in East Lansing and teach more math, or try to find real work.

Paul Gauguin, Where Do We Come From? What Are We? Where Are We Going?

I was obsessed with T.S. Eliot and Gauguin; I wanted to go to Boston that summer to find out more about both of them, but blew out a tire on the way and never made it. I’d already written my commencement speech, though. Here it is.

(And before you ask, yes—this is total Sloan-bait for him to post HIS speech that he gave the next year to the BIG room at MSU, assuming he can find it on his hard drive.)



Snarkmarket Dispatches From Within Wired.com

Plenty of my posts at Wired.com’s Gadget Lab are pretty different from what I used to post, or even would want to post, here at Snarkmarket. (We don’t do a whole lot of product hands-on, industry news, or microprocessor specs, for example, here at Snarkmarket.)

Some of them, though, are totally SM-appropriate. Here’s a short list of posts that Snarkmarket readers might have missed in the past week that I think you’d love under any masthead:

Hope you enjoy! (And please, comment! We need an injection of Snarkmarket comment awesomeness at Wired badly. It’s a bad vibe over there.)


Constellation: The Internet ≅ Islam

I’ve been reading semi-extensively (i.e., as much as I can without breaking down and buying any more books) about the history of Islam. I’m partly motivated by a desire to better understand its philosophy and manuscript traditions, partly by a half-dozen other reasons too complicated to explain, but mostly just by long-standing interest. A few of my Kottke posts came out of this, as Robin pointed out.

So the best article I’ve come across in a while that touches on all of these things is “What is the Koran?” which appeared in The Atlantic back in 1999. It’s an examination of scholarly debates over the historicity of the Koran, and the propriety of Western scholars applying empirical/rationalist techniques to a holy text (especially when, historically speaking, Orientalism of this kind hasn’t been motivated by knowledge for knowledge’s sake), plus clampdowns on Muslim writers who’ve brought the traditional history into question.

(Brief summary: Mohammed didn’t write, but received revelations from God, which he recited and others memorized and/or wrote down. A while later, just as with Christianity, a council produced an officially sanctioned text, knocking out variant copies and apocryphal texts, some of which were… extremely interesting. So, let’s imagine the Gnostic Gospels coming out in a country ruled by fundamentalists.)

Anyways, part of the problem these scholars are struggling with is just how FAST Islam grew from outsider rebels to ruling establishment:

Not surprisingly, given the explosive expansion of early Islam and the passage of time between the religion’s birth and the first systematic documenting of its history, Muhammad’s world and the worlds of the historians who subsequently wrote about him were dramatically different. During Islam’s first century alone a provincial band of pagan desert tribesmen became the guardians of a vast international empire of institutional monotheism that teemed with unprecedented literary and scientific activity. Many contemporary historians argue that one cannot expect Islam’s stories about its own origins—particularly given the oral tradition of the early centuries—to have survived this tremendous social transformation intact. Nor can one expect a Muslim historian writing in ninth- or tenth-century Iraq to have discarded his social and intellectual background (and theological convictions) in order accurately to describe a deeply unfamiliar seventh-century Arabian context. R. Stephen Humphreys, writing in Islamic History: A Framework for Inquiry (1988), concisely summed up the issue that historians confront in studying early Islam.

If our goal is to comprehend the way in which Muslims of the late 2nd/8th and 3rd/9th centuries [Islamic calendar / Christian calendar] understood the origins of their society, then we are very well off indeed. But if our aim is to find out “what really happened,” in terms of reliably documented answers to modern questions about the earliest decades of Islamic society, then we are in trouble.

But one of the things that happened during this period is that Islam went from wild, oral, incomprehensible traditions to scholarly/poetic/cultural flowering to clamped-down authoritarian fundamentalism:

As Muslims increasingly came into contact with Christians during the eighth century, the wars of conquest were accompanied by theological polemics, in which Christians and others latched on to the confusing literary state of the Koran as proof of its human origins. Muslim scholars themselves were fastidiously cataloguing the problematic aspects of the Koran—unfamiliar vocabulary, seeming omissions of text, grammatical incongruities, deviant readings, and so on. A major theological debate in fact arose within Islam in the late eighth century, pitting those who believed in the Koran as the “uncreated” and eternal Word of God against those who believed in it as created in time, like anything that isn’t God himself. Under the Caliph al-Ma’mun (813-833) this latter view briefly became orthodox doctrine. It was supported by several schools of thought, including an influential one known as Mu’tazilism, that developed a complex theology based partly on a metaphorical rather than simply literal understanding of the Koran.

By the end of the tenth century the influence of the Mu’tazili school had waned, for complicated political reasons, and the official doctrine had become that of i’jaz, or the “inimitability” of the Koran. (As a result, the Koran has traditionally not been translated by Muslims for non-Arabic-speaking Muslims. Instead it is read and recited in the original by Muslims worldwide, the majority of whom do not speak Arabic. The translations that do exist are considered to be nothing more than scriptural aids and paraphrases.) The adoption of the doctrine of inimitability was a major turning point in Islamic history, and from the tenth century to this day the mainstream Muslim understanding of the Koran as the literal and uncreated Word of God has remained constant.

Okay. Now let’s read The Economist, “The future of the internet: A virtual counter-revolution”:

THE first internet boom, a decade and a half ago, resembled a religious movement. Omnipresent cyber-gurus, often framed by colourful PowerPoint presentations reminiscent of stained glass, prophesied a digital paradise in which not only would commerce be frictionless and growth exponential, but democracy would be direct and the nation-state would no longer exist…

Fifteen years after its first manifestation as a global, unifying network, it has entered its second phase: it appears to be balkanising, torn apart by three separate, but related forces…. It is still too early to say that the internet has fragmented into “internets”, but there is a danger that it may splinter along geographical and commercial boundaries… To grasp why the internet might unravel, it is necessary to understand how, in the words of Mr Werbach, “it pulled itself together” in the first place. Even today, this seems like something of a miracle…

One reason may be that the rapid rise of the internet, originally an obscure academic network funded by America’s Department of Defence, took everyone by surprise. “The internet was able to develop quietly and organically for years before it became widely known,” writes Jonathan Zittrain, a professor at Harvard University, in his 2008 book, “The Future of the Internet—And How To Stop It”. In other words, had telecoms firms, for instance, suspected how big it would become, they might have tried earlier to change its rules.

Maybe this is a much more common pattern than we might realize; things start out radical and unpredictable, resolve into a productive, self-generating force, then stagnate and become fixed or die. Boil it down, and it sounds fairly typical. That’s how stars work, that’s how cities work, maybe that’s just how life works.

But in both articles, Islam and the Internet are presented as outliers. Judaism and Christianity didn’t grow as fast as Islam did, and their textual traditions produce similar problems, but those problems don’t appear to be as sharp. Likewise, information networks like railroads, the telegraph, and telephone are presented as normally developing; the internet is weird. Either this pattern is more common than we think it is, or it isn’t. Either way, it’s a meaningful congruence.

If that’s the case — if we can use the history of Islam to think about the internet, and vice versa — then what are the lessons? What are the potential consequences? What interventions, if necessary, are possible? (We have to confront the possibility in both cases that any intervention might be ruinous.)
