This is what sports liveblogging is for

Every sport, I believe, has its own optimal medium. For baseball, I like the intimacy of radio, and the timing and traditions of the medium lend themselves well to a sport driven by discrete, well-defined actions. Pro and college football actually work better on television than in person — unless you’re intoxicated, when all bets are off. Soccer, as this year’s World Cup proves, lends itself to Twitter’s ability to celebrate goals, talk trash, and complain about calls (or diving for calls) in truncated bursts. Basketball, hockey, and (usually) tennis have a combination of speed, intimacy, and crowd effect that makes the stadium experience hardest to beat or replicate.

But what about a tennis match, like the one between John Isner and Nicolas Mahut at Wimbledon, that, because there is no tiebreak in the final set and play keeps being suspended for darkness, spills over into more than ten hours and a third day? In such a case, stadium attendance and television alike become gruesome; you’re watching something that resembles a tennis match, but feels more like an all-night dance-a-thon. It’s horrible and fascinating at the same time. You can’t bear to watch, but you need periodic updates, because at any moment, something — anything — may happen.

Here, then, is the perfect sports experience for the liveblog. And here, too, The Guardian’s Xan Brooks is the master, riveting to read even in retrospect. Consider:

4.05pm: The Isner-Mahut battle is a bizarre mix of the gripping and the deadly dull. It’s tennis’s equivalent of Waiting For Godot, in which two lowly journeymen comedians are forced to remain on an outside court until hell freezes over and the sun falls from the sky. Isner and Mahut are dying a thousand deaths out there on Court 18 and yet nobody cares, because they’re watching the football. So the players stand out on their baseline and belt aces past each other in a fifth set that has already crawled past two hours. They are now tied at 18 games apiece.

On and on they go. Soon they will sprout beards and their hair will grow down their backs, and their tennis whites will yellow and then rot off their bodies. And still they will stand out there on Court 18, belting aces and listening as the umpire calls the score. Finally, I suppose, one of them will die.

Ooh, I can see the football out of the corner of my eye. England still 1-0 up!

And, four and a half hours later:

8.40pm: It’s 56 games all and darkness is falling. This, needless to say, is not a good development, because everybody knows that zombies like the dark. So far in this match they’ve been comparatively puny and manageable, only eating a few of the spectators in between bashing their serves.

But come night-fall the world is their oyster. They will play on, play on, right through until dawn. Perhaps they will even leave the court during the change-overs to munch on other people. Has Roger Federer left the grounds? Perhaps they will munch on him, hounding him down as he runs for his car, disembowelling him in the parking lot and leaving Wimbledon without its reigning champion. Maybe they will even eat the trophy too.

Growing darker, darker all the while.

They are still tied at 59 all in the fifth and final set. This set alone is longer than any other match in tennis history. Play will resume tomorrow.

One comment

McChrystal's secret strategy

There’s been a lot of noise about Gen. Stanley McChrystal’s Obama-badmouthing candor with Rolling Stone, but besides perhaps Colson Whitehead (“I didn’t know they had truffle fries in Afghanistan”), Andrew Fitzgerald at Current has distilled it to its essence better than anyone on the net: first substance (“Focusing on the few controversial remarks misses the point of this RS McChrystal piece. Really tough look at Afg.”), then snark (“Let’s say McChrystal is fired… How long before he shows up as a commentator on FNC? Is it months? Weeks? Hours?”).

When I saw this last tweet, I had an epiphany. All the commentators and journalists were wondering how McChrystal could have let this bonehead, 99%-sure-to-cost-your-job move happen. Did he think he was talking off the record? Was he blowing off steam? Did he think no one would find out? And if he wanted to trash the administration publicly, why in the world did he give this info to Rolling Stone? I mean, did he even see Almost Famous? (Is Obama Billy Crudup? I kind of think he is.)

But let’s just suppose that this was McChrystal’s intention all along. I pretty much buy the New York magazine profile of Sarah Palin, which lays out why she resigned her office: being governor of Alaska is a crummy, poorly paying job, her family was going broke fighting legal bills, and she was getting offers she couldn’t refuse. It’s like being an Ivy League liberal arts major who gets offered a job at Goldman Sachs right out of college: it’s not what you came there to do, but how are you going to let that go? (Besides, it isn’t like you have to know a ton about what you’re doing; you’re there for who you are already.) Also, Palin could do the new math of GOP politics in her head — public office is less important than being a public figure, with a big platform. Or as Andrew says, “FNC commentator is the new Presidential candidate.”

Well, let’s try this equation: if it’s tough to be the governor of Alaska, how much harder must it be to be in charge of Afghanistan? What are the chances that you’re going to come out of this thing smelling like roses anyway? How can you remove yourself from that position while still coming off as an honorable, somewhat reluctant, but still passionate critic of the administration? And make a splash big enough doing it that it gets beyond policy circles and editorial pages?

I have no idea whether it’s true, but it’s worth entertaining the possibility that the good general threaded the needle here.

6 comments

Machines making mistakes

Why Jonah Lehrer can’t quit his janky GPS:

The moral is that it doesn’t take much before we start attributing feelings and intentions to a machine. (Sometimes, all it takes is a voice giving us instructions in English.) We are consummate agency detectors, which is why little kids talk to stuffed animals and why I haven’t thrown my GPS unit away. Furthermore, these mistaken perceptions of agency can dramatically change our response to the machine. When we see the device as having a few human attributes, we start treating it like a human, and not like a tool. In the case of my GPS unit, this means that I tolerate failings that I normally wouldn’t. So here’s my advice for designers of mediocre gadgets: Give them voices. Give us an excuse to endow them with agency. Because once we see them as humanesque, and not just as another thing, we’re more likely to develop a fondness for their failings.

This connects loosely with the first Snarkmarket post I ever commented on, more than six (!) years ago.

2 comments

Universal acid

The philosopher Dan Dennett, in his terrific book Darwin’s Dangerous Idea, coined a phrase that’s echoed in my head ever since I first read it years ago. The phrase is universal acid, and Dennett used it to characterize natural selection—an idea so potent that it eats right through established ideas and (maybe more importantly) institutions—things like, in Darwin’s case, religion. It also resists containment; try to say “well yes, but, that’s just over there” and natural selection burns right through your “yes, but.”

If that’s confusing, the top quarter of this page goes a bit deeper on Dennett’s meaning. It also blockquotes this passage from the book, which gets into the sloshiness of universal acid:

Darwin’s idea had been born as an answer to questions in biology, but it threatened to leak out, offering answers—welcome or not—to questions in cosmology (going in one direction) and psychology (going in the other direction). If [the cause of design in biology] could be a mindless, algorithmic process of evolution, why couldn’t that whole process itself be the product of evolution, and so forth all the way down? And if mindless evolution could account for the breathtakingly clever artifacts of the biosphere, how could the products of our own “real” minds be exempt from an evolutionary explanation? Darwin’s idea thus also threatened to spread all the way up, dissolving the illusion of our own authorship, our own divine spark of creativity and understanding.

Whoah!

(P.S. I think one of the reasons I like the phrase so much is that it seems to pair with Marx’s great line “…all that is solid melts into air.” Except it’s even better, right? Marx just talks about melting. This is more active: this is burning. This is an idea so corrosive it bores a channel to the very center of the earth.)

So I find myself wondering what else might qualify as a universal acid.

I think capitalism must. Joyce Appleby charts the course it took in her wonderful new book The Relentless Revolution. “Relentless” is right—that’s exactly what you’d expect from a universal acid. I think the sloshiness is also there; capitalism transformed not just production and trade but also politics, culture, gender roles, family structure, and on and on.

I suspect, much more hazily, that computation might turn out to be another kind of universal acid—especially this new generation of diffuse, always-available computation that seems to fuse into the world around us, thanks to giant data-centers and wireless connections and iPads and things yet to come.

But what else? Any other contemporary candidates for universal acid?

56 comments

Recipes for history

Two links on the history of the city/country dynamic in civilizations, and they go great together. The first one is about older stuff: an interview with Peter Heather about his book Empires and Barbarians: The Fall of Rome and the Birth of Modern Europe, which looks at the whole first millennium rather than the usual rise and fall:

What the book is trying to show is that the Roman Empire came into existence at a point where the relative underdevelopment of central, eastern, and northern Europe meant that the comparatively more developed Mediterranean world could provide a powerbase of sufficient strength to dominate the continent. As soon as development in northern Europe caught up, however, that relationship was bound to reverse, no matter what any Roman ruler might have tried to do about it. You can also see the first millennium as the time when Europe as some kind of unitary entity comes into being. By the end of it, dynasties are in place across the vast majority of its territory, and their subsequent history will lead pretty directly to the modern map of states. The same had not been remotely true at the birth of Christ a thousand years before…

To my mind, the most relevant finding is that this whole process of economic development and state formation in the non-imperial Europe of the first millennium was the result of a developing range of contacts with the more developed imperial world. In a process highly analogous to modern globalization, flows of wealth, weaponry, technology, and ideas ran from more developed Europe into its less developed periphery in increasing quantities and over a wider geographical area as the first millennium progressed. And, as in modern globalization, the benefits of all this were not shared equally by the totality of the population in non-imperial Europe, but were largely monopolized by particular groupings who used the wealth, weaponry, and ideologies to build new political structures which put themselves firmly at the head of their own societies.

Sometimes it’s impossible to remain imperial. If your imperial power—as it often is—is based on a pattern of precocious regional development, then as soon as surrounding regions catch up, as they undoubtedly will, that power must ebb (the fate of the Mediterranean in the first millennium). In these circumstances, it is important to accept the inevitable and gracefully renegotiate a new strategic balance of power, or one is likely to be imposed by force.

The other link is Edible Geography’s transcript of a talk by historian Rachel Laudan, who looks at the rise of Wal-Mart in Mexico City (and the end of hand-ground tortillas and the ridiculous amount of work/time that goes into them) from a similar long-historical perspective:

There’s only one way to feed a city, at least historically, and that’s to feed it with grains—rice, wheat, maize, barley, sorghum, etc. You can go round the world, and there just aren’t cities that aren’t fed on grains, except for possibly in the high Andes. Basically, to maintain a city, you’ve got to get grains into it. Be it Bangkok, be it Guangzhou, be it London, or be it Rome—throughout history, grains and cities are two sides of the coin.

And what do you need in terms of grains? For most of history—really, until about 150 years ago—most people in most cities, except for the very wealthy, lived almost exclusively on grains. They got about ninety percent of their calories from grains.

That meant that for every single person in a city you had to have 2 lbs of grains a day, turned into something that people could eat.

[Holding up a standard supermarket package of tortillas.] This is a kilo of tortillas. That’s what one person in a city needed. It’s the same weight, more or less, whatever the grain is—you can go to the historical record, you can research in China, in India, in the Near East, and you will still be talking about 2 lbs of grain-based food for every person in the city every day.

So you can do some calculations. If you’ve got a city of a million, like ancient Rome, you’ve got to get two million pounds of grain into the city every day. It’s the same for all the cities in the world— it’s 2 lbs of grain per person. That’s the power, that’s the energy that drives cities.
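
For the spreadsheet-minded, here is a minimal Python sketch of that arithmetic. Only the 2-lbs-per-person-per-day figure comes from Laudan’s talk; the city populations (especially the Mexico City one) are my own rough estimates, there purely for illustration.

```python
# Back-of-the-envelope grain arithmetic. The 2 lbs/person/day figure is Laudan's;
# the populations below are rough illustrative assumptions, not her numbers.
GRAIN_LBS_PER_PERSON_PER_DAY = 2  # roughly a kilo of tortillas, bread, couscous, etc.

def daily_grain_lbs(population: int) -> int:
    """Pounds of grain a city must take in every day just to keep its people fed."""
    return population * GRAIN_LBS_PER_PERSON_PER_DAY

for city, population in [("ancient Rome", 1_000_000), ("modern Mexico City", 21_000_000)]:
    print(f"{city}: {daily_grain_lbs(population):,} lbs of grain per day")
# ancient Rome: 2,000,000 lbs of grain per day
# modern Mexico City: 42,000,000 lbs of grain per day
```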

Even when you watch a TV series like Rome, one of the things that comes across is how obsessed the Romans were with grain — keeping grain coming into the city, getting it to the markets, using it to feed their armies, maintaining imperial control over regions (like Egypt) that supplied the bulk of their grain. For them, corn crises were like our oil crises; peak corn was their peak oil.

And they knew it. Google “Cicero corn.” It’s amazing how much he talks about it; disputes about corn come up in one lawsuit or political speech after another like paternity tests on “Maury.”

And as the example of Mexico City shows, this is far from ancient history. One of my favorite little pieces of cultural criticism is Stephen Greenblatt and Catherine Gallagher’s essay “The Potato and the Materialist Imagination,” which looks at debates about potatoes and population in 19th-century England. A point that Laudan makes is that you can’t just eat grain like you can fruit — at a minimum, you’ve got to shuck, grind, and cook it, turn it into couscous or tortillas or whatever. When you’re talking about bread, or anything that adds extra ingredients, the degree of difficulty goes up.

But that degree of difficulty is what a civilization is — a division of labor that necessitates socialization, technology, rituals, aggregation. The English were terrified of growing potatoes in Ireland, not because they were worried about famines, but the opposite. Because potatoes grew underground, they thought the crop was famine resistant — nobody yet had any experience with the destruction of crops by a fungus. No, they were worried because potatoes worked too well — you could dig them out of the ground, boil them, and eat them, and sustain a huge population for a fraction of the cost of bread. And without bread, no civilization. This is what the English did to maintain their labor force in Ireland, and what the Prussians did in Poland: they leaned on the potato, dug calories out of the ground, and grew and grew until it almost killed them, which is why America is full of Walshes and Sczepanskis today. (Laudan refers to a similar crisis in Spain: when the Spanish first ground maize, they didn’t use alkali to process it the way the native Mexicans did, and many, many people contracted pellagra, a niacin-deficiency disease.)

There were three transformative miracles in the nineteenth century that made the world what it is now, and they all came out of the ground: potatoes, petroleum, and paper made from trees. We thought they’d all last forever. We’ve been living in their shadow all along, and only now are we beginning to shiver.

9 comments

Straw men, shills, and killer robots

Indulge me, please, for digging into some rhetorical terminology. In particular, I want to try to sort out what we mean when we call something a “straw man.”

Here’s an example. Recently, psychologist/Harvard superstar Steven Pinker wrote an NYT op-ed, “Mind Over Mass Media,” contesting the idea that new media/the internet hurts our intelligence or our attention spans, and specifically contesting attempts to marshal neuroscience studies in support of those claims. Pinker writes:

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Please note that nowhere does Pinker name these “critics of new media” or attribute this quote, “experience can change the brain.” But also note that everyone and their cousin immediately seemed to know that Pinker was talking about Nicholas Carr, whose new book The Shallows was just reviewed by Jonah Lehrer, also in the NYT. Lehrer’s review (which came first) is probably best characterized as a sharper version of Pinker’s op-ed:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

I also really liked this wry observation that Lehrer added on at his blog, The Frontal Cortex:

Much of Carr’s argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that’s shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I’m a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)

Now, at least in my Twitter feed, the response to Pinker’s op-ed was positive, if a little backhanded. This is largely because Pinker seems to have picked this fight less to defend the value of the internet or even the concept of neuroplasticity than to throw some elbows at his favorite target, what he calls “blank slate” social theories that dispense with human nature. He wrote a contentious and much-contested book about it. He called it The Blank Slate. That’s why he works that dig in about how “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker doesn’t think we’re clay at all; instead, we’re largely formed.

So on Twitter we see a lot of begrudging support: “Pinker’s latest op-ed is good. He doesn’t elevate 20th C norms to faux-natural laws.” And: “I liked Pinker’s op-ed too, but ‘habits of deep reflection… must be acquired in… universities’? Debunk, meet rebunk… After that coffin nail to the neuroplasticity meme, Pinker could have argued #Glee causes autism for all I care.” And: “Surprised to see @sapinker spend so much of his op-ed attacking straw men (“critics say…”). Overall, persuasive though.”

And this is where the idea of a “straw man” comes in. See, Pinker’s got a reputation for attacking straw men, which is why The Blank Slate, which is mostly a long attack on a version of BF Skinner-style psychological behaviorism, comes off as an attack on postmodern philosophy and literary criticism and mainstream liberal politics and a whole slew of targets that get lumped together under a single umbrella, differences and complexities be damned.

(And yes, this is a straw man characterization of Pinker’s book, probably unfairly so. Also, neither everyone nor the cousins of everyone knew Pinker was talking about Carr. But we all know what we know.)

However, on Twitter, this generated an exchange between longtime Snarkmarket friend Howard Weaver and me about the idea of a straw man. I wasn’t sure whether Howard, author of that last quoted tweet, was using “straw men” just to criticize Pinker’s choice not to call out Carr by name, or whether he thought Pinker had done what Pinker often seems to do in his more popular writing, arguing against a weaker or simpler version of what the other side actually thinks. That, at least, is the stronger, more critical sense of what’s meant by straw men. (See, even straw men can have straw men!)

So it seems like there are (at least) four different kinds of rhetorical/logical fallacies that could be called “arguing against a straw man”:

  1. Avoiding dealing with an actual opponent by making them anonymous/impersonal, even if you get their point-of-view largely right;
  2. Mischaracterizing an opponent’s argument (even or especially if you name them), usually by substituting a weaker or more easily refuted version;
  3. Assuming because you’ve shown this person to be at fault somewhere, that they’re wrong everywhere — “Since we now know philosopher Martin Heidegger was a Nazi, how could anyone have ever qualified him for a bank loan?”;
  4. Cherry-picking your opponent, finding the weakest link, then tarring all opponents with the same brush. (Warning! Cliché/mixed metaphor overload!)

Clearly, you can mix-and-match; the most detestable version of a straw man invents an anonymous opponent, gives him easily-refuted opinions nobody actually holds, and then assumes that this holds true for everybody who’d disagree with you. And the best practice would seem to be:

  1. Argue with the ideas of a real person (or people);
  2. Pick the strongest possible version of that argument;
  3. Characterize your opponent’s (or opponents’) beliefs honestly;
  4. Concede points where they are, seem to be, or just might be right.

If you can win a reader over when you’ve done all this, then you’ve really written something.

There’s even a perverse version of the straw man, which Paul Krugman calls an “anti-straw man,” but I want to call “a killer robot.” This is when you mischaracterize an opponent’s point-of-view by actually making it stronger and more sensible than what they actually believe. Krugman’s example comes from fiscal & monetary policy, in particular imagining justifications for someone’s position on the budget that turn out to contradict their stated position on interest rates. Not only isn’t this anyone’s position, it couldn’t be their position if their position were consistent at all. I agree with PK that this is a special and really interesting case.

Now, as Howard pointed out, there is another sense of “straw man,” used to mean any kind of counterargument that’s introduced by a writer with the intent of arguing against it later. You might not even straight-out refute it; it could be a trial balloon, or thought experiment, or just pitting opposites against each other as part of a range of positions. There’s nothing necessarily fallacious about it, it’s just a way of marking off an argument that you, as a writer, wouldn’t want to endorse. (Sometimes this turns into weasely writing/journalism, too, but hey, again, it doesn’t have to be.)

Teaching writing at Penn, we used a book that used the phrase “Straw Man” this way, and had a “Straw Man” exercise where you’d write a short essay that just had an introduction w/thesis, a counterargument (which we called a “straw man”), then a refutation of that counterargument. And then there was a “Straw Man Plus One” assignment, where you’d…

Never mind. The point is, we wound up talking about straw men a lot. And we’d always get confused, because sometimes “straw man” would mean the fallacy, sometimes it would mean the assignment, sometimes it would be the counterargument used in that (or any) assignment, sometimes it would be the paragraph containing the counterargument…

Oy. By 2009-10, confusion about this term had reached the point where two concessions were made. First, for the philosophers in the crowd who insisted on a strict, restrictive meaning of “straw man” as a fallacy, and who didn’t want their students using fallacious “straw men” in their “Straw Man” assignments, they changed the name of the assignment to “Iron Man.” Then, as part of a general move against using gendered language on the syllabus, it turned into “Iron Person.” Meanwhile, the textbook we used still called the assignment “Straw Man,” turning confusion abetted to confusion multiplied.

I probably confused things further by referring to the “iron person” assignment as either “the robot” — the idea being, again, that you build something that then is independent of you — or “the shill.” This was fun, because I got to talk about how con men (and women) work. The idea of the shill is that they pretend to be independent, but they’re really on the con man’s side the entire time. The best shill sells what they do, so that you can’t tell they’re in on it. They’re the ideal opponent, perfect as a picture. That got rid of any lingering confusion between the fallacy and the form.

Likewise, I believe that here and now we have sorted the range of potential meanings of “straw man,” once and for all. And if you can prove that I’m wrong, well, then I’m just not going to listen to you.

One comment

Waiting for Superman

Times like this truly do make me wish superheroes were real.

There’s an affecting moment in J Michael Straczynski’s recent run on the comic Thor. The Norse god of thunder’s been dead for three years, but has come back to life, as only gods and comic book superheroes can.

One of the first places he goes is New Orleans. Thor was dead when Hurricane Katrina hit a year earlier, and he knows he could have stopped the hurricanes, the floods, or otherwise saved the city and its people. But he wonders where the rest of the superheroes were: “Why were not force fields erected? Why were tides not evaporated by heat and blast? Why were buildings not supported by strength of arms and steel?”

Just then, Iron Man shows up, to tell Thor that all superheroes need to register with the federal government to prevent superpower-caused disasters. Instead of preventing Katrina or repairing New Orleans, Iron Man and his fellow superheroes have been fighting each other over this registration requirement, part of what Marvel Comics called Civil War.

There’s some meaning to be drawn from this, that I can’t fully articulate. Something about thinking too small, thinking about short-term hurdles and squabbles rather than the big picture; a blindness to the fact of habitual human suffering that would be willful if it weren’t also somehow sickeningly necessary.

I’m not sure. But I think I know why I’ve been reading more comic books lately.

12 comments

We like our cities logical

I like old Law & Order episodes — there’s a reason why I put the show smack in the middle of my Showroulette pitch — but wasn’t heartbroken when I heard that the flagship series was cancelled. (The quirkier, more salacious spinoffs, like “Law & Order: Freaky Sex Crimes Unit,” remain.) The show had been losing its edge for a while, in writing, acting, and even casting. I mean, how are you going to cast the judge from The Wire as … a judge on Law & Order? That’s just lazy. At least the guys from The Sopranos didn’t always play mobsters.

A couple of things I’ve seen lately, though, in the wake of the show’s cancellation, suggest that Law & Order wasn’t quite as sharp because the city itself had lost its edge — in a good way, at least for New York (if not procedural dramas). This New York Times article notes how the show helped improve New York’s image to tourists and parvenus (“This Crime Spree Made New York Feel Safe”):

In 1990, when the show made its debut, 2,245 people were murdered in New York (a high-water mark), and several of those victims became emblematic of the haphazard, senseless violence that gripped the city…

[But] as [the detectives] pulled on the threads of the case, a pattern and motive always emerged. Unlike in the real New York, there is almost no pure street crime in “Law & Order.” In a show obsessed with the city’s class structure, you were far more likely to be murdered by your financial adviser than by a drug dealer. Crime has no single cause, the show seemed to argue, but crimes do, and they can be solved one at a time…

Mr. Wolf portrayed a city in which there were no senseless crimes, only crimes that hadn’t yet been made sense of. He took the conventions of the English country murder mystery and tucked them inside the ungovernable city. In so doing, for a national audience, he de-randomized New York violence.

The plunging murder rate has to help too — just 466 homicides in all of New York City in 2009, an all-time low. For a city of almost 9 million people, it’s pretty impressive that fewer people were killed in New York last year than follow me on Twitter. Let’s put it this way — Philadelphia and Baltimore, which also had record-low homicide numbers, together easily beat New York even though the two cities combined have fewer people than Brooklyn alone. New York went from one of the most dangerous cities in the country to one of the safest.

The Wire’s David Simon, though, argues that the rising wealth and lowered danger of New York skew New York’s sense of what’s happening in American cities nationwide — and because New York dominates America’s media imagination, that has a disproportionate effect on how we understand what’s happening elsewhere. (Make sure you watch the video to the end, where he gives Law & Order a pop.)

Some of this is familiar anti-NYC stuff, particularly from people who 1) live/grew up elsewhere and 2) work in/adjacent to media and publishing. But I think Simon’s bigger point, that the “urban experience” in America has become much more heterogeneous both within and between cities, is 1) true, 2) consequential, and 3) really worth paying closer attention to.

One comment

The trouble with digital culture
 / 

One of the problems with studying any medium is that it’s too easy to mistake the part for the whole. Literature professors can confidently chart the development of the novel over centuries by referencing only a tiny well-regarded sliver of all novels published, some immensely popular and others forgotten. When you turn to the broader field of print culture, books themselves jostle against newspapers, advertisements, letters and memos, government and business forms, postcards, sheet music, reproduced images, money, business cards and nameplates, and thousands of other forms that have little if anything to do with the codex book. We tend towards influential, fractional exemplars, partly out of necessity (raised to the level of institutions) and partly out of habit (raised to the level of traditions). But trouble inevitably arises when we forget that the underexamined whole exists, or pretend that it doesn’t matter. It always does. If nothing else, the parts that we cut out for special scrutiny draw their significance in no small part from how they relate to the other, subterranean possibilities.

The culture of digital technology, like that of print, is impressively broad, thoroughly differentiated, and ubiquitously integrated into most of our working and non-working lives. This makes it difficult for media scholars and historians to study, just as it makes it difficult (but inevitable) for scholars to recognize how this technology has changed, is changing, and should continue to change the academy. Self-professed digital humanists — and I consider myself one — generally look at digital culture, then identify themselves with and model their practices on only a sliver of the whole.

Digital culture far exceeds the world wide web, social networks, e-books, image archives, games, e-mail, and programming codes. It exceeds anything we see on our laptops, phones, or television screens. It even exceeds the programmers, hackers, pirates, clerics, artists, electricians, and engineers who put that code into practice, and the protocols, consoles, and infrastructure that govern and enable their use.

This is important, because digital humanists’ efforts to “hack the academy” most often turn out NOT to be about replacing an established analog set of practices and institutions with new digital tools and ideas. Instead, it’s a battle within digital culture itself: the self-styled “punk” culture of hackers, pirates, coders, and bloggers against the office suite, the management database, the IT purchaser. Twitter vs. Raiser’s Edge. These are also reductions, but potentially instructive ones.

For my own part, I tend to see digital humanism less as a matter of individual or group identity, or of the application of digital tools to materials and scholarship in the humanities, than as something that is happening, continuing to emerge, develop, and differentiate itself, both inside and outside of the academy, as part of the spread of information and the continual redefinition of our assumptions about how we encounter media, technological, and other objects in the world. In this, every aspect of digital technology, whether old or new, establishment or counter-establishment, plays a part.

I’m writing this as part of the Center for History and New Media’s “Hacking the Academy” project, filed under the hashtag #criticism. Check out the other submissions here.

3 comments

Courage, the invisible, and the law

Ta-Nehisi Coates has been my favorite writer to read on this Rand Paul mess. (Short version – Ron Paul’s son won the Republican primary for a Senate seat in Kentucky, after which his candidacy kind of fell apart amid some really clumsy and embarrassing interviews where he tried to say that he was against the part of the Civil Rights Act banning segregation in private businesses, but that he wasn’t a racist, would have marched with MLK, and thinks a free society means people/businesses are free to do despicable things.) Here are the key bullet points:

  • Why can the media only focus on how bad this looks for Paul politically, rather than try to engage with his opinion as a serious position? “[W]hile I expect politicians and their handlers to think in terms of messaging, I also expect–perhaps foolishly–for media to be in the business of pushing past that messaging to actual ideas. What we get instead is a faux-objectivity, that avoids the substance of issues and instead focuses on how that substance is pitched. In that sense, much like the relationship between entertainment and many entertainment journalists, it’s really hard to see media as more than quasi-independent extension of campaign apparatus.”
  • Why can’t Paul and his conservative/libertarian supporters actually engage with this stuff more seriously? “What I’m driving at is raising the question about methods is never wrong, to the contrary it’s essential. That process is undermined by people who raise those questions, without having thought about them, without being able to speak to their nuances, and are mostly concerned with tribal signaling. People were dragged from their homes, raped and murdered over civil rights. Talk about it, by all means. But talk about it with the intellectual seriousness it deserves. This is not a third grade science fair project.”
  • This post, “Towards an abstract courage,” is my favorite, because it addresses the idea that certainly, Paul and every other decent person would have been allies with King and other desegregationists to bring segregated businesses down, without the federal government stepping in. “Now, after the police dogs, night-sticks and fire-hoses have been beaten back, Rand Paul wants to reopen the question, while, to be sure, claiming that he would have had the ‘courage to march with Martin Luther King.’ This is a common strain of courage. It chiefly shines through in men born 50 years too late. Presently among the crowd, they are distinguished at that decisive moment when queried about wars they won’t have to fight, in times they will never live. These men populate our history books. They are all on the wrong side.”
  • To that end, “Towards a manifested courage” tells the story of Joan Trumpauer, one of the white freedom riders arrested in Jackson, MS for integrating a lunch counter.

Coates links to Charles Lane in the Washington Post, who writes:

Suppose an African American customer sits down at a “whites only” restaurant and asks for dinner. The owner tells him to leave. The customer refuses and stays put. What are the owner’s options at that point? He can forcibly remove the customer himself, but, as Paul concedes, that could expose the restaurateur to criminal or civil liability. So he’ll have to call the cops. When they arrive, he’ll have to explain his whites-only policy and ask them to remove the unwanted black man because he’s violating it. But they can only do that on the basis of some law, presumably trespassing. In other words, the business owner’s discriminatory edict is meaningless unless some public authority enforces it.

Conversely, it is precisely because of this nexus between private discrimination and public enforcement that the larger community, through the political and judicial process, acquires a valid interest in legislating against discrimination. The public is entitled to say whether their tax money should pay for arresting black trespassers on whites-only property.

This, for me, is a huge point, since it establishes that segregation and desegregation aren’t in substance purely a matter of freedom of association or the content of characters/hearts, but a matter of recognition under the law. What we see are the people, those angry faces — but the invisible infrastructure behind all of that anger is the law.

To see how important — and how slippery — this point can be, read this NYT editorial excoriating Paul, then Chris Bray at History News Network, who justly slams the NYT:

[T]he American history of racial oppression and brutality is a history of government. The founding document of the republic privileged slavery as a lawful institution, and government served that institution for another seventy-eight years after that. The Emancipation Proclamation didn’t free all American slaves; it freed slaves in states engaged in rebellion…

After the abandonment of Reconstruction, “redeemed” southern governments rebuilt structures of oppression through law and the institutions of government. Jim Crow laws were laws; the regime of racial segregation was not simply a set of social choices. That guy standing in the schoolhouse door? He was a governor. Why is that so hard to figure out?

I think it’s because we’ve seen the pictures of the dogs and the firehoses and the angry men and women behind them, and we’ve assumed that that’s what discrimination looks like, to the point that we can’t understand anyone or anything as racist unless it looks like that.

But I don’t think that’s it at all. It’s a secret history of the invisible that we’re tracing. And the thing about being invisible is that it’s pretty easy to be everywhere.

4 comments