De inventione punctus

All signs suggest punctuation is in flux. In particular, our signs that mark grammatical (and sometimes semantic) distinctions are waning, while those denoting tone and voice are waxing. Furthermore, signs with a slim graphical profile (the apostrophe and comma, especially) are having a rough go of it. Compared to the smiley face or even the question mark, they’re too visually quiet for most casual writers to notice or remember, even (or especially) on our high-def screens.

But we’re also working within the finite possibilities and inherited structures of our keyboards. It’s the age of secondary literacy: writing and reading transformed by electronic communication, from television to the telephone.

See 1. Jan Swafford’s unfortunately titled “Why e-books will never replace real books,” which takes seriously Marshall McLuhan’s argument that print (and computers, too) change the ways we think and see:

I’ve taught college writing classes for a long time, and after computers came in, I began to see peculiar stuff on papers that I hadn’t seen before: obvious missing commas and apostrophes, when I was sure most of those students knew better. It dawned on me that they were doing all their work on-screen, where it’s hard to see punctuation. I began to lecture them about proofing on paper, although, at first, I didn’t make much headway. They were unused to dealing with paper until the final draft, and they’d been taught never to make hand corrections on the printout. They edited on-screen and handed in the hard copy without a glance.

Handwriting is OK! I proclaimed. I love to see hand corrections! Then I noticed glitches in student writing that also resulted from editing on-screen: glaring word and phrase redundancies, forgetting to delete revised phrases, strangely awkward passages. I commenced an ongoing sermon: You see differently and in some ways better on paper than on computer. Your best editing is on paper. Try it and see if I’m right. You’ll get a better grade. The last got their attention. The students were puzzled and skeptical at first, but the ones who tried it often ended up agreeing with me.

And especially, see 2. Anne Trubek’s “The Very Long History of Emoticons”:

A punctuation purist would claim that emoticons are debased ways to signal tone and voice, something a good writer should be able to indicate with words. But the contrary is true: The history of punctuation is precisely the history of using symbols to denote tone and voice. Seen in this way, emoticons are simply the latest comma or quotation mark… The earliest marks indicated how a speaker’s voice should adjust to reflect the tone of the words. Punctus interrogativus is a precursor to today’s question mark, and it indicates that the reader should raise his voice to indicate inquisitiveness. Tone and voice were literal in those days: Punctuation told the speaker how to express the words he was reading out loud to his audience, or to himself. A question mark, a comma, a space between two words: These are symbols that denote written tone and voice for a primarily literate—as opposed to oral—culture. There is no significant difference between them and a modern emoticon.

I ♥ @atrubek. And I’m feeling all zen about this observation of hers, too: “A space is a punctuation mark.” There’s a whole philosophy in that idea, I know it.

I’m also feeling all zen about this idea that computer screens (keyboards, too) are sites of multiple, overlapping, and conflicting cultures, and that it’s up to us (in part) to help decide what the assumptions of those cultures are. →
Here, see 1. The Slow Media Manifesto, and 2. Nick Carr’s much-debated (whaaa? Nick Carr in a debate?) post about “delinkification”, which is actually a pretty solid meditation on the rhetoric of the hyperlink (you could say, the way we punctuate them). In short, if you think the superimposed montage of words and links in in-text hyperlinks poses some cognition/decision problems, and might not be appropriate to all kinds of reading, then it might make sense to try a different strategy (like footnoting) instead. And being the relatively sophisticated mammals we are, we can sort these strategies out in different contexts (even if we don’t fully understand that, or how, we’re doing it).

11 comments

Western threads

[Image: Red Dead Redemption]

I saw the new video game Red Dead Redemption for the first time this weekend, courtesy of my pal Wilson, who described it (and I paraphrase) as “every awesome Western ever, combined.”

It is indeed totally stunning, and it’s got me thinking about Westerns. Among other things:

What clicks in your mind when you think about Westerns? Any recent movies I ought to see? Any other fun stuff out there?

Update: Yes, this post was Tim-bait, and whoah yes, he delivers. I’m considering just pasting his comment into the body of the post and moving what I wrote to the comments…

16 comments

Only crash

Sometimes you run across an idea so counter-intuitive and brain-bending that you immediately want to splice it into every domain you can think of. Sort of like trying a novel chemical compound against a bunch of cancers: does it work here? How about here? Or here?

That’s how I feel about crash-only software (link goes to a PDF in Google’s viewer). Don’t pay too much attention to the technical details; just check out the high-level description:

Crash-only programs crash safely and recover quickly. There is only one way to stop such software—by crashing it—and only one way to bring it up—by initiating recovery.

Wow. The only way to stop it is by crashing it. The normal shutdown process is the crash.

Let’s go a little deeper. You can imagine that commands and events follow “code paths” through software. For instance, when you summoned up this text, your browser followed a particular code path. And people who use browsers do this a lot, right? So you can bet your browser’s “load and render text” code path is fast, stable and bug-free.

But what about a much rarer code path? One that goes: “load and render text, but uh-oh, it looks like the data for the font outlines got corrupted halfway through the rendering process”? That basically never happens; it’s possible that that code path has never been followed. So it’s more likely that there’s a bug lurking there. That part of the browser hasn’t been tested much. It’s soft and uncertain.

One strategy to avoid these soft spots is to follow your worst-case code paths as often as your best-case code paths (without waiting for, you know, the worst case)—or even to make both code paths the same. And crash-only software is sort of the most extreme extension of that idea.
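If you want to see the shape of the idea in code, here’s a minimal, hypothetical sketch (mine, not from the paper): a toy key-value store whose only startup path is recovery from an append-only log, and whose only shutdown is killing the process. The class name and log format are invented for illustration.

```python
import json
import os


class CrashOnlyStore:
    """Toy crash-only key-value store: no clean shutdown, startup == recovery."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}
        self._recover()  # the only way up is to run recovery

    def _recover(self):
        """Rebuild in-memory state by replaying the append-only log."""
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as log:
            for line in log:
                try:
                    entry = json.loads(line)
                except ValueError:
                    break  # a torn final write from the last crash; stop here
                self.data[entry["key"]] = entry["value"]

    def put(self, key, value):
        # Append durably first, then apply in memory, so a crash at any
        # moment leaves the log as the single source of truth.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    # Deliberately missing: close(), save(), graceful shutdown of any kind.
    # You stop this store by crashing it (kill -9 the process), and you
    # bring it back by constructing it again, which replays the log.


if __name__ == "__main__":
    store = CrashOnlyStore()
    store.put("greeting", "hello")
    print(store.get("greeting"))  # survives an abrupt kill and restart
```

The point of the sketch is which code path gets exercised: because every startup is a recovery, the recovery path runs constantly instead of rusting away until the one day you actually need it.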

Maybe there are biological systems that already follow this practice, at least loosely. I’m thinking of seeds that are activated by the heat of a forest fire. It’s like: “Oh no! Worst-case scenario! Fiery apocalypse! … Exactly what we were designed for.” And I’m thinking of bears hibernating—a sort of controlled system crash every winter.

What else could we apply crash-only thinking to? Imagine a crash-only government, where the transition between administrations is always a small revolution. In a system like that, you’d optimize for revolution—build buffers around it—and as a result, when a “real” revolution finally came, it’d be no big deal.

Or imagine a crash-only business that goes bankrupt every four years as part of its business plan. Every part of the enterprise is designed to scatter and re-form, so the business can withstand even an existential crisis. It’s a ferocious competitor because it fears nothing.

Those are both fanciful examples, I know, but I’m having fun just turning the idea around in my head. What does crash-only thinking connect to in your brain?

21 comments

Like a school of fish

I love little observations of the everyday like this one in Nick Paumgarten’s essay on elevators:

Passengers seem to know instinctively how to arrange themselves in an elevator. Two strangers will gravitate to the back corners, a third will stand by the door, at an isosceles remove, until a fourth comes in, at which point passengers three and four will spread toward the front corners, making room, in the center, for a fifth, and so on, like the dots on a die. With each additional passenger, the bodies shift, slotting into the open spaces. The goal, of course, is to maintain (but not too conspicuously) maximum distance and to counteract unwanted intimacies—a code familiar (to half the population) from the urinal bank and (to them and all the rest) from the subway. One should face front. Look up, down, or, if you must, straight ahead. Mirrors compound the unease.

This reminds me of what is quite possibly the best poetic description of riding the elevator, part III of T.S. Eliot’s “Burnt Norton” (from Four Quartets). In particular, it’s about the long elevator ride at the tube stop at Russell Square:

Here is a place of disaffection
Time before and time after
In a dim light: neither daylight
Investing form with lucid stillness
Turning shadow into transient beauty
With slow rotation suggesting permanence
Nor darkness to purify the soul
Emptying the sensual with deprivation
Cleansing affection from the temporal.
Neither plenitude nor vacancy. Only a flicker
Over the strained time-ridden faces
Distracted from distraction by distraction
Filled with fancies and empty of meaning
Tumid apathy with no concentration
Men and bits of paper, whirled by the cold wind
That blows before and after time,
Wind in and out of unwholesome lungs
Time before and time after.
Eructation of unhealthy souls
Into the faded air, the torpid
Driven on the wind that sweeps the gloomy hills of London,
Hampstead and Clerkenwell, Campden and Putney,
Highgate, Primrose and Ludgate. Not here
Not here the darkness, in this twittering world.

Descend lower, descend only
Into the world of perpetual solitude,
World not world, but that which is not world,
Internal darkness, deprivation
And destitution of all property,
Desiccation of the world of sense,
Evacuation of the world of fancy,
Inoperancy of the world of spirit;
This is the one way, and the other
Is the same, not in movement
But abstention from movement; while the world moves
In appetency, on its metalled ways
Of time past and time future.

(Why hasn’t “Not here the darkness, in this twittering world” been quoted regularly?)

Another great bit from Paumgarten, which relates to my earlier “potatoes, paper, petroleum” observation about the 19th century:

The elevator, underrated and overlooked, is to the city what paper is to reading and gunpowder is to war. Without the elevator, there would be no verticality, no density, and, without these, none of the urban advantages of energy efficiency, economic productivity, and cultural ferment. The population of the earth would ooze out over its surface, like an oil slick, and we would spend even more time stuck in traffic or on trains, traversing a vast carapace of concrete.

A meta/editorial/critical note: Paumgarten’s essay has a regrettable B-story, about a guy who worked at a magazine who was trapped in an elevator. He dribbles it out graf by graf, to create the illusion of dramatic tension. Just speaking for myself, I didn’t care; also, it kind of bothers me that this is starting to become one of the default templates for magazine writing. Either find a reason to do it and do it well, or just… try something else.

2 comments

This is what sports liveblogging is for

Every sport, I believe, has its own optimal medium. For baseball, I like the intimacy of radio, and the timing and traditions of the medium lend themselves well to a sport driven by discrete, well-defined actions. Pro and college football actually work better on television than in person — unless you’re intoxicated, when all bets are off. Soccer, as this year’s World Cup proves, lends itself to Twitter’s ability to celebrate goals, talk trash, and complain about calls (or diving for calls) in truncated bursts. Basketball, hockey, and (usually) tennis have a combination of speed, intimacy, and crowd effect that makes the stadium experience hardest to beat or replicate.

But what about a tennis match, like the one between John Isner and Nicolas Mahut at Wimbledon, that, because of an endless final set and evening suspensions, spills over into more than ten hours and a third day? In such a case, stadium attendance and television alike become gruesome; you’re watching something that resembles a tennis match, but feels more like an all-night dance-a-thon. It’s horrible and fascinating at the same time. You can’t bear to watch, but you need periodic updates, because at any moment, something — anything — may happen.

Here, then, is the perfect sports experience for the liveblog. And here, too, The Guardian’s Xan Brooks is the master, riveting to read even in retrospect. Consider:

4.05pm: The Isner-Mahut battle is a bizarre mix of the gripping and the deadly dull. It’s tennis’s equivalent of Waiting For Godot, in which two lowly journeymen comedians are forced to remain on an outside court until hell freezes over and the sun falls from the sky. Isner and Mahut are dying a thousand deaths out there on Court 18 and yet nobody cares, because they’re watching the football. So the players stand out on their baseline and belt aces past each other in a fifth set that has already crawled past two hours. They are now tied at 18 games apiece.

On and on they go. Soon they will sprout beards and their hair will grow down their backs, and their tennis whites will yellow and then rot off their bodies. And still they will stand out there on Court 18, belting aces and listening as the umpire calls the score. Finally, I suppose, one of them will die.

Ooh, I can see the football out of the corner of my eye. England still 1-0 up!

And, four and a half hours later:

8.40pm: It’s 56 games all and darkness is falling. This, needless to say, is not a good development, because everybody knows that zombies like the dark. So far in this match they’ve been comparatively puny and manageable, only eating a few of the spectators in between bashing their serves.

But come night-fall the world is their oyster. They will play on, play on, right through until dawn. Perhaps they will even leave the court during the change-overs to munch on other people. Has Roger Federer left the grounds? Perhaps they will munch on him, hounding him down as he runs for his car, disembowelling him in the parking lot and leaving Wimbledon without its reigning champion. Maybe they will even eat the trophy too.

Growing darker, darker all the while.

They are still tied at 59 all in the fifth and final set. This set alone is longer than any other match in tennis history. Play will resume tomorrow.

One comment

McChrystal’s secret strategy

There’s been a lot of noise about Gen. Stanley McChrystal’s Obama-badmouthing candor with Rolling Stone, but besides perhaps Colson Whitehead (“I didn’t know they had truffle fries in Afghanistan”), Andrew Fitzgerald at Current has distilled it to its essence better than anyone on the net: first substance (“Focusing on the few controversial remarks misses the point of this RS McChrystal piece. Really tough look at Afg.”), then snark (“Let’s say McChrystal is fired… How long before he shows up as a commentator on FNC? Is it months? Weeks? Hours?”).

When I saw this last tweet, I had an epiphany. All the commentators and journalists were wondering how McChrystal could have let this bonehead, 99%-sure-to-cost-your-job move happen. Did he think he was talking off the record? Was he blowing off steam? Did he think no one would find out? And if he wanted to trash the administration publicly, why in the world did he give this info to Rolling Stone? I mean, did he even see Almost Famous? (Is Obama Billy Crudup? I kind of think he is.)

But let’s just suppose that this was McChrystal’s intention all along. I pretty much buy the New York magazine profile of Sarah Palin, which lays out why she resigned her office: being governor of Alaska is a crummy, poorly-paying job, her family was going broke fighting legal bills, and she was getting offers she couldn’t refuse. It’s like being an Ivy League liberal arts major, getting offered a job at Goldman Sachs right out of college; it’s not what you came there to do, but how are you going to let that go? (Besides, it isn’t like you have to know a ton about what you’re doing; you’re there for who you are already.) Also, Palin could do the new math of GOP politics in her head — public office is less important than being a public figure, with a big platform. Or as Andrew says, “FNC commentator is the new Presidential candidate.”

Well, let’s try this equation: if it’s tough to be the governor of Alaska, how much harder does it have to be to be in charge of Afghanistan? What are the chances that you’re going to come out of this thing smelling like roses anyways? How can you remove yourself from that position while still coming off as an honorable, somewhat reluctant, but still passionate critic of the administration? And make a splash big enough doing it that it gets beyond policy circles and editorial pages?

I have no idea whether it’s true, but it’s worth entertaining the possibility that the good general threaded the needle here.

6 comments

Machines making mistakes

Why Jonah Lehrer can’t quit his janky GPS:

The moral is that it doesn’t take much before we start attributing feelings and intentions to a machine. (Sometimes, all it takes is a voice giving us instructions in English.) We are consummate agency detectors, which is why little kids talk to stuffed animals and why I haven’t thrown my GPS unit away. Furthermore, these mistaken perceptions of agency can dramatically change our response to the machine. When we see the device as having a few human attributes, we start treating it like a human, and not like a tool. In the case of my GPS unit, this means that I tolerate failings that I normally wouldn’t. So here’s my advice for designers of mediocre gadgets: Give them voices. Give us an excuse to endow them with agency. Because once we see them as humanesque, and not just as another thing, we’re more likely to develop a fondness for their failings.

This connects loosely with the first Snarkmarket post I ever commented on, more than six (!) years ago.

2 comments

Universal acid

The philosopher Dan Dennett, in his terrific book Darwin’s Dangerous Idea, coined a phrase that’s echoed in my head ever since I first read it years ago. The phrase is universal acid, and Dennett used it to characterize natural selection—an idea so potent that it eats right through established ideas and (maybe more importantly) institutions—things like, in Darwin’s case, religion. It also resists containment; try to say “well yes, but, that’s just over there” and natural selection burns right through your “yes, but.”

If that’s confusing, the top quarter of this page goes a bit deeper on Dennett’s meaning. It also blockquotes this passage from the book, which gets into the sloshiness of universal acid:

Darwin’s idea had been born as an answer to questions in biology, but it threatened to leak out, offering answers—welcome or not—to questions in cosmology (going in one direction) and psychology (going in the other direction). If [the cause of design in biology] could be a mindless, algorithmic process of evolution, why couldn’t that whole process itself be the product of evolution, and so forth all the way down? And if mindless evolution could account for the breathtakingly clever artifacts of the biosphere, how could the products of our own “real” minds be exempt from an evolutionary explanation? Darwin’s idea thus also threatened to spread all the way up, dissolving the illusion of our own authorship, our own divine spark of creativity and understanding.

Whoah!

(P.S. I think one of the reasons I like the phrase so much is that it seems to pair with Marx’s great line “…all that is solid melts into air.” Except it’s even better, right? Marx just talks about melting. This is more active: this is burning. This is an idea so corrosive it bores a channel to the very center of the earth.)

So I find myself wondering what else might qualify as a universal acid.

I think capitalism must. Joyce Appleby charts the course it took in her wonderful new book The Relentless Revolution. “Relentless” is right—that’s exactly what you’d expect from a universal acid. I think the sloshiness is also there; capitalism transformed not just production and trade but also politics, culture, gender roles, family structure, and on and on.

I suspect, much more hazily, that computation might turn out to be another kind of universal acid—especially this new generation of diffuse, always-available computation that seems to fuse into the world around us, thanks to giant data-centers and wireless connections and iPads and things yet to come.

But what else? Any other contemporary candidates for universal acid?

56 comments

Recipes for history

Two links on the history of the city/country dynamic in civilizations, and they go great together. The first one is about older stuff: an interview with Peter Heather about his book Empires and Barbarians: The Fall of Rome and the Birth of Modern Europe, which looks at the whole first millennium rather than the usual rise and fall:

What the book is trying to show is that the Roman Empire came into existence at a point where the relative underdevelopment of central, eastern, and northern Europe meant that the comparatively more developed Mediterranean world could provide a powerbase of sufficient strength to dominate the continent. As soon as development in northern Europe caught up, however, that relationship was bound to reverse, no matter what any Roman ruler might have tried to do about it. You can also see the first millennium as the time when Europe as some kind of unitary entity comes into being. By the end of it, dynasties are in place across the vast majority of its territory, and their subsequent history will lead pretty directly to the modern map of states. The same had not been remotely true at the birth of Christ a thousand years before…

To my mind, the most relevant finding is that this whole process of economic development and state formation in the non-imperial Europe of the first millennium was the result of a developing range of contacts with the more developed imperial world. In a process highly analogous to modern globalization, flows of wealth, weaponry, technology, and ideas ran from more developed Europe into its less developed periphery in increasing quantities and over a wider geographical area as the first millennium progressed. And, as in modern globalization, the benefits of all this were not shared equally by the totality of the population in non-imperial Europe, but were largely monopolized by particular groupings who used the wealth, weaponry, and ideologies to build new political structures which put themselves firmly at the head of their own societies.

Sometimes it’s impossible to remain imperial. If your imperial power—as it often is—is based on a pattern of precocious regional development, then as soon as surrounding regions catch up, as they undoubtedly will, that power must ebb (the fate of the Mediterranean in the first millennium). In these circumstances, it is important to accept the inevitable and gracefully renegotiate a new strategic balance of power, or one is likely to be imposed by force.

The other link is Edible Geography’s transcript of a talk by historian Rachel Laudan, who looks at the rise of Wal-Mart in Mexico City (and the end of hand-ground tortillas and the ridiculous amount of work/time that go into them) from a similar long-historical perspective:

There’s only one way to feed a city, at least historically, and that’s to feed it with grains—rice, wheat, maize, barley, sorghum, etc. You can go round the world, and there just aren’t cities that aren’t fed on grains, except for possibly in the high Andes. Basically, to maintain a city, you’ve got to get grains into it. Be it Bangkok, be it Guangzhou, be it London, or be it Rome—throughout history, grains and cities are two sides of the coin.

And what do you need in terms of grains? For most of history—really, until about 150 years ago—most people in most cities, except for the very wealthy, lived almost exclusively on grains. They got about ninety percent of their calories from grains.

That meant that for every single person in a city you had to have 2 lbs of grains a day, turned into something that people could eat.

[Holding up a standard supermarket package of tortillas.] This is a kilo of tortillas. That’s what one person in a city needed. It’s the same weight, more or less, whatever the grain is—you can go to the historical record, you can research in China, in India, in the Near East, and you will still be talking about 2 lbs of grain-based food for every person in the city every day.

So you can do some calculations. If you’ve got a city of a million, like ancient Rome, you’ve got to get two million pounds of grain into the city every day. It’s the same for all the cities in the world— it’s 2 lbs of grain per person. That’s the power, that’s the energy that drives cities.

Even when you watch a TV series like Rome, one of the things that comes across is how obsessed the Romans were with grain — keeping grain coming into the city, getting it to the markets, using it to feed their armies, maintaining imperial control over regions (like Egypt) that supplied the bulk of their grain. For them, corn crises were like our oil crises; peak corn was their peak oil.

And they knew it. Google “Cicero corn.” It’s amazing how much he talks about it; disputes about corn come up in one lawsuit or political speech after another like paternity tests on “Maury.”

And as the example of Mexico City shows, this is far from ancient history. One of my favorite little pieces of cultural criticism is Stephen Greenblatt and Catherine Gallagher’s essay “The Potato and the Materialist Imagination,” which looks at debates about potatoes and population in 19th-century England. A point that Laudan makes is that you can’t just eat grain like you can fruit — at a minimum, you’ve got to shuck, grind, and cook it, turn it into couscous or tortillas or whatever. When you’re talking about bread, or anything that adds extra ingredients, the degree of difficulty goes up.

But that degree of difficulty is what a civilization is — a division of labor that necessitates socialization, technology, rituals, aggregation. The English were terrified about growing potatoes in Ireland, not because they were worried about famines, but the opposite. Because potatoes grew underground, they thought the crop was famine resistant — nobody had any kind of experience with destruction of crops by a fungus. No, they were worried because potatoes worked too well — you could dig them out of the ground, boil them, and eat them, and sustain a huge population for a fraction of the cost of bread. And without bread, no civilization. This is what the English did to maintain their labor force in Ireland, and what the Prussians did in Poland: they leaned on it and dug calories out of the ground and grew and grew until it almost killed them, which is why America is full of Walshes and Sczepanskis today. (Laudan refers to a similar crisis in Spain; when the Spanish first ground maize, they didn’t use alkali to process it the way the native Mexicans did, and many, many people contracted pellagra, a nutritional-deficiency disease.)

There were three transformative miracles in the nineteenth century that made the world what it is now, and they all came out of the ground: potatoes, petroleum, and paper made from trees. We thought they’d all last forever. We’ve been living in their shadow all along, and only now are we beginning to shiver.

9 comments

Straw men, shills, and killer robots

Indulge me, please, while I dig into some rhetorical terminology. In particular, I want to try to sort out what we mean when we call something a “straw man.”

Here’s an example. Recently, psychologist/Harvard superstar Steven Pinker wrote an NYT op-ed, “Mind Over Mass Media,” contesting the idea that new media/the internet hurts our intelligence or our attention spans, and specifically contesting attempts to marshal neuroscience studies in support of these claims. Pinker writes:

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Please note that nowhere does Pinker name these “critics of new media” or attribute this quote, “experience can change the brain.” But also note that everyone and their cousin immediately seemed to know that Pinker was talking about Nicholas Carr, whose new book The Shallows was just reviewed by Jonah Lehrer, also in the NYT. Lehrer’s review (which came first) is probably best characterized as a sharper version of Pinker’s op-ed:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

I also really liked this wry observation that Lehrer added on at his blog, The Frontal Cortex:

Much of Carr’s argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that’s shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I’m a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)

Now, at least in my Twitter feed, the response to Pinker’s op-ed was positive, if a little backhanded. This is largely because Pinker seems to have picked this fight less to defend the value of the internet or even the concept of neuroplasticity than to throw some elbows at his favorite target, what he calls “blank slate” social theories that dispense with human nature. He wrote a contentious and much-contested book about it. He called it The Blank Slate. That’s why he works that dig in about how “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker doesn’t think we’re clay at all; instead, we’re largely formed.

So on Twitter we see a lot of begrudging support: “Pinker’s latest op-ed is good. He doesn’t elevate 20th C norms to faux-natural laws.” And: “I liked Pinker’s op-ed too, but ‘habits of deep reflection… must be acquired in… universities’? Debunk, meet rebunk… After that coffin nail to the neuroplasticity meme, Pinker could have argued #Glee causes autism for all I care.” And: “Surprised to see @sapinker spend so much of his op-ed attacking straw men (“critics say…”). Overall, persuasive though.”

And this is where the idea of a “straw man” comes in. See, Pinker’s got a reputation for attacking straw men, which is why The Blank Slate, which is mostly a long attack on a version of BF Skinner-style psychological behaviorism, comes off as an attack on postmodern philosophy and literary criticism and mainstream liberal politics and a whole slew of targets that get lumped together under a single umbrella, differences and complexities be damned.

(And yes, this is a straw man characterization of Pinker’s book, probably unfairly so. Also, neither everyone nor the cousins of everyone knew Pinker was talking about Carr. But we all know what we know.)

However, on Twitter, this generated an exchange between longtime Snarkmarket friend Howard Weaver and me about the idea of a straw man. I wasn’t sure whether Howard, author of that last quoted tweet, was using “straw men” just to criticize Pinker’s choice not to call out Carr by name, or whether he thought Pinker had done what Pinker often seems to do in his more popular writing, arguing against a weaker or simpler version of what the other side actually thinks. That, at least, is a critically stronger sense of what’s meant by straw men. (See, even straw men can have straw men!)

So it seems like there are (at least) four different kinds of rhetorical/logical fallacies that could be called “arguing against a straw man”:

  1. Avoiding dealing with an actual opponent by making them anonymous/impersonal, even if you get their point-of-view largely right;
  2. Mischaracterizing an opponent’s argument (even or especially if you name them), usually by substituting a weaker or more easily refuted version;
  3. Assuming that because you’ve shown this person to be at fault somewhere, they’re wrong everywhere — “Since we now know philosopher Martin Heidegger was a Nazi, how could anyone have ever qualified him for a bank loan?”;
  4. Cherry-picking your opponent, finding the weakest link, then tarring all opponents with the same brush. (Warning! Cliché/mixed metaphor overload!)

Clearly, you can mix-and-match; the most detestable version of a straw man invents an anonymous opponent, gives him easily-refuted opinions nobody actually holds, and then assumes that this holds true for everybody who’d disagree with you. And the best practice would seem to be:

  1. Argue with the ideas of a real person (or people);
  2. Pick the strongest possible version of that argument;
  3. Characterize your opponent’s (or opponents’) beliefs honestly;
  4. Concede points where they are, seem to be, or just might be right.

If you can win a reader over when you’ve done all this, then you’ve really written something.

There’s even a perverse version of the straw man, which Paul Krugman calls an “anti-straw man,” but I want to call “a killer robot.” This is when you mischaracterize an opponent’s point-of-view by actually making it stronger and more sensible than what they actually believe. Krugman’s example comes from fiscal & monetary policy, in particular imagining justifications for someone’s position on the budget that turn out to contradict their stated position on interest rates. Not only isn’t this anyone’s position, it couldn’t be their position if their position was consistent at all. I agree with PK that this is a special and really interesting case.

Now, as Howard pointed out, there is another sense of “straw man,” used to mean any kind of counterargument that’s introduced by a writer with the intent of arguing against it later. You might not even straight-out refute it; it could be a trial balloon, or thought experiment, or just pitting opposites against each other as part of a range of positions. There’s nothing necessarily fallacious about it, it’s just a way of marking off an argument that you, as a writer, wouldn’t want to endorse. (Sometimes this turns into weasely writing/journalism, too, but hey, again, it doesn’t have to be.)

Teaching writing at Penn, we used a book that employed the phrase “Straw Man” in this sense, and it had a “Straw Man” exercise where you’d write a short essay that just had an introduction w/thesis, a counterargument (which we called a “straw man”), then a refutation of that counterargument. And then there was a “Straw Man Plus One” assignment, where you’d…

Never mind. The point is, we wound up talking about straw men a lot. And we’d always get confused, because sometimes “straw man” would mean the fallacy, sometimes it would mean the assignment, sometimes it would be the counterargument used in that (or any) assignment, sometimes it would be the paragraph containing the counterargument…

Oy. By 2009-10, confusion about this term had reached the point where two concessions were made. First, for the philosophers in the crowd who insisted on a strict, restrictive meaning of “straw man” as a fallacy, and who didn’t want their students using fallacious “straw men” in their “Straw Man” assignments, they changed the name of the assignment to “Iron Man.” Then, as part of a general move against using gendered language on the syllabus, it turned into “Iron Person.” Meanwhile, the textbook we used still called the assignment “Straw Man,” turning confusion abetted to confusion multiplied.

I probably confused things further by referring to the “iron person” assignment as either “the robot” — the idea being, again, that you build something that then is independent of you — or “the shill.” This was fun, because I got to talk about how con men (and women) work. The idea of the shill is that they pretend to be independent, but they’re really on the con man’s side the entire time. The best shill sells what they do, so that you can’t tell they’re in on it. They’re the ideal opponent, perfect as a picture. That got rid of any lingering confusion between the fallacy and the form.

Likewise, I believe that here and now we have sorted the range of potential meanings of “straw man,” once and for all. And if you can prove that I’m wrong, well, then I’m just not going to listen to you.

One comment