
Beyond the Venn diagram

Edwards-Venn Cogwheel

Joshua Glenn buried this nugget in a comment at HiLobrow:

Anyone who has ever spent time in a conference room equipped with an overhead projector is familiar with the basic Venn diagram — three overlapping circles whose eight regions represent every possible intersection of three given sets, the eighth region being the space around the diagram. Although it resembled the intertwined rings already familiar in Christian (and later Led Zeppelin) iconography, when Venn devised the diagram in 1880, it was hailed as a conceptually innovative way to represent complex logical problems in two dimensions.

There was just one problem with it, according to British statistician, geneticist, and Venn diagram expert A.W.F. Edwards, author of the entertaining book “Cogwheels of the Mind” (Johns Hopkins): It didn’t scale up. With four sets, it turns out, circles are no use — they don’t have enough possible combinations of overlaps. Ovals work for four sets, Venn found, but after that one winds up drawing spaghetti-like messes — and, as he put it, “the visual aid for which mainly such diagrams exist is soon lost.” What to do?

A rival lecturer in mathematics at Oxford by the name of Charles Dodgson — Lewis Carroll — tried to come up with a better logical diagram by using rectangles instead of circles, but failed (though not before producing an 1887 board game based on his “triliteral” design). In fact, it wasn’t until a century later that the problem of drawing visually appealing Venn diagrams for arbitrary numbers of sets was solved — by Edwards, it turns out. In 1988 Edwards came up with a six-set diagram that was nicknamed the “Edwards-Venn cogwheel.”
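
Glenn's point about circles not scaling can be made precise with a little counting. An n-set Venn diagram needs 2^n regions (one for every subset, counting the outside), while n circles in general position can cut the plane into at most n^2 - n + 2 regions, because each new circle crosses each earlier one in at most two points. The short Python sketch below just tabulates those two numbers; the bound is a standard counting argument, not something from Glenn's post or Edwards' book.

```python
# Why circles stop working after three sets: compare the regions a true
# n-set Venn diagram needs (2**n) with the most regions n circles in
# general position can produce (n**2 - n + 2, since each new circle can
# cross each earlier circle in at most two points).
# Standard counting argument, offered here only as an illustration.

def regions_needed(n: int) -> int:
    """Regions an n-set Venn diagram must have (every subset, incl. the outside)."""
    return 2 ** n

def max_regions_from_circles(n: int) -> int:
    """Upper bound on plane regions cut out by n circles in general position."""
    return n * n - n + 2

for n in range(1, 7):
    needed = regions_needed(n)
    possible = max_regions_from_circles(n)
    verdict = "fine" if possible >= needed else "circles fall short"
    print(f"{n} sets: need {needed:2d}, circles give at most {possible:2d} -> {verdict}")
```

For four sets, circles top out at 14 of the 16 required regions, which is exactly the wall Venn hit; six sets need 64, hence the cogwheel.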

The original post, which extends Glenn’s ongoing remapping of generational lines past the mid-nineteenth century — Decadents! Pragmatists! Industrial Tyrants! Mark Twain AND Henry James! — is pretty sweet too. (For what it’s worth, one big thing Decadents and Pragmatists had in common was that they were both obsessed with generational changes.)

Now we just need a generational map that’s ALSO an Edwards-Venn cogwheel!

2 comments

Short-term time capsules

The other day, for reasons whose names I do not wish to recall, I was looking in Snarkmarket’s archives from 2007. There were chains I’d remembered, great ideas I’d forgotten, funny anachronisms, and prescient observations.

Generally, though, I was struck by how much shorter the posts were. I’m sure this is equally because in 2007, my many paragraphs were safely in the comments and because Twitter now carries most of the content that went into one-to-three-sentence “hey, look at this!” links. (Plus, migrating the archives to WordPress truncated some of the longer posts that used to have “jumps” built-in.)

Five selected links, just from July 2007:

Maybe I’ll try to do this once a month.

2 comments

Food auteurism

Here’s an idea related to “even your coffee has an author-function”: food auteurism, a phrase which seems very natural and yet before today didn’t have any hits on Google.

For most of the 20th century, after the industrialization of the food and agriculture industry, food was mostly anonymous. Traditionally, your farmer, grocer, butcher, rabbi vouched for the quality of your food, but that gave way to government inspections, certification, and standardization, plus branding.

Now, though, that industrial anonymity is troubling, and we increasingly want our food to be sourced. This is partly driven by nutrition, partly by social concerns, and partly by a need to differentiate our identity through what we eat. And it’s achieved partly through a return to a quasi-preindustrial model (farmers’ markets and local gardens), partly through a shift in brand identification (let me drink 15th Avenue Coffee instead of Starbucks), and partly through a new rise in authority of food writers and experts: Alice Waters, Michael Pollan, Mark Bittman.

It’s a new way to generate and focus cultural attention, and to help us make sense of the explosion of information and misinformation about food. Food as an information network.

Comments

Author-functions and work-functions

There are many, many noteworthy things in this interview with Clay Shirky, but this caught my attention (bold-emphasis is mine):

[W]hat we’re dealing with now, I think, is the ramification of having long-form writing not necessarily meaning physical objects, not necessarily meaning commissioning editors and publishers in the manner of making those physical objects, and not meaning any of the sales channels or the preexisting ways of producing cultural focus. This is really clear to me as someone who writes and publishes both on a weblog and books. There are certain channels of conversation in this society that you can only get into if you have written a book. Terry Gross has never met anyone in her life who has not JUST published a book. Right?

The way our culture works, depending on what field you’re operating in, certain kinds of objects (or in some cases, events) generate more cultural focus than others. Shirky gives an example from painting: “Anyone can be a painter, but the question is then, ‘Have you ever had a show; have you ever had a solo show?’ People are always looking for these high-cost signals from other people that this is worthwhile.” In music, maybe it used to be an album; in comedy, it might be an hour-long album or TV special; I’m sure you can think of others in different media. It’s a high-cost object that broadcasts its significance. It’s not a thing; it’s a work.

But, this is important: it’s even more fine-grained than that. It’s not like you can just say, “in writing, books are the most important things.” It depends on what genre of writing you’re in. If you’re a medical or scientific researcher, for instance, you don’t have to publish a book to get cultural attention; an article, if it’s in a sufficiently prestigious journal, will do the trick. And the news stories won’t even start with your name, if they get around to it at all; instead, a voice on the radio will say, “according to a new study published in Nature, scientists at the University of Pennsylvania…” The authority accrues to the institution: the university, the journal, and ultimately Science itself.

The French historian/generally-revered-writer-of-theory Michel Foucault used this difference to come up with an idea: In different cultures, different kinds of writers are accorded a different status that depends on how much authority accrues to their writing. In the ancient world, for instance, stories/fables used to circulate without much, if any, attribution of authorship; medical texts, on the other hand, needed an auctoritas like Galen or Avicenna to back them up. It didn’t make any sense to talk about “authorship” as if that term had a universal, timeless meaning. Not every writer was an author, not every writing an act of authorship. (Foucault uses a thought-experiment about Nietzsche scribbling aphorisms on one side of a sheet of paper, a laundry or shopping list on the other.)

At the same time, you can’t just ignore authorship. Even if it’s contingent, made-up, it’s still a real thing. It’s built on social conventions and serves a social function. There are rules. Depending on context, it can be construed broadly or narrowly. And it can change — and these changes can reveal things that might otherwise be hidden. For instance, from the early days of print until the 20th century, publishers in England shared some of the author-function of a book because they could be punished for what it said. At some point in the 20th century, audiences became much more interested in who the director of a film was. (In some cases, the star or producer or studio, maybe even the screenwriter still share some of that author-function.) And these social ripples — who made it, who foots the bill, who’s an authority, who gets punished? — those are all profound ways of producing “cultural focus.”

Foucault focused on authorship — the subjective side of that cultural focus — because he was super-focused on things like authority and punishment. But it’s clear that there’s an objective side of this story, too, the story of the work — and that the two trajectories, work and author, work together. You become an “author” and get to be interviewed by Terry Gross because you’ve written a book. And you get to write a book (and have someone with a suitable amount of authority publish it) because you accrue a certain amount and kind of demonstrable authority and skill (in a genre where writing a book is the appropriate kind of work).

It’s no surprise, then, that the Big Digital Shake-Up in the way cultural objects are produced, consumed, sold, disseminated, re-disseminated, etc. is shifting our concepts of both authorship and the work in many genres and media. What are the new significant objects in the fields that interest you? Pomplamoose makes music videos; Robin wrote a novella, but at least part of that “work” included the blog and community created by it; and Andrew Sullivan somehow manages to be the “author” of both the book The Conservative Soul and the blog The Daily Dish, even when it switches from Time to The Atlantic, even when someone else is guest-writing it. And while it takes writing a book to get on Fresh Air, to really get people on blogs talking about your book, it helps to have a few blog posts, reviews, and interviews about it, so there’s something besides the Amazon page to link to.

Maybe being the author of a blog is a new version of being an author of a book. I started (although I’m not the only author of) Bookfuturism because I started stringing together a bunch of work that seemed to be about the future of reading; through that, my writing here, and some of the things I wrote elsewhere, I became a kind of authority on the subject (only on the internet, but still, I like who links to me); and maybe I’ll write a book, or maybe I’ll start a blog with a different title when it’s time to write about something else. I don’t know.

It’s all being reconfigured, as we’re changing our assumptions about what and who we pay attention to.

Chimerical post-script: Not completely sure where it fits in, but I think it does: Robin and José Afonso Furtado pointed me to this post by Mike Shatzkin about the future of bookselling, arguing (I’m paraphrasing) that with online retailers like Amazon obliterating physical bookstores, we need a new kind of intermediary that helps curate and consolidate books for the consumer, “powered” by Amazon. It’s not far off from Robin’s old post about a “Starbucks API.” See? Even your coffee has an author-function.

Anyways, new authors, new publishers, new media, new works, new devices, new stores, new curators, new audiences — everything with a scrap of auctoritas is up for grabs.

8 comments

De inventione punctus

All signs suggest punctuation is in flux. In particular, our signs that mark grammatical (and sometimes semantic) distinctions are waning, while those denoting tone and voice are waxing. Furthermore, signs with a slim graphical profile (the apostrophe and comma, especially) are having a rough go of it. Compared to the smiley face or even the question mark, they’re too visually quiet for most casual writers to notice or remember, even (or especially) on our high-def screens.

But we’re also working within the finite possibilities and inherited structures of our keyboards. It’s the age of secondary literacy: writing and reading transformed by electronic communication, from television to the telephone.

See 1. Jan Swafford’s unfortunately titled “Why e-books will never replace real books,” which takes seriously Marshall McLuhan’s argument that print (and computers, too) change the ways we think and see:

I’ve taught college writing classes for a long time, and after computers came in, I began to see peculiar stuff on papers that I hadn’t seen before: obvious missing commas and apostrophes, when I was sure most of those students knew better. It dawned on me that they were doing all their work on-screen, where it’s hard to see punctuation. I began to lecture them about proofing on paper, although, at first, I didn’t make much headway. They were unused to dealing with paper until the final draft, and they’d been taught never to make hand corrections on the printout. They edited on-screen and handed in the hard copy without a glance.

Handwriting is OK! I proclaimed. I love to see hand corrections! Then I noticed glitches in student writing that also resulted from editing on-screen: glaring word and phrase redundancies, forgetting to delete revised phrases, strangely awkward passages. I commenced an ongoing sermon: You see differently and in some ways better on paper than on computer. Your best editing is on paper. Try it and see if I’m right. You’ll get a better grade. The last got their attention. The students were puzzled and skeptical at first, but the ones who tried it often ended up agreeing with me.

And especially, see 2. Anne Trubek’s “The Very Long History of Emoticons”:

A punctuation purist would claim that emoticons are debased ways to signal tone and voice, something a good writer should be able to indicate with words. But the contrary is true: The history of punctuation is precisely the history of using symbols to denote tone and voice. Seen in this way, emoticons are simply the latest comma or quotation mark… The earliest marks indicated how a speaker’s voice should adjust to reflect the tone of the words. Punctus interrogativus is a precursor to today’s question mark, and it indicates that the reader should raise his voice to indicate inquisitiveness. Tone and voice were literal in those days: Punctuation told the speaker how to express the words he was reading out loud to his audience, or to himself. A question mark, a comma, a space between two words: These are symbols that denote written tone and voice for a primarily literate—as opposed to oral—culture. There is no significant difference between them and a modern emoticon.

I ♥ @atrubek. And I’m feeling all zen about this observation of hers, too: “A space is a punctuation mark.” There’s a whole philosophy in that idea, I know it.

I’m also feeling all zen about this idea that computer screens (keyboards, too) are sites of multiple, overlapping, and conflicting cultures, and that it’s up to us (in part) to help decide what the assumptions of those cultures are.
Here, see 1. The Slow Media Manifesto, and 2. Nick Carr’s much-debated (whaaa? Nick Carr in a debate?) post about “delinkification”, which is actually a pretty solid meditation on the rhetoric of the hyperlink (you could say, the way we punctuate them). In short, if you think the superimposed montage of words and links in in-text hyperlinks poses some cognition/decision problems, which might not be appropriate to all kinds of reading, then it might make sense to try using a different strategy (like footnoting) instead. And being the relatively sophisticated mammals we are, in different contexts, we can sort these strategies out (even if we don’t fully understand that or how we’re doing it).

11 comments

Like a school of fish

I love little observations of the everyday like this one in Nick Paumgarten’s essay on elevators:

Passengers seem to know instinctively how to arrange themselves in an elevator. Two strangers will gravitate to the back corners, a third will stand by the door, at an isosceles remove, until a fourth comes in, at which point passengers three and four will spread toward the front corners, making room, in the center, for a fifth, and so on, like the dots on a die. With each additional passenger, the bodies shift, slotting into the open spaces. The goal, of course, is to maintain (but not too conspicuously) maximum distance and to counteract unwanted intimacies—a code familiar (to half the population) from the urinal bank and (to them and all the rest) from the subway. One should face front. Look up, down, or, if you must, straight ahead. Mirrors compound the unease.

This reminds me of what is quite possibly the best poetic description of riding the elevator, part III of T.S. Eliot’s “Burnt Norton” (from Four Quartets). In particular, it’s about the long elevator ride at the tube stop at Russell Square:

Here is a place of disaffection
Time before and time after
In a dim light: neither daylight
Investing form with lucid stillness
Turning shadow into transient beauty
With slow rotation suggesting permanence
Nor darkness to purify the soul
Emptying the sensual with deprivation
Cleansing affection from the temporal.
Neither plenitude nor vacancy. Only a flicker
Over the strained time-ridden faces
Distracted from distraction by distraction
Filled with fancies and empty of meaning
Tumid apathy with no concentration
Men and bits of paper, whirled by the cold wind
That blows before and after time,
Wind in and out of unwholesome lungs
Time before and time after.
Eructation of unhealthy souls
Into the faded air, the torpid
Driven on the wind that sweeps the gloomy hills of London,
Hampstead and Clerkenwell, Campden and Putney,
Highgate, Primrose and Ludgate. Not here
Not here the darkness, in this twittering world.

Descend lower, descend only
Into the world of perpetual solitude,
World not world, but that which is not world,
Internal darkness, deprivation
And destitution of all property,
Desiccation of the world of sense,
Evacuation of the world of fancy,
Inoperancy of the world of spirit;
This is the one way, and the other
Is the same, not in movement
But abstention from movement; while the world moves
In appetency, on its metalled ways
Of time past and time future.

(Why hasn’t “Not here the darkness, in this twittering world” been quoted more regularly?)

Another great bit from Paumgarten, which relates to my earlier “potatoes, paper, petroleum” observation about the 19th century:

The elevator, underrated and overlooked, is to the city what paper is to reading and gunpowder is to war. Without the elevator, there would be no verticality, no density, and, without these, none of the urban advantages of energy efficiency, economic productivity, and cultural ferment. The population of the earth would ooze out over its surface, like an oil slick, and we would spend even more time stuck in traffic or on trains, traversing a vast carapace of concrete.

A meta/editorial/critical note: Paumgarten’s essay has a regrettable B-story, about a guy who worked at a magazine who was trapped in an elevator. He dribbles it out graf by graf, to create the illusion of dramatic tension. Just speaking for myself, I didn’t care; also, it kind of bothers me that this is starting to become one of the default templates for magazine writing. Either find a reason to do it and do it well, or just… try something else.

2 comments

This is what sports liveblogging is for

Every sport, I believe, has its own optimal medium. For baseball, I like the intimacy of radio, and the timing and traditions of the medium lend themselves well to a sport driven by discrete, well-defined actions. Pro and college football actually work better on television than in person — unless you’re intoxicated, when all bets are off. Soccer, as this year’s World Cup proves, lends itself to Twitter’s ability to celebrate goals, talk trash, and complain about calls (or diving for calls) in truncated bursts. Basketball, hockey, and (usually) tennis have a combination of speed, intimacy, and crowd effect that makes the stadium experience hardest to beat or replicate.

But what about a tennis match, like that between John Isner and Nicolas Mahut at Wimbledon, that, because of a final set with no tiebreak and repeated evening suspensions, spills over into more than ten hours and a third day? In such a case, stadium attendance and television alike become gruesome; you’re watching something that resembles a tennis match, but feels more like an all-night dance-a-thon. It’s horrible and fascinating at the same time. You can’t bear to watch, but you need periodic updates, because at any moment, something — anything — may happen.

Here, then, is the perfect sports experience for the liveblog. And here, too, The Guardian’s Xan Brooks is the master, riveting to read even in retrospect. Consider:

4.05pm: The Isner-Mahut battle is a bizarre mix of the gripping and the deadly dull. It’s tennis’s equivalent of Waiting For Godot, in which two lowly journeymen comedians are forced to remain on an outside court until hell freezes over and the sun falls from the sky. Isner and Mahut are dying a thousand deaths out there on Court 18 and yet nobody cares, because they’re watching the football. So the players stand out on their baseline and belt aces past each other in a fifth set that has already crawled past two hours. They are now tied at 18 games apiece.

On and on they go. Soon they will sprout beards and their hair will grow down their backs, and their tennis whites will yellow and then rot off their bodies. And still they will stand out there on Court 18, belting aces and listening as the umpire calls the score. Finally, I suppose, one of them will die.

Ooh, I can see the football out of the corner of my eye. England still 1-0 up!

And, four and a half hours later:

8.40pm: It’s 56 games all and darkness is falling. This, needless to say, is not a good development, because everybody knows that zombies like the dark. So far in this match they’ve been comparatively puny and manageable, only eating a few of the spectators in between bashing their serves.

But come night-fall the world is their oyster. They will play on, play on, right through until dawn. Perhaps they will even leave the court during the change-overs to munch on other people. Has Roger Federer left the grounds? Perhaps they will munch on him, hounding him down as he runs for his car, disembowelling him in the parking lot and leaving Wimbledon without its reigning champion. Maybe they will even eat the trophy too.

Growing darker, darker all the while.

They are still tied at 59 all in the fifth and final set. This set alone is longer than any other match in tennis history. Play will resume tomorrow.

One comment

McChrystal's secret strategy

There’s been a lot of noise about Gen. Stanley McChrystal’s Obama-badmouthing candor with Rolling Stone, but besides perhaps Colson Whitehead (“I didn’t know they had truffle fries in Afghanistan”), Andrew Fitzgerald at Current has distilled it to its essence better than anyone on the net: first substance (“Focusing on the few controversial remarks misses the point of this RS McChrystal piece. Really tough look at Afg.”), then snark (“Let’s say McChrystal is fired… How long before he shows up as a commentator on FNC? Is it months? Weeks? Hours?”).

When I saw this last tweet, I had an epiphany. All the commentators and journalists were wondering how McChrystal could have let this bonehead, 99%-sure-to-cost-your-job move happen. Did he think he was talking off the record? Was he blowing off steam? Did he think no one would find out? And if he wanted to trash the administration publicly, why in the world did he give this info to Rolling Stone? I mean, did he even see Almost Famous? (Is Obama Billy Crudup? I kind of think he is.)

But let’s just suppose that this was McChrystal’s intention all along. I pretty much buy the New York magazine profile of Sarah Palin, which lays out why she resigned her office: being governor of Alaska is a crummy, poorly-paying job, her family was going broke fighting legal bills, and she was getting offers she couldn’t refuse. It’s like being an Ivy League liberal arts major who gets offered a job at Goldman Sachs right out of college; it’s not what you came there to do, but how are you going to let that go? (Besides, it isn’t like you have to know a ton about what you’re doing; you’re there for who you are already.) Also, Palin could do the new math of GOP politics in her head — public office is less important than being a public figure with a big platform. Or as Andrew says, “FNC commentator is the new Presidential candidate.”

Well, let’s try this equation: if it’s tough to be the governor of Alaska, how much harder does it have to be to be in charge of Afghanistan? What are the chances that you’re going to come out of this thing smelling like roses anyways? How can you remove yourself from that position while still coming off as an honorable, somewhat reluctant, but still passionate critic of the administration? And make a splash big enough doing it that it gets beyond policy circles and editorial pages?

I have no idea whether it’s true, but it’s worth entertaining the possibility that the good general threaded the needle here.

6 comments

Machines making mistakes

Why Jonah Lehrer can’t quit his janky GPS:

The moral is that it doesn’t take much before we start attributing feelings and intentions to a machine. (Sometimes, all it takes is a voice giving us instructions in English.) We are consummate agency detectors, which is why little kids talk to stuffed animals and why I haven’t thrown my GPS unit away. Furthermore, these mistaken perceptions of agency can dramatically change our response to the machine. When we see the device as having a few human attributes, we start treating it like a human, and not like a tool. In the case of my GPS unit, this means that I tolerate failings that I normally wouldn’t. So here’s my advice for designers of mediocre gadgets: Give them voices. Give us an excuse to endow them with agency. Because once we see them as humanesque, and not just as another thing, we’re more likely to develop a fondness for their failings.

This connects loosely with the first Snarkmarket post I ever commented on, more than six (!) years ago.

2 comments

Recipes for history

Two links on the history of the city/country dynamic in civilizations, and they go great together. The first one is about older stuff: an interview with Peter Heather about his book Empires and Barbarians: The Fall of Rome and the Birth of Modern Europe, which looks at the whole first millennium rather than the usual rise and fall:

What the book is trying to show is that the Roman Empire came into existence at a point where the relative underdevelopment of central, eastern, and northern Europe meant that the comparatively more developed Mediterranean world could provide a powerbase of sufficient strength to dominate the continent. As soon as development in northern Europe caught up, however, that relationship was bound to reverse, no matter what any Roman ruler might have tried to do about it. You can also see the first millennium as the time when Europe as some kind of unitary entity comes into being. By the end of it, dynasties are in place across the vast majority of its territory, and their subsequent history will lead pretty directly to the modern map of states. The same had not been remotely true at the birth of Christ a thousand years before…

To my mind, the most relevant finding is that this whole process of economic development and state formation in the non-imperial Europe of the first millennium was the result of a developing range of contacts with the more developed imperial world. In a process highly analogous to modern globalization, flows of wealth, weaponry, technology, and ideas ran from more developed Europe into its less developed periphery in increasing quantities and over a wider geographical area as the first millennium progressed. And, as in modern globalization, the benefits of all this were not shared equally by the totality of the population in non-imperial Europe, but were largely monopolized by particular groupings who used the wealth, weaponry, and ideologies to build new political structures which put themselves firmly at the head of their own societies.

Sometimes it’s impossible to remain imperial. If your imperial power—as it often is—is based on a pattern of precocious regional development, then as soon as surrounding regions catch up, as they undoubtedly will, that power must ebb (the fate of the Mediterranean in the first millennium). In these circumstances, it is important to accept the inevitable and gracefully renegotiate a new strategic balance of power, or one is likely to be imposed by force.

The other link is Edible Geography’s transcript of a talk by historian Rachel Laudan, who looks at the rise of Wal-Mart in Mexico City (and the end of hand-ground tortillas and the ridiculous amount of work/time that goes into them) from a similar long-historical perspective:

There’s only one way to feed a city, at least historically, and that’s to feed it with grains—rice, wheat, maize, barley, sorghum, etc. You can go round the world, and there just aren’t cities that aren’t fed on grains, except for possibly in the high Andes. Basically, to maintain a city, you’ve got to get grains into it. Be it Bangkok, be it Guangzhou, be it London, or be it Rome—throughout history, grains and cities are two sides of the coin.

And what do you need in terms of grains? For most of history—really, until about 150 years ago—most people in most cities, except for the very wealthy, lived almost exclusively on grains. They got about ninety percent of their calories from grains.

That meant that for every single person in a city you had to have 2 lbs of grains a day, turned into something that people could eat.

[Holding up a standard supermarket package of tortillas.] This is a kilo of tortillas. That’s what one person in a city needed. It’s the same weight, more or less, whatever the grain is—you can go to the historical record, you can research in China, in India, in the Near East, and you will still be talking about 2 lbs of grain-based food for every person in the city every day.

So you can do some calculations. If you’ve got a city of a million, like ancient Rome, you’ve got to get two million pounds of grain into the city every day. It’s the same for all the cities in the world—it’s 2 lbs of grain per person. That’s the power, that’s the energy that drives cities.
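
Laudan’s arithmetic is worth writing out, if only to feel its scale. Here’s a minimal sketch, assuming nothing beyond her figure of 2 lbs (roughly one kilo) of grain-based food per person per day; the second city size below is an illustrative assumption, not a number from the talk.

```python
# Back-of-the-envelope version of Laudan's grain arithmetic.
# The per-person figure (2 lbs of grain-based food per day) is hers;
# any city population other than ancient Rome's ~1 million is an
# illustrative assumption.

LBS_PER_PERSON_PER_DAY = 2  # roughly one kilo of tortillas, bread, rice...

def daily_grain_lbs(population: int) -> int:
    """Pounds of grain a city of this size has to take in every single day."""
    return population * LBS_PER_PERSON_PER_DAY

cities = [
    ("ancient Rome", 1_000_000),                  # figure from the talk
    ("a 9-million-person megacity", 9_000_000),   # illustrative assumption
]

for name, population in cities:
    lbs = daily_grain_lbs(population)
    print(f"{name}: {lbs:,} lbs of grain per day (~{lbs / 2000:,.0f} short tons)")
```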

Even when you watch a TV series like Rome, one of the things that comes across is how obsessed the Romans were with grain — keeping grain coming into the city, getting it to the markets, using it to feed their armies, maintaining imperial control over regions (like Egypt) that supplied the bulk of their grain. For them, corn crises were like our oil crises; peak corn was their peak oil.

And they knew it. Google “Cicero corn.” It’s amazing how much he talks about it; disputes about corn come up in one lawsuit or political speech after another like paternity tests on “Maury.”

And as the example of Mexico City shows, this is far from ancient history. One of my favorite little pieces of cultural criticism is Stephen Greenblatt and Catherine Gallagher’s essay “The Potato in the Materialist Imagination,” which looks at debates about potatoes and population in 19th-century England. A point that Laudan makes is that you can’t just eat grain like you can fruit — at a minimum, you’ve got to shuck, grind, and cook it, turn it into couscous or tortillas or whatever. When you’re talking about bread, or anything that adds extra ingredients, the degree of difficulty goes up.

But that degree of difficulty is what a civilization is — a division of labor that necessitates socialization, technology, rituals, aggregation. The English were terrified about growing potatoes in Ireland, not because they were worried about famines, but the opposite. Because potatoes grew underground, they thought the crop was famine-resistant — nobody yet had any experience of crops being destroyed by blight. No, they were worried because potatoes worked too well — you could dig them out of the ground, boil them, and eat them, and sustain a huge population for a fraction of the cost of bread. And without bread, no civilization. This is what the English did to maintain their labor force in Ireland, and what the Prussians did in Poland: they leaned on the potato, dug calories out of the ground, and grew and grew until it almost killed them, which is why America is full of Walshes and Sczepanskis today. (Laudan refers to a similar crisis in Spain: when the Spanish first ground maize, they didn’t use alkali to process it the way the native Mexicans did, and many, many people contracted pellagra, a niacin-deficiency disease.)

There were three transformative miracles in the nineteenth century that made the world what it is now, and they all came out of the ground: potatoes, petroleum, and paper made from trees. We thought they’d all last forever. We’ve been living in their shadow all along, and only now are we beginning to shiver.

9 comments