These two blockquotes, curated by Andrew Simone and Alan Jacobs respectively, arrived in my RSS reader within moments of each other. I liked Jacobs’s adjective, which applies to Simone’s selection, too: “Kierkegaardian.”
The first is from Jim Rossignol’s This Gaming Life, riffing on Lars Svendsen’s A Philosophy of Boredom:
Fernando Pessoa… identifies boredom as “the feeling that there’s nothing worth doing.” The bored are those people for whom no activity seems satisfactory. The problem is often not that there is a lack of things to do in general but, rather, that there is a lack of things that are worthwhile. Boredom can arise in all kinds of situations, but it usually makes itself known when we cannot do what we want to do or when we must do something we do not wish to do or something we cannot find a satisfactory reason for. “Boredom is not a question of idleness,” suggests Svendsen, “but of meaning.” Boredom does not, however, equate to the kind of meaninglessness found in depression. The bored are not necessarily unhappy with life; they are simply unfulfilled by circumstances, activities, and the things around them.
The second is from Walker Percy’s “Bourbon, Neat”:
Not only should connoisseurs of bourbon not read this article, neither should persons preoccupied with the perils of alcoholism, cirrhosis, esophageal hemorrhage, cancer of the palate, and so forth — all real dangers. I, too, deplore these afflictions. But, as between these evils and the aesthetic of bourbon drinking, that is, the use of bourbon to warm the heart, to reduce the anomie of the late twentieth century, to cut the cold phlegm of Wednesday afternoons, I choose the aesthetic. What, after all, is the use of not having cancer, cirrhosis, and such, if a man comes home from work every day at five-thirty to the exurbs of Montclair or Memphis and there is the grass growing and the little family looking not quite at him but just past the side of his head, and there’s Cronkite on the tube and the smell of pot roast in the living room, and inside the house and outside in the pretty exurb has settled the noxious particles and the sadness of the old dying Western world, and him thinking: “Jesus, is this it? Listening to Cronkite and the grass growing?”
Is this one reason why we’re giving up on TV as our primary mode of consuming cognitive surplus? Creating something, even if it’s just a Wikipedia article about Thundercats, seems more meaningful? Or (alternative hypothesis) are people ROFLing at LOLCats mostly drunk?
Joshua Glenn buried this nugget in a comment at HiLobrow:
Anyone who has ever spent time in a conference room equipped with an overhead projector is familiar with the basic Venn diagram — three overlapping circles whose eight regions represent every possible intersection of three given sets, the eighth region being the space around the diagram. Although it resembled the intertwined rings already familiar in Christian (and later Led Zeppelin) iconography, when Venn devised the diagram in 1880, it was hailed as a conceptually innovative way to represent complex logical problems in two dimensions.
There was just one problem with it, according to British statistician, geneticist, and Venn diagram expert A.W.F. Edwards, author of the entertaining book “Cogwheels of the Mind” (Johns Hopkins): It didn’t scale up. With four sets, it turns out, circles are no use — they don’t have enough possible combinations of overlaps. Ovals work for four sets, Venn found, but after that one winds up drawing spaghetti-like messes — and, as he put it, “the visual aid for which mainly such diagrams exist is soon lost.” What to do?
A rival lecturer in mathematics at Oxford by the name of Charles Dodgson — Lewis Carroll — tried to come up with a better logical diagram by using rectangles instead of circles, but failed (though not before producing an 1887 board game based on his “triliteral” design). In fact, it wasn’t until a century later that the problem of drawing visually appealing Venn diagrams for arbitrary numbers of sets was solved — by Edwards, it turns out. In 1988 Edwards came up with a six-set diagram that was nicknamed the “Edwards-Venn cogwheel.”
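To put numbers on that complaint about circles, here is a quick back-of-the-envelope check (mine, not anything from Glenn's piece or Edwards's book): a true Venn diagram for n sets needs 2^n regions, one for every possible combination of memberships, while n circles in general position can divide the plane into at most n² − n + 2 regions.

```python
# Why circles stop working at four sets: a Venn diagram for n sets
# needs 2**n regions (one per subset, including "in none of them"),
# but n circles in general position can cut the plane into at most
# n**2 - n + 2 regions, counting the region outside all of them.
for n in range(1, 6):
    regions_needed = 2 ** n
    circle_maximum = n * n - n + 2
    verdict = "fine" if circle_maximum >= regions_needed else "circles fall short"
    print(f"{n} sets: need {regions_needed}, circles give at most {circle_maximum} -> {verdict}")
```

The numbers match exactly at three sets, which is why the familiar three-circle diagram works; at four sets you need 16 regions and circles can give you at most 14, which is why Venn reached for ovals.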
The original post, which extends Glenn’s ongoing remapping of generational lines past the mid-nineteenth century — Decadents! Pragmatists! Industrial Tyrants! Mark Twain AND Henry James! — is pretty sweet too. (For what it’s worth, one big thing Decadents and Pragmatists had in common was that they were both obsessed with generational changes.)
Now we just need a generational map that’s ALSO an Edwards-Venn cogwheel!
The other day, for reasons whose names I do not wish to recall, I was looking in Snarkmarket’s archives from 2007. There were chains I’d remembered, great ideas I’d forgotten, funny anachronisms, and prescient observations.
Generally, though, I was struck by how much shorter the posts were. I’m sure this is equally because, in 2007, my many paragraphs were safely confined to the comments, and because Twitter now carries most of the content that used to go into one-to-three-sentence “hey, look at this!” links. (Plus, migrating the archives to WordPress truncated some of the longer posts that used to have “jumps” built in.)
Five selected links, just from July 2007:
Maybe I’ll try to do this once a month.
Here’s an idea related to “even your coffee has an author-function”: food auteurism, a phrase which seems very natural and yet before today didn’t have any hits on Google.
For most of the 20th century, after the industrialization of food and agriculture, food was mostly anonymous. Traditionally, your farmer, grocer, butcher, or rabbi vouched for the quality of your food, but that gave way to government inspections, certification, and standardization, plus branding.
Now, though, that industrial anonymity is troubling, and we increasingly want our food to be sourced. This is partly driven by nutrition, partly by social concerns, and partly by a need to differentiate our identity through what we eat. And it’s achieved partly through a return to a quasi-preindustrial model (farmers’ markets and local gardens), partly through a shift in brand identification (let me drink 15th Avenue Coffee instead of Starbucks), and partly through a new rise in authority of food writers and experts: Alice Waters, Michael Pollan, Mark Bittman.
It’s a new way to generate and focus cultural attention, and to help us make sense of the explosion of information and misinformation about food. Food as an information network.
There are many, many noteworthy things in this interview with Clay Shirky, but this caught my attention (bold-emphasis is mine):
[W]hat we’re dealing with now, I think, is the ramification of having long-form writing not necessarily meaning physical objects, not necessarily meaning commissioning editors and publishers in the manner of making those physical objects, and not meaning any of the sales channels or the preexisting ways of producing cultural focus. This is really clear to me as someone who writes and publishes both on a weblog and books. There are certain channels of conversation in this society that you can only get into if you have written a book. Terry Gross has never met anyone in her life who has not JUST published a book. Right?
The way our culture works, depending on what field you’re operating in, certain kinds of objects (or in some cases, events) generate more cultural focus than others. Shirky gives an example from painting: “Anyone can be a painter, but the question is then, ‘Have you ever had a show; have you ever had a solo show?’ People are always looking for these high-cost signals from other people that this is worthwhile.” In music, maybe it used to be an album; in comedy, it might be an hour-long album or TV special; I’m sure you can think of others in different media. It’s a high-cost object that broadcasts its significance. It’s not a thing; it’s a work.
But, this is important: it’s even more fine-grained than that. It’s not like you can just say, “in writing, books are the most important things.” It depends on what genre of writing you’re in. If you’re a medical or scientific researcher, for instance, you don’t have to publish a book to get cultural attention; an article, if it’s in a sufficiently prestigious journal, will do the trick. And the news stories won’t even start with your name, if they get around to it at all; instead, a voice on the radio will say, “according to a new study published in Nature, scientists at the University of Pennsylvania…” The authority accrues to the institution: the university, the journal, and ultimately Science itself.
The French historian/generally-revered-writer-of-theory Michel Foucault used this difference to come up with an idea: In different cultures, different kinds of writers are accorded a different status that depends on how much authority accrues to their writing. In the ancient world, for instance, stories/fables used to circulate without much, if any, attribution of authorship; medical texts, on the other hand, needed an auctoritas like Galen or Avicenna to back them up. It didn’t make any sense to talk about “authorship” as if that term had a universal, timeless meaning. Not every writer was an author, not every writing an act of authorship. (Foucault uses a thought-experiment about Nietzsche scribbling aphorisms on one side of a sheet of paper, a laundry or shopping list on the other.)
At the same time, you can’t just ignore authorship. Even if it’s contingent, made-up, it’s still a real thing. It’s built on social conventions and serves a social function. There are rules. Depending on context, it can be construed broadly or narrowly. And it can change — and these changes can reveal things that might otherwise be hidden. For instance, from the early days of print until the 20th century, publishers in England shared some of the author-function of a book because they could be punished for what it said. At some point in the 20th century, audiences became much more interested in who the director of a film was. (In some cases, the star or producer or studio, maybe even the screenwriter still share some of that author-function.) And these social ripples — who made it, who foots the bill, who’s an authority, who gets punished? — those are all profound ways of producing “cultural focus.”
Foucault focused on authorship — the subjective side of that cultural focus — because he was super-focused on things like authority and punishment. But it’s clear that there’s an objective side of this story, too, the story of the work — and that the two trajectories, work and author, work together. You become an “author” and get to be interviewed by Terry Gross because you’ve written a book. And you get to write a book (and have someone with a suitable amount of authority publish it) because you accrue a certain amount and kind of demonstrable authority and skill (in a genre where writing a book is the appropriate kind of work).
It’s no surprise, then, that the Big Digital Shake-Up in the way cultural objects are produced, consumed, sold, disseminated, re-disseminated, etc. is shifting our concepts of both authorship and the work in many genres and media. What are the new significant objects in the fields that interest you? Pomplamoose makes music videos; Robin wrote a novella, but at least part of that “work” included the blog and community created by it; and Andrew Sullivan somehow manages to be the “author” of both the book The Conservative Soul and the blog The Daily Dish, even when it switches from Time to The Atlantic, even when someone else is guest-writing it. And while it takes writing a book to get on Fresh Air, to really get people on blogs talking about your book, it helps to have a few blog posts, reviews, and interviews about it, so there’s something besides the Amazon page to link to.
Maybe being the author of a blog is a new version of being an author of a book. I started (although I’m not the only author of) Bookfuturism because I started stringing together a bunch of work that seemed to be about the future of reading; through that, my writing here, and some of the things I wrote elsewhere, I became a kind of authority on the subject (only on the internet, but still, I like who links to me); and maybe I’ll write a book, or maybe I’ll start a blog with a different title when it’s time to write about something else. I don’t know.
It’s all being reconfigured, as we’re changing our assumptions about what and who we pay attention to.
Chimerical post-script: Not completely sure where it fits in, but I think it does: Robin and José Afonso Furtado pointed me to this post by Mike Shatzkin about the future of bookselling, arguing (I’m paraphrasing) that with online retailers like Amazon obliterating physical bookstores, we need a new kind of intermediary that helps curate and consolidate books for the consumer, “powered” by Amazon. It’s not far off from Robin’s old post about a “Starbucks API.” See? Even your coffee has an author-function.
Anyways, new authors, new publishers, new media, new works, new devices, new stores, new curators, new audiences — everything with a scrap of auctoritas is up for grabs.
I love this. Matt Jones at BERG shares a list of totally uncool technologies. Mice! Kiosks! CDs! Landline phones! 512MB flash drives!
Matt argues that these technologies all live in the Trough of Disillusionment (which is where you fall after cresting the heights of hype), and that recombining or recontextualizing these technologies…
…can expose a previously unexploited affordance or feature of the technology – that was not brought to the fore by the original manufacturers or hype that surrounded it. By creating a chimera, you can indulge in some material exploration.
The rest of the post is really interesting, and you should check it out. But I want to dwell on the word “chimera” for a second.
We obviously love hybrids and interdisciplinary thinking here at Snarkmarket. But you know, I think we might love chimeras even more.
Hybrids are smooth and neat. Interdisciplinary thinking is diplomatic; it thrives in a bucolic university setting. Chimeras, though? Man, chimeras are weird. They’re just a bunch of different things bolted together. They’re abrupt. They’re discontinuous. They’re impolitic. They’re not plausible; you look at a chimera and you go, “yeah right.” And I like that! Chimeras are on the very edge of the recombinatory possible. Actually—they’re over the edge.
Tim’s last post feels chimeric to me.
I was going for something chimeric with this post, I think.
Chimeric thinking. It’s a thing.
All signs suggest punctuation is in flux. In particular, our signs that mark grammatical (and sometimes semantic) distinctions are waning, while those denoting tone and voice are waxing. Furthermore, signs with a slim graphical profile (the apostrophe and comma, especially) are having a rough go of it. Compared to the smiley face or even the question mark, they’re too visually quiet for most casual writers to notice or remember, even (or especially) on our high-def screens.
But we’re also working within the finite possibilities and inherited structures of our keyboards. It’s the age of secondary literacy: writing and reading transformed by electronic communication, from television to the telephone.
See 1. Jan Swafford’s unfortunately titled “Why e-books will never replace real books,” which takes seriously Marshall McLuhan’s argument that print (and computers, too) change the ways we think and see:
I’ve taught college writing classes for a long time, and after computers came in, I began to see peculiar stuff on papers that I hadn’t seen before: obvious missing commas and apostrophes, when I was sure most of those students knew better. It dawned on me that they were doing all their work on-screen, where it’s hard to see punctuation. I began to lecture them about proofing on paper, although, at first, I didn’t make much headway. They were unused to dealing with paper until the final draft, and they’d been taught never to make hand corrections on the printout. They edited on-screen and handed in the hard copy without a glance.
Handwriting is OK! I proclaimed. I love to see hand corrections! Then I noticed glitches in student writing that also resulted from editing on-screen: glaring word and phrase redundancies, forgetting to delete revised phrases, strangely awkward passages. I commenced an ongoing sermon: You see differently and in some ways better on paper than on computer. Your best editing is on paper. Try it and see if I’m right. You’ll get a better grade. The last got their attention. The students were puzzled and skeptical at first, but the ones who tried it often ended up agreeing with me.
And especially, see 2. Anne Trubek’s “The Very Long History of Emoticons”:
A punctuation purist would claim that emoticons are debased ways to signal tone and voice, something a good writer should be able to indicate with words. But the contrary is true: The history of punctuation is precisely the history of using symbols to denote tone and voice. Seen in this way, emoticons are simply the latest comma or quotation mark… The earliest marks indicated how a speaker’s voice should adjust to reflect the tone of the words. Punctus interrogativus is a precursor to today’s question mark, and it indicates that the reader should raise his voice to indicate inquisitiveness. Tone and voice were literal in those days: Punctuation told the speaker how to express the words he was reading out loud to his audience, or to himself. A question mark, a comma, a space between two words: These are symbols that denote written tone and voice for a primarily literate—as opposed to oral—culture. There is no significant difference between them and a modern emoticon.
I ♥ @atrubek. And I’m feeling all zen about this observation of hers, too: “A space is a punctuation mark.” There’s a whole philosophy in that idea, I know it.
I’m also feeling all zen about this idea that computer screens (keyboards, too) are sites of multiple, overlapping, and conflicting cultures, and that it’s up to us (in part) to help decide what the assumptions of those cultures are.
Here, see 1. The Slow Media Manifesto, and 2. Nick Carr’s much-debated (whaaa? Nick Carr in a debate?) post about “delinkification”, which is actually a pretty solid meditation on the rhetoric of the hyperlink (you could say, the way we punctuate with links). In short, if you think the superimposed montage of word and link that in-text hyperlinks create poses some cognition/decision problems, and might not be appropriate to all kinds of reading, then it might make sense to try a different strategy (like footnoting) instead. And being the relatively sophisticated mammals we are, we can sort these strategies out in different contexts (even if we don’t fully understand whether or how we’re doing it).
I saw the new video game Red Dead Redemption for the first time this weekend, courtesy of my pal Wilson, who described it (and I paraphrase) as “every awesome Western ever, combined.”
It is indeed totally stunning, and it’s got me thinking about Westerns. Among other things:
What clicks in your mind when you think about Westerns? Any recent movies I ought to see? Any other fun stuff out there?
Update: Yes, this post was Tim-bait, and whoah yes, he delivers. I’m considering just pasting his comment into the body of the post and moving what I wrote to the comments…
Sometimes you run across an idea so counter-intuitive and brain-bending that you immediately want to splice it into every domain you can think of. Sort of like trying a novel chemical compound against a bunch of cancers: does it work here? How about here? Or here?
That’s how I feel about crash-only software (link goes to a PDF in Google’s viewer). Don’t pay too much attention to the technical details; just check out the high-level description:
Crash-only programs crash safely and recover quickly. There is only one way to stop such software—by crashing it—and only one way to bring it up—by initiating recovery.
Wow. The only way to stop it is by crashing it. The normal shutdown process is the crash.
Let’s go a little deeper. You can imagine that commands and events follow “code paths” through software. For instance, when you summoned up this text, your browser followed a particular code path. And people who use browsers do this a lot, right? So you can bet your browser’s “load and render text” code path is fast, stable and bug-free.
But what about a much rarer code path? One that goes: “load and render text, but uh-oh, it looks like the data for the font outlines got corrupted halfway through the rendering process”? That basically never happens; it’s possible that that code path has never been followed. So it’s more likely that there’s a bug lurking there. That part of the browser hasn’t been tested much. It’s soft and uncertain.
One strategy to avoid these soft spots is to follow your worst-case code paths as often as your best-case code paths (without waiting for, you know, the worst case)—or even to make both code paths the same. And crash-only software is sort of the most extreme extension of that idea.
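To make that concrete, here is a minimal sketch of the crash-only idea, using a toy key-value store of my own invention (the class and file names are mine, not from the paper): every write is appended to a journal and flushed before it’s acknowledged, startup always means replaying that journal, and there is deliberately no clean-shutdown method, because killing the process is the shutdown.

```python
import json
import os


class CrashOnlyStore:
    """Toy key-value store with no clean-shutdown path (illustrative sketch).

    Startup is always recovery: state is rebuilt by replaying the journal.
    Stopping the store means crashing it (killing the process); the next
    startup simply recovers again.
    """

    def __init__(self, journal_path="store.journal"):
        self.journal_path = journal_path
        self.data = {}
        self._recover()  # the *only* startup path is the recovery path
        self.journal = open(self.journal_path, "a")

    def _recover(self):
        if not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as f:
            for line in f:
                try:
                    key, value = json.loads(line)
                except ValueError:
                    break  # a torn final line from the last crash; safe to ignore
                self.data[key] = value

    def put(self, key, value):
        # Make the write durable before acknowledging it, so a crash at any
        # moment leaves at worst one torn trailing line in the journal.
        self.journal.write(json.dumps([key, value]) + "\n")
        self.journal.flush()
        os.fsync(self.journal.fileno())
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)


# Note what's missing: there is no close(), stop(), or shutdown() method.
# The worst-case path (crash, then recovery) is the only path there is.
```

Because recovery is the only way to start, it gets exercised on every single launch; it can’t rot in an untested corner of the codebase the way a rare worst-case path does.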
Maybe there are biological systems that already follow this practice, at least loosely. I’m thinking of seeds that are activated by the heat of a forest fire. It’s like: “Oh no! Worst-case scenario! Fiery apocalypse! … Exactly what we were designed for.” And I’m thinking of bears hibernating—a sort of controlled system crash every winter.
What else could we apply crash-only thinking to? Imagine a crash-only government, where the transition between administrations is always a small revolution. In a system like that, you’d optimize for revolution—build buffers around it—and as a result, when a “real” revolution finally came, it’d be no big deal.
Or imagine a crash-only business that goes bankrupt every four years as part of its business plan. Every part of the enterprise is designed to scatter and re-form, so the business can withstand even an existential crisis. It’s a ferocious competitor because it fears nothing.
Those are both fanciful examples, I know, but I’m having fun just turning the idea around in my head. What does crash-only thinking connect to in your brain?
I love little observations of the everyday like this one in Nick Paumgarten’s essay on elevators:
Passengers seem to know instinctively how to arrange themselves in an elevator. Two strangers will gravitate to the back corners, a third will stand by the door, at an isosceles remove, until a fourth comes in, at which point passengers three and four will spread toward the front corners, making room, in the center, for a fifth, and so on, like the dots on a die. With each additional passenger, the bodies shift, slotting into the open spaces. The goal, of course, is to maintain (but not too conspicuously) maximum distance and to counteract unwanted intimacies—a code familiar (to half the population) from the urinal bank and (to them and all the rest) from the subway. One should face front. Look up, down, or, if you must, straight ahead. Mirrors compound the unease.
This reminds me of what is quite possibly the best poetic description of riding the elevator, part III of T.S. Eliot’s “Burnt Norton” (from Four Quartets). In particular, it’s about the long elevator ride at the tube stop at Russell Square:
Here is a place of disaffection
Time before and time after
In a dim light: neither daylight
Investing form with lucid stillness
Turning shadow into transient beauty
With slow rotation suggesting permanence
Nor darkness to purify the soul
Emptying the sensual with deprivation
Cleansing affection from the temporal.
Neither plenitude nor vacancy. Only a flicker
Over the strained time-ridden faces
Distracted from distraction by distraction
Filled with fancies and empty of meaning
Tumid apathy with no concentration
Men and bits of paper, whirled by the cold wind
That blows before and after time,
Wind in and out of unwholesome lungs
Time before and time after.
Eructation of unhealthy souls
Into the faded air, the torpid
Driven on the wind that sweeps the gloomy hills of London,
Hampstead and Clerkenwell, Campden and Putney,
Highgate, Primrose and Ludgate. Not here
Not here the darkness, in this twittering world.

Descend lower, descend only
Into the world of perpetual solitude,
World not world, but that which is not world,
Internal darkness, deprivation
And destitution of all property,
Desiccation of the world of sense,
Evacuation of the world of fancy,
Inoperancy of the world of spirit;
This is the one way, and the other
Is the same, not in movement
But abstention from movement; while the world moves
In appetency, on its metalled ways
Of time past and time future.
(Why isn’t “Not here the darkness, in this twittering world” quoted more often?)
Another great bit from Paumgarten, which relates to my earlier “potatoes, paper, petroleum” observation about the 19th century:
The elevator, underrated and overlooked, is to the city what paper is to reading and gunpowder is to war. Without the elevator, there would be no verticality, no density, and, without these, none of the urban advantages of energy efficiency, economic productivity, and cultural ferment. The population of the earth would ooze out over its surface, like an oil slick, and we would spend even more time stuck in traffic or on trains, traversing a vast carapace of concrete.
A meta/editorial/critical note: Paumgarten’s essay has a regrettable B-story about a guy who worked at a magazine and was trapped in an elevator. He dribbles it out graf by graf, to create the illusion of dramatic tension. Just speaking for myself, I didn’t care; also, it kind of bothers me that this is starting to become one of the default templates for magazine writing. Either find a reason to do it and do it well, or just… try something else.