
Adaptive Melancholy

If it makes us less likely to eat or dance or drink or screw, and sometimes makes us kill ourselves, then why do people get depressed?

This radical idea — the scientists were suggesting that depressive disorder came with a net mental benefit — has a long intellectual history. Aristotle was there first, stating in the fourth century B.C. “that all men who have attained excellence in philosophy, in poetry, in art and in politics, even Socrates and Plato, had a melancholic habitus; indeed some suffered even from melancholic disease”…

But Andrews and Thomson weren’t interested in ancient aphorisms or poetic apologias. Their daunting challenge was to show how rumination might lead to improved outcomes, especially when it comes to solving life’s most difficult dilemmas. Their first speculations focused on the core features of depression, like the inability of depressed subjects to experience pleasure or their lack of interest in food, sex and social interactions. According to Andrews and Thomson, these awful symptoms came with a productive side effect, because they reduced the possibility of becoming distracted from the pressing problem.

The capacity for intense focus, they note, relies in large part on a brain area called the left ventrolateral prefrontal cortex (VLPFC), which is located a few inches behind the forehead. While this area has been associated with a wide variety of mental talents, like conceptual knowledge and verb conjugation, it seems to be especially important for maintaining attention. Experiments show that neurons in the VLPFC must fire continuously to keep us on task so that we don’t become sidetracked by irrelevant information. Furthermore, deficits in the VLPFC have been associated with attention-deficit disorder.

Several studies found an increase in brain activity (as measured indirectly by blood flow) in the VLPFC of depressed patients. Most recently, a paper to be published next month by neuroscientists in China found a spike in “functional connectivity” between the lateral prefrontal cortex and other parts of the brain in depressed patients, with more severe depressions leading to more prefrontal activity. One explanation for this finding is that the hyperactive VLPFC underlies rumination, allowing people to stay focused on their problem. (Andrews and Thomson argue that this relentless fixation also explains the cognitive deficits of depressed subjects, as they are too busy thinking about their real-life problems to bother with an artificial lab exercise; their VLPFC can’t be bothered to care.) Human attention is a scarce resource — the neural effects of depression make sure the resource is efficiently allocated.

But the reliance on the VLPFC doesn’t just lead us to fixate on our depressing situation; it also leads to an extremely analytical style of thinking. That’s because rumination is largely rooted in working memory, a kind of mental scratchpad that allows us to “work” with all the information stuck in consciousness. When people rely on working memory — and it doesn’t matter if they’re doing long division or contemplating a relationship gone wrong — they tend to think in a more deliberate fashion, breaking down their complex problems into their simpler parts.

The bad news is that this deliberate thought process is slow, tiresome and prone to distraction; the prefrontal cortex soon grows exhausted and gives out. Andrews and Thomson see depression as a way of bolstering our feeble analytical skills, making it easier to pay continuous attention to a difficult dilemma. The downcast mood and activation of the VLPFC are part of a “coordinated system” that, Andrews and Thomson say, exists “for the specific purpose of effectively analyzing the complex life problem that triggered the depression.” If depression didn’t exist — if we didn’t react to stress and trauma with endless ruminations — then we would be less likely to solve our predicaments. Wisdom isn’t cheap, and we pay for it with pain.



Unconsciously Screamin'

One of my favorite moments in Annabel Scheme is the party thrown by a mysterious musician known as “The Beekeeper”:

If you had electronic eyes and night vision—I had both—you would have seen slips of paper passing from person to person. On each slip was a phone number. Each one was different, and there were a dozen circulating in the crowd. Each wandered and blinked like a firefly as kids used their phones, torch-like, to illuminate the number, then passed it on. Here and there, then everywhere, they were dialing numbers, switching their phones to speaker-mode and pushing them up into the air like trophies.

The buzzing was coming from the phones. It was a low, rhythmic drone. At first you couldn’t hear much, but apparently, if you put enough phones on speaker all at once, it starts to get loud.

Really loud.

So that was the trick: There were no speakers because the crowd was the speaker. The bees did not sound so far-off now.

Scheme clenched her teeth. “This is hurting my face.”

Suddenly it stopped. The graveyard fell silent. It was a field of pale arms thrust to the sky, swaying like seaweed. Kids were bouncing silently on the balls of their feet. Waiting.

Then there was a count-off, a tat tat tat tat and then the music started and it was everywhere, megawatts of power flowing out of every palm and pocket. There was no focal point, so bodies were pointed in every direction, ricocheting and chain-reacting. Kids were losing it, jumping up and down, colliding and cuddling in the dark grass.

The music had a clear beat, but it was warped and scratchy, like someone was tuning a giant radio. Snatches of singing would ring out for a moment, then decohere. There was a trumpet that pealed from somewhere very far away…

The music was coming together as kids followed their ears. If your phone was buzzing with bass, you joined the bunched-up sub-woofer section. If it was sending high notes sizzling into the air, you joined the line that snaked around the crowd’s perimeter. The music worked its pattern on the crowd. It was both amazingly high-tech and totally pagan.

The first question I had after reading this was — I wonder if Robin knows about Zaireeka, the Parking Lot Experiments, or the other stuff that The Flaming Lips tried in the late 1990s?

I still don’t know. But I was reminded of that perplexity today reading this interview with Pitchfork’s Mark Richardson that’s all about the amazingly high-tech and totally pagan crap that the Lips tried before exploding with 1999’s The Soft Bulletin. Complete with YouTube videos, several of which were new to me.

If you were taken with either (Scheme or the Lips), try both.


The Rumplestiltskin Effect

Ask MetaFilter commenters agree: Knowing someone’s name because his boss forces him to wear a name tag does not constitute permission to use that name. Having settled on the consensus that addressing name-tagged employees by their first names is creepy, the commenters attempt to tease out why. Judging by the comments, your age might influence how much you agree with their consensus.


Good-bye to all that

I read Marc Bousquet’s recent post on the academic labor market with chagrin and recognition:

Today, [only] 1/4 of faculty are tenured or in the tenure stream. Less if you address pervasive undercounting of nontenurable faculty, teaching by staff employees and graduate students. The trend line points steeply down.

All of the under- or un- employed scientists with doctorates could be employed overnight if more science, and more science education, was done by persons holding the PhD. Instead, we do science and science education with persons who are studying for the PhD, or who gave up on studying for the PhD simply because they can work cheaper than persons who actually hold the doctorate.

If the percentage of faculty working in the tenure stream were anywhere near what it was at the high point of US scientific and technical dominance, we’d actually have a vast, sucking undersupply of persons with the PhD. Hell, just one large state system could absorb most of the so-called surplus doctorates in a few years–and as I’ve already noted, taking students out of the workforce and working toward full employment for faculty would be an actual stimulus plan.

But what do we do to try to fix the system? Michael Drout maps some of the options (all bad):

This situation cannot be fixed as long as there exists the mismatch of the number of people who want to be professors with the number of paid positions to be a professor.

There is no solution that can solve this problem, just as there is no solution to solve the ‘problem’ of the number of people who want to be famous authors, movie actors, rock stars or professional athletes being far greater than the number of job openings for authors, actors, rock stars and athletes.

Making it easier to get tenure once hired does not solve the problem, it only pushes the decision back from the tenure process (where the candidate is known and has a six-year track record) to the hiring process (where the candidate is less known and has only a grad school record).

The desire to make it easier to get tenure once someone is hired may seem kind to the particular person (whom you know as an individual), but it is unfair to the many, many other people who would like that job, who may be more qualified, but who haven’t had a chance, possibly because they were passed over in the hiring, possibly because they entered the job market a few years later, etc. So by reducing the requirements for tenure–whatever they are–you are doing an injustice to all of these people.

Reducing the number of Ph.D.s awarded, a proposal mooted frequently (usually by people who already have Ph.D.s; people applying to grad school who want to get Ph.D.s. are usually less keen on the idea) does not solve the problem, it only pushes the decision process back from the hiring process to the graduate school entrance process, where the candidate has even less of a track record.

I began graduate school in 2001, during a global recession, and finished in 2009, in the middle of another one. I dangled on the job market twice (pre- and post-diss completion), with no luck. There are clearly greater pressures than ever on undergraduates to complete their education, and to pay more money to do it, but that has never translated (and it appears never will) into increased demand for non-casual faculty. I’m thirty years old — a husband and father. I barely survived a terrible accident this year. I can’t wait any more. It’s time to walk away.


Mechanical Turk (wins every time)

I’m quite taken with George H. Williams’s ProfHacker write-up of his experience using Amazon’s Mechanical Turk service to transcribe some audio, all the more so since he followed FOS (Friend of the Snark) Andy Baio’s methodology. I don’t have any audio to transcribe, but if I did, I’d definitely give this a whirl.


From stealing to sampling

This might bear watching:

T. S. Eliot once said that “good writers borrow, great writers steal.” Apparently taking this advice to heart, Helene Hegemann, a seventeen-year-old German writer, has “mixed” (her word) together a best-selling novel titled “Axolotl Roadkill.” According to an article in the Times, Hegemann lifted entire pages from a novel by a lesser known writer, and she doesn’t seem at all apologetic about doing so. “There’s no such thing as originality anyway, just authenticity,” said Hegemann in response to accusations of plagiarism. The judges of the Leipzig Book Fair seem to agree with her, at least in principle: even after the author admitted to copying another writer’s work, “Axolotl Roadkill” remains a finalist for the Fair’s $20,000 prize in fiction.

The Leipzig committee’s decision not to strike the book from their finalist’s list, effectively endorsing, or at least approving, Hegemann’s actions, is either an alarming or a progressive response. The cultural-relativist argument is that Germany, specifically Berlin, is a hotbed of artistic mixing and mashing, sampling and re-sampling, and that Hegemann is simply employing these same tactics in her writing. If a d.j. can thread together twenty different songs and package the end product as her own, why can’t a writer? This seems to be the question Hegemann is using as a defense. Original content, then, becomes subordinate to context, meaning that as long as a newer, larger work is being created, portions of prior works are fair game.

First, just to be clear — are we using periods and lower-case for “d.j.” now? What’s wrong with DJ? Goes well with MC, doesn’t it? Or is it “m.c.” or “emcee”?

It probably doesn’t matter, because we don’t need the disc jockey remix paradigm to try to understand what might be called “synthetic literature.” Lee Ellis looks back at T.S. Eliot, but in a skewed way:

Perhaps looking at the meaning behind T. S. Eliot’s quote can help clear up this situation. I interpret “steal” to mean, in this context, the act of taking from other texts themes, ideas, rhythms, structures, but not the sentences themselves.

No. I mean, Ellis can interpret Eliot’s sentence this way, divorced from his practice, but it doesn’t change that Eliot the artist stole. Not just the themes, ideas, rhythms, or structures, but the exact sentences.

In fact, you could say that by stealing the sentences, he emptied them of themes or ideas; a line lifted from Baudelaire (about a “hypocrite lecteur,” no less — “a hypocrite reader, my double, my brother”) and repurposed for “The Waste Land” comes to mean, by way of that refraction, something quite different. Like Borges’s “Pierre Menard, autor del Quijote,” a word-for-word recreation of a text arguably becomes a profound transformation of that text.

On the other hand, you can’t just translate someone’s ideas, themes, or structures into superficially new sentences and act like everything is cool. If I rip off your movie idea — plot, themes, characters — but switch some of the words around, I’ve done something much more dangerous than quote a line from your screenplay (especially if it’s relatively well-known).

You could contest Old Possum’s claim that this theft was a sign of “maturity,” but you can’t just act like he isn’t doing it. Nor was his theft all that novel — the pastiche has been a literary game for a long time, and it was particularly popular in the early 20th century, from Pound to Proust.

But calling something unoriginal isn’t identical with calling it plagiarism. Without being entirely arbitrary, let me posit a few things:

  1. Plagiarism really only makes sense in a scholarly or journalistic context. It’s a mistake in the handling of sources; it can be malicious or nonmalicious, and it can ruin a work entirely or be relatively incidental to it;
  2. In art or fiction, assuming that there is a distinction, you need a completely different set of criteria. Hegemann’s shift to “authenticity” is probably not so far off;
  3. Ultimately, judging acts of egregious theft in fiction is going to have to be like judging pornography — you know it when you see it.

So what do we have left? If we’re starting off with the assumption that artistic creation is and should always be ex nihilo — sadly, not much. Maybe instead we need to distinguish between works that are synthetic and analytic — works that combine something to produce something new, versus works that only contain what they borrow (and in some cases, contain LESS than what’s borrowed).


Slipping back into things

Dear Snarkmatrix,

Apologies for my recent silence. I have spent so much time at the hospital in the past few months that I should have a card to punch that gets me free sandwiches or something. But I wanted to stop and thank you for your notes and inquiries, let you know that I am working my way to being all the way back, at least on the internet… and am FASCINATED by the recent flurry of posts here about the early twentieth century, why so much of that time feels like now, not just an unchanging now, but an unchangeable now, yet still feels old and distant and foundational (or counter-foundational). Anyways, as always, you’re giving me things to think about.

(If you’re patient with me, I might even be able to write something about it)


The Internet of ghosts

When an incoming freshman at Harvard enters her room in the Yard for the first time, she’s greeted with a little scrap of history meant to kindle her awe at her place in the college’s legacy. On her bed will sit an envelope containing a list of names and years of graduation of all the people who have ever inhabited her room.

It’s a little like that scene in Dead Poets Society where Robin Williams creeps around among a group of his students murmuring, “Caaaaarpe,” while they stare at a photograph of their forebears. But it produces its intended effect. Those students who will room with the ghosts of JFK and Oliver Wendell Holmes will mention this fact in conversation for the rest of their lives. And even the lists without famous names will convey a powerful message: It wasn’t so long ago that these ancients, who graduated before you were ever born, were in this very room, feeling these same feelings you are now.

I thought of this as I was thinking of another milestone that shaped my freshman year: my introduction to Napster. Although I was as awed as anyone else by the fact of being able to download any song, instantly, for free, it wasn’t long before another element of the service made it a killer app.
Read more…


A new class of content for a new class of device

Let’s do this.

I want to talk about the iPad, but I’m going to start by talking about vlogs.

You know: videoblogs!

Rewind to 2005. Maybe your 2005 was different from mine, but I was working at an internet-centric cable TV network, and the world seemed to be saying one thing really loud: The revolution is here. We’ve got cheap cameras and cheap distribution. The era of the indie “web show” has arrived. Let a thousand videoblogs bloom!

Then they didn’t. Not really. Today the gear is even cheaper—HD Flipcams for like twelve bucks, right?—but we’ve got basically three web shows: Rocketboom, Epic Fu, and The Guild. (That’s cruel shorthand; if you are currently producing and/or starring in some other web show, I’m sorry. My argument demands ruthlessness.)

What happened?

Well, the web happened. YouTube happened. It turns out we weren’t wrong about the tools; we were wrong about the forms. We didn’t get a crisp catalog of indie web shows; we got a sprawling database of disconnected video clips.

Today on the web, on YouTube, a show just sort of dissolves into that database. To avoid that fate, it needs to be buoyed by big media; it needs to surf on the scarcity of TV time. A show needs a marketing budget to insist on its coherence. (Also, Hulu.)

None of this is a bad thing! I love the web-as-database; I love the wacky YouTube ecosystem. It’s like we grew a rainforest overnight.

But the point is, the web kinda hates bounded, holistic work. The web likes bits and pieces, cross-references and recommendations, fragments and tabs. Oh, and the web loves the fact that you’re reading this post in Google Reader.

Hold that thought.

Back in the day, when I was first getting to know my iPhone, I was surprised at how truly un-web-like it was. On the iPhone, you do one thing at a time and that one thing takes up the whole screen. Like nothing on the web, the iPhone is full-bleed.

You know what my favorite iPhone apps are? No joke: it’s stuff like this. Nobody’s made the multimedia manga or living-text novel of my dreams, so I’ve settled for The Wheels on the Bus. But it turns out that some of the stuff they’re doing with these kids’ apps—the way they’re mashing media and interactions together—is really slick.

And now this new device takes the iPhone’s virtues and scales them up—plus, no text messages while you’re reading. So more than anything else, the iPad looks to me like a focus machine. And it looks, therefore, like such an opportunity for storytelling, and for innovation around storytelling. It looks like an opportunity to make the Myst of 2010. (I don’t mean that literally. I only mean: wow, remember Myst? Remember how it was an utterly new kind of thing?)

Apple is great at inventing new devices, but it bums me out that they seem so content to fill those devices with the same old stuff: TV shows, movies, music, and books. Books… in ePub format?

Apple: you did not invent a magical and revolutionary device so we could read books in ePub format.

Think about what the iPad really is! It’s the greatest canvas for media ever invented. It’s colorful, tactile, powerful, and programmable. It can display literally anything you can imagine; it can add sound and music; and it can feel you touching it. It’s light and (we are led to believe) comfortable in the hands. The Platonic Form of the Perfect Canvas is out there somewhere—it’s probably flexible… and it probably has a camera—but the iPad is, like, a really amazingly good shadow of that form. And this is just the first one!

So, we’re gonna use the Perfect Canvas to… watch TV shows?

Seriously: ePub?

Now, connect the dots. For all its power and flexibility, the web is really bad at presenting bounded, holistic work in a focused, immersive way. This is why web shows never worked. The web is bad at containers. The web is bad at frames.

Jeez, if only we had a frame.


So, to finish up: I think the young Hayao Miyazakis and Mark Z. Danielewskis and Edward Goreys of this world ought to be learning Objective-C—or at least making some new friends. Because this new device gives us the power and flexibility to realize a whole new class of crazy vision—and it puts that vision in a frame.

In five years, the coolest stuff on the iPad shouldn’t be Spider-Man 5, Ke$ha’s third album, or the ePub version of Annabel Scheme. If that’s all we’ve got, it will mean that Apple succeeded at inventing a new class of device… but we failed at inventing a new class of content.

In five years, the coolest stuff on the iPad should be… jeez, you know, I think it should be art.


The future of designed content

I’ve been sniffling in bed watching anime all day and now it’s time to write a post about the future of designed content on the web.

A couple of assumptions going in:

  • The era of random content shrapnel has gone on long enough. We can do better.
  • We’ve suddenly got a pretty bad-ass toolkit! Standards like HTML5 and CSS3; extensions like Typekit and jQuery; browsers like Firefox, Chrome and Safari. (And as an add-on to that last one: the sophistication and homogeneity of Safari on the iPhone and, one presumes, the Imminent Apple Product.)
  • We’ve got some starting points, both real and speculative. People are thinking about this stuff. Gannett huddled with IDEO for a whole year and the big idea they emerged with was… designed content.

At the Hacks and Hackers meetup here in SF a few weeks ago, we kept using the words “artisanal” and “bespoke” to talk about designed content. I like these words a lot, but I’m also wary of them:

  • I like them because they imply a real care for craft, and they imply that form matches function. They also imply, you know, skill: smart people doing their best work.
  • I’m wary of them because they can serve as an excuse: “Oh, yeah, we only post one new story every two months because… it’s artisanal.” Designed content shouldn’t try to compete head-on with Demand Media for page-views and placement in Google results, but it can’t ignore the reality of the web, either. It can’t be all stock and no flow.

So what I’m anxious to see is a synthesis that matches bespoke design to web scale. But what would that look like?

The crew that comes closest right now is the NYT graphics and multimedia team: they work fast, their work is beautiful, and it’s often quite story-specific. But it’s also more “web interactive” than truly “designed content,” and there’s only so much they can do with NYT-style stories. Those are both pretty subtle distinctions; you’ll see what I mean in a moment.

Here’s my pitch for who could hit this synthesis, if they wanted to:

Gawker Media.

Here’s why:

  • They’re web-native. They know headlines; they know linkbait; they know SEO. They have trained with the Dark Lords of the Sith. This is the right foundation.
  • They’ve got voice. You could flip a switch to turn Gawker blogs into magazines, and they would make perfect sense. That’s not true for any other blog network, and it’s a real achievement. At the moment, those voices are transmitted through text and the occasional spectacle—but voice can drive design, too.
  • They’ve got scale. Gawker Media isn’t three guys in a garage scrambling to keep the feed flowing. They’ve got corporate infrastructure, and they could plausibly invest in what I’m about to suggest.

Here’s the plan:

You build a small Gawker Media design desk. It’s just a handful of young, hungry, multi-talented web designers—designers who dig editorial, not user experience or information architecture. Then, every day—maybe once in the morning and once in the afternoon—each blog gets to pitch a handful of ideas to the design desk. There’s a fast, ruthless triage, and they go to work. The goal is to make stuff fast—on the scale of hours, sometimes days. Never weeks.

The idea is not to make interactive apps and draggy-zoomy data viz! That stuff is too complicated. Rather, the design desk’s mandate is simply to present words and images in a way that makes you go: Uh. Wow. Just the way this does, or this does. (Actually, yeah, jeez: Hire Jason Santa Maria to set this up why don’t you?)

And Gawker content is a great match for this—almost perfect, actually—precisely because it’s not NYT content. It’s not, you know, Very Useful Information. It’s punchy, sassy, funny and snarky. It’s chunky, and it should stay chunky. This isn’t about expanding blog posts into magazine article wannabes; it’s about presenting 200-800 words of pure bloggy voice in an original, uh-wow way every time. Actually, no, not every time: instead, only when it really counts. The Gawker Media design desk would develop a sharp, subtle sense for design opportunity.

(It would have been pretty bad-ass to like, design this post in exactly the way I’m proposing, huh? Ohhh well.)

But let me expand on that a little bit more, because it’s important. The idea is not to wrap meaty, thoughtful posts like this io9 insta-classic in fancy design. Those are the posts that need it least! It’s like, “yo, get out of my way, let me read.” Rather, the idea is to come up with a new class of content entirely. Again: design opportunity.

Now, it’s not immediately obvious what this new class of content gets you (besides, you know, approving links from Snarkmarket) because… Google doesn’t index design! I mean, stop and think about that for a minute: Google doesn’t index design. Even though it has informational content of its own, and even though it contributes to clarity and utility: Google doesn’t index design. It doesn’t know how. When I search for “how to tie my shoes,” Demand Media’s semi-literate blob of instructions is probably going to show up above your lovingly designed diagram. Ugh.

But Gawker Media is already past this. They’re not just playing the Google game anymore; they’re playing the uh-wow game. And that is what this class of content gets you. It gets you more uh-wows and more daily impact. It gets you content that screams to be shared. (Not unimportantly, it probably gets you some interesting advertising opportunities, too.)

Okay—the point of this articulation is not to convince Gawker Media to hire a bunch of designers. Rather, it’s to get you to imagine what blogs like those would look like if they bothered with bespoke design every day. I think it’s a super-interesting vision.

And it would be even more interesting if RSS aggregators could preserve that design and display it inline. No more random content shrapnel! Instead, Google Reader starts to look like some crazy scrapbook, with pages pulled from hundreds of different magazines and pasted together into a seamless scroll.

Okay, until Gawker gets wise, go read Pictory. And let me know if this makes any sense. Can you imagine the designed content at Lifehacker and io9 the way I can? Crisp, coherent chunks of rich imagery and clever typography—like rocks in the stream?

Semi-related: trying to understand how people navigate rich, designed content… with graphs!
