Straw men, shills, and killer robots

Indulge me, please, for digging into some rhetorical terminology. In particular, I want to try to sort out what we mean when we call something a “straw man.”

Here’s an example. Recently, psychologist/Harvard superstar Steven Pinker wrote an NYT op-ed, “Mind Over Mass Media,” contesting the idea that new media/the internet hurts our intelligence or our attention spans, and specifically contesting attempts to marshal neuroscience studies in support of those claims. Pinker writes:

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Please note that nowhere does Pinker name these “critics of new media” or attribute this quote, “experience can change the brain.” But also note that everyone and their cousin immediately seemed to know that Pinker was talking about Nicholas Carr, whose new book The Shallows was just reviewed by Jonah Lehrer, also in the NYT. Lehrer’s review (which came first) is probably best characterized as a sharper version of Pinker’s op-ed:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

I also really liked this wry observation that Lehrer added on at his blog, The Frontal Cortex:

Much of Carr’s argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that’s shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I’m a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)

Now, at least in my Twitter feed, the response to Pinker’s op-ed was positive, if a little backhanded. This is largely because Pinker seems to have picked this fight less to defend the value of the internet or even the concept of neuroplasticity than to throw some elbows at his favorite target, what he calls “blank slate” social theories that dispense with human nature. He wrote a contentious and much-contested book about it. He called it The Blank Slate. That’s why he works that dig in about how “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker doesn’t think we’re clay at all; instead, we’re largely formed.

So on Twitter we see a lot of begrudging support: “Pinker’s latest op-ed is good. He doesn’t elevate 20th C norms to faux-natural laws.” And: “I liked Pinker’s op-ed too, but ‘habits of deep reflection… must be acquired in… universities’? Debunk, meet rebunk… After that coffin nail to the neuroplasticity meme, Pinker could have argued #Glee causes autism for all I care.” And: “Surprised to see @sapinker spend so much of his op-ed attacking straw men (‘critics say…’). Overall, persuasive though.”

And this is where the idea of a “straw man” comes in. See, Pinker’s got a reputation for attacking straw men, which is why The Blank Slate, which is mostly a long attack on a version of BF Skinner-style psychological behaviorism, comes off as an attack on postmodern philosophy and literary criticism and mainstream liberal politics and a whole slew of targets that get lumped together under a single umbrella, differences and complexities be damned.

(And yes, this is a straw man characterization of Pinker’s book, probably unfairly so. Also, neither everyone nor the cousins of everyone knew Pinker was talking about Carr. But we all know what we know.)

However, on Twitter, this generated an exchange between longtime Snarkmarket friend Howard Weaver and me about the idea of a straw man. I wasn’t sure whether Howard, author of that last quoted tweet, was using “straw men” just to criticize Pinker’s choice not to call out Carr by name, or whether he thought Pinker had done what Pinker often seems to do in his more popular writing, arguing against a weaker or simpler version of what the other side actually thinks. That, at least, is the stronger, more critical sense of what’s meant by a straw man. (See, even straw men can have straw men!)

So it seems like there are (at least) four different kinds of rhetorical/logical fallacies that could be called “arguing against a straw man”:

  1. Avoiding dealing with an actual opponent by making them anonymous/impersonal, even if you get their point-of-view largely right;
  2. Mischaracterizing an opponent’s argument (even or especially if you name them), usually by substituting a weaker or more easily refuted version;
  3. Assuming because you’ve shown this person to be at fault somewhere, that they’re wrong everywhere — “Since we now know philosopher Martin Heidegger was a Nazi, how could anyone have ever qualified him for a bank loan?”;
  4. Cherry-picking your opponent, finding the weakest link, then tarring all opponents with the same brush. (Warning! Cliché/mixed metaphor overload!)

Clearly, you can mix-and-match; the most detestable version of a straw man invents an anonymous opponent, gives him easily-refuted opinions nobody actually holds, and then assumes that this holds true for everybody who’d disagree with you. And the best practice would seem to be:

  1. Argue with the ideas of a real person (or people);
  2. Pick the strongest possible version of that argument;
  3. Characterize your opponent’s (or opponents’) beliefs honestly;
  4. Concede points where they are, seem to be, or just might be right.

If you can win a reader over when you’ve done all this, then you’ve really written something.

There’s even a perverse version of the straw man, which Paul Krugman calls an “anti-straw man,” but I want to call “a killer robot.” This is when you mischaracterize an opponent’s point-of-view by making it stronger and more sensible than what they actually believe. Krugman’s example comes from fiscal & monetary policy, in particular imagining justifications for someone’s position on the budget that turn out to contradict their stated position on interest rates. Not only isn’t this anyone’s position, it couldn’t be their position if their position were consistent at all. I agree with PK that this is a special and really interesting case.

Now, as Howard pointed out, there is another sense of “straw man,” used to mean any kind of counterargument that’s introduced by a writer with the intent of arguing against it later. You might not even straight-out refute it; it could be a trial balloon, or thought experiment, or just pitting opposites against each other as part of a range of positions. There’s nothing necessarily fallacious about it, it’s just a way of marking off an argument that you, as a writer, wouldn’t want to endorse. (Sometimes this turns into weasely writing/journalism, too, but hey, again, it doesn’t have to be.)

Teaching writing at Penn, we used a book that used the phrase “Straw Man” this way, and had a “Straw Man” exercise where you’d write a short essay that just had an introduction w/thesis, a counterargument (which we called a “straw man”), then a refutation of that counterargument. And then there was a “Straw Man Plus One” assignment, where you’d…

Never mind. The point is, we wound up talking about straw men a lot. And we’d always get confused, because sometimes “straw man” would mean the fallacy, sometimes it would mean the assignment, sometimes it would be the counterargument used in that (or any) assignment, sometimes it would be the paragraph containing the counterargument…

Oy. By 2009-10, confusion about this term had reached the point where two concessions were made. First, for the philosophers in the crowd who insisted on a strict, restrictive meaning of “straw man” as a fallacy, and who didn’t want their students using fallacious “straw men” in their “Straw Man” assignments, they changed the name of the assignment to “Iron Man.” Then, as part of a general move against using gendered language on the syllabus, it turned into “Iron Person.” Meanwhile, the textbook we used still called the assignment “Straw Man,” turning confusion abetted to confusion multiplied.

I probably confused things further by referring to the “iron person” assignment as either “the robot” — the idea being, again, that you build something that then is independent of you — or “the shill.” This was fun, because I got to talk about how con men (and women) work. The idea of the shill is that they pretend to be independent, but they’re really on the con man’s side the entire time. The best shill sells what they do, so that you can’t tell they’re in on it. They’re the ideal opponent, perfect as a picture. That got rid of any lingering confusion between the fallacy and the form.

Likewise, I believe that here and now we have sorted the range of potential meanings of “straw man,” once and for all. And if you can prove that I’m wrong, well, then I’m just not going to listen to you.

Waiting for Superman

Times like this truly do make me wish superheroes were real.

There’s an affecting moment in J Michael Straczynski’s recent run on the comic Thor. The Norse god of thunder’s been dead for three years, but has come back to life, as only gods and comic book superheroes can.

One of the first places he goes is New Orleans. Thor was dead when Hurricane Katrina hit a year earlier, and he knows he could have stopped the hurricanes, the floods, or otherwise saved the city and its people. But he wonders where the rest of the superheroes were: “Why were not force fields erected? Why were tides not evaporated by heat and blast? Why were buildings not supported by strength of arms and steel?”

Just then, Iron Man shows up, to tell Thor that all superheroes need to register with the federal government to prevent superpower-caused disasters. Instead of preventing Katrina or repairing New Orleans, Iron Man and his fellow superheroes have been fighting each other over this registration requirement, part of what Marvel Comics called Civil War.

There’s some meaning to be drawn from this, that I can’t fully articulate. Something about thinking too small, thinking about short-term hurdles and squabbles rather than the big picture; a blindness to the fact of habitual human suffering that would be willful if it weren’t also somehow sickeningly necessary.

I’m not sure. But I think I know why I’ve been reading more comic books lately.

We like our cities logical

I like old Law & Order episodes — there’s a reason why I put the show smack in the middle of my Showroulette pitch — but wasn’t heartbroken when I heard that the flagship series was cancelled. (The quirkier, more salacious spinoffs, like “Law & Order: Freaky Sex Crimes Unit,” remain.) The show had been losing its edge for a while, in writing, acting, and even casting. I mean, how are you going to cast the judge from The Wire as … a judge on Law & Order? That’s just lazy. At least the guys from The Sopranos didn’t always play mobsters.

A couple of things I’ve seen lately, though, in the wake of the show’s cancellation, suggest that Law & Order wasn’t quite as sharp because the city itself had lost its edge — in a good way, at least for New York (if not procedural dramas). This New York Times article notes how the show helped improve New York’s image among tourists and parvenus (“This Crime Spree Made New York Feel Safe”):

In 1990, when the show made its debut, 2,245 people were murdered in New York (a high-water mark), and several of those victims became emblematic of the haphazard, senseless violence that gripped the city…

[But] as [the detectives] pulled on the threads of the case, a pattern and motive always emerged. Unlike in the real New York, there is almost no pure street crime in “Law & Order.” In a show obsessed with the city’s class structure, you were far more likely to be murdered by your financial adviser than by a drug dealer. Crime has no single cause, the show seemed to argue, but crimes do, and they can be solved one at a time…

Mr. Wolf portrayed a city in which there were no senseless crimes, only crimes that hadn’t yet been made sense of. He took the conventions of the English country murder mystery and tucked them inside the ungovernable city. In so doing, for a national audience, he de-randomized New York violence.

The plunging murder rate has to help too — just 466 homicides in all of New York City in 2009, an all-time low. For a city of almost 9 million people, it’s pretty impressive that fewer people were killed in New York last year than follow me on Twitter. Let’s put it this way — Philadelphia and Baltimore, which also had record-low homicide numbers, together easily beat New York even though the two cities combined have fewer people than Brooklyn alone. New York went from one of the most dangerous cities in the country to one of the safest.

The Wire’s David Simon, though, argues that the rising wealth and lowered danger of New York skew the city’s sense of what’s happening in American cities nationwide — and because New York dominates America’s media imagination, that has a disproportionate effect on how we understand what’s happening elsewhere. (Make sure you watch this video to the end, where he gives Law & Order a pop):

Some of this is familiar anti-NYC stuff, particularly from people who 1) live/grew up elsewhere and 2) work in/adjacent to media and publishing. But Simon’s bigger point, that the “urban experience” in America has become much more heterogeneous both within and between cities, is 1) true, 2) consequential, and 3) really worth paying closer attention to.

The trouble with digital culture

One of the problems with studying any medium is that it’s too easy to mistake the part for the whole. Literature professors can confidently chart the development of the novel over centuries by referencing only a tiny, well-regarded sliver of all novels published, some immensely popular and others forgotten. When you turn to the broader field of print culture, books themselves jostle against newspapers, advertisements, letters and memos, government and business forms, postcards, sheet music, reproduced images, money, business cards and nameplates, and thousands of other forms that have little if anything to do with the codex book. We tend towards influential, fractional exemplars, partly out of necessity (raised to the level of institutions) and partly out of habit (raised to the level of traditions). But trouble inevitably arises when we forget that the underexamined whole exists, or pretend that it doesn’t matter. It always does. If nothing else, the parts that we cut out for special scrutiny draw their significance in no small part from how they relate to the other, subterranean possibilities.

The culture of digital technology, like that of print, is impressively broad, thoroughly differentiated, and ubiquitously integrated into most of our working and non-working lives. This makes it difficult for media scholars and historians to study, just as it makes it difficult (but inevitable) for scholars to recognize how this technology has changed, is changing, and should continue to change the academy. Self-professed digital humanists — and I consider myself one — generally look at digital culture, then identify themselves with, and model their practices on, only a sliver of the whole.

Digital culture far exceeds the world wide web, social networks, e-books, image archives, games, e-mail, and programming codes. It exceeds anything we see on our laptops, phones, or television screens. It even exceeds the programmers, hackers, pirates, clerics, artists, electricians, and engineers who put that code into practice, and the protocols, consoles, and infrastructure that govern and enable their use.

This is important, because digital humanists’ efforts to “hack the academy” most often turn out NOT to be about replacing an established analog set of practices and institutions with new digital tools and ideas. Instead, it’s a battle within digital culture itself: the self-styled “punk” culture of hackers, pirates, coders, and bloggers against the office suite, the management database, the IT purchaser. Twitter vs. Raiser’s Edge. These are also reductions, but potentially instructive ones.

For my own part, I tend to see digital humanism less as a matter of individual or group identity, or the application of digital tools to materials and scholarship in the humanities, than as something that is happening, continuing to emerge, develop, and differentiate itself, both inside and outside of the academy, as part of the spread of information and the continual redefinition of our assumptions about how we encounter media, technological, and other objects in the world. In this, every aspect of digital technology, whether old or new, establishment or counter-establishment, plays a part.

I’m writing this as part of the Center for History and New Media’s “Hacking the Academy” project, filed under the hashtag #criticism. Check out the other submissions here.

Courage, the invisible, and the law

Ta-Nehisi Coates has been my favorite writer to read on this Rand Paul mess. (Short version – Ron Paul’s son won the Republican primary for a Senate seat in Kentucky, after which his candidacy kind of fell apart amid some really clumsy and embarrassing interviews where he tried to say that he was against the Civil Rights Act banning segregation, but that he wasn’t a racist, would have marched with MLK, and thinks a free society means people/businesses are free to do despicable things.) Here are the key bullet points:

  • Why can the media only focus on how bad this looks for Paul politically, rather than try to engage with his opinion as a serious position? “[W]hile I expect politicians and their handlers to think in terms of messaging, I also expect–perhaps foolishly–for media to be in the business of pushing past that messaging to actual ideas. What we get instead is a faux-objectivity, that avoids the substance of issues and instead focuses on how that substance is pitched. In that sense, much like the relationship between entertainment and many entertainment journalists, it’s really hard to see media as more than quasi-independent extension of campaign apparatus.”
  • Why can’t Paul and his conservative/libertarian supporters actually engage with this stuff more seriously? “What I’m driving at is raising the question about methods is never wrong, to the contrary it’s essential. That process is undermined by people who raise those questions, without having thought about them, without being able to speak to their nuances, and are mostly concerned with tribal signaling. People were dragged from their homes, raped and murdered over civil rights. Talk about it, by all means. But talk about it with the intellectual seriousness it deserves. This is not a third grade science fair project.”
  • This post, “Towards an abstract courage,” is my favorite, because it addresses the idea that certainly, Paul and every other decent person would have been allies with King and other desegregationists to bring segregated businesses down, without the federal government stepping in. “Now, after the police dogs, night-sticks and fire-hoses have been beaten back, Rand Paul wants to reopen the question, while, to be sure, claiming that he would have had the ‘courage to march with Martin Luther King.’ This is a common strain of courage. It chiefly shines through in men born 50 years too late. Presently among the crowd, they are distinguished at that decisive moment when queried about wars they won’t have to fight, in times they will never live. These men populate our history books. They are all on the wrong side.”
  • To that end, “Towards a manifested courage” tells the story of Joan Trumpauer, one of the white freedom riders arrested in Jackson, MS for integrating a lunch counter.

Coates links to Charles Lane in the Washington Post, who writes:

Suppose an African American customer sits down at a “whites only” restaurant and asks for dinner. The owner tells him to leave. The customer refuses and stays put. What are the owner’s options at that point? He can forcibly remove the customer himself, but, as Paul concedes, that could expose the restaurateur to criminal or civil liability. So he’ll have to call the cops. When they arrive, he’ll have to explain his whites-only policy and ask them to remove the unwanted black man because he’s violating it. But they can only do that on the basis of some law, presumably trespassing. In other words, the business owner’s discriminatory edict is meaningless unless some public authority enforces it.

Conversely, it is precisely because of this nexus between private discrimination and public enforcement that the larger community, through the political and judicial process, acquires a valid interest in legislating against discrimination. The public is entitled to say whether their tax money should pay for arresting black trespassers on whites-only property.

This, for me, is a huge point, since it establishes that segregation and desegregation aren’t in substance purely a matter of freedom of association or the content of characters/hearts, but a matter of recognition under the law. What we see are the people, those angry faces — but what makes up the invisible infrastructure for all of that anger is the law.

To see how important — and how slippery — this point can be, read this NYT editorial excoriating Paul, then Chris Bray at History News Network, who justly slams the NYT:

[T]he American history of racial oppression and brutality is a history of government. The founding document of the republic privileged slavery as a lawful institution, and government served that institution for another seventy-eight years after that. The Emancipation Proclamation didn’t free all American slaves; it freed slaves in states engaged in rebellion…

After the abandonment of Reconstruction, “redeemed” southern governments rebuilt structures of oppression through law and the institutions of government. Jim Crow laws were laws; the regime of racial segregation was not simply a set of social choices. That guy standing in the schoolhouse door? He was a governor. Why is that so hard to figure out?

I think it’s because we’ve seen the pictures of the dogs and the firehoses and the angry men and women behind them, and we’ve assumed that that’s what discrimination looks like, to the point that we can’t understand anyone or anything as racist unless it looks like that.

But I don’t think that’s it at all. It’s a secret history of the invisible that we’re tracing. And the thing about being invisible is that it’s pretty easy to be everywhere.

The He-Man generation

Henry Jenkins riffs on He-Man and other 80s-era action figures, offering a reading that starts out as largely charitable but ends up somewhere that’s actually quite beautiful:

When I speak to the 20 and 30 somethings who are leading the charge for transmedia storytelling, many of them have stories of childhood spent immersed in Dungeons and Dragons or Star Wars, playing with action figures or other franchise related toys, and my own suspicion has always been that such experiences shaped how they thought about stories.

From the beginning, they understood stories less in terms of plots than in terms of clusters of characters and in terms of world building. From the beginning they thought of stories as extending from the screen across platforms and into the physical realm. From the beginning they thought of stories as resources out of which they could create their own fantasies, as something which shifted into the hands of the audience once they had been produced and in turn as something which was expanded and remixed on the grassroots level.

The impetus for Jenkins’s generational meditation (besides an impending deadline for a keynote) is this io9 piece on “The 10 Most Unfortunate Masters of the Universe Toys,” which 1) I linked to a ways back on Twitter, and 2) is hilarious. Sample:

Stinkor was an evil skunk. How do we know he was evil? He has the suffix “-or” appended to his name. If his name was just “Stink,” he’d be kicking back in Castle Greyskull, pounding Schlitz with Man-At-Arms and scheduling baccarat night with Man-E-Faces.

A great disaster

The photos at The Big Picture are always stunning, but these pictures of Mount St. Helens are, I think, especially so. Sometimes it feels like we’re living in an age of one ecological disaster after another, and then it’s always instructive to remember the sheer, uncanny, unearthly power of these things. (It helps to have a great Mirah song for your soundtrack.)

Showroulette

IDEA

  • Instead of endlessly moaning about the supposed lack of serendipity on the internet, why can’t we try new ways to automate it?

STORIES

  • When I started college, you could watch Simpsons reruns for 90 minutes straight. The dorms picked up two different Fox channels that syndicated the show; one played it at 6 and 7, the other at 6 and 6:30. So if you were watching at 6, you could also pick which episode you wanted to watch. Sometimes, if both weren’t that interesting, or if you’d seen them recently, we’d just cut out for dinner and pick up the later episode. Usually, that wasn’t a problem; the Simpsons had been on for nine seasons, and nearly every episode was a classic.
  • If The Simpsons didn’t work, you could watch Law & Order on A&E. Or Bravo. Or TNT. Or Lifetime. I may be misremembering all of the channels the show was on at once, but there’s a reason why people (like, say, writers on The Simpsons) would joke about watching 14 straight hours of Law & Order on basic cable. The show was on a lot. And again, it hadn’t been on for twenty years with multiple spinoffs yet. Not every episode was great, but every episode was classic Law & Order, usually better casual dramatic entertainment than 90% of what was on then, let alone now.
  • If neither The Simpsons nor Law & Order were available, you could always watch The Shawshank Redemption. ALWAYS. I’ve seen this movie at least fifty times; I’ve probably seen it from-the-beginning, not-edited-for-TV once or twice.
  • Have you ever noticed what PBS does during pledge season, at least every other year? They play Ken Burns’s The Civil War. Or some other crazy-ass, awesome, twenty-year-old documentary or costume drama series. And I watch it. Randomly, in pieces, over and over again.

IDEAS

  • When people talk about serendipity, they’re not always talking about discovering something that’s totally brand-new. In fact, I’d hazard that they’re USUALLY talking about randomly unearthing something that’s comforting and familiar.
  • This is ten times more true with television.
  • But it’s true in other media, too. People like being able to browse through their own physical book and music collections, because you never know what might suddenly force itself upon you. The real anti-serendipitous edge to social networks like Facebook isn’t that they don’t introduce us to anyone new; it’s that they eliminate the unexpected meeting-up with a friend or former classmate. You don’t get to catch up because you’ve never fully lost touch.

PITCH

  • You actually can’t watch really old episodes of The Simpsons or Law & Order online. They have the new shows on Hulu and NBC.com and whatnot, but the syndication is a completely different deal. This saddens me.
  • Watching a syndicated Simpsons or Law & Order rerun isn’t actually random. It’s chance, which is different. Why not make it actually random?
  • This is Showroulette. You pick a show — let’s say that every show’s gotta have enough episodes to be in syndication, and only the backlist shows are available. Save the new ones for your running-show website — and you get a random episode.
  • This is the genius part, at least for me. Say you don’t like the episode you got. (I mean, sometimes Law & Order kinda stunk.) You can change it out for a different show, also picked at random. But every time you switch, you’ve got to watch an ad.
  • There are ads for the act breaks, too. Here, though, you can switch to a different episode without starting over – kinda like flipping the channel. (There’s a rough sketch of this switching logic just below.)
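
To make those switching rules concrete, here’s a minimal, hypothetical sketch in Python. The catalog, the episode IDs, and the play_ad() stub are placeholders I made up, not a real service or API; the point is just the two rules above: a re-roll always costs an ad, and an act-break flip drops you into the middle of another episode instead of starting you over.

```python
import random

# Hypothetical sketch of the Showroulette rules described in the pitch above.
# The catalog, episode IDs, and play_ad() are placeholders, not a real service.
CATALOG = {
    "Law & Order": [f"S{s:02d}E{e:02d}" for s in range(1, 11) for e in range(1, 24)],
    "The Simpsons": [f"S{s:02d}E{e:02d}" for s in range(1, 10) for e in range(1, 26)],
}

def play_ad():
    print("[ad break]")

class Showroulette:
    def __init__(self, show):
        self.show = show                          # backlist shows only, per the pitch
        self.episode = random.choice(CATALOG[show])
        self.position = 0                         # seconds into the current episode

    def switch(self):
        """Don't like the episode you drew? Re-roll, but watch an ad first."""
        play_ad()
        self.episode = random.choice(CATALOG[self.show])
        self.position = 0

    def act_break(self, flip=False):
        """Act-break ad; flipping here keeps your place, like changing the channel."""
        play_ad()
        if flip:
            self.episode = random.choice(CATALOG[self.show])
            # self.position is kept, so you don't start the new episode from zero

if __name__ == "__main__":
    session = Showroulette("Law & Order")
    print("Watching", session.show, session.episode)
    session.switch()                              # costs an ad, yields a new random episode
    print("Now watching", session.episode)
```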

Come on! Tell me you wouldn’t try this! Tell me that 10% of you wouldn’t become obsessed with it.

Tell me there’s a better way to sell ads for older shows in syndication. Tell me there’s a better way to make a little more money off of long-running TV series without cannibalizing DVD sales. Tell me why this wouldn’t actually be better for most casual TV-watching (i.e., 90% of TV-watching) than any other online TV.

Tell me it wouldn’t be better to spin to a random episode of Soap or Hill Street Blues or Star Trek or The Bernie Mac Show than some random dude or chick or cat who might not even want to chat with you.

But mostly I want you to tell me ways to make this idea better. Or bigger. Or, just, more.

Parasitic debts

File under “weird congruences”: the market for English professors collapsed with the investment banks. Two secondary markets staffed by Ivy League liberal arts high-achievers whose accomplishments looked great on paper but who didn’t necessarily know how to make anything. (NB: I don’t necessarily believe this, but let’s entertain it as an idea.)

Here’s Caleb Crain:

Every historical period has its predominant economic metaphors, and they seep into its culture. Not long ago, I had coffee with an undergraduate who reported that he had just read Derrida and Lacan on Poe and was excited by the idea that criticism might be the new literature. Twenty years ago, when I read Derrida and Lacan on Poe, my professors teased me with the same exciting possibility. It occurs to me now that the idea is about as old as, and has certain structural parallels to, the notion that finance is the new manufacturing. Like criticism over literature, finance traditionally supervised manufacturing yet was thought to be parasitic upon it and less “creative” than it. And then at some moment, often specified on Michael Lewis’s authority as the 1980s, finance began to have the reputation of requiring more intellectual acumen than manufacturing and to attract the brighter and more modish talents. Similarly (though hard numbers are very hard to come by), academic criticism started to pay better than the creation of literature—certainly it offered more stability and social prestige. For a young American to ignore the economic signaling and go into manufacturing or literature rather than finance or criticism, he would have to be either idealist or dunderheaded.

And here’s Ezra Klein, interviewing a friend from Harvard:

What did you study at Harvard?

I focused on history and government and political philosophy.

And why did Goldman Sachs think that would be good training for investment banking?

Why Goldman thought I’d be good for investment banking is a very fair question. There are a lot of Harvard people at Goldman and they’ve put a lot of effort into recruiting from the school. They really try to attract liberal arts backgrounds. They say this stuff isn’t so complicated, that you’ll pick it up as you go along, that it’s all about teamwork, that they have training programs. That being said, it would be very hard to get a full-time job there without a previous summer internship.

How did you end up going to Goldman, though? Presumably, as a social sciences major, you hadn’t meant to head into the financial sector.

Investment banking was never something I thought I wanted to do. But the recruiting culture at Harvard is extremely powerful. In the midst of anxiety and trying to find a job at the end of college, the recruiters are really in your face, and they make it very easy. One thing is the internship program. It’s your junior year, it’s January or February, and you interview for internships. If all goes well, it’s sort of a summer-long interview. And if that goes well, you have an offer by September of your senior year, and that’s very appealing. It makes your senior year more relaxed, you can focus on your thesis, you can drink more. You just don’t have to worry about getting a job.

And separate from that, I think it’s about squelching anxiety in general. It checks the job box. And it’s a low-risk opportunity. It’s a two-year program with a great salary and the promise to get these skills that should be able to transfer to a variety of other areas. The idea is that once you pass the test at Goldman, you can do anything. You learn Excel, you learn valuation, you learn how to survive intense hours and a high-pressure environment. So it seems like a good way to launch your career. That’s very appealing for those of us at Harvard who were not in pre-professional majors.

It all torques the whole what-are-you-going-to-do-with-your-degree question in a new, more sinister direction. Let’s say you’re an Ivy League English major. Ten-to-twenty years ago, you would have gone to work in academia, publishing, or I-banking. (Maybe, maybe, the nonprofit sector — as Klein’s interviewee points out, Teach for America recruiters play on Ivy Leaguers’ anxieties in much the same way the Goldman recruiters did.)

Crain adds a weird allegory about islanders trading shells, which is too complex to summarize here, but comes off weirdly like a story about student-loan debt. Or maybe that’s just me.

From space to time

Here’s more material on rethinking reading and attention.

James Bridle looks at Allen Lane’s 20th-century innovations with Penguin paperbacks and intuits a new axiom:

The book — by which I mean long-form text, in any format — is not a physical thing, but a temporal one.

Its primary definition, its signal quality, is the time we take to read it, and the time before it and the time after it that are also intrinsic parts of the experience: the reading of reviews and the discussions with our friends, the paths that lead us to it and away from it (to other books) and around it.

Publishers know very little about the habits and practices of their readers, and they impinge on this time very little, leaving much of the work to the retailers and distributors.

Amazon and Apple understand experience design, and they know more about our customers than we do; readers’ experience with our product is mediated and controlled by forces beyond ours.

Okay — this is a place to start. But there’s one problematic conclusion that Bridle pretty quickly draws from this. I wouldn’t toss it out, but I’d want to heavily qualify it. It’s the transformation from time as a condition of experience to something that determines value.

For example: Bridle says that readers don’t value what publishers do because all of the time involved in editing, formatting, marketing, etc., is invisible to the reader when they encounter the final product. Maybe. But making that time/labor visible CAN’T just mean brusquely insisting that publishers really are important and that they really do do valuable work. It needs to mean something like finding new ways for readers to engage with that work, and making that time meaningful as THEIR time.

In short, it means that writers and producers of reading material probably ought to consider taking themselves a little less seriously and readers and reading a little more seriously. Let’s actually BUILD that body of knowledge about readers and their practices — let’s even start by looking at TIME as a key determinant, especially as we move from print to digital reading — and try to offer a better, more tailored yet more variable range of experiences accordingly.

In that spirit, Alain Pierrot starts by thinking about this problem of how much of our time we give different texts, and offers a concrete idea for gathering and incorporating that data. (He’s building off an Information Architects post about an iPad project that incorporates Average Reading Time, or ART, into its interface! Brilliant!)

Can I read the next chapter of this essay, study or novel before I’m called to board the plane, before my train comes to the station, or should I pick a shorter magazine article or a short story from Ether Books, etc.?

On a more professional field, can I spare the time to read the full version of the report, or should I restrain to the executive summary, plus the most relevant divisions of the report before the meeting?

Or in academic situations, what amount of reading time should I plan to spend on the textbook, on the recommended readings and extra relevant titles before I sit term/final examinations?…

Wouldn’t it be a good idea to leverage all the occasions where digital texts are chunked in relevant spans to store their ART into metadata, made available to apps that would sort timewise what I’m proposed to read? Social media and relevant storage solutions might host measured ARTs at convenience.

XML structured editing affords many solutions for identifying the relevant sections of texts, and storing their length, timewise. I would love to see the feature embedded into a next version of ePub, or at least recommended as best practice.

Would that make sense for Google Books, Amazon, iBooks, publishers, librarians?
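
Pierrot’s proposal is concrete enough to sketch. Here’s a minimal, hypothetical version in Python: the 225-words-per-minute figure, the chunk format, and the metadata keys are my own assumptions, not anything from ePub, the IA post, or any retailer, but they show how an ART could be computed per chunk, stored as metadata, and sorted timewise by a reading app.

```python
# A hypothetical sketch of the idea above: compute an Average Reading Time (ART)
# for each chunk of text and store it as metadata a reading app could sort by.
# The words-per-minute figure, chunk format, and key names are assumptions,
# not any existing ePub or retailer standard.

WORDS_PER_MINUTE = 225  # assumed average silent-reading speed

def art_minutes(text, wpm=WORDS_PER_MINUTE):
    """Estimate reading time, in minutes, for one chunk (chapter, article, section)."""
    return len(text.split()) / wpm

def tag_chunks(chunks):
    """Attach ART metadata to each {title: body} chunk and sort shortest-first."""
    tagged = [
        {"title": title, "art_minutes": round(art_minutes(body), 1)}
        for title, body in chunks.items()
    ]
    return sorted(tagged, key=lambda c: c["art_minutes"])

# "What can I finish before boarding?" -- pick from the shortest chunks first.
chapters = {"Chapter 1": "word " * 5400, "Executive summary": "word " * 900}
for chunk in tag_chunks(chapters):
    print(f'{chunk["title"]}: about {chunk["art_minutes"]} minutes')
```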

And this definitely dovetails with Amazon offering readers its most-highlighted passages. What do people pay attention to? And how long do they pay attention to it?

Reading Bridle — which is very smart, but seems to fall back on an assumption that publishers already know everything they need to know, they just aren’t doing what they need to do — and then reading the IA post — which is quite deliberately playing around with a bunch of different ideas, treating the digital text as a wide-open idea — even though they’re both about trying to pull off this very difficult move from space to time — illustrates how much is changing right now.

If I had to guess, I’d say, bet on the software guys to figure this out first. Even if publishers and booksellers have a better brick-and-mortar position, software is just plain faster. From space, to time.
