

February 23, 2009


The Logic of Oscar Predictions

Nate Silver, the web’s Statistician Laureate*, created a statistical model to predict the winners of the six major Oscar categories. He got four out of six right, missing Penélope Cruz for Best Supporting Actress and Sean Penn for Best Actor. In his postmortem, Silver notes that Kate Winslet’s flitting in and out of the category threw off his model, but also offers a broader defense of his approach:

Ultimately, this is not about humans versus computers. The computer I used to forecast the Oscars didn’t think for itself — it merely followed a set of instructions that I provided to it. Rather, it is a question of heuristics: when and whether subjective (but flexible) judgments, such as those a film critic might make, are better than objective (but inflexible) rulesets.

The advantage in making a subjective judgment is that you may be able to account for information that is hard to quantify — for example, Rourke’s behavioral problems or the politics of Sean Penn playing a gay icon in a year where Hollywood felt very guilty about the passage of Proposition 8. The disadvantage is that human beings have all sorts of cognitive biases, and it’s easy to allow these biases to color one’s thinking. I would guess, for instance, that most critics would have trouble decoupling the question of who they thought should win the Oscars — those performances they liked the best personally — from who they thought actually would win them.

On the lizard-brain side, I really enjoyed Dana Stevens's critique of Silver's approach in Slate (her predictions, incidentally, got both Penn and Cruz right):

Numerous attempts to read through Nate Silver’s highly technical crunching of the Oscar numbers kept stalling out at this sentence: “Formally speaking, this required the use of statistical software and a process called logistic regression.” The Academy’s voting practices don’t involve “logistic regression”; they involve actual regression, the acting out of primitive, unmappable affects like grief, pity, fear, and desire. Not to give Heath Ledger the best supporting actor trophy this year would feel like a desecration of his memory (a sentiment with which I agree, by the way; despite the many other fine nominees in this category, it’s gotta be Heath). And Penélope will win for Vicky Cristina Barcelona because, like you, every red-blooded viewer of that movie, male or female, wants to lurk in her, um, bushes. (All scenes in VCB with Cruz and Javier Bardem speaking Spanish=¡Arriba!; all scenes with Scarlett Johansson and Rebecca Hall speaking English=Zzzzz.)

But I think I love the way Silver shifts the frame of Oscar predictions even more. He’s right that there are two different games here — playing along with the Academy voters, by offering your own choices as to which nominees OUGHT to win, and predicting the behavior of the Academy voters. You’ll see commentators switch back and forth between these two modes pretty seamlessly — “I think X is the best movie this year, but Y is a lock to win.”

For a movie critic (professional or amateur), distinguishing aesthetically between a set of performances rests on a lot of implicit, intuitive judgments, but Oscar-forecasting is even murkier, based on quasi-empathetic rules of thumb — so-and-so is "due," film X has "momentum," or "rules" about Oscar voters' love of mental illness or films about Nazis. (By the way, I'm writing a screenplay about gay and mentally ill victims of the Holocaust. I know they've already made Bent, but the time was wrong for Nazis + gay — now it's a lock.) This stuff, more than anything, cries out for explicit analysis, to see if these hunches or rules of thumb actually have any explanatory power.
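What would that explicit analysis look like? A minimal sketch, in the spirit of the logistic regression Silver mentions, might go like this. (Everything here is invented for illustration: the features, the outcomes, and the idea that these three rules of thumb are the right ones to test.)

```python
import math

# Hypothetical past nominees, encoded as three rule-of-thumb features:
# (won_guild_award, has_momentum, plays_mental_illness) -> won_oscar.
# All values are made up for illustration.
nominees = [
    ((1, 1, 1), 1),
    ((1, 1, 0), 1),
    ((1, 0, 1), 1),
    ((0, 1, 0), 0),
    ((0, 0, 1), 0),
    ((0, 0, 0), 0),
]

def sigmoid(z):
    """Squash a linear score into a win probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    weights = [0.0] * len(data[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in data:
            pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
            error = pred - label
            weights = [w - lr * error * x for w, x in zip(weights, features)]
            bias -= lr * error
    return weights, bias

weights, bias = train(nominees)

# Estimated win probability for a guild-winning, momentum-backed nominee:
p = sigmoid(sum(w * x for w, x in zip(weights, (1, 1, 0))) + bias)
print(round(p, 2))
```

The fitted weights are the whole point: a rule of thumb with real explanatory power gets a large weight, and one that's pure critic folklore gets a weight near zero.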

In short, Silver’s model is NOT a model of the deliberative (or pseudo-deliberative) process of the individual Academy voter. It’s a second-order problem, not a first-order one. I think that’s a really powerful insight — particularly insofar as it DOESN’T posit that the Academy voters’ behavior is logical or deliberative, merely that it falls on aggregate into certain predictable patterns.

The next question is whether this second-order (and third-order) discourse feeds back into first-order thinking. (Psst = not all the voters watch all of the movies — and people engage in “strategic voting.”) I’d be really interested to know whether Silver factors in an aggregate of critics’ predictions — you know, a poll of subjective polls — as an index (and perhaps a partial cause) of voters’ behavior… or if the NON-correlation of these two falls into any predictable patterns, and so on…
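The crudest version of that poll of subjective polls is just a vote share: treat each critic's pick as a ballot and read the shares as rough probabilities. (The critic picks below are invented for illustration; only the nominee names are real.)

```python
from collections import Counter

# Hypothetical critics' Best Actor picks -- which critic picked whom
# is made up; only the nominee names come from the 2009 field.
critic_picks = [
    "Sean Penn", "Mickey Rourke", "Sean Penn",
    "Sean Penn", "Mickey Rourke",
]

# Each pick is one vote; the vote share is a crude probability estimate.
tally = Counter(critic_picks)
total = sum(tally.values())
consensus = {name: votes / total for name, votes in tally.items()}

favorite = max(consensus, key=consensus.get)
print(favorite, round(consensus[favorite], 2))
```

The interesting move would then be to feed this consensus number into the model as one more feature, and see whether it predicts the voters — or whether the gap between critics and voters has its own pattern.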

* NB: In a recent post, my favorite medievalist blogger Eileen Joy has a great aside:

It occurs to me, too, that medieval studies would be well served to have a position similar to the one the biologist Richard Dawkins has occupied since 1995 at Oxford, the Charles Simonyi Chair of Public Understanding of Science. I think a really progressive university should create a Chair of Public Understanding of Medieval Studies and then appoint someone to that Chair who would devote their career to the advocacy of the relevance of medieval studies to contemporary issues and problems—cultural, social, political, and otherwise. [Well, a girl can dream, can’t she?]

Chair of Public Understanding of _________! What a wonderful, marvelously generalizable idea.

Posted February 23, 2009 at 6:51
File under: Movies, Science