
How a sale is made

It’s always nice when three blogs in your “must read” folder happily converge. First, Jason Kottke pulls a couple of super-tight paragraphs from a Chronicle of Higher Ed article by Clancy Martin, philosophy professor and onetime salesman of luxury jewelry, about how he plied his former trade:

The jewelry business — like many other businesses, especially those that depend on selling — lends itself to lies. It’s hard to make money selling used Rolexes as what they are, but if you clean one up and make it look new, suddenly there’s a little profit in the deal. Grading diamonds is a subjective business, and the better a diamond looks to you when you’re grading it, the more money it’s worth — as long as you can convince your customer that it’s the grade you’re selling it as. Here’s an easy, effective way to do that: First lie to yourself about what grade the diamond is; then you can sincerely tell your customer “the truth” about what it’s worth.

As I would tell my salespeople: If you want to be an expert deceiver, master the art of self-deception. People will believe you when they see that you yourself are deeply convinced. It sounds difficult to do, but in fact it’s easy — we are already experts at lying to ourselves. We believe just what we want to believe. And the customer will help in this process, because she or he wants the diamond — where else can I get such a good deal on such a high-quality stone? — to be of a certain size and quality. At the same time, he or she does not want to pay the price that the actual diamond, were it what you claimed it to be, would cost. The transaction is a collaboration of lies and self-deceptions.

This structure is so neat that it has to be generalizable, right? Look no further than politics, says Jamelle Bouie (filling in for Ta-Nehisi Coates). In “Why Is Stanley Kurtz Calling Obama a Socialist?”, he writes that whether or not calling Obama a socialist started out as a scare tactic, conservative commentators like Kurtz actually believe it now. He pulls a quote from Slacktivist’s Fred Clark on the problem of bearing false witness:

What may start out as a well-intentioned choice to “fight dirty” for a righteous cause gradually forces the bearers of false witness to behave as though their false testimony were true. This is treacherous — behaving in accord with unreality is never effective, wise or safe. Ultimately, the bearers of false witness come to believe their own lies. They come to be trapped in their own fantasy world, no longer willing or able to separate reality from unreality. Once the bearers of false witness are that far gone it may be too late to set them free from their self-constructed prisons.

What’s nice about pairing these two observations is that Martin’s take on self-deception in selling jewelry is binary, a pas de deux with two agents, both deceiving themselves and letting themselves be deceived. Bouie and Clark don’t really go there, but the implication is clear: in politics, the audience is ready to be convinced/deceived because it is already convincing/deceiving itself.

There’s no more dangerous position to be in, truth-wise, than to think you’re getting it figured out, that you see things other people don’t, that you’re getting over on someone. That’s how confidence games work, because that’s how confidence works. And almost nobody’s immune, as Jonah Lehrer points out, quoting Richard Feynman on selective reporting in science. He refers to a famous 1909 experiment which sought to measure the charge of the electron:

Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.
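
To get a feel for how gently this kind of fudging works, here is a minimal simulation in Python. The numbers are made up (they are not Millikan’s data, and Feynman describes no such program); the only point is the mechanism: if each new lab quietly re-checks any result that lands too far from the last published value, the series tends to creep toward the truth in small steps rather than jumping there.

```python
import random

# Toy model of the drift Feynman describes: each lab measures the true value
# with some scatter, but re-checks (and effectively discards) any result that
# strays too far from the previously published number.

TRUE_CHARGE = 1.602    # stand-in "true" value, arbitrary units
FIRST_REPORT = 1.560   # stand-in for an authoritative early result that came in low
NOISE = 0.03           # scatter of an individual measurement
TOLERANCE = 0.02       # results farther than this from the last report get re-checked away

def next_published_value(previous: float, rng: random.Random) -> float:
    """Keep re-measuring until a result lands close enough to the prior consensus."""
    while True:
        measurement = rng.gauss(TRUE_CHARGE, NOISE)
        if abs(measurement - previous) <= TOLERANCE:
            return measurement

rng = random.Random(0)
published = [FIRST_REPORT]
for _ in range(15):
    published.append(next_published_value(published[-1], rng))

for i, value in enumerate(published):
    print(f"report {i:2d}: {value:.3f}")
# The printed reports tend to rise a little at a time toward TRUE_CHARGE,
# because only measurements near the previous consensus ever get published.
```

Loosen TOLERANCE in this sketch and the creep disappears; the first honest measurement lands near the true value right away.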

It’s all little lies and adjustments, all the way down. Where else can I get such a good deal on such a high-quality stone?


The New Dead Media Expert at Wired

In the last year, the other two Snarkmasters switched jobs, with Robin joining Twitter and Matt moving to NPR. Well, friends, scratch off number three. Starting Wednesday, I’ll be a full-time contributor for Wired.com, writing about e-readers and emerging technology and all things awesome for Gadget Lab, plus maybe occasional pieces elsewhere in the Wired.com ecosystem. That’s right — me and Jonah Lehrer are going to get this whole fourth culture thing started.

Now, you may have heard that Wired editors Chris Anderson and Michael Wolff declared that “The Web is Dead, Long Live the Internet,” in a magazine cover story that was also featured prominently at Wired.com. Let me tell you, friends, I was delighted to hear the news. You see, writing about the web has always made me feel a little uncomfortable. Not the actual writing — just the explaining it to other people part.

You see, I worked so hard to become an expert on dead media, like the book and the newspaper and cinema and poetry, that writing about something living, even using something living, always felt like the grave robbing the cradle.

Now my portfolio is much tidier. Radio and TV hosts can introduce my credentials in one line: “Tim Carmody, renowned expert on dead media and its future.” It’s probably why they hired me in the first place.

[Actually, they advertised the job, I applied, they gave me a one-day tryout (One, Two, Three), and then gave me the nod at the end of the past week, while I was writing for Kottke. It’s been a heady month.]

Anyways, I hope you’ll stop by and bring the Snarkmatrix love to the comments over there. Tell your friends. Link to what I write, all the time, even or especially when you think I’m wrong. (I’ll be able to explain why I’m not.)

And of course, I’ll still be right here, writing about culture, really old technology, and everything else. The paisley just wouldn’t be right without the blue, orange, and green.


Machines making mistakes

Why Jonah Lehrer can’t quit his janky GPS:

The moral is that it doesn’t take much before we start attributing feelings and intentions to a machine. (Sometimes, all it takes is a voice giving us instructions in English.) We are consummate agency detectors, which is why little kids talk to stuffed animals and why I haven’t thrown my GPS unit away. Furthermore, these mistaken perceptions of agency can dramatically change our response to the machine. When we see the device as having a few human attributes, we start treating it like a human, and not like a tool. In the case of my GPS unit, this means that I tolerate failings that I normally wouldn’t. So here’s my advice for designers of mediocre gadgets: Give them voices. Give us an excuse to endow them with agency. Because once we see them as humanesque, and not just as another thing, we’re more likely to develop a fondness for their failings.

This connects loosely with the first Snarkmarket post I ever commented on, more than six (!) years ago.


Straw men, shills, and killer robots

Indulge me, please, for digging into some rhetorical terminology. In particular, I want to try to sort out what we mean when we call something a “straw man.”

Here’s an example. Recently, psychologist/Harvard superstar Steven Pinker wrote an NYT op-ed, “Mind Over Mass Media,” contesting the idea that new media/the internet hurts our intelligence or our attention spans, and specifically contesting attempts to marshal neuroscience studies in support of those claims. Pinker writes:

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Please note that nowhere does Pinker name these “critics of new media” or attribute this quote, “experience can change the brain.” But also note that everyone and their cousin immediately seemed to know that Pinker was talking about Nicholas Carr, whose new book The Shallows was just reviewed by Jonah Lehrer, also in the NYT. Lehrer’s review (which came first) is probably best characterized as a sharper version of Pinker’s op-ed:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

I also really liked this wry observation that Lehrer added on at his blog, The Frontal Cortex:

Much of Carr’s argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that’s shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I’m a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)

Now, at least in my Twitter feed, the response to Pinker’s op-ed was positive, if a little backhanded. This is largely because Pinker seems to have picked this fight less to defend the value of the internet or even the concept of neuroplasticity than to throw some elbows at his favorite target, what he calls “blank slate” social theories that dispense with human nature. He wrote a contentious and much-contested book about it. He called it The Blank Slate. That’s why he works that dig in about how “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker doesn’t think we’re clay at all; instead, we’re largely formed.

So on Twitter we see a lot of begrudging support: “Pinker’s latest op-ed is good. He doesn’t elevate 20th C norms to faux-natural laws.” And: “I liked Pinker’s op-ed too, but ‘habits of deep reflection… must be acquired in… universities’? Debunk, meet rebunk… After that coffin nail to the neuroplasticity meme, Pinker could have argued #Glee causes autism for all I care.” And: “Surprised to see @sapinker spend so much of his op-ed attacking straw men (‘critics say…’). Overall, persuasive though.”

And this is where the idea of a “straw man” comes in. See, Pinker’s got a reputation for attacking straw men, which is why The Blank Slate, which is mostly a long attack on a version of BF Skinner-style psychological behaviorism, comes off as an attack on postmodern philosophy and literary criticism and mainstream liberal politics and a whole slew of targets that get lumped together under a single umbrella, differences and complexities be damned.

(And yes, this is a straw man characterization of Pinker’s book, probably unfairly so. Also, neither everyone nor the cousins of everyone knew Pinker was talking about Carr. But we all know what we know.)

However, on Twitter, this generated an exchange between longtime Snarkmarket friend Howard Weaver and me about the idea of a straw man. I wasn’t sure whether Howard, author of that last quoted tweet, was using “straw men” just to criticize Pinker’s choice not to call out Carr by name, or whether he thought Pinker had done what Pinker often seems to do in his more popular writing, arguing against a weaker or simpler version of what the other side actually thinks. That, at least, is a critically stronger sense of what’s meant by straw men. (See, even straw men can have straw men!)

So it seems like there are (at least) four different kinds of rhetorical/logical fallacies that could be called “arguing against a straw man”:

  1. Avoiding dealing with an actual opponent by making them anonymous/impersonal, even if you get their point-of-view largely right;
  2. Mischaracterizing an opponent’s argument (even or especially if you name them), usually by substituting a weaker or more easily refuted version;
  3. Assuming because you’ve shown this person to be at fault somewhere, that they’re wrong everywhere — “Since we now know philosopher Martin Heidegger was a Nazi, how could anyone have ever qualified him for a bank loan?”;
  4. Cherry-picking your opponent, finding the weakest link, then tarring all opponents with the same brush. (Warning! Cliché/mixed metaphor overload!)

Clearly, you can mix-and-match; the most detestable version of a straw man invents an anonymous opponent, gives him easily refuted opinions nobody actually holds, and then assumes that this holds true for everybody who’d disagree with you. And the best practice would seem to be:

  1. Argue with the ideas of a real person (or people);
  2. Pick the strongest possible version of that argument;
  3. Characterize your opponent’s (or opponents’) beliefs honestly;
  4. Concede points where they are, seem to be, or just might be right.

If you can win a reader over when you’ve done all this, then you’ve really written something.

There’s even a perverse version of the straw man, which Paul Krugman calls an “anti-straw man,” but I want to call “a killer robot.” This is when you mischaracterize an opponent’s point-of-view by actually making it stronger and more sensible than what they actually believe. Krugman’s example comes from fiscal & monetary policy, in particular imagining a justification for someone’s position on the budget that turns out to contradict their stated position on interest rates. Not only isn’t this anyone’s position, it couldn’t be their position if their position were consistent at all. I agree with PK that this is a special and really interesting case.

Now, as Howard pointed out, there is another sense of “straw man,” used to mean any kind of counterargument that’s introduced by a writer with the intent of arguing against it later. You might not even straight-out refute it; it could be a trial balloon, or thought experiment, or just pitting opposites against each other as part of a range of positions. There’s nothing necessarily fallacious about it; it’s just a way of marking off an argument that you, as a writer, wouldn’t want to endorse. (Sometimes this turns into weasely writing/journalism, too, but hey, again, it doesn’t have to be.)

Teaching writing at Penn, we worked from a textbook that used the phrase “Straw Man” this way, and had a “Straw Man” exercise where you’d write a short essay that just had an introduction w/thesis, a counterargument (which we called a “straw man”), then a refutation of that counterargument. And then there was a “Straw Man Plus One” assignment, where you’d…

Never mind. The point is, we wound up talking about straw men a lot. And we’d always get confused, because sometimes “straw man” would mean the fallacy, sometimes it would mean the assignment, sometimes it would be the counterargument used in that (or any) assignment, sometimes it would be the paragraph containing the counterargument…

Oy. By 2009-10, confusion about this term had reached the point where two concessions were made. First, for the philosophers in the crowd who insisted on a strict, restrictive meaning of “straw man” as a fallacy, and who didn’t want their students using fallacious “straw men” in their “Straw Man” assignments, they changed the name of the assignment to “Iron Man.” Then, as part of a general move against using gendered language on the syllabus, it turned into “Iron Person.” Meanwhile, the textbook we used still called the assignment “Straw Man,” turning confusion abetted to confusion multiplied.

I probably confused things further by referring to the “iron person” assignment as either “the robot” — the idea being, again, that you build something that then is independent of you — or “the shill.” This was fun, because I got to talk about how con men (and women) work. The idea of the shill is that they pretend to be independent, but they’re really on the con man’s side the entire time. The best shill sells what they do, so that you can’t tell they’re in on it. They’re the ideal opponent, perfect as a picture. That got rid of any lingering confusion between the fallacy and the form.

Likewise, I believe that here and now we have sorted the range of potential meanings of “straw man,” once and for all. And if you can prove that I’m wrong, well, then I’m just not going to listen to you.


Adaptive Melancholy

If it makes us less likely to eat or dance or drink or screw, and sometimes makes us kill ourselves, then why do people get depressed?

This radical idea — the scientists were suggesting that depressive disorder came with a net mental benefit — has a long intellectual history. Aristotle was there first, stating in the fourth century B.C. “that all men who have attained excellence in philosophy, in poetry, in art and in politics, even Socrates and Plato, had a melancholic habitus; indeed some suffered even from melancholic disease”…

But Andrews and Thomson weren’t interested in ancient aphorisms or poetic apologias. Their daunting challenge was to show how rumination might lead to improved outcomes, especially when it comes to solving life’s most difficult dilemmas. Their first speculations focused on the core features of depression, like the inability of depressed subjects to experience pleasure or their lack of interest in food, sex and social interactions. According to Andrews and Thomson, these awful symptoms came with a productive side effect, because they reduced the possibility of becoming distracted from the pressing problem.

The capacity for intense focus, they note, relies in large part on a brain area called the left ventrolateral prefrontal cortex (VLPFC), which is located a few inches behind the forehead. While this area has been associated with a wide variety of mental talents, like conceptual knowledge and verb conjugation, it seems to be especially important for maintaining attention. Experiments show that neurons in the VLPFC must fire continuously to keep us on task so that we don’t become sidetracked by irrelevant information. Furthermore, deficits in the VLPFC have been associated with attention-deficit disorder.

Several studies found an increase in brain activity (as measured indirectly by blood flow) in the VLPFC of depressed patients. Most recently, a paper to be published next month by neuroscientists in China found a spike in “functional connectivity” between the lateral prefrontal cortex and other parts of the brain in depressed patients, with more severe depressions leading to more prefrontal activity. One explanation for this finding is that the hyperactive VLPFC underlies rumination, allowing people to stay focused on their problem. (Andrews and Thomson argue that this relentless fixation also explains the cognitive deficits of depressed subjects, as they are too busy thinking about their real-life problems to bother with an artificial lab exercise; their VLPFC can’t be bothered to care.) Human attention is a scarce resource — the neural effects of depression make sure the resource is efficiently allocated.

But the reliance on the VLPFC doesn’t just lead us to fixate on our depressing situation; it also leads to an extremely analytical style of thinking. That’s because rumination is largely rooted in working memory, a kind of mental scratchpad that allows us to “work” with all the information stuck in consciousness. When people rely on working memory — and it doesn’t matter if they’re doing long division or contemplating a relationship gone wrong — they tend to think in a more deliberate fashion, breaking down their complex problems into their simpler parts.

The bad news is that this deliberate thought process is slow, tiresome and prone to distraction; the prefrontal cortex soon grows exhausted and gives out. Andrews and Thomson see depression as a way of bolstering our feeble analytical skills, making it easier to pay continuous attention to a difficult dilemma. The downcast mood and activation of the VLPFC are part of a “coordinated system” that, Andrews and Thomson say, exists “for the specific purpose of effectively analyzing the complex life problem that triggered the depression.” If depression didn’t exist — if we didn’t react to stress and trauma with endless ruminations — then we would be less likely to solve our predicaments. Wisdom isn’t cheap, and we pay for it with pain.



Reading and the Panda’s Thumb

In the fall of my freshman year of college, I read an essay by Stephen Jay Gould called “The Panda’s Thumb” (drawn, I think, from a book by the same name) for an Introduction to Philosophy class.* The premise was that evolution was best revealed not in examples of perfect adaptation of a species to its environment, but in biological accidents, cobbled-together solutions. The panda’s “thumb,” for example, isn’t a finely tailored opposable digit like the human’s, but a kind of randomly mutated bone spur at the end of the wrist, held together by an overstretched tendon where a ligament should be. Evolution doesn’t produce perfect solutions – whenever possible, it uses what’s there, readapting existing features (or exaggerated versions of them) to fit new uses. To use the terminology of the late anthropologist Claude Lévi-Strauss, evolution for the most part isn’t an engineer, creating the perfect tools to fit the job, but a bricoleur, a kind of everyday handyman, perfectly willing to use a butterknife in place of a screwdriver if the butterknife is what’s on hand.

The neuroscientist Stanislas Dehaene, of the Collège de France, has been getting a lot of buzz for his new book Reading in the Brain: The Science and Evolution of a Human Invention, which argues that reading and writing evolved in much the same way, making use of existing parts of the visual cortex and rewiring them. What’s more, Dehaene claims that reading and writing’s dependence on a part of the brain that originally evolved to serve other purposes has actually helped determine how reading has emerged historically, and even the shapes of letters themselves. Writing, in other words, isn’t entirely arbitrary – it’s limited by how far our brains can bend.

The neuroscience of writing also suggests that it’s primarily a visual phenomenon, and only secondarily a linguistic one (in the sense of language = speech). But the part of the visual cortex that handles reading relays visual recognition of letters to the speech and motor and conceptual centers of the brain so quickly and efficiently that it almost doesn’t matter; reading becomes a total mental act, integrating nearly all of our mental capacities with split-second timing.

Here’s a summary offered by Susan Okie in her review of the book in the Washington Post:

“Only a stroke of good fortune allowed us to read,” Dehaene writes near the end of his tour of the reading brain. It was Homo sapiens’s luck that in our primate ancestors, a region of the brain’s paired temporal lobes evolved over a period of 10 million years to specialize in the visual identification of objects. Experiments in monkeys show that, within this area, individual nerve cells are dedicated to respond to a specific visual stimulus: a face, a chair, a vertical line. Research suggests that, in humans, a corresponding area evolved to become what Dehaene calls the “letterbox,” responsible for processing incoming written words. Located in the brain’s left hemisphere near the junction of the temporal and occipital lobes, the letterbox performs identical tasks in readers of all languages and scripts. Like a switchboard, it transmits signals to multiple regions concerned with words’ sound and meaning — for example, to areas that respond to noun categories (people, animals, vegetables), to parts of the motor cortex that respond to action verbs (“kiss,” “kick”), even to cells in the brain’s associative cortex that home in on very specific stimuli. (In one epileptic patient, for example, a nerve cell was found that fired only in response to images or the written name of actress Jennifer Aniston!)

This result astonishes me, since I was pretty sure that the one cell = one concept model of the brain — what Douglas Hofstadter calls “the grandmother neuron” theory — had been completely debunked. Apparently, though, there’s a Jennifer Aniston cell? At least for some of us? It might not be the ONLY cell that lights up – but it doesn’t light up for anything else (and appears, at least in this case, to fire for either the image OR the written name, suggesting a degree of cognitive interchangeability between the two).

These reading cells work differently for words we immediately recognize – like the name of Jennifer Aniston – and those that we don’t (again suggesting that the brain works by macros and shortcuts whenever it can). Jonah Lehrer explains:

One of the most intriguing findings of this new science of reading is that the literate brain actually has two distinct pathways for reading. One pathway is direct and efficient, and accounts for the vast majority of reading comprehension — we see a group of letters, convert those letters into a word, and then directly grasp the word’s meaning. However, there’s also a second pathway, which we use whenever we encounter a rare and obscure word that isn’t in our mental dictionary. As a result, we’re forced to decipher the sound of the word before we can make a guess about its definition, which requires a second or two of conscious effort.

Lehrer also keys in on Dehaene’s conclusions about the evolution of writing systems:

The second major mystery explored by Dehaene is how reading came to exist. It’s a mystery that’s only deepened by the recency of literacy: the first alphabets were invented less than 4,000 years ago, appearing near the Sinai Peninsula. (Egyptian hieroglyphic characters were used to represent a Semitic language.) This means that our brain wasn’t “designed” for reading; we haven’t had time to evolve a purpose-built set of circuits for letters and words. As Dehaene eloquently notes, “Our cortex did not specifically evolve for writing. Rather, writing evolved to fit the cortex.”

Dehaene goes on to provide a wealth of evidence showing this cultural evolution in action, as written language tweaked itself until it became ubiquitous. In fact, even the shape of letters — their odd graphic design — has been molded by the habits and constraints of our perceptual system. For instance, the neuroscientists Marc Changizi and Shinsuke Shimojo have demonstrated that the vast majority of characters in 115 different writing systems are composed of three distinct strokes, which likely reflect the sensory limitations of cells in the retina. (As Dehaene observes, “The world over, characters appear to have evolved an almost optimal combination that can easily be grasped by a single neuron.”) The moral is that our cultural forms reflect the biological form of the brain; the details of language are largely a biological accident.

“Writing evolved to fit the cortex.” On the one hand, it makes perfect sense that a human invention would be limited by human biology – that the visual forms of writing would be limited by our abilities to recognize patterns in the same way that the sounds of letters are limited by the shape and structure of the human mouth.

On the other, it so often seems that writing is BIGGER than we are, or at least independent – that it stands apart and outside of us, like it really was a gift from an Egyptian god – or that it’s so abstract, so removed in modern script from any kind of mimetic resemblance to the world, that it’s a purely arbitrary system, dictated by the requirements of the hand rather than the eye.

The other cool thing about Dehaene’s research? All that brain imaging and reading research and mapping of connections between different parts of the brain has helped him to figure out a neuroscientific way to begin to 1) define consciousness and 2) explain why consciousness is evolutionarily desirable. (Really.)

What I propose is that “consciousness is global information in the brain” — information which is shared across different brain areas. I am putting it very strongly, as “consciousness is”, because I literally think that’s all there is. What we mean by being conscious of a certain piece of information is that it has reached a level of processing in the brain where it can be shared… The criterion of information sharing relates to the feeling that we have that, whenever a piece of information is conscious, we can do a very broad array of things with it. It is available…

In several experiments, we have contrasted directly what you can do subliminally and what you can only do consciously. Our results suggest that one very important difference is the time duration over which you can hold on to information. If information is subliminal, it enters the system, creates a temporary activation, but quickly dies out. It does so in the space of about one second, a little bit more perhaps depending on the experiments, but it dies out very fast anyway. This finding also provides an answer for people who think that subliminal images can be used in advertising, which is of course a gigantic myth. It’s not that subliminal images don’t have any impact, but their effect, in the very vast majority of experiments, is very short-lived. When you are conscious of information, however, you can hold on to it essentially for as long as you wish. It is now in your working memory, and is now meta-stable. The claim is that conscious information is reverberating in your brain, and this reverberating state includes a self-stabilizing loop that keeps the information stable over a long duration. Think of repeating a telephone number. If you stop attending to it, you lose it. But as long as you attend to it, you can keep it in mind.

Our model proposes that this is really one of the main functions of consciousness: to provide an internal space where you can perform thought experiments, as it were, in an isolated way, detached from the external world. You can select a stimulus that comes from the outside world, and then lock it into this internal global workspace. You may stop other inputs from getting in, and play with this mental representation in your mind for as long as you wish…

In the course of evolution, sharing information across the brain was probably a major problem, because each area had a specialized goal. I think that a device such as this global workspace was needed in order to circulate information in this flexible manner. It is extremely characteristic of the human mind that whatever result we come up with, in whatever domain, we can use it in other domains. It has a lot to do, of course, with the symbolic ability of the human mind. We can apply our symbols to virtually any domain.

Consciousness, in other words, is like writing for the brain – it fixes information that would otherwise be ephemeral, and allows you to perform more complicated operations with it. (Kind of like how we need a pencil and paper to do complicated arithmetic.)

Play with those analogies for a while. I’m going to start reading Dehaene’s book.

*Digression: This class was taught by a prof my friends and I nicknamed “Skeletor,” an ancient woman who couldn’t project her voice beyond the first few of the long rows of 50+ desks that passed for a seminar at Michigan State. On some days, she would wear a wrap-around microphone that inevitably dropped down her neck, becoming completely useless. She was always totally oblivious of this. We used to joke that she should wear a live snake wrapped around her neck instead – it would amplify her speech just as well, but everyone would pay rapt attention. I skipped about half of the sessions of this class, netting one of my four 3.5s as an undergrad, all of them in my freshman year. If I hadn’t taken Ethics with the great Herbert Garelick the next semester, I’d probably be a math teacher today.

P.S.: I forgot to link to this great Scientific American interview with Dehaene. Here’s a snip:

COOK: In the book, you describe a part of the brain as the “letterbox.” Can you please explain what you mean by that?

DEHAENE: This is the name I have given to a brain region that systematically responds whenever we read words. It is in the left hemisphere, on the inferior face, and belongs to the visual region that helps us recognize our environment. This particular region specializes in written characters and words. What is fascinating is that it is at the same location in all of us – whether we read Chinese, Hebrew or English, whether we’ve learned with whole-language or phonics methods, a single brain region seems to take on the function of recognizing the visual word.

COOK: But reading is a relatively recent invention, so what was the “letterbox” doing before we had written language?

DEHAENE: An excellent question – we don’t really know. The whole region in which this area is inserted is involved in invariant visual recognition – it helps us recognize objects, faces and scenes, regardless of the particular viewpoint, lighting, and other superficial variations.

We are starting to do brain-imaging experiments in illiterates, and we find that this region, before it responds to words, has a preference for pictures of objects and faces. We are also finding that this region is especially attuned to small features present in the contours of natural shapes, such as the “Y” shape in the branches of trees. My hypothesis is our letters emerged from a recycling of those shapes at the cultural level. The brain didn’t have enough time to evolve “for” reading – so writing systems evolved “for” the brain!


In Case You Missed It

“Does the Brain Like E-Books?” sounds and reads too much like a Snarkmarket original to be ignored. I like this bit from my friend and almost-colleague (if I had locked down that UCSB job) Alan Liu:

Initially, any new information medium seems to degrade reading because it disturbs the balance between focal and peripheral attention. This was true as early as the invention of writing, which Plato complained hollowed out focal memory. Similarly, William Wordsworth’s sister complained that he wasted his mind in the newspapers of the day. It takes time and adaptation before a balance can be restored, not just in the “mentality” of the reader, as historians of the book like to say, but in the social systems that complete the reading environment.

Right now, networked digital media do a poor job of balancing focal and peripheral attention. We swing between two kinds of bad reading. We suffer tunnel vision, as when reading a single page, paragraph, or even “keyword in context” without an organized sense of the whole. Or we suffer marginal distraction, as when feeds or blogrolls in the margin (“sidebar”) of a blog let the whole blogosphere in.

And I adore this closer look at the cognitive implications of reading, as relayed by Jonah Lehrer:

I think one of the most interesting findings regarding literacy and the human cortex is the fact that there are actually two distinct pathways activated by the sight of letters. (The brain is stuffed full of redundancies.) As the lab of Stanislas Dehaene has found, when people are reading “routinized, familiar passages” a part of the brain known as the visual word form area (VWFA, or the ventral pathway) is activated. This pathway processes letters and words in parallel, allowing us to read quickly and effortlessly. It’s the pathway that literate readers almost always rely upon.

But Dehaene and colleagues have also found a second reading pathway in the brain, which is activated when we’re reading prose that is “unfamiliar”. (The scientists trigger this effect in a variety of ways, such as rotating the letters, or using a hard to read font, or filling the prose with obscure words.) As expected, when the words were more degraded or unusual, subjects took longer to comprehend them. By studying this process in an fMRI machine, Dehaene could see why: reading text that was highly degraded or presented in an unusual fashion meant that we relied on a completely different neural route, known as the dorsal reading pathway. Although scientists had previously assumed that the dorsal route ceased to be active once we learned how to read, Dehaene’s research demonstrates that even literate adults still rely, in some situations, on the same patterns of brain activity as a first-grader, carefully sounding out the syllables.

That’s right — Mallarmé’s “Un coup de dés” actually pushes through to a different part of your brain — because it taps into new graphic possibilities, as well as semantic (and syntactic) ones. And that, my friends, is poetry — i.e. “language charged with meaning to the utmost possible degree.”

Or it is, so long as we keep making it new:

The larger point is that most complaints about E-Books and Kindle apps boil down to a single problem: they don’t feel as “effortless” or “automatic” as old-fashioned books. But here’s the wonderful thing about the human brain: give it a little time and practice and it can make just about anything automatic. We excel at developing new habits. Before long, digital ink will feel just as easy as actual ink.

Or today’s graphic avant-garde will feel as easy as tomorrow’s MOR pleasures.

Think about a newspaper – so much potential for marginal distraction! All those graphic collisions of text upon itself, with pictures and advertisements and such, in tiny type and held in an unusual bodily orientation. Then they added color! In the nineteenth century, the newspaper was a sensory onslaught akin to watching the commercials surrounding Saturday morning cartoons. Now, it’s straightforward, orderly — even stately.

There’s a great, probably unintentional allegory of this transformation in Citizen Kane. It plays out as the fossilization of a marriage, and the crystallization of Kane’s political intentions – moving from anarchic gadfly to demagogic gubernatorial candidate – but it’s also about the normalization (and neutralization) of newspaper reading. It goes from marginal distraction to tunnel vision, and in just six moves.
