Amy Harmon, “Autistic and Seeking a Place in an Adult World”:
Justin, who barely spoke until he was 10, falls roughly in the middle of the spectrum of social impairments that characterize autism, which affects nearly one in 100 American children. He talks to himself in public, has had occasional angry outbursts, avoids eye contact and rarely deviates from his favorite subject, animation. His unabashed expression of emotion and quirky sense of humor endear him to teachers, therapists and relatives. Yet at 20, he had never made a true friend.
There’s a tremendous gap between stories of children on the autism spectrum and stories of adults. (There’s a great joke that goes something like: Something magical happens to the autistic when we turn 21; we disappear.)
Stories of problems affecting children always draw a bigger response than those affecting adults. Remember AIDS and Ryan White? Thinking about someone as a victim and thinking about them as a problem are equally, well, problematic. But it is usually better to be a victim, and to be as pure and sympathetic a victim as possible.
There is also an imagination gap. Most readers of newspapers and consumers of serious media are typical, healthy, middle-class adults. They sympathize best with fates that are either totally fantastic or resemble their own. Most people find it easier to imagine being the parent of an autistic child. They find it harder to imagine being autistic and struggling with the problems of autistic adults themselves.
For my part, I am the former, and I find the latter extremely easy. Partly because of my son, and partly because of me.
The family had been living in Europe, where Briant had a promising career in international business and Maria Teresa, the daughter of a Brazilian diplomat, had embraced an expatriate lifestyle.
It’s hard to talk about autism without talking about class. It’s a developmental disorder that appears to fall disproportionately in successful families with histories of Aspie-like behavior. But it’s also almost impossible to tell how much this indicates a certain kind of heritability and how much class affects diagnosis.
Autistic children with rich/educated parents will often get an Asperger’s diagnosis even if they don’t fall under the traditional (and, compared to the overwhelmingly broad Autism Spectrum Disorder, fairly specific) diagnostic rubric of Asperger’s.
The CW says that if you’re going to have a diagnosis, it’s great to have Asperger’s. Bill Gates and the anthropologist on Bones might have Asperger’s. Asperger’s still gets you access to services, but doesn’t mean you’re staring down a much more crippling disorder. “Autism,” on the other hand, is still a scary word.
Meanwhile, if you’re broke or have less education, your child’s more likely to go undiagnosed or misdiagnosed, and to be treated as slow or mentally retarded. And even if you get the “right” diagnosis, the kinds of therapies offered and your ability to take advantage of them will vary wildly depending on your resources. Maybe especially time.
This is all to say that just as autism stories overwhelmingly focus on children, not adults, they also overwhelmingly focus on the wealthy, not the poor or near-poor. And the link between autism and poverty is extraordinary once a child becomes an adult — what “independence” means in that context is very different.
This is also to say that while all these additional considerations are important, fuck that shit. Because autism does cut across class and race and gender and sexual identity and physical ability, etc. And because of that, it changes what we mean by diversity, what kinds of diversity count, what diversity we ought to care about, and how we think about all of these issues of identity and privilege taken all together.
Justin’s aide braced herself when he raised his hand one day in a class that had focused for several months on Africa. The students had just finished reading a book on apartheid.
“Mr. Moore,” Justin complained, “I’m tired of learning about sad black people.”
The teacher, who was black, turned around.
“You know what, Justin?” he said. “Me too.”
Indulge me, please, for digging into some rhetorical terminology. In particular, I want to try to sort out what we mean when we call something a “straw man.”
Here’s an example. Recently, psychologist/Harvard superstar Steven Pinker wrote an NYT op-ed, “Mind Over Mass Media,” contesting the idea that new media/the internet hurts our intelligence or our attention spans, and specifically contesting trying to marshal neuroscience studies in support of these claims. Pinker writes:
Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.
Please note that nowhere does Pinker name these “critics of new media” or attribute this quote, “experience can change the brain.” But also note that everyone and their cousin immediately seemed to know that Pinker was talking about Nicholas Carr, whose new book The Shallows was just reviewed by Jonah Lehrer, also in the NYT. Lehrer’s review (which came first) is probably best characterized as a sharper version of Pinker’s op-ed:
There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.
Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.
I also really liked this wry observation that Lehrer added on at his blog, The Frontal Cortex:
Much of Carr’s argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that’s shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I’m a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)
Now, at least in my Twitter feed, the response to Pinker’s op-ed was positive, if a little backhanded. This is largely because Pinker seems to have picked this fight less to defend the value of the internet or even the concept of neuroplasticity than to throw some elbows at his favorite target, what he calls “blank slate” social theories that dispense with human nature. He wrote a contentious and much-contested book about it. He called it The Blank Slate. That’s why he works that dig in about how “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker doesn’t think we’re clay at all; instead, he thinks we come largely pre-formed.
So on Twitter we see a lot of begrudging support: “Pinker’s latest op-ed is good. He doesn’t elevate 20th C norms to faux-natural laws.” And: “I liked Pinker’s op-ed too, but ‘habits of deep reflection… must be acquired in… universities’? Debunk, meet rebunk… After that coffin nail to the neuroplasticity meme, Pinker could have argued #Glee causes autism for all I care.” And: “Surprised to see @sapinker spend so much of his op-ed attacking straw men (“critics say…”). Overall, persuasive though.”
And this is where the idea of a “straw man” comes in. See, Pinker’s got a reputation for attacking straw men, which is why The Blank Slate, which is mostly a long attack on a version of BF Skinner-style psychological behaviorism, comes off as an attack on postmodern philosophy and literary criticism and mainstream liberal politics and a whole slew of targets that get lumped together under a single umbrella, differences and complexities be damned.
(And yes, this is a straw man characterization of Pinker’s book, probably unfairly so. Also, neither everyone nor the cousins of everyone knew Pinker was talking about Carr. But we all know what we know.)
However, on Twitter, this generated an exchange between longtime Snarkmarket friend Howard Weaver and me about the idea of a straw man. I wasn’t sure whether Howard, author of that last quoted tweet, was using “straw men” just to criticize Pinker’s choice not to call out Carr by name, or whether he thought Pinker had done what Pinker often seems to do in his more popular writing, arguing against a weaker or simpler version of what the other side actually thinks. That, at least, is the stronger sense of what’s meant by a straw man. (See, even straw men can have straw men!)
So it seems like there are (at least) four different kinds of rhetorical/logical fallacies that could be called “arguing against a straw man”:
- Avoiding dealing with an actual opponent by making them anonymous/impersonal, even if you get their point-of-view largely right;
- Mischaracterizing an opponent’s argument (even or especially if you name them), usually by substituting a weaker or more easily refuted version;
- Assuming because you’ve shown this person to be at fault somewhere, that they’re wrong everywhere — “Since we now know philosopher Martin Heidegger was a Nazi, how could anyone have ever qualified him for a bank loan?”;
- Cherry-picking your opponent, finding the weakest link, then tarring all opponents with the same brush. (Warning! Cliché/mixed metaphor overload!)
Clearly, you can mix-and-match; the most detestable version of a straw man invents an anonymous opponent, gives him easily-refuted opinions nobody actually holds, and then assumes that this holds true for everybody who’d disagree with you. And the best practice would seem to be:
- Argue with the ideas of a real person (or people);
- Pick the strongest possible version of that argument;
- Characterize your opponent’s (or opponents’) beliefs honestly;
- Concede points where they are, seem to be, or just might be right.
If you can win a reader over when you’ve done all this, then you’ve really written something.
There’s even a perverse version of the straw man, which Paul Krugman calls an “anti-straw man,” but I want to call “a killer robot.” This is when you mischaracterize an opponent’s point-of-view by actually making it stronger and more sensible than what they actually believe. Krugman’s example comes from fiscal & monetary policy, in particular imagining justifications for someone’s position on the budget that turn out to contradict their stated position on interest rates. Not only is this not anyone’s actual position, it couldn’t be their position if their position were consistent at all. I agree with PK that this is a special and really interesting case.
Now, as Howard pointed out, there is another sense of “straw man,” used to mean any kind of counterargument that’s introduced by a writer with the intent of arguing against it later. You might not even straight-out refute it; it could be a trial balloon, or thought experiment, or just pitting opposites against each other as part of a range of positions. There’s nothing necessarily fallacious about it, it’s just a way of marking off an argument that you, as a writer, wouldn’t want to endorse. (Sometimes this turns into weasely writing/journalism, too, but hey, again, it doesn’t have to be.)
Teaching writing at Penn, we used a book that used the phrase “Straw Man” this way, and had a “Straw Man” exercise where you’d write a short essay that just had an introduction w/thesis, a counterargument (which we called a “straw man”), then a refutation of that counterargument. And then there was a “Straw Man Plus One” assignment, where you’d…
Never mind. The point is, we wound up talking about straw men a lot. And we’d always get confused, because sometimes “straw man” would mean the fallacy, sometimes it would mean the assignment, sometimes it would be the counterargument used in that (or any) assignment, sometimes it would be the paragraph containing the counterargument…
Oy. By 2009-10, confusion about this term had reached the point where two concessions were made. First, for the philosophers in the crowd who insisted on a strict, restrictive meaning of “straw man” as a fallacy, and who didn’t want their students using fallacious “straw men” in their “Straw Man” assignments, they changed the name of the assignment to “Iron Man.” Then, as part of a general move against using gendered language on the syllabus, it turned into “Iron Person.” Meanwhile, the textbook we used still called the assignment “Straw Man,” turning confusion abetted to confusion multiplied.
I probably confused things further by referring to the “iron person” assignment as either “the robot” — the idea being, again, that you build something that then is independent of you — or “the shill.” This was fun, because I got to talk about how con men (and women) work. The idea of the shill is that they pretend to be independent, but they’re really on the con man’s side the entire time. The best shill sells what they do, so that you can’t tell they’re in on it. They’re the ideal opponent, perfect as a picture. That got rid of any lingering confusion between the fallacy and the form.
Likewise, I believe that here and now we have sorted the range of potential meanings of “straw man,” once and for all. And if you can prove that I’m wrong, well, then I’m just not going to listen to you.
If it makes us less likely to eat or dance or drink or screw, and sometimes makes us kill ourselves, then why do people get depressed?
This radical idea — the scientists were suggesting that depressive disorder came with a net mental benefit — has a long intellectual history. Aristotle was there first, stating in the fourth century B.C. “that all men who have attained excellence in philosophy, in poetry, in art and in politics, even Socrates and Plato, had a melancholic habitus; indeed some suffered even from melancholic disease”…
But Andrews and Thomson weren’t interested in ancient aphorisms or poetic apologias. Their daunting challenge was to show how rumination might lead to improved outcomes, especially when it comes to solving life’s most difficult dilemmas. Their first speculations focused on the core features of depression, like the inability of depressed subjects to experience pleasure or their lack of interest in food, sex and social interactions. According to Andrews and Thomson, these awful symptoms came with a productive side effect, because they reduced the possibility of becoming distracted from the pressing problem.
The capacity for intense focus, they note, relies in large part on a brain area called the left ventrolateral prefrontal cortex (VLPFC), which is located a few inches behind the forehead. While this area has been associated with a wide variety of mental talents, like conceptual knowledge and verb conjugation, it seems to be especially important for maintaining attention. Experiments show that neurons in the VLPFC must fire continuously to keep us on task so that we don’t become sidetracked by irrelevant information. Furthermore, deficits in the VLPFC have been associated with attention-deficit disorder.
Several studies found an increase in brain activity (as measured indirectly by blood flow) in the VLPFC of depressed patients. Most recently, a paper to be published next month by neuroscientists in China found a spike in “functional connectivity” between the lateral prefrontal cortex and other parts of the brain in depressed patients, with more severe depressions leading to more prefrontal activity. One explanation for this finding is that the hyperactive VLPFC underlies rumination, allowing people to stay focused on their problem. (Andrews and Thomson argue that this relentless fixation also explains the cognitive deficits of depressed subjects, as they are too busy thinking about their real-life problems to bother with an artificial lab exercise; their VLPFC can’t be bothered to care.) Human attention is a scarce resource — the neural effects of depression make sure the resource is efficiently allocated.
But the reliance on the VLPFC doesn’t just lead us to fixate on our depressing situation; it also leads to an extremely analytical style of thinking. That’s because rumination is largely rooted in working memory, a kind of mental scratchpad that allows us to “work” with all the information stuck in consciousness. When people rely on working memory — and it doesn’t matter if they’re doing long division or contemplating a relationship gone wrong — they tend to think in a more deliberate fashion, breaking down their complex problems into their simpler parts.
The bad news is that this deliberate thought process is slow, tiresome and prone to distraction; the prefrontal cortex soon grows exhausted and gives out. Andrews and Thomson see depression as a way of bolstering our feeble analytical skills, making it easier to pay continuous attention to a difficult dilemma. The downcast mood and activation of the VLPFC are part of a “coordinated system” that, Andrews and Thomson say, exists “for the specific purpose of effectively analyzing the complex life problem that triggered the depression.” If depression didn’t exist — if we didn’t react to stress and trauma with endless ruminations — then we would be less likely to solve our predicaments. Wisdom isn’t cheap, and we pay for it with pain.
Today’s a day for thinking about brains, plasticity, and renewal. At least in the pages of the New York Times.
First up is Barbara Strauch, who writes on new neuroscientific research into middle-aged brains:
Over the past several years, scientists have looked deeper into how brains age and confirmed that they continue to develop through and beyond middle age.
Many long-held views, including the one that 40 percent of brain cells are lost, have been overturned. What is stuffed into your head may not have vanished but has simply been squirreled away in the folds of your neurons.
One explanation for how this occurs comes from Deborah M. Burke, a professor of psychology at Pomona College in California. Dr. Burke has done research on “tots,” those tip-of-the-tongue times when you know something but can’t quite call it to mind. Dr. Burke’s research shows that such incidents increase in part because neural connections, which receive, process and transmit information, can weaken with disuse or age.
But she also finds that if you are primed with sounds that are close to those you’re trying to remember — say someone talks about cherry pits as you try to recall Brad Pitt’s name — suddenly the lost name will pop into mind. The similarity in sounds can jump-start a limp brain connection. (It also sometimes works to silently run through the alphabet until landing on the first letter of the wayward word.)
That’s a wonderful technique, all the more so because it sounds like something Cicero might have invented.
We are born with a highly structured brain. But those brains are also transformed by our experiences, especially our early experiences. More than any other animal, we humans constantly reshape our environment. We also have an exceptionally long childhood and especially plastic young brains. Each new generation of children grows up in the new environment its parents have created, and each generation of brains becomes wired in a different way. The human mind can change radically in just a few generations.
These changes are especially vivid for 21st-century readers. At this very moment, if you are under 30, you are much more likely to be moving your eyes across a screen than a page. And you may be simultaneously clicking a hyperlink to the last “Colbert Report,” I.M.-ing with friends and Skyping with your sweetheart.
We are seeing a new generation of plastic baby brains reshaped by the new digital environment. Boomer hippies listened to Pink Floyd as they struggled to create interactive computer graphics. Their Generation Y children grew up with those graphics as second nature, as much a part of their early experience as language or print. There is every reason to think that their brains will be as strikingly different as the reading brain is from the illiterate one.
Should this inspire grief, or hope? Socrates feared that reading would undermine interactive dialogue. And, of course, he was right, reading is different from talking. The ancient media of speech and song and theater were radically reshaped by writing, though they were never entirely supplanted, a comfort perhaps to those of us who still thrill to the smell of a library.
But the dance through time between old brains and new ones, parents and children, tradition and innovation, is itself a deep part of human nature, perhaps the deepest part. It has its tragic side. Orpheus watched the beloved dead slide irretrievably into the past. We parents have to watch our children glide irretrievably into a future we can never reach ourselves. But, surely, in the end, the story of the reading, learning, hyperlinking, endlessly rewiring brain is more hopeful than sad.
Put these two together, and you get a picture that’s even more hopeful. Our brains aren’t just plastic over the span of human evolution or historical epochs, but over individual lives. It might be easier and feel more natural for children, whose brains seem to us to be nothing but plasticity. But we don’t just have a long childhood — to a certain extent, our childhood never ends.
Human beings are among the only species that evolved to thrive in virtually any kind of climate and terrain on the planet. (Seriously; underwater is the only real exception.) Compared to that, summoning the plasticity required to engage with any new kind of media is a piece of cake.
In the fall of my freshman year of college, I read an essay by Stephen Jay Gould called “The Panda’s Thumb” (drawn, I think, from a book by the same name) for an Introduction to Philosophy class.* The premise was that evolution was best revealed not in examples of perfect adaptation of a species to its environment, but in biological accidents, cobbled-together solutions. The panda’s “thumb,” for example, isn’t a finely tailored opposable digit like the human’s, but a kind of randomly mutated bone spur at the end of the wrist, held together by an overstretched tendon where a ligament should be. Evolution doesn’t produce perfect solutions — whenever possible, it uses what’s there, readapting existing features (or exaggerated versions of them) to fit new uses. To use the terminology of the late anthropologist Claude Lévi-Strauss, evolution for the most part isn’t an engineer, creating the perfect tools to fit the job, but a bricoleur, a kind of everyday handyman, perfectly willing to use a butterknife in place of a screwdriver if the butterknife is what’s on hand.
The neuroscientist Stanislas Dehaene, of the Collège de France, has been getting a lot of buzz for his new book Reading in the Brain: The Science and Evolution of a Human Invention, which argues that reading and writing evolved in much the same way, making use of existing parts of the visual cortex and rewiring them. What’s more, Dehaene claims that reading and writing’s dependence on a part of the brain that originally evolved to serve other purposes has actually helped determine how reading has emerged historically, and even the shapes of letters themselves. Writing, in other words, isn’t entirely arbitrary — it’s limited by how far our brains can bend.
The neuroscience of writing also suggests that it’s primarily a visual phenomenon, and only secondarily a linguistic one (in the sense of language = speech). But the part of the visual cortex that handles reading relays visual recognition of letters to the speech and motor and conceptual centers of the brain so quickly and efficiently that it almost doesn’t matter; reading becomes a total mental act, integrating nearly all of our mental capacities with split-second timing.
Here’s a summary offered by Susan Okie in her review of the book in the Washington Post:
“Only a stroke of good fortune allowed us to read,” Dehaene writes near the end of his tour of the reading brain. It was Homo sapiens’s luck that in our primate ancestors, a region of the brain’s paired temporal lobes evolved over a period of 10 million years to specialize in the visual identification of objects. Experiments in monkeys show that, within this area, individual nerve cells are dedicated to respond to a specific visual stimulus: a face, a chair, a vertical line. Research suggests that, in humans, a corresponding area evolved to become what Dehaene calls the “letterbox,” responsible for processing incoming written words. Located in the brain’s left hemisphere near the junction of the temporal and occipital lobes, the letterbox performs identical tasks in readers of all languages and scripts. Like a switchboard, it transmits signals to multiple regions concerned with words’ sound and meaning — for example, to areas that respond to noun categories (people, animals, vegetables), to parts of the motor cortex that respond to action verbs (“kiss,” “kick”), even to cells in the brain’s associative cortex that home in on very specific stimuli. (In one epileptic patient, for example, a nerve cell was found that fired only in response to images or the written name of actress Jennifer Aniston!)
This result astonishes me, since I was pretty sure that the one cell = one concept model of the brain — what Douglas Hofstadter calls “the grandmother neuron” theory — had been completely debunked. Apparently, though, there’s a Jennifer Aniston cell? At least for some of us? It might not be the ONLY cell that lights up — but it doesn’t light up for anything else (and appears, at least in this case, to respond to either the image OR the written name, suggesting a degree of cognitive interchangeability between the two).
These reading cells work differently for words we immediately recognize — like the name of Jennifer Aniston — and those that we don’t (again suggesting that the brain works by macros and shortcuts whenever it can). Jonah Lehrer explains:
One of the most intriguing findings of this new science of reading is that the literate brain actually has two distinct pathways for reading. One pathway is direct and efficient, and accounts for the vast majority of reading comprehension — we see a group of letters, convert those letters into a word, and then directly grasp the word’s meaning. However, there’s also a second pathway, which we use whenever we encounter a rare and obscure word that isn’t in our mental dictionary. As a result, we’re forced to decipher the sound of the word before we can make a guess about its definition, which requires a second or two of conscious effort.
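Just to make the shape of that claim concrete, here is a toy sketch of the “two pathways” idea in Python. This is my own analogy, not anything from Dehaene or Lehrer, and the tiny word list and the letter-by-letter “sounding out” are stand-ins I made up; the only point is the structure of a fast direct lookup with a slow fallback.

```python
# Toy illustration of a dual-route reader (an analogy, not Dehaene's actual model).
# Familiar words take the fast, direct route: straight from letters to meaning.
# Unfamiliar words fall back to the slow route: sound them out, then guess.

MENTAL_DICTIONARY = {
    "cat": "small domestic feline",
    "read": "take in written text",
}

def sound_out(word: str) -> str:
    """Stand-in for the slow phonological route: decode letter by letter."""
    return "-".join(word)

def read_word(word: str) -> str:
    if word in MENTAL_DICTIONARY:          # direct route: instant recognition
        return MENTAL_DICTIONARY[word]
    # indirect route: effortful decoding before any guess at meaning
    return f"sounded out as '{sound_out(word)}'"

print(read_word("cat"))             # fast path
print(read_word("sesquipedalian"))  # slow path
```

The whole sketch lives in that if/else: most words never touch the slow branch, which is why fluent reading feels instantaneous.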
Lehrer also keys in on Dehaene’s conclusions about the evolution of writing systems:
The second major mystery explored by Dehaene is how reading came to exist. It’s a mystery that’s only deepened by the recency of literacy: the first alphabets were invented less than 4,000 years ago, appearing near the Sinai Peninsula. (Egyptian hieroglyphic characters were used to represent a Semitic language.) This means that our brain wasn’t “designed” for reading; we haven’t had time to evolve a purpose-built set of circuits for letters and words. As Dehaene eloquently notes, “Our cortex did not specifically evolve for writing. Rather, writing evolved to fit the cortex.”
Dehaene goes on to provide a wealth of evidence showing this cultural evolution in action, as written language tweaked itself until it became ubiquitous. In fact, even the shape of letters — their odd graphic design — has been molded by the habits and constraints of our perceptual system. For instance, the neuroscientists Marc Changizi and Shinsuke Shimojo have demonstrated that the vast majority of characters in 115 different writing systems are composed of three distinct strokes, which likely reflect the sensory limitations of cells in the retina. (As Dehaene observes, “The world over, characters appear to have evolved an almost optimal combination that can easily be grasped by a single neuron.”) The moral is that our cultural forms reflect the biological form of the brain; the details of language are largely a biological accident.
“Writing evolved to fit the cortex.” On the one hand, it makes perfect sense that a human invention would be limited by human biology — that the visual forms of writing would be limited by our abilities to recognize patterns in the same way that the sounds of letters are limited by the shape and structure of the human mouth.
On the other, it so often seems that writing is BIGGER than we are, or at least independent — that it stands apart and outside of us, like it really was a gift from an Egyptian god — or that it’s so abstract, so removed in modern script from any kind of mimetic resemblance to the world, that it’s a purely arbitrary system, dictated by the requirements of the hand rather than the eye.
The other cool thing about Dehaene’s research? All that brain imaging and reading research and mapping of connections between different parts of the brain has helped him to figure out a neuroscientific way to begin to 1) define consciousness and 2) explain why consciousness is evolutionarily desirable. (Really.)
What I propose is that “consciousness is global information in the brain” — information which is shared across different brain areas. I am putting it very strongly, as “consciousness is”, because I literally think that’s all there is. What we mean by being conscious of a certain piece of information is that it has reached a level of processing in the brain where it can be shared… The criterion of information sharing relates to the feeling that we have that, whenever a piece of information is conscious, we can do a very broad array of things with it. It is available…
In several experiments, we have contrasted directly what you can do subliminally and what you can only do consciously. Our results suggest that one very important difference is the time duration over which you can hold on to information. If information is subliminal, it enters the system, creates a temporary activation, but quickly dies out. It does so in the space of about one second, a little bit more perhaps depending on the experiments, but it dies out very fast anyway. This finding also provides an answer for people who think that subliminal images can be used in advertising, which is of course a gigantic myth. It’s not that subliminal images don’t have any impact, but their effect, in the very vast majority of experiments, is very short-lived. When you are conscious of information, however, you can hold on to it essentially for as long as you wish. It is now in your working memory, and is now meta-stable. The claim is that conscious information is reverberating in your brain, and this reverberating state includes a self-stabilizing loop that keeps the information stable over a long duration. Think of repeating a telephone number. If you stop attending to it, you lose it. But as long as you attend to it, you can keep it in mind.
Our model proposes that this is really one of the main functions of consciousness: to provide an internal space where you can perform thought experiments, as it were, in an isolated way, detached from the external world. You can select a stimulus that comes from the outside world, and then lock it into this internal global workspace. You may stop other inputs from getting in, and play with this mental representation in your mind for as long as you wish…
In the course of evolution, sharing information across the brain was probably a major problem, because each area had a specialized goal. I think that a device such as this global workspace was needed in order to circulate information in this flexible manner. It is extremely characteristic of the human mind that whatever result we come up with, in whatever domain, we can use it in other domains. It has a lot to do, of course, with the symbolic ability of the human mind. We can apply our symbols to virtually any domain.
Consciousness, in other words, is like writing for the brain — it fixes information that would otherwise be ephemeral, and allows you to perform more complicated operations with it. (Kind of like how we need a pencil and paper to do complicated arithmetic.)
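If you want to push the pencil-and-paper analogy one step further (this is my toy example, not anything in Dehaene’s book), think about multi-digit multiplication. The only thing the scratchpad does is hold partial results steady long enough to combine them, which is the same job the global workspace is supposed to do for the brain:

```python
# A toy scratchpad for long multiplication (my analogy for the "global workspace",
# not a model from Dehaene). The list just keeps intermediate results from decaying
# so a later step can combine them.

def multiply_with_scratchpad(a: int, b: int) -> int:
    scratchpad = []                                 # the "pencil and paper"
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * (10 ** place)    # one partial product at a time
        scratchpad.append(partial)                  # write it down so it persists
    return sum(scratchpad)                          # combine the held pieces

assert multiply_with_scratchpad(47, 86) == 47 * 86  # 282 + 3760 = 4042
```

Nothing in the loop is hard on its own; what’s hard, without the scratchpad, is holding 282 and 3,760 in mind at the same time.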
Play with those analogies for a while. I’m going to start reading Dehaene’s book.
*Digression: This class was taught by a prof my friends and I nicknamed “Skeletor,” an ancient woman who couldn’t project her voice beyond the first few of the long rows of 50+ desks that passed for a seminar at Michigan State. On some days, she would wear a wrap-around microphone that inevitably dropped down her neck, becoming completely useless. She was always totally oblivious to this. We used to joke that she should wear a live snake wrapped around her neck instead — it would amplify her speech just as well, but everyone would pay rapt attention. I skipped about half of the sessions of this class, netting one of my four 3.5s as an undergrad, all of them in my freshman year. If I hadn’t taken Ethics with the great Herbert Garelick the next semester, I’d probably be a math teacher today.
P.S.: I forgot to link to this great Scientific American interview with Dehaene. Here’s a snip:
COOK: In the book, you describe a part of the brain as the “letterbox.” Can you please explain what you mean by that?
DEHAENE: This is the name I have given to a brain region that systematically responds whenever we read words. It is in the left hemisphere, on the inferior face, and belongs to the visual region that helps us recognize our environment. This particular region specializes in written characters and words. What is fascinating is that it is at the same location in all of us – whether we read Chinese, Hebrew or English, whether we’ve learned with whole-language or phonics methods, a single brain region seems to take on the function of recognizing the visual word.
COOK: But reading is a relatively recent invention, so what was the “letterbox” doing before we had written language?
DEHAENE: An excellent question – we don’t really know. The whole region in which this area is inserted is involved in invariant visual recognition – it helps us recognize objects, faces and scenes, regardless of the particular viewpoint, lighting, and other superficial variations.
We are starting to do brain-imaging experiments in illiterates, and we find that this region, before it responds to words, has a preference for pictures of objects and faces. We are also finding that this region is especially attuned to small features present in the contours of natural shapes, such as the “Y” shape in the branches of trees. My hypothesis is our letters emerged from a recycling of those shapes at the cultural level. The brain didn’t have enough time to evolve “for” reading – so writing systems evolved “for” the brain!
Phillip Greenspun argues that technology is reducing the value of older people’s wisdom.
Let’s start by considering factual knowledge. An old person will know more than a young person, but can any person, young or old, know as much as Google and Wikipedia? Why would a young person ask an elder the answer to a fact question that can be solved authoritatively in 10 seconds with a Web search?
How about skills? Want help orienting a rooftop television aerial? Changing the vacuum tubes in your TV? Dialing up AOL? Using MS-DOS? Changing the ribbon on an IBM Selectric (height of 1961 technology)? Tuning up a car that lacks electronic engine controls? Doing your taxes without considering the Alternative Minimum Tax and the tens of thousands of pages of rules that have been added since our senior citizen was starting his career? Didn’t think so.
The same technological progress that enables our society to keep an ever-larger percentage of old folks’ bodies going has simultaneously reduced the value of the minds within those bodies.
Well, fine; if you previously treated your grandparents like the contents of the vintage encyclopedias on their shelves, then you’ve got some new options. But get this: you always could have just read those encyclopedias, too.
Probably no invention diminished the knowledge-retention-value of older people so much as writing. At the same time, writing provided a way for that knowledge to survive death, to reach not only children and grandchildren but great-great-grandchildren and strangers and people in far away places. Likewise, if older folks’ wisdom can be transferred to the internet, then it will actually add value to both their wisdom and the internet. Oh, wait — it already has!
More to the point, Greenspun’s human-hard-drive concept of valuable knowledge is pretty ossified. When I see my grandmother, I don’t ask her about the names of plants or when the best time is to plant certain flowers, even though I know that she (and not I) knows this stuff cold. I don’t even (at least not always) ask her to sew my split pants seat or loose jacket button, even though she’s the one in the family who’s got the sewing machine and knows how to use it.
Instead, I talk to her about the time when she picked me up from school, and took me to Taco Bell, and the hot meat melted the cheese on the tacos, something I had never seen before, and that we both marveled at. Or I ask her about the book she’s reading, what she thinks of it, her opinions about the characters and the writing. Or I ask her about things that happened before my lifetime, about the Depression, or how she felt when she and my grandfather moved into their house in Detroit — I have the picture of her, nineteen years old with the nineteen-inch waist, doing a cartwheel on the front lawn, but it’s not enough. I listen to her describe how the city was then, and sometimes wince at the sharpness she expresses in her distaste for the city now. She tells me about how difficult it is for her to read now, how she wishes she’d kept taking the shots in her eyes for her glaucoma and macular degeneration. She tells me about my grandfather, who has been gone for fifteen years, whom I knew not nearly as well.
Not all kinds of knowledge are general-purpose, of equal factual value to everybody. Some kinds are embodied in experience, and specifically relevant only to the people who share them. As Zora Neale Hurston has Janie say in Their Eyes Were Watching God, “you’ve got to go there to know there.”
(Greenspun’s post via Lone Gunman.)
I’ve been spending a lot of time reading about autism lately, so this NYT piece on a slate of forthcoming movies featuring characters with autism or Asperger’s syndrome caught my attention.
But isn’t the great book/movie about autism really Watchmen? One character after another — savants, to be sure — driven by their obsessions, unable to make lasting emotional connections with other people, despite their best efforts to connect and identify with humanity?
I had never heard of this disorder before:
In hyperlexia, a child spontaneously and precociously masters single-word reading. It can be viewed as a superability, that is, word recognition ability far above expected levels… Hyperlexic children are often fascinated by letters and numbers. They are extremely good at decoding language and thus often become very early readers. Some hyperlexic children learn to spell long words (such as elephant) before they are two and learn to read whole sentences before they turn three. An fMRI study of a single child showed that hyperlexia may be the neurological opposite of dyslexia.
Often, hyperlexic children will have a precocious ability to read but will learn to speak only by rote and heavy repetition, and may also have difficulty learning the rules of language from examples or from trial and error, which may result in social problems… Their language may develop using echolalia, often repeating words and sentences. Often, the child has a large vocabulary and can identify many objects and pictures, but cannot put their language skills to good use. Spontaneous language is lacking and their pragmatic speech is delayed. Hyperlexic children often struggle with Who? What? Where? Why? and How? questions… Social skills often lag tremendously. Hyperlexic children often have far less interest in playing with other children than do their peers.
The thing is, this absolutely and precisely describes me in childhood, especially before the age of 5 or 6. (This is also the typical age when hyperlexic children begin to learn how to interact with others.) It also describes my son — which is how my wife found the description and forwarded it to me.
You walk around your entire life with these stories, these tics, and the entire time, your quirks are really symptoms. It’s a little strange.
Cities may be engines of innovation, but not everyone thinks they are beautiful, particularly the megalopolises of today, with their sprawling rapacious appetites. They seem like machines eating the wilderness, and many wonder if they are eating us as well. Is the recent large-scale relocation to cities a choice or a necessity? Are people pulled by the lure of opportunities, or are they pushed against their will by desperation? Why would anyone willingly choose to leave the balm of a village and squat in a smelly, leaky hut in a city slum unless they were forced to?
Well, every city begins as a slum. First it’s a seasonal camp, with the usual free-wheeling make-shift expediency. Creature comforts are scarce, squalor the norm. Hunters, scouts, traders, pioneers find a good place to stay for the night, or two, and then if their camp is a desirable spot it grows into an untidy village, or uncomfortable fort, or dismal official outpost, with permanent buildings surrounded by temporary huts. If the location of the village favors growth, concentric rings of squatters aggregate around the core until the village swells to a town. When a town prospers it acquires a center