New York Times
Today’s a day for thinking about brains, plasticity, and renewal. At least in the pages of the New York Times.
First up is Barbara Strauch, who writes on new neuroscientific research into middle-aged brains:
Over the past several years, scientists have looked deeper into how brains age and confirmed that they continue to develop through and beyond middle age.
Many long-held views, including the one that 40 percent of brain cells are lost, have been overturned. What is stuffed into your head may not have vanished but has simply been squirreled away in the folds of your neurons.
One explanation for how this occurs comes from Deborah M. Burke, a professor of psychology at Pomona College in California. Dr. Burke has done research on “tots,” those tip-of-the-tongue times when you know something but can’t quite call it to mind. Dr. Burke’s research shows that such incidents increase in part because neural connections, which receive, process and transmit information, can weaken with disuse or age.
But she also finds that if you are primed with sounds that are close to those you’re trying to remember — say someone talks about cherry pits as you try to recall Brad Pitt’s name — suddenly the lost name will pop into mind. The similarity in sounds can jump-start a limp brain connection. (It also sometimes works to silently run through the alphabet until landing on the first letter of the wayward word.)
That’s a wonderful technique, all the more so because it sounds like something Cicero might have invented.
We are born with highly structured brains. But those brains are also transformed by our experiences, especially our early experiences. More than any other animal, we humans constantly reshape our environment. We also have an exceptionally long childhood and especially plastic young brains. Each new generation of children grows up in the new environment its parents have created, and each generation of brains becomes wired in a different way. The human mind can change radically in just a few generations.
These changes are especially vivid for 21st-century readers. At this very moment, if you are under 30, you are much more likely to be moving your eyes across a screen than a page. And you may be simultaneously clicking a hyperlink to the last “Colbert Report,” I.M.-ing with friends and Skyping with your sweetheart.
We are seeing a new generation of plastic baby brains reshaped by the new digital environment. Boomer hippies listened to Pink Floyd as they struggled to create interactive computer graphics. Their Generation Y children grew up with those graphics as second nature, as much a part of their early experience as language or print. There is every reason to think that their brains will be as strikingly different as the reading brain is from the illiterate one.
Should this inspire grief, or hope? Socrates feared that reading would undermine interactive dialogue. And, of course, he was right: reading is different from talking. The ancient media of speech and song and theater were radically reshaped by writing, though they were never entirely supplanted, a comfort perhaps to those of us who still thrill to the smell of a library.
But the dance through time between old brains and new ones, parents and children, tradition and innovation, is itself a deep part of human nature, perhaps the deepest part. It has its tragic side. Orpheus watched the beloved dead slide irretrievably into the past. We parents have to watch our children glide irretrievably into a future we can never reach ourselves. But, surely, in the end, the story of the reading, learning, hyperlinking, endlessly rewiring brain is more hopeful than sad.
Put these two together, and you get a picture that’s even more hopeful. Our brains aren’t just plastic over the span of human evolution or historical epochs, but over individual lives. It might be easier and feel more natural for children, whose brains seem to us to be nothing but plasticity. But we don’t just have a long childhood — to a certain extent, our childhood never ends.
Human beings are among the only species that evolved to thrive in any kind of climate and terrain on the planet. (Seriously; underwater is the only real exception.) Compared to that, summoning the plasticity required to engage with any new kind of media is a piece of cake.
Oh ho ho — Bill Keller, spilling the beans (or just gabbing like the rest of us):
I’m hoping we can get the newsroom more actively involved in the challenge of delivering our best journalism in the form of Times Reader, iPhone apps, WAP, or the impending Apple slate, or whatever comes after that.
This is noteworthy not just for gossipy reasons. Even if he isn't talking as an insider, Keller's a journalist — he and his reporters probably have good info on this. Just how impending is impending? And is the NYT ready to do something real in that format, and related ones, like the iPhone?
Something cool is going to be happening soon.
I’ve seen several bloggers link, approvingly, to some of David Brooks’ recent columns on psychology and neuroscience, and I’ll join them. I think this conversation couldn’t be more fascinating, mostly because it’s a new one. This isn’t just a nice scientific tux to dress up old (“eternal”) ideas; some of these new notions about how the brain works (or, often, how it doesn’t work) are truly new.
And some of them are truly challenging. What if consciousness isn’t the pilot but rather the spin doctor, coming up with stories to explain your actions only after other, subtler faculties have already committed you to them? Consciousness as giant retcon.
What if there’s not one Robin—expressed in lots of interesting ways, of course—but instead a whole committee, always arguing over whether to actually write something or just post a snazzy image? As Paul Bloom puts it, by way of Brooks, maybe our many selves “are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.”
I always think of that claim—who made it? Harold Bloom?—that Shakespeare literally invented modern Western consciousness. The revolution that was Shakespeare's characterization provided a template that was so seductive, so viral, that it ultimately—after influencing and infecting lots of other writers—became one of the very foundations of our common sense about consciousness, identity, will, and everything else. (I'm probably mangling Bloom's idea. Oh well: It's my mangled version that I find so compelling.)
That’s totally magical, but it’s also totally arbitrary. So maybe it’s time for another sea change (Shakespeare!) in the way we think about ourselves. It doesn’t take much to make a big difference; these are the axioms we build our lives around, so if you change one just a little bit, the ripple effects are massive.
In any case, I'm glad a big-time columnist is bringing these ideas to center stage. I do wish there were a forum that was slightly more technical; I don't want to read the journals, or even anything close to them, really, but I would like to go beyond the too-clean op-ed metaphors that Brooks is bound to by necessity.
I always feel like a dope linking to a NYT Mag cover story—it’s like, yeah, I noticed that, too—so instead I’ll make sure you know Dexter Filkins’ book, The Forever War, is also quite serious and quite good. It was sorta hailed as an instant classic when it came out, and it lives up to the hype.
His gift, if you ask me, is his agility on S.I. Hayakawa’s “ladder of abstraction.” One second he’s talking lucidly about grand strategy; the next he’s walking with a line of Marines on a dusty road. Both ends matter—one illuminates the other—and Filkins writes about both better than just about anybody.
Oh and P.S., Dexter Filkins gets my bet for the last reporter to ever get on Twitter. I feel like he would punch you in the face if you even said the word “tweet” to him.
More thoughts on Op-Tech writing at major dailies. In particular, I had a sentence that I wanted to squeeze in, but forgot about until an hour after I hit submit: “Op-Tech is equal parts business, politics, and aesthetics.”
Think about it! Most of this journalism is about major corporations that each release a handful of significant products or technologies every year. In a few cases, a Pogue or Mossberg will spotlight peripheral objects by smaller companies. But it's really about major trends and players in the tech sector, trying to understand and evaluate what's happening. That's the business end.
But again, Op-Tech writers largely don't touch on issues of manufacturing, personnel, or law: everything the tech reporters do. They write as users (albeit expert users) for users. They talk about the aesthetics and experience of using an object, and make recommendations to users (and only occasionally to companies) about how best to use a product or service and whether to purchase it. This is where they're closest to food or movie reviewers.
Think about it! Like a meal or a movie, personal digital technology is criticized primarily according to the aesthetic experience of the user. I'll ramp that up beyond the bounds of plausibility. New gadgets or software packages are among our most important aesthetic objects, more significant and universal than books, TV shows, or movies — so much so that the paper of record requires experts to weigh in on their value and importance.
At the same time, technology writing is political in a way that most aesthetic criticism simply isn't. What I mean is that 1) there are real arguments between partisans, and 2) these arguments have significant real-world consequences — in ways that criticism of movies or restaurants simply doesn't, unless you live in the right part of Manhattan.
This, I think, is why so many people get upset about the cozy relationship between Op-Tech columnists and the companies they cover — they feel as though criticism, any criticism that might question the strategies of the Major Powers (yes, I’m talking about Apple, Microsoft, and Google as if they were empires on the verge of World War I), is shut out or at least diminished and contained for that reason. The weird position of the major guys as reviewers/insiders/brands appears to guarantee that.
My response would be 1) that you don't need or even want a David Pogue or Walt Mossberg to be running around playing Edward R. Murrow, and 2) that job is open — at least the sliver of it that hasn't already been filled by magazine writers, academic critics, and independent bloggers.
Still, I would love to see more writing in newspapers that really focuses on the aesthetics of tech — Virginia Heffernan is really the model here — or the broader ramifications of tech policy. Imagine if the New York Times had an opinion columnist — right next to Krugman, Dowd, Brooks, and the rest — writing about the intersection of technology, politics, and culture? Not in Slate, not in the Chronicle of Higher Education — but smack in the middle of the NYT, WSJ, or the Post.
After all, EVERYONE who reads the editorial page of the Times has an opinion about who OUGHT to be writing for the editorial page of the Times.
I say, let's treat this as if it were actually already happening: write your model nominees in the comments below.
David Pogue’s position is that he’s not a technology reporter, but an opinion columnist who writes about technology:
"Since when have I ever billed myself as a journalist?" Pogue said angrily. "Since when have I ever billed myself as a journalist? … I am not a reporter. I've never been to journalism school. I don't know what it means to bury the lede. Okay, I do know what it means. I am not a reporter. I've been an opinion columnist my entire career. … I try to entertain and inform."
Recognizing perhaps that the distinction may be lost on his journalist colleagues at the NYT and elsewhere, Pogue added: “By the way I’m suddenly realizing this is all just making it all worse for myself. The haters are going to hate David Pogue even more now.”
This actually becomes a pretty complicated issue when you think about it. On the one hand — and maybe this is a bad example — the NYT hires columnists like Bill Kristol, whose basic qualification is that they ARE partisans with an interest in promoting one side over another. Sometimes, like Paul Krugman, they're really smart, and sometimes, like Nick Kristof, they do some reporting. But they're basically intended to be advocates. You could criticize Kristol for a lot, but it would be stating the all-too-obvious to point out that his professional interest was bound up in the fate of the Republican party. And likewise, it would be stating the all-too-obvious to dig into Tom Friedman's books and lectures. We've essentially decided that it's cool if political writers are part of the party apparatus and/or ideological institutions. So long as they don't out-and-out lie, they're good.
On the other, they have technology and business reporters who are, I don't know, supposed to uncover true facts about products and companies for consumers or investors or amateurs (like me) who are interested in these things for poorly defined and even-less-well-understood reasons. Often, though, this reporting shades into analysis of its objects: whether the new Zune might be a hit or a dud becomes a fact that potentially affects sales, stock movements, personnel changes… all of that nitty-gritty stuff that's part and parcel of being a good reporter.
Maybe in the middle somewhere, there are reviewers, usually writers who review books or movies or plays or television shows or restaurants. These writers are expected to be partial but unmotivated — they have an opinion but not a stake. This includes what some reviewers take to be draconian restrictions on reviewing the books of their friends and/or enemies. You're there for your knowledge and aesthetics — and yet, paradoxically, you are also there (in part) to sell the media you review.
And essentially, the objects reviewed are aesthetic objects. They’re not ordinary household goods. The closest thing to tech gadgets reviewed in a paper like the NYT is the automotive section, which is grouped with “jobs” and “real estate” in the classifieds. Nobody reviews furniture, or toasters, or bicycles. In a sense, the technology reviewer is the only reviewer who offers an opinion on things you use. At least in a sphere where not just you, but the newspaper itself, has a stake, however small, in selling the object.*
So technology journalism — at least, what I'm calling the "Op-Tech" genre — is somewhere between all of these fields. Like book and movie reviewers, Op-Tech writers are expected to offer their opinion on the aesthetics (and use, too) of objects placed before them. Like reporters, their value lies in their quasi-objective take on a product (which in turn helps move product) and the sources they can marshal to give them access. And like opinion columnists, they're expected to be entertaining, partisan, and above all personal. After all, it's their authority, their brand, that creates the conditions under which their opinion is credible (or less often, not).
* This is actually really complicated. One of the most revealing parts of Pogue’s complaint is his claim that he pushed for disclosure of his books in his columns. According to Pogue, his editors resisted it, because they thought it would be seen as self-advertising. “And you know what? I am sorry to tell you guys this, but now that the plug is going to appear in each column it’s going to raise the book sales.” (If you’re an Op-Ed columnist and you write a book, it’ll probably get excerpted in the magazine.)