I’m enamored of this post of Kasia’s, where she reports on how it feels to have your brain slow and gel* during two weeks away from the internet:
My peripheral vision grew back, my field of focus going from a small, MacBook-shaped rectangle to the whole horizon.
The best antidote to internet addiction is reading novels.
*The brain of an internet-surfer or blog-writer has the consistency of a hyper-agitated gas. The brain of a novel-reader or deep thinker, by contrast, is a viscous jelly. And the brain of a bookservative is a cold hard stone.
I devoured Steven Johnson’s forthcoming book, Where Good Ideas Come From, over the course of a few bus rides and absolutely loved it. Here’s one bit that’s now stuck in my head:
So, our brains are full of patterns, obviously. One of them is the oscillation between neurons firing all in sync and firing at random—sort of a flip-flop between coherence (the technical term is “phase-lock”) and noise. Well…
In 2007, Robert Thatcher, a brain scientist at the University of South Florida, decided to study the vacillation between phase-lock and noise in the brains of dozens of children. While Thatcher found that the noise periods lasted, on average, for 55 milliseconds, he also detected statistically significant variation among the children. Some brains had a tendency to remain longer in phase-lock, others had noise intervals that regularly approached 60 milliseconds. When Thatcher then compared the brain-wave results with the children’s IQ scores, he found a direct correlation between the two data sets. Every extra millisecond spent in the chaotic mode added as much as 20 IQ points. Longer spells in phase-lock deducted IQ points, though not as dramatically.
Thatcher’s study suggests a counterintuitive notion: the more disorganized your brain is, the smarter you are.
(Can we just pause here for a fist-pump and a quiet whispered “yesss”?)
It’s counterintuitive in part because we tend to associate the growing intelligence of the technology world with increasingly precise electromechanical choreography. Intel doesn’t advertise its latest microprocessors with the slogan: “Every 55 milliseconds, our chips erupt into a blizzard of noise!” Yet somehow brains that seek out that noise seem to thrive, at least by the measure of the IQ test.
A few grafs later, to sum things up, here’s William James by way of Steven Johnson:
Instead of thoughts of concrete things patiently following one another, we have the most abrupt cross-cuts and transitions from one idea to another, the most rarefied abstractions and discriminations, the most unheard-of combinations of elements… a seething cauldron of ideas, where everything is fizzling and bobbling about in a state of bewildering activity, where partnerships can be joined or loosened in an instant, treadmill routine is unknown, and the unexpected seems the only law.
He’s describing “the highest order of minds”—but he could just as easily be describing a startup, or a city. Which is exactly, I think, the point.
Why Jonah Lehrer can’t quit his janky GPS:
The moral is that it doesn’t take much before we start attributing feelings and intentions to a machine. (Sometimes, all it takes is a voice giving us instructions in English.) We are consummate agency detectors, which is why little kids talk to stuffed animals and why I haven’t thrown my GPS unit away. Furthermore, these mistaken perceptions of agency can dramatically change our response to the machine. When we see the device as having a few human attributes, we start treating it like a human, and not like a tool. In the case of my GPS unit, this means that I tolerate failings that I normally wouldn’t. So here’s my advice for designers of mediocre gadgets: Give them voices. Give us an excuse to endow them with agency. Because once we see them as humanesque, and not just as another thing, we’re more likely to develop a fondness for their failings.
But this gave Kasparov a fascinating idea. What if, instead of playing against one another, a computer and a human played together—as part of a team?
And then he blockquotes Kasparov (emphasis mine):
Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.
The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. *Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.*
How cool is that? How pregnant with possibility?
Clive riffs on it some more and really zooms in on the process that supports human-machine interaction as the key variable. If you have a better process, you win.
Actually, I want to amend the word “interaction” above; that’s the standard way of talking about it, but I like Kasparov’s language of “teamwork” and “coaching” a lot better. How about that: from now on, think of devices and apps as your teammates—your collaborators. How does that change the way you think about them? How does that change your standards for them?
Also: While we’re on the subject of tools, Frank Chimero has a neat post about tools and ambiguity. Peep the silent counterpoint design elements. YES.
I love this chart, but maybe not for the obvious reason.
Stanislas Dehaene’s Reading in the Brain (previously on Snarkmarket) has a revelatory section about how we recognize glyphs even when they come in many configurations. Think about all the ways the letter E can look: capital E, lowercase e, cursive e, funky-futuristic-font E, and so on. Our brain recognizes them all (well… almost all) instantly as E. It peels back the pixels or atoms and registers the underlying letter-concept.
Anyway, looking at this chart, I realized that the bat symbol is totally a glyph! It’s beyond graphic design at this point. There are so many variations out there—many, many more beyond what you see above—and there is a lot of difference between them. But they’re all unmistakably the bat symbol. That’s cool.
I want to make something that becomes a glyph.
(Rob Greco asks which version is my favorite. For me, it’s an easy pick: 2005 all the way. But the modern choice is actually the most retro; the Batman Begins team reached way back into the early archives for inspiration.)
Rachel shares some language hacks used to confound the Great Firewall:
Chai Zi, if you remember, is an old form of divination involving the splitting up of Chinese characters into their component radicals, altering or removing strokes to form different words. But as ancient an artform as it is, it’s also become, today, one of the weapons in the arsenal of the Chinese Internet Résistance.
An example I like: the government likes to say that it filters the internet to promote ‘harmony’ (和谐, HE2 XIE2), and bans ‘unharmonious’ blogs. Chinese bloggers, however, say sardonically of a banned blog that it has been ‘river crabbed’ (河蟹, HE2 XIE4), because the word for river crab, while written with entirely different characters than the word for harmony, nonetheless sounds almost identical.
Cross-reference this with Stanislas Dehaene’s book “Reading in the Brain” and you get a gooey mass of brain-bending politico-linguistic delight.
Tony Judt, author of the magisterial book Postwar—really, one of my absolute favorites—has Lou Gehrig’s disease, and it’s progressed to the point where he can’t move his arms or legs.
In the NYRB, he writes:
During the day I can at least request a scratch, an adjustment, a drink, or simply a gratuitous re-placement of my limbs—since enforced stillness for hours on end is not only physically uncomfortable but psychologically close to intolerable. It is not as though you lose the desire to stretch, to bend, to stand or lie or run or even exercise. But when the urge comes over you there is nothing—nothing—that you can do except seek some tiny substitute or else find a way to suppress the thought and the accompanying muscle memory.
But then comes the night.
I’ve seen several bloggers link, approvingly, to some of David Brooks’ recent columns on psychology and neuroscience, and I’ll join them. I think this conversation couldn’t be more fascinating, mostly because it’s a new one. This isn’t just a nice scientific tux to dress up old (“eternal”) ideas; some of these new notions about how the brain works (or, often, how it doesn’t work) are truly new.
And some of them are truly challenging. What if consciousness isn’t the pilot but rather the spin doctor, coming up with stories to explain your actions only after other, subtler faculties have already committed you to them? Consciousness as giant retcon.
What if there’s not one Robin—expressed in lots of interesting ways, of course—but instead a whole committee, always arguing over whether to actually write something or just post a snazzy image? As Paul Bloom puts it, by way of Brooks, maybe our many selves “are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.”
I always think of that claim—who made it? Harold Bloom?—that Shakespeare literally invented modern Western consciousness. The revolution that was Shakespeare’s characterization provided a template that was so seductive, so viral, that it ultimately—after influencing and infecting lots of other writers—became one of the very foundations of our common sense about consciousness, identity, will, and everything else. (I’m probably mangling Bloom’s idea. Oh well: It’s my mangled version that I find so compelling.)
That’s totally magical, but it’s also totally arbitrary. So maybe it’s time for another sea change (Shakespeare!) in the way we think about ourselves. It doesn’t take much to make a big difference; these are the axioms we build our lives around, so if you change one just a little bit, the ripple effects are massive.
In any case, I’m glad a big-time columnist is bringing these ideas to center stage. I do wish there were a forum that was slightly more technical; I don’t want to read the journals, or even anything close to them, really, but I would like to go beyond the too-clean op-ed metaphors that Brooks is bound to by necessity.