Today, the city of Chicago elects its mayor. In other cities, there would be a primary vote, then another at the time of the general election in November. But given the scarcity of Chicago Republicans — it’s like 25 guys, and they’re all professors in three departments at U of C — the Democratic primary would effectively determine who will be mayor of the city anyways.
So, Chicago’s mayoral race is nonpartisan. And it’s at the end of February — which, in Chicago, is even more masochistic than it would be in cities with a more temperate climate.
Since Chicago’s longstanding Mayor Richard M. Daley announced he would not seek re-election, Rahm Emanuel, former Chicago-area Congressman, Democratic Party powerhouse, and (until recently) Chief of Staff for President Obama, has sought to sew this thing up. There were some brief problems establishing his residency and right to run for office, but now it looks like he’s off to the races.
Since Emanuel announced he was running for office, he’s been joined by a delightfully funny and foul-mouthed shadow on Twitter calling himself @MayorEmanuel. Like Fake Steve Jobs before him, @MayorEmanuel combines a kind of exaggeration of the known qualities of the real Rahm Emanuel — profanity, intelligence, hyper-competitiveness — with a fully-realized, totally internal world of characters and events that has little to do with the real world and everything to do with the comic parallel universe @MayorEmanuel inhabits.
For instance, @MayorEmanuel’s “about” section on Twitter reads: “Your next motherfucking mayor. Get used to it, assholes.” The idea is that if we strip back the secrecy and public image to something so impolitic, so unlikely, we might arrive at something approximating the truth. But, despite my status as a one-time — and actually, I still hope future — Chicagoan, I haven’t been a regular reader of @MayorEmanuel. My friends retweet his funniest one-liners, and that’s good enough for me.
Yesterday, however, @MayorEmanuel outdid himself. He wrote an extended, meandering narrative of the day before the primary that took the whole parallel Rahm Emanuel thing to a different emotional, comic, cultural place entirely. It even features a great cameo by friend of the Snark Alexis Madrigal. The story is twisting, densely referential, far-ranging — and surprisingly, rather beautiful.
And so, once more using the magic of Storify, I’d like to share that story with you. I’ve added some annotations that I hope help explain what’s happening and aren’t too distracting.
In its original form, it has no title. I call it “The Two Mayors.” Read it after the jump.
Radiohead’s new album The King of Limbs dropped on Friday, prompting much love from the Twittersphere. Maybe too much. The British band hits a kind of sweet spot for the educated set: progressive contemporary music that’s equally accessible whether you’re into old-school prog/classic rock, 90s alternative, or 00s house. Still, some of the exchanges seemed a little, um, exuberant:
Still, I think music fans and cultural observers need to grapple with this a little: Radiohead’s first album, Pablo Honey, came out 18 years ago. Here’s another way to think about it: when that album came out, I was 13; now I’m 31. And from at least The Bends to the present, they’ve commanded the attention of the musical press and the rock audience as one of the top ten — or higher — bands at any given moment. You might have loved Radiohead, you might have been bored by them, you might have wished they’d gone back to an earlier style you liked better, but you always had to pay attention to them, and know where you stood. For 18 years. That’s an astonishing achievement.
Here are some comparisons. The Rolling Stones have obviously outdone everyone in the rock longevity department; even if they were sometimes a punchline, they’ve made solid music and have always been insanely profitable. But really, if you take the stretch from 1964’s The Rolling Stones to 1981’s Tattoo You — which is actually mostly a B-sides album of leftovers from 1978’s Some Girls — that’s only 17 years. If you just do their first album through Some Girls, it’s only 14 years. And that’s when the Stones basically stop evolving as a band and stop being a crucial signpost for popular music.
Very few other rock bands last that long. The Beatles didn’t. Talking Heads didn’t. The Pixies and The Velvet Underground obviously didn’t. The Who only had 13 years between their first album and Keith Moon’s overdose. When Bruce Springsteen had a hit with “Streets of Philadelphia” eighteen years after Born To Run, it was an amazing comeback. R.E.M. had about 20 years of fairly consistent attention between “Radio Free Europe” and Reveal, but that’s an unknown underground band on one end and a kind of boring washed-up band on the other with a peak in the middle.
The Flaming Lips are still pushing it. U2’s been going for about 30 years, although they’ve lost a lot of cred along the way that Radiohead hasn’t. Bob Dylan is a freak. But this is the level we’re talking about here: U2, Dylan, and Radiohead. It’s worth tipping your cap. And watching some videos.
This is from the introduction* to Steven Johnson’s Interface Culture, a book from 1997 that I hadn’t previously read:
A few final observations, and warnings, about the pages that follow. The first should be a comfort to readers who have tired of the recent bombast emanating from both the digital elite and their neo-Luddite critics. I have tried to keep this book as free of dogma and polemic as possible, emphasizing both the tremendous intellectual liberation of the modern interface and the darker, more sinister implications of that same technology.
From its outset this book has been conceived as a kind of secular response to the twin religions of techno-boosterism and techno-phobia. On the most elemental level, I see it as a book of connections, a book of links — one in which desktop metaphors cohabit with Gothic cathedrals, and hypertext links rub shoulders with Victorian novels. Like the illuminations of McLuhan’s electric speed, the commingling of traditional culture and its digital descendants should be seen as a cause for celebration, and not outrage.
This is likely to trouble extremists on both sides of the spectrum.** The neo-Luddites want you to imagine the computer as a betrayal of the book’s slower, more concentrated intelligence; the techno-utopians want you to renounce your ties to the fixed limits of traditional media. Both sides are selling a revolution — it’s just that they can’t agree on whether it’s a good thing. This book is about the continuities more than the radical breaks, the legacies more than the disavowals.
For that reason, the most controversial thing about this book may be the case it makes for its own existence. This book is both an argument for a new type of criticism and a working example of that criticism going about its business.***
Notes
* I added some extra paragraph breaks to the excerpt to make it read more like a blog post.
** Compare my “Bookfuturist Manifesto,” from The Atlantic.com, August 2010.
*** I pretty much want to be Steven Johnson right now.
Like Robin, I love the counter-conventional wisdom John Herrman brings to “I Just Want A Dumb TV.” And I really like Frank Chimero’s distinction between “steadfast,” long-enduring, simple tools and “hot-swap” components of a system that you can change on the fly.
But I want to pivot from this taxonomy of “dumb” things to create a complementary taxonomy of “smart” ones. If the current crop of “smart” TVs somehow goes wrong, how does it do it? And is a “dumb” monitor the best alternative?
“Smart” and “dumb” applied to electronics/tech has a long history, but for our purposes here, let’s look at the smartphone as one model of what a smart appliance looks like. That seems to be what makers of smart TVs did, anyways. So let’s say, bare minimum, a “smart” appliance needs:
In short, it should slightly resemble a modern, networked computer. The problem with smart TVs is they work too much like smartphones and not enough like PCs.
See, smartphones are hypermobile, so you stuff a ton of capacity into the device because it’s going to have to do most things by itself. Phone, games, maps, email, the web, etc. — everything that can be jammed into those little screens.
Television screens, on the other hand, are antimobile. Like desktop PCs, they stay in one place, and you hook other things up to them: cable boxes, game systems, Blu-Ray players, and (wirelessly) remote controls.
With a smart TV, you can go in two directions to make the device “smarter”: you can either try to make it super self-sufficient, doing more and more on one piece of hardware. Or you can make it better and better at talking to other devices.
There are good aesthetic reasons to do the first one: you can cut cords and clutter and save some money and electricity. Also, it’s wired in with software, not hardware. It’s not like you’ve got this crummy, outdated VCR built into the box; you can (in principle) update your OS and get a whole new set of applications and capabilities.
Still, the second way of making a TV smart seems better to me. Forget connecting my TV to the web; I want to connect my TV to my phone, my laptop, my refrigerator, my alarm clock, my media players (etc etc etc). But do it all wirelessly, over a local network. Make it easier for me to get my media — wherever it comes from — up on the biggest screen in my house. I can’t do that with a totally dumb TV, but I can’t do that easily with current-generation smart TVs either.
This is why I guess I’m more interested in “two-screen” approaches to television, where you’re using an iPad (or something) to browse channels and read about programs and tweet about what you’re watching and otherwise interact with and control what’s on your screen. Because the lesson of “hot-swapping” is that good parts that talk to each other well make the whole more than the sum of its parts.
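To make the two-screen idea concrete, here’s a minimal sketch of what I’m imagining, under a big assumption: that the TV exposes some simple playback endpoint on the local network. The host, port, path, and JSON shape below are all invented for illustration.

```python
import json
import urllib.request

def send_to_tv(media_url, tv_host="192.168.1.42", port=8080):
    """From a phone or laptop on the same LAN, ask the big screen to play something.
    The endpoint path and the JSON shape are hypothetical."""
    payload = json.dumps({"action": "play", "url": media_url}).encode("utf-8")
    request = urllib.request.Request(
        f"http://{tv_host}:{port}/playback",  # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)

# e.g., after browsing on a tablet:
# send_to_tv("http://example.com/some-episode.mp4")
```

The point of the sketch is that the intelligence lives in the conversation between devices, not in any one box.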
Over on Gizmodo, John Herrman takes TV manufacturers to task for pitching all these widget-enabled internet-connected “smart TVs.” He says:
So, here’s the idea: Just buy dumb TVs. Buy TVs with perfect pictures, nice speakers and an attractive finish. Let set top boxes or Blu-ray players or Apple TVs take care of all the amazing connectivity and content afforded to us by today’s best internet TVs. Spend money on what you know you’ll still want in a few years—a good screen—and let your A/V cabinet host the changing cast of disposable accessories. […]
And TV manufacturers: Don’t just make more dumb TVs. Make them dumber.
I love the exhortation: Make them dumber! Yes, we want stuff that’s even dumber and more durable and more flexible. We want stuff we can plug into other stuff forever.
It does seem true that in the places where requirements are clear—this must make a good picture—and interfaces consistent, things you buy can actually find their footing and hold steady in the swirl of the shiny new.
I’d love a directory of these steadfast components. I feel like my Samsung TV (very dumb) might be a candidate. The 24″ Dell LCD I’ve had at home for five years would definitely go in that directory—I think these Dell monitors are widely recognized as the, like, basic black t-shirts of computer components at this point.
But what else? And what about other domains? Certainly, a good cast-iron frying pan is a kitchen component. There’s probably some classic kind of shoe that, thanks to its timelessness and durability, has reached component status (I do not know what it is). And there are probably some components in here, right?
We can’t expect stability and durability in every domain yet. There’s not going to be a component-caliber tablet computer for quite a while, obviously. But where components are available… where things are dumb and durable… man, that’s the good stuff. That’s the stuff I find myself wanting more and more of.
What are your favorite components—either things you have or things you’d like to get?
Update: Frank Chimero pulls a Carmody1 and proposes a two-fold taxonomy: the steadfast and the hot-swap. Both have their place.
Another update: Tim Maly goes deeper with “shearing layers.”
1. pull a Carmody v. to leave a comment that exceeds the original post in insight and value.
It’s always nice when three blogs in your “must read” folder happily converge. First, Jason Kottke pulls a couple of super-tight paragraphs from a Chronicle of Higher Ed article by Clancy Martin, philosophy professor and onetime salesman of luxury jewelry, about how he plied his former trade:
The jewelry business — like many other businesses, especially those that depend on selling — lends itself to lies. It’s hard to make money selling used Rolexes as what they are, but if you clean one up and make it look new, suddenly there’s a little profit in the deal. Grading diamonds is a subjective business, and the better a diamond looks to you when you’re grading it, the more money it’s worth — as long as you can convince your customer that it’s the grade you’re selling it as. Here’s an easy, effective way to do that: First lie to yourself about what grade the diamond is; then you can sincerely tell your customer “the truth” about what it’s worth.
As I would tell my salespeople: If you want to be an expert deceiver, master the art of self-deception. People will believe you when they see that you yourself are deeply convinced. It sounds difficult to do, but in fact it’s easy — we are already experts at lying to ourselves. We believe just what we want to believe. And the customer will help in this process, because she or he wants the diamond — where else can I get such a good deal on such a high-quality stone? — to be of a certain size and quality. At the same time, he or she does not want to pay the price that the actual diamond, were it what you claimed it to be, would cost. The transaction is a collaboration of lies and self-deceptions.
This structure is so neat that it has to be generalizable, right? Look no further than politics, says Jamelle Bouie (filling in for Ta-Nehisi Coates). In “Why Is Stanley Kurtz Calling Obama a Socialist?”, he writes that whether or not calling Obama a socialist started out as a scare tactic, conservative commentators like Kurtz actually believe it now. He pulls a quote from Slacktivist’s Fred Clark on the problem of bearing false witness:
What may start out as a well-intentioned choice to “fight dirty” for a righteous cause gradually forces the bearers of false witness to behave as though their false testimony were true. This is treacherous — behaving in accord with unreality is never effective, wise or safe. Ultimately, the bearers of false witness come to believe their own lies. They come to be trapped in their own fantasy world, no longer willing or able to separate reality from unreality. Once the bearers of false witness are that far gone it may be too late to set them free from their self-constructed prisons.
What’s nice about pairing these two observations is that Martin’s take on self-deception in selling jewelry is binary, a pas de deux with two agents, both deceiving themselves and letting themselves be deceived. Bouie and Clark don’t really go there, but the implication is clear: in politics, the audience is ready to be convinced/deceived because it is already convincing/deceiving itself.
There’s no more dangerous position to be in, truth-wise, than to think you’re getting it figured out, that you see things other people don’t, that you’re getting over on someone. That’s how confidence games work, because that’s how confidence works. And almost nobody’s immune, as Jonah Lehrer points out, quoting Richard Feynman on selective reporting in science. He refers to a famous 1909 experiment which sought to measure the charge of the electron:
Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.
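Feynman is describing a feedback loop, and you can watch it run in a toy simulation: pretend the true value is 1.00, start from a published value that came out a little low, and only accept new measurements that land close to the last accepted one. (All the numbers here are arbitrary; the point is the creep.)

```python
import random

random.seed(0)  # make the toy run repeatable

TRUE_VALUE = 1.00       # pretend this is the true charge, in arbitrary units
FIRST_PUBLISHED = 0.90  # an initial published value that came out a little low

def next_published(prior, tolerance=0.03, noise=0.05):
    """Measure with honest random error, but keep re-measuring whenever the
    result lands 'too far' from the previously accepted value."""
    while True:
        result = random.gauss(TRUE_VALUE, noise)
        if abs(result - prior) <= tolerance:
            return result

published = [FIRST_PUBLISHED]
for _ in range(15):
    published.append(next_published(published[-1]))

print(" -> ".join(f"{v:.2f}" for v in published))
# Each value is only a little bigger than the one before it, and the sequence
# creeps toward 1.00 instead of jumping there, which is the shape Feynman describes.
```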
It’s all little lies and adjustments, all the way down. Where else can I get such a good deal on such a high-quality stone?
I love Westerns. My allegiance to the genre has long been known on the Snarkmatrix. (I refer you to the comment threads on Exhibit A or Exhibit B.) So I am excited that people are excited by Joel and Ethan Coen’s new Western, True Grit.
And jeez, I hope I get a few hours by myself in the next week or so to see this movie. Parenting is a serious drag on your ability to partake of the cinema, which is one reason I’ve become such a devotee of Netflix Watch Instantly. I didn’t even get to catch the restored Metropolis when it came to town, and I had only A) waited months for it and B) written a chapter of my dissertation about its director. So I don’t know if True Grit is as good as everyone says it is. What I do know, what I know the hell out of, are Westerns, and Netflix. If you don’t know Westerns, that’s fine. So long as you’ve got a Netflix subscription and streaming internet, I’ve got your back.
You probably know that True Grit (2010) is an adaptation of the same Charles Portis novel (True Grit) that was previously adapted into a movie [True Grit (1969)] that won John Wayne a Best Actor Oscar for his portrayal of the eyepatched marshal Rooster Cogburn. It’s not a remake, you’ve heard intoned, it’s a more-faithful adaptation of the novel.
Fine. Who cares? At a certain point, remakes and adaptations stop being remakes and adaptations. Does anyone care that His Girl Friday was a gender-swapping adaptation of The Front Page, a terrific Ben Hecht and Charles MacArthur play which had already been made into a movie in 1931, and which was made into a movie again in 1974 with Billy Wilder directing and Walter Matthau and Jack Lemmon playing the Cary Grant and Rosalind Russell roles?
Okay, I do. But besides me, not really. Because His Girl Friday obliterated The Front Page in our movie-watching consciousness, even though the latter is the prototype of every fast-talking newspaper comedy from, shit, His Girl Friday to the Coen Brothers’ The Hudsucker Proxy. It’s been over forty years since True Grit (1969). It’s a good movie, but if you haven’t seen it, don’t sweat it too much.
You should, however, be sweating the Western. Because not least among their virtues is that Joel and Ethan Coen care and care deeply about genre. Virtually all of their movies are a loving pastiche of one genre form or another, whether playful (like Hudsucker’s newspaper comedy or The Big Lebowski’s skewed take on the hardboiled detective), not so playful (No Country For Old Men) or both somehow at once (Miller’s Crossing, Fargo). And the Western is fickle. You’ve got to contend with books, movies, radio, and TV, all with their own assumptions, all alternating giddy hats-and-badges-and-guns-and-horses entertainment and stone-serious edge-of-civilization Greek-tragedy-meets-American-origin-stories primal rites.
I’ll save you some time, though, by giving you just twelve links, briefly annotated.
It’s a classic paradox of American democracy: citizens love America, hate Congress, but generally like their own district’s Congressman. (Until they don’t, and then they vote for someone else, who they usually like).
Josh Huder (via Ezra Klein) takes on the apparent paradox, armed with some good data and historical analysis.
Huder points out something even more paradoxical: Congressional approval takes a hit not just when there’s a scandal, or when there’s partisan gridlock in the face of a crisis, but even when Congress works together to pass major legislation:
By simply doing its job Congress can alienate large parts of its constituency. So while people like their legislators, they dislike when they get together with fellow members and legislate.
From this, Huder concludes that “disapproval is built into the institution’s DNA.” But let me come at this from a different angle: professional and/or college sports.
There’s almost an exact isomorphism here. Fans/constituents like/love their home teams (unless their performance suffers for an extended period of time, when they switch to “throw the bums out” mode), and LOVE the game itself. But nobody really likes the league. Who would say, “I love the MLB” or “I love the NCAA” — meaning the actual organizations themselves?
Never! The decisions of the league are always suspect. They’re aggregate, bureaucratic, necessary, and not the least bit fun. Even when leagues make the right decision, we discount it; they’re just “doing their job.” The only time they can really capture our attention is when they do something awful. And most of the time, they’re just tedious drags on our attention, easily taken for granted.
If it’s a structure, it doesn’t seem to be limited to politics. It’s a weird blend of local/pastime attachment, combined with contempt/misunderstanding for the actual structures that work. Because we don’t *want* to notice them at work at all, really.
Robin set the table up (and h/t to Alexis for getting Lanier’s essay in circulation).
Here are three disjoint thoughts, slightly too long for tweets/comments:
A sufficiently copious flood of data creates an illusion of omniscience, and that illusion can make you stupid. Another way to put this is that a lot of information made available over the internet encourages players to think as if they had a God’s eye view, looking down on the whole system.
We’ve canonized these guys, to the point where 1) we think they did everything themselves, 2) they never used different strategies, 3) they never made mistakes, and 4) disagreeing with them then or now violates a deep moral law.
More importantly, in comparison, every other kind of activism is destined to fall short. Lanier’s essay, like Malcolm Gladwell’s earlier essay on digital activism, violates the Gandhi principle. (Hmm, maybe this should be the No-Gandhi Principle. Or it doesn’t violate the Gandhi Principle, but invokes it. Which is usually a bad thing. Still sorting this part out.) The point is, both ad Hitlerum and the Gandhi Principle opt for terminal purity over differential diagnosis. If you’re not bringing it MLK-style, you’re not really doing anything.
The irony is, Lanier’s essay is actually pretty strong at avoiding the terminal purity problem in other places — i.e., if you agree with someone’s politics, you should agree with (or ignore) their tactics, or vice versa. At its best, it brings the nuance, rather than washing it out.
Google’s Ngrams is also subject to terminal purity arguments — either it’s exposing our fundamental cultural DNA, or it’s dicking around with badly-OCRed data, and it couldn’t possibly be anything in between. To which I say — oy.
Yesterday NiemanLab published some of my musings on the coming “Speakularity” – the moment when automatic speech transcription becomes fast, free and decent.
I probably should have underscored that I don’t see this moment happening in 2011, given that these musings were solicited as part of a NiemanLab series called “Predictions for Journalism 2011.” Instead, I think several things could converge next year that would bring the Speakularity a lot closer. This is pure hypothesis and conjecture, but I’m putting it out there because I think there’s a small chance that talking about these possibilities publicly might actually make them more likely.
First, let’s take a clear-eyed look at where we are, in the most optimistic scenario. Watch the first minute-and-a-half or so of this video interview with Clay Shirky. Make sure you turn closed-captioning on, and set it to transcribe the audio. Here’s my best rendering of some of Shirky’s comments alongside my best rendering of the auto-caption:
Manual transcript: Well, they offered this penalty-free checking account to college students for the obvious reason students could run up an overdraft and not suffer. And so they got thousands of customers. And then when the students were spread around during the summer, they reneged on the deal. And so HSBC assumed they could change this policy and have the students not react because the students were just hopelessly dispersed. So a guy named Wes Streeting (sp?) puts up a page on Facebook, which HSBC had not been counting on. And the Facebook site became the source of such a large and prolonged protest among thousands and thousands of people that within a few weeks, HSBC had to back down again. So that was one of the early examples of a managed organization like a bank running into the fact that its users and its customers are not just atomized, disconnected people. They can actually come together and act as a group now, because we’ve got these platforms that allow us to coordinate with one another.

Auto transcript: will they offer the penalty-free technique at the college students pretty obvious resistance could could %uh run a program not suffer as they got thousands of customers and then when the students were spread around during the summer they were spread over the summer the reneged on the day and to hsbc assumed that they could change this policy and have the students not react because the students were just hopeless experts so again in western parts of the page on face book which hsbc had not been counting on the face book site became the source of such a large and prolonged protest among thousands and thousands of people that within a few weeks hsbc had to back down again so that was one of the early examples are female issue organization like a bank running into the fact that it’s users are not just after its customers are not just adam eyes turned disconnected people they get actually come together and act as a group mail because we’ve got these platforms to laos to coordinate
Cringe-inducing, right? What little punctuation exists is in error (“it’s users”), there’s no capitalization, “atomized” has become “adam eyes,” “platforms that allow us” are now “platforms to laos,” and HSBC is suddenly an example of a “female issue organization,” whatever that means.
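If you want to put a rough number on that gap, the standard yardstick is word error rate: edit distance over words, divided by the length of the reference. Here’s a quick sketch, applied to one clause from each column above:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by the number of reference words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete a reference word
                           dp[i][j - 1] + 1,         # insert a hypothesis word
                           dp[i - 1][j - 1] + cost)  # substitute (or match)
    return dp[-1][-1] / len(ref)

manual = "They can actually come together and act as a group now"
auto = "they get actually come together and act as a group mail"
print(f"word error rate: {word_error_rate(manual, auto):.0%}")  # 2 substitutions / 11 words = 18%
```

And that clause is one of the auto-caption’s better stretches; run the whole passage through and the number gets much uglier.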
Now imagine, for a moment, that you’re a journalist. You click a button to send this video to Google Transcribe, where it appears in an interface somewhat resembling the New York Times’ DebateViewer. Highlight a passage in the text, and it will instantly loop the corresponding section of video, while you type in a more accurate transcription of the passage.
That advancement alone – quite achievable with existing technology – would speed our ability to transcribe a clip like this quite a bit. And it wouldn’t be much more of an encroachment than Google has already made into the field of automatic transcription. All of this, I suspect, could happen in 2011.
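The core interaction in that imagined interface is just a mapping from a highlighted span of transcript back to a time range in the video. A minimal sketch, under the (big) assumption that the auto transcript comes back with word-level timestamps; the data shape and the example timings below are invented:

```python
def loop_range(words, first_idx, last_idx, padding=0.5):
    """words is a transcript as [(word, start_sec, end_sec), ...]; given the indices
    of a highlighted span, return the (start, end) seconds the player should loop
    while the journalist retypes that passage."""
    start = words[first_idx][1]
    end = words[last_idx][2]
    return max(0.0, start - padding), end + padding

# Invented timings: the viewer highlights the words rendered as "to laos" and the
# player loops roughly that second of video while they correct it to "that allow us".
words = [("platforms", 81.2, 81.7), ("to", 81.7, 81.8), ("laos", 81.8, 82.3)]
print(loop_range(words, 1, 2))  # -> (81.2, 82.8)
```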
Now allow me a brief tangent. One of the predictions I considered submitting for NiemanLab’s series was that Facebook would unveil a dramatically enhanced Facebook Videos in 2011, integrating video into the core functionality of the site the way Photos have been, instead of making it an application. I suspect this would increase adoption, and we’d see more people getting tagged in videos. And Google might counter by adding social tagging capabilities to YouTube, the way they have with Picasa. This would mean that in some cases, Google would know who appeared in a video, and possibly know who was speaking.
Back to Google. This week, the Google Mobile team announced that they’ve built personalized voice recognition into Android. If you turn it on for your Android device, it’ll learn your voice, improving the accuracy of the software the way dictation programs such as Dragon do now.
Pair these ideas and fast-forward a bit. Google asks YouTube users whether they want to enable personalized voice recognition on videos they’re tagged in. If Google knows you’re speaking in a video, it uses what it knows about your voice to make your part of the transcription more accurate. (And hey, let’s throw in that they’ve enabled social tagging at the transcript level, so it can make educated guesses about who’s saying what in a video.)
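In code, that educated guessing could be as simple as picking a per-speaker model where one exists and falling back to the generic one otherwise. This is a loose sketch of the idea, not anything Google has announced; every name and data shape in it is made up:

```python
def transcribe_segments(segments, personal_models, generic_model):
    """segments: [{"speaker": "user123", "audio": clip}, ...], already split by
    speaker turns and labeled with a best guess at who is talking (from the tags).
    personal_models maps a user id to that person's voice model; anything with a
    .transcribe(audio) method will do here."""
    lines = []
    for seg in segments:
        model = personal_models.get(seg["speaker"], generic_model)
        lines.append((seg["speaker"], model.transcribe(seg["audio"])))
    return lines
```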
A bit further on: Footage for most national news shows is regularly uploaded to YouTube, and this footage tends to feature a familiar blend of voices. If they were somewhat reliably tagged, and Google could begin learning their voices, automatic transcriptions for these shows could become decently accurate out of the box. That gets us to the democratized Daily Show scenario.
This is a bucketload of hypotheticals, and I’m highly pessimistic Google could make its various software layers work together this seamlessly anytime soon, but are you starting to see the path I’m drawing here?
And at this point, I’m talking about fairly mainstream applications. The launch of Google Transcribe alone would be a big step forward for journalists, driving down the costs of transcription for news applications a good amount.
Commenter Patrick at NiemanLab mentioned that the speech recognition industry will do everything in its power to prevent Google from releasing anything like Transcribe anytime soon. I agree, but I think speech transcription might be a smaller industry economically than GPS navigation,* and that didn’t prevent Google from solidly disrupting that universe with Google Navigate.
I’m stepping way out on a limb in all of this, it should be emphasized. I know very little about the technological or market realities of speech recognition. I think I know the news world well enough to know how valuable these things would be, and I think I have a sense of what might be feasible soon. But as Tim said on Twitter, “the Speakularity is a lot like the Singularity in that it’s a kind of ever-retreating target.”
The thing I’m surprised not many people have made hay with is the dystopian part of this vision. The Singularity has its gray goo, and the Speakularity has some pretty sinister implications as well. Does the vision I paint above up the creep factor for anyone?
* To make that guess, I’m extrapolating from the size of the call center recording systems market, which is projected to hit $1.24 billion by 2015. It’s only one segment of the industry, but I suspect it’s a hefty piece (15%? 20%?) of that pie. GPS, on the other hand, is slated to be a $70 billion market by 2013.
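For what that extrapolation is worth, here’s the arithmetic spelled out, using only the numbers in this footnote:

```python
segment = 1.24e9   # call-center recording systems, projected for 2015
gps_market = 70e9  # GPS, projected for 2013
for share in (0.15, 0.20):
    total = segment / share
    print(f"if that segment is {share:.0%} of the pie, "
          f"speech is a ~${total / 1e9:.1f}B business vs. ~${gps_market / 1e9:.0f}B for GPS")
```

So: very roughly a $6–8 billion industry, an order of magnitude smaller than the GPS market Google already waded into.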