September 5, 2009
Jamais Cascio on devices that pay attention:
Imagine a desktop with a camera that knows to shut down the screen and eventually go to sleep when you walk away (but stays awake when you’re sitting there reading something or thinking), and will wake up when you sit down in front of it (no mouse-jiggling required).
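The camera behavior Cascio describes is essentially a timeout policy keyed to face detection rather than input activity. A minimal sketch, assuming a hypothetical camera layer that reports how long it has been since a face was last seen (both thresholds are made up for illustration):

```python
# A sketch of the presence-driven display policy described above.
# The camera interface is hypothetical; it is reduced here to a single
# number: seconds since a face was last detected in frame.

DIM_AFTER = 30      # assumed: seconds without a face before dimming
SLEEP_AFTER = 300   # assumed: seconds without a face before full sleep

def display_state(seconds_since_face_seen):
    """Map time since a face was last detected to a display state."""
    if seconds_since_face_seen < DIM_AFTER:
        return "awake"       # someone is sitting there, even if idle
    if seconds_since_face_seen < SLEEP_AFTER:
        return "dimmed"
    return "asleep"
```

The point of keying on the face rather than the mouse is that sitting there reading counts as presence, so the screen never dims mid-page, and returning to the chair wakes it with no mouse-jiggling.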
Or a system with a microphone that listens for the combination of a phone ringing (sudden loud noise) followed by a nearby voice saying “hello” (or similar greeting), and will mute the system automatically.
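The phone-call mute is a two-event sequence detector: a loud transient followed shortly by a greeting. A sketch over pre-classified audio events (the classifier itself, and the ten-second window, are assumptions):

```python
HELLO_WINDOW = 10.0  # assumed: seconds after a ring in which a greeting mutes

def should_mute(events):
    """events: list of (timestamp, kind) tuples, where kind is "ring"
    (a sudden loud noise) or "greeting" (a nearby voice saying hello).
    Returns True when a greeting follows a ring within HELLO_WINDOW seconds."""
    last_ring = None
    for t, kind in events:
        if kind == "ring":
            last_ring = t
        elif kind == "greeting" and last_ring is not None:
            if t - last_ring <= HELLO_WINDOW:
                return True
    return False
```

Requiring both events in order is what keeps false positives down: a greeting alone (someone walking in) or a loud noise alone (a dropped book) does nothing.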
When you go down this road, extrapolating from existing abilities (accelerometers, face and voice recognition, light detection) to more complex algorithms, the possibilities get correspondingly more complicated:
What prompted this line of thought for me was the story about the Outbreaks Near Me application for the iPhone. It struck me that a system that provided near-real-time weather, pollution, pollen, and flu (etc.) information based on watching where you are — and learning where you typically go, to give you early warnings — was well within our capabilities.
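The "learning where you typically go" part could be as simple as counting observed visits by hour of day, then pre-fetching alerts for the usual places at each hour. A sketch under that assumption (the place labels and top-3 cutoff are illustrative):

```python
from collections import Counter

def usual_places(visits, top_n=3):
    """visits: list of (hour_of_day, place) observations logged over time.
    Returns, for each hour, the places most often seen then -- i.e. where
    to check weather/pollution/pollen/flu alerts ahead of time."""
    by_hour = {}
    for hour, place in visits:
        by_hour.setdefault(hour, Counter())[place] += 1
    return {hour: [place for place, _ in counts.most_common(top_n)]
            for hour, counts in by_hour.items()}
```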
Or a system that listened for coughing — how many different voices, how often, how intense, where — to add to health maps used by epidemiologists (and other mobile apps).
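For the cough-counting idea, the step that matters for the epidemiologists' map is aggregation: bucket detected coughs into coarse map cells so that counts, not individual locations, leave the device. A sketch, assuming cough events have already been detected and geotagged upstream:

```python
from collections import Counter

def aggregate_coughs(events, precision=2):
    """events: list of (lat, lon) points where a cough was detected.
    Rounding to 2 decimal places of latitude/longitude buckets events
    into roughly kilometer-scale cells, which is coarse enough for a
    health map but not tied to a single address."""
    counts = Counter()
    for lat, lon in events:
        cell = (round(lat, precision), round(lon, precision))
        counts[cell] += 1
    return counts
```

The other features Cascio mentions (how many distinct voices, how intense) could ride along as per-cell statistics in the same way.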
It seems almost axiomatic that the applications of digital technology that are potentially the most beneficial in the aggregate also require the most information from the individual user, and therefore creep us out to the point where we’re reluctant to put them into practice. There’s got to be a name for this paradox: a digital analogue to The Fable of the Bees.