Jerome Groopman writes in The New Yorker about Maja Matarić and the quest to create robots that can help people through therapy. Assistive robots are nothing new; Japan’s aging population has been using this technology for years. But Groopman focuses on the social and emotional components of these advancements, and on efforts to enable robots to understand people:
Glancing at the robot, Mary lifted a magazine from the top of the pile and guided it into a rack on top of the shelf. As soon as the magazine was in place, the robot emitted a beep. During the next few minutes, Mary moved each magazine, one by one, to the rack. Gradually, she increased her pace, and the beeps from the robot came faster. Mary began to laugh.
She turned and looked squarely at the robot. With a sly smile, she moved her weak arm toward the remaining magazines on the desk and mimed putting one into the rack. She then stuck her tongue out at the machine.
Matarić said, “She is cheating. She is totally thrilled, because she thinks she cheated the robot.” The robot, though, was on to the game. A reflective white band that Mary wore on her leg allowed the robot to follow her movements. A thin motion sensor attached to her sleeve transmitted Mary’s gestures to the robot, so that it knew almost instantly whether she was raising her arm and in what motion. A sensor in the rack signalled the robot when a magazine was properly placed, and the robot communicated with Mary only when she performed the task correctly.
Although the task lasted about an hour, the novelty of the interaction did not seem to wane. In a debriefing after the study, Mary said, “When I’m at home, my husband is useless. He just says, ‘Do it.’ I much prefer the robot to my husband.”
The article takes an interesting turn when Groopman considers the ethics of this technology:
Thirty years ago, [MIT professor Sherry] Turkle began studying the impact of sophisticated technologies, including virtual-reality computer games and robots, on emotional development in children and social relationships among adults. “I am not a Luddite,” Turkle said. “But there is no upside to being socialized by a robot.” Based on her observation of groups of different ages, Turkle has found that “children and the elderly start to relate to the object as a person. They begin to love it, and nurture it, and feel they have to attend to the robot’s inner state.” With this attachment and projection of their emotions, Turkle says, people begin to seek reciprocity, wanting the robot to care for them. “We were wired through evolution to feel that when something looks us in the eye, then someone is at home in it.”
Robots, Turkle argues, risk distorting the meaning of relationships, the bonds of love, and the types of emotional accommodation required to form authentic human attachments.
In a chat about the article, Groopman ties the matter to the expanding use of remote-controlled drones in warfare.
Apocalyptic visions involving robots tend to focus on what they’ll do to us. It’s interesting that the first real anxieties about this relationship concern what we do with them. Will we become too emotionally invested? Will we become too distant from the ethical reality of taking human lives?
And I wonder: do Sherry Turkle’s concerns extend to things like pets and The Sims?