there are computers who are just as smart as the people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families and love each other. I guess they’ll still be the only ones who go to church.” 15 Adults, too, spoke of life in families. To me, the romantic reaction was captured by how one man rebuffed the idea that he might confide in a computer psychotherapist: “How can I talk about sibling rivalry to something that never had a mother?”
Of course, elements of this romantic reaction are still around us. But a new sensibility emphasizes what we share with our technologies. With psychopharmacology, we approach the mind as a bioengineerable machine. 16 Brain imaging trains us to believe that things—even things like feelings—are reducible to what they look like. Our current therapeutic culture turns from the inner life to focus on the mechanics of behavior, something that people and robots might share.
A quarter of a century stands between two conversations I had about the possibilities of a robot confidant, the first in 1983, the second in 2008. For me, the differences between them mark the movement from the romantic reaction to the pragmatism of the robotic moment. Both conversations were with teenage boys from the same Boston neighborhood; both were Red Sox fans with close relationships with their fathers. In 1983, thirteen-year-old Bruce talked about robots and argued for the unique “emotionality” of people. Bruce rested his case on the idea that computers and robots are “perfect,” while people are “imperfect,” flawed and frail. Robots, he said, “do everything right”; people “do the best they know how.” But for Bruce it was human imperfection that made for the ties that bind. Specifically, his own limitations made him feel close to his father (“I have a lot in common with my father.... We both have chaos”). Perfect robots could never understand this very important relationship. If you ever had a problem, you would go to a person.
Twenty-five years later, a conversation on the same theme goes in a very different direction. Howard, fifteen, compares his father to the idea of a robot confidant, and his father does not fare well in the comparison. Howard thinks the robot would be better able to grasp the intricacies of high school life: “Its database would be larger than Dad’s. Dad has knowledge of basic things, but not enough of high school.” In contrast to Bruce’s sense that robots are not qualified to have an opinion about the goings-on in families, Howard hopes that robots might be specially trained to take care of “the elderly and children”—something he doesn’t see the people around him taking much interest in.
Howard has no illusions about the uniqueness of people. In his view, “they don’t have a monopoly” on the ability to understand or care for each other. Each human being is limited by his or her own life experience, says Howard, but “computers and robots can be programmed with an infinite amount of information.” Howard tells a story to illustrate how a robot could provide him with better advice than his father. Earlier that year, Howard had a crush on a girl at school who already had a boyfriend. He talked to his father about asking her out. His father, operating on an experience he had in high school and what Howard considers an outdated ideal of “macho,” suggested that he ask the girl out even though she was dating someone else. Howard ignored his father’s advice, fearing it would lead to disaster. He was certain that in this case, a robot would have been more astute. The robot “could be uploaded with many experiences” that would have led to the right answer, while his father was working with a limited data set. “Robots can be made to understand things like jealousy from observing how people