behave.... A robot can be fully understanding and open-minded.” Howard thinks that as a confidant, the robot comes out way ahead. “People,” he says, are “risky.” Robots are “safe.”
There are things, which you cannot tell your friends or your parents, which . . . you could tell an AI. Then it would give you advice you could be more sure of.... I’m assuming it would be programmed with prior knowledge of situations and how they worked out. Knowledge of you, probably knowledge of your friends, so it could make a reasonable decision for your course of action. I know a lot of teenagers, in particular, tend to be caught up in emotional things and make some really bad mistakes because of that.
I ask Howard to imagine what his first few conversations with a robot might be like. He says that the first would be “about happiness and exactly what that is, how do you gain it.” The second conversation would be “about human fallibility,” understood as something that causes “mistakes.” From Bruce to Howard, human fallibility has gone from being an endearment to a liability.
No generation of parents has ever seemed like experts to their children. But those in Howard’s generation are primed to see the possibilities for relationships their elders never envisaged. They assume that an artificial intelligence could monitor all of their e-mails, calls, Web searches, and messages. This machine could supplement its knowledge with its own searches and retain a nearly infinite amount of data. So, many of them imagine that via such search and storage an artificial intelligence or robot might tune itself to their exact needs. As they see it, nothing technical stands in the way of this robot’s understanding, as Howard puts it, “how different social choices [have] worked out.” Having knowledge and your best interests at heart, “it would be good to talk to . . . about life. About romantic matters. And problems of friendship.”
Life? Romantic matters? Problems of friendship? These were the sacred spaces of the romantic reaction. Only people were allowed there. Howard thinks that all of these can be boiled down to information so that a robot can be both expert resource and companion. We are at the robotic moment.
As I have said, my story of this moment is not so much about advances in technology, impressive though these have been. Rather, I call attention to our strong response to the relatively little that sociable robots offer—fueled, it would seem, by our fond hope that they will offer more. With each new robot, there is a ramp-up in our expectations. I find us vulnerable—a vulnerability, I believe, not without risk.
CHAPTER 3
True Companions
In April 1999, a month before AIBO’s commercial release, Sony demonstrated the little robot dog at a conference on new media in San Jose, California. I watched it walk jerkily onto an empty stage, followed by its inventor, Toshitada Doi. At his bidding, AIBO fetched a ball and begged for a treat. Then, with seeming autonomy, AIBO raised its back leg to some suggestion of a hydrant. Then, it hesitated, a stroke of invention in itself, and lowered its head as though in shame. The audience gasped. The gesture, designed to play to the crowd, was wildly successful. I imagined how audiences responded to Jacques de Vaucanson’s eighteenth-century digesting (and defecating) mechanical duck and to the chess-playing automata that mesmerized Edgar Allan Poe. AIBO, like these, was applauded as a marvel, a wonder.1
Depending on how it is treated, an individual AIBO develops a distinct personality as it matures from a fall-down puppy to a grown-up dog. Along the way, AIBO learns new tricks and expresses feelings: flashing red and green eyes direct our emotional traffic; each of its moods comes with its own soundtrack. A later version of AIBO recognizes its primary caregiver and can return to its charging station, smart enough to know when it needs a break. Unlike a