stares at the girl's dad, fierce and hard. "It's not a joke. Back in 2012, the Department of Defense analyzed the risks of developing AI systems, so we knew this kind of catastrophe might happen someday. But we couldn't simply halt our AI research. Other countries were designing their own AIs, and they weren't going to stop. So about a year ago we started working on a defensive strategy. A countermeasure. That's why we built this base. And that's why you're here."
The general turns his head, scanning all the faces in the auditorium. Then he glances again at the doorway beside the stage. "Now one of my colleagues will explain the technology behind the Pioneer Project. This is Tom Armstrong, the project's chief scientist."
Dad appears in the doorway and walks across the stage. I'm relieved to see him but also a little unnerved by the change in his appearance. He's no longer wearing the polo shirt and khaki pants he wore during the drive in the SUV. Now he's dressed in a winter-camouflage uniform, just like General Hawke and the other soldiers. As Dad steps up to the podium, taking Hawke's place, he locates me in the crowd and manages to smile. He looks nervous.
"Thank you for coming," he starts. "And thanks for your patience. I know some of you are frustrated by all the precautions we've taken to keep this project secret. But now I'm ready to discuss our goals and answer your questions."
He presses the button on the podium, and the satellite photo on the screen is replaced by an image of software code. Hundreds of lines of instructions, written in a programming language I don't recognize, run from the top of the screen to the bottom. "This is a portion of Sigma's source code. When we developed the software for the AI, we focused on imitating human skills such as reasoning, language, and pattern recognition. We succeeded in creating a self-aware intelligence that could accomplish almost any task a human can perform, from proving a mathematical theorem to composing an opera. But in one important respect, Sigma was a failure. We weren't able to give it humanlike morality or motives. Sigma has no incentive to pursue what's good for the human race because it lacks the ability to empathize."
Dad presses the button again, and this time a photo of chimpanzees comes on the screen. "Empathy comes naturally to humans because it played a big role in our evolution. The most successful apes were the ones who could imitate and understand each other. Sigma, in contrast, has no empathy. It's aware of our presence, of course, and it even sent a couple of messages to our military headquarters, but the AI has blocked all our attempts to communicate with it. The basic problem is that Sigma's intelligence is very different from ours. We don't understand the AI, and it doesn't understand us either. So we need to build a bridge between us and the machine."
He pauses, as if to gather his courage. Then he presses the button once more and a diagram of the human brain appears behind him. Just below the familiar organ is a close-up view of a section of brain tissue, magnified to show the individual brain cells and the many branchlike connections between them. Clinging to the cells are hundreds of tiny golden spheres. They look like bits of pollen.
Dad steps toward the screen and points at the spheres. "These are nanoprobes. Each is less than a thousandth of a millimeter wide. We can make trillions of them in the lab." He reaches into the pocket of his uniform and pulls out a vial of yellowish fluid. "In fact, I have several trillion probes right here, floating in this liquid. If we inject enough of these nanoprobes into a human brain, they'll spread throughout the organ and stick to the brain cells. If we then scan the brain with X-ray pulses, the probes will