At the heart of the debate is the most fundamental question: what does it mean to be human?
Stephen Hawking raised the public profile of grand science, speculating about the future of artificial intelligence and about contacting aliens. Does science mix easily with science fiction?
Asking whether machines can really understand us is meaningless.
Robots should be empowered to choose the action that most helps humans.
Humans need greater autonomy than Isaac Asimov's tidy science-fiction laws would permit.
Today's robots and artificial intelligence look very different from the androids conceived by Isaac Asimov.
We are approaching the time when robots in our daily lives will make decisions about how to act. What guidelines should we give them?