The rise of AI chatbots provides an opportunity to expand the ways we do philosophy and research, and how we engage in intellectual discourse.
Endearing and amusing, AI’s faltering attempts at flirting show how far computer-generated language has come.
Philosophers say now is the time to mull over what qualities should grant an artificially intelligent machine moral standing.
Some people claim the test has already been passed. But Alan Turing’s test of whether artificial intelligence can act like a human remains an important benchmark for our species.
Alan Turing devised a way to test if AI is functionally the same as a human – we’ve done the same for androids.
Testing whether machines can generate sonnets, short stories or dance music that is indistinguishable from human-made works.
Can software really be considered the “driver” of an autonomous vehicle? This is one question that needs to be resolved before driverless cars can hit the roads.
As we strive to make machines complete ever more complex tasks, it’s time to ask again: will they ever be able to think? And what is thinking, anyway?
If we can make artificially intelligent machines that act more human, it raises the question of what sort of emotions we’d like them to express.
Computers try to predict our behaviour and anticipate our needs, but sadly they often get things dreadfully wrong.