The Turing test, proposed by Alan Turing in 1950, was framed as a way to tell whether an AI system could ‘think’ like a human.
AI poses a variety of ethical conundrums, but the NASA teams working on Mars rovers exemplify an ethic of care and human-robot teamwork that could act as a blueprint for AI’s future.
People can trust each other because they understand how the human mind works, can predict people’s behavior, and assume that most people have a moral sense. None of these things are true of AI.
The science of human consciousness offers new ways of gauging machine minds – and suggests there’s no obvious reason computers can’t develop awareness.
Dulani Jayasuriya, University of Auckland, Waipapa Taumata Rau; Jacky Liu, University of Auckland, Waipapa Taumata Rau; and Ryan Elmore, University of Denver
A new machine learning model can pinpoint anomalies in sports results – whether from match fixing, strategic losses or poor player performance. It could be a useful tool in the fight against cheating.
Air quality forecasting is getting better, thanks in part to AI. That’s good, given the health impact of air pollution. An environmental engineer explains how systems warn of incoming smog or smoke.
Artificial intelligence looks like a political campaign manager’s dream because it could tune its persuasion efforts to millions of people individually – but it could be a nightmare for democracy.
Quantum machine learning models could help us create AI systems that are almost impenetrable to hackers. But in the hands of hackers, the same technology could wreak havoc.
Generative AIs may make up information they serve you, meaning they can spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
Pain has long been subjectively measured, leading to frustrations for patients and doctors alike. Identifying neural biomarkers of pain could improve diagnosis and lead to better treatments of chronic pain conditions.
Metaphorical black boxes shield the inner workings of AIs, protecting software developers’ intellectual property. They also make it hard to understand how the AIs work – and why things go wrong.