AI-generated images of “a stained glass window with an image of a blue strawberry”.
As the perils and wonders of artificial intelligence begin to permeate our lives, the ‘IPCC report for AI’ calls for action from researchers and government to ensure a safe future.
The term ‘killer robot’ often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary-looking but no less lethal.
John F. Williams/U.S. Navy
Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.
Plans have been made for the AI-based program to begin trials before the year ends. But it raises serious questions about the role of police in preventing domestic violence.
Undergraduate students need to learn the responsible use of data science as well as the nuts and bolts.
Hill Street Studios/Stone via Getty Images
Undergraduate programs are springing up across the US to meet the burgeoning demand for workers trained in big data. Yet many of the programs lack training in the ethical use of data science.
The FTC put companies that sell AI systems on notice: Cross the line with biased products and the law is coming for you.
Maciej Frolow/Stone via Getty Images
The Federal Trade Commission is rattling its saber at the technology industry over growing public concern about biased AI algorithms. Can the agency back up its threats?
AI medical systems promise superhuman capabilities, but they are only as fair as the data they’re trained on.
PhonlamaiPhoto/iStock via Getty Images
Some AI systems make faulty assumptions about women and nonwhite men, which can lead to misdiagnoses. Overcoming this bias takes legal, regulatory and technical fixes.
AI promises to make life easier, but what will humans lose in the bargain?
AP Photo/Frank Augstein
By letting machines recommend movies and decide whom to hire, humans are losing their unpredictable nature – and possibly the ability to make everyday judgments, as well.
If the historical data used to train an AI system disadvantages certain minority groups, the system can be swayed to follow these patterns in its own decision-making process.
The departure of AI ethics researcher Timnit Gebru from Google highlights attempts to make algorithmic decision-making accountable.
Calls for more race-based data fail to consider the many risks associated with collecting it.
The COVID-19 pandemic has led to calls for the collection of race-based data. But the risks of algorithmic discrimination must be addressed.
Handing management to algorithms creates ‘black-box bosses’ whose decision-making is hard to understand or question.
When algorithms make decisions with real-world consequences, they need to be fair.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Alongside doctors, AI could be a useful tool for providing better diagnoses.
Victor Moussa/Shutterstock
An AI trained to look at heart scans was able to successfully predict risk of death. But one expert cautions we still need to be careful about designing – and using – AI for medical diagnosis.
Technology firms should use more design fiction to explore and avoid potential negative consequences, such as AI bias.
From the law to the media, we’re becoming artificial humans, mere tools of the machines.