The term ‘killer robot’ often conjures images of Terminator-like humanoid robots, but militaries around the world are working on autonomous machines that are less scary-looking yet no less lethal.
Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.
The AI-based program is slated to begin trials before the end of the year, but it raises serious questions about the role of police in preventing domestic violence.
Undergraduate students need to learn the responsible use of data science as well as the nuts and bolts.
Undergraduate programs are springing up across the US to meet the burgeoning demand for workers trained in big data. Yet many of the programs lack training in the ethical use of data science.
Government agencies are increasingly using facial recognition technology, including through security cameras like this one being installed on the Lincoln Memorial in 2019.
Politicians of all stripes, computer professionals and even big-tech executives are calling on government to hit the brakes on using these algorithms. The feds are hitting the gas.
President Trump’s ban on immigration from several mostly Muslim countries was ultimately upheld by the Supreme Court. President Biden revoked it on his first day in office.
A civil rights group is suing Facebook for its failure to stop the spread of anti-Muslim hate speech on the platform.
The FTC put companies that sell AI systems on notice: Cross the line with biased products and the law is coming for you.
The Federal Trade Commission is rattling its saber at the technology industry over growing public concern about biased AI algorithms. Can the agency back up its threats?
Algorithms help lots of people discover new music.
Search engines, like social media algorithms, get you to click on links by learning what other people click on. Enticing misinformation often comes out on top.
AI medical systems promise superhuman capabilities, but they are only as fair as the data they’re trained on.
Some AI systems make faulty assumptions about women and nonwhite men, which can lead to misdiagnoses. Overcoming this bias takes legal, regulatory and technical fixes.
AI promises to make life easier, but what will humans lose in the bargain?
By letting machines recommend movies and decide whom to hire, humans are losing their unpredictable nature – and possibly the ability to make everyday judgments, as well.
If the historical data used to train an AI system disadvantages certain minority groups, the system can reproduce those patterns in its own decision-making.
A report calls for banning the use of emotion recognition technology. An AI and computer vision researcher explains the potential and why there’s growing concern.
When algorithms make decisions with real-world consequences, they need to be fair.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Alongside doctors, AI could be a useful tool for making better diagnoses.
An AI trained to look at heart scans was able to successfully predict risk of death. But one expert cautions we still need to be careful about designing – and using – AI for medical diagnosis.
Algorithms can reinforce existing biases in society.