The FTC put companies that sell AI systems on notice: Cross the line with biased products and the law is coming for you.
Maciej Frolow/Stone via Getty Images
The Federal Trade Commission is rattling its saber at the technology industry over growing public concern about biased AI algorithms. Can the agency back up its threats?
AI medical systems promise superhuman capabilities, but they are only as fair as the data they’re trained on.
PhonlamaiPhoto/iStock via Getty Images
Some AI systems make faulty assumptions about women and nonwhite men, which can lead to misdiagnoses. Overcoming this bias takes legal, regulatory and technical fixes.
AI promises to make life easier, but what will humans lose in the bargain?
AP Photo/Frank Augstein
By letting machines recommend movies and decide whom to hire, humans are losing their unpredictable nature – and possibly the ability to make everyday judgments, as well.
If the historical data used to train an AI system disadvantages certain minority groups, the system can be swayed to follow these patterns in its own decision-making process.
The departure of AI ethics researcher Timnit Gebru from Google highlights attempts to make algorithmic decision-making accountable.
Calls for more race-based data fail to consider the many risks associated with collecting it.
The COVID-19 pandemic has led to calls for the collection of race-based data. But the risks of algorithmic discrimination must be addressed.
Handing management to algorithms creates "black-box bosses" whose decision-making is hard to understand or question.
When algorithms make decisions with real-world consequences, they need to be fair.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Alongside doctors, AI could be a useful tool for providing better diagnoses.
Victor Moussa/Shutterstock
An AI trained to look at heart scans was able to successfully predict risk of death. But one expert cautions that we still need to be careful about how we design and use AI for medical diagnosis.
Technology firms should use more design fiction to explore and avoid potential negative consequences, such as AI bias.
From the law to the media, we're becoming artificial humans, mere tools of the machines.