Disinformation, algorithms, big data, care work, climate change, and cultural knowledge can all be invisible. This exhibition brings them to light.
“Alfie”, a moral choice machine, pictured facing an ethical question at a press conference in Germany.
Arne Dedert/picture alliance via Getty Images
A UK controversy over school leavers’ marks shows that algorithms can get things wrong. To make algorithms as fair as possible, how they work and the trade-offs they involve must be made transparent.
The AI-based program is slated to begin trials before the end of the year, but it raises serious questions about the role of police in preventing domestic violence.
Government agencies are increasingly using facial recognition technology, including through security cameras like this one being installed on the Lincoln Memorial in 2019.
Mark Wilson/Getty Images
Politicians of all stripes, computer professionals and even big-tech executives are calling on government to hit the brakes on using these algorithms. The feds are hitting the gas.
President Trump’s ban on immigration from several mostly Muslim countries was ultimately upheld by the Supreme Court. President Biden revoked it on his first day in office.
Andrew Harnik/AP Photo
Search engines, like social media algorithms, get you to click on links by learning what other people click on. Enticing misinformation often comes out on top.
If the historical data used to train an AI system disadvantages certain minority groups, the system can learn to reproduce those patterns in its own decisions.
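The mechanism described above can be sketched with a toy example. The data and threshold below are entirely hypothetical, chosen only to show how a model that learns from biased past decisions ends up reproducing the same disparity:

```python
# Hypothetical illustration: a trivial "model" that learns approval rates
# from biased historical decisions and then reproduces the same disparity.
from collections import defaultdict

# Invented historical decisions: applicants from groups "A" and "B" are
# equally qualified, but group B was approved far less often in the past.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn the per-group approval rate from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Approve an applicant if their group's learned rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(rates)                # {'A': 0.75, 'B': 0.25}
print(predict(rates, "A"))  # True
print(predict(rates, "B"))  # False -- the historical bias is now automated
```

Nothing in the training step is explicitly discriminatory; the disparity enters solely through the historical record, which is exactly why audits of training data matter.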
A-level students protest the use of algorithms to determine their grades.
Jonathan Brady/PA Wire/PA Images
A report calls for banning the use of emotion recognition technology. An AI and computer vision researcher explains the potential and why there’s growing concern.
When algorithms make decisions with real-world consequences, they need to be fair.
R-Type/Shutterstock.com
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Algorithms can reinforce existing biases in society.
Shutterstock
Technology firms should use more design fiction to explore and avoid potential negative consequences, such as AI bias.
Specialist machine learning and narrow AI could help us start removing the “four Ds” (dirty, dull, difficult, dangerous) from our daily work.
www.shutterstock.com
Artificial intelligence is predicted to contribute some US$15.7 trillion to the global economy by 2030. A new report looks at issues specific to New Zealand.
When algorithms are at work, there should be a human safety net to prevent them from harming people. Artificial intelligence systems can be taught to ask for help.
What do the Carlos Ghosn scandal, the rising power of algorithms and the “gilets jaunes” have in common? The need to extend the spatial and temporal definitions of responsibility.