Disinformation, algorithms, big data, care work, climate change, and cultural knowledge can all be invisible. This exhibition brings them to the light.
“Alfie”, a moral choice machine, is pictured in front of an important question during a press conference in Germany.
Arne Dedert/picture alliance via Getty Images
Inclusivity and diversity also need to operate at the level of identifying values and defining what counts as ethical AI in the first place.
A UK controversy about school leavers’ marks shows algorithms can get things wrong. To ensure algorithms are as fair as possible, how they work and the trade-offs involved must be made clear.
Plans have been made for the AI-based program to begin trials before the year ends. But it raises serious questions about the role of police in preventing domestic violence.
Government agencies are increasingly using facial recognition technology, including through security cameras like this one being installed on the Lincoln Memorial in 2019.
Mark Wilson/Getty Images
Politicians of all stripes, computer professionals and even big-tech executives are calling on government to hit the brakes on using these algorithms. The feds are hitting the gas.
President Trump’s ban on immigration from several mostly Muslim countries was ultimately upheld by the Supreme Court. President Biden revoked it on his first day in office.
Andrew Harnik/AP Photo
A civil rights group is suing Facebook for its failure to stop the spread of anti-Muslim hate speech on the platform.
Algorithms help lots of people discover new music.
Music recommendation algorithms are more likely to suggest music by male than female artists.
Search engines often serve up a distorting blend of information and misinformation.
Crispin la valiente/Moment via Getty Images
Search engines, like social media algorithms, get you to click on links by learning what other people click on. Enticing misinformation often comes out on top.
If the historical data used to train an AI system disadvantages certain minority groups, the system can learn to reproduce those patterns in its own decision-making.
A-level students protest the use of algorithms to determine their grades.
Jonathan Brady/PA Wire/PA Images
Problems in the underlying data mean we can’t leave algorithms to decide things on their own.
Scientists are arguing over how YouTube might help turn people into extremists.
Emotion recognition technology, an outgrowth of facial recognition technology, continues to advance quickly.
A report calls for banning the use of emotion recognition technology. An AI and computer vision researcher explains the potential and why there’s growing concern.
When algorithms make decisions with real-world consequences, they need to be fair.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Algorithms can reinforce existing biases in society.
The fundamental problem with AI is that it is often riddled with society’s existing biases and prejudices.
Technology firms should use more design fiction to explore and avoid potential negative consequences, such as AI bias.
Specialist machine learning and narrow AI could help us start removing the “four Ds” (dirty, dull, difficult, dangerous) from our daily work.
Artificial intelligence is predicted to contribute some US$15.7 trillion to the global economy by 2030. A new report looks at issues specific to New Zealand.
Social biases in digital tech create racist face recognition software and sexist hiring tools, but more data collection isn’t the answer.
Sometimes the questions become too much for artificial intelligence systems.
When algorithms are at work, there should be a human safety net to prevent harming people. Artificial intelligence systems can be taught to ask for help.
An ethicist on why fixing algorithms may not be the best response to algorithmic bias.
The “gilets jaunes” (yellow vests). Joan Mora
What do the Carlos Ghosn scandal, the rising power of algorithms and the “gilets jaunes” (yellow vests) movement have in common? The need to extend the spatial and temporal definitions of responsibility.