A UK controversy over school leavers’ exam grades shows that algorithms can get things wrong. To make algorithms as fair as possible, how they work – and the trade-offs they involve – must be made transparent.
Congress has asked many questions of Meta CEO Mark Zuckerberg but has done little to regulate Facebook.
AP Photo/Jacquelyn Martin
Pressure is mounting on Congress to take action on Facebook. Our panel of experts offers their top priorities: user control of data, banking-like oversight and resources to close the digital divide.
Jeffrey Hirsch, University of North Carolina at Chapel Hill
If your job doesn’t currently involve automation or artificial intelligence in some way, it likely will soon. Computer-based worker surveillance and performance analysis will come, too.
How do you feel about Facebook?
fyv6561/Shutterstock.com
Rather than revealing an advertiser targeted you by your phone number or email address, Facebook may tell you it showed you a particular ad because you like Facebook. That’s not much help.
Let’s work together.
Olena Yakobchuk/Shutterstock.com
People – individually and in groups – were not as good at facial recognition as an algorithm. But five people working together with the algorithm outperformed the algorithm alone.
Sometimes the questions become too much for artificial intelligence systems.
sdecoret/Shutterstock.com
When algorithms are at work, there should be a human safety net to prevent harm to people. Artificial intelligence systems can be taught to ask for help.
Sophia, a robot granted citizenship in Saudi Arabia.
MSC/wikimedia
A legal loophole could grant computer systems many of the legal rights people have – threatening human rights and dignity and setting up serious legal and moral problems.
Hey Google: How’s your news?
BigTunaOnline/Shutterstock.com
Google News does not differentiate search results according to users’ politics – but it does favor mainstream news sites, which are seen as leaning left, and doesn’t clearly disclose how its algorithms work.
People who share potential misinformation on Twitter (in purple) rarely get to see corrections or fact-checking (in orange).
Shao et al.
Information on social media can be misleading because of biases in three places – the brain, society and algorithms. Scholars are developing ways to identify and display the effects of these biases.
It can be complicated to teach a computer to detect harassment and threats.
Palto/Shutterstock.com
It might seem attractive to teach computers to detect harassment, threats and abusive language. But it’s much more difficult than it appears.
Should an algorithm try to guess what gender people are by how they look?
all_is_magic/Shutterstock.com
It can be unpleasant to be mistaken for someone of a different gender. When an algorithm does it behind the scenes, it’s even more concerning – especially for transgender and gender-nonconforming people.
It’s time to build trust.
Arthimedes/Shutterstock.com