Scientists are arguing over how YouTube might help turn people into extremists.
Uber and Lyft drivers protest their working conditions in Los Angeles in May 2019.
If your job doesn't currently involve automation or artificial intelligence in some way, it likely will soon. Computer-based worker surveillance and performance analysis will come, too.
How do you feel about Facebook?
Facebook serves as a gatekeeper of the information diets of more than 200 million Americans and 2 billion users worldwide.
Why is that ad targeting you?
Rather than revealing that an advertiser targeted you by your phone number or email address, Facebook may tell you it showed you a particular ad because you "like" Facebook. That's not much help.
Let’s work together.
People – individually and in groups – were less accurate at facial recognition than an algorithm. But five people and the algorithm, working together, outperformed either alone.
Sometimes the questions become too much for artificial intelligence systems.
When algorithms are at work, there should be a human safety net to prevent harm to people. Artificial intelligence systems can be taught to ask for help.
Sophia, a robot granted citizenship in Saudi Arabia.
A legal loophole could grant computer systems many legal rights people have – threatening human rights and dignity and setting up some real legal and moral problems.
Hey Google: How’s your news?
Google News does not differentiate search results according to users' politics – but it does favor mainstream news sites, which are seen as leaning left, and doesn't clearly disclose how its algorithms work.
People who share potential misinformation on Twitter (in purple) rarely get to see corrections or fact-checking (in orange). Figure: Shao et al.
Information on social media can be misleading because of biases in three places – the brain, society and algorithms. Scholars are developing ways to identify and display the effects of these biases.
It can be complicated to teach a computer to detect harassment and threats.
It might seem attractive to teach computers to detect harassment, threats and abusive language. But it's much more difficult than it appears.
Should an algorithm try to guess what gender people are by how they look?
It can be unpleasant to be mistaken for someone of a different gender. When an algorithm does it secretly, it's even more concerning – especially for transgender and gender-nonconforming people.
It’s time to build trust.
Social media companies arose from libertarian, free-market origins but must embrace social benefits and democracy to survive.