Using machine learning and natural language processing, researchers are developing an algorithm that can distinguish between real and fake news articles.
Algorithms are only human (well, designed by humans), but we need to trust they'll do what they're supposed to do. And that means we need a better way to test them.
Instead of trying to explain the mystifying mathematics behind how algorithms work, this researcher started looking at how they actually 'see' the world we live in.
A new legal framework for automated decision making is critical to protect citizens in the digital age.
Online gambling algorithms and blurred lines over what constitutes an advert on social media mean advertising principles are being flouted.
People know about Facebook's problems, but assume they themselves are largely immune, even as they imagine everyone else is highly susceptible to influence.
People could be asked to prove their identity to continue posting political content or adverts on Facebook.
Google's algorithms reflect bias against members of racialized and gendered groups.
Technology experts have long worried about a 'digital divide' between those who could use computers and those who could not. Artificial intelligence algorithms are widening the gulf.
Multiplying two numbers by hand takes a few steps, and it's something we're taught in school. But when dealing with big numbers, really big numbers, we need a quicker way to do things (a classic example is sketched below).
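As a rough illustration of what "a quicker way" can look like (not the specific method the article describes), here is a minimal Python sketch of Karatsuba's divide-and-conquer multiplication, which uses roughly n^1.585 digit operations instead of the schoolbook method's n^2:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's divide-and-conquer trick."""
    if x < 10 or y < 10:              # small enough: use ordinary multiplication
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    base = 10 ** n
    a, b = divmod(x, base)            # split x into high and low halves
    c, d = divmod(y, base)            # split y into high and low halves
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # one extra multiplication replaces the two cross terms a*d and b*c
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * base * base + cross * base + bd

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```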
It's time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.
From the law to the media, we're becoming artificial humans, mere tools of the machines.
When algorithms are at work, there should be a human safety net to prevent people from being harmed. Artificial intelligence systems can be taught to ask for help.
Algorithmic guardians could be programmed to manage our digital interactions with social platforms and apps according to our personal preferences.
Algorithms used by social media networks expose users to divisive content, separating them into bubbles. And the way in which those algorithms are trained amplifies the effects of the filter bubble.
An ethicist on why fixing algorithms may not be the best response to algorithmic bias.
Beware the blind use of artificial intelligence: treated as a "magic wand", for example in an autonomous car, it carries real risks.
What do the Carlos Ghosn scandal, the rising power of algorithms and the "gilets jaunes" (yellow vests) movement have in common? The need to extend the spatial and temporal definitions of responsibility.
It's easier to make the list than you might think.
Expecting algorithms to perform perfectly might be asking too much of ourselves.