A key element of the battle between truth and propaganda has nothing to do with technology. It has to do with how people are much more likely to accept something if it confirms their beliefs.
Artificial intelligence holds great promise for medicine, but safeguards are needed to ensure it does not harm patients.
Mathematician Hannah Fry has called for tech and data scientists to make an ethical pledge, as medical doctors do. But the same result might be delivered by simply asking people to mind their bias.
A new test which capitalises on existing knowledge and technology will increase diagnoses, speed up the process and save the NHS millions of pounds.
Using machine learning and natural language processing, researchers are developing an algorithm that can distinguish between real and fake news articles.
Algorithms are only human (well, designed by humans) but we need to trust they'll do what they're supposed to do. And that means we need a better way to test them.
Instead of trying to explain the mystifying mathematics behind how algorithms work, this researcher started looking at how they actually 'see' the world we live in.
A new legal framework for automated decision making is critical to protect citizens in the digital age.
Online gambling algorithms and blurred lines on what constitutes an advert on social media mean advertising principles are being flouted.
People know about Facebook's problems, but assume they themselves are largely immune – even as they imagine that everyone else is highly susceptible to influence.
People could be asked to prove their identity to continue posting political content or adverts on Facebook.
Google's algorithms reflect bias against members of racialized and gendered groups.
Technology experts have long worried about a 'digital divide' between those who could use computers and those who could not. Artificial intelligence algorithms are widening the gulf.
Multiplying two numbers by hand takes a few steps, but it's something we're taught in school. When dealing with big numbers, really big numbers, we need a quicker way to do things.
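One of the classic quicker methods is Karatsuba's divide-and-conquer multiplication, which replaces four sub-multiplications with three; the sketch below is a minimal illustration of that idea (the function name and bit-splitting choice are mine, not from the article).

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's method.

    Splits each number into high and low halves and recombines
    three recursive products instead of the naive four.
    """
    # Small numbers: fall back to ordinary multiplication.
    if x < 10 or y < 10:
        return x * y

    # Split both numbers around the midpoint (in bits).
    m = max(x.bit_length(), y.bit_length()) // 2
    high_x, low_x = x >> m, x & ((1 << m) - 1)
    high_y, low_y = y >> m, y & ((1 << m) - 1)

    # Three recursive multiplications (the naive approach needs four).
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2

    # Recombine: x*y = z2*2^(2m) + z1*2^m + z0
    return (z2 << (2 * m)) + (z1 << m) + z0
```

For very large inputs this runs in roughly O(n^1.585) digit operations rather than the schoolbook O(n^2), which is why big-number libraries use tricks like this.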
It's time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.
From the law to the media we're becoming artificial humans, mere tools of the machines.
When algorithms are at work, there should be a human safety net to prevent harm to people. Artificial intelligence systems can be taught to ask for help.
Algorithmic guardians could be programmed to manage our digital interactions with social platforms and apps according to our personal preferences.
Algorithms used by social media networks expose users to divisive content, separating them into bubbles. But the way these algorithms are trained amplifies the effects of the filter bubble.
An ethicist on why fixing algorithms may not be the best response to algorithmic bias.