With an eye to Kant’s work, a philosopher and a sociologist argue that the Uber project robs drivers of their dignity.
Whatever approach you take to managing your feeds, remain cautious and sceptical.
Algorithms have been blamed for dividing society. What if they could support social cohesion instead?
Effective implementation of existing law can protect us from the risks posed by AI algorithms.
Vlogging has emerged as a new source of intimate entertainment and, for creators, potential income. However, these videos also raise serious questions about exploitation and the privacy rights of children.
New features on Apple iOS 17 aim to give users insights into their mental health, but they may also shape how people see themselves.
Data used to train AI systems often reflects the racism inherent in society.
The seeds of the current commotion over AI were sown years ago.
Social media companies’ drive to keep you on their platforms clashes with how people evolved to learn from each other. One result is more conflict and misinformation.
AI can streamline the painstaking work of mixing and editing tracks. But it’s also easy to see how AI-generated music will make more money for giant streaming services at the expense of artists.
By bridging culture and computation, heritage algorithms challenge the myth of ‘primitive cultures’ and forge a new understanding of science and art.
Media outlets increasingly construct narratives about collective reality based on what’s happening on social media.
‘I no longer exist, I have become a construct of their imagination. It is the ultimate act of dehumanisation.’
Visual artists draw from visual references, not words, as they imagine their work. So when language is in the driver’s seat of making art, it erects a barrier between the artist and the canvas.
Machine learning can spot patterns in patient data and help detect hepatitis B earlier, which could save lives.
Our new study demonstrates the enormous potential that machine learning has to help identify people with AS.
Biased algorithms in health care can lead to inaccurate diagnoses and delayed treatment. Deciding which variables to include to achieve fair health outcomes depends on how you approach fairness.
Without more transparency about AI use, it will be difficult for people to challenge biased decisions against them.
Twitter uses an AI-powered centrally managed algorithm to moderate what you see. On Bluesky, you have control over the algorithm that selects what you see through so-called ‘composable moderation’.
Metaphorical black boxes shield the inner workings of AIs, protecting software developers’ intellectual property. They also make it hard to understand how the AIs work – and why things go wrong.