Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
Generative AIs may make up information they serve you, meaning they may potentially spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
New technologies are often surrounded by hopeful messages that they will alleviate poverty and bring about positive social change. History shows these assumptions are often misplaced.
AI models are increasingly being used to make important decisions about people’s lives – just take Robodebt. Yet the complexity of these systems means we hardly understand them.
Between driverless cars, autonomous weapons and AI-powered medical diagnostic tools, it seems there will be no shortage of ethically complex situations involving AI in the future.
Individuals who experience suicidal thoughts can show signs of this in the language they use. We analysed more than 100 suicide notes to find these language patterns.
Through the act of suggesting some words and not others, the predictive text features in our devices change the way we think — and therefore shape our culture.
If the historical data used to train an AI system disadvantages certain minority groups, the system can be swayed to follow these patterns in its own decision-making process.