Of the risks posed by AI, machines overtaking human intelligence isn’t an immediate concern.
One researcher’s experience from a quarter-century ago shows why bias in AI remains a problem – and why the solution isn’t a simple technical fix.
Do you own a literary work that ChatGPT helped you write? Does OpenAI? The legal questions are thorny, and the answers unclear.
There’s a lot of overlap in AI-related concepts. Understanding how they are different from each other, and how they relate, is important.
Generative AI can seem like magic, which makes it both enticing and frightening. Scholars are helping society come to grips with the potential benefits and harms.
Language model AIs are smooth talkers, but you shouldn’t rely on them to make important decisions. That’s because they have trouble telling the difference between a gain and a loss.
Large language models can’t understand language the way humans do because they can’t perceive and make sense of the world.
Pausing AI development will give our governments and culture time to catch up with and steer the rush of new technology.
AI chatbots are on the rise in China – but their abilities and purpose may be quite different from the products of US tech giants.
The latest release in the GPT series shows marked improvement over predecessors.
Searching the web with ChatGPT is like talking to an expert – if you’re OK getting a mix of fact and fiction. But even if it were error-free, searching this way comes with hidden costs.
Our tendency to view machines as people and become attached to them points to real risks of psychological entanglement with AI technology.
Artists and photographers have strongly opposed their distinct styles being replicated by AI image generators. And the law has yet to catch up with this issue.
There won’t be an easy tech fix for the questions about authorship raised by ChatGPT and other text generators.
ChatGPT and other AI chatbots seem remarkably good at conversations. But you can’t believe anything they say. Sometimes, though, reality isn’t the point.
Now that AI systems can generate realistic images and convincing prose, are creative and knowledge workers endangered or poised for productivity gains? A panel of experts says it’s not so clear-cut.
From ChatGPT to Lensa, it feels like AI is here to take over. But despite some impressive results, such systems still have plenty of limitations.
AI models can now produce meaningful responses to exam and assignment questions. We’ll have to embrace them if we want the next few years to go smoothly.
We’re facing a significant advance in AI built with methods that are not described in the scientific literature, and with datasets restricted to a single for-profit company.
The story of Meta’s latest AI model shows the pitfalls of machine learning – and a disregard for potential risks.