Images generated by AI systems, like these fake photos of Donald Trump being arrested (he hasn’t been arrested), can be a dangerous source of misinformation.
AP Photo/J. David Ake
In a world of increasingly convincing AI-generated text, photos and videos, it’s more important than ever to be able to distinguish authentic media from fakes and imitations. The challenge is how.
Baidu’s ERNIE Bot was launched to considerable disappointment.
Ng Han Guan / AP
AI chatbots are on the rise in China – but their abilities and purpose may be quite different from the products of US tech giants.
This is a fake AI-generated image.
Daniel Kempe via Twitter/Midjourney
AI tools are now generating content that’s difficult to distinguish from reality.
Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been?
(Shutterstock)
When OpenAI claims to be “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded?
Ketut Subiyanto/Pexels
If people rely on ChatGPT or Google for complex medical questions, they could come unstuck.
The latest release in the GPT series shows marked improvement over predecessors.
Large language model AI responds to questions but doesn’t actually know anything and is prone to making things up.
Charles Taylor/iStock via Getty Images
Searching the web with ChatGPT is like talking to an expert – if you’re OK getting a mix of fact and fiction. But even if it were error-free, searching this way comes with hidden costs.
To what extent will our psychological vulnerabilities shape our interactions with emerging technologies?
Andreus/iStock via Getty Images
Our tendency to view machines as people and become attached to them points to real risks of psychological entanglement with AI technology.
Jirsak/Shutterstock
AI is not a major threat to human employment.
With proper teaching, students can use ChatGPT to develop their arguments and build their essays.
Gorodenkoff/Shutterstock
Students could learn about critical thinking, writing and the broader role of artificial intelligence tools like chatbots.
Chuan Chuan/Shutterstock
AI chatbots can’t take responsibility for what they say, so we shouldn’t trust them.
BugWarp/Wikimedia Commons
It is one thing to treat AI as a tool when it has no scope for emotion. It is quite another when AI has a full suite of emotional responses.
Some critics have claimed that artificial intelligence chatbot ChatGPT has “killed the essay,” while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity.
(Shutterstock)
Rather than seeing artificial intelligence as the cause of new problems, we might better understand AI ethics as bringing attention to old ones.
ChatGPT has the fastest-growing user base of any technology in history.
Dmytro Varavin/iStock via Getty Images
New technologies are often surrounded by hopeful messages that they will alleviate poverty and bring about positive social change. History shows these assumptions are often misplaced.
Could ChatGPT be the technological tool that will, finally, radically change higher education?
Shutterstock
The arrival of ChatGPT has had an impact everywhere, including higher education. But could its impact be greater than expected?
‘ChatGPT, please give me a 1,000 word article on how to stop you from making workplaces worse.’
Ground Picture
Journalists, policymakers and academics are among those whose worlds could be turned upside down by AI chatbots.
That students can cheat more efficiently with ChatGPT does not warrant claims about the death of the student essay.
(Shutterstock)
We ought to want student essays to reflect understanding, judgment and caring – qualities beyond ChatGPT’s reach.
Wes Hicks/Unsplash
Some fear ChatGPT will increase student cheating. But education academics say it can also save time preparing lessons and create new opportunities for learning.
Teachers and university professors have relied heavily on ‘one and done’ essay assignments for decades. Requiring students to submit drafts of their work is one needed shift.
(Shutterstock)
Educators need to carefully consider ChatGPT and issues of academic integrity to move toward an assessment system that leverages AI tools.
Midjourney/Marcel Scharth
Users are having a blast getting creative with AI generators – but your output is only ever as good as your prompt.