Artificial intelligence is escalating the battle between spam senders and spam blockers. Recent advances could mean more convincing pitches to get you to click, buy and give up personal information.
Generative AI can seem like magic, which makes it both enticing and frightening. Scholars are helping society come to grips with the potential benefits and harms.
Language model AIs are smooth talkers, but you shouldn’t rely on them to make important decisions. That’s because they have trouble telling the difference between a gain and a loss.
A recent open letter calling for a temporary artificial intelligence development hiatus is more concerned with hypothetical risks about the future than the issues that are right in front of us.
In a world of increasingly convincing AI-generated text, photos and videos, it’s more important than ever to be able to distinguish authentic media from fakes and imitations. The challenge is how.
When OpenAI claims to be “developing technologies that empower everyone,” who is included in the term “everyone”? And in what context will this “power” be wielded?
Searching the web with ChatGPT is like talking to an expert – if you’re OK getting a mix of fact and fiction. But even if it were error-free, searching this way comes with hidden costs.
New technologies are often surrounded by hopeful messages that they will alleviate poverty and bring about positive social change. History shows these assumptions are often misplaced.
Artists and photographers have strongly opposed their distinct styles being replicated by AI image generators. And the law has yet to catch up with this issue.
Research about both social and technical aspects of work can guide critical thinking about when and how business leaders and MBA students might use generative AI.
ChatGPT and other AI chatbots seem remarkably good at conversations. But you can’t believe anything they say. Sometimes, though, reality isn’t the point.