Generative AI tools may make up the information they serve you, meaning they can spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
AI chatbots and image generators run on thousands of computers housed in data centers like this Google facility in Oregon.
Tony Webster/Wikimedia
Generative AI tools, those astonishingly powerful language and image generators taking the world by storm, come at a price: a big carbon footprint. But not all AIs are equally dirty.
Facial recognition software misidentifies Black women more often than other people.
JLco - Ana Suanes/iStock via Getty Images
One researcher’s experience from a quarter-century ago shows why bias in AI remains a problem – and why the solution isn’t a simple technical fix.
Generative AI thrives on exploiting people’s reflexive assumptions of authenticity by producing material that looks like ‘the real thing.’
artpartner-images/The Image Bank via Getty Images
Artificial intelligence is escalating the battle between spam senders and spam blockers. Recent advances could mean more convincing pitches to get you to click, buy and give up personal information.
What does generative AI mean for the human need to create, work and seek the truth?
Krerksak Woraphoomi/iStock via Getty Images
Generative AI can seem like magic, which makes it both enticing and frightening. Scholars are helping society come to grips with the potential benefits and harms.
You don’t have to see the future to know that AI has ethical baggage.
Wang Yukun/Moment via Getty Images
Language model AIs are smooth talkers, but you shouldn’t rely on them to make important decisions. That’s because they have trouble telling the difference between a gain and a loss.
Words have meaning for people because we use them to make sense of the world.
RyanJLane/E+ via Getty Images
Large language models can’t understand language the way humans do because they can’t perceive and make sense of the world.
A group of prominent computer scientists and other tech industry notables are calling for a six-month pause on artificial intelligence technology.
(Shutterstock)
A recent open letter calling for a temporary artificial intelligence development hiatus is more concerned with hypothetical risks about the future than the issues that are right in front of us.
Images generated by AI systems, like these fake photos of Donald Trump being arrested (he hasn’t been arrested), can be a dangerous source of misinformation.
AP Photo/J. David Ake
In a world of increasingly convincing AI-generated text, photos and videos, it’s more important than ever to be able to distinguish authentic media from fakes and imitations. The challenge is how.
The new tools are expected to free up time for workers by helping out with tedious and repetitive tasks. Here’s how they will work.
Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been?
(Shutterstock)
When OpenAI claims to be “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded?
Large language model AI responds to questions but doesn’t actually know anything and is prone to making things up.
Charles Taylor/iStock via Getty Images
Searching the web with ChatGPT is like talking to an expert – if you’re OK getting a mix of fact and fiction. But even if it were error-free, searching this way comes with hidden costs.
Some critics have claimed that artificial intelligence chatbot ChatGPT has “killed the essay,” while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity.
(Shutterstock)
New technologies are often surrounded by hopeful messages that they will alleviate poverty and bring about positive social change. History shows these assumptions are often misplaced.
Artists and photographers have strongly opposed their distinct styles being replicated by AI image generators. And the law has yet to catch up with this issue.
But if students misrepresent or omit sources, including generative AI, that’s a problem.
(Shutterstock)
Research about both social and technical aspects of work can guide critical thinking about when and how business leaders and MBA students might use generative AI.