Computer scientists dominate AI news coverage in Canada, while critical voices who could speak to the current and potential harms of AI are largely absent.
What does generative AI mean for the human need to create, work and seek the truth?
Generative AI can seem like magic, which makes it both enticing and frightening. Scholars are helping society come to grips with the potential benefits and harms.
Meta and Pico lead the field with their VR headsets, ChatGPT continues its inexorable rise and new engine developments are pushing the boundaries of the video game experience.
ChatGPT has generated enormous interest, but is some of its content protected under copyright law?
Language model AIs are smooth talkers, but you shouldn’t rely on them to make important decisions. That’s because they have trouble telling the difference between a gain and a loss.
Words have meaning for people because we use them to make sense of the world.
The user interfaces of AI chatbots, like ChatGPT, are designed to mimic natural human conversation. But in doing so, AI chatbots come across as more trustworthy than they really are.
Pausing AI development will give our governments and culture time to catch up with and steer the rush of new technology.
A group of prominent computer scientists and other tech industry notables are calling for a six-month pause on artificial intelligence technology.
A recent open letter calling for a temporary artificial intelligence development hiatus is more concerned with hypothetical risks about the future than the issues that are right in front of us.
An unmanned U.S. Predator drone flies over Kandahar Air Field, southern Afghanistan, on a moonlit night several years ago. Drone strikes are now a major feature of modern warfare, including in Ukraine and Syria.
As Russia’s war in Ukraine illustrates, the use of lethal autonomous weapons systems, or LAWS, can always be justified. Their ability to desensitize their users to the act of killing, however, shouldn’t be.
The new generation of AI tools makes it a lot easier to produce convincing misinformation.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.
The California-based startup Replika has programmed chatbots to serve as companions.