
Articles on Artificial Intelligence ethics


In the absence of legal guidelines, companies need to establish internal processes for responsible use of AI.

What is ‘ethical AI’ and how can companies achieve it?

Companies that want to avoid the harms of AI, such as bias or privacy violations, lack clear-cut guidelines on how to act responsibly. That makes internal management and decision-making critical.
Over the last two years, a multinational research team has analyzed how mainstream Canadian news media covers artificial intelligence.

News coverage of artificial intelligence reflects business and government hype — not critical voices

Computer scientists are overwhelmingly present in AI news coverage in Canada, while critical voices who could speak to the current and potential adverse effects of AI are lacking.
The new generation of AI tools makes it a lot easier to produce convincing misinformation.

Regulating AI: 3 experts explain why it’s difficult to do and important to get right

Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.
Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been?

The AI arms race highlights the urgent need for responsible innovation

When OpenAI claims to be “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded?
Some critics have claimed that artificial intelligence chatbot ChatGPT has “killed the essay,” while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity.

Generative AI like ChatGPT reveals deep-seated systemic issues beyond the tech industry

Rather than seeing artificial intelligence as the cause of new problems, we might better understand AI ethics as bringing attention to old ones.
ChatGPT is better used for playacting than playing at finding facts.

ChatGPT is great – you’re just using it wrong

ChatGPT and other AI chatbots seem remarkably good at conversations. But you can’t believe anything they say. Sometimes, though, reality isn’t the point.
Does the moment of imagination carry more value than the work of making something real?

ChatGPT, DALL-E 2 and the collapse of the creative process

The technology’s focus on the framing of the artistic task amounts to the fetishization of the creative moment – and devalues the journey that waters the seed of an idea to its fruition.
