
Articles on Big tech


AI chatbots are becoming more powerful, but how do you know if they’re working in your best interest? Carol Yepes/Moment via Getty Images

Can you trust AI? Here’s why you shouldn’t

It’s difficult to see how artificial intelligence systems work, and to see whose interests they work for. Regulation could make AI more trustworthy. Until then, user beware.

Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been? (Shutterstock)

The AI arms race highlights the urgent need for responsible innovation

When OpenAI claims to be “developing technologies that empower everyone,” who is included in the term “everyone”? And in what context will this “power” be wielded?

Some critics have claimed that artificial intelligence chatbot ChatGPT has “killed the essay,” while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity. (Shutterstock)

Generative AI like ChatGPT reveals deep-seated systemic issues beyond the tech industry

Rather than seeing artificial intelligence as the cause of new problems, we might better understand AI ethics as bringing attention to old ones.

Going online often involves surrendering some privacy, and many people are becoming resigned to the fact that their data will be collected and used without their explicit consent. (Shutterstock)

Protecting privacy online begins with tackling ‘digital resignation’

Many people have become resigned to the fact that tech companies collect their private data. But policymakers must do more to limit the amount of personal information corporations can collect.
