The rise of AI raises the complicated question of how we might guide it towards ethical and political ends.
Some critics have claimed that artificial intelligence chatbot ChatGPT has “killed the essay,” while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity.
ChatGPT and other AI chatbots seem remarkably good at conversations. But you can’t believe anything they say. Sometimes, though, reality isn’t the point.
Artificial Intelligence comes with a litany of ethical risks and dilemmas. Some are universal, but some are unique to particular countries, like South Africa.
Counterfactuals are claims about what would have happened had something occurred differently. For instance, we can ask what the world would be like had the internet never been developed.
AI models are increasingly being used to make important decisions about people’s lives – just look at Robodebt. Yet the complexity of these systems means we hardly understand how they work.
The technology’s focus on the framing of the artistic task fetishizes the creative moment – and devalues the journey that nurtures an idea from seed to fruition.
In the age of AI image generation, believing your own eyes may not carry the weight it once did.
In the city of London, security cameras can even be found in cemeteries. In 2021, the mayor’s office launched an effort to establish guidelines for research around emerging technology.
As states and nations struggle to regulate growing AI use, municipal authorities are often leading the way. An emerging paradigm known as AI Localism can help us better define the way forward.
The Tim Hortons consumer app was found to have collected detailed user information, including location data. As a privacy violation, this challenges the perception of Tim Hortons as a trusted brand.
An unmarked grave with a headstone that resembles a computer screen, nicknamed ‘iGrave’, is seen in north-west London. (Leon Neal/AFP)
The recent case of a man making a simulation of his deceased fiancée raises important questions: while AI makes it possible to create “deadbots”, is it ethically desirable or reprehensible to do so?
An algorithm is the centerpiece of one criminal justice reform program, but should it be race-blind?
A cornerstone of the First Step Act, passed with bipartisan support, is the PATTERN risk-assessment tool.
“Alfie”, a moral choice machine, is pictured in front of an important question during a press conference in Germany. (Arne Dedert/picture alliance via Getty Images)
Elon Musk’s brain-machine interface technology could bring humans and computers closer together than ever before, and herald a new frontier in healthcare.
In this September 2019 photo, a woman walks below a Google sign on the campus in Mountain View, Calif. (AP Photo/Jeff Chiu)
The new Alphabet Workers Union is making clear that changes must be put in place, both in education and on the job, to allow engineers to start taking responsibility for the social impact of their work.