The proliferation of non-consensual, sexualised deepfake images is a reflection of society’s negative attitudes towards women.
Companies and hiring agencies are increasingly turning to AI-enabled tools in recruitment. How those tools are used can significantly shape a candidate's perception of the organisation.
For decades, women ‘computers’ worked behind the scenes while their male counterparts received recognition. The AI industry must not be an example of history repeating itself.
The public release of the chatbot has led to a global conversation about the risks and benefits of AI – a conversation few people were having just a few years ago.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Tech firms are relying on low-wage workers to power their AI models. That raises serious ethical questions about how the technology is being developed.
The chatbot has been released to a small group of testers and some of X’s Premium+ subscribers, many of whom have shared their initial thoughts.
AI-generated faces are now readily available, and have been used in identity fraud, catfishing and cyber warfare.
If safety is the heart of the Biden administration’s executive order on AI, then civil rights is its soul.
President Joe Biden has issued an executive order to regulate AI. The directives could help avoid AI doom – but they miss some key points.
The rapid rate of AI adoption is putting workplaces at risk of overlooking its potentially adverse impacts, particularly those that could affect the health and well-being of workers.
Creating bias-free AI systems is easier said than done. A computer scientist explains how controlling bias could lead to fairer AI.
Regardless of the input, AI image generators will have a tendency to return certain kinds of results. This is where the potential for bias arises.
From open letters to congressional testimony, some AI leaders have stoked fears that the technology is a direct threat to humanity. The reality is less dramatic but perhaps more insidious.
Large language models are becoming increasingly capable of imitating human-like responses, creating opportunities to test social science theories on a larger scale and with much greater speed.
Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
Without more transparency about AI use, it will be difficult for people to challenge biased decisions against them.
Generative AIs may make up the information they serve you, meaning they can spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
One researcher’s experience from a quarter-century ago shows why bias in AI remains a problem – and why the solution isn’t a simple technical fix.
Transparency and accountability must be a priority to prevent discrimination.