Companies and hiring agencies are increasingly turning to AI-enabled tools in recruitment. How those tools are used can significantly shape an individual’s perception of the organisation.
For decades, women ‘computers’ worked behind the scenes while their male counterparts received recognition. The AI industry must not become an example of history repeating itself.
The public release of the chatbot has led to a global conversation about the risks and benefits of AI – a conversation few people were having just a few years ago.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Tech firms are relying on low-wage workers to power their AI models. That raises serious ethical questions about how the technology is being developed.
The rapid rate of AI adoption is putting workplaces at risk of overlooking its potentially adverse impacts, particularly those that could affect the health and well-being of workers.
From open letters to congressional testimony, some AI leaders have stoked fears that the technology is a direct threat to humanity. The reality is less dramatic but perhaps more insidious.
Large language models are becoming increasingly capable of imitating human-like responses, creating opportunities to test social science theories on a larger scale and with much greater speed.
Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
Generative AIs may make up the information they serve you, meaning they can spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.