Melika Soleimani, Te Kunenga ki Pūrehuroa – Massey University; Ali Intezari, The University of Queensland; David J Pauleen, Te Kunenga ki Pūrehuroa – Massey University, and Jim Arrowsmith, Te Kunenga ki Pūrehuroa – Massey University
Recruiters are now routinely using AI to automate the screening of CVs and interview videos. But human bias already exists in the data AI is trained on – and the algorithm can even amplify it.
Media outlets like The Australian and The Daily Telegraph will now share their content with the makers of ChatGPT. It raises many questions about the future of journalism and how people access news.
Companies and hiring agencies are increasingly turning to AI-enabled tools in recruitment. How these tools are used can significantly shape a candidate’s perception of the organisation.
For decades, women ‘computers’ worked behind the scenes while their male counterparts received the recognition. The AI industry must not become an example of history repeating itself.
The public release of the chatbot has led to a global conversation about the risks and benefits of AI – a conversation few people were having just a few years ago.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Tech firms are relying on low-wage workers to power their AI models. That raises serious ethical questions about how the technology is being developed.
The rapid pace of AI adoption is putting workplaces at risk of overlooking its potentially adverse impacts, particularly those affecting the health and well-being of workers.
From open letters to congressional testimony, some AI leaders have stoked fears that the technology is a direct threat to humanity. The reality is less dramatic but perhaps more insidious.
Large language models are becoming increasingly capable of imitating human-like responses, creating opportunities to test social science theories on a larger scale and with much greater speed.