Using technology to screen job applicants might be faster than reading CVs and holding face-to-face interviews, but the most suitable candidate could be overlooked.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
Biased algorithms in health care can lead to inaccurate diagnoses and delayed treatment. Deciding which variables to include to achieve fair health outcomes depends on how you approach fairness.
AI algorithms reinforce existing biases. Before they are introduced as routine tools in clinical care, we must establish ethical guidelines to reduce the risk of harm.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But regulating the technology is easier said than done and could have unintended consequences.
Searching the web with ChatGPT is like talking to an expert – if you’re OK with getting a mix of fact and fiction. But even if it were error-free, searching this way comes with hidden costs.
The intersection of content moderation, misinformation, aggregated data about human behavior and crowdsourcing shows how fragile Twitter is and what would be lost with the platform’s demise.
James P. Jimirro Professor of Media Effects, Co-Director, Media Effects Research Laboratory, & Director, Center for Socially Responsible AI, Penn State
Professor, Computing and Information Systems, Pro Vice-Chancellor (Research Systems), and Pro Vice-Chancellor (Digital & Data), The University of Melbourne