It’s difficult to see how artificial intelligence systems work, and to see whose interests they work for. Regulation could make AI more trustworthy. Until then, user beware.
Biobanks collect and store large amounts of data that researchers use to conduct a wide range of studies. Making sure participants understand what they’re getting into can help build trust in science.
There are many uses for digital systems that are not centrally controlled and that allow large numbers of people to participate securely, even if they don’t all know and trust each other.
Canada’s police services are becoming increasingly militarized. This undermines the fundamental aims of policing and fosters public distrust of police.
Generative AIs may make up the information they serve you, potentially spreading science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
The user interfaces of AI chatbots, like ChatGPT, are designed to mimic natural human conversation. But in doing so, AI chatbots present as more trustworthy than they really are.
People tend not to think that their own emotions could simply be wrong. But research shows that people dislike those who disagree with them more than the disagreement warrants.
Nurses who identify as Democrats have a significantly higher likelihood of having their children vaccinated against COVID-19 than those who identify as Republicans.
It’s tempting to focus on the minority of Americans who hold negative views about scientists. But blaming others for their lack of trust won’t build the relationships that can boost trust.