In the age of AI, people might wonder if there’s anything computers can’t do. The answer is yes. In fact, there are numerous problems that are beyond the reach of even the most powerful computers.
Over hundreds of millions of years of evolution, ants have come up with some pretty smart solutions to problems of agriculture, navigation and architecture. People could learn a thing or two.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
A legal loophole could grant computer systems many legal rights people have – threatening human rights and dignity and setting up some real legal and moral problems.
It might seem attractive to try to teach computers to detect harassment, threats and abusive language. But it’s much more difficult than it might appear.
New research has uncovered a previously unknown weakness in smart city systems: devices that trust each other. That could lead to some pretty terrible traffic, among other problems.
Computers today are fast and powerful but they still can’t think like a human when it comes to some tasks we find easy. That’s why tech companies are turning to neuroscience for help.
A European Union law will require human-understandable explanations for algorithms’ decisions. A team of researchers has found a way to provide that, even for complex calculations.
Unrestricted access to information is vital to a vibrant democracy. But if this information is inaccurate, biased or falsified, the fundamental freedom of informed choice is denied.
Changes in news media distribution and the impartiality of news sources provide good reason to be concerned. However, digital inequality is not the right lens for understanding or measuring the problem.
Math isn’t prejudiced, goes the argument. But these algorithms can learn bias from the data fed into them by human beings, leading to unfair treatment and discrimination.