When algorithms make decisions with real-world consequences, they need to be fair.
A machine learning expert predicts a new balance between human and machine intelligence is on the horizon. For that to be good news, researchers need to figure out how to design algorithms that are fair.
Is this face just an assembly of computer bits?
When artificial intelligence systems try to behave like humans and make mistakes, they show their limits – but also their startling advances.
An artificial image created on the Ganbreeder site.
Because a host of artists and programmers can leave their stamp on a final product, disagreements and claims of theft have ensued.
New technology, old flaws.
Expecting algorithms to perform perfectly might be asking too much of ourselves.
Sophia, a robot granted citizenship in Saudi Arabia.
A legal loophole could grant computer systems many legal rights people have – threatening human rights and dignity and setting up some real legal and moral problems.
The past and present of Google – what’s next?
As Google turns 20, a look at how the company has grown – and what the next two decades might bring.
It can be complicated to teach a computer to detect harassment and threats.
It could seem attractive to try to teach computers to detect harassment, threats and abusive language. But it’s much more difficult than it might appear.
What algorithm turned these lights red?
New research has uncovered a previously unknown weakness in smart city systems: devices that trust each other. That could lead to some pretty terrible traffic, among other problems.
A branch of AI research promises to deliver computers that evolve their own software, but the tech industry has yet to catch on.
Two Stanford researchers used a deep neural network to detect sexuality from profile pictures on a US dating website.
We have far more to worry about from outdated science that embodies dubious prejudices than we do from deep learning networks.
Trust in me.
We prefer to go with our guts.
All those neurones: if only a machine could really think like a human.
Computers today are fast and powerful but they still can’t think like a human when it comes to some tasks we find easy. That’s why tech companies are turning to neuroscience for help.
Can an algorithm explain itself?
A European Union law will require human-understandable explanations for algorithms’ decisions. A team of researchers has found a way to provide that, even for complex calculations.
There are reasons to believe the promise of people analytics may not live up to the hype.
Despite its promises, people analytics has serious ethical implications and can adversely affect organisations and how people are treated at work.
Unrestricted access to information is vital to a vibrant democracy. But if this information is inaccurate, biased or falsified, the fundamental freedom of informed choice is denied.
News delivery via social media is based on a business model that exploits our need for self-validation.
Changes in news media distribution and the impartiality of news sources provide good reason to be concerned. However, digital inequality is not the way to understand or measure it.
How fast can it get here?
Algorithms can discriminate, even when their designers don’t intend that to happen. But they also can make detecting bias easier.
It’s all just data – how can it be prejudiced?
Math isn’t prejudiced, goes the argument. But these arithmetic programs can learn bias from the data fed into them by human beings, leading to unfair treatment and discrimination.
Programs like Hour of Code introduce computer programming to students in an engaging manner.
If we want students to be well prepared for the 21st century, then we should be teaching coding in school.
A model of the Terminator from the popular movie series where machines take over the world.
If machines run by artificial intelligence take over the world, it’s only because we programmed them to do so. So how can fuzzy logic help us prevent that?