Articles on Data ethics

Facial recognition algorithms are usually tested primarily on white faces, leaving the technology far less accurate at distinguishing racialized individuals. (Shutterstock)

AI technologies — like police facial recognition — discriminate against people of colour

Technology is not neutral, as facial recognition algorithms and predictive policing have shown us. Algorithms discriminate by design, reflecting and reinforcing pre-existing biases.
While leaks and whistleblowers remain valuable tools in the fight for data privacy, we can’t rely on them alone to keep big tech companies in check. (Shutterstock)

The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do

Most of us are having our data tracked in some form. And while regulatory safeguards exist to protect user privacy, it’s hard to say whether they are enough.
Companies use data to build a portrait of their users. ImageFlow/shutterstock.com

Big tech surveillance could damage democracy

Big tech companies compete over who can gather the most intelligence on their users. Countries like Russia and China can turn this information against their citizens.
Power over business, democracy and education will likely continue to lie with data and data-dependent tools, such as machine learning and artificial intelligence. Shutterstock

Data ethics is more than just what we do with data; it’s also about who’s doing it

Biases are difficult to shed, which makes workplace diversity a powerful and necessary tool for catching unnoticed bias before it has a chance to cause damage.