Governments can exclude certain groups of people from policies and services not only through the type of data they collect but also through how they collect, store, analyze and use it.
ChatGPT is fuelled by our intimate online histories. It’s trained on 300 billion words, yet users have no way of knowing which of their personal data it contains.
The Tim Hortons consumer app was found to have collected detailed user information, including location data. As a privacy violation, this challenges the perception of Tim Hortons as a trusted brand.
Many businesses struggle with data security, but the new Privacy Act means they will have to make protecting customers’ personal information a priority.
The new bill would open the gates for your data to change hands freely between ‘accredited’ agencies. The proposal is more arrogant than it is effective.
It’s not clear how individuals are being targeted. And while those targeted are mostly high-profile people, that doesn’t mean there’s no lesson for the average person to take away.
Technology is not neutral, as facial recognition algorithms and predictive policing have shown us. Algorithms discriminate by design, reflecting and reinforcing pre-existing biases.
In the past decade, the Australian government has commissioned data analytics projects worth more than A$200 million. We have little information about what they involved.
Most of us probably have our data tracked in some form. And while there are regulatory safeguards in place to protect user privacy, it’s hard to say whether these are enough.
Big tech companies compete over who can gather the most intelligence on their users. Countries like Russia and China turn this information against their citizens.
Biases are difficult to shed, which makes workplace diversity a powerful and necessary tool for catching unsuspected bias before it has a chance to cause damage.