Predictive policing has advanced in leaps and bounds, becoming increasingly automated thanks to big data, data mining and powerful computers.
Mass data collection and surveillance have become ubiquitous. For marginalized communities, the stakes of having their privacy violated are high.
Once analysts gain access to our private data, they can use that information to influence and alter our behaviour and choices. If you are marginalized in some way, the consequences are worse.
The AI-based program is slated to begin trials before the end of the year. But it raises serious questions about the role of police in preventing domestic violence.
Just three big developers are being paid tens of millions of pounds to supply the majority of these UK systems.
Technology is not neutral, as facial recognition algorithms and predictive policing have shown us. Algorithms discriminate by design, reflecting and reinforcing pre-existing biases.
Technology can transgress all kinds of legal frameworks.
Crime data reflect only the crimes that police identify – not all the crimes that occur. Decisions based on crime data are therefore necessarily biased and incompletely informed.
For 50 years, we have worked to make U.S. police more diverse and less intrusive. Why haven’t we made more progress?
Preventing crime before it happens, while saving resources, sounds like a great use of big data. But these calculated probabilities raise serious questions about civil liberties.