Facial recognition software could be applied to managing people during pandemics.
Recently, police forces have come under criticism for their use of facial recognition technologies. But pandemic response plans may increasingly incorporate surveillance.
There are few guarantees that a facial recognition system is secure, or even accurate.
You’d think flying in a plane would be more dangerous than driving a car. In reality it’s much safer, partly because the aviation industry is heavily regulated. Airlines must stick to strict standards…
However you feel about eating chips and wearing your pyjamas out, experiments show how differently you react when you’re being observed.
The more we use facial recognition, the more we see its limits and its risks.
When it comes to faces, most of us are typical-recognisers, with just a small percentage classed as super-recognisers.
“Super-recognisers” who can identify a range of ethnicities could help increase fraud detection rates at passport control and decrease false conviction rates that have relied on CCTV.
Emotion recognition technology, an outgrowth of facial recognition technology, continues to advance quickly.
A report calls for banning the use of emotion recognition technology. An AI and computer vision researcher explains the potential and why there’s growing concern.
Andrew Hastie said the broad objectives of the identity-matching system were sound, but key changes were needed to ensure privacy and transparency.
Human rights groups say the bill is an attempt to introduce mass surveillance to Australia and an egregious breach of individual privacy.
Surveillance software that identifies people from CCTV is eroding human rights and democracy.
You can’t change your fingerprint if it’s stolen like you’d change your password.
Is there still time to reach the ‘off’ button?
Legal bans and moratoriums on other emerging technologies need not be permanent or absolute, but the more powerful a technology is, the more care it requires to operate safely.
Are any of these faces real?
Research has found ways to detect deepfakes through flaws that can’t be fixed easily by the fakers.
Campaigners in the UK are pushing to protect privacy and make the security services more accountable.
Social biases in digital tech create racist face recognition software and sexist hiring tools, but more data collection isn’t the answer.
A SenseTime artificial intelligence system monitors an intersection in China.
AI can help make government more efficient – but at what cost? Citizens’ lives could be better or worse, based on how the technology is used.
Let’s work together.
People – individually and in groups – were not as good at facial recognition as an algorithm. But five people plus the algorithm, working together, were even better.
Sometimes the questions become too much for artificial intelligence systems.
When algorithms are at work, there should be a human safety net to prevent harming people. Artificial intelligence systems can be taught to ask for help.
Biometric systems are increasingly used in our civil, commercial and national defence applications.
Current techniques to protect biometric details, such as face recognition or fingerprints, from hacking have been effective, but advances in AI are rendering these protections obsolete.
One of these people is on a wanted list for theft. A super-recogniser may pick them at a glance.
Even the world’s best available training – used to train police, border control agents and other security personnel – does not compensate for natural talent in face recognition.
Facial recognition is already in our schools.
New technologies like facial recognition are coming – whether we like it or not. We can’t turn back the tide, but we can manage new technology to do the least harm and most good.