Facial recognition – The Conversation (updated 2024-03-13)
<h1>Digital surveillance is omnipresent in China. Here’s how citizens are coping</h1>
<figure><img src="https://images.theconversation.com/files/581384/original/file-20240225-28-qjmkpc.jpg?ixlib=rb-1.1.0&rect=17%2C34%2C3817%2C2121&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Chinese government may access the data collected by Baidu, Alibaba, Tencent, Xiaomi and other operators. How are citizens coping with this constant digital surveillance?</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Do you ever think about the digital footprint you leave when you are browsing the web, shopping online, commenting on social networks or going by a facial recognition camera? </p>
<p>State surveillance of citizens is growing all over the world, but it is a fact of everyday life in China, where it has <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-4/surveillance-china-ariane-ollier-malaterre?context=ubx&refId=4b23b424-16f8-41ce-89e5-09d719356614">deep historical roots</a>.</p>
<p>In China, almost nothing is paid for in cash anymore. <a href="https://search.worldcat.org/fr/title/1043756337">Super apps</a> make life easy: people use Alipay or WeChat Pay to pay for subway or bus tickets, rent a bike, hail a taxi, shop online, book trains and shows, split the bill at restaurants and even pay their taxes and utility bills. </p>
<p>The Chinese also use these platforms to check the news, entertain themselves and exchange countless text, audio and video messages, both personal and professional. Everything is linked to the user’s mobile phone number, which is itself registered under their identity. The government may access the data collected by Baidu, Alibaba, Tencent, Xiaomi and other operators. </p>
<p>Much has been written about <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/poi3.291">blacklists</a> (listing people who engage in “trust-breaking” behaviours, such as not settling one’s debts), <a href="https://www.researchgate.net/publication/353602055_Blacklists_and_Redlists_in_the_Chinese_Social_Credit_System_Diversity_Flexibility_and_Comprehensiveness">redlists</a> (listing people who engage in commendable behaviours, such as volunteering) and commercial and public <a href="https://www.wired.co.uk/article/china-social-credit-system-explained">“social credit”</a> systems. However, recent research has shown that these systems are still <a href="https://www.researchgate.net/publication/369147865_Civilized_cities_or_social_credit_Overlap_and_tension_between_emergent_governance_infrastructures_in_China">fragmented and scattered in terms of data collection and analysis</a>. They also rely at least partly on <a href="https://link.springer.com/book/10.1007/978-981-99-2189-8">manual</a> rather than digitized or algorithmic processes, with little capacity to build integrated citizen profiles by compiling all the available data.</p>
<p>How do Chinese citizens experience this constant surveillance? In my book <a href="https://www.routledge.com/Living-with-Digital-Surveillance-in-China-Citizens-Narratives-on-Technology/Ollier-Malaterre/p/book/9781032517704"><em>Living with Digital Surveillance in China: Citizens’ Narratives on Technology, Privacy and Governance</em></a>, I present research I conducted in China in 2019. Specifically, the book is based on 58 semi-structured in-depth interviews with Chinese participants recruited through colleagues at three universities in Beijing, Shanghai and Chengdu.</p>
<figure class="align-center ">
<img alt="People hunched over their mobile phones ride on a train" src="https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=292&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=292&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=292&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=367&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=367&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578075/original/file-20240226-18-45emg3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=367&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">On the Beijing metro, commuters consult their smartphones, which they use to get information, entertain themselves and exchange countless messages, both personal and professional.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Unmasking and punishing violators, improving morality</h2>
<p>Like my colleagues <a href="https://journals.sagepub.com/doi/full/10.1177/1461444819826402">Genia Kostka</a> and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3958660">Chuncheng Liu</a>, I discovered that many participants in my research frame surveillance as indispensable for solving China’s problems. </p>
<p>Underpinning this support is a coherent system of anguishing narratives, to which redemptive narratives respond. The anguishing narratives emphasize the moral shortcomings that the research participants attribute to China: almost every participant brought up the <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-6/rules-monitoring-raise-people-moral-quality-ariane-ollier-malaterre?context=ubx&refId=9424fad5-6e42-4823-874b-3a4adbf97a7b">“lack of moral quality”</a> of their fellow citizens, whom they said behaved like children with little moral sense. </p>
<p>In the context of this shame-inducing narrative, surveillance is framed as a welcome solution to enforce the rules by punishing violators and getting people to behave better. According to the participants, moral shortcomings are responsible for the <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-7/national-humiliations-civilisation-dream-ariane-ollier-malaterre?context=ubx&refId=0167048c-9288-4a50-af2d-67ef75ca2d9a">“century of humiliations”</a> that China has experienced since the Opium Wars and the Japanese invasions; according to this discourse, “civilizing” the population will enable China to gain the international recognition it so ardently desires. </p>
<p>Finally, wanting to protect privacy was often seen by participants as a <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-8/saving-face-ariane-ollier-malaterre?context=ubx&refId=26e22dfc-2812-429d-a0b8-2df3e5fab205">desire to hide shameful secrets in order to save face</a>. Here too, surveillance is viewed positively, as a tool to unmask shady behaviours and promote morality. </p>
<p>These three narratives of shame and fear are countered by two redemptive ones that serve as an antidote: that of the <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-10/government-protection-order-ariane-ollier-malaterre?context=ubx&refId=3bc7328b-b04b-45c3-91fc-80e059436273">government as a protective figure</a>, i.e., one that acts like a benevolent parent who guarantees the security and prosperity of its children, and the resolutely techno-optimistic one of <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-11/technology-magic-bullet-ariane-ollier-malaterre?context=ubx&refId=061fee9e-9fa6-4088-8fa3-9bcff0f94b6b">technology as a magic bullet</a>, in which technological advances are credited with the potential to solve all of China’s problems, and technology is cast as a civilizing force that will propel China towards international recognition. </p>
<h2>Four types of mental tactics for distancing oneself from surveillance</h2>
<p>Yet the people I spoke to also expressed <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-14/misgivings-objections-ariane-ollier-malaterre?context=ubx&refId=a410f3e8-32c4-469e-9f52-c26deefb50c5">frustration, fear and anger</a> about state surveillance. Almost 90 per cent of them adopted one or more <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-13/mental-tactics-dissociate-oneself-surveillance-ariane-ollier-malaterre?context=ubx&refId=89ae4273-b251-495c-8293-92e28ba99ef3">mental tactics</a> to distance themselves from surveillance and protect themselves from it mentally. </p>
<p>In my analysis I identified four different types of tactics:</p>
<p><strong>1 – Brushing surveillance aside</strong></p>
<ul>
<li><p>Denying or minimizing the existence of surveillance: “Nobody is watching. The government does not want to spend money to pay people to watch all the time. When they need it, they check; otherwise, no one is watching.”</p></li>
<li><p>Ignoring it: “If I don’t like the loss of privacy and freedom, I choose to ignore it, I don’t think of it.” Or: “Yes, it’s true, but it does not harm me. It does not remind me all the time. Sometimes I choose to ignore it.”</p></li>
<li><p>Normalizing it: “In China everyone shares their credit card information, their address, their ID. We feel secure.”; “Most governments use social media as a tool to spy.”</p></li>
<li><p>Redefining restrictions as temporary, or as occurring less than in the past, or less for oneself than for others, such as civil servants. Some redefine freedom itself: “It’s the country that makes the laws, the regulations, it’s like that in all countries. Other behaviours are a matter of my freedom, for example what I’m going to have for lunch.”</p></li>
</ul>
<p><strong>2 – Othering surveillance targets</strong></p>
<ul>
<li><p>Because I’m just an ordinary citizen: “I’m not a big potato, there’s no need for people to intentionally find me.”</p></li>
<li><p>Or because I’m a good person and “the blacklist is just for criminals”: “We think that improving public behaviour will make the environment and surroundings better for us, for the ones who obey the rules in the first place.”</p></li>
</ul>
<p><strong>3 – Wearing blinders</strong></p>
<ul>
<li><p>By focusing on everyday life: “Most people don’t care about these things. They care about money and power.”</p></li>
<li><p>Or, by focusing on the present: “We can’t live without Zhifu [Alipay], or Didi. We have facial recognition, CCTV is everywhere. It won’t harm me at present, so far, it does not do actual harm, so I’m not that concerned.”</p></li>
</ul>
<p><strong>4 – Resorting to fatalism</strong></p>
<p>“Nobody can avoid it… I don’t know how to avoid this risk, I just accept it.”; “We think it’s useless to spend time discussing the social credit system since we can’t change it.”</p>
<h2>The cognitive and emotional weight of surveillance</h2>
<p>In short, the way the Chinese citizens I spoke to experience digital surveillance is characterized by strong psychic tensions: the same people who support surveillance as indispensable in the Chinese context nevertheless express the heavy burden that coping with such exposure places on them. </p>
<p>This weight is both cognitive, as evidenced by the range of self-protective mental tactics to dissociate oneself from surveillance, and emotional, as conveyed in participants’ strong emotions and <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781003403876-15/self-censorship-ariane-ollier-malaterre?context=ubx&refId=3f08cb71-224e-498d-ac93-917dafa6d0aa">particularly telling body language</a>.</p>
<p>So, what about us? We, in Western liberal democracies, are also exposed to digital surveillance. And our ideas about surveillance are also shaped by our own socio-political, cultural and economic contexts, with significant variations across different Western societies. My work suggests that some of our own privacy and surveillance narratives are quite close to the Chinese ones, while others clearly differ. </p>
<p>What about you? How do you see your own relationship to digital surveillance?</p>
<p class="fine-print"><em><span>Ariane Ollier-Malaterre has received funding from the Social Sciences and Humanities Research Council of Canada. She is a member of the Work and Family Researchers Network, the Association of Internet Researchers and the Academy of Management.</span></em></p>
<p class="fine-print"><em>State surveillance of citizens is growing all over the world, but it is a fact of daily life in China. People are developing mental tactics to distance themselves from it.</em></p>
<p class="fine-print"><em>Ariane Ollier-Malaterre, Professor of Management and Canada Research Chair in Digital Regulation at Work and in Life, Université du Québec à Montréal (UQAM). Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Emotion-tracking AI on the job: Workers fear being watched – and misunderstood</h1>
<p class="fine-print"><em>2024-03-06</em></p>
<figure><img src="https://images.theconversation.com/files/579064/original/file-20240229-20-1y9mr8.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C8333%2C8308&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How would you feel if your workplace was tracking how you feel?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/surveillance-young-female-character-royalty-free-illustration/1200880011">nadia_bormotova/iStock via Getty Images</a></span></figcaption></figure><p><a href="https://uk.sagepub.com/en-gb/eur/emotional-ai/book251642">Emotion artificial intelligence</a> uses <a href="https://uk.sagepub.com/en-gb/eur/emotional-ai/book251642">biological signals</a> such as vocal tone, facial expressions and data from wearable devices, as well as text and how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.</p>
<p>A wide range of industries <a href="https://www.prnewswire.com/il/news-releases/nemesysco-reports-increased-interest-for-its-voice-analytics-technology-for-remote-employee-wellness-monitoring-301036444.html">already use emotion AI</a>, including call centers, finance, banking, nursing and caregiving. <a href="https://www.gartner.com/smarterwithgartner/the-future-of-employee-monitoring">Over 50% of large employers in the U.S. use emotion AI</a> aiming to infer employees’ internal states, a practice that <a href="https://www.mordorintelligence.com/industry-reports/emotion-detection-and-recognition-edr-market">grew during the COVID-19 pandemic</a>. For example, call centers monitor what their operators say and their tone of voice.</p>
<p>Scholars have raised concerns about <a href="https://doi.org/10.1177/1529100619832930">emotion AI’s scientific validity</a> and its <a href="https://doi.org/10.1145/3442188.3445939">reliance on contested theories about emotion</a>. They have also highlighted emotion AI’s potential for <a href="https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/">invading privacy</a> and exhibiting <a href="https://theconversation.com/emotion-reading-tech-fails-the-racial-bias-test-108404">racial</a>, <a href="https://doi.org/10.48550/arXiv.2103.11436">gender</a> and <a href="https://doi.org/10.1177/14614448221109550">disability</a> bias. </p>
<p>Some employers use the technology <a href="https://doi.org/10.1145/3579543">as though it were flawless</a>, while some scholars seek to <a href="https://doi.org/10.1109/ARSO.2017.8025197">reduce its bias and improve its validity</a>, <a href="https://dx.doi.org/10.2139/ssrn.3927300">discredit it altogether</a> or suggest <a href="https://www.biometricupdate.com/201912/ai-now-calls-for-ban-on-affect-recognition-as-market-expected-to-surge-to-90b-by-2024">banning emotion AI</a>, at least until more is known about its implications.</p>
<p>I study the <a href="https://scholar.google.com/citations?hl=en&user=ju-VqbUAAAAJ">social implications of technology</a>. I believe that it is crucial to examine emotion AI’s implications for people subjected to it, such as workers – especially those marginalized by their race, gender or disability status.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/sXvYC9_ktVw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Can AI actually read your emotions? Not exactly.</span></figcaption>
</figure>
<h2>Workers’ concerns</h2>
<p>To understand where emotion AI use in the workplace is going, my colleague <a href="https://www.researchgate.net/scientific-contributions/Karen-L-Boyd-2198141312">Karen Boyd</a> and I set out to examine <a href="https://doi.org/10.1145/3579528">inventors’ conceptions</a> of emotion AI in the workplace. We analyzed patent applications that proposed emotion AI technologies for the workplace. Purported benefits claimed by patent applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as making promotions, firing employees and assigning tasks. </p>
<p>We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?</p>
<p>My collaborators <a href="https://scholar.google.com/citations?hl=en&user=8XW-v0AAAAAJ&view_op=list_works&sortby=pubdate">Shanley Corvite</a>, <a href="https://scholar.google.com/citations?hl=en&user=B28WGTsAAAAJ&view_op=list_works&sortby=pubdate">Kat Roemmich</a>, <a href="https://www.researchgate.net/scientific-contributions/Tillie-Ilana-Rosenberg-2249183666">Tillie Ilana Rosenberg</a> and I conducted a survey partly representative of the U.S. population and partly oversampled for people of color, trans and nonbinary people and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 participants from the oversample. We found that <a href="https://doi.org/10.1145/3579600">32% of respondents reported experiencing or expecting no benefit to them</a> from emotion AI use, whether current or anticipated, in their workplace. </p>
<p>While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns. They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.</p>
<p>For example, 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions.</p>
<h2>Participants’ voices</h2>
<p>One participant who had multiple health conditions said: “The awareness that I am being analyzed would ironically have a negative effect on my mental health.” This means that despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. Indeed, other work by my colleagues Roemmich and <a href="https://scholar.google.com/citations?hl=en&user=zg29qGEAAAAJ&view_op=list_works&sortby=pubdate">Florian Schaub</a> and me suggests that emotion AI-induced privacy loss can span a range of <a href="https://dx.doi.org/10.2139/ssrn.3782222">privacy harms</a>, including <a href="https://doi.org/10.1145/3544548.3580950">psychological, autonomy, economic, relationship, physical and discrimination harms</a>. </p>
<p>On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.”</p>
<p>Participants in the study also mentioned the potential for exacerbated power imbalances and said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace, pointing to how emotion AI use could potentially intensify already existing tensions in the employer-worker relationship. For instance, a respondent said: “The amount of control that employers already have over employees suggests there would be few checks on how this information would be used. Any ‘consent’ [by] employees is largely illusory in this context.” </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/mzLrtld_oek?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Emotion AI is just one way companies monitor employees.</span></figcaption>
</figure>
<p>Lastly, participants noted potential harms, such as emotion AI’s technical inaccuracies potentially creating false impressions about workers, and emotion AI creating and perpetuating bias and stigma against workers. In describing these concerns, participants highlighted their fear of employers relying on inaccurate and biased emotion AI systems, particularly against people of color, women and trans individuals. </p>
<p>For example, one participant said: “Who is deciding what expressions ‘look violent,’ and how can one determine people as a threat just from the look on their face? A system can read faces, sure, but not minds. I just cannot see how this could actually be anything but destructive to minorities in the workplace.”</p>
<p>Participants noted that they would either refuse to work at a place that uses emotion AI – an option not available to many – or engage in behaviors to make emotion AI read them favorably to protect their privacy. One participant said: “I would exert a massive amount of energy masking even when alone in my office, which would make me very distracted and unproductive,” pointing to how emotion AI use would impose additional emotional labor on workers.</p>
<h2>Worth the harm?</h2>
<p>These findings indicate that emotion AI exacerbates existing challenges experienced by workers in the workplace, despite proponents claiming emotion AI helps solve these problems.</p>
<p>If emotion AI does work as claimed and measures what it claims to measure, and even if issues with bias are addressed in the future, there are still harms experienced by workers, such as the additional emotional labor and loss of privacy. </p>
<p>If these technologies do not measure what they claim or they are biased, then people are at the mercy of algorithms deemed to be valid and reliable when they are not. Workers would still need to expend the effort to try to reduce the chances of being misread by the algorithm, or to engage in emotional displays that would read favorably to the algorithm. </p>
<p>Either way, these systems function as <a href="https://dictionary.cambridge.org/us/dictionary/english/panopticon">panopticon</a>-like technologies, creating privacy harms and feelings of being watched.</p>
<p class="fine-print"><em><span>Work reported here was sponsored by the National Science Foundation (NSF) award 2020872 and CAREER award 2236674.</span></em></p>
<p class="fine-print"><em>Loss of privacy is just the beginning. Workers are worried about biased AI and the need to perform the ‘right’ expressions and body language for the algorithms.</em></p>
<p class="fine-print"><em>Nazanin Andalibi, Assistant Professor of Information, University of Michigan. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Your face for sale: anyone can legally gather and market your facial data without explicit consent</h1>
<p class="fine-print"><em>2024-03-04</em></p>
<figure><img src="https://images.theconversation.com/files/579102/original/file-20240301-28-tzp738.jpg?ixlib=rb-1.1.0&rect=956%2C85%2C6119%2C4218&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/futuristic-technological-scanning-face-beautiful-woman-1554013514">Kitreel/Shutterstock</a></span></figcaption></figure><p>The morning started with a message from a friend: “I used your photos to train my local version of Midjourney. I hope you don’t mind”, followed up with generated pictures of me wearing a flirty steampunk costume.</p>
<p>I did in fact mind. I felt violated. Wouldn’t you? I bet Taylor Swift did when <a href="https://theconversation.com/taylor-swift-deepfakes-new-technologies-have-long-been-weaponised-against-women-the-solution-involves-us-all-222268">deepfakes of her hit the internet</a>. But is the legal status of my face different from the face of a celebrity?</p>
<p>Your facial information is a unique form of personal sensitive information. It can identify you. Intense profiling and mass government surveillance <a href="https://www.forbes.com/sites/kalevleetaru/2019/05/06/as-orwells-1984-turns-70-it-predicted-much-of-todays-surveillance-society/?sh=38a97b4e11de">receive much attention</a>. But businesses and individuals are also using tools that <a href="https://www.sbs.com.au/news/article/creepy-and-invasive-kmart-bunnings-and-the-good-guys-accused-of-using-facial-recognition-technology/h08q8evb1">collect</a>, <a href="https://www.afr.com/technology/how-clearview-ai-unleashed-a-global-dystopia-20230929-p5e8lc">store</a> and modify facial information, and we’re facing an unexpected wave of <a href="https://deepai.org/machine-learning-model/text2img">photos</a> and <a href="https://theconversation.com/what-is-sora-a-new-generative-ai-tool-could-transform-video-production-and-amplify-disinformation-risks-223850">videos</a> generated with artificial intelligence (AI) tools.</p>
<p>The development of legal regulation for these uses is lagging. At what levels and in what ways should our facial information be protected? </p>
<h2>Is implied consent enough?</h2>
<p>The Australian <a href="https://www.legislation.gov.au/C2004A03712/latest/text">Privacy Act</a> considers biometric information (which would include your face) to be a part of our personal sensitive information. However, the act doesn’t <em>define</em> biometric information. </p>
<p>Despite its drawbacks, the act is currently the main legislation in Australia aimed at facial information protection. It states biometric information cannot be collected without a person’s consent. </p>
<p>But the law doesn’t specify whether it should be <a href="https://www.ipc.nsw.gov.au/fact-sheet-consent">express or implied consent</a>. Express consent is given explicitly, either orally or in writing. Implied consent means consent may reasonably be inferred from the individual’s actions in a given context. For example, if you walk into a store that has a sign “facial recognition camera on the premises”, your consent is implied. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A poster at a supermarket that says camera technology trial in progress, partially obscured by a couple of bins." src="https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1067&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1067&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1067&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1340&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1340&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578587/original/file-20240228-28-ns24xh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1340&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An inconspicuous sign flagging that a camera technology trial is in progress counts as implied consent.</span>
<span class="attribution"><span class="source">Margarita Vladimirova</span></span>
</figcaption>
</figure>
<p>But using implied consent opens our facial data up to potential exploitation. <a href="https://www.choice.com.au/consumers-and-data/data-collection-and-use/how-your-data-is-used/articles/kmart-bunnings-and-the-good-guys-using-facial-recognition-technology-in-store">Bunnings, Kmart</a> and <a href="https://www.theguardian.com/business/2023/feb/19/woolworths-expands-self-checkout-ai-that-critics-say-treats-every-customer-as-a-suspect">Woolworths</a> have all used easy-to-miss signage that facial recognition or camera technology is used in their stores.</p>
<h2>Valuable and unprotected</h2>
<p>Our facial information has become so valuable that <a href="https://www.theguardian.com/australia-news/2023/oct/24/australian-federal-police-afp-pimeyes-facial-recognition-facecheck-id-search-engine-platform">data companies such as Clearview AI and PimEyes</a> are mercilessly hunting it down on the internet <a href="https://onezero.medium.com/i-got-my-file-from-clearview-ai-and-it-freaked-me-out-33ca28b5d6d4">without our consent</a>.</p>
<p>These companies put together databases for sale, used not only by the police in various countries, <a href="https://www.theguardian.com/australia-news/2023/oct/24/australian-federal-police-afp-pimeyes-facial-recognition-facecheck-id-search-engine-platform">including Australia</a>, but also by <a href="https://www.clearview.ai/developer-api">private companies</a>. </p>
<p>Even if you deleted all your facial data from the internet, you could easily be captured in public and appear in some database anyway. Being in someone’s TikTok video <a href="https://www.abc.net.au/news/2022-07-14/tiktok-video-maree-melbourne-flowers/101228418">without your consent</a> is a prime example – in Australia this is legal.</p>
<p>Furthermore, we’re also now contending with generative AI programs such as Midjourney, DALL-E 3, Stable Diffusion and others. Not only the collection but also the modification of our facial information can easily be performed by anyone. </p>
<p>Our faces are unique to us, they’re part of what we perceive as ourselves. But they don’t have special legal status or special legal protection.</p>
<p>The only action you can take to protect your facial information from aggressive collection by a store or private entity <a href="https://www.oaic.gov.au/privacy/privacy-complaints/lodge-a-privacy-complaint-with-us">is to complain</a> to the office of the Australian Information Commissioner, which may or may not result in an investigation.</p>
<p>The same applies to deepfakes. The Australian Competition and Consumer Commission will consider only activity that applies to trade and commerce, for example if a <a href="https://www.theguardian.com/technology/2022/mar/18/accc-takes-meta-to-court-over-facebook-scam-ads-depicting-australian-identities">deepfake is used for false advertising</a>. </p>
<p>And the Privacy Act doesn’t protect us from other people’s actions. I didn’t consent to have someone train an AI with my facial information and produce made-up images. But there is no oversight on such use of generative AI tools, either. </p>
<p>There are currently no laws that <em>prevent</em> other people from collecting or modifying your facial information.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/so-youve-been-scammed-by-a-deepfake-what-can-you-do-223299">So, you've been scammed by a deepfake. What can you do?</a>
</strong>
</em>
</p>
<hr>
<h2>Helping the law catch up</h2>
<p>We need a range of regulations on the collection and modification of facial information, as well as a stronger legal status for facial information itself. Thankfully, some developments in this area are looking promising.</p>
<p>Experts at the University of Technology Sydney have proposed a comprehensive legal framework for <a href="https://www.uts.edu.au/human-technology-institute/projects/facial-recognition-technology-towards-model-law">regulating the use of facial recognition technology</a> under Australian law.</p>
<p>It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws.</p>
<p>Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government <a href="https://www.minister.industry.gov.au/ministers/husic/media-releases/new-artificial-intelligence-expert-group">AI expert group</a> working to develop “safe and responsible AI practices”.</p>
<p>There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent <a href="https://www.ag.gov.au/rights-and-protections/publications/government-response-privacy-act-review-report">response to the Attorney-General’s Privacy Act review</a> has some promising provisions. </p>
<p>The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. This work should be coordinated with the government’s ongoing work on Digital ID and the National Strategy for Identity Resilience. </p>
<p>As for consent, the government has agreed in principle that the definition of consent required for biometric information collection should be amended to specify it must be voluntary, informed, current, specific and unambiguous. </p>
<p>As facial information is increasingly exploited, we’re all waiting to see whether these discussions do become law – hopefully sooner rather than later.</p>
<hr>
<p><em>Correction: we have amended a sentence to clarify Woolworths use camera technology but not necessarily facial recognition technology.</em></p>
<p class="fine-print"><em><span>Margarita Vladimirova does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Our facial information is sensitive – yet companies and individuals can collect, sell and manipulate it without our consent. Australian law must change to protect us all.
Margarita Vladimirova, PhD in Privacy Law and Facial Recognition Technology, Deakin University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/223130
2024-02-12T21:19:30Z
The use of technology in policing should be regulated to protect people from wrongful convictions
<figure><img src="https://images.theconversation.com/files/574851/original/file-20240212-28-7lu7es.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4288%2C2848&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In 2010, police at the G20 summit in Toronto filmed protestors.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>The proliferation of technology for everyday living can be seen through <a href="https://mashable.com/article/chatgpt-ai-essays-classroom-materials-teachers-react">ChatGPT writing term papers</a> or <a href="https://www.cbc.ca/news/canada/ottawa/robot-cat-gatineau-restaurant-1.6224125">robots serving meals at a restaurant</a>. </p>
<p>Technology can also be used towards less utilitarian ends. Unfortunately, deepfakes — digitally altered images of people — <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them">can be used to spread misinformation</a>.</p>
<p>A new edited volume, which I co-edited, considers the use of everyday technologies in the criminal justice system, <a href="https://doi.org/10.4324/9781003323112">ranging from detecting deception to web sleuthing to help law enforcement solve crime</a>. </p>
<h2>Technology and policing</h2>
<p>Consider the use of body-worn cameras by police, as in the fatal shooting of Ontario Provincial Police Const. Greg Pierzchala in December 2022. Footage from his body camera will provide evidence during <a href="https://lfpress.com/news/local-news/slain-officer-was-wearing-body-camera-that-could-provide-key-evidence-experts">the trial of his accused killers</a>.</p>
<p>Police investigations have also been aided by private citizen sleuths, who use technology to gather evidence that helps police identify criminals. This was the case with <a href="https://www.cbc.ca/news/canada/montreal/luka-magnotta-guilty-of-1st-degree-murder-in-jun-lin-s-slaying-1.2875989">convicted murderer Luka Magnotta</a>, where an online network <a href="https://nationalpost.com/news/canada/luka-rocco-magnotta-online-sleuths">identified him in cat torture videos</a> and provided the information to law enforcement agencies. </p>
<p>Technology can also be used for public surveillance aimed at crime prevention, through the application of facial recognition software.</p>
<p>Security cameras are now a ubiquitous feature in public places. In 2021, it was estimated that <a href="https://geographical.co.uk/science-environment/whos-watching-the-cities-with-the-most-cctv-cameras">one billion security cameras were being used around the world</a>. China is listed as having about <a href="https://www.usnews.com/news/cities/articles/2020-08-14/the-top-10-most-surveilled-cities-in-the-world">54 per cent of all surveillance cameras</a>. </p>
<p>In 2020, Toronto had approximately <a href="https://torontosun.com/news/local-news/toronto-becoming-a-camera-city-but-still-pales-in-comparison-to-london-england">2,000 cameras at city-owned facilities</a>. </p>
<p>Security cameras may or may not be used in conjunction with facial recognition software.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a close-up of a security camera with a city at night in the background" src="https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574850/original/file-20240212-22-v0paoz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Security cameras are becoming regular features of outdoor public spaces.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Finding faces</h2>
<p>Facial recognition uses software to identify or confirm someone’s identity using an image of their face. Captured faces are compared to a database, often for the purposes of <a href="https://www.csis.org/analysis/how-does-facial-recognition-work">crime prevention</a>. </p>
<p>Some retailers have used facial recognition to help reduce theft. In 2022, Josh Soika, an Indigenous man, was confronted by a security guard due to being “flagged” as having <a href="https://www.cbc.ca/news/canada/manitoba/first-nation-apology-store-accused-1.6620457">stolen previously from the store</a>. Later, it was determined that <a href="https://retail-insider.com/bulletin/2022/11/facial-recognition-in-stores-in-canada-may-pose-problems-amid-ai-based-misidentification-potential/">Soika was misidentified by the artificial intelligence (AI)</a> used by Canadian Tire for facial recognition. </p>
<p>In 2023, Canadian Tire Corporation and its dealers agreed to <a href="https://www.cbc.ca/news/canada/british-columbia/canadian-tire-bc-facial-id-technology-privacy-commissioner-1.6817039">no longer use facial recognition technology</a>.</p>
<p>In the United States recently, the Federal Trade Commission (FTC) banned the pharmacy chain Rite Aid for five years from using facial recognition software to identify customers who have stolen merchandise or <a href="https://www.supermarketnews.com/retail-financial/rite-aid-now-banned-using-facial-recognition-ftc-next-five-years">displayed other problematic behaviours</a>. In some instances, Rite Aid workers would follow “identified” customers around, accuse them of stealing and call police. People of colour were falsely identified at a greater rate than white customers.</p>
<p>It is important to note that someone who has shoplifted in the past isn’t <a href="https://theconversation.com/policing-is-not-the-answer-to-shoplifting-feeding-people-is-217046">necessarily planning to shoplift again</a>. </p>
<p>The use of facial recognition software in Canada is controversial. In 2021, it was reported that Toronto police used Clearview AI, facial recognition software, in 84 investigations, <a href="https://www.cbc.ca/news/canada/toronto/toronto-police-report-clearview-ai-1.6295295">with at least two cases proceeding to prosecution</a>. Once it was discovered by the police chief, however, the practice was stopped.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-technologies-like-police-facial-recognition-discriminate-against-people-of-colour-143227">AI technologies — like police facial recognition — discriminate against people of colour</a>
</strong>
</em>
</p>
<hr>
<h2>Discrimination and AI</h2>
<p>Accuracy rates with facial recognition software <a href="https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/">are above 90 per cent</a>, but <a href="https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology">that number is greatly reduced within certain demographics</a>. Facial recognition software is documented to misidentify women, racialized people and those between the ages of 18 and 30, <a href="https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212">with accuracy reduced to 35 per cent</a>.</p>
<p>In February 2023, Porcha Woodruff, a 32-year-old pregnant Black woman from Detroit, was arrested for robbery and carjacking based on a facial recognition match. Police had used AI to run an image of a carjacker caught on video through a mugshot database that contained Woodruff’s photo, and it incorrectly matched the two. </p>
<p>Woodruff was <a href="https://www.washingtonpost.com/nation/2023/08/07/michigan-porcha-woodruff-arrest-facial-recognition/">jailed for 11 hours and went into labour</a>. The charges were dropped, and Woodruff is currently suing the <a href="https://www.cbsnews.com/detroit/news/detroit-woman-at-center-of-facial-recognition-lawsuit-responds-to-police-chiefs-claims/">city of Detroit and the Detroit Police Department</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/CmO4Mv1uDew?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBS Detroit interviews researcher Dorothy Roberts about Porcha Woodruff’s misidentification due to facial recognition technology.</span></figcaption>
</figure>
<h2>Consequences of misidentification</h2>
<p>According to the U.S.-based Innocence Project, over 70 per cent of known wrongful convictions involve <a href="https://innocenceproject.org/eyewitness-misidentification/">mistaken identification by people as a contributing factor</a>. The Canadian Registry of Wrongful Convictions finds approximately <a href="https://www.wrongfulconvictions.ca/issues/eyewitness-identification">a third of its cases involved false identification</a>.</p>
<p>People can show what is known as “<a href="https://doi.org/10.3389/fpsyg.2020.00208">own-race bias</a>” when identifying faces; people are more accurate when <a href="https://doi.org/10.3389/fpsyg.2020.00208">identifying faces of their own race than other races</a>. </p>
<p>The misidentification of a perpetrator — whether by a human or an AI program — can lead to the same consequences: being charged, prosecuted or wrongfully convicted. Technology, as with humans, isn’t always accurate and may succumb to similar biases.</p>
<p>Legislation must keep up to protect people’s rights and privacy. As technology evolves, adequate information and full transparency need to be provided to the public on how, when and where a technology is in use. It is also clear that much more research is needed to <a href="https://doi.org/10.4324/9781003323112">better understand the impact of technology</a> on the criminal justice system.</p>
<p class="fine-print"><em><span>Joanna Pozzulo receives funding from Social Sciences and Humanities Research Council of Canada. </span></em></p>
Police use of surveillance technologies — like security cameras and artificial intelligence — is becoming more widespread. Measures are needed to protect people’s privacy and avoid misidentification.
Joanna Pozzulo, Chancellor's Professor, Psychology, Carleton University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/217226
2024-01-19T13:42:15Z
Face recognition technology follows a long analog history of surveillance and control based on identifying physical features
<figure><img src="https://images.theconversation.com/files/569962/original/file-20240117-29-ri412u.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5272%2C3598&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Today's technology advances what passport control has been doing for more than a century.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/controll-of-passports-at-the-frontiers-between-beuthen-and-news-photo/548866047">ullstein bild via Getty Images</a></span></figcaption></figure>
<p>American Amara Majeed was <a href="https://www.bbc.com/news/world-asia-48061811">accused of terrorism</a> by the Sri Lankan police in 2019. Robert Williams was <a href="https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html">arrested outside his house</a> in Detroit and detained in jail for 18 hours for allegedly stealing watches in 2020. Randal Reid <a href="https://www.nytimes.com/2023/03/31/technology/facial-recognition-false-arrests.html">spent six days in jail</a> in 2022 for supposedly using stolen credit cards in a state he’d never even visited.</p>
<p>In all three cases, the authorities had the wrong people. In all three, it was face recognition technology that told them they were right. Law enforcement officers in many U.S. states are <a href="https://www.wired.com/story/hidden-role-facial-recognition-tech-arrests/">not required to reveal</a> that they used face recognition technology to identify suspects.</p>
<p>Face recognition technology is the latest and most sophisticated version of <a href="https://www.dhs.gov/biometrics">biometric surveillance</a>: using unique physical characteristics to identify individual people. It stands in a <a href="https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/inspired/history-of-biometric-authentication">long line of technologies</a> – from the fingerprint to the passport photo to iris scans – designed to monitor people and determine who has the right to move freely within and across borders and boundaries.</p>
<p>In my book, “<a href="https://www.press.jhu.edu/books/title/12700/do-i-know-you">Do I Know You? From Face Blindness to Super Recognition</a>,” I explore how the story of face surveillance lies not just in the history of computing but in the history of medicine, of race, of psychology and neuroscience, and in the health humanities and politics.</p>
<p>Viewed as a part of the long history of people-tracking, face recognition technology’s incursions into privacy and limitations on free movement are carrying out exactly what biometric surveillance was always meant to do.</p>
<p>The system works by converting captured faces – either static from photographs or moving from video – into a series of unique data points, which it then compares against the data points drawn from images of faces already in the system. As face recognition technology improves in accuracy and speed, its effectiveness as a means of surveillance becomes ever more pronounced.</p>
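The comparison step described above can be illustrated with a minimal sketch. The names, three-element "data point" vectors and distance threshold here are all invented for illustration; real systems use high-dimensional learned embeddings and tuned thresholds.

```python
import math

# Hypothetical gallery: enrolled identities mapped to face "data points".
gallery = {
    "person_a": [0.1, 0.8, 0.3],
    "person_b": [0.9, 0.2, 0.5],
}

def euclidean(u, v):
    # Straight-line distance between two data-point vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(probe, gallery, threshold=0.5):
    """Return the closest enrolled identity, or None if no one is close enough."""
    name, dist = min(((n, euclidean(probe, e)) for n, e in gallery.items()),
                     key=lambda pair: pair[1])
    return name if dist <= threshold else None

print(best_match([0.12, 0.79, 0.31], gallery))  # prints person_a
print(best_match([5.0, 5.0, 5.0], gallery))     # prints None: no close match
```

The threshold matters: without it, the system would always return *someone* – the closest face available – which is one way misidentifications arise when a gallery is small or unrepresentative.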
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="faces in a crowd highlighted and annotated with dates and times" src="https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/569964/original/file-20240117-15-h4ovvh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Paired with AI, face recognition technology scans the crowd at a conference.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/live-demonstration-uses-artificial-intelligence-and-facial-news-photo/1080200068">David McNew/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>Accuracy improves, but biases persist</h2>
<p>Surveillance is predicated on the idea that <a href="https://theconversation.com/surveillance-is-pervasive-yes-you-are-being-watched-even-if-no-one-is-looking-for-you-187139">people need to be tracked</a> and their movements limited and controlled in a trade-off between privacy and security. The assumption that less privacy leads to more security is built in.</p>
<p>That may be the case for some, but not for the people disproportionately targeted by face recognition technology. <a href="https://www.routledge.com/Histories-of-Surveillance-from-Antiquity-to-the-Digital-Era-The-Eyes-and/Marklund-Skouvig/p/book/9781032021539">Surveillance has always been designed</a> to identify the people whom those in power wish to most closely track.</p>
<p>On a global scale, <a href="https://doi.org/10.1080/21670811.2018.1493938">there are</a> <a href="https://longreads.tni.org/stateofpower/settled-habits-new-tricks-casteist-policing-meets-big-tech-in-india">caste cameras in India</a>, <a href="https://www.theguardian.com/world/2021/sep/30/uyghur-tribunal-testimony-surveillance-china">face surveillance of Uyghurs in China</a> and even <a href="https://mynbc15.com/news/spotlight-on-america/facial-recognition-technology-in-school-hallways-states-face-a-divisive-debate">attendance surveillance</a> <a href="https://dx.doi.org/10.7302/21934">in U.S. schools</a>, often with low-income and majority-Black populations. <a href="https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist">Some people are tracked more closely</a> than others.</p>
<p>In addition, the cases of Amara Majeed, Robert Williams and Randal Reid <a href="https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist">aren’t anomalies</a>. As of 2019, face recognition technology <a href="https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf">misidentified Black and Asian people</a> at up to <a href="https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software">100 times the rate of white people</a>, including, in 2018, a disproportionate number of the <a href="https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28">28 members of the U.S. Congress</a> who were falsely matched with mug shots on file using Amazon’s Rekognition tool.</p>
<p>When the database against which captured images were compared had only a limited number of mostly white faces upon which to draw, face recognition technology would offer matches based on the closest alignment available, leading to a pattern of highly racialized – and racist – false positives.</p>
<p>With the expansion of images in the database and increased sophistication of the software, <a href="https://www.csis.org/blogs/strategic-technologies-blog/how-accurate-are-facial-recognition-systems-and-why-does-it">the number of false positives</a> – incorrect matches between specific individuals and images of wanted people on file – has <a href="https://bipartisanpolicy.org/blog/frt-accuracy-performance/">declined dramatically</a>. Improvements in pixelation and mapping static images into moving ones, along with increased social media tagging and <a href="https://www.penguinrandomhouse.com/books/691288/your-face-belongs-to-us-by-kashmir-hill/">ever more sophisticated scraping tools</a> like those developed by Clearview AI, have helped decrease the error rates.</p>
<p><a href="https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/">The biases</a>, however, remain deeply embedded into the systems and their purpose, explicitly or implicitly targeting already targeted communities. The technology is not neutral, nor is the surveillance it is used to carry out.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Pen and ink illustration of suited hands using calipers to measure a man's forehead to back of his head" src="https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=458&fit=crop&dpr=1 600w, https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=458&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=458&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=576&fit=crop&dpr=1 754w, https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=576&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/569966/original/file-20240117-21-awurl6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=576&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Physiognomy went beyond recognition of an individual and tried to connect physical features with other characteristics.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/head-royalty-free-illustration/1399373778">clu/DigitalVision Vectors via Getty Images</a></span>
</figcaption>
</figure>
<h2>Latest technique in a long history</h2>
<p>Face recognition software is only the most recent manifestation of global systems of tracking and sorting. Precursors are rooted in the now-debunked belief that bodily features offer a unique index to character and identity. This pseudoscience was formalized in the late 18th century under the rubric of the <a href="https://www.hup.harvard.edu/books/9780674036048">ancient practice of physiognomy</a>.</p>
<p>Early systemic applications included anthropometry (body measurement), fingerprinting and iris or retinal scans. They all offered unique identifiers. None of these could be done without the participation – willing or otherwise – of the person being tracked.</p>
<p>The framework of bodily identification was adopted in the 19th century for use in criminal justice detection, prosecution and record-keeping, allowing governments to control their populaces. The intimate relationship between face recognition and border patrol was galvanized by the <a href="http://www.atlasobscura.com/articles/passport-photos-history-development-regulation-mugshots">introduction of photos into passports</a> in some countries including Great Britain and the United States in 1914, <a href="https://doi.org/10.1017/9781108664271">a practice that became widespread by 1920</a>.</p>
<p>Face recognition technology provided a way to go stealth on human biometric surveillance. Much early research into face recognition software was <a href="https://www.wired.com/story/secret-history-facial-recognition/">funded by the CIA</a> for the purposes of border surveillance.</p>
<p>This research tried to develop a standardized framework for face segmentation: mapping the distance between a person’s facial features, including eyes, nose, mouth and hairline. Inputting that data into computers let a user search stored photographs for a match. These early scans and maps were limited, and attempts to match them were not successful.</p>
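The early approach described above can be sketched in a few lines: measured distances between facial features become a short vector, and a stored collection of photographs is ranked by how closely its measurements align with the probe. All file names and measurements below are invented for illustration.

```python
# Each record stores measured distances (in arbitrary units) between
# facial landmarks: eye-to-eye, nose-to-mouth, hairline-to-chin.
stored = {
    "photo_001": (6.2, 3.1, 18.0),
    "photo_002": (5.8, 3.4, 17.2),
}

def difference(a, b):
    # Sum of absolute differences across the measured features.
    return sum(abs(x - y) for x, y in zip(a, b))

def search(probe, records):
    """Rank stored photographs from most to least similar to the probe."""
    return sorted(records, key=lambda name: difference(probe, records[name]))

print(search((6.1, 3.0, 17.9), stored))  # photo_001 ranks first
```

With only a handful of coarse measurements, many different faces produce nearly identical vectors, which is one reason these early systems matched so poorly.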
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Woman looks at screen with her image on a vending machine" src="https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/569967/original/file-20240117-23-u3alzk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A customer pays via facial recognition at a smart store in China.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/nov-6-2018-a-visitor-tries-facial-recognition-payment-in-a-news-photo/1058496364">Huang Zongzhi/Xinhua News Agency via Getty Images</a></span>
</figcaption>
</figure>
<p>More recently, private companies have <a href="https://fortune.com/longform/facial-recognition/">adopted data harvesting techniques</a>, including face recognition, as part of a long practice of <a href="https://theconversation.com/data-brokers-know-everything-about-you-what-ftc-case-against-ad-tech-giant-kochava-reveals-218232">leveraging personal data for profit</a>.</p>
<p>Face recognition technology works not only to unlock your phone or help you board your plane more quickly, but also in promotional store kiosks and, essentially, in any photo taken and shared by anyone, with anyone, anywhere around the world. These photos are stored in a database, creating ever more comprehensive systems of surveillance and tracking.</p>
<p>And while that means that today it is unlikely that Amara Majeed, Robert Williams, Randal Reid and Black members of Congress would be ensnared by a false positive, face recognition technology has invaded everyone’s privacy. It – and the governmental and private systems that design, run, use and capitalize upon it – is watching, and paying particular attention to those whom society and its structural biases deem to be the greatest risk.</p>
<p class="fine-print"><em><span>Sharrona Pearl receives funding from Interfaith America.</span></em></p>
Face recognition technology follows earlier biometric surveillance techniques, including fingerprints, passport photos and iris scans. It’s the first that can be done without the subject’s knowledge.
Sharrona Pearl, Associate Professor of Bioethics and History, Drexel University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/219406
2023-12-10T22:09:09Z
Digital ID will go mainstream across Australia in 2024. Here’s how it can work for everyone
<figure><img src="https://images.theconversation.com/files/564405/original/file-20231207-23-kahv7b.jpg?ixlib=rb-1.1.0&rect=4%2C0%2C2904%2C1634&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/woman-in-black-shirt-standing-in-front-of-black-metal-screen-Jlqm6p_nntk">Simon Lee / Unsplash</a></span></figcaption></figure><p>In a world promising self-driving cars and artificial general intelligence, the prospect of a new form of digital identity verification can feel … less than exciting.</p>
<p>And yet digital identity is about to be unleashed in Australia and around the world. In 2024, many years before most of us experience the joy of commuting in our fully autonomous car, new forms of digital ID will profoundly change how we engage with government and business. For example, digital ID may remove the pain of handing over physical copies of your driver’s licence, passport and birth certificate when renewing your Working with Children Check or setting up a new bank account.</p>
<p>How can we gain the benefits of digital ID – convenience, efficiency, lower risk of cybercrime – while minimising the attendant risks, such as privacy leaks, data misuse, and reduced trust in government? </p>
<p>In <a href="https://jmi.org.au/news/facial-verification-tech-in-nsw-digital-identity-new-report-unveils-path-to-enhanced-governance-and-training">a new paper</a> released today by the Human Technology Institute, we propose legal and policy guardrails to improve user safeguards and build community trust for the rollout of digital ID in New South Wales. While the paper focuses on NSW, it contains ten principles to support the development of any safe, reliable and responsible digital identity system.</p>
<h2>Across Australia, governments are kickstarting digital identity initiatives</h2>
<p>Some forms of digital identification already operate in Australia at scale. For example, the <a href="https://www.idmatch.gov.au/">Document Verification Service</a> was introduced as early as 2009 to automate checking of important documents such as passports. </p>
<p>Last year this service was used <a href="https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Legal_and_Constitutional_Affairs/IDVerificationBills23/Report/Chapter_1_-_Introduction">more than 140 million times</a> by roughly 2,700 government and private sector organisations. A limited form of facial verification technology was used well over a million times.</p>
<p>A key problem, however, is that Australia has not had an effective legal framework to govern even the existing digital ID system. This is starting to change. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-national-digital-id-scheme-is-being-proposed-an-expert-weighs-the-pros-and-many-more-cons-214144">A national digital ID scheme is being proposed. An expert weighs the pros and (many more) cons</a>
</strong>
</em>
</p>
<hr>
<p>In June this year, the federal government released a <a href="https://www.homeaffairs.gov.au/criminal-justice/files/national-strategy-for-identity-resilience.pdf">national strategy for digital identity resilience</a>. In its final sittings for 2023, the Australian Parliament <a href="https://ministers.ag.gov.au/media-centre/delivering-strong-safeguards-identity-verification-services-07-12-2023">passed the Identity Verification Services Bill 2023</a>, which provides some important protections for privacy and other rights. </p>
<p>Also in December, the government proposed a second law, the <a href="https://ministers.ag.gov.au/media-centre/strengthening-australias-digital-id-system-30-11-2023">Digital ID Bill 2023</a>. This bill would provide rules for a major expansion of Australia’s system of digital identification.</p>
<p>Notwithstanding this recent flurry of activity in the federal government, NSW has long been Australia’s leading jurisdiction in this area. It announced its <a href="https://www.nsw.gov.au/customer-service/media-releases/nsw-government-unveils-future-of-digital-identity">Digital ID program</a> in April 2022 and has quietly worked to put in place the key elements of what could become a world-leading digital ID system, with strong community safeguards.</p>
<h2>What is a ‘digital identity’, and what are the risks?</h2>
<p>The technologies at the heart of digital ID are powerful and carry risks. </p>
<p>In particular, facial verification technology matches an individual’s face data against a recorded reference image. It may also incorporate “liveness detection”, which checks that the face to be verified belongs to a genuine individual requesting a service in real time (as opposed to a photograph, for example). </p>
<p>NSW’s digital identity initiative uses both these technologies.</p>
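<p>In essence, facial verification reduces to comparing a numerical “embedding” computed from the live capture with one computed from the enrolled reference image, and accepting the match only if the two are close enough. The sketch below is purely illustrative – the vectors, threshold and function names are invented for the example, and production systems such as NSW’s are far more sophisticated:</p>

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(live_embedding: np.ndarray,
                reference_embedding: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Accept the claimed identity only if the live capture's embedding
    is close enough to the enrolled reference image's embedding."""
    return cosine_similarity(live_embedding, reference_embedding) >= threshold

# Toy example: a near-identical capture verifies; an unrelated face does not.
ref = np.array([0.2, 0.9, 0.4])
same = ref + 0.01                      # near-identical live capture
other = np.array([0.9, -0.1, 0.3])    # a different person's embedding
print(verify_face(same, ref))   # True
print(verify_face(other, ref))  # False
```

<p>Liveness detection adds a separate check – for example, asking the user to blink or turn their head – before the embedding comparison is even attempted, so a printed photograph cannot pass.</p>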
<p>Overall, digital identity should mean <em>less</em> of our personal information is collected and used by third parties. For example, when someone enters a pub and a bouncer asks for ID, the only information the bouncer needs to know is that the patron is over 18. The bouncer doesn’t need other personal information on their licence, such as their address or organ donor status. </p>
<p>Good design and regulation would ensure the digital ID service can verify someone’s age without disclosing other sensitive data.</p>
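<p>The principle of answering only the question asked can be shown in a few lines. This is a hypothetical sketch – the credential fields and function are invented for illustration, not part of any real digital ID scheme:</p>

```python
from datetime import date

# Hypothetical credential record held in a digital ID wallet.
credential = {
    "name": "Jane Citizen",
    "date_of_birth": date(1990, 5, 17),
    "address": "12 Example St",   # never released to the verifier
    "organ_donor": True,          # never released to the verifier
}

def attest_over_18(cred: dict, today: date) -> bool:
    """Answer the verifier's single yes/no question without releasing
    any other attribute from the credential."""
    dob = cred["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

print(attest_over_18(credential, date(2024, 1, 1)))  # True
```

<p>The bouncer’s device receives only the boolean answer; the wallet never transmits the date of birth, address or anything else.</p>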
<p>On the other hand, these technologies use sensitive personal information and this brings risks when they are used to make decisions that affect people’s rights. Errors may result in an individual being denied an essential government service. </p>
<p>Because a digital ID system would by its nature collect sensitive personal information, it also poses risks of identity fraud or hacking of personal information.</p>
<h2>Making digital ID safe</h2>
<p>There must be robust safeguards in place to address these risks.</p>
<p>Accountable digital identity systems should be voluntary, not compulsory. They need to ensure citizens have options for choice and consent, and should be usable and accessible for everyone. </p>
<p>Digital ID also needs to be safe. It should protect the sensitive personal information of users and make sure this data is not used for other, unintended purposes like law enforcement.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australias-national-digital-id-is-here-but-the-governments-not-talking-about-it-130200">Australia's National Digital ID is here, but the government's not talking about it</a>
</strong>
</em>
</p>
<hr>
<p>To achieve these aims, we recommend that NSW Digital ID be grounded in legislation that enshrines:</p>
<ul>
<li><p><strong>user protections</strong>, including providing for privacy and data security of all users</p></li>
<li><p><strong>performance standards</strong>, ensuring that digital identity performs to a high standard of accuracy and is fit for purpose, with public reporting by the responsible government agency or department on relevant independent benchmarking and compliance with technical standards</p></li>
<li><p><strong>oversight and accountability</strong>, with both internal and external monitoring, and clear redress mechanisms</p></li>
<li><p><strong>interoperability</strong> with other government systems.</p></li>
</ul>
<p>These principles are not specific to NSW. They are relevant and transferable to other jurisdictions looking to develop digital identity systems. </p>
<p>Whether Australia’s digital identity transformation is a success depends on how digital identity systems are established in law and practice. It is crucial that robust governance mechanisms are in place to ensure digital identity systems are safe, secure and accountable. Only then will Australians embrace and trust the digital transformation that is afoot.</p>
<hr>
<p><em>HTI’s work to develop independent expert advice outlining a governance framework and training strategy for NSW Digital ID was funded by a <a href="https://jmi.org.au/2022-policy-challenge-grant-winners/">James Martin Institute Policy Challenge Grant</a>.</em></p>
<p class="fine-print"><em><span>Edward Santow works for the UTS Human Technology Institute. The Institute has received a funding grant from the James Martin Institute for Public Policy to support the project mentioned in this article. Prof Santow also serves as an independent member of the NSW Government's AI Review Committee, which has provided some advice on the NSW Government's use of digital identification.</span></em></p><p class="fine-print"><em><span>Lauren Perry works for the UTS Human Technology Institute. The Institute has received a funding grant from the James Martin Institute for Public Policy to support the project mentioned in this article</span></em></p><p class="fine-print"><em><span>Sophie Farthing works for the UTS Human Technology Institute. The Institute has received a funding grant from the James Martin Institute for Public Policy to support the project mentioned in this article. </span></em></p>2024 will see a massive expansion in Australia’s digital ID system. Good tech and strong guardrails will make Australia a world leader in this important area.Edward Santow, Professor & Co-Director, Human Technology Institute, University of Technology SydneyLauren Perry, Responsible Technology Policy Specialist - Human Technology Institute, University of Technology SydneySophie Farthing, Head, Policy Lab, Human Technology Institute, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2106252023-09-08T05:25:02Z2023-09-08T05:25:02ZIndigenous knowledges informing ‘machine learning’ could prevent stolen art and other culturally unsafe AI practices<p>Artificial intelligence (AI) relies on its creators for training, otherwise known as “machine learning.” Machine learning is the process by which the machine generates its intelligence through outside input. </p>
<p>But its behaviour is determined by the information it is provided. And at the moment, AI is a <a href="https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html">white male dominated field</a>.</p>
<p>How can we ensure the evolution of AI doesn’t further encroach on Indigenous rights and data sovereignty?</p>
<h2>AI risks to Indigenous art</h2>
<p>AI has the ability to generate art, and anyone can “<a href="https://creator.nightcafe.studio/creation/bB0ySQMphHOpLFxUQWTX">create</a>” Indigenous art using these tools. Even before AI, <a href="https://www.theguardian.com/artanddesign/2017/aug/09/australias-fake-art-and-tourist-tack-indigenous-artists-fight-back">Aboriginal art has been widely appropriated</a> and reproduced without attribution or acknowledgement, particularly for the tourism industry. </p>
<p>And this could worsen with people now being able to generate art <a href="https://www.science.org/doi/10.1126/science.adh0575">through AI</a>. This is an issue not just experienced by Indigenous people, with <a href="https://bootcamp.uxdesign.cc/why-some-artists-see-ai-generated-art-as-a-threat-to-their-livelihood-f4634b24a5ce">many artists affected by</a> their art styles being misappropriated. </p>
<p>Indigenous art is embedded with history and connects to culture and Country. AI-created Indigenous art would lack this. There are also implications for financial gain bypassing Indigenous artists and going to the producers of the technology. </p>
<p>Including Indigenous people in creating AI, or in deciding what AI can learn, could help minimise the exploitation of Indigenous artists and their art. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-reinforce-discrimination-but-used-correctly-it-could-make-hiring-more-inclusive-207966">AI can reinforce discrimination — but used correctly it could make hiring more inclusive</a>
</strong>
</em>
</p>
<hr>
<h2>What is Indigenous data sovereignty?</h2>
<p>In Australia there is a long history of collecting data <a href="https://theconversation.com/for-too-long-research-was-done-on-first-nations-peoples-not-with-them-universities-can-change-this-163968"><em>about</em></a> Aboriginal and Torres Strait Islander people. But there has been little data collected <em>for</em> or <em>with</em> Aboriginal and Torres Strait Islander people. Aboriginal scholars Maggie Walter and Jacob Prehn write of this in the context of the growing <a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9781003193791-17/indigenous-issues-rights-sovereignty-jacob-prehn-maggie-walter">Indigenous Data Sovereignty movement</a>. </p>
<p>Indigenous Data Sovereignty is concerned with the rights of Indigenous peoples to own, control, access and possess their own data, and decide who to give it to. <a href="https://www.marketplace.org/shows/marketplace-tech/what-we-can-learn-from-an-indigenous-approach-to-ai/">Globally</a>, Indigenous peoples are pushing for formal agreements on <a href="https://acola.org/wp-content/uploads/2019/07/acola-ai-input-paper_indigenous-data-sovereignty_walter-kukutai.pdf">Indigenous Data Sovereignty</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1681180417129582594"}"></div></p>
<p>Many Indigenous people are concerned with how the data involving our knowledges and cultural practices is being used. This has resulted in some Indigenous lawyers finding ways to <a href="https://www.terrijanke.com.au/post/2018/01/29/rights-to-culture-indigenous-cultural-and-intellectual-property-icip-copyright-and-protoc">integrate</a> intellectual property with cultural rights.</p>
<p>Māori scholar Karaitiana Taiuru <a href="https://www.japantimes.co.jp/news/2023/04/10/world/indigenous-language-ai-colonization-worries/">says</a>:</p>
<blockquote>
<p>If Indigenous peoples don’t have sovereignty of their own data, they will simply be re-colonised in this information society.</p>
</blockquote>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1036425197400686593"}"></div></p>
<h2>How mob have been using AI</h2>
<p>Indigenous people are already collaborating on research that draws on Indigenous knowledges and involves AI.</p>
<p>In the wetlands of Kakadu, rangers are using AI and Indigenous knowledges to <a href="https://www.csiro.au/en/news/all/articles/2019/november/magpie-geese-return-ethical-ai-indigenous-knowledge#:%7E:text=In%20the%20wetlands%20of%20Kakadu,geese%20are%20returning%20to%20roost.&text=Have%20you%20seen%20the%20view,heart%20of%20Kakadu%20National%20Park%3F">care for Country</a>. </p>
<p>A weed called para grass is having a negative impact on magpie geese, which have been in decline. While the Kakadu rangers are doing their best to control the issue, the sheer size of the area (two million hectares), makes this difficult. </p>
<p>Collecting and analysing information about magpie geese and the impact of para grass using drones is having a <a href="https://www.csiro.au/en/news/all/articles/2019/november/magpie-geese-return-ethical-ai-indigenous-knowledge#:%7E:text=In%20the%20wetlands%20of%20Kakadu,geese%20are%20returning%20to%20roost.&text=Have%20you%20seen%20the%20view,heart%20of%20Kakadu%20National%20Park%3F">positive influence</a> on goose numbers. </p>
<p>Projects like these are vital, given that biodiversity loss around the globe is driving species extinctions and ecosystem decline at alarming rates. As a result of this collaboration, thousands of magpie geese are returning to Country to roost. </p>
<figure>
<iframe src="https://player.vimeo.com/video/374286893" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Wetlands are “the supermarkets of the bush”</span></figcaption>
</figure>
<p>This project involves Traditional Owners (collectively known as Bininj in the north of Kakadu National Park and Mungguy in the south) working with rangers and researchers to help protect the environment and <a href="https://news.microsoft.com/en-au/features/science-indigenous-knowledge-and-ai-weave-environmental-magic/">preserve biodiversity</a>.</p>
<p>By working with Traditional Owners, monitoring systems were able to be programmed with geographically-specific knowledge, not otherwise recorded, reflecting the connection of Indigenous people with the land. This collaboration highlights the need to ensure Indigenous-led approaches. </p>
<p>In another example, in Sanikiluaq, an Inuit community in Nunavut, Canada, a project called <a href="https://www.arcticwwf.org/the-circle/stories/blending-indigenous-knowledge-and-artificial-intelligence-to-enable-adaptation/">PolArtic</a> uses scientific data with Indigenous knowledges to assess the location of, and manage, fisheries. </p>
<p>Changing climate patterns are affecting the availability of fish, and this is another example where Indigenous knowledges are providing solutions for biodiversity issues caused by the global climate crisis. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1652836119082598400"}"></div></p>
<p><a href="https://www.indigital.net.au">Indigital</a> is an Indigenous-owned profit-for-purpose company founded by Dharug, Cabrogal innovator <a href="https://cms.australianoftheyear.org.au/recipients/mikaela-jade">Mikaela Jade</a>. Jade has worked with traditional owners of Kakadu to use augmented reality to <a href="https://womensagenda.com.au/leadership/how-mikaela-jade-went-from-park-ranger-to-tech-founder-telling-indigenous-stories-through-augmented-reality/">tell their stories on Country</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1697144637524529388"}"></div></p>
<p>Indigital is also providing pathways for mob who are keen to learn more about digital technologies and combine them with their knowledges.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-should-australia-capitalise-on-ai-while-reducing-its-risks-its-time-to-have-your-say-206863">How should Australia capitalise on AI while reducing its risks? It's time to have your say</a>
</strong>
</em>
</p>
<hr>
<h2>Future challenges and opportunities for Indigenous inclusion</h2>
<p>Although AI is a powerful tool, it is limited by the data that inform it. The projects above succeeded because the AI was informed by Indigenous knowledges, provided by Indigenous knowledge holders who have a long-held ancestral relationship with the land, animals and environment.</p>
<p>Research indicates AI is a white male-dominated industry. A global study found <a href="https://pursuit.unimelb.edu.au/articles/the-women-putting-intelligence-in-artificial-intelligence">12% of professionals</a> across all levels were female, with only 4% being people of colour. Indigenous participation was not noted. </p>
<p>In early June, the Australian government’s <a href="https://apo.org.au/node/322938">Safe and Responsible AI in Australia</a> discussion paper found racial and gender biases evident in AI. Racial biases occurred, the paper found, in situations such as where AI had been used to predict criminal behaviour.</p>
<p>The purpose of the paper was to seek feedback on how to lessen the potential risks of harm from AI. Advisory groups and consultation processes were raised as possible responses, but not explored in any real depth.</p>
<p>Indigenous knowledges have a lot to offer in the development of new technologies including AI. Art is part of our cultures, ceremonies, and identity. AI-generated art presents the risk of mass reproduction without Indigenous input or ownership, and misrepresentation of culture.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1629276348832260096"}"></div></p>
<p>The federal government needs to consider how Indigenous Knowledges can inform the machine learning that underpins AI, supporting data sovereignty. There is an opportunity for Australia to become a global leader in pursuing technological advancement ethically.</p>
<p class="fine-print"><em><span>Dr Peita Richards is the recipient of an Office of National Intelligence, National Intelligence Postdoctoral Grant (project number 202308) funded by the Australian Government.</span></em></p><p class="fine-print"><em><span>Bronwyn Carlson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are many programs where people can generate art using AI. However, this comes with a risk of non-Indigenous people generating Indigenous art, which negatively affects Indigenous artists.Bronwyn Carlson, Professor, Indigenous Studies and Director of The Centre for Global Indigenous Futures, Macquarie UniversityPeita Richards, Research Fellow, Charles Sturt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2052332023-06-07T14:04:05Z2023-06-07T14:04:05ZFoetal alcohol syndrome: facial modelling study explores technology to aid diagnosis<figure><img src="https://images.theconversation.com/files/530314/original/file-20230606-17-xwgadc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Advances in facial recognition technology may have useful applications in healthcare.</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>Foetal alcohol syndrome is a lifelong condition <a href="https://www.nhs.uk/conditions/foetal-alcohol-spectrum-disorder/">caused</a> by exposing an unborn baby to alcohol. It’s a pattern of mental, <a href="https://doi.org/10.1111/j.1469-7580.2006.00683.x">physical</a> and behavioural symptoms seen in some people whose mothers consumed alcohol during pregnancy. Not all prenatal alcohol exposure results in the syndrome; it is the most severe form of a range of effects called foetal alcohol spectrum disorders. </p>
<p>South Africa has the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5710622/">highest reported rates</a> of <a href="https://theconversation.com/explainer-foetal-alcohol-spectrum-disorders-9871">foetal alcohol spectrum disorders</a> in the world: 111.1 per 1,000 population. The disorders may affect <a href="https://farrsa.org.za/library/#toggle-id-2">seven million</a> people in the country. The number could be higher because of under-diagnosis. </p>
<p>Foetal alcohol syndrome can’t be reversed. But confirmed diagnosis can have benefits. It can lead to early intervention and therapy (physical, occupational, and speech, among others), and a better <a href="https://farrsa.org.za/wp-content/uploads/2021/11/2021-FASD-Pamphlet-13-Sept-2021.pdf">understanding</a> from parents and teachers. Diagnosis can also ensure that adults are eligible for social services support. </p>
<p>Clinicians use a range of methods to <a href="https://publications.aap.org/pediatrics/article/138/2/e20154256/52445/Updated-Clinical-Guidelines-for-Diagnosing-Fetal">diagnose foetal alcohol syndrome</a>, including assessing abnormal growth and brain function. A key part of the process is looking at the individual’s facial features. Typical <a href="https://farrsa.org.za/library/#toggle-id-1">features</a> are small eye openings, a thin upper lip, and a smooth area between the nose and upper lip. </p>
<p>But visual examination of the facial features can be subjective and often depends on the clinician’s experience and expertise. Another challenge arises in low-resource settings when there aren’t many doctors specially trained to do this.</p>
<p>A more objective and standard way to detect foetal alcohol syndrome early would therefore be useful.</p>
<p>One method being used to aid diagnosis relies on <a href="https://doi.org/10.1111/acer.14875">three-dimensional (3D) surfaces</a> produced by devices that scan the face. The technology is costly and complex. Two-dimensional (2D) images are easier to obtain – a digital camera or smartphone will do – but are not accurate enough on their own for diagnosis.</p>
<p><a href="https://doi.org/10.17159/sajs.2023/12064">Our study</a> sought to explore whether it was possible to use normal 2D face images to approximate 3D surfaces of the face. We showed that it was. Our method involved using 3D models that can change their shape based on a variety of real human faces, combined with 3D facial analysis technology.</p>
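<p>The idea behind such a model can be illustrated with a toy version: a mean face shape plus a weighted sum of learned shape components, where the weights are estimated from 2D landmark positions alone. This is a deliberately simplified sketch – random toy data, an orthographic projection and no pose or camera estimation – and not the pipeline used in the study:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy statistical shape model: a mean face plus k principal components,
# each defined over n 3D landmark points (real models use dense meshes).
n_points, k = 10, 3
mean_shape = rng.normal(size=(n_points, 3))
components = rng.normal(size=(k, n_points, 3))

def reconstruct(coeffs: np.ndarray) -> np.ndarray:
    """3D shape generated by the morphable model for given coefficients."""
    return mean_shape + np.tensordot(coeffs, components, axes=1)

def project_2d(shape3d: np.ndarray) -> np.ndarray:
    """Orthographic projection: keep x and y, drop the depth coordinate."""
    return shape3d[:, :2]

def fit_to_2d(landmarks_2d: np.ndarray) -> np.ndarray:
    """Least-squares estimate of model coefficients from 2D landmarks only."""
    A = np.stack([project_2d(c).ravel() for c in components], axis=1)  # (2n, k)
    b = (landmarks_2d - project_2d(mean_shape)).ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Generate a "photo" of a known synthetic face, then recover its 3D shape.
true_coeffs = np.array([0.5, -1.0, 2.0])
photo_landmarks = project_2d(reconstruct(true_coeffs))
estimated = fit_to_2d(photo_landmarks)
predicted_3d = reconstruct(estimated)
```

<p>Because the model constrains every prediction to lie in the space of plausible human faces, even the information in a flat image is enough to pin down a full 3D surface.</p>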
<p>We argue in our paper that our findings show the technology can improve early detection, intervention and treatment for people affected by foetal alcohol syndrome, particularly in low-resource settings. </p>
<p>We hope to contribute to the global effort to prevent and manage the lifelong consequences of the syndrome and disorders.</p>
<h2>How it would work</h2>
<p>We constructed a <a href="https://doi.org/10.1145/3395208">flexible 3D model</a> that can alter its shape based on a variety of real human faces. The changes are guided by statistical patterns learned from a <a href="https://www.cs.binghamton.edu/%7Elijun/Research/3DFE/3DFE_Analysis.html">dataset of high-quality 3D scans</a> from 98 individuals. This international open-source dataset was carefully curated to represent different demographic groups. </p>
<p>We didn’t have access to image data of individuals affected by foetal alcohol syndrome. We therefore used 2D and 3D images of individuals without this condition to develop and validate our approach. We nevertheless reasoned that our method should work equally well for any scenario where the model and the test subjects are closely matched. </p>
<p>We then set out to develop and validate a machine learning algorithm for predicting 3D faces of unseen subjects, from their 2D face images only, using our 3D model. </p>
<p>This was a pioneering step in our research, where we aimed to create a “smart” tool that could bring flat images to life in three dimensions. The results of the study were encouraging. </p>
<p>Our 3D-from-2D prediction algorithm performed well in three ways:</p>
<ul>
<li><p>capturing facial variations</p></li>
<li><p>representing unique features</p></li>
<li><p>summarising information of faces from 2D images. </p></li>
</ul>
<p>Since we had actual 3D face scans to use for comparison, we were able to calculate the average difference between these scans and the face shapes predicted by our model. This allowed us to measure the error in our fitting, which we found to be in <a href="https://doi.org/10.1109/TCYB.2014.2359056">line with other studies</a>. </p>
<p>We particularly focused on specific regions of the face: the eyes, midface, upper lip, and philtrum (the groove between the nose and the top lip). These regions provide crucial information for clinicians when examining the facial markers of foetal alcohol syndrome. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530325/original/file-20230606-28-vgz1xs.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Facial regions associated with foetal alcohol syndrome on a normal face.</span>
<span class="attribution"><span class="source">Tinashe Mutsvangwa</span></span>
</figcaption>
</figure>
<p>We could accurately predict these facial regions, and concluded from this that our method could form the foundation of an image-based diagnostic tool for foetal alcohol syndrome.</p>
<p>Our study also showed that the quality of our predictions was independent of skin tone. This is a crucial finding. <a href="https://doi.org/10.1179/1743131X14Y.0000000093">Certain 3D scanning technologies have been known to struggle with accurately capturing darker skin tones</a>. This issue is <a href="https://doi.org/10.1016/j.bjps.2019.05.002">being addressed</a>. Nevertheless, our findings gave us confidence that there was additional potential for use of our approach in diverse populations. </p>
<h2>Challenges</h2>
<p>We did identify some limitations. Access to 3D data of individuals with foetal alcohol syndrome remains a challenge. Future research could focus on reducing reconstruction errors to acceptable clinical standards by collecting and analysing larger datasets, including data from underrepresented populations.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/remembering-tania-douglas-a-brilliant-biomedical-engineer-academic-and-friend-161931">Remembering Tania Douglas: a brilliant biomedical engineer, academic and friend</a>
</strong>
</em>
</p>
<hr>
<p><em>Our study is a continuation of the work carried out in collaboration with the late renowned South African biomedical engineer, <a href="https://sajs.co.za/article/view/11067">Tania Douglas</a> of the University of Cape Town.</em></p>
<p class="fine-print"><em><span>Tinashe Ernest Muzvidzwa Mutsvangwa receives funding from the South African National Research Foundation</span></em></p><p class="fine-print"><em><span>Bernhard Egger receives funding from the German research council. </span></em></p><p class="fine-print"><em><span>Felix Atuhaire received funding from European Commission; the South African Department of Science and Innovation; the South African National Research Foundation.</span></em></p>Key to diagnosing foetal alcohol syndrome is an assessment of certain facial features. A 3D facial scan is expensive but 2D images may offer a solution.Tinashe Ernest Muzvidzwa Mutsvangwa, Associate Professor of Biomedical Engineering, University of Cape TownBernhard Egger, Professor for Cognitive Computer Vision, Friedrich-Alexander-Universität Erlangen-NürnbergFelix Atuhaire, Lecturer, Mbarara University of Science and TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2051682023-05-10T16:43:39Z2023-05-10T16:43:39ZVoter ID: most people are terrible at matching faces to photos, making polling checks unreliable<p>On Thursday May 4, for the first time, members of the public voting in local council elections in England were required to bring photo ID to their polling station. <a href="https://news.sky.com/story/local-elections-voters-turned-away-at-polling-stations-for-not-having-the-right-id-12873256">Initial reports</a> suggested that a few people were turned away because they didn’t bring one of the approved forms of photo ID. </p>
<p>But even if they did bring the right documents, such as a driving licence or passport, there’s a question mark over whether the people manning polling stations could tell accurately whether the voter was the person pictured in the ID.</p>
<p>When you present your photo ID to be checked, the person looking at it has to decide if your face matches the picture in the document. In a lab, this is usually <a href="https://link.springer.com/article/10.3758/BRM.42.1.286">done with images and is called “face matching”</a>. Such studies typically present two face images side-by-side and ask people to judge whether the images show the same person or two different people. </p>
<p>While people perform well at this task when they are <a href="https://doi.org/10.1016/j.cognition.2015.05.002">familiar with the person pictured</a>, studies report the error rate can be <a href="https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjop.12260">as high as 35%</a> when those pictured are unfamiliar. Even when people are asked to compare a live person standing in front of them with a photo, a recent study found they still got <a href="https://bpspsychub.onlinelibrary.wiley.com/doi/full/10.1111/bjop.12388">more than 20% of their answers wrong</a>. </p>
<h2>Natural ability</h2>
<p>The people checking our photo ID are almost always unfamiliar with us, so we should expect that this is a difficult, error-prone task for them. And while you might think that people whose job it is to check photo ID would be better at it than the rest of us, <a href="https://doi.org/10.1002/(SICI)1099-0720(199706)11:3%3C211::AID-ACP430%3E3.0.CO;2-O">cashiers</a>, <a href="https://journals.sagepub.com/doi/10.1111/1467-9280.00144">police officers</a> and <a href="https://doi.org/10.1371/journal.pone.0103510">border control officers</a> have all been shown to be as poor at face matching as untrained people. </p>
<p>The study of border control officers also showed they don’t improve at the task as time goes on – there was no relationship between their performance and the number of years they had spent in the job.</p>
<figure class="align-center ">
<img alt="Mosaic of people." src="https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/525210/original/file-20230509-21-ab8560.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Few people are good at matching photos to a real face.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/multi-ethnic-people-different-age-looking-1733849612">fizkes / Shutterstock</a></span>
</figcaption>
</figure>
<p>This suggests that face recognition ability doesn’t change with practice.
While repeated exposure to variable images of one person’s face can help you to recognise them, <a href="https://doi.org/10.1371/journal.pone.0211037">professional facial image comparison courses</a> aimed at training face identification ability have not been shown to produce lasting improvements in performance.</p>
<p>There is, however, an argument for the role of natural ability in face recognition. People known as <a href="https://theconversation.com/are-you-among-australias-best-facial-super-recognisers-take-our-test-to-find-out-150089">“super-recognisers”</a> perform far better than the general population at tests of face recognition, and have been used by police forces to identify criminals. </p>
<p>For example, super-recognisers could be asked to look through images of wanted persons and then try to find them in CCTV footage, or match images caught on CCTV to police mugshots. Some of us are just better than others at these types of task.</p>
<h2>Error-prone task</h2>
<p>But why is it so difficult for most of us to recognise an unfamiliar person across different images? We all know that we look different in different pictures – not many of us would choose to use our passport image on a dating website. And this <a href="https://doi.org/10.1080/17470218.2013.800125">variability in appearance</a> is what makes unfamiliar face matching so difficult. </p>
<p>When we are familiar with someone, we have seen their face many times looking lots of different ways. We have been exposed to a high amount of this “within-person variability”, enabling us to put together a stable representation of that familiar person in our minds. </p>
<figure class="align-center ">
<img alt="Crowd of people" src="https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/525207/original/file-20230509-30-6jscjt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Super-recognisers could be asked to look through images of wanted persons and search for them in CCTV footage.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/blurred-people-modern-hall-788893783">engel.ac / Shutterstock</a></span>
</figcaption>
</figure>
<p>In fact, exposure to within-person variability has been shown to be crucial for <a href="https://doi.org/10.1080/17470218.2015.1136656">learning</a> what a new face looks like. With unfamiliar people, we just haven’t seen enough of their variability to reliably decide whether they look like the image in their photo ID.</p>
<h2>Photo ID at elections</h2>
<p>Here’s what this means for photo ID at elections, which was introduced as an attempt to tackle voter fraud. In fact, aside from the issue of people not having the required form of photo ID in order to vote, having people at polling stations check photo ID may not actually be a reliable way of verifying voters’ identities. </p>
<p>People could be falsely matched to an incorrect ID, or incorrectly turned away on the basis that they don’t match the photo in the document. Unfamiliar face matching is error-prone, and can’t be reliably trained. </p>
<p>So, unless the people at the polling stations are super-recognisers, they may find it difficult and make errors when matching voters to their photo IDs.</p>
<p class="fine-print"><em><span>Katie Gray is a Labour Party member. </span></em></p><p class="fine-print"><em><span>Kay Ritchie does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Research suggests that photo ID checks at polling stations risk voters being turned away because of errors.Kay Ritchie, Senior Lecturer in Cognitive Psychology, University of LincolnKatie Gray, Associate Professor, School of Psychology and Clinical Language Sciences, University of ReadingLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2002512023-02-22T02:54:12Z2023-02-22T02:54:12ZAs livestock theft becomes a growing problem in rural Australia, new technologies offer hope<figure><img src="https://images.theconversation.com/files/511521/original/file-20230221-26-ytz7u0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">AAP/NSW drought stock</span></span></figcaption></figure><p>Last week, it was reported that 700 sheep with an estimated value of $140,000, including nearly 200 valuable merino ewes, were stolen from a Victorian property in a highly sophisticated <a href="https://www.abc.net.au/news/2023-02-16/livestock-thieves-steal-hundreds-of-sheep-rural-victoria/101982058">rural crime operation</a>. Such <a href="https://www.abc.net.au/news/2022-06-06/northern-territory-katherine-cattle-theft/101128430">large-scale rural theft</a> is <a href="https://www.theage.com.au/national/victoria/livestock-theft-leaves-sheepish-farmers-calling-for-action-20220601-p5aqaz.html">increasingly common</a>. </p>
<p>Rural crime is not isolated to certain states. Rather, stock theft is an Australian problem. Evidence from these large-scale thefts shows that offenders use “corridors” across state borders to move stolen rural property and livestock great distances.</p>
<p>Surveys conducted in <a href="https://express.adobe.com/page/H4jeQ3vvA7bsO/">Victoria</a> and <a href="https://express.adobe.com/page/zsV05pknxXl7N/">New South Wales</a> found 70% and 80% of farmers had experienced some type of farm crime in their lifetime, and experienced this victimisation repeatedly. </p>
<p>While farmers experience a variety of crimes, including trespass and illegal shooting on their properties, acquisitive crime – stock theft in particular – is one of the most common crimes faced by farmers.</p>
<p>The impact of “farm crime” is significant. Not only is the farming sector important to the <a href="https://www.agriculture.gov.au/abares/products/insights/snapshot-of-australian-agriculture-2022">Australian economy</a>, but such crimes can have devastating financial, psychological and physical <a href="https://pubag.nal.usda.gov/catalog/7162182">impacts on farmers</a>, rural landowners and communities. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/crime-is-rife-on-farms-yet-reporting-remains-stubbornly-low-heres-how-new-initiatives-are-making-progress-158421">Crime is rife on farms, yet reporting remains stubbornly low. Here's how new initiatives are making progress</a>
</strong>
</em>
</p>
<hr>
<h2>Why does it happen?</h2>
<p>The high rates of theft in farming communities can be explained by <a href="https://express.adobe.com/page/zsV05pknxXl7N/">unique geographic and cultural factors</a> influencing the incidence and response to crime. </p>
<p>Let’s consider geography in more detail. Rational choice theory suggests offenders make decisions to commit crimes by weighing the risks and rewards. The goal of crime prevention then is to increase risks and lower rewards. </p>
<p>In a busy city, for example, crime prevention might include tools such as locks, motion lights or CCTV, while the many people going about their business may deter criminals simply by being present. </p>
<p>The presence of formal guardians, such as the police or security guards, may serve to deter crime too. The urban environment can also be designed and built in such a way as to discourage crime by limiting hiding places, exit points and escape routes. </p>
<p>The rural environment flips all of this on its head. It is often not possible to implement traditional crime prevention tools given the vast amount of wide-open space, nor are locks or gates always practical on a busy working farm. </p>
<p>The low population density means there are very few “eyes in the paddock” to witness and deter crime. A formal police presence is even more sparse, with slower response times than in urban areas. </p>
<p>The environment itself is also less conducive to crime prevention through environmental design due to limited and spread-out infrastructure combined with a myriad of access points. </p>
<p>When we add all of this together, the risk-reward calculation for committing crimes such as stock theft in rural areas is often very favourable to offenders. </p>
<h2>What can we do about it?</h2>
<p>Innovations in policing and agricultural technology appear to offer some promising progress to combat farm crime. </p>
<p>The NSW Police have a dedicated <a href="https://www.facebook.com/RuralCrimeNSWPF/">Rural Crime Prevention Team</a>. It comprises officers with cultural and practical knowledge of rural industry and the necessary training, skills and expertise to deal with farm crime. </p>
<p>This team has deployed innovative <a href="https://www.beefcentral.com/news/nsw-police-launches-operation-stock-check-to-combat-livestock-theft/">techniques</a> to fight rural crime, and their efforts have contributed to <a href="https://ruralcriminology.org/index.php/IJRC/article/view/9106">increases in satisfaction</a> with the police and, most importantly, in the reporting of rural crime by farmers. </p>
<p>Despite this, police are still operating in an environment that presents serious difficulties in preventing, investigating and clearing farm crime. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/illegal-hunters-are-a-bigger-problem-on-farms-than-animal-activists-so-why-arent-we-talking-about-that-126513">Illegal hunters are a bigger problem on farms than animal activists – so why aren't we talking about that?</a>
</strong>
</em>
</p>
<hr>
<p>There are two key issues at work. The first is that farmers may check on stock only intermittently, and so be unaware of a theft for some time. The second is difficulty in tracking and identifying stolen stock. </p>
<p>New technology offers some solutions here. The Centre for Rural Criminology (UNE) staged a <a href="https://cpb-ap-se2.wpmucdn.com/blog.une.edu.au/dist/3/1351/files/2022/09/Ceres-Tag-An-Evaluation-for-the-Prevention-Interruption-and-Reduction-of-Livestock-Theft.pdf">mock theft of livestock</a>, with a live police intervention, to evaluate the ability of a <a href="https://cerestag.com/">smart animal ear tag</a> to combat stock theft. The results were <a href="https://www.une.edu.au/connect/news/2022/09/report-released-on-stock-theft-prevention-ear-tag">very promising</a>. </p>
<p>Using the movement and location data provided by the tag, the farmers were alerted to the stock theft within minutes of the thieves entering the paddock. This enabled a rapid and effective response and recovery by the police.</p>
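<p>The alerting described above can be sketched as a simple geofence check: raise an alarm whenever a tagged animal's GPS fix falls outside the paddock boundary. This is a toy illustration only; the vendor's actual logic is not public, and the coordinates below are invented:</p>

```python
# Minimal geofence sketch: alert when a tagged animal's GPS fix falls
# outside the paddock boundary. A toy illustration, not the vendor's logic.

def in_paddock(lat, lon, bounds):
    """bounds = (min_lat, min_lon, max_lat, max_lon) rectangle."""
    min_lat, min_lon, max_lat, max_lon = bounds
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

PADDOCK = (-30.50, 151.60, -30.48, 151.64)  # invented coordinates

def check_fix(animal_id, lat, lon):
    """Return an alert string if the fix is outside the paddock, else None."""
    if not in_paddock(lat, lon, PADDOCK):
        return f"ALERT: {animal_id} outside paddock at ({lat}, {lon})"
    return None

print(check_fix("ewe-042", -30.49, 151.62))  # inside: None
print(check_fix("ewe-042", -30.46, 151.62))  # north of boundary: alert
```

<p>A real deployment would use polygon boundaries and movement-speed heuristics rather than a rectangle, but the decision structure is the same.</p>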
<p>Another <a href="https://stoktake.au/">new technology</a> applies <a href="https://www.mdpi.com/2073-4395/11/11/2365">facial recognition</a> to stock by drawing on small variations in the shape and patterns of <a href="https://www.une.edu.au/connect/news/2021/12/facial-recognition-comes-for-cattle">an animal’s muzzle</a>, which are as distinct as a human fingerprint. </p>
<p>Farmers are able to capture photos of livestock using a smartphone or tablet, then upload this to an AI-powered cloud platform to identify animals. Ideally, law enforcement could use this image recognition technology to identify stolen cattle and return them to their owners. </p>
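<p>Under the hood, such systems typically enrol each animal as a feature vector (an "embedding") computed from its muzzle photo, then identify a new photo by finding the nearest enrolled vector. A minimal sketch with invented vectors, since the real platform's model and API are not public:</p>

```python
import math

# Toy muzzle-matching sketch: enrol embeddings, match by cosine similarity.
# Vectors here are invented; a real system derives them from a trained model.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

herd = {  # animal_id -> enrolled muzzle embedding (made up)
    "cow-101": [0.9, 0.1, 0.3],
    "cow-102": [0.2, 0.8, 0.5],
}

def identify(probe, threshold=0.95):
    """Return the best-matching animal, or None if no match is close enough."""
    best_id, best_sim = max(
        ((aid, cosine(probe, emb)) for aid, emb in herd.items()),
        key=lambda t: t[1],
    )
    return best_id if best_sim >= threshold else None

print(identify([0.88, 0.12, 0.31]))  # cow-101
```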
<p>The theft of stock is a serious and growing problem in Australia. Large-scale and sophisticated thefts are being reported with increasing frequency, and farmers, rural communities and the Australian economy all suffer as a result. </p>
<p>Dedicated policing efforts in combination with new agricultural technologies may increase the risks of committing farm crimes and turn the tables on the offenders.</p>
<p class="fine-print"><em><span>The mock-theft research trial conducted by the Centre for Rural Criminology discussed in this article was funded, in part, by Ceres Tag. Kyle Mulrooney and Alistair Harkness are co-directors of the Centre for Rural Criminology at the University of New England.</span></em></p>Preventing theft on farms is much more difficult than in urban areas for many reasons – but new technological developments may help curb the crimes.Kyle J.D. Mulrooney, Senior Lecturer in Criminology, Co-director of the Centre for Rural Criminology, University of New EnglandAlistair Harkness, Senior Lecturer in Criminology, Co-Director of the Centre for Rural Criminology, University of New EnglandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1938952022-11-04T17:24:28Z2022-11-04T17:24:28ZFacial recognition: why we shouldn’t ban the police from using it altogether<figure><img src="https://images.theconversation.com/files/493503/original/file-20221104-14-6bf1nw.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">100% accurate?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/face-recognition-technology-concept-illustration-big-1745290397">varuna</a></span></figcaption></figure><p>The UK police are being accused of breaking ethical standards by using live facial recognition technology to help fight crime. <a href="https://www.mctd.ac.uk/join-calls-to-ban-police-use-of-facial-recognition-says-minderoo-centre-researchers/">A recent report</a> by the University of Cambridge into trials of the technology by forces in London and south Wales was particularly concerned about the “lack of robust redress” for anyone suffering harm. It spoke of the need to “protect human rights and improve accountability” before facial recognition is used more widely. </p>
<p>The Cambridge team wants a broad ban on police using the technology, and they are not alone. UK civil liberties group Big Brother Watch has been running a “<a href="https://bigbrotherwatch.org.uk/campaigns/stop-facial-recognition/">stop facial recognition</a>” campaign as the government mulls how to <a href="https://www.gov.uk/government/news/uk-sets-out-proposals-for-new-ai-rulebook-to-unleash-innovation-and-boost-public-trust-in-the-technology">regulate AI technologies</a>. Meanwhile, 12 NGOs <a href="https://edri.org/wp-content/uploads/2022/10/CZ-Minister-Digitalisation-letter-AI-act.pdf">recently called on</a> EU legislators to completely ban it, along with various other forms of biometric identification, in their upcoming <a href="https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF">AI Act</a>. </p>
<p>Simply banning this technology would be a mistake, however. In my view, there’s a good case for a more measured approach. </p>
<h2>Growing police use</h2>
<p>The police forces in London and south Wales appear to be the only two in the UK currently using live facial recognition, which uses <a href="https://ico.org.uk/media/2619985/ico-opinion-the-use-of-lfr-in-public-places-20210618.pdf">artificial intelligence software</a> to compare an individual’s digital facial image with an existing facial image to estimate similarity. Manchester Police trialled it but were <a href="https://www.express.co.uk/news/uk/1031939/manchester-news-police-surveillance-technology-trafford-centre-manchester">forced to pause</a> by the <a href="https://www.gov.uk/government/organisations/surveillance-camera-commissioner">surveillance camera commissioner</a> in 2018 for not obtaining the necessary approvals. </p>
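<p>The similarity estimate at the heart of such a system reduces to a simple decision rule: compare the live "probe" image's descriptor against each enrolled watchlist descriptor and flag anyone whose score clears a threshold. A minimal sketch with toy numbers (real systems use learned embeddings and calibrated scores; everything below is an invented illustration):</p>

```python
# Sketch of a live watchlist check: compare a probe face descriptor with
# each enrolled descriptor and flag when similarity clears a threshold.
# Descriptors and names here are toys; real systems use learned embeddings.

def similarity(a, b):
    """Simple inverse-distance score in (0, 1] for toy vectors."""
    dist = sum(abs(x - y) for x, y in zip(a, b))
    return 1.0 / (1.0 + dist)

WATCHLIST = {"suspect-A": [0.2, 0.7], "suspect-B": [0.9, 0.1]}

def check(probe, threshold=0.8):
    scores = {aid: similarity(probe, emb) for aid, emb in WATCHLIST.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Lowering the threshold catches more true matches but raises false alerts;
# raising it does the reverse -- the core operational trade-off.
print(check([0.21, 0.72]))  # suspect-A
```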
<p>In 2020 an appellate court also <a href="https://www.judiciary.uk/wp-content/uploads/2020/08/R-Bridges-v-CC-South-Wales-ors-Judgment.pdf">ruled against</a> south Wales’ use of the technology, concluding the force’s legal framework for deployment effectively gave them unlimited discretion to do so. It made no difference to the court that the police had notified the public (known as overt operational deployment).</p>
<p>Despite this ruling, facial recognition can still broadly be used by police, although numerous <a href="https://www.psni.police.uk/sites/default/files/2022-10/02158%20Facial%20Recognition%20Technology.pdf">other forces</a> have said <a href="https://www.gmp.police.uk/foi-ai/greater-manchester-police/disclosure-2019/april/gsa-45619/">they are not</a> doing so at present. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Woman on phone while numerous people behind her are being scanned by facial recognition technology" src="https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=465&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=465&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=465&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=584&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=584&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493451/original/file-20221104-11-ngyqww.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=584&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Any UK police force can use facial recognition under the current legal framework.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/facial-recognition-search-surveillance-person-modern-1481376347">Trismegist San</a></span>
</figcaption>
</figure>
<p>The London Metropolitan Police increasingly use facial recognition to locate missing persons, suspects, <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-sop.pdf">witnesses</a> and victims. They have scanned individuals’ faces in city squares and at public events, using a <a href="https://www.judiciary.uk/wp-content/uploads/2020/08/R-Bridges-v-CC-South-Wales-ors-Judgment.pdf">facial recognition camera</a> typically placed on a police vehicle or street pole. The <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-sop.pdf">public are alerted</a> to the deployment through notices as they enter the recognition zone – unless that compromises policing tactics or deployment is urgent. </p>
<p>Between February 2020 and July 2022, the Met deployed the technology in eight locations including <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/deployment-records/lfr-deployment-grid.pdf">Piccadilly Circus</a>. They <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/deployment-records/lfr-deployment-grid.pdf">are estimated</a> to have viewed more than 150,000 faces, leading to nine arrests but also eight occasions where they targeted the wrong person.</p>
<h2>The pros and cons</h2>
<p>Facial recognition has evolved in recent years, for instance to work in real time, but inaccuracies and errors remain. In New Jersey, <a href="https://incidentdatabase.ai/cite/288">228 wrongful arrests</a> were reportedly made using (non-real time) facial recognition between January 2019 and April 2021. One <a href="https://edition.cnn.com/2021/04/29/tech/nijeer-parks-facial-recognition-police-arrest/index.html">black American</a> spent 11 days in jail after being wrongly identified. False identifications can also lead to everything from missed flights to distressing police interrogations. </p>
<p>Specific groups are disproportionately affected. <a href="https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf#page=4">A 2019 US study</a> found that women are two-to-five times more likely to be falsely identified, while the risks are ten-to-100 times greater for black and Asian faces than white ones. Given that police already disproportionately <a href="https://www.theguardian.com/uk-news/2020/oct/27/black-people-nine-times-more-likely-to-face-stop-and-search-than-white-people">stop and search</a> ethnic minorities, this shortcoming in the technology could potentially even be used to sustain such practices. </p>
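<p>The effect of those multipliers compounds quickly, as a short calculation shows. The base false-identification rate below is a hypothetical assumption for illustration; the multipliers reflect the upper bounds of the ranges reported in the 2019 US study cited above:</p>

```python
# Rough illustration of how demographic error multipliers compound.
# The base rate is a hypothetical assumption; the multipliers are the
# upper bounds of the ranges reported in the 2019 US study.

BASE_FALSE_ID_RATE = 0.001  # hypothetical: 1 in 1,000 scans

multipliers = {
    "baseline": 1,
    "women (upper bound)": 5,
    "Black and Asian faces (upper bound)": 100,
}

for group, m in multipliers.items():
    per_100k = BASE_FALSE_ID_RATE * m * 100_000
    print(f"{group}: ~{per_100k:.0f} false identifications per 100,000 scans")
```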
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Crowd in London protesting about police stop and search" src="https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493452/original/file-20221104-11-ohcw3y.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Facial recognition is not necessarily part of the solution.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/london-17th-april-2021-kill-bill-1957541050">BradleyStearn</a></span>
</figcaption>
</figure>
<p>Another risk is that police covertly install facial recognition cameras permanently. This could help the state to crack down on public protests, for example. There is already a pending <a href="https://www.hrw.org/news/2020/07/08/moscows-use-facial-recognition-technology-challenged">legal challenge against Russia</a> before the European Court of Human Rights over such practices, and fear of state surveillance is one reason why many want this technology banned. </p>
<p>Nonetheless, facial recognition has its benefits. <a href="https://www.securityindustry.org/2020/07/16/facial-recognition-success-stories-showcase-positive-use-cases-of-the-technology/">It can help</a> police to find serious criminals, including terrorists, not to mention <a href="https://www.reuters.com/article/us-india-crime-children-idUSKBN2081CU">missing children</a> and people at risk of harming themselves or others. </p>
<p>Like it or not, we also live under colossal corporate surveillance capitalism already. The <a href="https://papltd.co.uk/top-10-countries-and-cities-by-number-of-cctv-cameras/">UK and US</a> have among the most installed CCTV cameras in the world. London residents are filmed <a href="https://www.theguardian.com/uk-news/2021/oct/02/how-cctv-played-a-vital-role-in-tracking-sarah-everard-and-her-killer">300 times</a> a day on average, and police can usually use the data without a search warrant. As if that wasn’t bad enough, big tech companies <a href="https://guardian.ng/features/what-does-big-tech-know-about-you/">know almost everything</a> personal about us. Worrying about live facial recognition is inconsistent with our tolerance of all this surveillance. </p>
<h2>A better approach</h2>
<p>Instead of an outright ban, even of covert facial recognition, I’m in favour of a statutory law to clarify when this technology can be deployed. For one thing, police in the UK can currently use it to track people on their watchlists, but this can include even those charged with minor crimes. There are also no uniform criteria for deciding who can be listed. </p>
<p>Under the EU’s <a href="https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF">proposed law</a>, facial recognition could only be deployed against those suspected of crimes carrying a maximum sentence of upwards of three years. That would appear to be a reasonable cut-off. </p>
<p>Secondly, a court or similar independent body should always have to authorise deployment, including assessing whether it would be proportionate to the police objective in question. In the Met, authorisation currently has to come from a police officer ranked <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-sop.pdf">superintendent or higher</a>, and <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-sop.pdf">they do</a> have to <a href="https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-policy-document.pdf">make a call</a> on proportionality – but this should not be a police decision.</p>
<p>We also need clear, auditable ethical standards for what happens during and after the technology is deployed. Images of wrongly identified people should be deleted immediately, for instance. Unfortunately, Met policy on this is unclear at present. The Met is trying to use the technology responsibly in other respects, but this is not enough in itself. </p>
<p>Last but not least, the <a href="https://news.stanford.edu/2021/05/14/researchers-call-bias-free-artificial-intelligence/">potential for discrimination</a> should be tackled by legally requiring developers to train the AI on a diverse enough range of communities to meet a minimum threshold. This sort of framework should allow society to enjoy the benefits of live facial recognition without the harms. Simply banning something that requires a delicate balancing of competing interests is the wrong move entirely.</p>
<p class="fine-print"><em><span>Asress Adimi Gikay does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Civil liberties groups in the UK and elsewhere want to stop the police from using this technology altogether, but that’s going too far.Asress Adimi Gikay, Senior Lecturer in AI, Disruptive Innovation and Law, Brunel University LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1920032022-10-28T12:30:49Z2022-10-28T12:30:49ZThe White House’s ‘AI Bill of Rights’ outlines five principles to make artificial intelligence safer, more transparent and less discriminatory<figure><img src="https://images.theconversation.com/files/492180/original/file-20221027-21-2gwe3k.jpg?ixlib=rb-1.1.0&rect=186%2C71%2C4465%2C2850&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Many AI algorithms, like facial recognition software, have been shown to be discriminatory to people of color.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/mature-african-man-scanning-his-face-with-mobile-royalty-free-image/1209011777?phrase=facial%20recognition%20black%20person&adppopup=true">Prostock-Studio/iStock via Getty Images</a></span></figcaption></figure><p>Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the U.S. Tech companies have largely been left to regulate themselves in this arena, potentially leading to decisions and situations that have garnered criticism. </p>
<p>Google <a href="https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/">fired an employee</a> who publicly raised concerns over how a certain type of AI can contribute to <a href="https://www.washington.edu/news/2021/03/10/large-computer-language-models-carry-environmental-social-risks/">environmental and social problems</a>. Other AI companies have developed products that are used by organizations <a href="https://theintercept.com/2021/01/30/lapd-palantir-data-driven-policing/">like the Los Angeles Police Department</a> where they have been shown to <a href="https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/">bolster existing racially biased policies</a>. </p>
<p>There are some government <a href="https://www.gao.gov/products/gao-21-519sp">recommendations</a> and <a href="https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai">guidance</a> regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/">Blueprint for an AI Bill of Rights</a>. </p>
<p>The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/">blueprint</a> spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of U.S. residents. </p>
<p><a href="https://scholar.google.com/citations?user=5zZFOikAAAAJ&hl=en&oi=ao">As a computer scientist</a> who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A group of people sitting in chairs with one person raising their hand." src="https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/492136/original/file-20221027-37192-cl732q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It is critically important to include feedback from the people who are going to to be most affected by an AI system – especially marginalized communities – during development.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/group-of-people-sitting-during-a-meeting-royalty-free-image/1423632924?phrase=diverse%20group%20feedback&adppopup=true">FilippoBacci/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Improving systems for all</h2>
<p>The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.</p>
<p>To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems <a href="https://detroitcommunitytech.org/?q=datajustice">without having much say in their development</a>. Research has shown that <a href="https://morethancode.cc/2018/08/20/morethancode-full-report.html">direct and genuine community involvement in the development process is important</a> for deploying technologies that have a positive and lasting impact on those communities.</p>
<p>The second principle focuses on the <a href="https://www.microsoft.com/en-us/research/video/the-new-jim-code-reimagining-the-default-settings-of-technology-society/">known problem of algorithmic discrimination</a> within AI systems. A well-known example of this problem is how <a href="https://apnews.com/article/lifestyle-technology-business-race-and-ethnicity-racial-injustice-b920d945a6a13db1e1aee44d91475205">mortgage approval algorithms discriminate against minorities</a>. The document asks for companies to develop AI systems that do not treat people differently based on their race, sex or other <a href="https://www.senate.ca.gov/content/protected-classes">protected class status</a>. It suggests companies employ tools such as equity assessments that can help assess how an AI system may impact members of exploited and marginalized communities.</p>
<p>These first two principles address big issues of bias and fairness found in AI development and use.</p>
<h2>Privacy, transparency and control</h2>
<p>The final three principles outline ways to give people more control when interacting with AI systems. </p>
<p>The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use <a href="https://www.deceptive.design/">deceptive design</a> to manipulate users into <a href="https://www.theverge.com/2021/3/16/22333506/california-bans-dark-patterns-opt-out-selling-data">giving away their data</a>. The blueprint calls for practices such as collecting a person’s data only with their consent, and requesting that consent in a way the person can understand.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A speaker sitting on a table." src="https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/492128/original/file-20221027-13-naukmp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Smart speakers have been caught collecting and storing conversations without users’ knowledge.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/digital-background-smart-assistant-royalty-free-image/1320780003">Olemedia/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used, as well as the ways in which an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Child Services. Research has shown that the agency uses <a href="https://doi.org/10.52214/cjrl.v11i4.8741">outsourced AI systems to predict child maltreatment</a>, systems that most people don’t realize are being used, even when they are being investigated.</p>
<p>The AI Bill of Rights provides a guideline that, in this example, the New Yorkers affected by these systems should be notified that an AI was involved and should have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can <a href="https://hbr.org/2022/06/building-transparency-into-ai-projects">reduce the risk of errors or misuse</a>.</p>
<p>The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable. </p>
<p>As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of an actual person.</p>
<h2>Smart guidelines, no enforceability</h2>
<p>The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable. </p>
<p>It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will <a href="https://fortune.com/2022/05/18/private-sector-online-privacy-health-apps-data/">continue to push for self-regulation</a>.</p>
<p>One other issue that I see within the AI Bill of Rights is that it fails to directly call out <a href="https://www.blackpast.org/african-american-history/combahee-river-collective-statement-1977/">systems of oppression</a> – like <a href="https://theconversation.com/explainer-what-is-systemic-racism-and-institutional-racism-131152">racism</a> or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to <a href="https://www.theguardian.com/society/2019/oct/25/healthcare-algorithm-racial-biases-optum">worse care for Black patients</a>. I have argued that anti-Black racism should be <a href="https://ieeexplore.ieee.org/document/9606203">directly addressed when developing AI systems</a>. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a <a href="https://doi.org/10.1145/3531146.3533157">known issue within AI development</a>.</p>
<p>Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.</p>
<p class="fine-print"><em><span>Christopher Dancy receives funding from the National Science Foundation for his work on AI. </span></em></p>Many AI algorithms, like facial recognition software, have been shown to be discriminatory to people of color, especially those who are Black.Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn StateLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1878632022-09-26T20:32:50Z2022-09-26T20:32:50ZDebate: How to stop our cities from being turned into AI jungles<figure><img src="https://images.theconversation.com/files/486555/original/file-20220926-12-8kztvd.jpg?ixlib=rb-1.1.0&rect=0%2C383%2C3464%2C2359&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In the city of London, security cameras can even be found in cemeteries. In 2021 the mayor's office launched an effort to establish guidelines for research around emerging technology.</span> <span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/e/e5/City_of_London_Cemetery_Columbarium_security_camera_2_lighter.jpg">Acabashi/Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>As artificial intelligence grows more ubiquitous, its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so they are defining the frontiers of AI governance.</p>
<p>Some examples of how cities have been taking the lead include the <a href="https://citiesfordigitalrights.org/">Cities Coalition for Digital Rights</a>, the <a href="https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/">Montreal Declaration for Responsible AI</a>, and the <a href="https://opendialogueonai.com/">Open Dialogue on AI Ethics</a>. Others can be found in San Francisco’s <a href="https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html">ban of facial-recognition technology</a>, and New York City’s push for <a href="https://www.cbsnews.com/news/new-york-city-artificial-intelligence-hiring-restriction/">regulating the sale of automated hiring systems</a> and creation of an <a href="https://www1.nyc.gov/site/ampo/index.page">algorithms management and policy officer</a>. Urban institutes, universities and other educational centres have also been forging ahead with a range of <a href="https://fari.brussels/">AI ethics initiatives</a>.</p>
<p>These efforts point to an emerging paradigm that has been referred to as <a href="https://ailocalism.org/">AI Localism</a>. It’s a part of a larger phenomenon often called <a href="https://www.brookings.edu/book/the-new-localism/">New Localism</a>, which involves cities taking the lead in regulation and policymaking to develop context-specific approaches to a variety of problems and challenges. We have also seen an increased uptake of city-centric approaches <a href="https://china.elgaronline.com/view/edcoll/9781788973274/9781788973274.xml">within international law frameworks</a>. </p>
<p>In so doing, municipal authorities are filling gaps left by insufficient state, national or global governance frameworks related to AI and other complex issues. Recent years, for example, have seen the emergence of <a href="https://ir.lawnet.fordham.edu/faculty_scholarship/611/">“broadband localism”</a>, in which local governments address the digital divide; and <a href="https://www.law.nyu.edu/sites/default/files/upload_documents/Rubinstein%20Privacy%20Localism.pdf">“privacy localism”</a>, both in response to challenges posed by the increased use of data for law enforcement or recruitment.</p>
<p>AI localism encompasses a wide variety of issues, stakeholders, and contexts. In addition to bans on AI-powered facial recognition, local governments and institutions are looking at procurement rules pertaining to AI use by public entities, public registries of local governments’ AI systems, and public education programs on AI. But even as initiatives and case studies multiply, we still lack a systematic method to assess their effectiveness – or even the very need for them. This limits policymakers’ ability to develop appropriate regulation and more generally stunts the growth of the field.</p>
<h2>Building an AI Localism framework</h2>
<p>Below are ten principles to help systematise our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:</p>
<ul>
<li><p><strong>Principles provide a North Star for governance:</strong> Establishing and articulating a clear set of guiding principles is an essential starting point. One example is the <a href="https://www.london.gov.uk/publications/emerging-technology-charter-london">Emerging Technology Charter for London</a>, launched by the mayoral office in 2021 to outline “practical and ethical guidelines” for research around emerging technology and smart-city technology pilots. Another is the <a href="https://metropole.nantes.fr/files/pdf/numerique-innovation/Charte-donnee.pdf">data charter</a> rolled out in Nantes, France, to underscore the local government’s commitment to data sovereignty, protection, transparency, and innovation. Such efforts help interested parties chart a course that effectively balances the potential and challenges posed by AI while affirming a commitment to openness and transparency on data use for the public.</p></li>
<li><p><strong>Public engagement provides a social license:</strong> Establishing trust is essential to fostering responsible use of technology as well as broader acceptance and uptake by the public. Forms of public engagement – crowdsourcing, awareness campaigns, mini-assemblies, and more – can help to build trust, and should be part of a deliberative process undertaken by policymakers. For example, the California Department of Fair Employment and Housing held their <a href="http://celavoice.org/">first virtual public hearing</a> with citizens and worker advocacy groups on the growing use of AI in hiring and human resources, and the potential for technological bias in procurement.</p></li>
<li><p><strong>AI literacy enables meaningful engagement:</strong> The goal of AI literacy is to encourage familiarity with the technology itself as well as with associated ethical, political, economic and cultural issues. For example, the <a href="https://montrealethics.ai/">Montreal AI Ethics Institute</a>, a non-profit focused on advancing AI literacy, provides free, timely, and digestible information about AI and AI-related happenings from across the world.</p></li>
</ul>
<figure class="align-right ">
<img alt="Security cameras on a pole in New York City." src="https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/486554/original/file-20220926-19-b72ydd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In New York City, the city has established an Algorithms Management and Policy Officer to govern the use of how data captured by security cameras and other devices is managed.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/f/f0/NYPD_Surveillance_Tech_2.jpg">Cyprian Latewood/Wikipedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<ul>
<li><p><strong>Tap into local expertise:</strong> Policymakers should tap into cities’ AI expertise by establishing or supporting research centres. Two examples are the <a href="https://claire-ai.org/">Confederation of Laboratories for Artificial Intelligence Research in Europe</a> (CLAIRE), a pan-European project with a focus on AI use in cities, and <a href="https://howbusyistoon.com/">“How Busy Is Toon”</a>, a website developed by Newcastle City Council and Newcastle University to provide real-time transit information about the city centre.</p></li>
<li><p><strong>Innovate in how transparency is provided:</strong> To build trust and foster engagement, AI Localism should encompass time-tested transparency principles and practices. For example, Amsterdam and Helsinki <a href="https://venturebeat.com/2020/09/28/amsterdam-and-helsinki-launch-algorithm-registries-to-bring-transparency-to-public-deployments-of-ai/">disclose AI use</a> and explain <a href="https://www.antibes-juanlespins.com/administration/acces-aux-documents-administratifs">how algorithms are employed</a> for specific purposes. In addition, AI Localism can innovate in how transparency is provided, instilling awareness and systems to identify and overcome <a href="https://aiblindspot.media.mit.edu/">“AI blind spots”</a> and other forms of unconscious bias.</p></li>
<li><p><strong>Establish means for accountability and oversight:</strong> One of the signal features of AI Localism is a recognition of the need for accountability and oversight to ensure that principles of responsive governance are being adhered to. Examples include New York City’s <a href="https://www1.nyc.gov/office-of-the-mayor/news/554-19/mayor-de-blasio-signs-executive-order-establish-algorithms-management-policy-officer">Algorithms Management and Policy Officer</a>, Singapore’s <a href="https://oecd.ai/en/dashboards/policy-initiatives/2019-data-policyInitiatives-24364">Advisory Council on the Ethical Use of AI and Data</a>, and Seattle’s <a href="https://www.seattle.gov/tech/initiatives/privacy/surveillance-technologies/surveillance-advisory-working-group">Surveillance Advisory Working Group</a>.</p></li>
<li><p><strong>Signal boundaries through binding laws and policies:</strong> Principles are only as good as their implementation and enforcement. Binding legislation such as New York City’s <a href="https://techcrunch.com/2021/07/09/new-york-city-biometrics-law/">Biometrics Privacy Law</a> requires businesses to give clear notice when they collect biometric data, limits how they can use that data, and prohibits selling or profiting from it. Such regulation sends a clear message to consumers that their data rights and protections are upheld, and holds corporations accountable for respecting privacy.</p></li>
<li><p><strong>Use procurement to shape responsible AI markets:</strong> As municipal and other governments have done in other areas of public life, cities should use procurement policies to encourage responsible AI initiatives. For instance, the Berkeley, California Council passed an <a href="https://berkeley.municipal.codes/BMC/2.99.010">ordinance</a> requiring that public departments justify the use of new surveillance technologies and that the benefits of these tools outweigh the harms prior to procurement.</p></li>
<li><p><strong>Establish data collaboratives to tackle asymmetries:</strong> Data collaboratives are an emerging form of intersectoral partnership, in which private data is reused and deployed toward the public good. In addition to yielding new insights and innovations, such partnerships can also be powerful tools for breaking down the data asymmetries that both underlie and drive so many wider socio-economic inequalities. Encouraging data collaboratives, by identifying possible partnerships and matching supply and demand, is thus an important component of AI Localism. Initial efforts include the <a href="https://amdex.eu/">Amsterdam Data Exchange</a>, which allows for data to be securely shared to address local issues.</p></li>
<li><p><strong>Make good governance strategic:</strong> Too many AI strategies don’t include governance and too many governance approaches are not strategic. It is thus imperative that cities have a clear vision on how they see data and AI being used to improve local wellbeing. Charting an <a href="https://ajuntament.barcelona.cat/digital/sites/default/files/mesura_de_govern_intel_ligencia_artificial_eng.pdf">AI strategy</a>, as was undertaken by the Barcelona City Council in 2021, can create avenues to embed smart AI use across agencies and open up AI awareness to residents to make responsible data use and considerations a common thread rather than a siloed exercise within local government.</p></li>
</ul>
<p>AI Localism is an emergent area, and both its practice and research remain in flux. The technology itself continues to change rapidly, offering something of a moving target for governance and regulation. This state of flux highlights the need for the type of framework outlined above. Rather than playing catch-up, responding reactively to successive waves of technological innovation, policymakers can respond more consistently, and responsibly, from a principled bedrock that takes into account the often competing needs of various stakeholders.</p>
<p class="fine-print"><em><span>Stefaan G. Verhulst ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>As states and nations struggle to regulate growing AI use, municipal authorities are often leading the way. An emerging paradigm known as AI Localism can help us better define the way forward.Stefaan G. Verhulst, Co-Founder and Chief Research and Development Officer of the Governance Laboratory (GovLab), New York UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1910752022-09-26T20:02:24Z2022-09-26T20:02:24ZAvoiding a surveillance society: how better rules can rein in facial recognition tech<figure><img src="https://images.theconversation.com/files/486434/original/file-20220926-65632-zn3w0w.jpg?ixlib=rb-1.1.0&rect=53%2C17%2C6000%2C3547&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/0lOkeLbdsBw">JR Korpa / Unsplash</a></span></figcaption></figure><p>The human face is special. It is simultaneously public and personal. Our faces reveal sensitive information about us: who we are, of course, but also our gender, emotions, health status and more.</p>
<p>Lawmakers in Australia, like those around the world, never anticipated our face data would be harvested on an industrial scale, then used in everything from our smartphones to police CCTV cameras. So we shouldn’t be surprised that our laws have not kept pace with the extraordinary rise of facial recognition technology.</p>
<p>But what kind of laws do we need? The technology can be used for both good and ill, so neither banning it nor the current free-for-all seem ideal.</p>
<p>However, regulatory failure has left our community vulnerable to harmful uses of facial recognition. To fill the legal gap, we propose a “<a href="https://www.uts.edu.au/human-technology-institute/explore-our-work/facial-recognition-technology-towards-model-law">model law</a>”: an outline of legislation that governments around Australia could adopt or adapt to regulate risky uses of facial recognition while allowing safe ones.</p>
<h2>The challenge of facial recognition technologies</h2>
<p>The use cases for facial recognition technologies seem limited only by our imagination. Many of us think nothing of using facial recognition to unlock our electronic devices. Yet the technology has also been trialled or implemented throughout Australia in a wide range of situations, including <a href="https://www.theage.com.au/national/victoria/minority-report-crackdown-on-facial-recognition-technology-in-schools-20181005-p5080p.html">schools</a>, <a href="https://www.abf.gov.au/entering-and-leaving-australia/smartgates/arrivals">airports</a>, <a href="https://www.abc.net.au/news/2022-07-13/bunnings-kmart-investigated-over-facial-recognition-technology/101233372">retail stores</a>, clubs and <a href="https://www.cbs.sa.gov.au/facial-recognition-technology">gambling venues</a>, and <a href="https://www.police.nsw.gov.au/crime/terrorism/terrorism_categories/facial_recognition">law enforcement</a>. </p>
<p>As the use of facial recognition grows at an <a href="https://www.mordorintelligence.com/industry-reports/facial-recognition-market">estimated 20%</a> annually, so too does the risk to humans – especially in high-risk contexts like policing. </p>
<p>In the US, reliance on error-prone facial recognition tech has resulted in numerous instances of injustice, especially involving Black people. These include the wrongful <a href="https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html">arrest and detention of Robert Williams</a>, and the wrongful <a href="https://www.cnet.com/news/politics/black-teen-kicked-out-of-skating-rink-after-facial-recognition-error/">exclusion of a young Black girl</a> from a roller rink in Detroit.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facial-recognition-is-on-the-rise-but-the-law-is-lagging-a-long-way-behind-185510">Facial recognition is on the rise – but the law is lagging a long way behind</a>
</strong>
</em>
</p>
<hr>
<p>Many of the world’s biggest tech companies – including <a href="https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/">Meta</a>, <a href="https://www.reuters.com/technology/exclusive-amazon-extends-moratorium-police-use-facial-recognition-software-2021-05-18/">Amazon</a> and <a href="https://azure.microsoft.com/en-us/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/">Microsoft</a> – have reduced or discontinued their facial recognition-related services. They have cited concerns about consumer safety and a lack of effective regulation.</p>
<p>This is laudable, but it has also prompted a kind of “regulatory-market failure”. While those companies have pulled back, other companies with fewer scruples have taken a bigger share of the facial recognition market.</p>
<p>Take the American company Clearview AI. It scraped billions of face images from social media and other websites without the consent of the affected individuals, then created a face-matching service that it sold to the Australian Federal Police and other law enforcement bodies around the world. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australian-police-are-using-the-clearview-ai-facial-recognition-system-with-no-accountability-132667">Australian police are using the Clearview AI facial recognition system with no accountability</a>
</strong>
</em>
</p>
<hr>
<p>In 2021, the Australian Information & Privacy Commissioner found that both <a href="https://www.oaic.gov.au/updates/news-and-media/clearview-ai-breached-australians-privacy">Clearview AI</a> and <a href="https://www.oaic.gov.au/updates/news-and-media/afp-ordered-to-strengthen-privacy-governance">the AFP</a> had breached Australia’s privacy law, but enforcement actions like this are rare.</p>
<p>However, Australians want better regulation of facial recognition. This has been shown in the <a href="https://tech.humanrights.gov.au/artificial-intelligence/facial-recognition-biometric-tech">Australian Human Rights Commission’s 2021 report</a>, the <a href="https://www.choice.com.au/consumers-and-data/data-collection-and-use/how-your-data-is-used/articles/kmart-bunnings-and-the-good-guys-using-facial-recognition-technology-in-store">2022 CHOICE investigation</a> into the use of facial recognition technology by major retailers, and in research we at the Human Technology Institute have commissioned as part of our <a href="https://www.uts.edu.au/human-technology-institute/explore-our-work/facial-recognition-technology-towards-model-law">model law</a>. </p>
<h2>Options for facial recognition reform</h2>
<p>What options does Australia have? The first is to do nothing. But this would leave us unprotected from harmful uses of facial recognition technologies, and keep us on our current trajectory towards mass surveillance.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/large-scale-facial-recognition-is-incompatible-with-a-free-society-126282">Large-scale facial recognition is incompatible with a free society</a>
</strong>
</em>
</p>
<hr>
<p>Another option would be to ban facial recognition tech altogether. Some jurisdictions have indeed instituted moratoriums on the technology, but they contain many exceptions (for positive uses), and are at best a temporary solution.</p>
<p>In our view, the better reform option is a law that regulates facial recognition technologies according to how risky they are. Such a law would encourage uses of facial recognition with clear public benefit, while protecting against harmful uses of the technology.</p>
<h2>A risk-based law for facial recognition technology regulation</h2>
<p>Our model law would require anyone developing or deploying facial recognition systems in Australia to conduct a rigorous impact assessment to evaluate the human rights risk.</p>
<p>As the risk level increases, so too would the legal requirements or restrictions. Developers would also be required to comply with a technical standard for facial recognition, aligned with international standards for AI performance and good data management.</p>
<p>The model law contains a general prohibition on high-risk uses of facial recognition applications. For example, a “facial analysis” application that purported to assess individuals’ sexual orientation and then make decisions about them would be prohibited. (Sadly, this is not a <a href="https://www.washingtonpost.com/news/morning-mix/wp/2017/09/12/researchers-use-facial-recognition-tools-to-predict-sexuality-lgbt-groups-arent-happy/">far-fetched hypothetical</a>.) </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/486445/original/file-20220926-14387-rumoex.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The ‘model law’ for facial recognition would assess the risk of various applications and apply controls accordingly.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/IhcSHrZXFs4">Bernard Hermant / Unsplash</a></span>
</figcaption>
</figure>
<p>The model law also provides three exceptions to the prohibition on high-risk facial recognition technology:</p>
<ol>
<li><p>the regulator could permit a high-risk application if it considers the application to be justified under international human rights law</p></li>
<li><p>there would be a specific legal regime for law enforcement agencies, including a “face warrant” scheme that would provide independent oversight as with other such warrants</p></li>
<li><p>high-risk applications may be used in academic research, with appropriate oversight.</p></li>
</ol>
<h2>Review by the regulator and affected individuals</h2>
<p>Any law would need to be enforced by a regulator with appropriate powers and resources. Who should this be?</p>
<p>The majority of the stakeholders we consulted – including business users, technology firms and civil society representatives – proposed that the Office of the Australian Information Commissioner (OAIC) would be well suited to be the regulator of facial recognition. For certain sensitive users – such as the military and certain security agencies – there may also need to be a specialised oversight regime.</p>
<h2>The moment for reform is now</h2>
<p>Never have we seen so many groups and individuals from across civil society, industry and government so engaged and aligned on the need for facial recognition technology reform. This is reflected in support for the model law from both the Technology Council of Australia and CHOICE. </p>
<p>Given the extraordinary rise in uses of facial recognition, and an emerging consensus among stakeholders, the federal attorney-general should seize this moment and lead national reform. The first priority is to introduce a federal bill – which could easily be based on our model law. The attorney-general should also collaborate with the states and territories to harmonise Australian law on facial recognition.</p>
<p>This proposed reform is important on its own terms: we cannot allow facial recognition technologies to remain effectively unregulated. It would also demonstrate how Australia can use law to protect against harmful uses of new technology, while simultaneously incentivising innovation for public benefit. </p>
<hr>
<p><em>More information about the model law can be found in our report <a href="https://www.uts.edu.au/human-technology-institute/explore-our-work/facial-recognition-technology-towards-model-law">Facial recognition technology: Towards a model law</a>.</em></p>
<p class="fine-print"><em><span>Nicholas Davis is employed by the Human Technology Institute (HTI), which is part of the University of Technology Sydney (UTS). The Facial Recognition Model Law Project, to which this article refers, was undertaken by HTI, with funding from UTS and support from the UTS Centre for Social Justice & Inclusion. UTS has received donations from, among others, Microsoft, which provided a donation to the UTS Technology for Social Good program to advance work on responsible technology.
</span></em></p><p class="fine-print"><em><span>Edward Santow is employed by the Human Technology Institute (HTI), which is part of the University of Technology Sydney (UTS). The Facial Recognition Model Law Project, to which this article refers, was undertaken by HTI, with funding from UTS and support from the UTS Centre for Social Justice & Inclusion. UTS has received donations from, among others, Microsoft, which provided a donation to the UTS Technology for Social Good program to advance work on responsible technology.
From 2016-2021, Edward Santow served as the Human Rights Commissioner at the Australian Human Rights Commission (AHRC). As noted in this article, the AHRC undertook a major project on human rights and technology, which he led. It included consideration of facial recognition and other biometric technology.</span></em></p><p class="fine-print"><em><span>Lauren Perry is employed by the Human Technology Institute (HTI), which is part of the University of Technology Sydney (UTS). The Facial Recognition Model Law Project, to which this article refers, was undertaken by HTI, with funding from UTS and support from the UTS Centre for Social Justice & Inclusion. UTS has received donations from, among others, Microsoft, which provided a donation to the UTS Technology for Social Good program to advance work on responsible technology.
She also previously worked at the Australian Human Rights Commission on the Human Rights and Technology Project. </span></em></p>Facial recognition technology has set us on a path to mass surveillance – but it’s not too late to change course.Nicholas Davis, Industry Professor of Emerging Technology and Co-Director, Human Technology Institute, University of Technology SydneyEdward Santow, Professor & Co-Director, Human Technology Institute, University of Technology SydneyLauren Perry, Policy and Projects Manager - Human Technology Institute, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1859532022-09-21T13:51:44Z2022-09-21T13:51:44ZGovernments’ use of automated decision-making systems reflects systemic issues of injustice and inequality<figure><img src="https://images.theconversation.com/files/474638/original/file-20220718-22-v48mdv.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4083%2C2719&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Software and technology can process large amounts of data instantaneously, making them highly attractive for government use.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “<a href="https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=25156">stumbling zombie-like into a digital welfare dystopia</a>.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.</p>
<p>Alston was worried for good reason. Research shows that ADS can be used in ways that <a href="https://www.ruhabenjamin.com/race-after-technology">discriminate</a>, <a href="https://us.macmillan.com/books/9781250074317/automatinginequality">exacerbate inequality</a>, <a href="https://citizenlab.ca/2018/09/bots-at-the-gate-human-rights-analysis-automated-decision-making-in-canadas-immigration-refugee-system/">infringe upon rights</a>, <a href="https://doi.org/10.1093/oso/9780197579411.001.0001">sort people into different social groups</a>, <a href="https://www.ohchr.org/en/press-releases/2019/10/world-stumbling-zombie-digital-welfare-dystopia-warns-un-human-rights-expert">wrongly limit access to services</a> and <a href="https://www.routledge.com/Surveillance-as-Social-Sorting-Privacy-Risk-and-Automated-Discrimination/Lyon/p/book/9780415278737">intensify surveillance</a>. </p>
<p>For example, families have been <a href="https://time.com/5840609/algorithm-unemployment/">bankrupted</a> and <a href="https://www.theguardian.com/australia-news/2019/feb/06/robodebt-faces-landmark-legal-challenge-over-crude-income-calculations">forced into crises</a> after being falsely accused of benefit fraud. </p>
<p>Researchers have identified how <a href="http://gendershades.org/">facial recognition systems</a> and <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">risk assessment tools</a> are more likely to wrongly identify people with darker skin tones and women. These systems have already led to <a href="https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html">wrongful arrests</a> and misinformed sentencing decisions.</p>
<p>Often, people only learn that they have been affected by an ADS application when one of two things happen: after things go wrong, as was the case with the <a href="https://www.wired.co.uk/article/alevel-exam-algorithm">A-levels scandal in the United Kingdom</a>; or when controversies are made public, as was the case with <a href="https://www.cbc.ca/news/canada/clearview-ai-facial-recognition-1.6286016">uses of facial recognition technology in Canada and the United States</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-level-results-why-algorithms-get-things-so-wrong-and-what-we-can-do-to-fix-them-142879">A-level results: why algorithms get things so wrong – and what we can do to fix them</a>
</strong>
</em>
</p>
<hr>
<h2>Automated problems</h2>
<p>Greater transparency, responsibility, accountability and public involvement in the design and use of ADS are important to protect people’s rights and privacy. There are three main reasons for this: </p>
<ol>
<li>these systems can <a href="https://datajusticelab.org/data-harm-record/">cause a lot of harm</a>; </li>
<li>they are being introduced faster than necessary protections can be implemented; and</li>
<li>there is a lack of opportunity for those affected to make <a href="https://datajusticelab.org/2022/08/31/new-research-report-civic-participation-in-the-datafied-society/">democratic decisions</a> about whether they should be used and, if so, how.</li>
</ol>
<p>Our latest research project, <a href="https://www.carnegieuktrust.org.uk/publications/automating-public-services-learning-from-cancelled-systems"><em>Automating Public Services: Learning from Cancelled Systems</em></a>, provides findings aimed at helping prevent harm and contribute to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies. </p>
<p>Researching the factors and rationales leading to the cancellation of ADS helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.</p>
<h2>Reasons for cancellation</h2>
<p>A range of factors influence decisions to cancel the use of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action. </p>
<p>Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a table showing the factors influencing the decision to cancel and ADS system" src="https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=306&fit=crop&dpr=1 600w, https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=306&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=306&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=385&fit=crop&dpr=1 754w, https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=385&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/474650/original/file-20220718-57395-gd0t0k.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=385&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A range of factors influence decisions to cancel the use of ADS.</span>
<span class="attribution"><span class="source">(Data Justice Lab)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.</p>
<p>In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step to greater transparency. Some countries, <a href="https://www.gov.uk/government/collections/algorithmic-transparency-standard">such as the U.K.</a>, have stated plans to do so, while other countries, like Canada, have yet to move in this direction.</p>
<p>Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.</p>
<h2>Accountability and transparency</h2>
<p>ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation. </p>
<p>This engagement should include the option for community members to decide areas where they do not want ADS to be used. Examples of good government practice can include taking the time to ensure independent expert reviews and impact assessments that focus on equality and human rights are carried out. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a list of recommendations for governments using ADS systems" src="https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=650&fit=crop&dpr=1 600w, https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=650&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=650&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=817&fit=crop&dpr=1 754w, https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=817&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/474724/original/file-20220718-76955-qvn3pm.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=817&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Governments can take several different approaches to implement ADS in a more accountable manner.</span>
<span class="attribution"><span class="source">(Data Justice Lab)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and redress wrong assessments. </p>
<p>There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible. </p>
<p>Government agencies need to have the resources and skills to thoroughly examine how they procure ADS.</p>
<h2>A politics of care</h2>
<p>All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights. </p>
<p>Key questions need to be asked about how the use of ADS creates blind spots: scoring and sorting systems increase the distance between administrators and the people they are meant to serve, and can oversimplify, infer guilt, wrongly target and stereotype people through categorization and quantification.</p>
<p>Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation and being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community review.</p>
<p class="fine-print"><em><span>Joanna Redden receives funding from the Social Sciences and Humanities Research Council of Canada and the Natural Sciences and Engineering Research Council of Canada. She is a member of the New Democratic Party.</span></em></p>In the pursuit of efficiency, governments turn to technological solutions, like automated decision-making systems. But these systems are often problematic.Joanna Redden, Associate Professor, Information and Media Studies, Western UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1883302022-08-18T15:50:10Z2022-08-18T15:50:10ZFacial recognition: UK plans to monitor migrant offenders are unethical – and they won’t work<figure><img src="https://images.theconversation.com/files/478550/original/file-20220810-12-7rlycq.jpg?ixlib=rb-1.1.0&rect=11%2C0%2C3822%2C2132&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facial recognition technology struggles to recognise darker skin tones</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/future-face-detection-technological-3d-scanning-1628451346">Nazar Kantora/ Shutterstock</a></span></figcaption></figure><p>One afternoon in our lab, my colleague and I were testing our new prototype for a facial recognition software on a laptop. The software used a video camera to scan our faces and guess our age and gender. It correctly guessed my age but when my colleague, who was from Africa, tried it out, the camera didn’t detect a face at all. We tried turning on lights in the room, adjusted her seating and background, but the system still struggled to detect her face. </p>
<p>After many failed attempts, the software finally detected her face – but got her age wrong and gave the wrong gender. </p>
<p>Our software was only a prototype, but the difficulty working with darker skin tones reflects the experiences of people of colour who try to use facial recognition technology. In recent years, researchers have <a href="http://proceedings.mlr.press/v81/buolamwini18a.html?mod=article_inline">demonstrated the unfairness in facial recognition systems</a>, finding that the software and algorithms developed by big technology companies are more accurate at recognising lighter skin tones than darker ones. </p>
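<p><em>The kind of disparity these audits report can be illustrated with a minimal sketch (the audit data below is invented purely for illustration): group face-detection outcomes by skin-tone category and compare per-group detection rates.</em></p>

```python
from collections import defaultdict

def detection_rates(results):
    """Per-group detection rate from (group, was_face_detected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [detected, total]
    for group, detected in results:
        counts[group][0] += int(detected)
        counts[group][1] += 1
    return {g: d / t for g, (d, t) in counts.items()}

# Hypothetical audit: 100 attempts per group, with the detector
# missing far more faces in the darker-skin-tone group.
audit = ([("lighter", True)] * 95 + [("lighter", False)] * 5
         + [("darker", True)] * 70 + [("darker", False)] * 30)

print(detection_rates(audit))  # {'lighter': 0.95, 'darker': 0.7}
```

<p><em>Real evaluations, such as those cited above, use curated benchmark datasets rather than toy counts, but the comparison they report is essentially this one.</em></p>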
<p>Yet recently, the Guardian reported that the UK Home Office <a href="https://www.theguardian.com/politics/2022/aug/05/facial-recognition-smartwatches-to-be-used-to-monitor-foreign-offenders-in-uk">plans</a> to make migrants convicted of criminal offences scan their faces five times a day using a smart watch equipped with facial recognition technology. A spokesperson for the Home Office said facial recognition technology would not be used on asylum seekers arriving in the UK illegally, and that the report on its use on migrant offenders was “purely speculative”.</p>
<h2>Get the balance right</h2>
<p>There will always be a tension between <a href="https://link.springer.com/article/10.1007/s10551-016-3233-4">national security and individual rights</a>. Security for the many can take priority over privacy for a few. For example, in November 2015 when the terrorist group ISIS attacked Paris, killing 130 people, the Paris police <a href="https://www.france24.com/en/20151118-text-found-binned-mobile-phone-bataclan-encryption-paris-attacks">found a phone</a> that one of the terrorists had abandoned at the scene, and read messages stored on it. </p>
<p>There is a lot of nuance to this issue. We must ask ourselves, whose rights are curbed by a breach of privacy, <a href="https://people.cs.umass.edu/%7Eelm/papers/FRTintheWild.pdf">to what degree</a>, and who judges if a breach of privacy is in balance with the severity of a criminal offence? </p>
<p>In the case of offenders photographing their faces several times a day, we could argue that, if the crime is serious, the breach of privacy is in the national security interest of most people. The government is entitled to make such a decision, as it is responsible for the safety of its citizens. For minor offences, however, facial recognition may be too strong a measure. </p>
<p>In its plan, the Home Office has not differentiated between minor and serious offenders; nor has it provided convincing evidence that facial recognition improves people’s compliance with immigration law. </p>
<p>Worldwide, we know facial recognition is <a href="https://core.ac.uk/works/54207934">more likely to be used to police people of colour</a> by monitoring their movements more often than those of white people. This is despite the fact that facial recognition systems are <a href="https://bigbrotherwatch.org.uk/wp-content/uploads/2018/05/Face-Off-final-digital-1.pdf">more accurate</a> with lighter than darker skin tones. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=387&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=387&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=387&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=486&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=486&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478551/original/file-20220810-20-ifsy3x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=486&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Home Office reportedly wants migrant offenders to scan their faces five times a day.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/biometric-identification-africanamerican-woman-scanning-face-1314016142">Prostock-studio/Shutterstock</a></span>
</figcaption>
</figure>
<p>Taking a picture of your face and uploading it five times a day could feel demeaning. Glitches with darker skin tones could make checking into the system more than just a frustrating experience. There could be serious consequences for offenders if the technology fails.</p>
<p>The flaws in facial recognition might also create national security issues for the government. For example, it might misidentify the face of one person as another. Facial recognition technology is not ready for something as important as national security.</p>
<h2>The alternative</h2>
<p>Another option the government is considering for migrant offenders is location tracking. Electronic monitoring <a href="https://www.gov.uk/government/news/tens-of-thousands-more-criminals-to-be-tagged-to-cut-crime-and-protect-victims">already keeps track of people with criminal records in the UK</a> using ankle tags, and it would make sense to apply the same technology to migrant and non-migrant offenders equally.</p>
<p>Location tracking comes with its own <a href="https://core.ac.uk/works/68321747">ethical issues for personal privacy</a> and <a href="https://core.ac.uk/works/8113207?source=1&algorithmId=15&similarToDoc=8565700&similarToDocKey=CORE&recSetID=fa863cbe-c736-4eda-9069-9ddd18e7d1f2&position=1&recommendation_type=same_repo&otherRecs=8113207,8512890,18775066,4206661,8542302">racial surveillance</a>. Due to the intrusive nature of electronic monitoring, some people who wear these devices can <a href="https://www.theguardian.com/politics/2022/aug/05/facial-recognition-smartwatches-to-be-used-to-monitor-foreign-offenders-in-uk">suffer from depression, anxiety or suicidal thoughts</a>.</p>
<p>But location tracking technology <a href="https://arxiv.org/abs/1810.03568">gives options</a>, at least. For example, data can be handled sensitively by following <a href="https://link.springer.com/article/10.1007/s11948-013-9462-3">data privacy guidelines</a> such as the UK’s <a href="https://www.gov.uk/data-protection">Data Protection Act 2018</a>. We can minimise the amount of location data we collect by only tracking someone’s location once or twice a day. We can anonymise the data, only making people’s names visible when and where necessary.</p>
<p>The UK Home Office could use location data to flag up suspicious activity, such as if an offender enters an area from which they have been barred. For minor offenders, we need not track the person’s exact location but only the general area, such as a postcode or town. </p>
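<p><em>The safeguards described in the two paragraphs above – coarse areas instead of exact locations, anonymised identities, and a flag when a barred area is entered – can be sketched in a few lines. All names, postcodes and barred areas below are hypothetical.</em></p>

```python
import hashlib

BARRED_AREAS = {"NE1"}  # hypothetical postcode districts an offender may not enter

def pseudonymise(name: str) -> str:
    """Stable pseudonym: records stay linkable without exposing the name."""
    return hashlib.sha256(name.encode()).hexdigest()[:12]

def coarse_area(postcode: str) -> str:
    """Keep only the outward code (the district), not the full postcode."""
    return postcode.split()[0]

def check_in(name: str, postcode: str) -> dict:
    """One of the (at most once-or-twice-daily) check-ins: store only a
    pseudonym and a coarse area, and flag entry into a barred area."""
    area = coarse_area(postcode)
    return {"id": pseudonymise(name), "area": area, "flagged": area in BARRED_AREAS}

record = check_in("Jane Doe", "NE1 7RU")  # hypothetical person and postcode
print(record["area"], record["flagged"])  # NE1 True
```

<p><em>In this sketch the full postcode and the real name never reach the stored record; only a flagged check-in would prompt anyone to look up the identity – one way of making names visible only when and where necessary.</em></p>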
<p>As a society, we should strive to maintain the dignity and privacy of people, except in the most serious cases. More importantly, we should ensure technology does not have the potential to discriminate against a group of people based on their ethnicity. The law and regulation should apply equally to all people.</p>
<p>The Home Office spokesperson added: “The public expects us to monitor convicted foreign national offenders … Foreign criminals should be in no doubt of our determination to deport them, and the government is doing everything possible to increase the number of foreign national offenders being deported.”</p>
<p class="fine-print"><em><span>Namrata Primlani has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie-Sklodowska Curie grant agreement No 813508.
Namrata has been a Mozilla Fellow with the Mozilla Foundation until July 2022.
Namrata is a member of A+ Alliance Feminist AI Research Network f<A+i>r. </span></em></p>Our research shows the technology simply isn’t ready yet.Namrata Primlani, Doctoral Researcher, Northumbria University, NewcastleLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1871392022-07-22T12:31:36Z2022-07-22T12:31:36ZSurveillance is pervasive: Yes, you are being watched, even if no one is looking for you<figure><img src="https://images.theconversation.com/files/475248/original/file-20220720-11760-u3jww7.jpg?ixlib=rb-1.1.0&rect=0%2C6%2C4632%2C3064&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Video cameras on city streets are only the most visible way your movements can be tracked.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/AmericaProtestsRethinkingPolice/afb962959ce14ce4bbd6fd03729c2e57/photo">AP Photo/Mel Evans</a></span></figcaption></figure><p>The U.S. has the <a href="https://www.techspot.com/news/83061-report-finds-us-has-largest-number-surveillance-cameras.html">largest number of surveillance cameras per person</a> in the world. Cameras are omnipresent on city streets and in hotels, restaurants, malls and offices. They’re also used to <a href="https://www.mordorintelligence.com/industry-reports/united-states-video-surveillance-market">screen passengers</a> for the Transportation Security Administration. And then there are <a href="https://www.vox.com/recode/23207072/amazon-ring-privacy-police-footage">smart doorbells</a> and other home security cameras. </p>
<p>Most Americans are aware of video surveillance of public spaces. Likewise, most people know about online tracking – and <a href="https://morningconsult.com/2021/04/27/state-privacy-congress-priority-poll/">want Congress to do something about it</a>. But as a researcher who <a href="https://scholar.google.com/citations?hl=en&user=tMOMmqsAAAAJ&view_op=list_works&sortby=pubdate">studies digital culture and secret communications</a>, I believe that to understand how pervasive surveillance is, it’s important to recognize how physical and digital tracking work together. </p>
<p>Databases can correlate <a href="https://theconversation.com/impending-demise-of-roe-v-wade-puts-a-spotlight-on-a-major-privacy-risk-your-phone-reveals-more-about-you-than-you-think-182504">location data from smartphones</a>, the growing number of private cameras, <a href="https://www.wired.com/story/license-plate-reader-alpr-surveillance-abortion/">license plate readers</a> on police cruisers and toll roads, and <a href="https://www.theverge.com/2019/12/9/21002515/surveillance-cameras-globally-us-china-amount-citizens">facial recognition technology</a>, so if law enforcement wants to track where you are and where you’ve been, they can. With <a href="https://casetext.com/case/people-v-riley-263">a warrant</a>, they can also use <a href="https://www.upturn.org/work/mass-extraction/">cellphone search</a> equipment: connecting your device to a <a href="https://csrc.nist.gov/Projects/Mobile-Security-and-Forensics/Mobile-Forensics">mobile device forensic tool</a> lets them extract and <a href="https://www.forensicmag.com/518341-Digital-Forensics-Window-Into-the-Soul/">analyze all your data</a>. </p>
<p>However, private <a href="https://www.wired.com/story/opinion-data-brokers-know-where-you-are-and-want-to-sell-that-intel/">data brokers</a> also track this kind of data and <a href="https://issues.org/data-brokers-police-surveillance/">help surveil citizens</a> – without a warrant. There is a large market for personal data, compiled from information people volunteer, information people unwittingly yield – for example, <a href="https://www.digitaltrends.com/mobile/how-to-control-which-apps-access-your-location-on-ios-and-android/">via mobile apps</a> – and information that is stolen in data breaches. Among the customers for this largely unregulated data are <a href="https://www.vox.com/recode/22565926/police-law-enforcement-data-warrant">federal, state and local law enforcement agencies</a>.</p>
<h2>How you are tracked</h2>
<p>Whether or not you pass under the gaze of a surveillance camera or license plate reader, you are tracked by your mobile phone. GPS tells weather and map apps your location, nearby Wi-Fi networks help pinpoint it, and <a href="https://cyberforensics.com/services/cellular-triangulation/">cell-tower triangulation</a> tracks your phone. <a href="https://www.eurekalert.org/news-releases/955287">Bluetooth</a> can identify and track your smartphone too – and not just for COVID-19 contact tracing, Apple’s “Find My” service or connecting headphones.</p>
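Cell-tower triangulation boils down to intersecting distance estimates from several towers. A minimal sketch of the geometry, with made-up coordinates and distances and no resemblance to any carrier's actual system:

```python
# Illustrative only: locating a point from its distances to three known sites.
# Tower positions and distances below are invented numbers.

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three circle equations (x-xi)^2 + (y-yi)^2 = ri^2.
    Subtracting circle 1 from circles 2 and 3 yields two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # assumes towers are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A phone at roughly (3, 4), measured from towers at (0,0), (10,0) and (0,10):
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 8.0622577, 6.7082039)
print(x, y)  # approximately 3.0 4.0
```

Real systems work with noisy signal-strength and timing estimates from many towers, but the principle is the same.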
<p>People volunteer their locations for <a href="https://us.norton.com/internetsecurity-privacy-ridesharing-privacy-ride.html">ride-sharing</a> or for games like <a href="https://www.pokemon.com/us/app/pokemon-go/">Pokemon Go</a> or <a href="https://www.ingress.com">Ingress</a>, but apps can also <a href="https://research.checkpoint.com/2021/mobile-app-developers-misconfiguration-of-third-party-services-leave-personal-data-of-over-100-million-exposed/">collect and share location</a> without your knowledge. Many late-model cars feature telematics that track locations – for example, <a href="https://www.popularmechanics.com/cars/how-to/a7469/your-car-is-spying-on-you-but-whom-is-it-spying-for/">OnStar or Bluelink</a>. All this makes opting out impractical.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="over-the-shoulder view of a young woman on a city street holding a smart phone displaying a map" src="https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/475532/original/file-20220721-9531-4dkmtj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Your phone knows where you are, and that information can readily make its way from apps to data brokers and on to law enforcement.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/woman-using-gps-navigation-app-on-smartphone-to-royalty-free-image/1306359673">Oscar Wong/Moment via Getty Images</a></span>
</figcaption>
</figure>
<p>The same thing is true online. Most websites feature <a href="https://themarkup.org/blacklight">ad trackers and third-party cookies</a>, which are stored in your browser whenever you visit a site. They identify you when you visit other sites so advertisers can follow you around. Some websites also use <a href="https://www.malwarebytes.com/keylogger">key logging</a>, which monitors what you type into a page before hitting submit. Similarly, session recording monitors mouse movements, clicks, scrolling and typing, even if you don’t click “submit.” </p>
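The third-party cookie mechanism described above can be sketched in a few lines. In this toy simulation (the domain `tracker.example` and the page names are invented), one ID cookie, sent back to the same ad server from every site that embeds its pixel, stitches separate visits into a single profile:

```python
import uuid

class Tracker:
    """Stands in for an ad server embedded as a hidden pixel on many sites."""
    def __init__(self):
        self.profiles = {}  # visitor id -> list of pages where the pixel fired

    def serve_pixel(self, cookie_jar, page):
        # The browser sends back any cookie it already holds for this domain.
        visitor_id = cookie_jar.get("tracker.example")
        if visitor_id is None:
            # First sighting of this browser: set a unique ID cookie.
            visitor_id = str(uuid.uuid4())
            cookie_jar["tracker.example"] = visitor_id
        self.profiles.setdefault(visitor_id, []).append(page)
        return visitor_id

tracker = Tracker()
browser_cookies = {}  # one user's cookie jar, shared across every site they visit
for page in ["news.example/article", "shop.example/shoes", "health.example/symptoms"]:
    uid = tracker.serve_pixel(browser_cookies, page)

# The tracker now holds a single cross-site browsing profile for this visitor.
print(tracker.profiles[uid])
```

Blocking third-party cookies breaks exactly this loop, which is why trackers have moved toward fingerprinting and first-party workarounds.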
<p>Ad trackers know which sites you browsed and when, which browser you used, and your device’s internet address. <a href="https://www.politico.com/news/2022/07/18/google-data-states-track-abortions-00045906">Google</a> and Facebook are among the main beneficiaries, but there are many <a href="https://privacybee.com/blog/these-are-the-largest-data-brokers-in-america/">data brokers</a> <a href="https://www.cbsnews.com/news/the-data-brokers-selling-your-personal-information/">slicing and dicing such information</a> by religion, ethnicity, political affiliations, social media profiles, income and medical history for profit.</p>
<h2>Big Brother in the 21st century</h2>
<p>People may implicitly consent to some loss of privacy in the interest of perceived or real security – for example, in stadiums, on the road and at airports, or in return for cheaper online services. But these trade-offs benefit individuals far less than the companies aggregating data. <a href="https://theconversation.com/why-some-americans-dont-trust-the-census-130109">Many Americans</a> are suspicious of government <a href="https://www.pewtrusts.org/en/research-and-analysis/reports/2010/01/20/most-view-census-positively-but-some-have-doubts">censuses</a>, yet they willingly share their jogging routines on apps like <a href="https://www.theguardian.com/world/2018/jan/28/fitness-tracking-app-gives-away-location-of-secret-us-army-bases">Strava</a>, which has <a href="https://www.popularmechanics.com/technology/apps/a15912407/strava-app-military-bases-fitbit-jogging/">revealed</a> sensitive and secret <a href="https://www.washingtonpost.com/world/a-map-showing-the-users-of-fitness-devices-lets-the-world-see-where-us-soldiers-are-and-what-they-are-doing/2018/01/28/86915662-0441-11e8-aa61-f3391373867e_story.html">military data</a>. </p>
<p>In the <a href="https://scholarworks.law.ubalt.edu/ublr/vol50/iss1/2/">post-Roe v. Wade legal environment</a>, there are <a href="https://www.nytimes.com/2022/05/19/opinion/privacy-technology-data.html">concerns</a> not only about <a href="https://www.stopspying.org/pregnancy-panopticon">period tracking</a> apps but about <a href="https://www.techdirt.com/2022/05/04/data-brokers-selling-location-data-of-americans-who-visit-abortion-clinics/">correlating data</a> on physical movements with online searches and <a href="https://theconversation.com/impending-demise-of-roe-v-wade-puts-a-spotlight-on-a-major-privacy-risk-your-phone-reveals-more-about-you-than-you-think-182504">phone data</a>. Legislation like the recent <a href="https://legiscan.com/TX/bill/SB8/2021">Texas Senate Bill 8</a> anti-abortion law invokes “private individual enforcement mechanisms,” raising questions about who gets <a href="https://www.politico.com/news/2022/07/18/google-data-states-track-abortions-00045906">access to tracking data</a>. </p>
<p>In 2019, the <a href="https://www.vox.com/platform/amp/2019/10/31/20939890/missouri-abortion-clinic-hearing-periods-roe-wade">Missouri Department of Health</a> stored data about the periods of patients at the state’s lone Planned Parenthood clinic, correlated with state medical records. Communications <a href="https://www.icij.org/inside-icij/2015/05/be-paranoid-how-one-reporter-learned-danger-metadata/">metadata</a> can reveal who you are in touch with, when you were where, and who else was there – whether they are in your contacts or not.</p>
<p>Location data from apps on hundreds of millions of phones lets the <a href="https://www.politico.com/news/2022/07/18/dhs-location-data-aclu-00046208">Department of Homeland Security</a> track people. Health <a href="https://www.democraticmedia.org/CDD-Wearable-Devices-Big-Data-Report">wearables</a> pose similar risks, and medical experts note a <a href="https://pubmed.ncbi.nlm.nih.gov/31146589/">lack of awareness</a> about the security of data they collect. Note the resemblance of your Fitbit or smartwatch to ankle bracelets people wear during court-ordered monitoring.</p>
<p>The most pervasive user of tracking in the U.S. is Immigration and Customs Enforcement (ICE), which <a href="https://americandragnet.org/">amassed a vast amount of information</a> without judicial, legislative or public oversight. Georgetown University Law Center’s Center on Privacy and Technology <a href="https://americandragnet.org/">reported on how ICE searched</a> the driver’s license photographs of 32% of all adults in the U.S., tracked cars in cities home to 70% of adults, and updated address records for 74% of adults when those people activated new utility accounts.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&rect=0%2C6%2C4068%2C2697&q=45&auto=format&w=1000&fit=clip"><img alt="A streetlight post with a second boom with a round black sphere hanging off the end" src="https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&rect=0%2C6%2C4068%2C2697&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/475235/original/file-20220720-26-5djlgp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Video cameras and license plate readers, like those attached to this Baltimore streetlight, monitor and record the comings and goings of pedestrians and cars on city streets.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/BaltimoreAerialSurveillance/2bd5050cf1874c85911db98629799f45/photo">AP Photo/Julio Cortez</a></span>
</figcaption>
</figure>
<h2>No one is watching the watchers</h2>
<p>Nobody expects to be invisible on streets, at borders, or in shopping centers. But who has access to all that surveillance data, and how long is it stored? There is <a href="https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/">no single U.S. privacy law</a> at the federal level, and states cope with a regulatory patchwork; only five states – California, Colorado, Connecticut, Utah and Virginia – <a href="https://www.natlawreview.com/article/state-us-state-privacy-laws-comparison">have privacy laws</a>. </p>
<p>It is possible to <a href="https://www.consumerreports.org/privacy/how-to-turn-off-location-services-on-your-smartphone-a8219252827/">limit location tracking</a> on your phone, but not to avoid it completely. Data brokers are supposed to mask your <a href="https://dataprivacymanager.net/what-is-personally-identifiable-information-pii/">personally identifiable data</a> before selling it. But this “<a href="https://www.techdirt.com/2021/11/22/anonymized-data-is-gibberish-term-rampant-location-data-sales-is-still-problem/">anonymization</a>” is meaningless since individuals are easily identified by cross-referencing additional data sets. This makes it easy for <a href="https://www.vice.com/en/article/panvkz/stalkers-debt-collectors-bounty-hunters-impersonate-cops-phone-location-data">bounty hunters and stalkers</a> to abuse the system. </p>
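Cross-referencing of this kind is known as a linkage attack. A toy sketch with invented records shows why removing names alone is not anonymization: a handful of quasi-identifiers, matched against any public roster, puts the names straight back:

```python
# Illustrative linkage attack on "anonymized" data. All records are invented.

anonymized_visits = [
    {"zip": "92617", "birth_year": 1985, "sex": "F", "visited": "clinic"},
    {"zip": "92618", "birth_year": 1990, "sex": "M", "visited": "stadium"},
]

public_roster = [  # e.g. a voter roll or a leaked loyalty-program list
    {"name": "A. Jones", "zip": "92617", "birth_year": 1985, "sex": "F"},
    {"name": "B. Smith", "zip": "92618", "birth_year": 1990, "sex": "M"},
]

def reidentify(anon_rows, roster):
    # Build an index of the roster on the shared quasi-identifiers,
    # then look each "anonymous" record up in it.
    keys = ("zip", "birth_year", "sex")
    index = {tuple(r[k] for k in keys): r["name"] for r in roster}
    return [
        (index.get(tuple(row[k] for k in keys), "unknown"), row["visited"])
        for row in anon_rows
    ]

print(reidentify(anonymized_visits, public_roster))
# → [('A. Jones', 'clinic'), ('B. Smith', 'stadium')]
```

Research on real datasets has repeatedly shown that a few such attributes uniquely identify most of the population, which is why "anonymized" location data is so easy to re-link.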
<p>The biggest risk to most people arises when there is a <a href="https://www.tripwire.com/state-of-security/security-data-protection/4-credit-bureau-data-breaches-predate-2017-equifax-hack/">data breach</a>, which is happening more often – whether it is a <a href="https://www.forbes.com/sites/ajdellinger/2019/06/07/many-popular-android-apps-leak-sensitive-data-leaving-millions-of-consumers-at-risk/?sh=367a2418521e">leaky app</a> or careless <a href="https://www.theverge.com/2022/7/6/23196805/marriott-hotels-maryland-data-breach-credit-cards">hotel chain</a>, a <a href="https://www.techdirt.com/2019/11/26/california-makes-50-million-annually-selling-your-dmv-data/">DMV data sale</a> or a compromised <a href="https://www.securityweek.com/massive-credit-bureau-hack-raises-troubling-questions">credit bureau</a>, or indeed a <a href="https://www.cnn.com/2019/07/22/tech/equifax-hack-ftc/index.html">data brokering</a> middleman whose <a href="https://www.washingtonpost.com/technology/2020/03/02/cloud-hack-problems/">cloud storage</a> is hacked. </p>
<p>This illicit flow of data not only puts <a href="https://www.theatlantic.com/technology/archive/2017/06/online-data-brokers/529281/">fuzzy notions</a> of privacy in peril, but may put your addresses and passport numbers, biometric data and social media profiles, credit card numbers and dating profiles, health and insurance information, and more <a href="https://epic.org/issues/consumer-privacy/data-brokers/">on sale</a>.</p>
<p class="fine-print"><em><span>Peter Krapp does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s increasingly difficult to move about – both in the physical world and online – without being tracked.Peter Krapp, Professor of Film & Media Studies, University of California, IrvineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1872742022-07-20T20:08:52Z2022-07-20T20:08:52ZWhat do TikTok, Bunnings, eBay and Netflix have in common? They’re all hyper-collectors<figure><img src="https://images.theconversation.com/files/474987/original/file-20220719-6978-2qdmfk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>You walk into a shopping centre to buy some groceries. Without your knowledge, an electronic scan of your face is taken by in-store surveillance cameras and stored in an online database. Each time you return to that store, your “faceprint” is compared with those of people wanted for shoplifting or violence.</p>
<p>This might sound like science fiction but it’s the reality for many of us. By failing to take our digital privacy seriously – as former human rights commissioner Ed Santow has warned – Australia is “<a href="https://www.theage.com.au/national/we-must-not-sleepwalk-into-mass-surveillance-20220630-p5ay0q.html">sleepwalking</a>” its way into mass surveillance.</p>
<h2>Privacy and the digital environment</h2>
<p>Of course, companies have been collecting personal information for decades. If you’ve ever signed up to a loyalty program like FlyBuys then you’ve performed what marketing agencies call a “<a href="https://www.choice.com.au/consumers-and-data/data-collection-and-use/who-has-your-data/articles/loyalty-program-data-collection">value exchange</a>”. In return for benefits from the company (like discounted prices or special offers), you’ve handed over details of who you are, what you buy, and how often you buy it.</p>
<p>Consumer data is big business. In 2019, a <a href="https://www.webfx.com/blog/internet/what-are-data-brokers-and-what-is-your-data-worth-infographic/">report</a> from digital marketers WebFX showed that data from around 1,400 loyalty programs was routinely being traded across the globe as part of an industry <a href="https://clearcode.cc/blog/what-is-data-broker/">worth around US$200 billion</a>. That same year, the Australian Competition and Consumer Commission’s <a href="https://www.accc.gov.au/publications/customer-loyalty-schemes-final-report">review of loyalty schemes</a> revealed how many of these loyalty schemes lacked data transparency and even discriminated against vulnerable customers.</p>
<p>But the digital environment is making data collection even easier. When you <a href="https://onlinemasters.ohio.edu/blog/netflix-data/">watch Netflix</a>, for example, the company knows what you watch, when you watch it, and how long you watch it for. But they go further, also <a href="https://seleritysas.com/blog/2019/04/05/how-netflix-used-big-data-and-analytics-to-generate-billions/">capturing data</a> on which scenes or episodes you watch repeatedly, the ratings of your content, the number of searches you perform and what you search for.</p>
<h2>Hyper-collection: a new challenge to privacy</h2>
<p>Late last year, the controversial tech company ClearView AI was <a href="https://www.oaic.gov.au/updates/news-and-media/clearview-ai-breached-australians-privacy">ordered</a> by the Australian information commissioner to stop “scraping” social media for the pictures it was collecting in its massive facial recognition database. Just this month, the commissioner was investigating several retailers for <a href="https://www.abc.net.au/news/2022-07-13/bunnings-kmart-investigated-over-facial-recognition-technology/101233372">creating facial profiles</a> of the customers in their stores.</p>
<p>This new phenomenon – “hyper-collection” – represents a growing trend by large companies to collect, sort, analyse and use more information than they need, usually in covert or passive ways. In many cases, hyper-collection is not supported by a truly legitimate commercial or legal purpose.</p>
<h2>Digital privacy laws and hyper-collection</h2>
<p>Hyper-collection is a major problem in Australia for three reasons.</p>
<p>First, Australia’s privacy law wasn’t prepared for the likes of Netflix and TikTok. Despite <a href="https://www.oaic.gov.au/privacy/the-privacy-act/history-of-the-privacy-act">numerous amendments</a>, the <a href="https://www.oaic.gov.au/privacy/the-privacy-act">Privacy Act</a> dates back to the late 1980s. Although former Attorney-General Christian Porter <a href="https://www.ag.gov.au/integrity/consultations/review-privacy-act-1988">announced a review</a> of the Act in late 2019, it has been held up by the recent change of government.</p>
<p>Second, Australian privacy laws are unlikely on their own to threaten the profit base of foreign companies, especially those located in China. The Information Commissioner has the power to order companies to take certain actions – like it <a href="https://www.afr.com/policy/foreign-affairs/australia-s-tiktok-data-vulnerable-to-access-by-china-staff-20220712-p5b10f">did with Uber in 2021</a> – and can enforce these through court orders. But the penalties aren’t really big enough to discourage companies with profits in the billions of dollars.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/83-of-australians-want-tougher-privacy-laws-nows-your-chance-to-tell-the-government-what-you-want-149535">83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want</a>
</strong>
</em>
</p>
<hr>
<p>Third, hyper-collection is often enabled by the vague consents we give to get access to the services these companies provide. Bunnings, for example, argued that its collection of your faceprint was allowed because <a href="https://ia.acs.org.au/article/2022/bunnings-doubles-down-on-facial-recognition.html">signs at the entry to their stores</a> told customers facial recognition might be used. Online marketplaces like eBay, Amazon, Kogan and Catch, meanwhile, supply “<a href="https://www.accc.gov.au/media-release/concerning-issues-for-consumers-and-sellers-on-online-marketplaces">bundled consents</a>” – basically, you have to consent to their privacy policies as a condition of using their services. No consent, no access.</p>
<h2>TikTok and hyper-collection</h2>
<p>TikTok (owned by Chinese company ByteDance) has largely replaced YouTube as a way of creating and sharing online videos. The app is powered by an algorithm that has already drawn <a href="https://theconversation.com/tiktoks-secret-algorithm-is-its-greatest-strength-and-could-also-be-its-undoing-176605">criticism</a> for routinely collecting data about users, as has ByteDance’s secretive approach to <a href="https://www.lowyinstitute.org/the-interpreter/unique-power-tiktok-s-algorithm">content moderation and censorship</a>.</p>
<p>For years, TikTok executives have been telling governments that <a href="https://www.aspistrategist.org.au/its-time-tiktok-australia-came-clean/">data isn’t stored in servers on the Chinese mainland</a>. But these promises might be hollow in the wake of recent allegations.</p>
<p>Cybersecurity experts now claim that not only does the TikTok app <a href="https://www.smartcompany.com.au/technology/tiktok-chinese-servers-aussie-cybersecurity/">routinely connect to Chinese servers</a>, but users’ data is also accessible to ByteDance employees, including the mysterious Beijing-based “Master Admin”, who has <a href="https://www.buzzfeednews.com/article/emilybakerwhite/tiktok-tapes-us-user-data-china-bytedance-access">access to every user’s personal information</a>.</p>
<p>Then, just this week, it was alleged that TikTok can also access <a href="https://www.abc.net.au/news/2022-07-18/tiktok-users-warned-the-platform-is-harvesting-personal-data/13977370">almost all the data</a> contained on the phone it is installed on – including photos, calendars and emails.</p>
<p>Under China’s national security laws, the government can order tech companies to <a href="https://www.sbs.com.au/news/article/so-what-if-china-can-access-your-tiktok-data/mr1anx97k">pass on that information</a> to police or intelligence agencies.</p>
<h2>What options do we have?</h2>
<p>Unlike a physical store, we don’t get a lot of choice about consenting to digital companies’ privacy policies and how they collect our information.</p>
<p>One option – supported by encryption expert Vanessa Teague at ANU – is for consumers simply to delete offending apps until their creators are <a href="https://www.sbs.com.au/news/article/so-what-if-china-can-access-your-tiktok-data/mr1anx97k">willing to submit to greater data transparency</a>. Of course, this means locking ourselves out of those services, and it will only have a real impact on a company if enough Australians join in.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facial-recognition-is-on-the-rise-but-the-law-is-lagging-a-long-way-behind-185510">Facial recognition is on the rise – but the law is lagging a long way behind</a>
</strong>
</em>
</p>
<hr>
<p>Another option is “opting out” of intrusive data collection. We’ve done this before – when My Health Record became mandatory in 2019, a record number of us <a href="https://www.yourlifechoices.com.au/health/my-health-record-an-expensive-white-elephant-critics-say/">opted out</a>. Though these opt-outs reduced the usefulness of that <a href="https://www.theguardian.com/commentisfree/2018/jul/20/there-is-no-social-license-for-my-health-record-australians-should-reject-it">digital health record program</a>, they did demonstrate that Australians can take their data privacy seriously. </p>
<p>But how exactly can Australians opt out of a massive social app like TikTok? Right now, they can’t – perhaps the government needs to explore a solution as part of its review.</p>
<p>A further option being explored by the Privacy Act review is whether to create new laws that would allow individuals to <a href="https://www.ag.gov.au/system/files/2020-10/privacy-act-review-terms-of-reference.pdf">sue companies for damages for breaches of privacy</a>. While lawsuits are expensive and time-consuming, they might just deliver the kind of financial damage to big companies that could change their behaviour.</p>
<p>No matter which option we take, Australians need to start getting more savvy with their data privacy. This might mean actually reading those terms and conditions before agreeing, and being prepared to “vote with our feet” if companies won’t be honest about what they’re doing with our personal information.</p>
<p class="fine-print"><em><span>Brendan Walker-Munro receives funding from the Australian Government through Trusted Autonomous Systems, a Defence Cooperative Research Centre funded through the Next Generation Technologies Fund. </span></em></p>Australians – and Australian governments – need to get more savvy about data privacyBrendan Walker-Munro, Senior Research Fellow, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1855102022-06-27T02:32:16Z2022-06-27T02:32:16ZFacial recognition is on the rise – but the law is lagging a long way behind<figure><img src="https://images.theconversation.com/files/471008/original/file-20220627-14-q7vf1z.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4481%2C3216&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/iot-machine-learning-human-object-recognition-794528230">Shutterstock</a></span></figcaption></figure><p>Private companies and public authorities are quietly using facial recognition systems around Australia. </p>
<p>Despite the growing use of this controversial technology, there is little in the way of specific regulations and guidelines to govern its use.</p>
<h2>Spying on shoppers</h2>
<p>We were reminded of this fact recently when consumer advocates at CHOICE <a href="https://www.choice.com.au/consumers-and-data/data-collection-and-use/how-your-data-is-used/articles/kmart-bunnings-and-the-good-guys-using-facial-recognition-technology-in-store">revealed</a> that major retailers in Australia are using the technology to identify people claimed to be thieves and troublemakers. </p>
<p>There is no dispute about the goal of reducing harm and theft. But there is also little transparency about how this technology is being used. </p>
<p>CHOICE found that most people have no idea their faces are being scanned and matched to stored images in a database. Nor do they know how these databases are created, how accurate they are, and how secure the data they collect is. </p>
<p>As CHOICE discovered, the notification to customers is inadequate. It comes in the form of small, hard-to-notice signs in some cases. In others, the use of the technology is announced in online notices rarely read by customers. </p>
<p>The companies clearly don’t want to draw attention to their use of the technology or to account for how it is being deployed.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bunnings-kmart-and-the-good-guys-say-they-use-facial-recognition-for-loss-prevention-an-expert-explains-what-it-might-mean-for-you-185126">Bunnings, Kmart and The Good Guys say they use facial recognition for 'loss prevention'. An expert explains what it might mean for you</a>
</strong>
</em>
</p>
<hr>
<h2>Police are eager</h2>
<p>Something similar is happening with the use of the technology by Australian police. Police in New South Wales, for example, have embarked on a “low-volume” <a href="https://www.theguardian.com/australia-news/2021/jul/01/calls-to-stop-nsw-police-trial-of-national-facial-recognition-system-over-lack-of-legal-safeguards">trial</a> of a nationwide face-recognition database. This trial took place despite the fact that the enabling legislation for the national database has not yet been passed.</p>
<p>In South Australia, controversy over Adelaide’s plans to upgrade its CCTV system with face-recognition capability led the city council to <a href="https://www.abc.net.au/news/2022-06-22/adelaide-city-council-votes-no-to-facial-recognition-in-cctv/101172924?utm_source=pocket_mylist">vote</a> not to purchase the necessary software. The council has also asked South Australia Police not to use face-recognition technology until legislation is in place to govern its use. </p>
<p>However, SA Police have <a href="https://www.abc.net.au/news/2022-06-22/adelaide-city-council-votes-no-to-facial-recognition-in-cctv/101172924?utm_source=pocket_mylist">indicated</a> an interest in using the technology. </p>
<p>In a public <a href="https://www.itnews.com.au/news/sa-police-ignore-adelaide-council-plea-for-facial-recognition-ban-on-cctv-581559">statement</a>, the police described the technology as a potentially useful tool for criminal investigations. The statement also noted: </p>
<blockquote>
<p>There is no legislative restriction on the use of facial recognition technology in South Australia for investigations. </p>
</blockquote>
<h2>A controversial tool</h2>
<p>Adelaide City Council’s call for regulation is a necessary response to the expanding use of automated facial recognition. </p>
<p>This is a powerful technology that promises to fundamentally change our experience of privacy and anonymity. There is already a large gap between the amount of personal information collected about us every day and our own knowledge of how this information is being used, and facial recognition will only make the gap bigger.</p>
<p>Recent events suggest a reluctance on the part of retail outlets and public authorities alike to publicise their use of the technology. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/large-scale-facial-recognition-is-incompatible-with-a-free-society-126282">Large-scale facial recognition is incompatible with a free society</a>
</strong>
</em>
</p>
<hr>
<p>Although facial recognition is seen as a potentially useful tool, it is a controversial one. A world in which remote cameras can identify and track people as they move through public space seems alarmingly Orwellian. </p>
<p>The technology has also been criticised for being invasive and, in some cases, <a href="https://www.marketplace.org/shows/marketplace-tech/bias-in-facial-recognition-isnt-hard-to-discover-but-its-hard-to-get-rid-of/">biased</a> and inaccurate. In the US, for example, people have already been <a href="https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/">wrongly arrested</a> based on matches made by face-recognition systems.</p>
<h2>Public pushback</h2>
<p>There has also been widespread public opposition to the use of the technology in some cities and states in the US, which have gone so far as to impose <a href="https://www.wired.com/story/face-recognition-banned-but-everywhere/">bans</a> on its use.</p>
<p>Surveys show the Australian public has <a href="https://securitybrief.com.au/story/australians-uneasy-about-facial-recognition-tech-report">concerns</a> about the invasiveness of the technology, but also supports its potential use to improve public safety and security.</p>
<p>Facial-recognition technology isn’t going away. It’s likely to become less expensive and more accurate and powerful in the near future. Instead of implementing it piecemeal, under the radar, we need to directly confront both the potential harms and benefits of the technology, and to provide clear rules for its use.</p>
<h2>What would regulations look like?</h2>
<p>Last year, then-human rights commissioner Ed Santow called for <a href="https://www.itnews.com.au/news/human-rights-commission-calls-for-temporary-ban-on-high-risk-govt-facial-recognition-565173">a partial ban</a> on the use of facial-recognition technology. He is now developing model legislation for how it might be regulated in Australia. </p>
<p>Any regulation of the technology will need to consider both the potential benefits of its use and the risks to privacy rights and civic life. </p>
<p>It will also need to consider enforceable standards for its proper use. These could include the right to correct inaccurate information, the need to provide human confirmation for automated forms of identification, and the setting of minimum standards of accuracy. </p>
<p>They could also entail improving public consultation and consent around the use of the technology, and a requirement for the performance of systems to be accountable to an independent authority and to those researching the technology.</p>
<p>As the reach of facial recognition expands, we need more public and parliamentary debate to develop appropriate regulations for governing its use.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/darwins-smart-city-project-is-about-surveillance-and-control-127118">Darwin's 'smart city' project is about surveillance and control</a>
</strong>
</em>
</p>
<hr>
<p><em>If you’re in Adelaide, there will be a public forum on regulating facial recognition technology at the Town Hall <a href="https://www.eventbrite.com.au/e/regulating-facial-recognition-technology-in-adelaideand-beyond-tickets-360120358687">tonight</a> (Monday, June 27). Ed Santow and his colleague Lauren Perry will present their model legislation, and they will be joined in discussion by South Australian parliamentarian Tammy Franks and Law Society of South Australia president Justin Stewart-Rattray.</em></p>
<p class="fine-print"><em><span>Mark Andrejevic receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Gavin JD Smith receives funding from the Australian Research Council. </span></em></p>Private companies and public authorities are beginning to implement facial recognition technology, even without rules to govern what they can do.Mark Andrejevic, Professor, School of Media, Film, and Journalism, Monash University, Monash UniversityGavin JD Smith, Associate Professor in Sociology, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1852492022-06-16T15:58:58Z2022-06-16T15:58:58ZUkraine Recap: European leaders gather to urge peace<p>Three European leaders took the train today to Kyiv to meet Volodymyr Zelensky. The visit <a href="https://www.theguardian.com/world/2022/jun/16/kyiv-ukraine-olaf-scholz-emmanuel-macron-mario-draghi-russia-war">has been billed</a> as a “symbolic joint trip to show their support for Ukraine”, but there will have been pressure on the Ukrainian president to indicate what it might take on the part of him and his people to meaningfully engage in peace negotiations with Russia.</p>
<p>We’re seeing increased reports that many in Europe think Ukraine should be ready to concede at least some territory if it would bring the war to a rapid end. But recent research <a href="https://theconversation.com/ukraine-most-people-refuse-to-compromise-on-territory-but-willingness-to-make-peace-depends-on-their-war-experiences-new-survey-185147">conducted in Ukraine</a> by a team that included Kristin M Bakke of University College London and two US colleagues found that 82% of respondents believed Ukraine should not under any circumstances concede territory. </p>
<p>Drilling down into that, they found some interesting variations in views depending on both gender (women were less likely to insist on taking back all occupied territory) and location (respondents in the west of Ukraine were more insistent on winning back territory than those in the east, where the fighting is currently concentrated).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-most-people-refuse-to-compromise-on-territory-but-willingness-to-make-peace-depends-on-their-war-experiences-new-survey-185147">Ukraine: most people refuse to compromise on territory, but willingness to make peace depends on their war experiences – new survey</a>
</strong>
</em>
</p>
<hr>
<p>It looks like an intractable problem. Vladimir Putin appears to be channelling Peter the Great in his thirst for conquest, with the Ukrainian people overwhelmingly opposed to allowing him to slake that thirst with their blood. So, how might negotiations play out? Neophytos Loizides, a professor of international conflict analysis at the University of Kent, has offered <a href="https://theconversation.com/ukraine-war-five-issues-that-could-help-kickstart-peace-talks-as-european-leaders-head-to-kyiv-183864">five ideas</a>, based on situations where negotiations have resolved often bitter conflicts, that could help kickstart peace talks.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-war-five-issues-that-could-help-kickstart-peace-talks-as-european-leaders-head-to-kyiv-183864">Ukraine war: five issues that could help kickstart peace talks as European leaders head to Kyiv</a>
</strong>
</em>
</p>
<hr>
<p>One of Ukraine’s firmest friends through all of this has been Poland, which is playing host to 1.42 million refugees, according to the UN, and has offered steadfast support throughout the conflict. But it was not always so cordial between the two countries. For centuries Ukrainians and Poles have been at loggerheads – mainly over territorial and identity issues. Christoph Mick, professor of modern European history at Warwick University, <a href="https://theconversation.com/ukraine-and-poland-why-the-countries-fell-out-in-the-past-and-are-now-closely-allied-184906">walks us through this chequered past</a> and looks at the issues that have brought the two former enemies together.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-and-poland-why-the-countries-fell-out-in-the-past-and-are-now-closely-allied-184906">Ukraine and Poland: why the countries fell out in the past, and are now closely allied</a>
</strong>
</em>
</p>
<hr>
<figure class="align-right ">
<img alt="Ukraine Recap weekly email newsletter" src="https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/449743/original/file-20220303-4351-1xhaozt.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This is our weekly recap of expert analysis of the Ukraine conflict.</em></strong>
<em>The Conversation, a not-for-profit news group, works with a wide range of academics across its global network to produce evidence-based analysis. Get these recaps in your inbox every Thursday. <a href="https://theconversation.com/uk/newsletters/ukraine-recap-114?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+Newsletter+Ukraine+Recap+2022+Mar&utm_content=WeeklyRecapTop">Subscribe here</a>.</em></p>
<hr>
<h2>On the battlefield</h2>
<p>The conflict grinds on and the butcher’s bill continues to mount, with all the tragedy and heartache that goes with it. Last week there were reports of what the Russians are calling a terror attack and the Ukrainians are calling resistance activity, which is a familiar difference of opinion that seems to crop up whenever a powerful nation attacks and occupies a less powerful one.</p>
<p>Russian sources reported recently that a cafe frequented by occupation forces in the city of Kherson in southern Ukraine had been the target of a bombing in which four people were injured. The Russians have been keen to frame this as terrorism, but – as Chris Morris, an expert in policing from the University of Portsmouth, <a href="https://theconversation.com/ukraine-war-why-popular-resistance-is-a-big-problem-for-russia-184956">points out here</a> – it’s very much in line with what you might expect from an occupied city in the midst of such a violent war of occupation. </p>
<p>Given how prepared so many civilians were to fight against the invading army, Russia must expect an increasingly potent resistance movement, even in regions of Ukraine which were already occupied but where significant portions of the population are determined to see the Russian military leave.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-war-why-popular-resistance-is-a-big-problem-for-russia-184956">Ukraine war: why popular resistance is a big problem for Russia</a>
</strong>
</em>
</p>
<hr>
<p>When the word “terrorists” is bandied about by the Russian propaganda machine, you can be sure the words “war criminals” won’t be far behind. Which is how many of the prisoners of war from battles like the bitterly contested fight to take the port city of Mariupol are being classified. But it must be said that both sides are thought to be breaking the rules when it comes to their treatment of prisoners of war. </p>
<p>As Christopher Bluth, an expert in international relations from Bradford University – who has written regularly for us on the war – notes, Ukraine has paraded Russian prisoners on its media, in some cases deliberately humiliating them, which is against the Geneva Conventions. The Russian military has made the same mistakes. Both sides are signed up to the rules of war and the <a href="https://theconversation.com/ukraine-conflict-how-both-sides-are-breaking-the-law-on-prisoners-of-war-184701">mistreatment of POWs is always unacceptable</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-conflict-how-both-sides-are-breaking-the-law-on-prisoners-of-war-184701">Ukraine conflict: how both sides are breaking the law on prisoners of war</a>
</strong>
</em>
</p>
<hr>
<h2>Away from the frontline</h2>
<p>We’ve often written here about Russian propaganda – and, indeed, it seems Putin has a firm grip on his country’s media, with few dissenting voices remaining after most opposition newspapers and media organisations were either shut down or driven abroad. But the growing popularity of Telegram, an independent social media app, means a growing number of people inside Russia can see different viewpoints – including the BBC news in Russian. </p>
<p>Ekaterina Romanova, who is studying for her PhD in mass communications at the University of Florida, writes that people who find ways of supplementing the one-note diet of pro-Putin news on state TV with independent voices from around the world are more likely to <a href="https://theconversation.com/russians-with-diverse-media-diet-more-likely-to-oppose-ukraine-war-184221">oppose the war</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/russians-with-diverse-media-diet-more-likely-to-oppose-ukraine-war-184221">Russians with diverse media diet more likely to oppose Ukraine war</a>
</strong>
</em>
</p>
<hr>
<p>On the financial markets, meanwhile, the strength of the Russian rouble has been raising eyebrows among those who thought the fierce sanctions imposed by western countries were wrecking the economy and undermining the value of the currency. Kirill Shakhnov, an economist from the University of Surrey, explains why the <a href="https://theconversation.com/russias-rouble-is-now-stronger-than-before-the-war-western-sanctions-are-partly-to-blame-184700">rouble has defied expectations</a> and is stronger now than before the invasion – and why, ironically, this may in large part be due to the effect of the sanctions themselves.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/russias-rouble-is-now-stronger-than-before-the-war-western-sanctions-are-partly-to-blame-184700">Russia's rouble is now stronger than before the war – western sanctions are partly to blame</a>
</strong>
</em>
</p>
<hr>
<p>Finally, the conflict in Ukraine is seeing facial recognition technology being used in warfare for the first time. Ukraine’s Ministry of Defence has been using Clearview AI facial recognition software since March 2022 to build a case for war crimes and identify the dead – both Russian and Ukrainian. As Felipe Romero Moreno – a legal scholar from the University of Hertfordshire – writes, the software can help Ukrainian officials <a href="https://theconversation.com/facial-recognition-technology-how-its-being-used-in-ukraine-and-why-its-still-so-controversial-183171">identify dead soldiers</a> more efficiently than fingerprints, and works even if a soldier’s face is damaged. But this comes with its own ethical questions, as Moreno notes.</p>
<p><em>Ukraine Recap is available as a weekly email newsletter. <a href="https://theconversation.com/uk/newsletters/ukraine-recap-114?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+Newsletter+Ukraine+Recap+2022+Mar&utm_content=WeeklyRecapBottom">Click here to get our recaps directly in your inbox.</a></em></p>
A digest of the week’s coverage of the war against Ukraine.Jonathan Este, Senior International Affairs Editor, Associate EditorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1851262022-06-15T07:07:57Z2022-06-15T07:07:57ZBunnings, Kmart and The Good Guys say they use facial recognition for ‘loss prevention’. An expert explains what it might mean for you<figure><img src="https://images.theconversation.com/files/468915/original/file-20220615-14-ex57sp.jpg?ixlib=rb-1.1.0&rect=348%2C128%2C4215%2C2524&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Once the purview of law enforcement and intelligence agencies, facial recognition is now being used to identify consumers in Australian stores. </p>
<p>If you’ve seen the movie Minority Report, you’ll remember how Tom Cruise’s character John Anderton is identified through iris recognition to perform his duties, and later tracked with it when he’s a wanted man. When he replaces his eyes to evade identification, Anderton is bombarded with advertisements targeting his new assumed identity.</p>
<p>This once-futuristic idea from a movie could soon be a reality in our lives. An investigative report published by consumer magazine <a href="https://www.choice.com.au/consumers-and-data/data-collection-and-use/how-your-data-is-used/articles/kmart-bunnings-and-the-good-guys-using-facial-recognition-technology-in-store">Choice</a> reveals three major retailers (out of 25 queried), Kmart, Bunnings and The Good Guys, have admitted using facial recognition technology on customers for “loss prevention”. </p>
<p>The companies say they advise consumers of the use of the technology as a condition of entry. But do consumers really know what this entails, and how or where their images could be used or stored?</p>
<h2>What is facial recognition and why do we care?</h2>
<p>We’ve grown accustomed to our phones and cameras using facial detection software to put our faces into focus. But facial <em>recognition</em> technology takes this a step further by matching our unique identifying information to a stored digital image.</p>
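The matching step can be sketched in code: recognition systems typically reduce a face image to a numeric vector (an embedding, or "faceprint") and compare it against stored vectors by a similarity measure. The toy vectors, names and threshold below are illustrative assumptions, not taken from any real system — real embeddings have hundreds of dimensions:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching stored identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, stored in gallery.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "faceprints" for two enrolled people.
gallery = {
    "person_a": [0.9, 0.1, 0.3, 0.7],
    "person_b": [0.1, 0.8, 0.6, 0.2],
}
probe = [0.88, 0.12, 0.28, 0.71]  # a fresh capture, slightly different from enrolment
print(identify(probe, gallery))  # prints "person_a"
```

Detection only has to answer "is there a face here?"; recognition, as above, has to answer "whose face is this?" against a database — which is why it raises the privacy questions detection does not.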
<p>Facial recognition has come a long way. It was initially used in 2001 to identify relationships between gamblers and employees in Las Vegas casinos, where there was suspected collusion. </p>
<p>The United States government would eventually use <a href="https://www.infoworld.com/article/2628017/innovation-that-matters--jeff-jonas-connects-the-invisible-dots.html">the same</a> technology to <a href="https://www.nationalgeographic.com/science/article/140505-jeff-jonas-big-data-gambling-computers-technology-ibm">identify the 9/11 hijackers</a>. It’s now widely adopted by law enforcement and intelligence communities.</p>
<p>Currently, software such as Clearview AI and PimEyes are being used in highly sophisticated ways, including by Ukrainian and Russian forces to <a href="https://www.washingtonpost.com/technology/2022/04/15/ukraine-facial-recognition-warfare/">identify combatants in Ukraine</a>. </p>
<h2>But what is this technology doing in Bunnings?</h2>
<p>As with its early use in casinos, Kmart, Bunnings and The Good Guys told Choice their facial recognition software is used for “loss prevention”.</p>
<p>Images captured on store surveillance devices and body cameras could be used to identify in-store individuals engaged in theft, or other criminal activities. Real-time identification could allow law enforcement to quickly identify shoppers with unpaid tickets, outstanding warrants, or existing criminal complaints.</p>
<p>Bunnings chief operating officer Simon McDowell told SBS News the technology was used “solely to keep team and customers safe and prevent unlawful activity in our stores”. Both The Good Guys and Kmart told <a href="https://www.theguardian.com/technology/2022/jun/15/bunnings-kmart-and-the-good-guys-using-facial-recognition-technology-to-crack-down-on-theft-choice-says">news outlets</a> they were using it for the same reasons, in a select number of stores – and that customers were notified through signage. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=354&fit=crop&dpr=1 600w, https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=354&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=354&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=445&fit=crop&dpr=1 754w, https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=445&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/468933/original/file-20220615-25-71yxl3.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=445&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Choice supplied this photo of a sign, which it said was taken at a Kmart in Marrickville, NSW.</span>
<span class="attribution"><span class="source">CHOICE</span></span>
</figcaption>
</figure>
<p>Choice confirmed there were some signs disclosing use of the technology – but reported these signs were small and would be missed by most shoppers. </p>
<p>The news has stoked shoppers’ fears of how their image data may be used. As in Minority Report, images captured in a store could theoretically be used for targeted advertising and to “enhance” <a href="https://www.wired.com/2011/11/malls-track-phone-signals/">the shopping experience</a>.</p>
<p>It’s likely images and video collected through standard in-store surveillance are either matched immediately against a remote database using specialised facial recognition software, or analysed against a database of tagged and catalogued images later on. Ideally, the images would be encoded and stored in a file that’s readable only by the algorithm specific to the device or software processor.</p>
<h2>Potential for misuse</h2>
<p>We have already seen online retailers use this tactic through <a href="https://theconversation.com/googles-scrapping-third-party-cookies-but-invasive-targeted-advertising-will-live-on-156530">cookies</a> and linking our purchase history on <a href="https://theconversation.com/smartphone-data-tracking-is-more-than-creepy-heres-why-you-should-be-worried-91110">electronic devices</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-your-phone-really-listening-to-your-conversations-well-turns-out-it-doesnt-have-to-162172">Is your phone really listening to your conversations? Well, turns out it doesn't have to</a>
</strong>
</em>
</p>
<hr>
<p>We have also seen companies correlate our social media profiles and our other online experiences across various websites. Australian stores employing facial recognition could use collected information internally to track:</p>
<ul>
<li>the number of visits by a person</li>
<li>the times of those visits</li>
<li>pattern or behavioural analysis (such as a consumer’s reaction to pricing or signage) and</li>
<li>associations with other shoppers (such as friends, family and anyone else with them). </li>
</ul>
<p>Retailers could also use this identity data to extract information from social media, where most people have images of themselves uploaded. They could then perform risk analysis based on the credit and financial reporting access of that specific shopper. </p>
<p>Externally, the images and associated consumer information could be merged with financial, economic, social and political data already collected by commercial data aggregators – adding to the already massive data aggregation market.</p>
<p>Current Australian privacy laws require retailers to disclose what data are being collected, retained and protected, as well as how they might be used outside of a loss-prevention model.</p>
<p>A Bunnings spokesperson told The Guardian the technology was being used in line with the Australian Privacy Act. Choice has reached out to the Office of the Australian Information Commissioner to determine whether the use of the technology is indeed consistent with the Privacy Act.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/shadow-profiles-facebook-knows-about-you-even-if-youre-not-on-facebook-94804">Shadow profiles - Facebook knows about you, even if you're not on Facebook</a>
</strong>
</em>
</p>
<hr>
<h2>What to do?</h2>
<p>While the retailers highlighted in Choice’s investigation state consumers must agree to the collection of their images as a condition of entry, the reality is the collection, retention, and use of their images are not usually disclosed in any explicit way. </p>
<p>As far as data collection in retail settings goes, there should be a precondition for all stores to make sure consumers are made aware of:</p>
<ul>
<li>the specific information that is collected while they are visiting</li>
<li>how it might be aggregated and combined with other relevant information from third parties</li>
<li>how long the images or data will be retained, retrieved, or accessed and by whom, and </li>
<li>what security precautions are being used to secure the data.</li>
</ul>
<p>Furthermore, as with their online shopping experience, consumers should be given the option to opt out of such data collection. </p>
<p>Until then, consumers may try to avoid collection by donning hats, sunglasses and face masks. But considering the rate at which facial recognition technology is advancing – and how large the personal data market has already grown – retail cameras may soon be able to see through these disguises, too.</p>
<p class="fine-print"><em><span>Dennis B Desmond previously received funding from the United States Department of Defense.</span></em></p>Australia’s consumer advocacy group Choice identified three Australian retailers who use facial recognition to identify consumers. What are the privacy concerns?Dennis B. Desmond, Lecturer, Cyberintelligence and Cybercrime Investigations, University of the Sunshine CoastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1831712022-06-14T10:52:06Z2022-06-14T10:52:06ZFacial recognition technology: how it’s being used in Ukraine and why it’s still so controversial<figure><img src="https://images.theconversation.com/files/467206/original/file-20220606-16-kq0nk2.jpg?ixlib=rb-1.1.0&rect=6%2C6%2C4601%2C2583&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facial recognition technology is controversial in many countries.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/smart-technologies-your-smartphone-collection-analysis-1516856447">Shutterstock</a></span></figcaption></figure><p>Facial recognition technology is being used in warfare for the first time. It could be a game changer in Ukraine, where it is being used to identify the dead and reunite families. But if we fail to grapple with the ethics of this technology now, we could find ourselves in a human rights minefield. </p>
<p><a href="https://therecord.media/at-war-with-facial-recognition-clearview-ai-in-ukraine/">Ukraine’s</a> Ministry of Defence has been using <a href="https://www.clearview.ai/">Clearview AI</a> facial recognition software since March 2022 to build a case for war crimes and identify the dead – both Russian and Ukrainian. <a href="https://therecord.media/podcast/">The Ministry of Digital Transformation</a> in Ukraine said it is using Clearview AI technology to give Russians the chance to experience the “true cost of the war”, and to let families know that if they want to find their loved ones’ bodies, they are “welcome to come to Ukraine”. </p>
<p>Ukraine is being given free access to the software. It’s also being used at checkpoints and could help reunite refugees with their families. </p>
<h2>The privacy backlash</h2>
<p>Last month, however, the <a href="https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/">UK Information Commissioner’s Office (ICO)</a> fined Clearview AI more than £7.5 million for collecting images of people in the UK and elsewhere from the web and social media. It was ordered to delete the images and stop obtaining and using the personal data of UK residents publicly available on the internet. Originally the ICO said it intended to fine Clearview AI <a href="https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/">£17 million</a>. </p>
<p>According to the ICO, given the huge number of UK social media users, Clearview AI’s face database is likely to contain a significant amount of images collected without consent.</p>
<p>A lawyer for Clearview AI, Lee Wolosky, said: “While we appreciate the ICO’s desire to reduce their monetary penalty on Clearview AI, we nevertheless stand by our position that the decision to impose any fine is incorrect as a matter of law. Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the UK at this time.”</p>
<p>Clearview AI has said it wants 100 billion face images in its database <a href="https://www.washingtonpost.com/technology/2022/02/16/clearview-expansion-facial-recognition/">by early 2023</a> – equivalent to 14 for every person on Earth. Multiple photos of the same person improve the system’s accuracy. </p>
<p>According to <a href="https://www.clearview.ai/">Clearview AI’s website</a>, its facial recognition technology helps law enforcement tackle crime, and enables transportation businesses, banks and other commercial companies to detect theft, prevent fraud and verify identities. </p>
<p><a href="https://www.buzzfeed.com/emilyashton/clearview-users-police-uk">Buzzfeed</a> reported in February 2020 that several British police forces have previously used Clearview AI. A spokeswoman for Clearview AI said police in the UK do not have access to its technology, while spokespeople for both the National Crime Agency and Metropolitan police would neither confirm nor deny use of specific tools or techniques. However, in March 2022 the College of Policing published <a href="https://www.college.police.uk/article/live-facial-recognition-technology-guidance-published">new guidance</a> for UK police forces on the use of live facial recognition. </p>
<p>The UK government plans to replace key <a href="https://www.legislation.gov.uk/ukpga/1998/42/contents">human rights laws</a> with a new <a href="https://www.gov.uk/government/consultations/human-rights-act-reform-a-modern-bill-of-rights">Modern Bill of Rights</a> which could make it difficult, <a href="https://ukconstitutionallaw.org/2022/04/19/tetyana-krupiy-the-modern-bill-of-rights-creates-barriers-to-challenging-algorithmic-decisions/">if not impossible</a>, for people to challenge decisions based on AI evidence in court, including facial recognition. </p>
<p>According to advocacy group <a href="https://www.libertyhumanrights.org.uk/wp-content/uploads/2022/03/Libertys-response-to-the-Ministry-of-Justices-consultation-%E2%80%98Human-Rights-Act-Reform-A-Modern-Bill-of-Rights-March-2022.pdf">Liberty</a>, the bill is likely to have a disproportionate impact on over-policed communities, as it would create different classes of claimants based on <a href="https://eachother.org.uk/the-governments-proposed-bill-of-rights-is-a-power-grab/">their past behaviour</a>. </p>
<h2>A tool for warfare</h2>
<p><a href="https://www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/">Clearview AI’s chief executive</a> Hoan Ton-That said its facial recognition software gives Ukrainian law enforcement and government officials access to more than 2 billion images from <a href="https://www.makeuseof.com/tag/10-incredible-vk-facts-you-should-know-aka-russias-facebook/">VKontakte</a>, a Russian social networking service. Ton-That said the software can help Ukrainian officials identify dead soldiers more efficiently than fingerprints, and works even if a soldier’s face is damaged. </p>
<p>But there is conflicting evidence about facial recognition software’s effectiveness. According to the <a href="https://www.osti.gov/servlets/purl/1559672#:%7E:text=During%20the%20early%20stages%20of,have%20little%20effect%20on%20detection.">US Department of Energy</a>, decomposition of a person’s face can reduce the software’s accuracy. On the other hand, <a href="https://piurilabs.di.unimi.it/Papers/CIVEMS_2021_Forensic.pdf">recent scientific research</a> has demonstrated identification of dead people with accuracy similar to or better than human assessment. </p>
<p><a href="https://www.osti.gov/servlets/purl/1559672">Research</a> suggests fingerprints, dental records and DNA are still the most reliable identification techniques. But they are tools for trained professionals, while facial recognition can be used by non-experts. </p>
<p>Another issue flagged by <a href="https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf">research</a> is that facial recognition can mistakenly pair two images, or fail to match photos of the same person. In Ukraine, the consequences of any potential error with AI could be disastrous. An innocent civilian could be killed if they are misidentified as a Russian soldier. </p>
<h2>A controversial history</h2>
<p>In 2016, Ton-That <a href="https://lastweekin.ai/p/the-messy-history-and-many-legal?s=r">began recruiting computer science engineers</a> to create Clearview AI’s algorithm. But it was not until 2019 that the American facial recognition company started discreetly providing its software to US police and law enforcement agencies.</p>
<p>In January 2020, <a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html">The New York Times</a> published its story: ‘The Secretive Company That Might End Privacy as We Know It’. This article prompted more than 40 civil rights and tech organisations to send <a href="https://epic.org/wp-content/uploads/privacy/facerecognition/PCLOB-Letter-FRT-Suspension.pdf">a letter</a> to the <a href="https://www.pclob.gov/">Privacy and Civil Liberties Oversight Board</a> and four US congressional committees, demanding the suspension of Clearview AI’s facial recognition software. </p>
<p>In February 2020, following a data leak of Clearview AI’s client list, <a href="https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement">BuzzFeed</a> revealed that Clearview AI’s facial recognition software was being used by individuals in more than 2,200 law enforcement departments, government agencies and companies across 27 different countries. </p>
<figure class="align-center ">
<img alt="Man in suit watches screens showing surveillance camera footage" src="https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=390&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=390&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=390&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=490&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=490&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467210/original/file-20220606-24-utzinn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=490&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Facial recognition technology is also used to detect theft, prevent fraud and verify identities.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/iot-machine-learning-human-object-recognition-1214866462">Shutterstock</a></span>
</figcaption>
</figure>
<p>On May 9 2022, <a href="https://www.theguardian.com/us-news/2022/may/09/clearview-chicago-settlement-aclu">Clearview AI agreed</a> to stop selling access to its face database to individuals and businesses in the US, after the American Civil Liberties Union launched a lawsuit accusing Clearview AI of breaching an Illinois privacy law.</p>
<p>Over the last two years, data protection authorities in <a href="https://www.priv.gc.ca/en/opc-news/news-and-announcements/2020/nr-c_200706/">Canada</a>, <a href="https://www.cnil.fr/en/facial-recognition-cnil-orders-clearview-ai-stop-reusing-photographs-available-internet">France</a>, <a href="https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9751362">Italy</a>, <a href="https://noyb.eu/en/digital-rights-alliance-file-legal-complaints-against-facial-recognition-company-clearview-ai">Austria</a> and <a href="https://www.homodigitalis.gr/posts/9305">Greece</a> have all fined, investigated or banned Clearview AI from collecting images of people.</p>
<p>The future of Clearview AI in the UK is uncertain. The worst-case scenario for ordinary people and businesses would be if the UK government fails to take on board the concerns raised in response to its <a href="https://www.gov.uk/government/consultations/human-rights-act-reform-a-modern-bill-of-rights">consultation</a> on the Modern Bill of Rights. <a href="https://www.libertyhumanrights.org.uk/issue/plans-to-reform-the-human-rights-act-are-an-unashamed-power-grab/">Liberty</a> has warned of a potential human rights “power grab”.</p>
<p>The best outcome, in my opinion, would be for the UK government to scrap its plans for a <a href="https://consult.justice.gov.uk/human-rights/human-rights-act-reform/">Modern Bill of Rights</a>. This would also mean that UK courts should continue to take account of cases from the European Court of Human Rights as case law.</p>
<p>Unless <a href="https://uhra.herts.ac.uk/bitstream/handle/2299/25414/AI_facial_recognition_and_biometric_detection_balancing_consumer_rights_and_corporate_interests_Copy.pdf?sequence=1">laws</a> governing the use of facial recognition are adopted, police use of this technology risks <a href="https://www.libertyhumanrights.org.uk/wp-content/uploads/2020/02/Bridges-Court-of-Appeal-judgment.pdf">breaching privacy rights</a>, data protection and equality laws.</p><img src="https://counter.theconversation.com/content/183171/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Felipe Romero Moreno is affiliated with The British and Irish Law Education Technology Association (BILETA) <a href="https://www.bileta.org.uk/">https://www.bileta.org.uk/</a></span></em></p>Lawmakers around the world are making decisions about whether facial recognition technology is acceptable.Dr Felipe Romero-Moreno, Senior Lecturer and Research Tutor, School of Law, University of HertfordshireLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1834472022-05-24T06:01:41Z2022-05-24T06:01:41ZPay ‘with a smile or a wave’: why Mastercard’s new face recognition payment system raises concerns<figure><img src="https://images.theconversation.com/files/464421/original/file-20220520-19-yabkx8.jpeg?ixlib=rb-1.1.0&rect=0%2C31%2C3943%2C2787&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Mastercard’s <a href="https://www.mastercard.com/news/press/2022/may/with-a-smile-or-a-wave-paying-in-store-just-got-personal/">“smile to pay”</a> system, announced last week, is supposed to save time for customers at checkouts. It is being trialled in Brazil, with future pilots planned for the Middle East and Asia.</p>
<p>The company argues touch-less technology will help speed up transaction times, shorten lines in shops, heighten security and improve hygiene in businesses. But it raises concerns relating to customer privacy, data storage, crime risk and bias. </p>
<h2>How will it work?</h2>
<p>Mastercard’s biometric checkout system will provide customers with facial-recognition-based payments by linking the biometric authentication systems of a number of third-party companies with Mastercard’s own payment systems. </p>
<p>A Mastercard spokesperson told The Conversation it had already partnered with NEC, Payface, Aurus, Fujitsu Limited, PopID and PayByFace, with more providers to be named. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="The 'Fujitsu' logo in red is displayed on a building's side" src="https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464953/original/file-20220524-22-ga0v7l.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mastercard has partnered with Fujitsu, a massive information and communications technology firm offering many different products and services.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>They said “providers need to go through independent laboratory certification against the program criteria to be considered” – but details of these criteria aren’t yet publicly available.</p>
<p>According to <a href="https://www.siliconrepublic.com/business/mastercard-facial-recognition-biometric-payments">media</a> reports, customers will have to install an app which will take their picture and payment information. This information will be saved and stored on the third-party provider’s servers. </p>
<p>At the checkout, the customer’s face will be matched with the stored data. And once their identity is verified, funds will be deducted automatically. The “wave” option is a bit of a trick: as the customer watches the camera while waving, the camera still scans their face – not their hand.</p>
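<p>The matching step described above can be sketched in code. This is only an illustration, not Mastercard’s actual implementation (whose details are not public): face-verification systems typically reduce each face image to a numeric “embedding” vector, then approve a match only when the similarity between the live capture and the stored enrolment embedding clears a threshold. The tiny vectors and the 0.6 threshold below are made-up illustrative values.</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(live, enrolled, threshold=0.6):
    """Approve the payment only if the live face matches the enrolment.

    The threshold is illustrative; real systems tune it to balance
    false accepts against false rejects.
    """
    return cosine_similarity(live, enrolled) >= threshold

# Made-up 3-dimensional embeddings (real ones have hundreds of dimensions).
enrolled = [0.9, 0.1, 0.3]
same_person = [0.85, 0.15, 0.32]   # points in nearly the same direction
stranger = [-0.2, 0.9, -0.4]       # points in a very different direction

print(verify(same_person, enrolled))  # True: payment approved
print(verify(stranger, enrolled))     # False: payment declined
```

<p>In such a scheme, only the stored embedding and the threshold decide the outcome, which is why who holds the enrolment database, and how accurate the matching is, matter so much.</p>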
<p>Similar authentication technologies are used on smartphones (face ID) and in many airports around the world, including “<a href="https://www.abf.gov.au/entering-and-leaving-australia/smartgates/arrivals">smartgates</a>” in Australia.</p>
<p><a href="https://www.theverge.com/2017/9/4/16251304/kfc-china-alipay-ant-financial-smile-to-pay">China</a> started using biometrics-based checkout technology back in 2017. But Mastercard is among the first to launch such a system in Western markets – competing with the “pay with your palm” <a href="https://techcrunch.com/2020/09/29/amazon-introduces-the-amazon-one-a-way-to-pay-with-your-palm-when-entering-stores/">system</a> used at cashier-less Amazon Go and Whole Foods brick-and-mortar stores in the United States.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-facial-analysis-is-scientifically-questionable-should-we-be-using-it-for-border-control-155474">AI facial analysis is scientifically questionable. Should we be using it for border control?</a>
</strong>
</em>
</p>
<hr>
<h2>What we don’t know</h2>
<p>Much about the precise functioning of Mastercard’s system isn’t clear. How accurate will the facial recognition be? Who will have access to the databases of biometric data? </p>
<p>A Mastercard spokesperson told The Conversation customers’ data would be stored with the relevant biometric service provider in encrypted form, and removed when the customer “indicates they want to end their enrolment”. But how will the removal of data be enforced if Mastercard itself can’t access it?</p>
<p>Obviously, privacy protection is a major concern, especially when there are many potential third-party providers involved.</p>
<p>On the bright side, Mastercard’s <a href="https://www.investopedia.com/articles/markets/032615/how-mastercard-makes-its-money-ma.asp">customers</a> will have a choice as to whether or not they use the biometrics checkout system. However, it will be at retailers’ discretion whether they offer it at all, or whether they offer it as the only payment option.</p>
<p>Similar face-recognition technologies used in airports, and <a href="https://www.brookings.edu/research/police-surveillance-and-facial-recognition-why-data-privacy-is-an-imperative-for-communities-of-color/">by police</a>, often offer no choice. </p>
<p>We can assume Mastercard and the biometrics provider with whom they partner will require customer consent, as per most privacy laws. But will customers know what they are consenting to? </p>
<p>Ultimately, the biometric service providers Mastercard teams up with will decide how they use the data, for how long, where they store it, and who can access it. Mastercard will merely decide what providers are “good enough” to be accepted as partners, and the minimum standards they must adhere to. </p>
<p>Customers who want the convenience of this checkout service will have to consent to all the related data and privacy terms. And as reports have noted, there is potential for Mastercard to integrate the feature with loyalty schemes and make personalised recommendations <a href="https://www.cnbc.com/2022/05/17/mastercard-launches-tech-that-lets-you-pay-with-your-face-or-hand.html">based on purchases</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fingerprint-login-should-be-a-secure-defence-for-our-data-but-most-of-us-dont-use-it-properly-127442">Fingerprint login should be a secure defence for our data, but most of us don't use it properly</a>
</strong>
</em>
</p>
<hr>
<h2>Accuracy is a problem</h2>
<p>While the accuracy of face recognition technologies has previously been challenged, the current <em>best</em> facial authentication algorithms have an error rate of just 0.08%, according to tests by the <a href="https://github.com/usnistgov/frvt/blob/nist-pages/reports/1N/frvt_1N_report_2020_03_27.pdf">National Institute of Standards and Technology</a>. In some countries, even banks have <a href="https://techhq.com/2020/09/biometrics-the-most-secure-solution-for-banking/">become comfortable</a> relying on it to log users into their accounts.</p>
<p>Yet we can’t know how accurate the technologies used in Mastercard’s biometric checkout system will be. The algorithms underpinning a technology can work almost perfectly when trialled in a lab, but perform <a href="https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter">poorly</a> in real-life settings, where lighting, angles and other parameters are varied.</p>
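<p>The gap between lab and real-world performance comes down to the decision threshold: raising it turns away more impostors but also more genuine users, and the similarity scores a system produces in the lab shift under real-world lighting and angles. A minimal sketch of that trade-off, using made-up similarity scores rather than data from any real system:</p>

```python
# Hypothetical similarity scores for genuine pairs (the same person twice)
# and impostor pairs (two different people). Real evaluations, such as
# NIST's, estimate these distributions from millions of labelled pairs.
genuine = [0.91, 0.88, 0.75, 0.69, 0.95, 0.82]
impostor = [0.41, 0.35, 0.58, 0.22, 0.49, 0.62]

def error_rates(threshold):
    """False-reject rate (genuine pairs refused) and false-accept rate
    (impostor pairs admitted) at a given decision threshold."""
    frr = sum(score < threshold for score in genuine) / len(genuine)
    far = sum(score >= threshold for score in impostor) / len(impostor)
    return frr, far

for t in (0.50, 0.65, 0.80):
    frr, far = error_rates(t)
    print(f"threshold={t:.2f}  false-reject={frr:.2f}  false-accept={far:.2f}")
```

<p>Moving the threshold from 0.50 to 0.80 in this toy data eliminates false accepts but starts refusing genuine customers: that trade-off sits behind any single accuracy figure quoted for such systems.</p>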
<h2>Bias is another problem</h2>
<p>In a 2019 study, NIST <a href="https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf#page=5">found</a> that out of 189 facial recognition algorithms, the majority were biased. Specifically, they were less accurate on people from racial and ethnic minorities. </p>
<p>Even if the technology has improved in the past few years, it’s not foolproof. And we don’t know the extent to which Mastercard’s system has overcome this challenge.</p>
<p>If the software fails to recognise a customer at the checkout, they might end up disappointed, or even become irate – which would completely undo any promise of speed or convenience.</p>
<p>But if the technology misidentifies a person (for instance, John is recognised as Peter – or <a href="https://www.youtube.com/watch?v=e8-yupM-6Oc">twins are confused</a> for each other), then money could be taken from the wrong person’s account. How would such a situation be dealt with?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=617&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=617&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=617&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=776&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=776&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464424/original/file-20220520-19-5hfuvx.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=776&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">There’s no evidence facial recognition technology is infallible. These systems can misidentify and also have biases.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>Is the technology secure?</h2>
<p>We often hear about software and databases being hacked, even in <a href="https://www.csoonline.com/article/2130877/the-biggest-data-breaches-of-the-21st-century.html">cases of</a> supposedly very “secure” organisations. Despite Mastercard’s <a href="https://www.cnbc.com/2022/05/17/mastercard-launches-tech-that-lets-you-pay-with-your-face-or-hand.html">efforts</a> to ensure security, there’s no guarantee the third-party providers’ databases – with potentially millions of people’s biometric data – won’t be hacked.</p>
<p>In the wrong hands, this data could lead to <a href="https://www.comparitech.com/identity-theft-protection/identity-theft-statistics/">identity theft</a>, which is one of the fastest growing types of crime, and financial fraud. </p>
<h2>Do we want it?</h2>
<p>Mastercard suggests 74% of customers are in favour of using such technology, referencing a stat from its <a href="https://www.mastercard.com/news/ap/en/newsroom/press-releases/en/2020/april/mastercard-study-shows-consumers-moving-to-contactless-payments-for-everyday-purchases/">own study</a> – also used by <a href="https://www.mastercard.com/news/ap/en/newsroom/press-releases/en/2020/october/mastercard-idemia-and-matchmove-pilot-fingerprint-biometric-card-in-asia-to-enhance-security-and-safety-of-contactless-payments">business partner</a> Idemia (a company that sells biometric identification products). </p>
<p>But the report cited is vague and brief. Other studies show entirely different results. For example, <a href="https://www.getapp.com/resources/facial-recognition-technology/#how-comfortable-are-consumers-with-facial-recognition-technology">this study</a> suggests 69% of customers aren’t comfortable with face recognition tech being used in retail settings. And <a href="https://www.securitymagazine.com/articles/93521-are-consumers-comfortable-with-facial-recognition-it-depends-says-new-study">this one</a> shows only 16% trust such tech.</p>
<p>Also, if consumers knew the risks the technology poses, the number of those willing to use it might drop even lower.</p><img src="https://counter.theconversation.com/content/183447/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rita Matulionyte receives funding from Lithuanian Research Council for the research project 'Government Use of Facial Recognition Technologies: Legal Challenges and Possible Solutions' (2021-2023). She is affiliated with Australian Society for Computers and Law (AUSCL). </span></em></p>The technology is currently being trialled outside of Australia. It’s one of the first major attempts to bring it to western markets on a large scale.Rita Matulionyte, Senior Lecturer in Law, Macquarie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1767502022-02-24T16:36:27Z2022-02-24T16:36:27ZChildren struggle more than adults to recognize masked faces<figure><img src="https://images.theconversation.com/files/447669/original/file-20220221-28422-lavtez.jpg?ixlib=rb-1.1.0&rect=23%2C0%2C5161%2C2487&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Public health measures to curb the spread of COVID-19 require face masks in many settings.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Face perception is one of the most important visual abilities for humans. A quick glance at a person’s face provides us with rich and socially relevant information, including race, age, gender and emotional state. </p>
<p>Humans typically process faces as a unified whole instead of relying on specific facial features like eyes, nose or mouth. Scientists refer to this type of processing as “<a href="https://doi.org/10.1177/0956797611401753">holistic processing</a>,” and believe that it is essential to recognizing faces quickly and accurately. </p>
<p>Our research explores how mask wearing — the new reality imposed by the COVID-19 pandemic — changes how children and adults process and perceive faces.</p>
<h2>Facing difficulties</h2>
<p>Scientists have promoted mask-wearing as <a href="https://doi.org/10.1073/pnas.2014564118">one of the most important and effective tools to reduce COVID-19 transmission</a>. Many governments around the world required face masks in public places, especially where physical distancing was less feasible. Wearing a mask became prevalent in diverse social settings including on public transportation, in schools and at sporting events and concerts. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/masking-in-schools-a-doctor-and-covid-19-researcher-explains-how-it-keeps-children-safe-177239">Masking in schools: A doctor and COVID-19 researcher explains how it keeps children safe</a>
</strong>
</em>
</p>
<hr>
<p>However, because face masks conceal the lower part of the face, <a href="https://doi.org/10.1038/s41598-020-78986-9">our research group was not surprised to find a reduced ability to learn to recognize new faces when they are masked</a>. Notably, we found that when people had difficulty recognizing masked faces, there were changes in how faces were recognized. Masked faces were processed in a less holistic manner, and more in a feature-by-feature way. </p>
<p>Sensitivity to faces appears early in life. In fact, <a href="https://doi.org/10.1111/1467-9280.00179">newborns are already sensitive</a> to the spatial arrangement of a face, and stare more at visual patterns that resemble this organization (two eyes above a nose above a mouth). Despite this early sensitivity, face perception develops slowly, and some studies suggest that this ability is not fully matured even in teenagers. Given that their face processing system is not fully developed, we wondered if children might have even greater difficulties recognizing masked faces compared to adults. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A ridiculously adorable baby looks at their mother" src="https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/447702/original/file-20220222-28-1lix8sd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Infants show signs of facial recognition and are drawn to visual representations of how facial features are organized.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Impaired recognition</h2>
<p>To test this question, <a href="https://doi.org/10.1186/s41235-022-00360-2">we conducted a study using a version of the face recognition test used in adults, but specially designed for children</a>. This test includes increasing levels of difficulty: The child participants are presented with photographs of other children’s faces, and they need to choose which face they had seen during the study phase. To make the test more challenging, the faces are presented from different viewpoints, and external cues, like hair, are removed. </p>
<p>We tested 72 children. Half of them completed the regular version of the test, while the other half completed a “masked” version of the test, where the children in the photographs appeared to be wearing masks. Each child completed the test twice, once with faces presented upright and once with faces presented upside down. </p>
<p>Turning a face upside down stops people from processing the faces in a holistic way because the typical spatial organization of the face (two eyes above a nose above a mouth) is distorted. When we try to recognize faces without masks, we are much worse at doing so when the face is upside down. This is because we are no longer able to rely on our natural holistic processing system and, instead, we rely on a weaker feature-based strategy. </p>
<p>We assumed that if participants were just as poor at recognizing upright faces with masks as they were at recognizing upside-down faces with masks, it would mean that these masked faces are no longer being processed in a holistic way.</p>
<p>The results of the study were clear. First, we found that children were impaired in recognizing masked faces. The group of children who completed the masked version showed a 20 per cent reduction in their test score. This was even worse than what we had originally found in adults (15 per cent reduction), suggesting that children might find it even more difficult to recognize faces with masks than adults. </p>
<p>We also found evidence for a smaller “upside-down” effect for masked faces. This finding indicates that children processed the masked faces in a more feature-by-feature fashion, which might explain some of their difficulty in recognizing masked faces.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A girl wearing a surgical mask hangs upside down in a swing" src="https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/447703/original/file-20220222-27005-1qcsa24.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It takes us longer to process upside-down faces because features aren’t where they’re supposed to be.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Children’s recognition cues</h2>
<p>These new findings raise a number of important questions that scientists could address in the future. First, do children learn to use other cues to recognize friends and teachers — for example, by relying on people’s voices or movements? Second, would children become better at recognizing masked faces as they gained more experience with such faces? </p>
<p>This seems likely because previous research found that children’s brains are more adaptable and that experience can shape their visual abilities. Third, do difficulties recognizing masked faces affect children’s ability to communicate with others and form meaningful social relationships? </p>
<p>It is important to emphasize again that masks are one of the most effective tools in our effort to reduce the spread of COVID-19 and keep people safe and healthy. Despite the difficulties that adults and children experience with recognition of masked faces, any decisions about mandatory mask wearing should be informed by public health experts. </p>
<p>At the same time, it’s important to understand how masks may change how children perceive faces so that we can determine whether children are better able to adapt to masks, and which cues or strategies help to improve recognition of masked faces.</p>
<p class="fine-print"><em><span>Erez Freud receives funding from the Natural Sciences and Engineering Research Council and from the Vision Science to Applications (VISTA) program funded by the Canada First Research Excellence Fund (CFREF, 2016–2023).</span></em></p>
<p class="fine-print"><em><span>Shayna Rosenbaum receives funding from the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC), and from the Vision Science to Applications (VISTA) program funded by the Canada First Research Excellence Fund (CFREF, 2016–2023).</span></em></p>
<p>We rely on the spatial arrangement of facial features to process faces, and wearing masks interferes with that — especially for children.</p>
<p>Erez Freud, Assistant Professor, Psychology, York University, Canada; R. Shayna Rosenbaum, Professor and York Research Chair, Psychology, York University, Canada. Licensed as Creative Commons – attribution, no derivatives.</p>
<h2>Government agencies are tapping a facial recognition company to prove you’re you – here’s why that raises concerns about privacy, accuracy and fairness</h2>
<figure><img src="https://images.theconversation.com/files/443239/original/file-20220128-19-ghy893.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C8000%2C5317&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Beginning this summer, you might need to upload a selfie and a photo ID to a private company, ID.me, if you want to file your taxes online.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/young-woman-using-smartphone-while-working-with-royalty-free-image/1224140562">Oscar Wong/Moment via Getty Images</a></span></figcaption></figure><p>The U.S.
Internal Revenue Service is planning to <a href="https://www.irs.gov/newsroom/irs-unveils-new-online-identity-verification-process-for-accessing-self-help-tools">require citizens to create accounts</a> with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with <a href="https://www.id.me/">ID.me</a> to authenticate the identities of people accessing services.</p>
<p>The IRS’s move is aimed at cutting down on identity theft, a crime that <a href="https://www.ftc.gov/system/files/documents/reports/consumer-sentinel-network-data-book-2020/csn_annual_data_book_2020.pdf">affects millions of Americans</a>. The IRS, in particular, has reported numerous fraudulent tax filings from people claiming to be others, and <a href="https://www.cnbc.com/2021/12/21/criminals-have-stolen-nearly-100-billion-in-covid-relief-funds-secret-service.html">fraud in many of the programs</a> that were administered as part of the <a href="https://www.whitehouse.gov/american-rescue-plan/">American Rescue Plan</a> has been a major concern to the government.</p>
<p>The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to <a href="https://www.bloomberg.com/news/articles/2022-01-28/treasury-weighing-id-me-alternatives-over-privacy-concerns?sref=Hjm5biAW">revisit its decision</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a webpage with the IRS logo in the top left corner and buttons for creating or logging into an account" src="https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=309&fit=crop&dpr=1 600w, https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=309&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=309&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=388&fit=crop&dpr=1 754w, https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=388&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/443053/original/file-20220127-9782-2f0nex.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=388&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Here’s what greets you when you click the link to sign into your IRS account. If current plans remain in place, the blue button will go away in the summer of 2022.</span>
<span class="attribution"><a class="source" href="https://sa.www4.irs.gov/secureaccess/ui/?TYPE=33554433&REALMOID=06-0006b18e-628e-1187-a229-7c2b0ad00000&GUID=&SMAUTHREASON=0&METHOD=GET&SMAGENTNAME=-SM-u0ktItgVFneUJDzkQ7tjvLYXyclDooCJJ7%2bjXGjg3YC5id2x9riHE98hoVgd1BBv&TARGET=-SM-http%3a%2f%2fsa%2ewww4%2eirs%2egov%2fola%2f">Screenshot, IRS sign-in webpage</a></span>
</figcaption>
</figure>
<p>As a <a href="https://scholar.google.com/citations?user=JNPbTdIAAAAJ&hl=en">computer science researcher</a> and the chair of the <a href="https://www.acm.org/public-policy/tpc">Global Technology Policy Council of the Association for Computing Machinery</a>, I have been involved in exploring both the uses and the potential flaws of government facial recognition technology. There have been a great number of concerns raised over the general <a href="https://theconversation.com/feds-are-increasing-use-of-facial-recognition-systems-despite-calls-for-a-moratorium-145913">use of this technology in policing and other government functions</a>, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well.</p>
<h2>ID dot who?</h2>
<p>ID.me is a private company that <a href="https://www.bloomberg.com/news/features/2022-01-20/cybersecurity-company-id-me-is-becoming-government-s-digital-gatekeeper?sref=Hjm5biAW">formed as TroopSwap</a>, a site that offered retail discounts to members of the armed forces. As part of that effort, the company created an ID service so that military staff who qualified for discounts at various companies could prove they were, indeed, service members. In 2013, the company renamed itself ID.me and started to market its ID service more broadly. The U.S. Department of Veterans Affairs began using the technology in 2016, the company’s first government use.</p>
<p>To use ID.me, a user installs a mobile phone app and takes a selfie – a photo of their own face. ID.me then compares that image to various IDs that it obtains either through open records or through information that applicants provide through the app. If it finds a match, it creates an account and uses facial recognition to identify the user from then on. If it cannot perform a match, users can contact a “trusted referee” and have a video call to fix the problem.</p>
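<p>The verification flow described above can be sketched in code. This is a purely illustrative outline, not ID.me’s actual system: the similarity measure, threshold value and data shapes are all assumptions made for the sake of the example.</p>

```python
# Hypothetical sketch of a selfie-vs-ID verification flow. ID.me's real
# implementation is proprietary; the threshold, the embedding comparison
# and the escalation logic here are illustrative assumptions only.
import math

MATCH_THRESHOLD = 0.9  # assumed similarity cutoff, not a real ID.me value

def cosine_similarity(a, b):
    """Toy stand-in for comparing two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify_identity(selfie_embedding, id_photo_embeddings):
    """Automated match first; fall back to a human 'trusted referee' call,
    mirroring the two-step flow the article describes."""
    best = max((cosine_similarity(selfie_embedding, e)
                for e in id_photo_embeddings), default=0.0)
    if best >= MATCH_THRESHOLD:
        return {"verified": True, "needs_referee": False}
    # No automated match: escalate to human review.
    return {"verified": False, "needs_referee": True}

# A near-identical embedding pair verifies; a dissimilar one escalates.
print(verify_identity([1.0, 0.0], [[0.99, 0.05]]))
print(verify_identity([1.0, 0.0], [[0.0, 1.0]]))
```

<p>Note that every design choice in such a pipeline – the threshold, the embedding model, the fallback path – affects who gets locked out, which is precisely where the concerns below arise.</p>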
<p>A number of companies and <a href="https://www.usnews.com/news/technology/articles/2021-07-22/factbox-states-using-idme-rival-identity-check-tools-for-jobless-claims">states</a> have been using ID.me for several years. News reports have documented <a href="https://www.cpr.org/2021/05/10/unemployment-payouts-have-dropped-40-percent-is-id-me-stopping-scams-or-blocking-benefits/">problems people have had with ID.me</a> failing to authenticate them, and with the company’s customer support in resolving those problems. Also, the system’s technology requirements <a href="https://www.usnews.com/news/best-states/colorado/articles/2021-05-02/system-for-unemployment-benefits-exposes-digital-divide">could widen the digital divide</a>, making it harder for many of the people who need government services the most to access them. </p>
<p>But much of the concern about the IRS and other federal agencies using ID.me revolves around its use of facial recognition technology and collection of biometric data.</p>
<h2>Accuracy and bias</h2>
<p>To start with, there are a number of general concerns about the accuracy of facial recognition technologies and whether there are <a href="https://theconversation.com/ai-technologies-like-police-facial-recognition-discriminate-against-people-of-colour-143227">discriminatory biases</a> in their accuracy. These have led the Association for Computing Machinery, among other organizations, to <a href="https://theconversation.com/feds-are-increasing-use-of-facial-recognition-systems-despite-calls-for-a-moratorium-145913">call for a moratorium on government use</a> of facial recognition technology. </p>
<p>A study of commercial and academic facial recognition algorithms by the National Institute of Standards and Technology found that U.S. facial-matching algorithms generally have <a href="https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software">higher false positive rates for Asian and Black faces</a> than for white faces, although recent results have improved. ID.me claims that there is <a href="https://insights.id.me/viewpoint/no-identity-left-behind-american-increased-access-online-services/">no racial bias</a> in its face-matching verification process. </p>
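<p>The metric at issue in the NIST study, a false positive rate broken out by demographic group, is simple to compute. The sketch below uses made-up numbers purely to show the calculation; see the linked NIST report for the real figures.</p>

```python
# Illustrative per-group false positive rate calculation, the kind of
# breakdown the NIST study reports. All data here is fabricated for
# demonstration purposes.
from collections import defaultdict

def false_positive_rates(trials):
    """trials: iterable of (group, is_impostor_pair, system_said_match).

    A false positive is an impostor pair (two different people) that the
    system wrongly declares a match."""
    impostor = defaultdict(int)   # impostor comparisons seen per group
    false_pos = defaultdict(int)  # of those, how many were wrongly matched
    for group, is_impostor, said_match in trials:
        if is_impostor:
            impostor[group] += 1
            if said_match:
                false_pos[group] += 1
    return {g: false_pos[g] / impostor[g] for g in impostor}

# Hypothetical data: group B suffers five times the false match rate of A.
trials = (
    [("A", True, False)] * 98 + [("A", True, True)] * 2 +
    [("B", True, False)] * 90 + [("B", True, True)] * 10
)
print(false_positive_rates(trials))  # {'A': 0.02, 'B': 0.1}
```

<p>In an identity-verification setting, a false positive means someone else could be verified as you – which is why unequal false positive rates across groups translate directly into unequal security.</p>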
<p>There are many other conditions that can also cause inaccuracy – physical changes caused by illness or an accident, hair loss due to chemotherapy, changes in appearance that come with aging, gender transitions and others. How any company, including ID.me, handles such situations is unclear, and this is one issue that has raised concerns. Imagine having a disfiguring accident and not being able to log into your medical insurance company’s website because of damage to your face.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BqQT4sIOYA0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Facial recognition technology is spreading fast. Is the technology – and society – ready?</span></figcaption>
</figure>
<h2>Data privacy</h2>
<p>There are other issues that go beyond the question of just how well the algorithm works. As part of its process, ID.me collects a very large amount of personal information. Its privacy policy is long and difficult to read, but in essence, while ID.me doesn’t share most of the personal information it collects, it does share information about users’ internet use and website visits with various partners. The nature of these exchanges is not immediately apparent. </p>
<p>So one question that arises is what level of information the company shares with the government, and whether the information can be used to track U.S. citizens in ways that skirt the regulatory boundaries that apply to government agencies. Privacy advocates on both the left and right have long opposed any form of a mandatory uniform government identification card. Does handing off the identification to a private company allow the government to essentially achieve this through subterfuge? It’s not difficult to imagine that some states – and maybe eventually the federal government – could insist on an identification from ID.me or one of its competitors to access government services, get medical coverage and even to vote. </p>
<p>As Joy Buolamwini, an MIT AI researcher and founder of the <a href="https://www.ajl.org/">Algorithmic Justice League</a>, argued, beyond accuracy and bias issues is the question of <a href="https://www.theatlantic.com/ideas/archive/2022/01/irs-should-stop-using-facial-recognition/621386/">the right not to use biometric technology</a>. “Government pressure on citizens to share their biometric data with the government affects all of us — no matter your race, gender, or political affiliations,” she wrote.</p>
<h2>Too many unknowns for comfort</h2>
<p>Another issue is who audits ID.me for the security of its applications. While no one is accusing ID.me of bad practices, security researchers are worried about how the company may protect the enormous amount of personal information it will end up with. Imagine a security breach that released the IRS information for millions of taxpayers. In the fast-changing world of cybersecurity, with threats ranging from individual hacking to international criminal activities, experts would like assurance that a company provided with so much personal information is using state-of-the-art security and keeping it up to date. </p>
<p>Much of the questioning of the IRS decision comes because these are early days for government use of private companies to provide biometric security, and some of the details are still not fully explained. Even if you grant that the IRS use of the technology is appropriately limited, this is potentially the start of what could quickly snowball to many government agencies using commercial facial recognition companies to get around regulations that were put in place specifically to rein in government powers. </p>
<p>The U.S. stands at the edge of a slippery slope, and while that doesn’t mean facial recognition technology shouldn’t be used at all, I believe it does mean that the government should put a lot more care and due diligence into exploring the terrain ahead before taking those critical first steps.</p>
<p class="fine-print"><em><span>James Hendler receives funding from IBM, DARPA, and the NSF. He is a Professor at Rensselaer Polytechnic Institute, affiliated with the Association for Computing Machinery (ACM) and consults or has consulted for a number of government agencies. The opinions expressed in this piece are solely those of the author and do not necessarily represent the opinions of the ACM or any of the other organizations with which he is affiliated.</span></em></p>
<p>Federal and state governments are turning to a facial recognition company to ensure that people accessing services are who they say they are. The move promises to cut down on fraud, but at what cost?</p>
<p>James Hendler, Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute. Licensed as Creative Commons – attribution, no derivatives.</p>