tag:theconversation.com,2011:/us/topics/algorithms-at-work-44799/articlesalgorithms at work – The Conversation2023-10-10T17:00:39Ztag:theconversation.com,2011:article/2145252023-10-10T17:00:39Z2023-10-10T17:00:39ZAI: we may not need a new human right to protect us from decisions by algorithms – the laws already exist<figure><img src="https://images.theconversation.com/files/552765/original/file-20231009-17-qwgeso.jpg?ixlib=rb-1.1.0&rect=32%2C8%2C5431%2C3628&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Hiring algorithms could filter candidates before interviews even take place.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/back-view-female-candidate-apply-position-1191901783">fizkes / Shutterstock</a></span></figcaption></figure><p>There are risks and harms that come with relying on algorithms to make decisions. People are already feeling the impact of doing so. Whether <a href="https://www.science.org/doi/10.1126/science.aax2342">reinforcing racial biases</a> or <a href="https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election">spreading misinformation</a>, many technologies that are labelled as artificial intelligence (AI) help amplify age-old malfunctions of the human condition.</p>
<p>In light of such problems, calls have been made to <a href="https://academic.oup.com/ejil/article-abstract/32/4/1249/6448877">create a new human right</a> against being subject to automated decision-making (ADM), which the UK <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/">Information Commissioner’s Office (ICO) describes</a> as “the process of making a decision by automated means without any human involvement”. </p>
<p>Such systems rely on being exposed to data, whether factual, inferred, or created via profiling. But if effective regulation of ADM is the goal, creating new laws is probably not the way to go. </p>
<p><a href="https://academic.oup.com/ijlit/article/31/2/114/7227602">Our research</a> suggests we should consider a different approach. Legal frameworks for data protection, non-discrimination, and human rights already offer protection to people from the negative impacts of ADM. Rules from these bodies of law can also guide regulation more generally. We could therefore focus on ensuring that the laws we already have are properly implemented.</p>
<h2>Current harms and future risks</h2>
<p>Automated decision making is being used in various ways – and there are more applications on the way. Areas subject to automation include the processing of asylum and welfare support applications and the deployment of lethal military technology. But even where ADM is considered to bring benefits, it can also have negative effects.</p>
<p>The criminalisation of children is one possible risk of using certain ADM systems, where “<a href="https://academic.oup.com/hrlr/article-abstract/22/1/ngab028/6438104">predictive risk models</a>” used in child protection services can result in vulnerable children being further discriminated against. ADM can also make securing work harder – a hiring algorithm <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1468-2230.12759">developed by Amazon</a> “scored female applicants more poorly than their equivalently qualified male counterparts.” </p>
<p>In several countries, including the UK, courts also rely on ADM. For example, it’s used to make sentencing recommendations, calculate the <a href="https://theconversation.com/a-black-box-ai-system-has-been-influencing-criminal-justice-decisions-for-over-two-decades-its-time-to-open-it-up-200594">probability of a person reoffending</a>, and assess the flight risk of defendants, which determines whether they will be released on bail pending trial. </p>
<p>These applications can result in unfair processes and unjust outcomes for many reasons. This could happen because a judge unwittingly accepts erroneous results produced by ADM, or because no one is able to understand how or why a particular system arrived at its conclusion. </p>
<p>Historically, human prejudices <a href="https://www.science.org/doi/10.1126/science.aax2342">have also been embedded</a> in the design of such software. This is because the algorithms are trained on real-world data, often from the internet. Exposing the system to this information may help improve its performance at a task from one perspective, but the data also reflects people’s biases. This means that members of marginalised groups can end up being punished, in the way we saw earlier when women were disadvantaged by a hiring algorithm. </p>
<h2>Protection and regulation</h2>
<p>The urge to adopt new legal rules is perhaps understandable considering the stakes and the potential harm ADM could and does do. However, as regards creating a new human right, negotiating new laws takes time, money and resources. And once any new law comes into force, it can take decades for its meaning to be settled in practice.</p>
<p>Given that many relevant laws already exist, it’s unclear whether a new human right would significantly influence how systems for automated decision making are designed and deployed.</p>
<p>Yet without tangible implementation and enforcement, the content of these existing laws can become hollow. Effective governance of ADM by these laws requires <a href="https://oecd.ai/en/catalogue/tools/algorithmic-impact-assessment-tool">impact assessments of automated decisions</a>, human supervision of ADM systems, and complaints processes. These should all be mandated. A thorough impact assessment will be able to identify, for example, unintended harms to individuals and groups, and help shape appropriate mitigation measures. </p>
<p>Yet these information gathering measures need to be <a href="https://judicature.duke.edu/wp-content/uploads/2021/04/Sales_Spring2021.pdf">accompanied by sufficient oversight</a> by a competent, resourced, and – possibly – public body. This would help uphold democratic accountability. Such bodies would also be tasked with ensuring that people negatively affected by ADM could file complaints that are adequately dealt with. These steps would make current laws on data protection, non-discrimination, and human rights more meaningful and effective in protecting individuals and groups from the harms of automated decisions.</p>
<p>The law across many areas is often criticised – sometimes rightly – for struggling to adapt to change. But a merit of the law in general is its ability to provide recourse to people who have experienced wrongdoing. It provides principled teeth to take a bite out of unprincipled conduct. </p>
<p>This capacity is significant for another reason. Corporate spin regarding digital technologies matches how they are often portrayed in public. Commentary, too, frequently tends towards “<a href="https://www.tandfonline.com/doi/full/10.1080/13642987.2023.2227100">hyperbole, alarmism, or exaggeration</a>”. This hype complements practices such as ethics-washing that provide a means of feigning commitment to regulation, while ignoring the very laws capable of providing it.</p>
<p>Chatter about the likes of <a href="https://www.unesco.org/en/artificial-intelligence/recommendation-ethics">“AI ethics”</a> greases the wheels of these strategies, sometimes turning nuanced and significant philosophical insights into box-ticking exercises. Ethics are an essential component of guiding the design, development, and deployment of automated decision making. However, the language of “ethics” can also be used by spin doctors to <a href="https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/">distract us</a>.</p>
<p>If anything here is worth remembering, it’s that ADM is not only a future problem, it’s a present problem. The laws that exist now can be used to address pressing issues stemming from this technology. </p>
<p>Whether this happens depends on public and private bodies improving the procedural machinery needed to enforce and oversee legal rules. These rules, many of which have been around for a while, just need a bit more life breathed into them to function effectively.</p>
<p class="fine-print"><em><span>Richard Mackenzie-Gray Scott receives funding from the British Academy, and is Visiting Professor at the Center for Technology and Society, Getulio Vargas Foundation.</span></em></p><p class="fine-print"><em><span>Elena Abrusci does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Effective implementation of existing law can protect us from the risks posed by AI algorithms.Elena Abrusci, Senior lecturer in Law, Brunel University LondonRichard Mackenzie-Gray Scott, Postdoctoral Fellow, Bonavero Institute of Human Rights, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1856682022-07-26T11:57:29Z2022-07-26T11:57:29ZThere is a lot of antisemitic hate speech on social media – and algorithms are partly to blame<figure><img src="https://images.theconversation.com/files/475238/original/file-20220720-25-ycf9uk.jpg?ixlib=rb-1.1.0&rect=7%2C0%2C4747%2C3078&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media is being used all over the world to express hatred of Jews.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/online-messaging-social-media-auto-post-production-royalty-free-image/1307414278?adppopup=true">Urupong/ iStock / Getty Images Plus</a></span></figcaption></figure><p>Antisemitic incidents have shown a sharp rise in the United States. The Anti-Defamation League, a New York-based Jewish civil rights group that has been tracking cases since 1979, found that <a href="https://www.adl.org/resources/press-release/adl-audit-finds-antisemitic-incidents-united-states-reached-all-time-high">there were 2,717 incidents in 2021</a>. This represents an increase of 34% over 2020. In Europe, the European Commission <a href="https://www.isdglobal.org/wp-content/uploads/2021/06/the-rise-of-antisemitism-during-the-pandemic.pdf">found a sevenfold increase</a> in antisemitic postings across French language accounts, and an over thirteenfold increase in antisemitic comments within German channels during the pandemic. </p>
<p>Together with other scholars who study antisemitism, we started to look at how <a href="https://doi.org/10.4324/9781003200499-2">technology and the business model of the social media platforms were driving antisemitism</a>.
A 2022 book that we co-edited, “<a href="https://doi.org/10.4324/9781003200499">Antisemitism on Social Media</a>,” offers perspectives from the U.S., Germany, Denmark, Israel, India, U.K. and Sweden on how algorithms on Facebook, Twitter, TikTok and YouTube contribute to spreading antisemitism.</p>
<h2>What does antisemitism on social media look like?</h2>
<p>Hatred against Jews on social media is often expressed in stereotypical depictions of Jews that stem from Nazi propaganda or in denial of the Holocaust. </p>
<p>Antisemitic social media posts also express hatred toward Jews that is based on the notion that all Jews are <a href="https://www.vox.com/2018/11/20/18080010/zionism-israel-palestine">Zionist</a> – that is, they are part of the national movement supporting Israel as a Jewish state – and Zionism is constructed as innately evil.</p>
<p>However, today’s antisemitism is not only directed at Israelis, and it does not always take the form of traditional slogans or hate speech. Contemporary antisemitism manifests itself in various forms such as GIFs, memes, vlogs, comments and reactions such as likes and dislikes on the platforms. </p>
<p>Scholar <a href="https://pure.au.dk/portal/en/persons/sophie-schmalenberger(9ff053c5-5bcf-44a1-b4b9-2dd472196ab1).html">Sophie Schmalenberger</a> found that antisemitism is expressed not just in blunt, hurtful language and images on social media, but also in coded forms that may easily remain undetected. For example, on Facebook, Germany’s radical right-wing party Alternative für Deutschland, or AfD, <a href="https://doi.org/10.4324/9781003200499-4">omits mention of the Holocaust</a> in posts about the Second World War. It also uses antisemitic language and rhetoric that present antisemitism as acceptable.</p>
<p>Antisemitism may take on subtle forms such as in emojis. The emoji combination of a Star of David, a Jewish symbol, and a rat resembles the <a href="https://www.philaholocaustmemorial.org/antisemitism-explained/">Nazi propaganda likening Jews to vermin</a>. In Nazi Germany, the constant repetition and normalization of such depictions led to the dehumanization of Jews and eventually the acceptance of genocide. </p>
<p>Other forms of antisemitism on social media are <a href="https://doi.org/10.4324/9781003200499-6">antisemitic troll attacks</a>: Users organize to disrupt online events by flooding them with messages that deny the Holocaust or spread <a href="https://doi.org/10.4324/9781003200499-3">conspiracy myths as QAnon does</a>. </p>
<p>Scholars <a href="https://www.wilsoncenter.org/person/gabriel-weimann">Gabi Weimann</a> and <a href="https://il.linkedin.com/in/natalie-masri-5b4893205?original_referer=https%3A%2F%2Fwww.google.com%2F">Natalie Masri</a> have studied TikTok. They found that kids and young adults are especially in danger of being exposed, often unwittingly, to antisemitism on the <a href="https://doi.org/10.4324/9781003200499-11">very popular and fast-growing platform</a>, which already counts over 1 billion users worldwide. Some of the content that is posted combines clips of footage from Nazi Germany with new text belittling or making fun of the victims of the Holocaust. </p>
<p>The continuous exposure to antisemitic content at a young age, scholars say, can lead to both normalization of the content and radicalization of the TikTok viewer. </p>
<h2>Algorithmic antisemitism</h2>
<p>Antisemitism is fueled by algorithms, which are programmed to register engagement. This ensures that the more engagement a post receives, the more users see it. Engagement includes all reactions such as likes and dislikes, shares and comments, including countercomments. The problem is that reactions to posts also <a href="https://gizmodo.com/former-facebook-exec-you-don-t-realize-it-but-you-are-1821181133">trigger rewarding dopamine hits in users</a>. Because outrageous content creates the most engagement, users feel more encouraged to post hateful content.</p>
<p>However, even social media users who post critical comments on hateful content may not realize that, because of the way algorithms work, they end up contributing to its spread. </p>
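<p>As a purely illustrative sketch of the mechanism described above – the weights, field names and functions below are assumptions for illustration, not any platform’s actual code – an engagement-driven feed might rank posts something like this, with critical counter-comments counting towards a post’s score just like approving reactions:</p>
<pre><code>
# Toy illustration of engagement-based ranking (not any platform's real algorithm).
# Every reaction counts as engagement, so even critical counter-comments
# push a hateful post further up the feed.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int = 0
    dislikes: int = 0
    shares: int = 0
    comments: int = 0  # includes counter-comments criticising the post

    def engagement(self) -> float:
        # Illustrative weights: every reaction raises the score, regardless of sentiment.
        return self.likes + self.dislikes + 2.0 * self.shares + 1.5 * self.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # The more engagement a post receives, the more users are shown it.
    return sorted(posts, key=lambda p: p.engagement(), reverse=True)
</code></pre>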
<p>Research on video recommendations on <a href="https://doi.org/10.1057/s41599-020-00550-7">YouTube also shows how algorithms gradually lead users to more radical content</a>. Algorithmic antisemitism is thus a form of what criminologist <a href="https://hatelab.net/people/">Matthew Williams</a> calls “algorithmic hate” in his book “<a href="https://thescienceofhate.com/">The Science of Hate</a>.” </p>
<h2>What can be done about it?</h2>
<p>To combat antisemitism on social media, strategies need to be evidence based. But neither social media companies nor researchers have devoted enough time and resources to this issue so far.</p>
<p>The study of antisemitism on social media poses unique challenges to researchers: They need access to the data and funding to be able to help develop effective counterstrategies. So far, scholars depend on the cooperation of the social media companies to <a href="https://undark.org/2022/04/18/why-researchers-want-broader-access-to-social-media-data/">access the data, which is mostly unregulated</a>. </p>
<p>Social media companies have implemented guidelines on <a href="https://doi.org/10.4324/9781003200499-14">reporting antisemitism on social media</a>, and civil society organizations have been demanding action against algorithmic antisemitism. However, the measures taken so far are woefully inadequate, if not dangerous. For example, counterspeech, which is often promoted as a possible strategy, tends to amplify hateful content. </p>
<p>To meaningfully address antisemitic hate speech, social media companies would need to change the algorithms that collect and curate user data for advertisement companies, which make up a large part of their revenue.</p>
<p>There is a global, borderless spread of antisemitic posts on social media happening on an unprecedented scale. We believe it will require the collective efforts of social media companies, researchers and civil society to combat this problem.</p>
<hr>
<p><a href="https://theconversation.com/au/topics/social-media-and-society-125586" target="_blank"><img src="https://images.theconversation.com/files/479539/original/file-20220817-20-g5jxhm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=144&fit=crop&dpr=1" width="100%"></a></p><img src="https://counter.theconversation.com/content/185668/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Antisemitism today does not always appear in the form of traditional hate speech. It manifests in GIFs, memes, vlogs, comments and reactions on social media platforms.Sabine von Mering, Director, Center for German and European Studies, Brandeis UniversityMonika Hübscher, Research Associate, PhD Candidate, University of Duisburg-EssenLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1715902021-11-21T18:56:06Z2021-11-21T18:56:06ZAlgorithms can decide your marks, your work prospects and your financial security. How do you know they’re fair?<p>Algorithms are becoming commonplace. They can determine employment prospects, <a href="https://www.afr.com/companies/financial-services/banks-warned-using-ai-in-loan-assessments-could-awaken-a-zombie-20210615-p5814i">financial security</a> and more. The use of algorithms can be controversial – for example, <a href="https://www.innovationaus.com/robodebt-was-technology-beta-testing-on-most-vulnerable-citizens/">robodebt</a>, as the Australian government’s flawed online welfare compliance system came to be known. </p>
<p>Algorithms are increasingly being used to make decisions that have a lasting impact on our current and future lives. </p>
<p>Some of the greatest impacts of algorithmic decision-making are in education. If you have anything to do with an Australian school or a university, at some stage an algorithm will make a decision that matters for you. </p>
<p>So what sort of decisions might involve algorithms? Some decisions will involve the next question for school students to answer on a test, such as the <a href="https://nap.edu.au/online-assessment/research-and-development/tailored-tests">online provision of NAPLAN</a>. Some algorithms support <a href="https://theconversation.com/artificial-intelligence-holds-great-potential-for-both-students-and-teachers-but-only-if-used-wisely-81024">human decision-making in universities</a>, such as identifying students at risk of failing a subject. Others take the human out of the loop, like some forms of <a href="https://theconversation.com/online-exam-monitoring-is-now-common-in-australian-universities-but-is-it-here-to-stay-159074">online exam supervision</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/unis-are-using-artificial-intelligence-to-keep-students-sitting-exams-honest-but-this-creates-its-own-problems-170708">Unis are using artificial intelligence to keep students sitting exams honest. But this creates its own problems</a>
</strong>
</em>
</p>
<hr>
<h2>How do algorithms work?</h2>
<p>Despite their pervasive impacts on our lives, it is often difficult to understand how algorithms work, why they have been designed, and why they are used. As algorithms become a key part of decision-making in education – and many other aspects of our lives – people need to know two things:</p>
<ol>
<li><p>how algorithms work</p></li>
<li><p>the kinds of trade-offs that are made in decision-making using algorithms.</p></li>
</ol>
<p>In research to explore these two issues, we developed <a href="https://www.edufuturesstudio.com/uk-exam-algorithm-game">an algorithm game</a> using participatory methodologies to involve diverse stakeholders in the research. The process becomes a form of collective experimentation to encourage new perspectives and insights into an issue. </p>
<p>Our algorithm game is based on the <a href="https://www.theverge.com/2020/8/17/21372045/uk-a-level-results-algorithm-biased-coronavirus-covid-19-pandemic-university-applications">UK exam controversy</a> in 2020. During COVID-19 lockdowns, an <a href="https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/">algorithm was used to determine grades</a> for students wishing to attend university. The algorithm predicted grades for some students that were far lower than expected. In the face of protests, the algorithm was eventually scrapped. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1298725638174457857"}"></div></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/scotlands-exam-result-crisis-assessment-and-social-justice-in-a-time-of-covid-19-144248">Scotland's exam result crisis: assessment and social justice in a time of COVID-19</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://education-futures-studio.sydney.edu.au/education-futures-studio/workbench/">Our interdisciplinary team</a> co-designed the UK exam algorithm game over a series of two workshops and multiple meetings this year. Our workshops included students, data scientists, ethicists and social scientists. Such interdisciplinary perspectives are vital to understand the range of social, ethical and technical implications of algorithms in education. </p>
<h2>Algorithms make trade-offs, so transparency is needed</h2>
<p>The UK example highlights key issues with using algorithms in society, including issues of transparency and bias in data. These issues matter everywhere, including <a href="https://www.sbs.com.au/news/scott-morrison-warns-high-tech-race-must-consider-ethical-implications-for-human-rights/09268bbc-d7a9-4dd6-81f9-f531a59c887c">Australia</a>.</p>
<p>We designed the algorithm game to help people develop the tools to have more of a say in shaping the world algorithms are creating. Algorithm “games” invite people to play with and learn about the parameters of how an algorithm operates. Examples include games that show people how algorithms are used in <a href="https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/">criminal sentencing</a>, or can help to <a href="https://automating.nyc/#toyAlgo">predict fire risk in buildings</a>.</p>
<p>There is a growing public awareness that algorithms, especially those used in forms of artificial intelligence, need to be understood as raising <a href="https://www.nature.com/articles/d41586-018-05469-3">issues of fairness</a>. But while everyone may have a vernacular understanding of what is fair or unfair, when algorithms are used numerous trade-offs are involved. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-robodebt-to-racism-what-can-go-wrong-when-governments-let-algorithms-make-the-decisions-132594">From robodebt to racism: what can go wrong when governments let algorithms make the decisions</a>
</strong>
</em>
</p>
<hr>
<p>In our algorithm game, we take people through a series of problems where the solution to a fairness problem simply introduces a new one. For example, the UK algorithm did not work very well for predicting the grades of students in schools where smaller numbers of students took certain subjects. This was unfair for these students. </p>
<p>The solution meant the algorithm was not used for these often <a href="https://ffteducationdatalab.org.uk/2020/08/a-level-results-2020-why-independent-schools-have-done-well-out-of-this-years-awarding-process/">very privileged schools</a>. These students then received grades predicted by their teachers. But these grades were mostly higher than the algorithm-generated grades received by students in larger schools, which were more often government comprehensive schools. So the decision was fair for students in small schools, but unfair for those in larger schools who had grades allocated by the algorithm. </p>
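<p>A hypothetical sketch of this kind of rule – the threshold and weights below are invented for illustration and are not the model actually used in the UK – shows how the trade-off arises: small cohorts fall back on teacher predictions, while larger cohorts are pulled towards their school’s historical results:</p>
<pre><code>
# Hypothetical grade-awarding rule, for illustration only (not the actual UK model).
SMALL_COHORT_THRESHOLD = 15   # assumed cutoff
HISTORY_WEIGHT = 0.6          # assumed weight on the school's past results

def award_grade(teacher_grade: float, cohort_size: int, school_historical_mean: float) -> float:
    if cohort_size >= SMALL_COHORT_THRESHOLD:
        # Larger (often government comprehensive) classes: standardise against the
        # school's past performance, which can drag down strong individual students.
        return (1 - HISTORY_WEIGHT) * teacher_grade + HISTORY_WEIGHT * school_historical_mean
    # Smaller classes (more common in very privileged schools): use the teacher's prediction.
    return teacher_grade
</code></pre>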
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1296435366153596928"}"></div></p>
<p>What we try to show in our game is that it is not possible to have a perfect outcome, and that neither humans nor algorithms will make a set of choices that are fair for everyone. This means we have to make decisions about which values matter when we use algorithms. </p>
<h2>Public must have a say to balance the power of EdTech</h2>
<p>While our algorithm game focuses on the use of an algorithm developed by a government, algorithms in education are commonly introduced as part of educational technology. The EdTech industry is <a href="https://www.pwc.com.au/government/government-matters/education-tech-edtech-revolutionise-education-institutions.html">expanding rapidly in Australia</a>. Companies are seeking to dominate all stages of education: enrolment, learning design, learning experience and lifelong learning. </p>
<p>Alongside these developments, COVID-19 has accelerated the use of algorithmic decision-making in education and beyond. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/artificial-intelligence-holds-great-potential-for-both-students-and-teachers-but-only-if-used-wisely-81024">Artificial intelligence holds great potential for both students and teachers – but only if used wisely</a>
</strong>
</em>
</p>
<hr>
<p>While these innovations open up amazing possibilities, algorithms also bring with them a set of challenges we must face as a society. Examples like the UK exam algorithm expose us to how such algorithms work and the kinds of decisions that have to be made when designing them. We are then forced to answer deep questions of which values we will choose to prioritise and what <a href="https://www.nuffieldfoundation.org/wp-content/uploads/2019/12/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf">roadmap for research</a> we take forward. </p>
<p>Our choices will shape our future and the future of generations to come. </p>
<hr>
<p><em>The following people were also involved in the research underpinning the algorithm game. From the <a href="https://gradientinstitute.org">Gradient Institute</a> for responsible AI, Simon O'Callaghan, Alistair Reid and Tiberio Caetano. And from the <a href="https://www.techforsocialgood.org">Tech for Social Good</a> group, Vincent Zhang.</em></p>
<p class="fine-print"><em><span>Kalervo Gulson receives funding from the Australian Research Council that supported this research.</span></em></p><p class="fine-print"><em><span>Claire Benn, Kirsty Kitto, Simon Knight, and Teresa Swist do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A UK controversy about school leavers’ marks shows algorithms can get things wrong. To ensure algorithms are as fair as possible, how they work and the trade-offs involved must be made clear.Kalervo Gulson, Professor and ARC Future Fellow, Education & Social Work, Education Futures Studio, University of SydneyClaire Benn, Research Fellow, Humanising Machine Intelligence Grand Challenge, Australian National UniversityKirsty Kitto, Associate Professor in Data Science, University of Technology SydneySimon Knight, Senior Lecturer and Director, Centre for Research on Education in a Digital Society, University of Technology SydneyTeresa Swist, Postdoctoral Research Associate, Education Futures Studio, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1660302021-08-23T03:00:03Z2021-08-23T03:00:03Z3 ways ‘algorithmic management’ makes work more stressful and less satisfying<figure><img src="https://images.theconversation.com/files/417115/original/file-20210819-23-17n7ykj.jpg?ixlib=rb-1.1.0&rect=0%2C247%2C3840%2C1908&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>If you think your manager treats you unfairly, the thought might have crossed your mind that replacing said boss with an unbiased machine that rewards performance based on objective data is a path to workplace happiness.</p>
<p>But as appealing as that may sound, you’d be wrong. Our <a href="https://www.sciencedirect.com/science/article/pii/S1053482221000176">review of 45 studies</a> on machines as managers shows we hate being slaves to algorithms (perhaps even more than we hate being slaves to annoying people).</p>
<p>Algorithmic management — in which decisions about assigning tasks to workers are automated — is most often associated with the gig economy. </p>
<p>Platforms such as Uber were built on technology that used real-time data collection and surveillance, ratings systems and “nudges” to manage workers. Amazon has been another enthusiastic adopter, using software and surveillance to direct human workers in its massive warehouses.</p>
<p>As algorithms become ever more sophisticated, we’re seeing them in more workplaces, taking over tasks once the province of human bosses.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">Algorithms workers can't see are increasingly pulling the management strings</a>
</strong>
</em>
</p>
<hr>
<p>To get a better sense of what this will mean for the quality of people’s work and well-being, we <a href="https://www.sciencedirect.com/science/article/pii/S1053482221000176">analysed published research studies</a> from across the world that have investigated the impact of algorithmic management on work. </p>
<p>We identified six management functions that algorithms are currently able to perform: monitoring, goal setting, performance management, scheduling, compensation, and job termination. We then looked at how these affected workers, drawing on decades of psychological research showing what aspects of work are important to people.</p>
<p>Just four of the 45 studies showed mixed effects on work (some positive and some negative). The rest highlighted consistently negative effects on workers. In this article we’re going to look at three main impacts:</p>
<ul>
<li>Less task variety and skill use</li>
<li>Reduced job autonomy</li>
<li>Greater uncertainty and insecurity</li>
</ul>
<h2>1. Reduced task variety and skill use</h2>
<p>A great example of the way algorithmic management can reduce task variety and skill use is demonstrated by a <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/ntwe.12087">2017 study</a> on the use of electronic monitoring to pay British nurses providing home care to elderly and disabled people.</p>
<p>The system under which the nurses worked was meant to improve their efficiency. They had to use an app to “tag” their care activities. They were paid only for the tasks that could be tagged. Nothing else was recognised. The result was they focused on the urgent and technical care tasks — such as changing bandages or giving medication — and gave up spending time talking to their patients. This reduced both the quality of care as well as the nurses’ sense of doing significant and worthwhile work.</p>
<p>Research <a href="https://journals.sagepub.com/doi/full/10.1177/0149206319869435">suggests</a> increasing use of algorithms to monitor and manage workers will reduce task variety and skill use. Call centres, for example, already use technology <a href="https://www.wired.com/story/this-call-may-be-monitored-for-tone-and-emotion/">to assess a customer’s mood</a> and instruct the call centre worker on exactly how to respond, from what emotions they should display to how fast they should speak.</p>
<h2>2. Reduced job autonomy</h2>
<p>Gig workers speak of the “fallacy of autonomy” that arises from the apparent ability to choose when and how long they work, when the reality is that platform algorithms use things like acceptance rates to calculate performance scores and to determine future assignments.</p>
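<p>A minimal sketch of how such scoring can work – the formula, weights and field names here are assumptions for illustration, not any platform’s actual system – makes the hidden cost of declining work visible:</p>
<pre><code>
# Illustrative only: a performance score built partly from acceptance rate,
# then used to decide who is offered the next job.

def performance_score(offered: int, accepted: int, rating: float) -> float:
    acceptance_rate = accepted / offered if offered else 0.0
    # Turning down work lowers the score, so "choosing your own hours" carries a hidden cost.
    return 0.5 * acceptance_rate + 0.5 * (rating / 5.0)

def pick_worker(workers: list[dict]) -> dict:
    # Assumed rule: the highest-scoring available worker is offered the task first.
    available = [w for w in workers if w["available"]]
    return max(available, key=lambda w: performance_score(w["offered"], w["accepted"], w["rating"]))
</code></pre>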
<p>This loss of general autonomy is underlined by a <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/ntwe.12102">2019 study that interviewed 30 gig workers</a> using the “piecework” platforms Amazon Mechanical Turk, MobileWorks and CloudFactory. In theory workers could choose how long they worked. In practice they felt they needed to constantly be on call to secure the best paying tasks. </p>
<p>This isn’t just the experience of gig workers. A <a href="https://www.tandfonline.com/doi/full/10.1080/01972243.2015.998105">detailed 2013 study of the US truck driving industry</a> showed the downside of algorithms dictating what routes drivers should take, and when they should stop, based on weather and traffic conditions. As one driver in the study put it: “A computer does not know when we are tired, fatigued, or anything else […] I am also a professional and I do not need a [computer] telling me when to stop driving.” </p>
<h2>3. Increased intensity and insecurity</h2>
<p>Algorithmic management can heighten work intensity in a number of ways. It can dictate the pace directly, as with Amazon’s use of timers for “pickers” in its fulfilment centres. </p>
<p>But perhaps more pernicious is its ability to ramp up the work pressure indirectly. Workers who don’t really understand how an algorithm makes its decisions feel more uncertain and insecure about their performance. They worry about every aspect of their work that might affect how the machine rates and ranks them. </p>
<p>For example, in a <a href="https://journals.sagepub.com/doi/10.1177/0950017020969593">2020 study</a> of the experience of 25 food couriers in Edinburgh, the riders spoke about feeling anxious and being “on edge” to accept and complete jobs lest their performance statistics be affected. This led them to take risks such as riding through red lights or through busy traffic in heavy rain. They felt pressure to take all assignments and complete them as quickly as possible so as to be assigned more jobs. </p>
<h2>Avoiding a tsunami of unhealthy work</h2>
<p>The overwhelming extent to which studies show negative psychological outcomes from algorithmic management suggests we face a tsunami of unhealthy work as the use of such technology accelerates.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/worker-protection-laws-arent-ready-for-an-automated-future-119051">Worker-protection laws aren't ready for an automated future</a>
</strong>
</em>
</p>
<hr>
<p>Currently the design and use of algorithmic management systems is driven by “efficiency” for the employer. A more considered approach is needed to ensure these systems can coexist with dignified, meaningful work. </p>
<p>Transparency and accountability are key to ensuring workers (and their representatives) understand what is being monitored, and why, and that they can appeal the decisions these systems make to a higher, human, power.</p>
<p class="fine-print"><em><span>Sharon Kaye Parker receives funding from Australian Research Council.</span></em></p><p class="fine-print"><em><span><a href="mailto:xavier.parent-rocheleau@hec.ca">xavier.parent-rocheleau@hec.ca</a> receives funding from Social Sciences and Humanities Research Council of Canada. </span></em></p>Our review of 45 studies on machines as managers shows we generally hate being slaves to algorithms.Sharon Kaye Parker, Australian Research Council Laureate Fellow, Curtin UniversityXavier Parent-Rocheleau, Professor, HEC MontréalLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1611732021-05-19T04:30:58Z2021-05-19T04:30:58ZAn employee, not a contractor: unfair dismissal ruling against Deliveroo is a big deal for Australia’s gig workers<figure><img src="https://images.theconversation.com/files/401498/original/file-20210519-21-1an1yis.jpg?ixlib=rb-1.1.0&rect=0%2C808%2C5000%2C2514&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">CatwalkPhotos/Shutterstock</span></span></figcaption></figure><p>The ruling by Australia’s Fair Work Commission that online food delivery platform Deliveroo unfairly dismissed rider Diego Franco marks a major shift in the Australian “gig” economy. </p>
<p>What’s most significant is the commission has ruled Franco was, in fact, an employee of Deliveroo, not an independent contractor – the legal strategy platform companies use to avoid employer obligations and shirk paying employee entitlements such as a minimum wage and leave.</p>
<p>With Deliveroo’s rival Menulog having committed in April to trial an employment model for its <a href="https://theconversation.com/did-somebody-say-workers-rights-three-big-questions-about-menulogs-employment-plan-158942">riders working in Sydney’s CBD</a>, the ruling further swings the pendulum towards employment rights for “gig workers”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/did-somebody-say-workers-rights-three-big-questions-about-menulogs-employment-plan-158942">Did somebody say workers' rights? Three big questions about Menulog's employment plan</a>
</strong>
</em>
</p>
<hr>
<h2>How did we get to this important decision?</h2>
<p>Franco took his case to the Fair Work Commission (Australia’s industrial relations tribunal) after Deliveroo terminated his account in April 2020. He had delivered food in Sydney for the platform since April 2017. Deliveroo’s reason for termination was he had delivered food orders too slowly. </p>
<p>With support from the Transport Workers Union, Franco lodged an unfair dismissal claim. Deliveroo’s lawyers sought to thwart the claim on the basis that he was a contractor. Only if he was an employee could the commission rule on whether he had been unfairly dismissed. </p>
<p>So Franco’s case required him first to demonstrate he was not a contractor but an employee. Then he had to show Deliveroo had unfairly dismissed him.</p>
<p>Commissioner Ian Cambridge’s ruling in Franco’s favour on both these points was based on a full analysis of the workplace controls that platforms <a href="https://journals.sagepub.com/doi/pdf/10.1177/0950017019836911">can exercise over workers</a>. That analysis was more extensive than previous Fair Work Commission decisions – including in <a href="https://www.fwc.gov.au/documents/decisionssigned/html/2017fwc6610.htm">December 2017</a>, <a href="https://www.fwc.gov.au/documents/decisionssigned/html/2018fwc2579.htm">May 2018</a>, <a href="https://www.fwc.gov.au/documents/decisionssigned/html/2019fwc4807.htm">July 2019</a> and <a href="https://www.fwc.gov.au/documents/decisionssigned/html/pdf/2020fwcfb1698.pdf">April 2020</a> – that had ruled delivery drivers and riders need not be treated as employees. </p>
<h2>Exercising control</h2>
<p>The Fair Work Commission decides whether a worker should be classified as an employee or contractor by looking at multiple factors to form an overall picture (or “smell test”) of the work relationship. A key indicator is “control”. </p>
<p>In this case, Commissioner Cambridge considered both the level of control Deliveroo exercised over Franco and also its “capacity” to exercise control. This went further than previous decisions. </p>
<p>Deliveroo, like other platforms, uses algorithms in <a href="https://journals.sagepub.com/doi/pdf/10.1177/0308518X20914346">managing its workforce</a>. While its lawyers argued its algorithms did not use performance data to allocate work, it did use performance data in deciding to terminate Franco’s account. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">Algorithms workers can't see are increasingly pulling the management strings</a>
</strong>
</em>
</p>
<hr>
<p>This highlighted how such apps collect data that could be used to control workers. </p>
<p><a href="https://www.fwc.gov.au/documents/decisionssigned/html/2020fwcfb1698.htm">Previous commission rulings</a> against gig workers being classified as employees have found couriers have considerable control over their work because they can, for instance, decide where and when to make themselves available for deliveries. Francos’ case, however, showed the “economic reality” of the circumstances significantly constrained his autonomy. </p>
<p>On that basis, Commissioner Cambridge ruled Deliveroo had significant actual, or potential, control over how work was performed, when work was done and who received work. This suggested the platform acted like an employer.</p>
<h2>Multi-apping not a barrier</h2>
<p>Another key aspect of the ruling was rejecting Deliveroo’s argument Franco could not be an employee because he was working for other platforms at the same time. This practice, known as “multi-apping”, is common in the gig economy due to the struggle of earning enough money just working for one platform. </p>
<p>Commissioner Cambridge ruled <a href="https://theconversation.com/did-somebody-say-workers-rights-three-big-questions-about-menulogs-employment-plan-158942">multi-apping</a> was merely “an example of the phenomenon of change that new technology is bringing to the traditional arrangements for employment”. </p>
<p>The overall picture, he said, was that Franco “was not carrying on a trade or business of his own, or on his own behalf. Instead, he was working in Deliveroo’s business as part of that business.” In short, he was an employee. </p>
<h2>Callous conduct</h2>
<p>Commissioner Cambridge was also highly critical of Deliveroo terminating Franco’s contract via email, describing this conduct as “callous”. </p>
<p>Whether Franco was a contractor or not, <a href="https://www.fwc.gov.au/documents/decisionssigned/html/2021fwc2818.htm#P441_86136">he said</a>, “basic human dignity requires that a matter of such gravity should be conveyed personally”.</p>
<p>He also criticised the company for not informing Franco of the expected performance standards, for not giving Franco adequate warning about the consequence of slow deliveries, and not affording Franco procedural justice such as giving him an opportunity to respond before being terminated. </p>
<p>This meant Franco, an employee, had been unfairly dismissed and was to be reinstated.</p>
<h2>What are the wider consequences?</h2>
<p>This case highlights the need for properly designed and implemented human and algorithmic management processes to ensure a <a href="https://journals.sagepub.com/doi/pdf/10.1177/0022185618817069">level of job quality</a> even in the context of digitally intermediated “gig” work.</p>
<p>It goes further than that, however. </p>
<p>Ruling that Franco was an employee and not a contractor reflects a broader global trend. More and more gig workers are pushing back against the contractual arrangements of gig-economy platforms – and courts are agreeing.</p>
<p>In recent months there have been several major decisions. In March, the <a href="https://theconversation.com/a-new-deal-for-uber-drivers-in-uk-but-australias-gig-workers-must-wait-157597">UK Supreme Court ruled</a> two Uber drivers were workers (a different classification to being an employee, but with more rights than an independent contractor). </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-new-deal-for-uber-drivers-in-uk-but-australias-gig-workers-must-wait-157597">A new deal for Uber drivers in UK, but Australia's ‘gig workers' must wait</a>
</strong>
</em>
</p>
<hr>
<p>In February, Deliveroo lost an appeal against a <a href="https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:GHAMS:2021:392">Netherlands ruling</a> its couriers are employees.</p>
<p>Although Deliveroo is likely to appeal the Fair Work Commission ruling, this case is another sign the thin ice on which “gig” platforms have been skating for years is cracking.</p>
<p class="fine-print"><em><span>Alex Veen is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project. He further receives funding from the Australian Research Council in the form a Discovery Early Career Researcher Award (DECRA) for his project entitled 'Algorithmic management and the future of work: lessons from the gig economy.'</span></em></p><p class="fine-print"><em><span>Caleb Goods is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p><p class="fine-print"><em><span>Rick Sullivan is supervised by Alex Veen at the University of Sydney Business School. He receives a stipend attached to Alex Veen's Australian Research Council in the form a Discovery Early Career Researcher Award (DECRA) project entitled 'Algorithmic management and the future of work: lessons from the gig economy.'</span></em></p><p class="fine-print"><em><span>Tom Barratt is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p>The Fair Work Commission’s ruling that delivery rider Diego Franco was an employee of Deliveroo is a major legal win for Australia’s gig workers.Alex Veen, Lecturer and DECRA Fellow, University of SydneyCaleb Goods, Senior Lecturer - Management and Organisations, UWA Business School, The University of Western AustraliaRick Sullivan, PhD candidate, University of SydneyTom Barratt, Lecturer, Centre for Work + Wellbeing, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1595312021-04-23T06:18:20Z2021-04-23T06:18:20Z‘They track our every move’: why the cards were stacked against a union at Amazon<p>“Working at an Amazon warehouse is no easy thing. The shifts are long. The pace is super-fast. You are constantly being watched and monitored. They seem to think you are just another machine.”</p>
<p>So testified Jennifer Bates before a US Senate Budget Committee hearing into income and wealth inequality <a href="https://www.budget.senate.gov/download/jennifer-bates-testimony">on March 17</a>. Less than a month later her co-workers at Amazon’s fulfilment centre in Bessemer, Alabama, voted 1,798 to 738 against allowing the Retail, Wholesale and Department Stores Union into their workplace to represent them. </p>
<p>That might seem a surprising outcome. But Bates’s testimony hinted at how the odds were stacked against workers voting contrary to the wishes of the company – which was heavily anti-union: </p>
<blockquote>
<p>They track our every move – if your computer isn’t scanning, you get charged with being “time off task”. From the onset I learned that if I worked too slow or had too much time off task, I could be disciplined or even fired.</p>
</blockquote>
<p>That is, employees at Amazon knew they were under constant surveillance, and also that the company had a history of sacking those who were pro-union.</p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/2UfsYEjoeVc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Jennifer Bates testifies before the Senate committee on March 17 2021.</span></figcaption>
</figure>
<hr>
<h2>A very high barrier</h2>
<p>Bates, 48, got a job at the Bessemer facility in May 2020, two months after it opened. The warehouse is the size of 16 American football fields. About 5,800 people are employed there. </p>
<p>Shocked by the conditions, she and a few other coworkers finally contacted the union. From this grew the campaign to become the first Amazon workplace in the US to unionise. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/whats-at-stake-in-amazons-bessemer-alabama-union-vote-5-questions-answered-157498">What's at stake in Amazon's Bessemer, Alabama, union vote: 5 questions answered</a>
</strong>
</em>
</p>
<hr>
<p>The campaign ended in a resounding defeat for the union. But that’s because the US has unusual unionisation laws compared with most other industrialised and democratic nations.</p>
<p>American law requires that more than 50% of workers at a workplace vote for a union for it to be recognised as their bargaining agent.</p>
<p>In other countries workers decide for themselves if they want to join a union. Nor are unions prevented from bargaining for members if they represent less than half the workforce. </p>
<p>In Italy, for example, union laws required <a href="https://theconversation.com/tech-innovators-start-to-see-old-fashioned-benefits-of-collective-bargaining-100164">Amazon to negotiate</a> with the union (FILCAMS CGIL) regardless of the number of members at the company’s warehouse. In Australia, a union has the right to bargain with just a single member in a workplace. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Inside an Amazon fulfilment centre in Chattanooga, Tennessee, August 2017." src="https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/396713/original/file-20210423-16-dp8yfn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An Amazon fulfilment centre in Chattanooga, Tennessee, August 2017.</span>
<span class="attribution"><span class="source">Doug Strickland/Chattanooga Times Free Press/AP</span></span>
</figcaption>
</figure>
<p>The “all-in” union law dates from 1935 when President Franklin Roosevelt created the National Labor Relations Board as an umpire to oversee collective bargaining efforts. It wasn’t meant to impede unions, but over time employers have used their resources and advantages to turn it into a barrier to unions.</p>
<p>In Bessemer, blocking union officials from coming on site was just one of Amazon’s advantages. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&rect=20%2C0%2C2000%2C1122&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=434&fit=crop&dpr=1 600w, https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=434&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=434&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=546&fit=crop&dpr=1 754w, https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=546&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/396680/original/file-20210423-13-vcnw6v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=546&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Michael Foster, of the Retail, Wholesale and Department Store Union, campaigns for the union outside Amazon’s warehouse in Bessemer, Alabama, on March 29 2021.</span>
<span class="attribution"><span class="source">Jay Reeves/AP</span></span>
</figcaption>
</figure>
<p>Internally the company ran an incessant anti-union campaign. Anti-union messages were sent to workers’ mobile phones and were even fixed to the back of toilet stalls. Jennifer Bates again:</p>
<blockquote>
<p>We were forced into what they call ‘union education’ meetings. We had no choice but to attend them, not given an opportunity to decline […] They would last for as much as an hour, and we’d have to go sometimes several times a week.</p>
</blockquote>
<p>The company also has a long history of surveilling its workers. Any employee expressing union sympathies on social media risked being identified by the “spooks” the company pays to <a href="https://www.cnbc.com/2020/10/24/how-amazon-prevents-unions-by-surveilling-employee-activism.html">monitor their activities outside of work</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=445&fit=crop&dpr=1 600w, https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=445&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=445&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=559&fit=crop&dpr=1 754w, https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=559&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/396720/original/file-20210423-18-y1oua9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=559&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Amazon fired Emily Cunningham, left, and Kathryn Dellinger in 2020 for publicly criticising its lack of action on climate change and not doing enough to protect warehouse workers from COVID-19. The National Labor Relations Board has ruled their firing was illegal.</span>
<span class="attribution"><span class="source">Ted S. Warren/AP</span></span>
</figcaption>
</figure>
<h2>Political solution</h2>
<p>In the early 1980s, 20% of American workers were union members; now <a href="https://www.bls.gov/news.release/pdf/union2.pdf">just 11% are</a>. But the percentage of non-unionised workers who would vote to join one if they could has grown <a href="https://iwer.mit.edu/2018/08/30/who-wants-to-join-a-union-a-growing-number-of-americans/">from 33% to 48%</a>. </p>
<p>That means unmet demand for union representation is growing. There is a clear need for law reform to address the widening gap between the number of workers who want to be in a union and the number who actually are.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">Algorithms workers can't see are increasingly pulling the management strings</a>
</strong>
</em>
</p>
<hr>
<p>US President Joe Biden has signalled support for such reform. On March 1, his official Twitter account said of Amazon’s Alabama workers voting to unionise: </p>
<blockquote>
<p>It’s a vitally important choice – one that should be made without intimidation or threats by employers. Every worker should have a free and fair choice to join a union.</p>
</blockquote>
<hr>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Joe Biden tweet: 'Every worker should have a free and fair choice to join a union.'" src="https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=173&fit=crop&dpr=1 600w, https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=173&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=173&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=217&fit=crop&dpr=1 754w, https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=217&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/396694/original/file-20210423-13-1btlr2n.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=217&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://twitter.com/POTUS/status/1366191901196644354">Twitter</a></span>
</figcaption>
</figure>
<hr>
<p>A bill that would help level the playing field is awaiting a vote in the US Senate. Co-sponsored by Democratic Representative Robert Cortez Scott (Virginia) and Senator Patty Murray (Washington), the Protecting the Right to Organize (PRO) Act would:</p>
<ul>
<li><p>allow unions to collect bargaining fees from non-members, thereby avoiding the need to win a majority in a workplace election in order to bargain</p></li>
<li><p>ban company-sponsored meetings to discourage union organising</p></li>
<li><p>prevent retaliatory firing of union activists.</p></li>
</ul>
<p>If the bill passes the Senate, Biden has said he will sign it into law. </p>
<p>Biden also has the ability to alter union election regulations at the US Department of Labor once its new leadership is confirmed. On the campaign trail, he promised that employers overstepping the mark in opposing unions (as <a href="https://www.epi.org/publication/unlawful-employer-opposition-to-union-election-campaigns/">nearly half do</a>) would be pursued and punished with multi-year debarment from receiving federal contracts.</p>
<p>These changes could mark a turning point for the union movement – and the millions of workers who, like Jennifer Bates, think there should be another way.</p><img src="https://counter.theconversation.com/content/159531/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Walker is an official of the Shop, Distributive & Allied Employees' Association</span></em></p>Employees at Amazon knew they were under constant surveillance and that the company had a history of sacking those who were pro-union.Michael Walker, Adjunct Fellow, Centre for Workforce Futures, Macquarie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1589422021-04-15T01:54:22Z2021-04-15T01:54:22ZDid somebody say workers’ rights? Three big questions about Menulog’s employment plan<p>Menulog, Australia’s second-largest food ordering and delivery platform, has declared it will break with the standard “gig platform” business model and engage some of its couriers as employees, not independent contractors.</p>
<p>“<a href="https://www.smh.com.au/politics/federal/we-owe-it-to-our-couriers-menulog-trials-employee-rights-for-workers-20210412-p57iin.html">We owe it to our couriers,</a>” Menulog’s managing director Morten Belling told the Senate Select Committee <a href="https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Job_Security/JobSecurity">inquiry into job security</a> this week. The inquiry is investigating the scope of insecure or precarious employment in Australia.</p>
<p>The Transport Workers’ Union says Menulog’s move is a “<a href="https://www.news.com.au/finance/work/at-work/menulog-announces-trial-to-improve-pay-for-delivery-riders-union-applauds-move-as-watershed-moment/news-story/6a564847772b761cd16798edc7231987">watershed moment for the gig economy</a>”. By committing to pay couriers a minimum wage and superannuation, it is going further than its competitors such as <a href="https://journals.sagepub.com/doi/full/10.1177/0022185618817069">UberEats and Deliveroo</a>.</p>
<p>But let’s not get too excited yet. </p>
<p>What Menulog has announced is just a pilot program, offering employment to some couriers in Sydney’s CBD. How much of an overall benefit it makes even to those workers will depend on the details. </p>
<p>Work can still be insecure and poorly paid even when a worker is “employed”. Just ask many casual employees in the <a href="https://doi.org/10.1177/0143831X18765247">hospitality</a>, horticulture and retail sectors.</p>
<h2>Accepting greater responsibility</h2>
<p>To give Menulog credit, the company isn’t legally obliged to make this change. </p>
<p>The prevailing independent contractor model, paying workers “piece” rates with no benefits such as superannuation and paid leave, is controversial yet thus far legal – even though it means many <a href="https://www.abc.net.au/news/2021-04-12/uber-eats-drivers-earn-less-than-minimum-wage-inquiry/100062900">earn less than the minimum wage</a>.</p>
<p>In engaging couriers as employees, Menulog is accepting greater responsibility for their welfare. Things like insurance and workers compensation become straightforward. As contractors, already lowly paid workers are often responsible for their own insurance. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/workers-compensation-doesnt-cover-gig-workers-heres-a-way-to-protect-them-99946">Workers' compensation doesn't cover gig workers – here's a way to protect them</a>
</strong>
</em>
</p>
<hr>
<p>These are vitally important issues given the risks involved in courier work. Last year <a href="https://www.smh.com.au/national/nsw/fifth-food-delivery-rider-dies-following-truck-crash-in-central-sydney-20201123-p56h9y.html">five delivery riders were killed</a> in traffic accidents. Though none were Menulog couriers, Belling mentioned this as a key driver for the company’s change. </p>
<p>The shift to an employment model should also result in greater income certainty for workers. But to what extent they will be better off depends on at least three important details. </p>
<h2>1. What’s the award?</h2>
<p>The first question is what <a href="https://www.fwc.gov.au/awards-and-agreements/awards/modern-awards">modern award</a> – the document that sets minimum terms and conditions of employment within a specific industry or occupation – will couriers be employed under. </p>
<p>According to Menulog there are “<a href="https://mashable.com/article/menulog-gig-contractor-employee-australia/">a number of challenges</a>” in moving to an employment model, with Australia’s award system cited as a potential barrier. </p>
<p>The award now covering couriers is the <a href="http://awardviewer.fwo.gov.au/award/show/MA000038">Road Transport and Distribution Award 2020</a>. Menulog has indicated it wants delivery workers to be covered by a new award, and intends to consult with the union to create it. </p>
<p>It hasn’t spelt out what the specific “challenges” in the existing award are – employer groups often talk in generalities about a <a href="https://www.aigroup.com.au/policy-and-research/mediacentre/releases/workplace-flexibility-22Apr/">lack of flexibility</a> – but it may include removing minimum engagement periods.</p>
<p>Under the existing award – as with others – a casual employee must be paid for a minimum four-hour shift. Minimum engagement periods are important for giving workers some certainty as to how much they will earn when asked to work. In contrast, an independent contractor can be engaged for a one-off delivery that may only be for a few minutes to earn a few dollars. </p>
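<p>To make the difference concrete, here is a minimal sketch comparing pay under the two models. The hourly rate and per-delivery fee below are hypothetical placeholders, not figures from the award or from Menulog.</p>
<pre><code># Illustrative only: hypothetical rates, not the award's or Menulog's actual figures.

CASUAL_HOURLY_RATE = 26.00   # assumed casual hourly rate, $/hour (hypothetical)
MIN_ENGAGEMENT_HOURS = 4     # minimum paid shift length under the award
PER_DELIVERY_FEE = 9.00      # assumed contractor piece rate, $/delivery (hypothetical)

def casual_shift_pay(hours_worked):
    """Casual employee: paid for at least the minimum engagement period."""
    paid_hours = max(hours_worked, MIN_ENGAGEMENT_HOURS)
    return paid_hours * CASUAL_HOURLY_RATE

def contractor_pay(deliveries):
    """Independent contractor: paid per delivery, however short the engagement."""
    return deliveries * PER_DELIVERY_FEE

# A worker called on for a single 20-minute job:
print(casual_shift_pay(hours_worked=0.33))  # paid for the full 4-hour minimum
print(contractor_pay(deliveries=1))         # paid for one delivery only
</code></pre>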
<p>The award also stipulates penalty rates and allowances for unsociable hours or days (such as public holidays).</p>
<p>If Menulog’s move involves eroding fundamental award principles about minimum hours and payments, its couriers could find “employment” isn’t much better than their current conditions. </p>
<h2>2. Does every worker get to be an employee?</h2>
<p>Even given the limited scope of the trial (Menulog operates throughout Australia and New Zealand, while its parent company Just Eat Takeaway operates in <a href="https://www.justeattakeaway.com/about-us/our-markets/">23 countries</a>) it is unclear if the platform plans to make all couriers working in Sydney’s CBD employees.</p>
<hr>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=360&fit=crop&dpr=1 600w, https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=360&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=360&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=453&fit=crop&dpr=1 754w, https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=453&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/394942/original/file-20210414-15-kn9lm2.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=453&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Menulog’s parent company Just Eat Takeaway has operations in 21 countries and partnerships in two others (Brazil and Colombia).</span>
<span class="attribution"><a class="source" href="https://www.justeattakeaway.com/about-us/our-markets/">www.justeattakeaway.com</a></span>
</figcaption>
</figure>
<hr>
<p>Or will it end up with a two-tier system, where some couriers are engaged as employees and others remain contractors? If this is the case, it could make contract work even more precarious. </p>
<p>It’s important to know who gets to be an employee and why. This should be transparent. Platform companies are notorious for their “black-box” <a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">algorithmic management</a>. Their algorithms now effectively make <a href="https://journals.sagepub.com/doi/full/10.1177/0950017019836911">workers compete with each other for gigs</a>. A system that makes them compete for the chance to be rewarded with the badge of “employee” is hardly much better. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">Algorithms workers can't see are increasingly pulling the management strings</a>
</strong>
</em>
</p>
<hr>
<h2>3. How to deal with multi-apping?</h2>
<p>It is a feature of the gig economy that couriers often work on multiple apps at the same time to try and win more gigs – a practice known as “<a href="https://journals.sagepub.com/doi/full/10.1177/0308518X20914346">multi-apping</a>”. </p>
<p>If they become Menulog employees, will they have to forego this right? Will they be allowed to earn money through other platforms during times when they’re not employed? </p>
<p>Again, these details will need to be worked out. The answer will have ramifications across the food delivery industry. </p>
<h2>Finally, are customers willing to pay?</h2>
<p>Menulog’s announcement has been welcomed by unions, including the Australian Council of Trade Unions’ <a href="https://www.actu.org.au/actu-media/media-releases/2021/unions-welcome-menulog-decision-to-change-their-business-model-to-give-employees-rights">head Sally McManus</a>. But the details that remain unclear are fundamentally important.</p>
<p>This trial may mark a major shift in this part of the “gig economy”. The head of Just Eat Takeaway, Jitse Groen, said last year he would rather his workers get more <a href="https://www.bbc.com/news/business-53780299">protections and benefits</a>. Belling told the Senate inquiry that treating couriers as employees “may cost us more, but it’s the right thing to do”. </p>
<p>But how much more Menulog is prepared to pay also depends on how much more customers are <a href="https://doi.org/10.1016/j.jocm.2020.100254">willing to pay</a>.</p>
<p>It is important for the gig economy as a whole that Menulog get this right in Australia. That will depend on the answers to the above questions.</p><img src="https://counter.theconversation.com/content/158942/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tom Barratt is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p><p class="fine-print"><em><span>Alex Veen is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project. </span></em></p><p class="fine-print"><em><span>Caleb Goods is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p>Food-ordering platform Menulog has declared it will break with the standard contractor business model. But let’s not get too excited yet.Tom Barratt, Lecturer, Centre for Work + Wellbeing, Edith Cowan UniversityAlex Veen, Lecturer and DECRA Fellow, University of SydneyCaleb Goods, Senior Lecturer - Management and Organisations, UWA Business School, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1517682020-12-10T06:02:19Z2020-12-10T06:02:19ZUpheaval at Google signals pushback against biased algorithms and unaccountable AI<figure><img src="https://images.theconversation.com/files/374138/original/file-20201210-13-3whyn5.jpg?ixlib=rb-1.1.0&rect=29%2C9%2C3240%2C2143&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/techcrunch/30671211838/">TechCrunch</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Artificial intelligence (AI) is no longer the stuff of science fiction. In the form of machine learning tools and decision-making algorithms, it’s all around us. AI determines what news you get served up on the internet. It plays a key role in online matchmaking, which is now the way <a href="https://www.statista.com/chart/20822/way-of-meeting-partner-heterosexual-us-couples/">most romantic couples get together</a>. It will tell you how to get to your next meeting, and what time to leave home so you’re not late. </p>
<p>AI often appears both omniscient and neutral, but on closer inspection we find AI learns from and adopts human biases. As a result, algorithms replicate familiar forms of discrimination but hide them in a “black box” that makes seemingly objective decisions. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">Algorithms workers can't see are increasingly pulling the management strings</a>
</strong>
</em>
</p>
<hr>
<p>For many workers, such as delivery drivers, AI has <a href="https://theconversation.com/algorithms-workers-cant-see-are-increasingly-pulling-the-management-strings-144724">replaced human managers</a>. Algorithms tell them what to do, evaluate their performance and decide whether to fire them.</p>
<p>But as the use of AI grows and its drawbacks become more clear, workers in the very companies that make the tools of algorithmic management are beginning to push back.</p>
<h2>Trouble at Google</h2>
<p>One of the most familiar forms of AI is the Google algorithm, and the order in which it presents search results. Google has an <a href="https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/">88% market share</a> of internet searches, and the Google homepage is <a href="https://www.alexa.com/topsites">the most visited page</a> on the entire internet. How it determines its search results is hugely influential but completely opaque to users.</p>
<p>Earlier this month, one of Google’s lead researchers on AI ethics and bias, Timnit Gebru, <a href="https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html">abruptly left</a> the company. Gebru <a href="https://twitter.com/timnitGebru/status/1334352694664957952">says</a> she was fired after an internal email sent to colleagues about racial discrimination and toxic work conditions at Google, while senior management <a href="https://twitter.com/JeffDean/status/1334953632719011840">maintains</a> Gebru resigned over the publication of a research paper. </p>
<p>Gebru’s departure came after she put her name to a <a href="https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/">paper</a> flagging the risk of bias in large language models (the kind used by Google). The paper argued such language models could hurt marginalised communities. </p>
<p>Gebru has <a href="https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212">previously shown</a> that facial recognition technology was highly inaccurate for Black people.</p>
<p>Google’s response rapidly <a href="https://www.cnbc.com/2020/12/08/timnit-gebru-departure-perfect-storm-for-alphabet-ceo-sundar-pichai.html">stirred unrest</a> among Google’s workforce, with many of Gebru’s colleagues <a href="https://www.theverge.com/2020/12/7/22158501/timnit-gebru-team-google-public-statement-fired">supporting</a> her account of events.</p>
<p>Further annoying Gebru’s coworkers and academic sympathisers was the perceived attempt to <a href="https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382">muzzle unwelcome research findings</a>, compromising the perception of any research published by in-house researchers.</p>
<h2>When algorithms make decisions</h2>
<p>Here are a few examples of how algorithms can recycle and reinforce existing prejudices:</p>
<ul>
<li><p>Automated resume-scanning systems have been <a href="https://www.weforum.org/agenda/2019/05/ai-assisted-recruitment-is-biased-heres-how-to-beat-it/">found to discriminate</a> against African-American names, graduates of women’s colleges, and even the word “women” in a job application.</p></li>
<li><p>Credit-scoring AI that can <a href="https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back/">cut people off</a> from public benefits such as health care, unemployment and child support has been found to penalise low-income individuals.</p></li>
<li><p>Misplaced trust in algorithms lay at the heart of Australia’s Robodebt debacle in which the assumption of a regular week-to-week wage packet was baked into the system.</p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-robodebt-to-racism-what-can-go-wrong-when-governments-let-algorithms-make-the-decisions-132594">From robodebt to racism: what can go wrong when governments let algorithms make the decisions</a>
</strong>
</em>
</p>
<hr>
<p>Human systems have checks and balances and higher authorities that can be appealed to when there is an apparent error. Algorithmic decisions often do not. </p>
<p>In <a href="https://www.researchgate.net/publication/346875386_'You_can't_pick_up_a_phone_and_talk_to_someone'_How_algorithms_function_as_biopower_in_the_gig_economy">our research</a>, forthcoming in the journal <em>Organization</em>, my colleagues and I found that this lack of a right of appeal, or even a pathway to appeal, reinforces forms of power and control in workplaces.</p>
<h2>Now what?</h2>
<p>So AI, an influential tool of the world’s largest corporations, appears to systematically disadvantage minorities and economically marginalised people. What can be done? </p>
<p>The protest initiated and led by Google’s own employees may yet bring about change inside the company. Internal discontent at the online giant did get results two years ago, when <a href="https://theconversation.com/the-google-walkout-is-a-watershed-moment-in-21st-century-labour-activism-106353">protest over the kid-glove treatment of executives</a> facing complaints of sexual misconduct led to a change in the company’s policy.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-google-walkout-is-a-watershed-moment-in-21st-century-labour-activism-106353">The Google walkout is a watershed moment in 21st century labour activism</a>
</strong>
</em>
</p>
<hr>
<p>Outsiders are also beginning to take more of an interest. The European Union’s <a href="https://gdpr-info.eu/">General Data Protection Regulation</a> (GDPR), which has boosted privacy standards since 2018, taught regulators around the world that the black box of algorithmic decision-making can indeed be prised open.</p>
<p>The G7 group of leading economies recently set up a <a href="https://gpai.ai/">Global Partnership on Artificial Intelligence</a> to drive discussion around regulatory solutions to these problems, but it is still in its infancy.</p>
<p>As an industrial relations issue, the use of AI in hiring and management needs to be <a href="https://ssrn.com/abstract=3675082">brought into the scope</a> of collective bargaining agreements. Current workplace grievance procedures may allow human decisions to be appealed to a higher authority, but will be inadequate when the decisions are not made by humans – and people in authority may not even know how the AI arrived at its conclusions.</p>
<p>Until internal protests or outside intervention start to impact on the way AI is designed, we will continue to rely on self-regulation. Given the events of the past week, this may not inspire a great deal of confidence.</p><img src="https://counter.theconversation.com/content/151768/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Walker is an official of the Shop, Distributive & Allied Employees' Association</span></em></p>The departure of AI ethics researcher Timnit Gebru from Google highlights attempts to make algorithmic decision-making accountable.Michael Walker, Adjunct Fellow, Macquarie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1447242020-08-24T03:35:36Z2020-08-24T03:35:36ZAlgorithms workers can’t see are increasingly pulling the management strings<figure><img src="https://images.theconversation.com/files/353546/original/file-20200819-42970-1qgkka4.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C5000%2C3315&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in <a href="https://theconversation.com/50-years-old-2001-a-space-odyssey-still-offers-insight-about-the-future-102303">2001 A Space Odyssey</a> has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space. </p>
<p>In the movies, when a machine decides to be the boss – or humans let it – things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality. </p>
<p>Algorithms – sets of instructions to solve a problem or complete a task – now drive everything from browser search results to <a href="https://theconversation.com/medical-ai-can-now-predict-survival-rates-but-its-not-ready-to-unleash-on-patients-127039">better medical care</a>. </p>
<p>They are helping <a href="https://theconversation.com/algorithms-are-designing-better-buildings-140302">design buildings</a>. They are <a href="https://theconversation.com/can-slower-financial-traders-find-a-haven-in-a-world-of-high-speed-algorithms-61055">speeding up trading</a> on financial markets, making and losing fortunes in micro-seconds. They are calculating the most efficient routes for <a href="https://www.ups.com/us/en/services/knowledge-center/article.page?kid=aa3710c2">delivery drivers</a>. </p>
<p>In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”</p>
<p>Giving self‐learning algorithms the responsibility to make and execute decisions affecting workers is called “<a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1748-8583.12258">algorithmic management</a>”. It carries a host of risks in depersonalising management systems and entrenching pre-existing biases. </p>
<p>At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It’s a black-box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.</p>
<h2>Algorithms at work</h2>
<p>Here are a few examples of algorithms already at work.</p>
<p>At Amazon’s fulfilment centre in south-east Melbourne, they set the pace for “pickers”, who have timers on their scanners showing how long they have <a href="https://www.abc.net.au/news/2019-02-27/amazon-australia-warehouse-working-conditions/10807308?nw=0">to find the next item</a>. As soon as they scan that item, the timer resets for the next. All at a “not quite walking, not quite running” speed. </p>
<p>Or how about AI determining your success in a job interview? More than 700 companies <a href="https://theconversation.com/facial-analysis-ai-is-being-used-in-job-interviews-it-will-probably-reinforce-inequality-124790">have trialled such technology</a>. US developer HireVue says its software speeds up the hiring process by 90% by having applicants answer identical questions and then scoring them according to language, tone and facial expressions.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facial-analysis-ai-is-being-used-in-job-interviews-it-will-probably-reinforce-inequality-124790">Facial analysis AI is being used in job interviews – it will probably reinforce inequality</a>
</strong>
</em>
</p>
<hr>
<p>Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be <a href="https://journals.sagepub.com/doi/full/10.1177/2053951718756684">biased</a>. The classic example is the COMPAS software used by US judges, probation and parole officers to rate a person’s risk of reoffending. In 2016 a <a href="https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm">ProPublica investigation</a> showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45% of the time, compared with 23% for white subjects.</p>
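<p>The ProPublica figures are false positive rates: among people who did not go on to reoffend, the share who were nonetheless labelled high risk. A minimal sketch of that calculation, using made-up counts rather than the actual COMPAS data:</p>
<pre><code># Illustrative false-positive-rate calculation with made-up counts,
# not the actual COMPAS dataset.

def false_positive_rate(labelled_high_risk, total_non_reoffenders):
    """Share of non-reoffenders who were wrongly labelled high risk."""
    return labelled_high_risk / total_non_reoffenders

# Hypothetical counts of people who did NOT go on to reoffend:
groups = {
    "group_a": {"labelled_high_risk": 450, "total": 1000},
    "group_b": {"labelled_high_risk": 230, "total": 1000},
}

for name, counts in groups.items():
    rate = false_positive_rate(counts["labelled_high_risk"], counts["total"])
    print(name, round(rate * 100, 1), "% wrongly labelled high risk")
</code></pre>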
<h2>How gig workers cope</h2>
<p>Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinise, or even understand.</p>
<p>Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo and other platforms could not exist without algorithms <a href="https://journals.aom.org/doi/full/10.5465/annals.2018.0174">allocating, monitoring, evaluating and rewarding</a> work.</p>
<p>Over the past year Uber Eats’ <a href="https://www.news.com.au/finance/work/at-work/40-per-cent-drop-overnight-ubereats-bicycle-riders-say-algorithm-change-preferences-motorbikes-and-cars/news-story/ef3d3a0bc8ee9a7374616b5d2c4a67eb">bicycle couriers</a> and <a href="https://www.twu.com.au/press/survey-shows-ubereats-drivers-struggle-with-bankruptcy-homelessness/">drivers</a>, for instance, have blamed unexplained changes to the algorithm for slashing their jobs, and incomes. </p>
<p>Riders can’t be 100% sure it was all down to the algorithm. But that’s part of the problem. The fact those who depend on the algorithm don’t know one way or the other has a powerful influence on them. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/uber-drivers-experience-highlights-the-dead-end-job-prospects-facing-more-australian-workers-116973">Uber drivers' experience highlights the dead-end job prospects facing more Australian workers</a>
</strong>
</em>
</p>
<hr>
<p>This is a key result from our <a href="https://journals.sagepub.com/doi/full/10.1177/0950017019836911">interviews with 58 food-delivery couriers</a>. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.</p>
<p>In response, they developed a range of strategies, often based on guesswork, to “win” more jobs, such as accepting gigs as quickly as possible and waiting in “magic” locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work. </p>
<p>The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the <a href="https://journals.sagepub.com/doi/10.1177/0308518X20914346">power imbalance</a> between management and worker. </p>
<p>Our data also confirmed others’ findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there’s no one to complain to anyway. When Uber Eats bicycle couriers asked for reasons about their plummeting income, for example, responses from the company advised them “we have <a href="https://www.news.com.au/finance/work/at-work/40-per-cent-drop-overnight-ubereats-bicycle-riders-say-algorithm-change-preferences-motorbikes-and-cars/news-story/ef3d3a0bc8ee9a7374616b5d2c4a67eb">no manual control</a> over how many deliveries you receive”.</p>
<h2>Broader lessons</h2>
<p>When algorithmic management operates as a “black box”, one of the consequences is that it can become an <a href="https://journals.sagepub.com/doi/10.1177/0950017019836911">indirect control mechanism</a>. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilise a reliable and scalable workforce while avoiding <a href="https://www.fwc.gov.au/documents/decisionssigned">employer responsibilities</a>.</p>
<p>“The absence of concrete evidence about how the algorithms operate”, the Victorian government’s <a href="https://s3.ap-southeast-2.amazonaws.com/hdp.au.prod.app.vic-engage.files/4915/9469/1146/Report_of_the_Inquiry_into_the_Victorian_On-Demand_Workforce-reduced_size.pdf">inquiry into the “on-demand” workforce</a> notes in its report, “makes it hard for a driver or rider to complain if they feel disadvantaged by one.”</p>
<p>The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.” </p>
<p>But it is precisely the fact it is hard to confirm that’s the problem. How can we start to even identify, let alone resolve, issues like algorithmic management? </p>
<p>Fair conduct standards to ensure transparency and accountability are a start. One example is the <a href="https://fair.work">Fair Work initiative</a>, led by the <a href="https://www.oii.ox.ac.uk/">Oxford Internet Institute</a>. The initiative is bringing together researchers with platforms, workers, unions and regulators to develop global principles for work in the platform economy. This includes “fair management”, which focuses on how transparent the results and outcomes of algorithms are for workers. </p>
<p>Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.</p><img src="https://counter.theconversation.com/content/144724/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tom Barratt is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p><p class="fine-print"><em><span>Alex Veen is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project</span></em></p><p class="fine-print"><em><span>Caleb Goods is part of a research team that received a University of Sydney Business School Industry Partnership grant. Uber Technologies is a Partner Organisation on this grant and provided a minority financial contribution to the project.</span></em></p>Handing management to algorithms creates ‘black-box bosses" whose decision-making is hard to understand or question.Tom Barratt, Lecturer, School of Business and Law, Edith Cowan UniversityAlex Veen, Lecturer (Academic Fellow) in Work and Organisational Studies, University of SydneyCaleb Goods, Lecturer - Management and Organisations, UWA Business School, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1421392020-07-13T13:46:27Z2020-07-13T13:46:27ZFacts or fake news: Revealing patterns in the COVID-19 tweets of Trudeau and Trump<figure><img src="https://images.theconversation.com/files/346736/original/file-20200709-54-2a9ouo.jpg?ixlib=rb-1.1.0&rect=47%2C44%2C1801%2C1158&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Prime Minister Justin Trudeau and President Donald Trump have had different approaches to tweeting during the COVID-19 pandemic. Here the two talk during a NATO session in December 2019.</span> <span class="attribution"><span class="source">THE CANADIAN PRESS/Sean Kilpatrick</span></span></figcaption></figure><p>From its ostensible origins in Wuhan, China, in late 2019, COVID-19 has spread across the globe. There are now a staggering <a href="https://www.who.int/emergencies/diseases/novel-coronavirus-2019">11.5 million cases worldwide, resulting in over half a million deaths</a>. March saw the pandemic’s beginnings in Canada and the United States, followed by widespread lockdowns meant to slow the progression of the virus. </p>
<p>While the number of new daily cases in Canada is declining, U.S. cases have reached record highs. The U.S. represents four per cent of the world’s population, but accounts for <a href="https://www.cnn.com/2020/06/30/health/us-coronavirus-toll-in-numbers-june-trnd/index.html">one-quarter of COVID-19 cases and deaths</a>. As of July 8, 2020, <a href="https://ourworldindata.org/">there were 9,051 cases per million people in the U.S. compared to 2,812 cases per million in Canada</a>. These statistics point to a substantial difference in community spread in the two countries. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=424&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=424&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=424&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346437/original/file-20200708-3991-1tm6xe7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Total confirmed cases of COVID-19 per million people in the U.S. and Canada.</span>
<span class="attribution"><span class="source">(OurWorldInData.org)</span></span>
</figcaption>
</figure>
<p>Twitter provides an online record of political leaders’ policies and personal sentiments. Both Canadian Prime Minister Justin Trudeau and U.S. President Donald Trump often tweet to large numbers of followers. The <a href="https://twitter.com/realDonaldTrump">@realDonaldTrump</a> Twitter account has 82.7 million followers with more than <a href="https://www.trackalytics.com/twitter/profile/realdonaldtrump/">20,000 tweets during Trump’s presidency</a>. The account <a href="https://twitter.com/JustinTrudeau">@JustinTrudeau</a> has five million followers and has <a href="https://www.trackalytics.com/twitter/profile/justintrudeau/">tweeted 18,000 times since Trudeau became prime minister</a>. </p>
<p>There’s a significant difference in how the two leaders have talked about this virus on Twitter. One has focused more on politics, while the other has focused on policy and public health.</p>
<h2>Networks of Twitter keywords</h2>
<p>We conducted a quantitative analysis of themes emerging in Trudeau’s and Trump’s tweets during the COVID-19 pandemic. Our study used network science, which considers systems and their interactions. We formed what are called “co-occurrence networks” based on keywords taken from tweets, with two keywords linked if they appear in the same tweet. For example, if the keywords “covid19” and “pandemic” appear in the same tweet, then they were linked. The monthly top 100 keywords from @JustinTrudeau and @realDonaldTrump were extracted based on their frequency.</p>
<p>To simplify the networks, we removed retweets and common stop words such as “the” and “at.” We created visualizations of the networks to group the keywords into thematically related clusters or communities. We find a higher proportion of links inside communities and a sparser set of links between them. </p>
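<p>As a rough illustration of the method (a sketch, not our actual analysis pipeline), a keyword co-occurrence network of this kind can be built and partitioned into communities with a few lines of Python using the networkx library; the example tweets and stop-word list below are placeholders.</p>
<pre><code># Sketch: build a keyword co-occurrence network and detect communities.
# Example tweets and stop words are placeholders, not the real data.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

STOP_WORDS = {"the", "at", "and", "of", "to", "for", "we", "our"}

tweets = [
    "covid19 pandemic wage subsidy for workers",
    "covid19 testing and frontline workers",
    "wage subsidy supports small business",
]

G = nx.Graph()
for tweet in tweets:
    keywords = [w for w in tweet.lower().split() if w not in STOP_WORDS]
    for a, b in combinations(sorted(set(keywords)), 2):
        # Link two keywords if they appear in the same tweet;
        # the edge weight counts how many tweets they share.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Group keywords into thematically related communities
# (denser links inside a community than between communities).
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print("community", i, ":", sorted(community))
</code></pre>
<p>In a full analysis, node sizes would then be scaled by keyword frequency and each community drawn in its own colour, as in the network figures shown below.</p>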
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345899/original/file-20200706-3980-k5lgnf.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trudeau’s March Twitter keyword network.</span>
</figcaption>
</figure>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345900/original/file-20200706-3975-qec1sb.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Trump’s March Twitter keyword network.</span>
</figcaption>
</figure>
<p>A community-detection algorithm grouped the keywords into clusters. Keywords and links were scaled up or down in size depending on their frequency. Communities of keywords were assigned colours such as blue, green and orange, and more closely related keywords were placed closer together in the network.</p>
<p>Looking back at the first two months of 2020, Trudeau’s and Trump’s tweets were unrelated to COVID-19. Trudeau focused on the shooting down of the passenger plane in Iran that had 57 Canadian citizens on board, followed by protests for the Wet’suwet’en First Nation. Trump focused on his impeachment trial and endorsing candidates in Republican congressional primaries. </p>
<p>In March, the federal government’s response to COVID-19 dominated Trudeau’s Twitter keywords. In contrast, other topics competed for prevalence in Trump’s tweets. These included tweets about fake news (closely situated to “coronavirus” in the keyword network) and perceived unfairness from the Democrats.</p>
<p>Claims of fake news coverage of the severity of the pandemic dominated Trump’s April tweets. Trudeau’s tweets centred on topics such as wage subsidies and appreciation for front-line workers.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345902/original/file-20200706-3967-ptcpfs.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trudeau’s April Twitter keyword network.</span>
</figcaption>
</figure>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345901/original/file-20200706-4013-1optmex.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trump’s April Twitter keyword network.</span>
</figcaption>
</figure>
<p>In May and June, keywords from Trump’s tweets revolved around Obamagate, Republican endorsements and transit funding. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345904/original/file-20200706-29-1vrtvy5.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trump’s May Twitter keyword network.</span>
</figcaption>
</figure>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345905/original/file-20200706-3975-5cxt07.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Trump’s June Twitter keyword network.</span>
</figcaption>
</figure>
<p>Trudeau’s keyword networks for both months were in stark contrast to Trump’s, with keywords related to the virus remaining prevalent.</p>
<h2>What the networks tell us</h2>
<p>The keyword networks from March to June point to divergent messaging on the pandemic by the two leaders, as reflected in their tweets. While both leaders focused on COVID-19 in their March tweets, Trump did increasingly less so over the coming months. His reference to the virus was often through a political lens, with keywords related to the media or Democratic rivals. </p>
<p>For each month we considered, the keywords fell into a small collection of communities, ranging from three to five. These observations are consistent with an <a href="https://theconversation.com/the-math-behind-trumps-tweets-100314">earlier analysis</a> of Trump’s tweets around his election.</p>
<p>Trump famously made comments <a href="https://www.theguardian.com/us-news/2020/mar/28/trump-coronavirus-misleading-claims">downplaying the pandemic</a> in its early days, and made subsequent statements referencing progress controlling the pandemic, <a href="https://www.washingtonpost.com/nation/2020/07/04/coronavirus-update-us/">despite a record number of new cases</a>. The <a href="https://www.washingtonpost.com/national/rushed-reopening-led-to-case-spikes-that-threaten-to-overwhelm-hospitals-in-some-states/2020/07/05/c936bd16-beea-11ea-9fdd-b7ac6b051dc8_story.html">early reopening of U.S. states</a> may have contributed to the rise in cases.</p>
<p>In contrast, Trudeau has stayed consistent in his daily briefings and tweets since lockdowns began in March, highlighting economic recovery programs and providing public health-care information.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345903/original/file-20200706-3958-1l89ph5.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trudeau’s May Twitter keyword network.</span>
</figcaption>
</figure>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345906/original/file-20200706-29-zf7u0u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trudeau’s June Twitter keyword network.</span>
</figcaption>
</figure>
<p>Interestingly, Trudeau’s minority government has been <a href="https://www.cbc.ca/news/politics/grenier-polltracker-26june2020-1.5627260">enjoying a surge in popularity</a>, while polls suggest <a href="https://www.forbes.com/sites/alisondurkee/2020/06/25/americans-disapprove-of-trumps-coronavirus-handling-by-highest-margin-yet-poll-finds/#3dcc4cb44df8">rising disapproval of the Trump administration’s</a> handling of the pandemic. </p>
<p>As COVID-19 becomes part of the new normal, there is greater public awareness of the effectiveness of lockdowns and of the actions needed to curb the spread of the virus, such as social distancing, hand-washing and wearing masks. However, <a href="https://www.marketwatch.com/story/why-do-so-many-americans-refuse-to-wear-face-masks-it-may-have-nothing-to-do-with-politics-2020-06-16">not everyone is willing to comply</a>. </p>
<p>Our network analysis suggests that consistent social media messaging by federal leadership may play a role in influencing views of the pandemic and efforts to contain it. We hope that political leaders with large platforms will use them to amplify the advice of medical professionals and help slow the spread of the virus.</p><img src="https://counter.theconversation.com/content/142139/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anthony Bonato receives funding from NSERC. </span></em></p><p class="fine-print"><em><span>Alex Nazareth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A tale of two leaders on Twitter in the age of COVID-19.Anthony Bonato, Professor of Mathematics, Toronto Metropolitan UniversityAlex Nazareth, MSc Candidate, Applied Math, Toronto Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1224652019-08-29T03:59:14Z2019-08-29T03:59:14ZCryptology from the crypt: how I cracked a 70-year-old coded message from beyond the grave<figure><img src="https://images.theconversation.com/files/289788/original/file-20190828-184222-1l0aytj.jpg?ixlib=rb-1.1.0&rect=15%2C31%2C5273%2C3504&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The American Survival Research Foundation offered a reward of $1,000 for cracking one of Thouless's two codes within three years of his death. It was not claimed. </span> <span class="attribution"><span class="source">Shutterstock.com</span></span></figcaption></figure><p>In recent weeks I managed to decrypt a difficult cipher that, despite expert codebreakers’ best efforts, had remained unsolved for 70 years. </p>
<p>The code was created by the late Cambridge professor and scientist Robert Henry Thouless, who died in 1984. He devised it as a “test of survival”: if, after his death, he could transmit the cipher keywords to the living through spiritual mediums and the message was then decoded, this would be evidence that he had survived death.</p>
<p>In 2019, I was more interested in seeing whether computer speed, storage and networking capabilities had advanced enough to break a code that had outlived its maker. After about five days <a href="http://scienceblogs.de/klausis-krypto-kolumne/2019/08/16/richard-bean-solves-another-top-50-crypto-mystery/">I had my answer</a>. </p>
<p>The cipher text read: </p>
<blockquote>
<p>INXPH CJKGM JIRPR FBCVY WYWES NOECN SCVHE GYRJQ TEBJM TGXAT TWPNH CNYBC FNXPF LFXRV QWQL </p>
</blockquote>
<p>The solution: </p>
<blockquote>
<p>A number of successful experiments of this kind would give strong evidence for survival.</p>
</blockquote>
<h2>In the name of Psi-ence</h2>
<p>In 1882, the <a href="https://www.spr.ac.uk/">Society for Psychical Research</a> was founded in the UK. Its purpose was to study spiritualism, the paranormal, psychic powers and the possibility of life after death. During World War II Thouless became one of its many famous presidents – a list that also included Britain’s future prime minister Arthur Balfour and radio pioneer Sir Oliver Lodge. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=879&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=879&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=879&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1105&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1105&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289991/original/file-20190829-184207-1jxz2ph.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1105&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Robert Thouless’s son David Thouless (pictured) won the Nobel Prize for Physics in 2016. He passed away this year.</span>
<span class="attribution"><span class="source">Wikimedia Commons</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In the course of his academic work at Cambridge, Thouless devised experiments to test claimants for evidence of “psi” - a term he introduced in his 1942 paper “<a href="https://doi.org/10.1111/j.2044-8295.1942.tb01036.x">Experiments on Paranormal Guessing</a>”. The word was used to describe all phenomena of “telepathy”, “clairvoyance”, “precognition” or “extrasensory perception” that could be tested or described. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/cicada-3301-the-mystery-keeping-cryptologists-awake-at-night-21407">Cicada 3301: the mystery keeping cryptologists awake at night</a>
</strong>
</em>
</p>
<hr>
<p>He considered different ways to create an experiment that could test for survival after death. One test involved sealing an object or message in a package so that, after the author’s death, mediums could attempt to describe what was inside. A disadvantage here was that the package could only be opened once to check an answer. So in his seminal paper “<a href="https://archive.org/details/proceedingsofsoc48soci">A Test of Survival</a>”, Thouless turned to cryptography as a source of experiments. </p>
<p>He published two ciphers in this paper, which he called Passages. Passage II used a book cipher - a code in which the key comes from some aspect of a book or another text. </p>
<h2>Cracking Passage II</h2>
<p>In August 2019, I produced a table of English letter frequencies in a successful attempt to break an unsolved cipher of the Irish Republican Army, presented in a <a href="http://scienceblogs.de/klausis-krypto-kolumne/2019/08/08/top-25-crypto-mystery-solved-by-australian-codebreaker/">2008 book co-authored by California computer scientist James J. Gillogly</a>.</p>
<p>I used the books of Project Gutenberg – a large collection of books scanned or typed by volunteers – as the input texts. I wrote a program to check all 37,000 of the English books, using my table of letter frequencies to <a href="http://practicalcryptography.com/cryptanalysis/text-characterisation/quadgrams/">score</a> the output text for a solution to Passage II. </p>
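<p>As a rough illustration of that search-and-score loop, here is a minimal Python sketch. It is not a reconstruction of Thouless’s actual cipher: a simple Vigenère-style running key stands in for the real scheme, each candidate text is tried only from its first letter, and the file names (the quadgram table, the candidate list) are placeholders.</p>
<pre><code>
import math

CIPHERTEXT = "INXPHCJKGMJIRPRFBCVYWYWESNOECNSCVHEGYRJQTEBJMTGXATTWPNHCNYBCFNXPFLFXRVQWQL"

# Placeholder inputs: a quadgram table ("QUAD count" per line, as in the scoring
# method linked above) and the list of candidate Project Gutenberg text files.
quad_counts = {}
with open("english_quadgrams.txt") as f:
    for line in f:
        quad, count = line.split()
        quad_counts[quad] = int(count)
total = sum(quad_counts.values())
floor = math.log10(0.01 / total)
candidate_files = ["pg1469.txt"]   # in practice, about 37,000 English texts

def english_score(text):
    # Sum of quadgram log-probabilities: higher means more English-like.
    return sum(math.log10(quad_counts[text[i:i + 4]] / total)
               if text[i:i + 4] in quad_counts else floor
               for i in range(len(text) - 3))

def running_key_decrypt(ciphertext, key):
    # Stand-in decryption: subtract the key text letter by letter (mod 26).
    return "".join(chr((ord(c) - ord(k)) % 26 + ord("A"))
                   for c, k in zip(ciphertext, key))

best_score, best_book = float("-inf"), None
for path in candidate_files:
    key = "".join(ch for ch in open(path).read().upper() if ch.isalpha())
    score = english_score(running_key_decrypt(CIPHERTEXT, key))
    if score > best_score:
        best_score, best_book = score, path
print(best_book, best_score)
</code></pre>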
<p>After a few days, I found the source book was “The Hound of Heaven” by Francis Thompson, entered into Project Gutenberg in <a href="https://www.gutenberg.org/ebooks/1469">July 1998</a>. This is a most appropriate text to reflect Thouless’ religious beliefs, as it is a famous Christian poem. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lost-treasures-and-how-to-find-them-52698">Lost treasures and how to find them</a>
</strong>
</em>
</p>
<hr>
<p>The lesson from this discovery is that book ciphers can still be a very secure way of encrypting text if the key text can be kept secret, as the only method of solution is to exhaustively test all texts. The most famous example of a book cipher is the <a href="http://scienceblogs.de/klausis-krypto-kolumne/2017/03/20/the-top-50-unsolved-encrypted-messages-40-the-beale-cryptograms/?all=1">Beale ciphers</a> of 1885, which purport to describe the location of hidden treasure in the United States.</p>
<p>In the current age of Project Gutenberg and networked computer systems, Passage II could not have remained unsolved for long. </p>
<h2>A poetic approach to code</h2>
<p>Thouless’s Passage I used the well-known Playfair cipher, and was solved soon after it was published. The keyword was “SURPRISE”, with the plain text coming from Shakespeare’s Macbeth. Solving this was an impressive feat of cryptanalysis in the pre-computer age, and neither the <a href="http://scienceblogs.de/klausis-krypto-kolumne/2018/03/15/a-crypto-mystery-from-1948/?all=1">solver nor the method used is known</a>.</p>
<p>In 1949 Thouless produced Passage III using a double Playfair technique with two English keywords instead of one. <a href="https://dblp.uni-trier.de/pers/hd/g/Gillogly:James_J=">Gillogly</a> solved it in 1995, publishing an article in “<a href="https://doi.org/10.1080/0161-119691885004">Cryptologia</a>” with Larry Harnisch. The keywords were “Black Beauty” from the 1877 Anna Sewell novel. Naturally, Gillogly tried the text of Black Beauty as the source book for Passage II, without success.</p>
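<p>For readers curious about the mechanics, below is a minimal sketch of standard Playfair decryption in Python, using the keyword reported for Passage I. The variable <code>passage_one_ciphertext</code> is a placeholder for the published cipher text; this illustrates the general technique, not the method used by the original solver.</p>
<pre><code>
def playfair_square(keyword):
    # Build the 5x5 Playfair square: keyword letters first (J merged into I),
    # then the rest of the alphabet, without repeats.
    square = []
    for ch in keyword.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in square:
            square.append(ch)
    return square   # 25 letters, row-major

def playfair_decrypt(ciphertext, keyword):
    square = playfair_square(keyword)
    pos = {ch: (i // 5, i % 5) for i, ch in enumerate(square)}
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    out = []
    for a, b in zip(letters[0::2], letters[1::2]):
        ra, ca = pos[a]
        rb, cb = pos[b]
        if ra == rb:        # same row: take the letters to the left
            out += [square[ra * 5 + (ca - 1) % 5], square[rb * 5 + (cb - 1) % 5]]
        elif ca == cb:      # same column: take the letters above
            out += [square[((ra - 1) % 5) * 5 + ca], square[((rb - 1) % 5) * 5 + cb]]
        else:               # rectangle: swap the columns
            out += [square[ra * 5 + cb], square[rb * 5 + ca]]
    return "".join(out)

# playfair_decrypt(passage_one_ciphertext, "SURPRISE")
# where passage_one_ciphertext holds the published text of Passage I.
</code></pre>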
<p>Commenting on <a href="https://www.latimes.com/archives/la-xpm-1995-11-05-me-65166-story.html">Gillogly’s 1995 solution</a>, a Society for Psychical Research spokesperson said: “When Thouless devised the test in the late 1940s he could hardly have foreseen the future power of computers.” </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/will-superfast-quantum-computers-mean-the-end-of-unbreakable-encryption-64402">Will superfast 'quantum' computers mean the end of unbreakable encryption?</a>
</strong>
</em>
</p>
<hr>
<p>Due to the <a href="https://en.wikipedia.org/wiki/Moore%27s_law">growth in computer speed</a>, storage and networking capability, breaking Passage II became feasible. In the present day, quantum computing threatens to make many current encryption algorithms obsolete. </p>
<p>Any future similar tests of “survival” will require the use of some kind of encryption algorithm that is immune to technological advances. As was the case with Thouless, whoever devises such a test will have to take into account that computer power in the future may make the science fiction of today a reality.</p><img src="https://counter.theconversation.com/content/122465/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Bean does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Computer capabilities have boosted our decryption technology to great heights. How will the future compare to a past, one in which codes were thought to be a means of communicating after death?Richard Bean, Research Fellow, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/853952017-10-23T02:25:50Z2017-10-23T02:25:50ZDon’t fear robo-justice. Algorithms could help more people access legal advice<figure><img src="https://images.theconversation.com/files/190746/original/file-20171018-32345-1tsa5e8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Should we be afraid of robo-justice?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/law-concept-enter-button-gavel-on-407397043">Maksim Kabakou/Shutterstock</a></span></figcaption></figure><p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Algorithms have a role to play in supporting, but not replacing, lawyers.</p>
<p>Around 15 years ago, my team and I created an automated tool that helped determine eligibility for legal aid. Known as <a href="https://pdfs.semanticscholar.org/fc3f/e1bd316cdf84f43a04e08bfb4d14635c3682.pdf">GetAid</a>, we built it for Victoria Legal Aid (VLA), which helps people with legal problems to find representation. At that time, the task of determining who could access its services <a href="https://pdfs.semanticscholar.org/fc3f/e1bd316cdf84f43a04e08bfb4d14635c3682.pdf">chewed up</a> a significant amount of VLA’s operating budget. </p>
<p>After passing a financial test, applicants also needed to pass a merit test: would their case have a reasonable chance of being accepted by a court? GetAid provided advice about both stages using decision trees and machine learning.</p>
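<p>To give a flavour of how such a tool can work, here is a minimal, hypothetical sketch in Python: a hand-coded decision tree for the financial test, and a model trained on past cases for the merit test. The thresholds, features and training data are invented for illustration and are not VLA’s actual rules.</p>
<pre><code>
from sklearn.tree import DecisionTreeClassifier

def passes_financial_test(applicant):
    # Hypothetical hard rules encoded as a small decision tree.
    if applicant["weekly_income"] > 400:
        return False
    if applicant["liquid_assets"] > 1000:
        return False
    return True

# Merit test: learn from past applications labelled by whether the case turned
# out to have a reasonable prospect of success in court (toy data).
past_cases = [[1, 0, 3], [0, 1, 1], [1, 1, 5], [0, 0, 0]]
had_merit  = [1, 0, 1, 0]
merit_model = DecisionTreeClassifier(max_depth=2).fit(past_cases, had_merit)

def assess(applicant, case_features):
    if not passes_financial_test(applicant):
        return "refuse: fails financial test"
    if merit_model.predict([case_features])[0] == 1:
        return "grant legal aid"
    return "refer to a human officer"

print(assess({"weekly_income": 320, "liquid_assets": 500}, [1, 0, 4]))
</code></pre>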
<p>It never came online for applicants. But all these years later, the idea of using tools such as GetAid in the legal system is being taken seriously. Humans now feel far more comfortable using software to assist with, and even make, decisions. There are two major reasons for this change:</p>
<ul>
<li>Efficiency: the legal community has moved away from charging clients <a href="https://www.lawsociety.com.au/cs/groups/public/documents/internetcontent/522591.pdf">in six-minute blocks</a> and instead has become concerned with providing economical advice.</li>
<li>Acceptance of the internet: legal professionals finally acknowledge that the internet can be a safe way of conducting transactions and can be used to provide important advice and to collect data.</li>
</ul>
<p>This is a good development. Intelligent decision support systems can help streamline the legal system and provide useful advice to those who cannot afford professional assistance.</p>
<h2>Intelligent legal decision support systems</h2>
<p>While robots are unlikely to replace judges, automated tools are being developed to support legal decision making. In fact, they could help support access to justice in areas such as divorce, owners corporation disputes and small value contracts.</p>
<p>In cases where litigants cannot afford the assistance of lawyers or choose to appear in court unrepresented, systems have been developed that can advise about the potential outcome of their dispute. This helps them have reasonable expectations and make acceptable arguments.</p>
<p>Our <a href="http://heinonline.org/HOL/LandingPage?handle=hein.journals/lawprisk3&div=19&id=&page=">Split-Up software</a>, for example, helps users understand how Australian Family Court judges distribute marital property after a divorce. </p>
<p>The innovative part is not the computer algorithm itself, but the decomposition of the decision into 94 arguments, covering issues such as the contributions of the wife relative to the husband, the future needs of the wife relative to the husband, and the marriage’s level of wealth.</p>
<p>Using a form of statistical machine learning known as a neural network, it examines the strength of the weighting factors – contributions, needs and level of wealth – to determine an answer about the possible percentage split.</p>
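<p>Here is a minimal sketch of that idea in Python: a small neural network maps the three weighting factors onto a predicted percentage split. The training examples below are invented for illustration; the real system was built from decided Family Court cases.</p>
<pre><code>
import numpy as np
from sklearn.neural_network import MLPRegressor

# Features: [wife's relative contribution, wife's relative future needs, wealth level], each 0..1.
X = np.array([[0.5, 0.5, 0.5], [0.7, 0.6, 0.3], [0.3, 0.8, 0.9], [0.6, 0.4, 0.2]])
y = np.array([50.0, 62.0, 58.0, 55.0])   # hypothetical % of assets awarded to the wife

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[0.55, 0.6, 0.4]]))  # predicted percentage split for a new case
</code></pre>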
<p>Other platforms follow a similar model. Developed by the Dutch Legal Aid Board, <a href="http://www.hiil.org/project/rechtwijzer-divorce-separation-netherlands">the Rechtwijzer dispute resolution platform</a> allows people who are separating to answer questions that ultimately guide them to information relevant to their family situation. </p>
<p>Another major use of intelligent online dispute resolution is the <a href="https://civilresolutionbc.ca/">British Columbia Civil Resolution System</a>. It helps people affordably resolve small claims disputes of C$5,000 and under, as well as strata property conflicts.</p>
<p>Its <a href="http://mjdr-rrdm.ca/law/wp-content/uploads/2017/01/Salter-and-Thompson-January-20.pdf">initiators say</a> that one of the common misconceptions about the system is that it offers a form of “robojustice” – a future where “disputes are decided by algorithm”. </p>
<p>Instead, they argue the Civil Resolution Tribunal is human-driven:</p>
<blockquote>
<p>From the experts who share their knowledge through the Solution Explorer, to the dispute resolution professionals serving as facilitators and adjudicators, the CRT rests on human knowledge, skills and judgement.</p>
</blockquote>
<h2>Concerns about the use of robo-justice</h2>
<p>Twenty years after we first began constructing intelligent legal decision support systems, the underlying algorithms are not much smarter, but developments in computer hardware mean machines can now search larger databases far quicker.</p>
<p>Critics are concerned that the use of machine learning in the legal system will worsen biases against minorities, or deepen the divide between those who can afford quality legal assistance and those who cannot.</p>
<p>There is no doubt that algorithms will continue to reproduce <a href="https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say">existing biases against vulnerable groups</a>, but this is because the algorithms are largely copying and amplifying the decision-making trends embedded in the legal system.</p>
<p>In reality, there is already a class divide in legal access – those who can afford high quality legal professionals will always have an advantage. The development of intelligent support systems can partially redress this power imbalance by providing users with important legal advice that was previously unavailable to them.</p>
<p>There will always be a need for judges with advanced legal expertise to deal with situations that fall outside the norm. Artificial intelligence relies upon learning from prior experience and outcomes, and should not be used to make decisions about the facts of a case. </p>
<p>Ultimately, to pursue “real justice”, we need to change the law. In the meantime, robots can help with the smaller stuff.</p><img src="https://counter.theconversation.com/content/85395/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>John Zeleznikow has received research funding from the Australian Research Council, Relationships Australia Queensland, Relationships Australia Victoria, Victoria Legal Aid, Software Engineering Australia, Phillips and Wilkins, Allan Moore and Company, Victorian Institute of Sport and Tennis Australia. His partner works for Relationships Australia Victoria.</span></em></p>Automated tools could help encourage access to justice in areas such as divorce, owners corporation disputes and small value contracts.John Zeleznikow, Professor of Information Systems; Research Associate, Institute of Sport, Exercise and Active Living, Victoria UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/859102017-10-20T03:44:30Z2017-10-20T03:44:30ZWhy marking essays by algorithm risks rewarding the writing of ‘bullshit’<figure><img src="https://images.theconversation.com/files/191132/original/file-20171019-1052-1lnjyxt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Will marking algorithms really reward good writing?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/desks-2401801?src=aqJA3GWfiM5c95LnCIjIVg-1-11">Terence/Shutterstock</a></span></figcaption></figure><p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Picture this: you have written an essay. You researched the topic and carefully constructed your argument. You submit your essay online and receive your grade within seconds. But how can anyone read, comprehend and judge your essay that quickly? </p>
<p>Well, the answer is no one can. Your essay was marked by a computer. Would you trust the mark you received? Would you approach your next essay with the same effort and care?</p>
<p>These are <a href="http://www.smh.com.au/comment/smh-editorial/naplan-robomarking-plan-does-not-compute-20171012-gyzpl4.html">the questions</a> that parents, teachers and unions are asking about automated essay scoring (AES). The Australian Curriculum, Assessment and Reporting Authority (ACARA) proposes to use this program to grade essays, like persuasive writing questions, in its NAPLAN standardised testing scheme for primary and secondary schools.</p>
<p>ACARA has <a href="https://www.acara.edu.au/news-and-media/news-details?section=201710120459#201710120459">defended its decision</a> <a href="http://nap.edu.au/_resources/20151130_ACARA_research_paper_on_online_automated_scoring.pdf">and suggested</a> that computer-based marking can match or even surpass the consistency of human markers.</p>
<p>In my view, this misses the point. Computers are unable to genuinely read and understand what a text is about. A good argument has little worth when marks are awarded by a structural comparison with other texts and not by judging its ideas. </p>
<p>More importantly though, we risk encouraging the writing of text that follows “the script” but essentially says nothing of worth. In other words, the writing of “bullshit”.</p>
<h2>How does algorithmic marking work?</h2>
<p>It’s not entirely clear how AES functions, but let’s assume, in line with <a href="https://www.itnews.com.au/news/how-australia-plans-to-mark-naplan-with-cognitive-computing-403322">previous announcements</a>, that it employs a form of <a href="https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html">machine-learning</a>.</p>
<p>Here’s how that could work: a machine-learning algorithm “learns” from a pool of training data – in this case, <a href="https://www.nap.edu.au/docs/default-source/default-document-library/aes-fact-sheet.pdf?sfvrsn=2">it may be</a> “trained using more than 1,000 NAPLAN writing tests scored by human markers”.</p>
<p>But it generally does not learn the criteria by which humans mark essays. Rather, machine learning consists of multiple layers of so-called “artificial neurons”. These are statistical values that are gradually adjusted during the training period to associate certain inputs (structural text patterns, vocabulary, key words, semantic structure, paragraphing and sentence length) with certain outputs (high grades or low grades).</p>
<p>When marking a new essay, the algorithm makes a statistical inference by comparing the text with learned patterns and eventually matches it with a grade. Yet the algorithm cannot explain why this inference was reached.</p>
<p>Importantly, high grades are awarded to papers that show the structural features of highly persuasive writing – papers that follow the “persuasion rulebook”, so to speak. </p>
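<p>A minimal sketch of that kind of pipeline, assuming surface features and a simple regression model (the details of the actual NAPLAN scoring engine have not been published), might look like this:</p>
<pre><code>
import numpy as np
from sklearn.linear_model import Ridge

def structural_features(essay):
    # Surface features only: the model never "reads" the argument.
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),                                                # length
        len(set(w.lower() for w in words)) / max(len(words), 1),   # vocabulary richness
        np.mean([len(s.split()) for s in sentences]),              # average sentence length
        essay.count("\n\n") + 1,                                   # paragraph count
    ]

# Hypothetical training set: essays already scored by human markers.
training_essays = ["Firstly, schools should start later because...", "Dogs are good. The end."]
human_scores = [8.0, 3.0]

X = np.array([structural_features(e) for e in training_essays])
model = Ridge().fit(X, human_scores)

new_essay = "In conclusion, therefore, it is clearly evident that..."
print(model.predict([structural_features(new_essay)]))   # a grade, with no grasp of meaning
</code></pre>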
<h2>Rewarding bullshit</h2>
<p>Are the <a href="http://nap.edu.au/_resources/20151130_ACARA_research_paper_on_online_automated_scoring.pdf">claims by ACARA</a> that algorithmic marking can match the consistency of human markers wrong? Probably not, but that’s not the issue.</p>
<p>It’s possible that machine-learning could reliably award higher grades for those papers that follow the structural script for persuasive writing. And it might indeed do this with higher consistency than human markers. Examples from other fields show this – for instance, in the <a href="https://www.newyorker.com/magazine/2017/04/03/ai-versus-md">classification of images in medical diagnosis</a>. It will certainly be quicker and cheaper.</p>
<p>But it will not matter what a text is <em>about</em>: whether the argument is ethical, offensive or outright nonsensical, whether it conveys any coherent ideas or whether it speaks effectively to the intended audience. </p>
<p>The only thing that matters is that the text has the right structural patterns. In essence, algorithmic marking might reward the writing of “bullshit” – text written with little regard for the subject matter and solely to fulfil the algorithm’s criteria.</p>
<p>“Bullshit”, as analysts use the term, is not simply lying: it describes empty talk or meaningless jargon. Princeton philosopher Harry Frankfurt argues that <a href="https://www.stoa.org.uk/topics/bullshit/pdf/on-bullshit.pdf">talking bullshit</a> may actually be worse than lying, because the lie at least reaffirms the truth:</p>
<blockquote>
<p>It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it … For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says.</p>
</blockquote>
<p>Unlike humans, algorithms are incapable of truly understanding when something is nonsense rather than genuine ideas and argumentation. It doesn’t know whether a text has any worth or relationship to our world at all. </p>
<p>That’s why algorithmic marking, whether in NAPLAN or otherwise, risks rewarding the writing of bullshit. </p>
<h2>Encouraging the wrong thing</h2>
<p>Our politics, businesses and media are already flooded with empty arguments and jargon. Let’s not reward the skill of writing it.</p>
<p>Any application of algorithmic decision-making creates feedback loops. It influences future behaviour by rewarding and foregrounding some aspects of human practice and backgrounding others. </p>
<p>This is particularly the case when incentives are tied to the outcomes of algorithmic decision-making. In the case of NAPLAN, we know that the government rewards schools that score highly. As a result, there is already an entire industry geared towards “cracking the script” of NAPLAN in order to secure high marks. </p>
<p>Imagine what happens when students realise that genuine ideas and valid arguments are not rewarded by the algorithm.</p><img src="https://counter.theconversation.com/content/85910/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kai Riemer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>High grades might be awarded to papers that show the structural features of highly persuasive writing – papers that follow the “persuasion script”, so to speak.Kai Riemer, Professor of Information Technology and Organisation, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/857472017-10-19T03:07:31Z2017-10-19T03:07:31ZWhat businesses can learn from sports about using algorithms<figure><img src="https://images.theconversation.com/files/190747/original/file-20171018-32341-1kwjwn5.jpg?ixlib=rb-1.1.0&rect=20%2C36%2C3389%2C2190&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Sport algorithms aren't working for business.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Replacing human decision-making with algorithms seems to make sense. People tend to rely on <a href="https://hbr.org/2015/05/from-economic-man-to-behavioral-economics">unreliable cognitive shortcuts</a>, get fatigued or distracted, and can be swayed by subjective opinion and inter-personal alliances.</p>
<p>On the other hand, algorithms are bearers of encoded logic which consistently execute pre-determined decision criteria. Therefore they are immune to emotional influences; they can rise above social relationships to objectively analyse data and optimise decisions.</p>
<p>But, while the application of algorithms has produced remarkable outcomes in the sporting world, this success does not fully translate to the business world. The reason is rooted in the different levels of complexity of these environments.</p>
<h2>Algorithms in sports</h2>
<p>One of the earliest and most successful applications of algorithmic thinking has been in professional sports. It was in the middle of the previous century when American baseball teams started using player statistics to make decisions such as which players to hire, develop, and trade. </p>
<p>Known as <a href="http://sabr.org/">Sabermetrics</a>, this analytical approach has since expanded to other sports and grown in sophistication.</p>
<p>Today, massive amounts of athlete data are collected through elaborate <a href="https://www.playsight.com/#/">camera systems</a> and <a href="https://www.youtube.com/watch?v=O4sxp-Ydnk4">wearable devices</a> to create increasingly refined performance metrics that coaches can leverage in real time. This allows coaches to decide when to rest a player, which combinations of players are most effective, or how many points their key players need to score in order to win a game.</p>
<p>The profound impact of this algorithmic approach is evident in that many professional teams regard “sports analytics” to be <a href="https://www.forbes.com/sites/leighsteinberg/2015/08/18/changing-the-game-the-rise-of-sports-analytics/#1065c1a94c1f">crucial</a> and view <a href="https://interestingengineering.com/new-algorithm-takes-guesswork-sports">algorithmic innovation</a> as key to their <a href="http://www.computerweekly.com/news/450296302/Foxy-Leicester-City-FC-won-Premiership-with-data-analytics">success</a>.</p>
<h2>Can the success be replicated in the world of business?</h2>
<p>Some of the best known companies in the world utilise algorithms to enhance consumer services and optimise internal processes. </p>
<p>Walmart’s e-Commerce arm, Jet.com, employs algorithms that <a href="https://www.cbsnews.com/news/walmart-goes-head-to-head-with-amazon-in-grocery-wars/">adjust prices</a> based on the items in a customer’s checkout cart. Amazon and Netflix’s touted recommendation systems are themselves algorithms applied to vast quantities of customer data. </p>
<p>Algorithm-based chat bots are now commonplace in the banking and insurance industries. Uber’s pricing and ride-allocation systems are exclusively managed by <a href="https://www.ft.com/content/88fdc58e-754f-11e6-b60a-de4532d5ea35">algorithms</a> that match ride demand and supply in different geographical areas in real time. </p>
<p>Despite their widening adoption, algorithms can have unintended negative consequences. For instance, <a href="http://www.jpost.com/International/Google-and-Facebook-allowed-advertisers-to-target-Jew-haters-505356">Google</a>, <a href="https://www.nytimes.com/2017/09/23/opinion/sunday/facebook-ad-scandal.html">Facebook</a>, <a href="http://www.smh.com.au/technology/innovation/microsofts-teenage-chatbot-tay-turns-into-a-racist-abusive-troll-20160324-gnqvro.html">Twitter</a>, and <a href="https://www.theguardian.com/technology/2017/sep/21/instagram-death-threat-facebook-olivia-solon">Instagram</a> have all come under fire for employing algorithms that promoted racist and abusive behaviour. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/do-computers-make-better-bank-managers-than-humans-85086">Do computers make better bank managers than humans?</a>
</strong>
</em>
</p>
<hr>
<p>Some hiring algorithms have been shown to be <a href="https://hbr.org/2016/12/hiring-algorithms-are-not-neutral">racially biased</a> whereas others <a href="https://www.lexology.com/library/detail.aspx?g=c806d996-45c5-4c87-9d8a-a5cce3f8b5ff">denied loans</a> to eligible applicants. In another example, rogue algorithms led to the firing of <a href="http://www.npr.org/2016/09/12/493654950/weapons-of-math-destruction-outlines-dangers-of-relying-on-data-analytics">high-performing teachers</a>.</p>
<p>These failures occurred not simply due to poor algorithmic design, but because of unanticipated interactions between algorithms and their complex environments.</p>
<p>Compared to sports, business environments are equivocal, unstructured, and dynamic. There are many possible measures for organisational success (profit, growth, market share, stock value, etc.), which cannot be easily traced back to the performance of individual employees. In fact, there are multiple dimensions along which employee performance can be assessed. Many of them are not readily observable and can be measured in various ways (engagement, commitment, innovation etc.).</p>
<p>Additionally, a lot of industries are characterised by hyper-competition, where disruptive innovations can render existing strategies obsolete and require rapid and profound technological and organisational changes.</p>
<h2>Sports and business are different worlds</h2>
<p>On the other hand, sporting competitions have a small and finite number of outcomes. The rules that govern athletes’ actions are known, accepted, explicitly enforced, and stable. Success is easily defined in terms of points, wins, and championships. Individual and team performance indicators are unambiguous and can be calculated from direct observation (points scored, hits made, assists per minute, punches per round, etc.), and their importance can be measured in terms of their relative contribution to the success of the athlete or team.</p>
<p>The relative stability, simplicity, and predictability of the sporting environment make it suitable for algorithmic technologies. Such technologies aim to emulate human decision-making by generating options within a well-defined problem area, identifying criteria to evaluate them, assigning weights to each option, calculating their scores, and choosing the option with the highest score.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-marketers-use-algorithms-to-try-to-read-your-mind-84682">How marketers use algorithms to (try to) read your mind</a>
</strong>
</em>
</p>
<hr>
<p>This process faithfully mirrors the decision-making of team managers and coaches, and can improve on it because these technologies can process and analyse large datasets. For example, a basketball coach must select five starters from the roster: the number of options is limited, and the choice can be optimised by an algorithm that analyses individual player performance data against the opposing team’s.</p>
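<p>That starting-lineup example can be written down almost directly. The sketch below enumerates every possible lineup and scores each against weighted criteria; the player statistics, weights and scoring formula are invented for illustration.</p>
<pre><code>
from itertools import combinations

players = {
    "A": {"pts": 22, "assists": 3, "def_rating": 0.7},
    "B": {"pts": 15, "assists": 8, "def_rating": 0.6},
    "C": {"pts": 11, "assists": 2, "def_rating": 0.9},
    "D": {"pts": 18, "assists": 5, "def_rating": 0.5},
    "E": {"pts": 9,  "assists": 7, "def_rating": 0.8},
    "F": {"pts": 14, "assists": 4, "def_rating": 0.6},
}
weights = {"pts": 1.0, "assists": 1.5, "def_rating": 10.0}   # chosen by the analyst

def lineup_score(lineup):
    # Weighted sum of each criterion over the five selected players.
    return sum(weights[k] * players[p][k] for p in lineup for k in weights)

best = max(combinations(players, 5), key=lineup_score)
print(best, lineup_score(best))
</code></pre>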
<p>However, research has shown that in complex, ambiguous, and dynamic conditions, such as those that characterise businesses, people engage in <a href="https://www.ise.ncsu.edu/wp-content/uploads/2017/02/Klein_2008_HF_NDM.pdf">naturalistic decision-making</a>. Here, people do not use algorithmic strategies to generate utility estimates for different courses of action. </p>
<p>In fact, no explicit action of making a decision is taken. Instead, people perceive a situation through pattern-matching against previous experience. Based on this classification, certain actions present themselves as plausible because they have been applied in previous similar situations (“I have a feeling this employee is in distress. I should call him in for a meeting”).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ethics-by-numbers-how-to-build-machine-learning-that-cares-85399">Ethics by numbers: how to build machine learning that cares</a>
</strong>
</em>
</p>
<hr>
<p>Because naturalistic decision-making involves little explicit deliberation and is mostly intuitive, current algorithmic technologies cannot reliably replicate it. This explains the mixed results from applying these technologies in business. </p>
<p>While we should not entirely exclude algorithmic technologies from business decision-making, companies would be well-advised to apply them to support analytical decision-making rather than replace naturalistic decision-making.</p><img src="https://counter.theconversation.com/content/85747/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Uri Gal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are good reasons why business has not been as successful as sports teams at implementing algorithmic decision-making.Uri Gal, Associate Professor in Business Information Systems, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/853992017-10-17T19:14:21Z2017-10-17T19:14:21ZEthics by numbers: how to build machine learning that cares<figure><img src="https://images.theconversation.com/files/190327/original/file-20171016-27761-j268yk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We need to build algorithms that act ethically.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/abstract-screen-software-background-programmer-occupation-425789347">BEST-BACKGROUNDS/Shutterstock</a></span></figcaption></figure><p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Machine learning algorithms work blindly towards the mathematical objective set by their designers. It is vital that this task include the need to behave ethically.</p>
<p>Such systems are exploding in popularity. Companies use them to decide what news you see and whom you meet through online dating. Governments are starting to roll out machine learning to help
<a href="http://www.news.com.au/technology/innovation/microsoft-to-build-hyperscale-cloud-regions-for-australian-government-to-unlock-power-of-ai/news-story/c80c765751240a4a2b837a212112cf31">deliver government services</a> and to select individuals for audit.</p>
<p>Yet the algorithms that drive these systems are much simpler than you might realise: they have more in common with a pocket calculator than a robot from a sci-fi novel by Isaac Asimov. By default, they don’t understand the context in which they act, nor the ethical consequences of their decisions. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-marketers-use-algorithms-to-try-to-read-your-mind-84682">How marketers use algorithms to (try to) read your mind</a>
</strong>
</em>
</p>
<hr>
<p>The predictions of a machine learning algorithm come from generalising example data, rather than from expert knowledge. For example, an algorithm might use your financial situation to predict the chance you’ll default on a loan. The algorithm would be “trained” on the finances of historical customers who did or did not default.</p>
<p>For this reason, a machine learning system’s ethics must be provided as an explicit mathematical formula. And it’s not a simple task. </p>
<h2>Learning from data</h2>
<p>Data61, where I work, has designed and built machine learning systems for the government, as well as local and international companies. This has included several projects where the product’s behaviour has ethical implications. </p>
<p>Imagine a university that decides to take a forward-looking approach to enrolling students: instead of basing their selection on previous marks, the university enrols students it <em>predicts</em> will perform well. </p>
<p>The university could use a machine learning algorithm to make this prediction by training it with historical information about previous applicants and their subsequent performance. </p>
<p>Such training occurs in a very specific way. The algorithm has many parameters that control how it behaves, and the training involves optimising the parameters to meet a particular mathematical objective relating to the data. </p>
<p>The simplest and most common objective is to be able to predict the training data accurately on average. For the university, this objective would have its algorithm predict the marks of the historical applicants as accurately as possible.</p>
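<p>As a minimal sketch of what “optimising the parameters to meet a mathematical objective” looks like, the Python below fits a tiny linear model by gradient descent so that its average squared error on the training data is as small as possible. The applicant features and marks are invented for illustration.</p>
<pre><code>
import numpy as np

X = np.array([[75, 1], [60, 0], [88, 1], [55, 1]], dtype=float)   # e.g. entry score, interview flag
y = np.array([72, 58, 85, 60], dtype=float)                       # subsequent university marks

w, b = np.zeros(X.shape[1]), 0.0
lr = 0.0001
for _ in range(100_000):
    err = X @ w + b - y
    # Objective: minimise the mean squared error over the training data.
    w -= lr * (2 / len(y)) * (X.T @ err)
    b -= lr * (2 / len(y)) * err.sum()

print(X @ w + b)   # the fitted model's predictions for the historical applicants
</code></pre>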
<h2>Ethical objectives</h2>
<p>But a simple predictive goal such as “make the smallest mistakes possible” can inadvertently produce unethical decision-making.</p>
<p>Consider a few of the many important issues missed by this often-used objective:</p>
<p><strong>1. Different people, different mistakes</strong></p>
<p>Because the algorithm only cares about the size of its mistakes averaged over all the training data, it might have very different “accuracies” on different kinds of people. </p>
<p>This effect often arises for minorities: there are fewer of them in the training data, so the algorithm doesn’t get penalised much for poorly predicting their grades. For a university predicting grades in a male-dominated course, for example, it might be the case that the algorithm is 90% accurate overall, but only 50% accurate for women. </p>
<p>To address this, the university would have to change the algorithm’s objective to care equally about accuracy for both men and women.</p>
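<p>One simple way to encode that change is to weight each training example by the inverse of its group’s size, so that errors on the under-represented group count just as much in total. A minimal sketch, with invented data:</p>
<pre><code>
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[70.], [80.], [65.], [90.], [72.], [68.]])
y = np.array([68., 78., 66., 88., 75., 70.])
group = np.array(["m", "m", "m", "m", "f", "f"])   # imbalanced training data

# Weight each example by 1 / (size of its group): both groups then contribute
# equally to the average error the model is trained to minimise.
counts = {g: (group == g).sum() for g in set(group)}
weights = np.array([1.0 / counts[g] for g in group])

model = LinearRegression().fit(X, y, sample_weight=weights)
for g in counts:
    idx = group == g
    print(g, "mean absolute error:", np.abs(model.predict(X[idx]) - y[idx]).mean())
</code></pre>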
<p><strong>2. The algorithm isn’t sure</strong></p>
<p>Simple machine learning algorithms provide a “best guess” prediction, but more sophisticated algorithms are also able to assess their own confidence in that prediction. </p>
<p>Ensuring that confidence is accurate can also be an important part of the algorithm’s objective. For example, the university might want to apply an ethical principle like “the benefit of the doubt” to applicants with uncertain predicted marks. </p>
<p><strong>3. Historical bias</strong></p>
<p>The university’s algorithm has learned to predict entirely from historical data. But if professors giving out the marks in this data had biases (say against a particular minority), then new predictions would have the same bias. </p>
<p>The university would have to remove this bias in its future admissions by changing the algorithm’s objective to compensate for it.</p>
<p><strong>4. Conflicting priorities</strong></p>
<p>The most difficult factor in creating an appropriate mathematical objective is that ethical considerations often conflict. For the university, increasing the algorithm’s accuracy for one minority group will reduce its accuracy for another. No prediction system is perfect, and their limitations will always affect some students more than others. </p>
<p>Balancing these competing factors in a single mathematical objective is a complex issue of judgement with no single answer.</p>
<h2>Building ethical algorithms</h2>
<p>These are only a few of the many complex ethical considerations for a seemingly straightforward problem. So how does this university, or a company or government, ensure the ethical behaviour of their real machine learning systems? </p>
<p>As a first step, they could designate an “ethics engineer”. Their job would be to elicit the ethical requirements of the system from its designers, convert them into a mathematical objective, and then monitor the algorithm’s ability to meet that objective as it moves into production. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/do-computers-make-better-bank-managers-than-humans-85086">Do computers make better bank managers than humans?</a>
</strong>
</em>
</p>
<hr>
<p>Unfortunately, this role is now lumped into the general domain of the “data scientist” (if it exists at all), and does not receive the attention it deserves.</p>
<p>Creating an ethical machine learning system is no simple task: it requires balancing competing priorities, understanding social expectations, and accounting for different types of disadvantage. But it is the only way for governments and companies to ensure they maintain the ethical standards society expects of them.</p><img src="https://counter.theconversation.com/content/85399/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lachlan McCalman works for Data61, a business unit of CSIRO. </span></em></p>Creating an ethical machine learning system is no simple task, but maths can help.Lachlan McCalman, Senior Research Engineer, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/850862017-10-17T01:21:28Z2017-10-17T01:21:28ZDo computers make better bank managers than humans?<p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Algorithms are increasingly making decisions that affect ordinary people’s lives. One example of this is so-called “algorithmic lending”, with some companies <a href="http://www.afr.com/business/banking-and-finance/big-four-banks-do-deals-to-fight-mortgage-disrupters-20171011-gyz7wa">claiming</a> to have reduced the time it takes to approve a home loan to mere minutes. </p>
<p>But can computers become better judges of financial risk than human bank tellers? Some computer scientists and data analysts <a href="http://www.afr.com/business/banking-and-finance/financial-services/credit-rating-agency-questions-the-rise-of-lending-algorithms-20161122-gsuwjy">certainly think so</a>. </p>
<h2>How banking is changing</h2>
<p>On the face of it, bank lending is rather simple. </p>
<p>People with excess money deposit it in a bank, expecting to earn interest. People who need cash borrow funds from the bank, promising to pay the amount borrowed plus interest. The bank makes money by charging a higher interest rate to the borrower than it pays the depositor. </p>
<p>Where it gets a bit trickier is in managing risk. If the borrower were to default on payments, not only does the bank not earn the interest income, it also loses the amount loaned (provided there wasn’t collateral attached, such as a house or car). </p>
<p>A borrower who is deemed less creditworthy is charged a higher interest rate, thereby compensating the bank for additional risk.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-marketers-use-algorithms-to-try-to-read-your-mind-84682">How marketers use algorithms to (try to) read your mind</a>
</strong>
</em>
</p>
<hr>
<p>Consequently, the banks have a delicate balancing act - they always want more borrowers to increase their income, but they need to screen out those who aren’t creditworthy. </p>
<p>Traditionally this role was fulfilled by an experienced credit manager — a judge of human character — who could distinguish between responsible borrowers and those who would be unlikely to meet their repayment schedules. </p>
<h2>Are humans any good at judging risk?</h2>
<p>When you look at the research, it doesn’t seem that humans are that great at judging financial risk.</p>
<p>Two psychologists <a href="http://journals.sagepub.com/doi/abs/10.1518/155534307X232857">conducted</a> an experimental study to assess the kind of information that loan officers rely upon. They found that in addition to “hard” financial data, loan officers rely on “soft” gut instincts. The latter was even regarded as a more valid indicator of creditworthiness than financial data. </p>
<p><a href="http://amj.aom.org/content/40/5/1063.short">Additional studies</a> of loan officers in controlled experiments showed that the longer the bank’s association with the customer, the larger the requested loan, and the more exciting its associated industry, the more likely are loan officers to underrate loan risks.</p>
<p><a href="https://www.researchgate.net/publication/227445655_The_Effects_of_Task_Size_and_Similarity_on_the_Decision_Behavior_of_Bank_Loan_Officers">Other researchers</a> have found that the more applications that loan officers have to process, the greater the likelihood that bank officers will use non-compensatory (irrational) decision strategies. For example, just because a customer has a high income that doesn’t mean they don’t have a bad credit history.</p>
<p>Loan officers have also <a href="https://deepblue.lib.umich.edu/handle/2027.42/28159">been found</a> to reach decisions early in the lending process, tending to ignore information that is inconsistent with their early impressions. Lastly, loan officers <a href="https://www.researchgate.net/publication/247874304_The_Effect_of_Auditor_Attestation_and_Tolerance_for_Ambiguity_on_Commercial_Lending_Decisions">often fail</a> to properly weigh the credibility of financial information when evaluating commercial loans. </p>
<h2>Enter algorithmic lending</h2>
<p>Compared with human bank managers, a computer algorithm is like a devoted apprentice who painstakingly observes each person’s credit history over many years.</p>
<p>Banks already have troves of data on historical loan applications paired with outcomes - whether the loan was repaid or defaulted. Armed with this information, an algorithm can screen each new credit application to determine its creditworthiness. </p>
<p>There are various methods, based on the specific data in each applicant’s profile, from which the algorithm identifies the most relevant and unique attributes.</p>
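<p>A minimal sketch of this screening step might look like the Python below: a classifier is trained on past applications labelled with their outcomes, then estimates the default risk of a new application and reports which attributes it leans on. The features and records are invented for illustration, and real lenders must also satisfy fair-lending rules.</p>
<pre><code>
from sklearn.ensemble import GradientBoostingClassifier

# Historical applications: [income, existing_debt, years_employed, prior_defaults]
X = [
    [85_000, 10_000, 6, 0],
    [42_000, 30_000, 1, 1],
    [60_000,  5_000, 3, 0],
    [38_000, 25_000, 2, 2],
    [95_000, 40_000, 8, 0],
    [29_000, 15_000, 1, 1],
]
y = [0, 1, 0, 1, 0, 1]   # 1 = defaulted

model = GradientBoostingClassifier().fit(X, y)

new_application = [[55_000, 12_000, 4, 0]]
print("estimated probability of default:", model.predict_proba(new_application)[0][1])
print("relative importance of each attribute:", model.feature_importances_)
</code></pre>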
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-might-be-everywhere-but-like-us-theyre-deeply-flawed-66838">Algorithms might be everywhere, but like us, they're deeply flawed</a>
</strong>
</em>
</p>
<hr>
<p>For example, if the application is filled in by hand and scanned into the computer, the algorithm may consider whether the application was written in block capitals or in cursive handwriting. </p>
<p>The algorithm may have detected a pattern that applicants who write in all-caps without punctuation are usually less educated, have lower earning potential, and are therefore riskier borrowers. Who knew that how you write your name and address could result in <a href="https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapid=253231307">denial of a credit application</a>?</p>
<p>On the other hand, a degree from Harvard University could be viewed <a href="https://www.americanbanker.com/news/is-it-ok-for-lending-algorithms-to-favor-ivy-league-schools">favorably</a> by algorithms.</p>
<h2>On balance, computers come out ahead</h2>
<p>A large part of human decision making is based on the first few seconds and how much they like the applicant. A well-dressed, well-groomed young individual has more chance than an unshaven, dishevelled bloke of obtaining a loan from a human credit checker. But an algorithm is unlikely to make the same kind of judgement.</p>
<p><a href="http://scholarworks.law.ubalt.edu/cgi/viewcontent.cgi?article=1307&context=all_fac">Some critics</a> contend that algorithmic lending will shut disadvantaged people out of the financial system, because of the use of pattern-matching and financial histories. They argue that machines are by definition neutral and thus usual banking rules will not apply. This is a misconception. </p>
<p>The computer program is constrained by the same regulations as the human underwriter. For example, the computer program cannot deny applications from a particular postal code, as postcodes tend to correlate with income level, race and ethnicity. </p>
<p>Moreover, such overt or covert discrimination can be prevented by requiring lending agencies (and algorithms) to provide reasons why a particular application was denied, as <a href="https://www.oaic.gov.au/privacy-law/privacy-registers/privacy-codes/privacy-credit-reporting-code-2014-version-1-2#integrity">Australia has done</a>.</p>
<p>In conclusion, computers make lending decisions based on objective data and avoid the biases exhibited by people, while complying with regulations that govern fair lending practices.</p><img src="https://counter.theconversation.com/content/85086/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Saurav Dutta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>On balance, computers may make better judges of risk than people.Saurav Dutta, Head of School at the School of Accounting, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/846822017-10-15T19:22:52Z2017-10-15T19:22:52ZHow marketers use algorithms to (try to) read your mind<figure><img src="https://images.theconversation.com/files/189884/original/file-20171012-9815-l3vwyr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Marketers are using your data to make predictions about what you'll want, when.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p><em>You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on <a href="https://theconversation.com/au/topics/algorithms-at-work-44799">Algorithms at Work</a>.</em></p>
<hr>
<p>Have you ever looked for a product online and then been recommended the exact thing you need to complement it? Or have you been thinking about a particular purchase, only to receive an email with that product on sale? </p>
<p>All of this may give you a slightly spooky feeling, but what you’re really experiencing is the result of complex algorithms used to predict, and in some cases, even influence your behaviour. </p>
<p>Companies now have access to an unprecedented amount of data on your present and past shopping and browsing preferences. This ranges from transactional data, to website traffic and even social media posts. Predictive algorithms use this data to make inferences about what is likely to happen in the future. </p>
<p>For example, after a few times visiting a coffee shop, the barista might notice that you always order a latte with one sugar. They could then use this “data” to predict that tomorrow you will order the same thing, and have it ready for you before you get there. </p>
<p>Predictive algorithms work the same way, just on a much bigger scale. </p>
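<p>Written as code, the barista’s “model” is nothing more than a frequency count over past orders. It is a toy illustration of the same logic retailers apply across millions of customers and many more signals:</p>
<pre><code>
from collections import Counter

# Toy order history, standing in for a retailer's transaction database.
order_history = {
    "alex": ["latte+1sugar", "latte+1sugar", "flat white", "latte+1sugar"],
    "sam":  ["long black", "long black"],
}

def predict_next_order(customer):
    # Predict the customer's most frequent past order.
    return Counter(order_history[customer]).most_common(1)[0][0]

print(predict_next_order("alex"))   # "latte+1sugar"
</code></pre>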
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-might-be-everywhere-but-like-us-theyre-deeply-flawed-66838">Algorithms might be everywhere, but like us, they're deeply flawed</a>
</strong>
</em>
</p>
<hr>
<h2>How are big data and predictive algorithms used?</h2>
<p>My colleagues and I recently conducted <a href="http://www.sciencedirect.com/science/article/pii/S0969698917301959">a study</a> using online browsing data to show there are five reasons consumers use retail websites, ranging from simply “touching base” to planning a specific purchase. </p>
<p>Using historical data, we were able to see that customers who browse a wide variety of different product categories are less likely to make a purchase than those who focus on specific products. Meanwhile, consumers were more likely to purchase if they reached the website through a search engine than via a link in an email. </p>
<p>With information like this, websites can be personalised based on the most likely motivation of each visitor. The next time a consumer clicks through from a search engine they can be led straight to checkout, while those wanting to browse can be given time and inspiration. </p>
<p>Somewhat similar to this are the predictive algorithms used to make recommendations on websites like Amazon and Netflix. <a href="https://www.mckinsey.com/industries/retail/our-insights/how-retailers-can-keep-up-with-consumers">Analysts estimate</a> that 35% of what people buy on Amazon, and 75% of what they watch on Netflix, is driven by these algorithms.</p>
<p>These algorithms also work by analysing both your past behaviour (e.g. what you have bought or watched), as well as the behaviour of others (e.g. what people who bought or watched the same thing also bought or watched). The key to the success of these algorithms is the scope of data available. By analysing the past behaviour of similar consumers, these algorithms are able to make recommendations that are more likely to be accurate, rather than relying on guess work. </p>
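<p>The “people who bought this also bought that” idea can be sketched in a few lines of Python: items are compared by how much their sets of buyers overlap, and a customer is recommended the unseen item most similar to what they already own. The purchase data below is invented and far simpler than anything Amazon or Netflix analyse.</p>
<pre><code>
purchases = {
    "u1": {"kettle", "teapot", "mugs"},
    "u2": {"kettle", "teapot"},
    "u3": {"kettle", "toaster"},
    "u4": {"teapot", "mugs"},
}

def item_similarity(a, b):
    # Jaccard similarity between the sets of customers who bought each item.
    buyers_a = {u for u, items in purchases.items() if a in items}
    buyers_b = {u for u, items in purchases.items() if b in items}
    return len(buyers_a.intersection(buyers_b)) / len(buyers_a.union(buyers_b))

def recommend(user):
    owned = purchases[user]
    candidates = set().union(*purchases.values()) - owned
    # Score each unseen item by its similarity to what the user already bought.
    return max(candidates, key=lambda c: sum(item_similarity(c, o) for o in owned))

print(recommend("u3"))   # likely "teapot": kettle buyers often buy teapots too
</code></pre>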
<p>For the curious, part of Amazon’s famous recommendation algorithm was <a href="https://github.com/amzn/amazon-dsstne/">recently released</a> as an open source project for others to build upon.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-future-of-online-advertising-is-big-data-and-algorithms-69297">The future of online advertising is big data and algorithms</a>
</strong>
</em>
</p>
<hr>
<p>But of course, there are innumerable other data points for algorithms to analyse than just behaviour. US retailer Walmart famously <a href="http://www.countryliving.com/food-drinks/a44550/walmart-strawberry-pop-tarts-before-hurricane/">stocked up on strawberry pop-tarts</a> in the lead up to a major storm. This was the result of simple analysis of past weather data and how that influenced demand. </p>
<p>It is also possible to predict how purchase behaviour is likely to evolve in the future. Algorithms can predict whether a consumer is likely to change <a href="http://journals.ama.org/doi/abs/10.1509/jmkr.45.1.60?code=amma-site">purchase channel</a> (e.g. from in-store to online), or even if certain customers are likely to <a href="http://journals.sagepub.com/doi/abs/10.1177/1094670515616376">stop shopping</a>. </p>
<p>Prior studies that have applied these algorithms have found companies can influence a consumer’s choice of <a href="http://pubsonline.informs.org/doi/abs/10.1287/mksc.2015.0923">purchase channel</a> and even purchase value by changing the way they communicate with them, and can use promotional campaigns to decrease <a href="http://www.sciencedirect.com/science/article/pii/S001985011400114X">customer churn</a>. </p>
<h2>Should I be concerned?</h2>
<p>While these predictive algorithms undoubtedly provide benefits, there are also serious issues about privacy. In the past there have been <a href="https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#4215f57d6668">claims</a> that companies have predicted that consumers were pregnant before they knew it themselves. </p>
<p>These privacy concerns are critical and require careful consideration from both businesses and government. </p>
<p>However, it is important to remember that companies are not truly interested in any one consumer. While many of these algorithms are designed to mimic “personal” recommendations, in fact they are based on behaviour across the whole customer base. Additionally, the recommendations or promotions given to each individual are generated automatically from the database, so the chances of any staff member actually knowing about an individual customer are extremely low. </p>
<p>Consumers can also benefit from companies using these predictive algorithms. For example, if you search for a product online, chances are you will be targeted with ads for that product over the next few days. Depending on the company, these ads may include discount codes to encourage you to purchase. By waiting a few days after browsing, you may be able to get a discount for a product you were intending to buy anyway. </p>
<p>Alternatively, look for companies who adjust their price based on forecasted demand. By learning when the low-demand periods are, you can pick yourself up a bargain at lower prices. So while companies are turning to predictive analytics to try to read consumers’ minds, some smart shopping behaviours can make it a two-way street.</p><img src="https://counter.theconversation.com/content/84682/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jason Pallant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>This is how marketers are taking advantage of customer data to build predictive algorithms, and even tailor their products and offerings.Jason Pallant, Lecturer of Marketing, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.