Polling methodology – The Conversation

Election polls are more accurate if they ask participants how others will vote (2020-11-18)

<figure><img src="https://images.theconversation.com/files/369666/original/file-20201116-17-1bjmv2o.jpg?ixlib=rb-1.1.0&rect=8%2C61%2C5835%2C3828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">People have information on how they'll vote, but also about how others in their community may vote.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/USElection2020WisconsinMisinformation/52a5b75dca5245b48ebc25f296547302/photo">AP Photo/Wong Maye-E</a></span></figcaption></figure><p>Most public opinion polls correctly predicted the winning candidate in the 2020 U.S. presidential election – but on average, they overestimated the margin by which Democrat Joe Biden would beat Republican incumbent Donald Trump. </p>
<p>Our research into polling methods has found that pollsters’ predictions can be more accurate if they look beyond traditional questions. Traditional polls ask people whom they would vote for if the election were today, or for the <a href="https://academic.oup.com/poq/article-abstract/78/S1/233/1836783">percent chance</a> that they might vote for particular candidates.</p>
<p>But our research into <a href="https://priceschool.usc.edu/people/wandi-bruine-de-bruin">people’s expectations</a> and <a href="https://www.santafe.edu/people/profile/mirta-galesic">social judgments</a> led us and our collaborators, <a href="https://www.santafe.edu/people/profile/henrik-olsson">Henrik Olsson</a> at the Santa Fe Institute and <a href="https://economics.mit.edu/faculty/dprelec">Drazen Prelec</a> at MIT, to wonder whether different questions could yield more accurate results.</p>
<p>Specifically, we wanted to know whether asking people about the political preferences of others in their social circles and in their states could help paint a fuller picture of the American electorate. Most people know <a href="http://dx.doi.org/10.1037/rev0000096">quite a bit about the life experiences of their friends and family</a>, including how happy and healthy they are and roughly how much money they make. So we designed poll questions to see whether this knowledge of others extended to politics – and we have found that it does.</p>
<p>Pollsters, we determined, could learn more if they took advantage of this type of knowledge. Asking people how others around them are going to vote and aggregating their responses across a large national sample enables pollsters to tap into what is often called “<a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">the wisdom of crowds</a>.”</p>
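<p>Mechanically, the aggregation step is simple. Here is a minimal sketch, assuming each respondent reports only the percentage of their social contacts expected to back one candidate; the figures and variable names below are invented for illustration:</p>
<pre><code># Minimal sketch: turning "social circle" reports into a vote-share
# estimate. All respondent data here are hypothetical.
from statistics import mean

# Each respondent reports the share of their social contacts
# expected to vote for candidate A (as a percentage).
social_circle_reports = [55.0, 40.0, 62.5, 48.0, 51.0]

# The wisdom-of-crowds estimate is the average of those reports,
# optionally weighted (e.g., by demographic raking weights).
estimate_a = mean(social_circle_reports)
print(f"Estimated vote share for candidate A: {estimate_a:.1f}%")
</code></pre>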
<p><iframe id="Lo86h" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/Lo86h/6/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>What are the new ‘wisdom-of-crowds’ questions?</h2>
<p>Since the 2016 U.S. presidential election season, we have been asking participants in a variety of election polls: “<a href="https://www.nature.com/articles/s41562-018-0302-y">What percentage of your social contacts will vote for each candidate?</a>” </p>
<p>In the 2016 U.S. election, this question predicted that Trump would win, and did so more accurately than questions asking about poll respondents’ own voting intentions. </p>
<p>The question about participants’ social contacts was <a href="https://www.youtube.com/watch?v=v9WSmM8VeQ0">similarly more accurate</a> than the traditional question at predicting the results of the 2017 French presidential election, the 2017 Dutch parliamentary election, the 2018 Swedish parliamentary election and the 2018 U.S. election for House of Representatives.</p>
<p>In some of these polls, we also asked, “What percentage of people in your state will vote for each candidate?” This question also taps into participants’ knowledge of those around them, but in a wider circle. Variations of this question have worked well <a href="https://academic.oup.com/poq/article/78/S1/204/1836551">in previous elections</a>.</p>
<p><iframe id="ZO25h" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/ZO25h/6/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>How well did the new polling questions do?</h2>
<p>In the 2020 U.S. presidential election, our “wisdom-of-crowds” questions were once again better at predicting the outcome of the national popular vote than the traditional questions. In the <a href="https://election.usc.edu">USC Dornsife Daybreak Poll</a> we asked more than 4,000 participants how they expected their social contacts to vote and which candidate they thought would win in their state. They were also asked how they themselves were planning to vote. </p>
<p>The current election results show a <a href="https://cookpolitical.com/2020-national-popular-vote-tracker">Biden lead of 3.7 percentage points</a> in the popular vote. <a href="https://projects.fivethirtyeight.com/polls/president-general/national/">An average of national polls</a> predicted a lead of 8.4 percentage points. In comparison, the question about social contacts <a href="https://osf.io/j54rz">predicted a 3.4-point Biden lead</a>. The state-winner question predicted Biden leading by 1.5 points. By contrast, the traditional question that asked about voters’ own intentions in the same poll predicted a 9.3-point lead. </p>
<h2>Why do the new polling questions work?</h2>
<p>We think there are three reasons that asking poll participants about others in their social circles and their state ends up being more accurate than asking about the participants themselves.</p>
<p>First, asking people about others effectively increases the sample size of the poll. It gives pollsters at least some information about the voting intentions of people whose data might otherwise have been entirely left out – people who were never contacted by the pollsters, or who declined to participate. Even though poll respondents don’t have perfect information about everyone around them, it turns out they know enough to give useful answers. </p>
<p>Second, we suspect people may find it easier to report about how they think others might vote than it is <a href="https://healthpolicy.usc.edu/evidence-base/could-shy-trump-voters-discomfort-with-disclosing-candidate-choice-skew-telephone-polls-evidence-from-the-usc-election-poll/">to admit how they themselves will vote</a>. Some people may feel embarrassed to admit who their favorite candidate is. Others may fear harassment. And some might lie because they want to obstruct pollsters. Our own findings suggest that Trump voters might have been more likely than Biden voters to hide their voting intentions, for all of those reasons. </p>
<p><iframe id="CpNcN" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/CpNcN/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>Third, most people are influenced by others around them. People often get information about political issues from friends and family – and those conversations <a href="https://psycnet.apa.org/record/1995-98606-000">may influence their voting choices</a>. Poll questions that ask participants how they will vote do not capture that social influence. But by asking participants how they think others around them will vote, pollsters may get some idea of which participants might still change their minds. </p>
<h2>Other methods we are investigating</h2>
<p>Building on these findings, we are looking at ways to <a href="https://osf.io/zv726">integrate information from these and other questions</a> into algorithms that might make even better predictions of election outcomes. </p>
<p>One algorithm, called the “<a href="https://science.sciencemag.org/content/306/5695/462">Bayesian Truth Serum</a>,” gives more weight to participants whose own voting intentions, and those of their social circles, turn out to be more prevalent than people in their state expected. Another algorithm, called a “<a href="https://osf.io/gp96y/">full information forecast</a>,” combines participants’ answers across several poll questions to incorporate information from each of them. Both methods largely outperformed the traditional polling question and the predictions from an <a href="https://projects.fivethirtyeight.com/polls/president-general/national/">average of polls</a>.</p>
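<p>To make the first idea concrete, the core of the Bayesian Truth Serum’s “information score” can be sketched in a few lines for a two-candidate race: answers that turn out to be more common than respondents collectively predicted score positively and would be weighted up. This is a simplified, hypothetical sketch – not the authors’ implementation – and it omits the prediction-accuracy term of the full published method:</p>
<pre><code># Illustrative Bayesian Truth Serum information scores for a
# two-candidate race (after Prelec, Science 2004). Data are hypothetical,
# and the prediction-accuracy term of the full method is omitted.
import math

# Each respondent gives their own intention (True = candidate A)
# and a prediction of the share of others intending to vote A.
own_intention = [True, False, True, True, False]
predicted_share_a = [0.45, 0.40, 0.60, 0.55, 0.35]

n = len(own_intention)
actual_a = sum(own_intention) / n   # endorsement frequency of A
actual_b = 1.0 - actual_a

# Geometric mean of the predictions for each answer.
geo_a = math.exp(sum(math.log(p) for p in predicted_share_a) / n)
geo_b = math.exp(sum(math.log(1 - p) for p in predicted_share_a) / n)

# Information score: answers that are more common than collectively
# predicted are "surprisingly common" and score positively.
info_score_a = math.log(actual_a / geo_a)
info_score_b = math.log(actual_b / geo_b)

for vote in own_intention:
    score = info_score_a if vote else info_score_b
    print(f"votes {'A' if vote else 'B'}: information score {score:+.3f}")
</code></pre>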
<p>Our poll did not have enough participants in each state to make good state-level forecasts that could help predict votes in the Electoral College. As it was, our questions about social circles and expected state winners predicted that Trump might narrowly win the Electoral College. That was wrong, but so far it appears that these questions had on average lower error than the traditional questions in predicting the difference between Biden and Trump votes across states.</p>
<p>Even though we still don’t know the final vote counts for the 2020 election, we know enough to see that pollsters could improve their predictions by asking participants how they think others will vote.</p>
<p class="fine-print"><em><span>This work has been partially supported by grants from the National Science Foundation (MMS 2019982 and DRMS 1949432). The NSF had no role in study design, data collection and analysis, or preparation of reports. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.</span></em></p><p class="fine-print"><em><span>Wändi Bruine de Bruin additionally receives funding from the Riksbankens Jubileumsfond (The Swedish foundation for Humanities and Social Sciences). She is affiliated with the University of Southern California's Center for Economic and Social Research, which conducted the USC Dornsife 2020 Election Poll.</span></em></p>

People know a lot about their friends and neighbors – and pollsters can learn from that information, if they ask.

Mirta Galesic, Professor of Human Social Dynamics, Santa Fe Institute; External Faculty, Complexity Science Hub Vienna; Associate Researcher, Harding Center for Risk Literacy, University of Potsdam
Wändi Bruine de Bruin, Provost Professor of Public Policy, Psychology and Behavioral Science, USC Price School of Public Policy, USC Dornsife College of Letters, Arts and Sciences

Licensed as Creative Commons – attribution, no derivatives.

State of the states: Queensland and Tasmania win it for the Coalition (2019-05-20)

<figure><img src="https://images.theconversation.com/files/275301/original/file-20190519-69204-19pjifg.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Contrary to expectations, Victoria failed to deliver a government majority to Labor. </span> <span class="attribution"><span class="source">Wes Mountain/The Conversation, CC BY-ND</span></span></figcaption></figure><p><em>Our <a href="https://theconversation.com/au/topics/state-of-the-states-2019-68973">“state of the states” series</a> takes stock of the key issues, seats and policies affecting the vote in each of Australia’s states.</em></p>
<p><em>We’ll check in with our expert political analysts around the country every week of the campaign for updates on how it is playing out.</em></p>
<hr>
<h2>Queensland</h2>
<p><em>Maxine Newlands, Senior Lecturer in Political Science at James Cook University</em></p>
<p>The Coalition’s emphatic win in Queensland has been the story of the 2019 federal election. The Coalition, with some help from preference votes, took out 23 of the 30 seats – up from 21 seats in 2016.</p>
<p>Bill Shorten spent three years campaigning in north Queensland, holding more than 20 town hall events, yet in the final five weeks of the campaign the then Labor leader was seen there just a few times.</p>
<p>Marginal Coalition seats have been shored up with an 11.3% swing in Dawson and 10.7% in Capricornia.</p>
<p>It took Labor 20 years to win Herbert from the Coalition, finally succeeding in 2016, but the seat has slipped easily back into Coalition hands. With a 7.6% swing to the Coalition, it could be a long time before Labor holds Herbert again.</p>
<p>The Coalition win in Longman (<a href="https://tallyroom.aec.gov.au/HouseIncumbentTrailing-24310.htm">3.9% swing</a>) chips away at Labor’s Brisbane base, leaving Labor exposed with no seats outside the state capital.</p>
<p>Peter Dutton will begin his seventh term in Dickson, with preferences from the United Australia Party (UAP) and Pauline Hanson’s One Nation (PHON) getting him over the line. But a predicted swing that lifts his margin only from <a href="https://www.abc.net.au/news/elections/federal/2019/guide/dick">1.7% to 2.1%</a> still leaves Dickson vulnerable.</p>
<p>The 2019 election will be remembered as one that pitted the economy against the environment. No party seemed able to offer policies for both, and a majority of voters apparently prioritised economics over environmental policy in the search for job security.</p>
<p>The Carmichael mine might not be the silver bullet – bringing jobs and reducing youth unemployment – that economists hope for, but voters’ desire for big infrastructure projects that bring knock-on community benefits helped secure the Coalition’s victory.</p>
<p>The Coalition’s strategy – to <a href="http://www.orwell.ru/library/essays/politics/english/e_polit">promise little</a> but provide enough detail to offer hope – is a tried and tested approach. By contrast, Labor’s steady stream of policy announcements and its mantra of change left it open to criticism over higher taxes for retirees and uncertainty over the costing of its climate change policies.</p>
<p>The blame game between federal and Queensland Labor has begun. The Adani mine remains the elephant in the room, setting the scene for next year’s Queensland state election.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/interactive-everything-you-need-to-know-about-adani-from-cost-environmental-impact-and-jobs-to-its-possible-future-116901">Interactive: Everything you need to know about Adani – from cost, environmental impact and jobs to its possible future</a>
</strong>
</em>
</p>
<hr>
<h2>Victoria</h2>
<p><em>Nick Economou, Senior Lecturer in the School of Political and Social Inquiry at Monash University</em></p>
<p>Contrary to general expectation, Victoria has not delivered a government majority to Labor. With more than 75% of the vote counted, the two party swing to Labor is less than 0.5%. The only certain gains for the opposition are the seats of Dunkley and Corangamite, which were already notionally Labor after a redistribution. </p>
<p>To the list of Liberal heroes mentioned by Scott Morrison on election night should be added Michael Sukkar in Deakin and Jason Wood in La Trobe, both of whom defended their potentially vulnerable seats. Sukkar’s performance was particularly noteworthy given the ferocity of the campaign run against him by those angered by his conservative stand on marriage equality, and his contribution to the fate of former Liberal leader Malcolm Turnbull. He will probably be rewarded with a ministerial appointment.</p>
<p>Independent Helen Haines has won Indi and the Greens’ Adam Bandt retains Melbourne, but all other non-major party challenges in the lower house contest failed. That includes Julia Banks in Flinders (comfortably won by Liberal Greg Hunt); Julian Burnside in his bid on behalf of the Greens to oust Josh Frydenberg in Kooyong; Jason Ball’s attempt to win Higgins on behalf of the Greens (it is a Liberal retain); and the field of aspirant independents who rolled out in Mallee (won by the National party).</p>
<p>In the Senate, meanwhile, both Labor and the Liberals are guaranteed at least two seats each. The Greens are hovering close to a quota at 11.7% of the vote, with 1.5% of Animal Justice Party preferences likely to flow through to lead candidate Janet Rice. </p>
<p>Labor’s rather weak primary vote means there won’t be a significant transfer of quota to the Justice Party, so it’s unlikely that Derryn Hinch will be returned. The final seat will be a tussle between the third placed candidate on the Liberal ticket, David Van, and a right-of-centre minor party candidate that could still be the United Australia Party, depending on how preferences flow.</p>
<p>The expectation of a better result for Labor was based on opinion polling that measured two party support for the opposition at 54%. Currently the two party vote result is 52.2% – a strong level of support for Labor, but clearly insufficient to result in a significant transfer of seats. This was, indeed, a quite typically Victorian outcome.</p>
<h2>Tasmania</h2>
<p><em>Richard Eccleston, Director of the Institute for the Study of Social Change at the University of Tasmania</em></p>
<p>Tasmania, and the northern seats of Bass and Braddon, were always going to be key elements of a Coalition victory, and so it proved to be.</p>
<p>Indeed, Tasmania is a microcosm of the national election result, with inner urban electorates becoming more progressive, while regional seats have swung to the Coalition. Across the board support for independents, minor parties and the informal vote was up, reflecting broad-based voter disillusionment.</p>
<p>It seems clear Labor’s Justine Keay will lose the North West seat of Braddon, having suffered a 5% two-party preferred swing to Liberal challenger and new member Gavin Pearce. There was, however, a 4% swing against the Liberals on the primary vote with Liberal-aligned independent Craig Brakey securing 11%.</p>
<p>Bass is again on a knife edge.</p>
<p>Sitting Labor member Ross Hart is trailing his Liberal challenger Bridget Archer by fewer than 500 votes, with about 12,000 pre-poll votes to count. The final result in Bass may determine whether the Coalition can govern in its own right or whether it has to rely on the likes of Katter’s Australian Party.</p>
<p>It’s clear that Bass and Braddon continue to be volatile and marginal and will be a focus of future federal campaigns.</p>
<p>Any hopes the Liberals had of securing the rural seat of Lyons were dashed with the resignation of their candidate Jessica Whelan midway through the campaign. Labor’s Brian Mitchell was returned with a small swing, as was sitting member Julie Collins in Franklin.</p>
<p>In the Hobart-based seat of Clark, independent Andrew Wilkie secured over 50% of the primary vote, while the combined Liberal and Labor vote was just 37%. The fact that almost two thirds of voters in the state capital didn’t vote for either of the major parties highlights the fundamental challenges facing our political parties and system of government more generally.</p>
<p>Election night coverage inevitably focuses on the lower house, but the composition of the Senate will have a big impact on how Scott Morrison governs. In the Tasmanian Senate race, it appears that the Liberals and Labor will each secure two seats. The Greens’ Nick McKim will also be returned, while Jacqui Lambie is best placed to win the final seat and resume her colourful political career.</p>
<p><iframe id="0h5iT" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/0h5iT/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/key-challenges-for-the-re-elected-coalition-government-our-experts-respond-117325">Key challenges for the re-elected Coalition government: our experts respond</a>
</strong>
</em>
</p>
<hr>
<h2>New South Wales</h2>
<p><em>Stewart Jackson, Lecturer in the Department of Government and International Relations at the University of Sydney</em></p>
<p>Bill Shorten and the Labor Party went into the 2019 election with some expectation of reversing the decline in their vote in NSW. They hoped to capture at least three seats (Gilmore, Reid and Banks), with the chance of another in Page, while retaining marginal seats such as Macquarie and Lindsay. </p>
<p>By the end of election night it was clear that things had not gone to plan. While Gilmore went to Labor’s Fiona Phillips, the party failed to capture Reid and Banks. And Lindsay moved in the other direction, falling to the Liberal’s Melissa McIntosh.</p>
<p>It’s also clear that the supposed surge in support for minor parties and independents didn’t materialise. While Zali Steggall won the seat of Warringah – ending Tony Abbott’s 25-year hold on the seat – hers was an isolated victory. Kerryn Phelps lost narrowly to Dave Sharma, and Kevin Mack came up well short in Farrer. Adam Blakester, in New England, was unable to make any inroads on Barnaby Joyce’s hold on the seat, with Joyce even enjoying a small swing in his favour on primary votes.</p>
<p>Former independent MP Rob Oakeshott slipped backwards in Cowper, a seat that he came close to winning in 2016, and which he partially covered while he was MP for Lyne between 2008 and 2013. The only real challenge from a minor party candidate came from One Nation’s Stuart Bonds against Labor’s Joel Fitzgibbon in the seat of Hunter. Bonds’ 22%, directed to the Nationals’ Josh Angus, nearly deprived the ALP of one of its key frontbenchers.</p>
<p>The key lesson then from NSW is that not much has changed. The Berejiklian government’s narrow return in March might have been a portent for what NSW voters were thinking – and certainly there was no joy for the ALP to be found there.</p>
<h2>South Australia</h2>
<p><em>Rob Manwaring, Senior Lecturer in Politics and Public Policy at Flinders University</em></p>
<p>There was a good deal of movement among South Australian voters, but it looks like no seats will change hands at all in the state. Of the ten seats, the Liberals have three, the ALP five, and the Centre Alliance have held onto the seat of Mayo. On current counting, the most marginal seat – Boothby – looks like a Liberal retain, held by Nicolle Flint, but only on the slimmest of margins. </p>
<p>In South Australia, the Coalition scored a strong-ish primary vote of 40.6% (SA) compared to 41.4% (national). In contrast, the ALP’s primary vote in the state was 35.9% (SA) compared to 33.9% (national). </p>
<p>Both parties have seen a swing towards them in SA, although the swing is slightly greater to the Liberals. On the counting so far, the 2016 Centre Alliance votes seem to have disproportionately favoured the Liberals, rather than the ALP. Given that the Centre Alliance campaigned against Labor’s “retiree tax”, and Sharkie emphasised her “non-Labor” seat, it appears their “centre” is moderately more right than left.</p>
<p>At this stage, at least two seats in the Senate remain unclear, but with “likely” predictions. The Liberals and Labor have picked up the first four spots between them (two Liberal, two Labor). Greens Senator Sarah Hanson-Young looks likely to have taken the fifth spot, and, remarkably, the Liberals might take the final sixth spot out of the hands of One Nation. Here, the Centre Alliance appears to have lost out. The main issue is that by the next federal election, most of the seats will be even “safer”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/labors-election-loss-was-not-a-surprise-if-you-take-historical-trends-into-account-117399">Labor's election loss was not a surprise if you take historical trends into account</a>
</strong>
</em>
</p>
<hr>
<h2>Western Australia</h2>
<p><em>Ian Cook, Senior Lecturer of Australian Politics at Murdoch University</em></p>
<p>Looking at the Western Australian <a href="https://www.abc.net.au/news/elections/federal/2019/results?filter=all&sort=az&state=wa">results</a> for the House of Representatives on the ABC website reminds you of the “red” (Republican) and “blue” (Democrat) states of US politics – although the meaning of the colours is more or less the opposite here.</p>
<p>And when you look at it, Western Australia is a blue, that is Liberal, state.</p>
<p>If, as most people expect, Anne Aly wins Cowan, Labor will continue to hold only five of the 16 lower house seats in WA. Most Liberal candidates in WA increased their margins; even those in what looked like vulnerable Liberal-held seats either increased their margins or suffered only minor swings against them. (The Nationals still don’t hold a lower house seat in WA.)</p>
<p>Of the four Liberal-held seats everyone was watching, the Liberals now hold Hasluck by a margin of 4.6% (up from 2.1%), and Pearce by 6.8% (up from 3.6%). The Liberals’ margin fell in Swan from 3.6% to 1.7%, and in Stirling from 6.1% to 5.1%. But 5.1% is still a lot. </p>
<p>The upshot is that two of the four seats that were within Labor’s reach in 2019 are now close to out of reach (Hasluck) or, subject to some disaster, completely out of reach (Pearce). The seat that was furthest out of reach (Stirling) is marginally closer to within reach, but is still a stretch for Labor. This leaves only one of the four seats that were within Labor’s reach still within its reach at the next election (Swan). (No Liberal-held seat has slipped into the marginal category.)</p>
<p>That’s going to make it hard to convince anyone that WA is crucial to an election result when there is only one seat in play in the state. </p>
<p>During elections to come, Western Australians will talk wistfully of that time when those folk over east cared – when you couldn’t turn around without finding one of the leaders or some other major party figure at your elbow, debating and throwing money around with abandon. Ah, those were the days…</p>
<p class="fine-print"><em><span>Maxine Newlands receives funding from the Department of Industry, Innovation and Science.</span></em></p><p class="fine-print"><em><span>Stewart Jackson has received funding from the Department of Finance APPD program. He is a former National Convenor of the Australian Greens, between 2003 and 2005.</span></em></p><p class="fine-print"><em><span>Ian Cook, Nick Economou, Richard Eccleston, and Rob Manwaring do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>

The Tasmanian seats of Bass and Braddon were always going to be key elements of a Coalition victory – and so it proved to be.

Nick Economou, Senior Lecturer, School of Political and Social Inquiry, Monash University
Ian Cook, Senior Lecturer of Australian Politics, Murdoch University
Maxine Newlands, Senior Lecturer in Political Science: Research Fellow at the Cairns Institute; Research Associate for Centre for Policy Futures, University of Queensland, James Cook University
Richard Eccleston, Professor of Political Science; Director, Institute for the Study of Social Change, University of Tasmania
Rob Manwaring, Senior Lecturer, Politics and Public Policy, Flinders University
Stewart Jackson, Lecturer, Department of Government and International Relations, University of Sydney

Licensed as Creative Commons – attribution, no derivatives.

How do we feel about a ‘big Australia’? That depends on the poll (2018-03-14)

<figure><img src="https://images.theconversation.com/files/210202/original/file-20180314-131601-s9t7gm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Bob Carr has a decades-long record of opposition to a 'big Australia'.</span> <span class="attribution"><span class="source">AAP/Dan Himbrechts</span></span></figcaption></figure><p>It’s not new to find politicians claiming public opinion is on their side on contentious issues. So, we shouldn’t be surprised when former New South Wales premier and foreign minister Bob Carr – who has a decades-long record of opposition to a “big Australia” – says there has been a <a href="https://www.theguardian.com/australia-news/2018/mar/13/qa-australias-immigration-rate-should-be-cut-in-half-bob-carr-says">significant shift in public opinion</a> on the topic. But we should politely ask for his sources.</p>
<p>On the ABC’s Q&A on Monday night, Carr said:</p>
<blockquote>
<p>The first poll I’ve seen that indicates a big shift in public attitudes … came out in recent months. It shows 74% of Australians think there is enough of us already … I find that interesting. It’s the first breakthrough … in the last 12 months, the message has sunk in. </p>
</blockquote>
<p>It seems no-one – not even Q&A host Tony Jones – bothered to question the “big shift” or the 74% figure in the subsequent discussion or media coverage. Where did it come from?</p>
<h2>Carr’s likely source</h2>
<p>It’s possible this finding came from <a href="http://tapri.org.au/wp-content/uploads/2016/04/TAPRI-survey-19-Oct-2017-final-3.pdf">a survey</a> conducted in August 2017 for the <a href="http://tapri.org.au/">Australian Population Research Institute</a>.</p>
<p>The survey employed a commercial panel, which yields a large number of respondents but is not a random sample of the population. Of those who were Australian voters, 54% indicated the number of immigrants “should be reduced”. The survey then went on to ask several additional questions – some of which were of the leading variety.</p>
<p>The survey informed respondents that:</p>
<blockquote>
<p>From December 2005 to December 2016 Australia’s population grew from 20.5 million to 24.4 million; 62% of this growth was from net overseas migration.</p>
</blockquote>
<p>It then asked in blunt terms:</p>
<blockquote>
<p>Do you think Australia needs more people? </p>
</blockquote>
<p>With this wording, the proportion with a negative view of immigration (that is, Australia does not need more people) jumped to 74%. This is a clear indication of the impact of question wording and context.</p>
<h2>A finding not supported elsewhere</h2>
<p>Almost at the same time as this survey, in June-July 2017, the Scanlon Foundation conducted two surveys (which I led the work on). </p>
<p>In its <a href="http://scanlonfoundation.org.au/wp-content/uploads/2014/05/ScanlonFoundation_MappingSocialCohesion_2017.pdf">annual survey</a>, which is interviewer-administered and is a random sample of the population, the Scanlon Foundation employed a question that has been used in Australian surveying for more than 50 years and hence provides scope to track the trend of opinion over time. It asked: </p>
<blockquote>
<p>What do you think of the number of immigrants accepted into Australia? </p>
</blockquote>
<p>It found just 37% considered the intake to be “too high”, 40% “about right”, and 16% “too low”. The proportion concerned by the level of immigration is within one percentage point of the average of ten years of Scanlon Foundation surveying. With attention narrowed to respondents who are Australian citizens (and have voting rights), there is little difference in the result. </p>
<p>An issue in surveying is the impact of interviewer administration. Some argue that self-administered surveys, completed online, <a href="https://www.aapor.org/Education-Resources/Reports/Report-on-Online-Panels">are more reliable</a>.</p>
<p>To test the impact of the mode of surveying, a set of questions was administered in a second Scanlon Foundation survey <a href="https://www.monash.edu/__data/assets/pdf_file/0009/1189188/mapping-social-cohesion-national-report-2017.pdf">using the Life in Australia panel</a>. The large majority of these respondents complete the survey online, without interviewer assistance, and the panel was formed using a probability process to reflect the Australian population. </p>
<p>The finding was almost identical with the result obtained in the first Scanlon Foundation survey: a minority – 40% – considered the intake to be “too high”.</p>
<p>There have been several other probability-based surveys on attitudes to immigration in 2016 and 2017, including the <a href="http://ada.edu.au/ADAData/AES/Trends%20in%20Australian%20Political%20Opinion%201987-2016.pdf">Australian Election Study</a> conducted by researchers at the Australian National University, a <a href="http://www.roymorgan.com/findings/7017-australian-views-on-immigration-population-october-2016-201610241910">Morgan survey</a>, and the annual <a href="https://www.lowyinstitute.org/publications/2017-lowy-institute-poll">Lowy Institute Poll</a>. </p>
<p>None of these surveys obtained a majority agreeing that immigration is “too high”, much less concern at the level of 74%. The 2017 Lowy Institute Poll found 40% favour reduction.</p>
<p>Most surveys are consistent in finding there is a substantial minority of the view that immigration is too high, but not a large majority, as Carr claimed. </p>
<h2>The importance of context</h2>
<p>In evaluating survey findings, attention needs to be directed to sampling procedure (whether random or not), the question asked, the context of the question, and the record of all relevant surveys – not just one survey.</p>
<p>There is one additional issue of note: public controversy and claims made about the impact of immigration can shift opinion in a short time. </p>
<p>In 2010, in the context of political campaigning focused on immigration and “big Australia”, the Scanlon Foundation survey <a href="http://scanlonfoundation.org.au/wp-content/uploads/2014/07/mapping-social-cohesion-summary-report-2010.pdf">recorded a shift</a> of ten percentage points in the level of concern about immigration.</p>
<hr>
<p><em>This piece has been updated since publication to clarify Andrew Markus led the Scanlon Foundation’s survey work.</em></p>
<p class="fine-print"><em><span>Andrew Markus has received grants to research Australian public opinion from the Scanlon Foundation, the Australian Research Council and the Australian government.</span></em></p>

Most surveys are consistent in finding there is a substantial minority of the view that immigration is too high, but not a large majority.

Andrew Markus, Pratt Foundation Research Chair of Jewish Civilisation, Monash University

Licensed as Creative Commons – attribution, no derivatives.

GE2017: Can you trust that surprise exit poll? (2017-06-08)

<p>True to form, the 2017 UK election exit poll has brought shock to election night as the prospect of the Tories failing to secure a majority in parliament appears to be a real possibility. The poll puts the Conservatives on 314 seats – 12 short of a majority – and Labour on 266 seats. </p>
<iframe src="https://datawrapper.dwcdn.net/r3Jm4/1/" scrolling="no" frameborder="0" allowtransparency="true" allowfullscreen="allowfullscreen" webkitallowfullscreen="webkitallowfullscreen" mozallowfullscreen="mozallowfullscreen" oallowfullscreen="oallowfullscreen" msallowfullscreen="msallowfullscreen" width="100%" height="402"></iframe>
<p>After catching their breath, the first question most people are asking is whether we can trust these results. </p>
<p>The short answer is yes. While the official seat count may differ in either direction due to sampling error, exit polls have provided reliable estimates in past elections. In fact, they seem to have fared much better than opinion polls in recent years, for at least two important reasons.</p>
<p>Firstly, exit polling is done at selected polling stations within constituencies with sufficiently large samples to reduce uncertainty around the estimates. Exit polls include responses from tens of thousands of voters, while most national opinion polls have sample sizes of around 1,200.</p>
<p>Secondly, the exit polls only include responses from actual voters. Opinion polls must rely on vote intentions and the likelihood that a particular respondent will actually cast their ballot on election day.</p>
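<p>The sample-size point is easy to quantify. Treating both kinds of poll – unrealistically, as the next paragraph explains – as simple random samples, the textbook 95% margins of error work out roughly as follows:</p>
<pre><code># Back-of-envelope 95% margins of error for a 50/50 question, assuming
# (unrealistically) simple random sampling in both cases.
import math

for label, n in [("exit poll", 30_000), ("opinion poll", 1_200)]:
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"{label} (n={n:,}): +/- {100 * moe:.1f} points")
# exit poll (n=30,000): +/- 0.6 points
# opinion poll (n=1,200): +/- 2.8 points
</code></pre>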
<p>However, like most opinion polls, exit polls are not based on probability samples and are thus not entirely representative of the voting public. Instead, a sample of the more than <a href="http://www.bbc.co.uk/news/election-2015-32428768">39,000 polling stations</a> is selected to represent the total. This process of sampling introduces error, which is why the exit poll estimates may be different than the certified seat counts.</p>
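<p>A toy simulation, with entirely made-up figures, shows how picking a subset of stations introduces this extra error (it ignores within-station interviewing, which adds more):</p>
<pre><code># Toy simulation of station-level sampling error in an exit poll.
# All numbers are invented for illustration.
import random

random.seed(1)
N_STATIONS = 39_000   # roughly the number of UK polling stations
SAMPLED = 144         # stations actually visited (illustrative figure)

# Hypothetical true Conservative share at each station.
true_shares = [random.gauss(0.43, 0.08) for _ in range(N_STATIONS)]

estimates = []
for _ in range(1_000):  # re-run the "exit poll" many times
    stations = random.sample(true_shares, SAMPLED)
    estimates.append(sum(stations) / SAMPLED)

# Typically spans roughly 0.04 - about four points between the
# luckiest and unluckiest simulated polls.
print(f"Estimates range over {max(estimates) - min(estimates):.3f}")
</code></pre>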
<h2>How well did the pollsters do?</h2>
<p>The figure below provides a snapshot of election forecasts on June 7 2017.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=459&fit=crop&dpr=1 600w, https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=459&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=459&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=577&fit=crop&dpr=1 754w, https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=577&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/173024/original/file-20170608-22791-7d7f0a.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=577&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How the polls stacked up.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The mean projection based on the polls ahead of voting was 348 seats for the Conservative party, with a difference of 69 seats between the most optimistic prediction and the least optimistic one. If the exit poll is correct, then YouGov’s model predicting 302 Tory seats appears to have performed best among the major forecasters. That would suggest YouGov’s <a href="https://theconversation.com/its-sophisticated-but-can-you-believe-yougovs-startling-election-prediction-78701">sophisticated multilevel regression and post-stratification method</a> (“Mister P”) was effective. More importantly, it appears that YouGov used a better turnout model than other pollsters to weight the data (to correct for who would turn out to vote in this election).</p>
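<p>In heavily simplified form, “Mister P” has two stages: estimate support within demographic groups using a model that pools information across groups, then re-weight those estimates by each group’s known share of the electorate. The sketch below is only a caricature – it imitates the multilevel model’s partial pooling with simple shrinkage toward the overall mean, and all survey and population figures are invented:</p>
<pre><code># Highly simplified sketch of multilevel regression and
# post-stratification ("Mister P"). A real implementation fits a
# multilevel model; here partial pooling is imitated by shrinking
# small-group means toward the overall mean. Data are hypothetical.

# (respondent group, voted Conservative?) from a survey.
survey = [("young", 0), ("young", 1), ("old", 1), ("old", 1), ("old", 0)]
# Known population share of each group (e.g., from the census).
population = {"young": 0.35, "old": 0.65}

overall = sum(v for _, v in survey) / len(survey)
PRIOR_N = 2.0  # pseudo-observations pulling each group toward the mean

support = {}
for group in population:
    votes = [v for g, v in survey if g == group]
    # Shrunken group estimate: small groups lean on the overall mean.
    support[group] = (sum(votes) + PRIOR_N * overall) / (len(votes) + PRIOR_N)

# Post-stratify: weight group estimates by population shares.
mrp_estimate = sum(population[g] * support[g] for g in population)
print(f"Post-stratified Conservative share: {mrp_estimate:.3f}")
</code></pre>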
<p>In recent elections, pollsters have systematically underestimated vote share for the Conservative party <a href="https://fivethirtyeight.com/features/are-the-u-k-polls-skewed/">by roughly 5%</a>. This time around, they seem closer to the mark, and some may have even overestimated the party’s final vote share.</p>
<p>One final issue is worth noting. Even experts find it extremely difficult to predict election outcomes. A few weeks ago (May 16 to 26), 335 election experts provided their projections in the first <a href="https://www.psa.ac.uk/psa/news/expert-predictions-2017-general-election-survey-stephen-fisher-chris-hanretty-and-will">Political Studies Association expert survey</a>. Although these predictions were made several weeks before polling day and couldn’t account for late shifts in the campaign, the experts collectively predicted a Conservative seat count of 371. Regardless of the final count, one thing seems clear: that prediction will probably be some way off.</p>
<p class="fine-print"><em><span>Todd K. Hartman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>

The exit poll published at 10pm suggests the Conservatives could fall short of a parliamentary majority. Is it to be believed?

Todd K. Hartman, Lecturer in Quantitative Social Science, Sheffield Methods Institute, University of Sheffield

Licensed as Creative Commons – attribution, no derivatives.

Are UK pollsters heading for another embarrassing election? (2017-05-26)

<p>Following the political surprises of 2015 and 2016, there has been <a href="http://www.newstatesman.com/politics/staggers/2016/11/polling-dead">much reflection</a> and debate on the accuracy of the polls in the run-up to the impending snap-election of 2017. It is fair to say that, although perhaps somewhat <a href="http://www.aapor.org/Education-Resources/Reports/An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx">unfair on the pollsters</a>, the EU referendum and US presidential election have exacerbated – rather than healed – the widespread loss of public faith in the polls induced by the 2015 general election debacle. </p>
<p>So are the pollsters heading for further ignominy on June 8? Given the <a href="https://yougov.co.uk/news/2017/05/25/are-tories-losing-ground-or-regaining-it/">substantial-if-narrowing</a> lead the Conservatives currently hold in the polls, this seems unlikely. </p>
<p>Polls are judged first and foremost on whether they correctly indicate which party will form the next government and, as the chart below shows, were the Conservatives not to win an overall majority on June 8, we would be looking at a polling miss of unprecedented magnitude. The largest polling error on record was in <a href="http://www.telegraph.co.uk/news/general-election-2015/11532150/Campaign-Calculus-How-wrong-are-the-polls.html">1992</a>, when the Conservative lead over Labour was underestimated by an average of nine percentage points – about the same as the Conservatives’ current polling advantage.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=440&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=440&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=440&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=553&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=553&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171165/original/file-20170526-6421-1npbiah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=553&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p>But correctly predicting which party will obtain an overall majority in a relatively uncompetitive election isn’t in itself a very impressive feat. It’s still possible that, when judged on the basis of statistical error rather than picking the winner, the pollsters will fare little better in 2017 than they did in 2015 – if not even worse.</p>
<p>If that happens, it won’t be down to complacency. After the 2015 election, the British Polling Council (BPC) and the Market Research Society set up an official inquiry to work out why the polls had failed so badly. The resulting report <a href="http://eprints.ncrm.ac.uk/3789/1/Report_final_revised.pdf">concluded</a> that the primary reason for the polling errors was the use of unrepresentative samples. </p>
<p>The pollsters’ recruitment methods meant their final samples included too many Labour voters and too few Conservative ones – and the weighting and adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree. While the inquiry could not rule out a modest late swing towards the Conservatives, initial claims that the polling errors were due to “<a href="https://www.theguardian.com/politics/2015/may/08/election-2015-how-shy-tories-confounded-polls-cameron-victory">shy Tories</a>” (respondents who deliberately misreported their intentions) or “<a href="http://www.telegraph.co.uk/news/politics/labour/11599365/Lazy-Labour-lost-Ed-Miliband-the-election-says-pollster.html">lazy Labour</a>” (Labour voters who said they’d vote but ultimately didn’t) did not stand up to scrutiny.</p>
<h2>Fixing it</h2>
<p>The inquiry made a number of recommendations for changes in how polls are carried out and how their findings are presented to both media clients and the public. It also proposed amendments to the BPC rules on the disclosure and reporting of polls, most notably that pollsters should provide a clear statement on weighting procedures and should detail any methodological changes made since the previous published poll. </p>
<p>The BPC’s <a href="http://www.britishpollingcouncil.org/bpc-inquiry-report/">official response</a> to these recommendations indicated that it would make procedural changes to its rules either immediately or during the course of 2017, while it would be up to individual polling organisations to implement recommendations relating to methodological practice, subject to a review in 2019. Theresa May’s surprise decision to call an early election means that, understandably, most of the recommendations of the inquiry haven’t yet been implemented.</p>
<p>This is not to say that the pollsters are approaching June 8 with precisely the same methodologies they used in 2015. On the contrary, the polling industry appears to have made a number of changes to its sampling and weighting procedures. Some changes are intended to improve sample composition: recruiting more politically disengaged people into online surveys, extending fieldwork periods, increasing sample sizes and so on. </p>
<p>Others involve new quota-setting and weighting procedures: adjusting samples by self-reported political interest, past vote and education, using modelling to estimate the probability that respondents will actually vote, and <a href="http://ukpollingreport.co.uk/faq-dont-knows">reallocating “don’t knows”</a> differently across parties. </p>
<p>But frustratingly for the pollsters, of course, we will not know if these changes are working until June 9.</p>
<h2>Coming together</h2>
<p>The 2015 polling inquiry also found that the pollsters had “herded” around an inaccurate estimate of the Conservative-Labour margin, and that this consensus contributed to the collective sense of shock at the election result. The situation in 2017, however, is rather different. </p>
<p>There are suggestions this time that the polls are <a href="http://www.newstatesman.com/politics/june2017/2017/05/were-easy-target-how-tory-manifesto-pledge-will-tear-families-apart">overstating Labour’s performance</a>, a pattern that has been a consistent feature of UK polling since the general election of 1979. This can be seen in the chart below, which plots the difference between poll estimates and Labour’s eventual vote share by days from the election. </p>
<p>The black line is the average of all polls, while grey lines are poll estimates across individual elections. What the chart shows is that, while previous election polls do converge toward the result over the final three weeks of the campaign, they still tend to overestimate the Labour vote – even on the very eve of the election.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=440&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=440&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=440&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=553&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=553&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171166/original/file-20170526-6380-594nui.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=553&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p>If pollsters continue to adjust their sampling and weighting procedures during the campaign, a belief that Labour will end up under-performing their polling will create implicit incentives to make methodological choices that reduce the Labour share in vote intention estimates. If the received wisdom is correct, this could reduce the polls’ average error – but if recent events have taught us anything it’s that, in politics, received wisdom is often wrong.</p>
<p>In the meantime it’s worth remembering another conclusion of the 2015 polling inquiry: that observers tend to endow opinion polls with greater levels of precision than they are capable of delivering. Polling, after all, is difficult. It involves hitting a moving target by persuading reluctant and reflexive citizens to provide truthful responses to socially loaded questions for little or no return. </p>
<p>Small wonder, then, that the average error on the Conservative-Labour margin between 1945, when political polling in the UK began, and 2015 is in the region of 4-5%. As yet, there’s no particular reason to assume 2017 will represent a radical departure from the historical record.</p>
<p class="fine-print"><em><span>Will Jennings has received funding from the Economic and Social Research Council.</span></em></p><p class="fine-print"><em><span>Patrick Sturgis receives funding from the Economic and Social Research Council and the Wellcome Trust.</span></em></p>

Polling is difficult – and everyone except pollsters overestimates how accurate polls are.

Will Jennings, Professor of Political Science and Public Policy, University of Southampton
Patrick Sturgis, Professor of Research Methodology, Director of National Centre for Research Methods, University of Southampton

Licensed as Creative Commons – attribution, no derivatives.

What will pollsters do after 2016? (2016-11-18)

<figure><img src="https://images.theconversation.com/files/146132/original/image-20161115-31138-1uzu85t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What will polling look like in the future?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-299573114/">Person taking survey via shutterstock.com</a></span></figcaption></figure><p>Clinton defeated Trump much like <a href="http://www.chicagotribune.com/news/nationworld/politics/chi-chicagodays-deweydefeats-story-story.html">Dewey defeated Truman</a>. Both election results were dramatic surprises because pre-election polls created expectations that didn’t match the final outcomes.</p>
<p>Many polls were very accurate. For example, the polling averages in <a href="http://www.realclearpolitics.com/epolls/2016/president/va/virginia_trump_vs_clinton_vs_johnson_vs_stein-5966.html">Virginia</a>, <a href="http://www.realclearpolitics.com/epolls/2016/president/co/colorado_trump_vs_clinton_vs_johnson_vs_stein-5974.html">Colorado</a> and <a href="http://www.realclearpolitics.com/epolls/2016/president/az/arizona_trump_vs_clinton_vs_johnson_vs_stein-6087.html">Arizona</a> were within 0.1 percent of the election outcome.</p>
<p>That said, <a href="http://www.nytimes.com/interactive/2016/11/13/upshot/putting-the-polling-miss-of-2016-in-perspective.html">many polls</a> missed the mark in 2016. Polls of Wisconsin in particular performed <a href="http://www.realclearpolitics.com/epolls/2016/president/wi/wisconsin_trump_vs_clinton_vs_johnson_vs_stein-5976.html">very poorly</a>, suggesting Clinton was ahead by 6.5 percent before her ultimate loss by 1 percent. </p>
<p>If polls are going to remain a <a href="http://www.aapor.org/Education-Resources/Reports/Polling-and-Democracy.aspx">major part of the democratic process</a> both in the United States and globally, pollsters have a professional duty to be as accurate as possible.</p>
<p>How will the polling industry improve accuracy after the 2016 election? The first step is to identify sources of error in polling.</p>
<h2>Potential sources of polling error</h2>
<p>When poll results are reported, they come with a <a href="http://www.pewresearch.org/fact-tank/2016/09/08/understanding-the-margin-of-error-in-election-polls/">margin of error</a> – saying the poll is accurate within plus-or-minus a few percentage points. Those margins are the best-case scenarios. They account for statistically expected error, but not entirely for <a href="http://ropercenter.cornell.edu/support/polling-fundamentals-total-survey-error/">several other sources of error inherent</a> in every poll.</p>
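<p>That “statistically expected error” is just the textbook sampling formula: for a proportion p estimated from n respondents at 95% confidence, roughly 1.96 × √(p(1−p)/n). A quick sketch:</p>
<pre><code># The reported margin of error is the best case: pure sampling error
# for a simple random sample, here at 95% confidence.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll with a candidate at 50%:
print(f"+/- {100 * margin_of_error(0.5, 1000):.1f} points")  # ~3.1 points
</code></pre>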
<p>Chief among these other sources are the questions we ask, how we collect the data, how we figure out whom to ask and how we interpret the results. Each of these deserves a look.</p>
<p>The first two categories – question wording and data collection – are likely not the source of the systemic problems we saw in 2016. For one thing, pollsters have <a href="http://dx.doi.org/10.1093/poq/nfh008">good techniques to test questions in advance</a> and develop good standards. Interviewers may occasionally misread questions, but this is both rare and not likely systematic enough to cause problems outside of a few surveys each election cycle.</p>
<h2>Sampling errors</h2>
<p>The nastiest of all errors for pollsters happen in sampling – determining which people should be asked the poll’s questions. These errors are both the hardest to detect and the most likely to cause major problems across many polls. </p>
<p>At the most basic level, sampling errors happen when the people being polled are not in fact representative of the wider population. For example, an election poll of Alabama should not include people who are citizens of Mississippi.</p>
<p>It is essentially impossible to have a poll with perfect sample selection. Even with our best efforts at random sampling, not all individuals have an equal probability of selection because some are more likely to respond to pollsters than others.</p>
<p>Sampling errors could have crept into 2016 polls in several ways. First, <a href="http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/">far fewer people</a> are willing to respond to surveys today than in previous years. That’s in large part because <a href="http://www.pewresearch.org/2010/04/14/is-caller-id-is-increasing-non-response-rates-in-your-surveys/">people are more likely to screen their phone calls</a> than in the past.</p>
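<p>Why do falling response rates matter? Because if one candidate’s supporters are even slightly more willing to pick up the phone, the raw sample skews before any adjustment is made. A toy simulation, with invented response rates:</p>
<pre><code># Toy demonstration of nonresponse bias; the response rates are invented.
import random

random.seed(7)
TRUE_SUPPORT = 0.50                        # half the population backs A
RESPONSE_RATE = {True: 0.10, False: 0.05}  # A-backers answer twice as often

sample = []
for _ in range(200_000):                   # dial many voters
    backs_a = random.random() < TRUE_SUPPORT
    if random.random() < RESPONSE_RATE[backs_a]:
        sample.append(backs_a)

# Expected result: about 0.667 support for A, versus a true 0.50.
print(f"Raw sample support for A: {sum(sample) / len(sample):.3f}")
</code></pre>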
<p>Young people and those who aren’t interested in politics are particularly hard to reach, and those who did respond to pollsters may not have been representative of the wider group. Pollsters have ways to adjust their findings to account for these variations; one common technique is weighting. But these adjustments can still fall short: a single young black Trump supporter had a <a href="http://www.latimes.com/politics/la-na-pol-daybreak-poll-questions-20161013-snap-story.html">measurable effect</a> on one poll’s results because of this weighting.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cnXfmOwUwQI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Weighting survey results, explained.</span></figcaption>
</figure>
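<p>As a rough illustration of the weighting idea in the video above, here is a minimal sketch of post-stratification; every group, count and target share is invented for the example. Note how the under-represented group’s responses are scaled up – which is also how a single heavily weighted respondent can visibly move a poll:</p>
<pre><code># group: (respondents, candidate_A_supporters, believed_share_of_electorate)
sample = {
    "18-29": (100, 60, 0.20),   # under-represented: weighted up
    "30-64": (600, 270, 0.55),
    "65+":   (300, 120, 0.25),  # over-represented: weighted down
}

n_total = sum(n for n, _, _ in sample.values())

# Unweighted estimate: the raw share of supporters in the sample.
unweighted = sum(a for _, a, _ in sample.values()) / n_total

# Weighted estimate: each group's support rate, weighted by its
# believed share of the electorate rather than its sample share.
weighted = sum((a / n) * share for n, a, share in sample.values())

print(f"Unweighted: {unweighted:.1%}")  # 45.0%
print(f"Weighted:   {weighted:.1%}")    # ~46.8%
</code></pre>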
<h2>Who is a ‘likely voter,’ anyway?</h2>
<p>General population surveys, such as those of all adult residents of a geographic area, are not particularly prone to sampling errors. This is because <a href="http://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml">U.S. Census Bureau data</a> tell us the characteristics of any given community. Therefore, we can choose samples and weight responses so that they reflect the specific population.</p>
<p>Election “horse-race” polls are more difficult, primarily because pollsters must first determine which people are actually going to vote. But voter turnout in the United States is <a href="http://dx.doi.org/10.1016/j.electstud.2005.09.002">voluntary and volatile</a>. Pollsters do not know in advance how many members of each politically relevant demographic group will actually turn out to vote.</p>
<p>One way pollsters can seek to identify likely voters is to include several questions in the poll that help them decide whose responses to include in the final analysis. Though the big surprises on election night came from polls biased against Trump, polls were also biased against Clinton in states such as <a href="http://www.realclearpolitics.com/epolls/2016/president/nv/nevada_trump_vs_clinton_vs_johnson-6004.html">Nevada</a>. </p>
<p>When looking back at 2016 polling problems, some pollsters may find that they were too restrictive in <a href="http://www.pewresearch.org/files/2016/01/PM_2016-01-07_likely-voters_FINAL.pdf">identifying likely voters</a>, which <a href="http://fivethirtyeight.blogs.nytimes.com/2012/07/19/does-romney-have-an-edge-from-likely-voter-polls/">often</a> favors Republicans. Others may have been too lax, which generally favors Democrats. The challenge we will face, though, is that a likely voter screening technique that worked well in 2016 might not work well in 2020 because the <a href="https://www.census.gov/content/dam/Census/library/publications/2015/demo/p25-1143.pdf">electorate will change</a>.</p>
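<p>A minimal sketch of a cutoff-style likely voter screen may help; the items and scoring here are invented, though real screens – such as the Perry-Gallup approach discussed in the Pew report linked above – rely on similar self-reported questions:</p>
<pre><code># Invented screening items, one point each; real screens ask similar
# self-reported questions about past turnout and intention to vote.
def likely_voter_score(r: dict) -> int:
    return (r["voted_last_election"] + r["knows_polling_place"]
            + r["follows_politics"] + r["intends_to_vote"])

respondents = [
    {"voted_last_election": 1, "knows_polling_place": 1,
     "follows_politics": 1, "intends_to_vote": 1, "vote": "R"},
    {"voted_last_election": 0, "knows_polling_place": 0,
     "follows_politics": 1, "intends_to_vote": 1, "vote": "D"},
]

# Too strict a cutoff drops sporadic voters; too lax a cutoff keeps
# likely nonvoters. Either choice can tilt the final numbers.
CUTOFF = 3
likely = [r for r in respondents if likely_voter_score(r) >= CUTOFF]
print([r["vote"] for r in likely])  # ['R']
</code></pre>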
<h2>Interpretation challenges</h2>
<p>A major problem polls faced in 2016 was not in their data specifically, but in <a href="https://theconversation.com/reports-of-the-death-of-polling-have-been-greatly-exaggerated-68504">how those data were interpreted</a>, either by pollsters themselves or by the media. At the end of the day, polls are but rough estimates of public opinion. They are the best estimates we have available, but they are still estimates – ballpark figures, not certainties.</p>
<p>Many people expect polls to be highly accurate, and they often are – but how the public thinks of accuracy is different from how pollsters do. Imagine an election poll that showed a one-point lead for a Democrat and had a margin of error of four percentage points. If the Republican actually wins the election by one point, many people would think the poll was wrong, off by two points. But that’s not the case: The pollster actually said the race was too close to call, given typical margins of error and somewhat <a href="http://abcnews.go.com/Politics/undecided-voters-unpredictable-year-experts/story?id=41946563">unpredictable undecided voters</a>.</p>
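<p>A minimal sketch of why that race is too close to call – assuming normally distributed sampling error and, for simplicity, treating the quoted margin as applying to the lead itself:</p>
<pre><code>from statistics import NormalDist

lead = 1.0  # reported Democratic lead, in points
moe = 4.0   # reported 95 percent margin of error, in points

se = moe / 1.96  # a 95 percent margin corresponds to ~1.96 standard errors

# Probability the race actually favors the Republican, given normally
# distributed sampling error around the poll's estimate.
p_other_side = NormalDist(mu=lead, sigma=se).cdf(0)
print(f"{p_other_side:.0%}")  # ~31% -- effectively a tossup
</code></pre>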
<p>Organizations that aggregate polls – such as <a href="http://fivethirtyeight.com/">FiveThirtyEight</a>, <a href="http://www.nytimes.com/section/upshot">the Upshot</a> and <a href="http://elections.huffingtonpost.com/pollster">Huffington Post Pollster</a> – have added to this tendency. They combine many polls into one complex statistic, which they then argue is more accurate than any one poll on its own.</p>
<p>Those poll aggregators have been <a href="http://www.huffingtonpost.com/simon-jackman/pollster-predictive-perfo_b_2087862.html">accurate in the past</a>, which led the public to rely more heavily on them than they probably should. Without an expectation of extremely accurate polling, the surprise of election night would have been far less dramatic.</p>
<p>Personally, I paid less attention to aggregators and more attention to a handful of <a href="http://www.ipspr.sc.edu/publication/Link.htm">high-quality polls</a> in each swing state. As a result, I entered election night realizing that most swing states were really too close to call – despite some aggregators’ claims to the contrary.</p>
<h2>Changes in the context of the race</h2>
<p>Technically speaking, polls are designed to measure opinion at the particular point in time during which interviews were conducted. In practice, however, they are used to gauge opinion in the future – on Election Day, which is usually a week or two after most organizations stop conducting polls.</p>
<p>As a result, late shifts in public opinion won’t always be apparent in polls. For example, many polls were conducted before <a href="http://www.nytimes.com/2016/10/29/us/politics/fbi-hillary-clinton-email.html">the announcements by FBI Director James Comey</a> about Hillary Clinton’s emails.</p>
<p>A shift in public opinion after a poll is taken is not technically an error. But as happened this year, unpredictable events like the Comey announcements can cause polling averages to differ from the actual election outcome.</p>
<h2>‘Secret’ Trump voters?</h2>
<p>It will take time to assess the extent of <a href="http://www.mcclatchydc.com/news/politics-government/election/article98915057.html">supposed “secret” Trump voters</a> – those people who did not appear in polls as Trump voters but did in fact vote for him. Pollsters will need several months to determine whether their existence owes more to <a href="http://www.vox.com/the-big-idea/2016/11/6/13540646/poll-shifts-misleading-clinton-leads-trump">sampling errors</a>, such as Trump voters being less likely to answer the phone, <a href="http://www.politico.com/story/2016/11/poll-shy-voters-trump-230667">than to</a> people being <a href="http://www.people-press.org/2016/08/03/few-clinton-or-trump-supporters-have-close-friends-in-the-other-camp/#how-open-are-voters-with-their-candidate-preferences">embarrassed about their vote intention</a>. Still, <a href="https://theconversation.com/voters-embarrassment-and-fear-of-social-stigma-messed-with-pollsters-predictions-68640">pollsters need to do more</a> to test this potential form of social desirability bias.</p>
<p>When the 2016 polling postmortem is done, I suspect we will find few “secret” Trump voters were lying to pollsters out of political correctness. Rather, we’ll discover a group of Trump voters who simply didn’t normally take surveys. For example, Christians who believe the Bible is the inerrant word of God are often underrepresented in surveys. It’s not because they are ashamed of their faith. It’s because <a href="http://dx.doi.org/10.1093/socrel/68.1.83">they don’t like to talk to survey researchers, a form of sampling bias</a>.</p>
<h2>Polling after 2016</h2>
<p>Pollsters were aware of the challenges facing them in the 2016 election season. Most notably, they identified declining response rates – fewer people willing to be asked polling questions. They reported that concern, and others, in a <a href="https://doi.org/10.1017/S104909651600144X">poll of academic pollsters</a> I conducted in 2015 with Kenneth Fernandez of the College of Southern Nevada and Maggie Macdonald of Emory University. </p>
<p>Many pollsters (73 percent) in our survey were using the internet in some capacity, a sign they were willing to try new survey methods. A majority (55 percent) of pollsters in our sample agreed that poll aggregators increased interest in survey research among the public and the media, an opinion suggesting a win-win for both aggregators and pollsters. However, some (34 percent) also agreed poll aggregators helped to give low-quality surveys legitimacy.</p>
<p>Many pollsters have embraced an industry-wide <a href="http://www.aapor.org/Standards-Ethics/Transparency-Initiative/Latest-News.aspx">transparency initiative</a> that will include revealing their methods for determining who is a likely voter, and weighting their responses to reflect the population. The polling industry will figure out what happened in places like Wisconsin, but surveys are a complex process and disentangling hundreds of surveys across 50 states will not be immediate. The American Association of Public Opinion Research, the largest professional association of pollsters in the country, has already <a href="https://www.aapor.org/Publications-Media/Press-Releases/AAPOR-to-Examine-2016-Presidential-Election-Pollin.aspx">convened a group of survey methodologists</a> to examine the 2016 results.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/U1MYM35qUr8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Surveys are a complex process.</span></figcaption>
</figure>
<p>Polls remain a valuable resource for democracy. Without polls we would base our understanding of elections more on “hunches” and <a href="http://dx.doi.org/10.2307/420860">guesses based on rough trends</a>. We would know little about why people support a given candidate or policy. And we might see <a href="http://www.jstor.org/stable/2131766">more traumatic major swings</a> in the partisan composition of our leaders.</p>
<p>If political polls were weather forecasts, they would be good at saying whether the chance of rain is high or low, but they would not be good at declaring with confidence that the temperature will be 78 degrees instead of 75 degrees. In modern politics with narrow margins of victory, what causes someone to win an election is closer to a minor change in temperature than an unexpected deluge. If I’m planning a large outdoor event, I would still be <a href="http://www.esa.doc.gov/economic-briefings/value-government-weather-and-climate-data">better off</a> with an imperfect forecast than a nonexistent perfect prediction.</p><img src="https://counter.theconversation.com/content/68544/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jason Husser does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Pollsters must be as accurate as possible. How will they address the challenges revealed in the 2016 election, and other changes in the coming years?Jason Husser, Director of the Elon University Poll, Elon UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/685042016-11-09T21:52:47Z2016-11-09T21:52:47ZReports of the death of polling have been greatly exaggerated<figure><img src="https://images.theconversation.com/files/145314/original/image-20161109-19085-h0d3mg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Polls are best guesses, votes are real.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/South-Korea-2016-US-Election/9cfd103892dd4a5d98480b7cbbd63a87/1/0">AP Photo/Lee Jin-man</a></span></figcaption></figure><p>The first words anyone spoke to me once the election results came in were “What went wrong?” To which I replied, “I was tired and had trouble tying my tie. I’ll fix it before I get to class.” Far from being sartorially flippant, the point I was making was this: Nothing went “wrong.” The polls worked like they were supposed to work. If there was a problem, it was in how they were used – and the fact that we all forgot they deal in probabilities and not certainties.</p>
<h2>Polling theory dictates the process</h2>
<p>In political polls, like those we’ve been subjected to for the past 11 months, pollsters seek to estimate the position of those who will vote in the election. This is a notoriously difficult target to hit; until we vote, we cannot be certain if we will vote. Because the population of “people who’ve cast ballots in the 2016 presidential election” does not yet exist, pollsters must draw their sample from some other – hopefully related – population. They could choose adults, registered voters, or likely voters. None of these sampled populations are identical to the target population. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=325&fit=crop&dpr=1 600w, https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=325&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=325&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=409&fit=crop&dpr=1 754w, https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=409&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/145310/original/image-20161109-19062-1sj0xco.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=409&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">All these actual voters are the population pollsters are trying to approximate in their efforts.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/2016-Election-Michigan-Voting/99b8431004c248fe86ad2dbe3f81360a/56/0">Jacob Hamilton/The Saginaw News via AP</a></span>
</figcaption>
</figure>
<p>No sample can be exactly the same as the population of interest – and that difference is a poll’s first structural source of uncertainty. However, methods exist to reduce structural bias by increasing the likelihood that our small sample is representative of the larger population. </p>
<p>The gold standard in terms of lowering bias is “simple random sampling.” In SRS, a sample is drawn from the target population such that each person has the same probability of being selected, and the estimates reported are based solely on those polled. The beauty of a truly random sample is that it will, on average, give great estimates of the population. Its main problem is that it is heavily dependent on who winds up in the sample itself. This creates highly variable polls. </p>
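<p>A minimal simulation makes this variability visible; the “true” support level and sample size are invented for the example:</p>
<pre><code>import random

random.seed(1)
TRUE_SUPPORT = 0.52  # invented population value
N = 800              # respondents per simulated poll

def one_srs_poll() -> float:
    """One simple random sample: every member of the population is
    equally likely to be drawn, so each respondent supports the
    candidate with the true population probability."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

print([f"{one_srs_poll():.1%}" for _ in range(5)])
# Correct on average, but individual polls scatter noticeably around 52%.
</code></pre>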
<p>To control this variability, polling firms may use stratification – a process that attempts to weight the polls to match the demographics of the overall population. For instance, if 30 percent of voters are Republican and only 10 percent of your sample is, you’d increase the weight given to your Republicans’ responses to account for your poll having too few of them.</p>
<p>When done well, stratification reduces the inherent variability of poll results by exchanging some of that variability for bias: you’re swapping random error for systematic error. If your estimate of the proportion of voters who are Republican is wrong, your estimates are incorrectly weighted. </p>
<p>To make this concrete, simple random sampling estimates are like a pattern from a well-aimed shotgun. The average of the pattern is the center of the target, even if none of the shot actually hit it. Stratified sampling is like the pattern of a rifle: tight, but perhaps not centered on the target.</p>
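<p>A minimal simulation of that tradeoff, with invented population numbers: the stratified estimates cluster tightly, like the rifle, but because the assumed Republican share is deliberately wrong they cluster around the wrong value, while the simple random samples scatter more widely around the truth:</p>
<pre><code>import random
from statistics import mean, stdev

random.seed(2)
N = 800
# Invented population: 40% Republicans (90% back candidate R) and
# 60% others (20% back R). True support: 0.4*0.9 + 0.6*0.2 = 0.48.

def srs_poll() -> float:
    hits = 0
    for _ in range(N):
        if random.random() < 0.40:        # drew a Republican
            hits += random.random() < 0.90
        else:
            hits += random.random() < 0.20
    return hits / N

def stratified_poll(assumed_rep_share: float = 0.30) -> float:
    # The assumption is wrong on purpose (the truth is 0.40).
    n_rep = int(N * assumed_rep_share)
    rep = sum(random.random() < 0.90 for _ in range(n_rep)) / n_rep
    oth = sum(random.random() < 0.20 for _ in range(N - n_rep)) / (N - n_rep)
    return assumed_rep_share * rep + (1 - assumed_rep_share) * oth

srs = [srs_poll() for _ in range(2000)]
strat = [stratified_poll() for _ in range(2000)]
print(f"Shotgun (SRS):      mean {mean(srs):.3f}, sd {stdev(srs):.4f}")
print(f"Rifle (stratified): mean {mean(strat):.3f}, sd {stdev(strat):.4f}")
# SRS centers on 0.48 with more spread; stratified is tighter but near 0.41.
</code></pre>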
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/145311/original/image-20161109-19085-z012zq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Pollsters have to figure out how to reach the people they’re targeting.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/joethorn/117886479">Joe Thorn</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>Getting the answers</h2>
<p>A second issue arises in contacting the sample.</p>
<p>The 2012 election showed that relying solely on landline telephones produces estimates that <a href="https://www.sciencedaily.com/releases/2015/06/150615142851.htm">tend to overestimate Republican support</a>. On the other hand, calling cellphones is much more expensive.</p>
<p>Some polling organizations – including <a href="http://www.theecps.com/">Emerson College</a> – stayed with calling only landlines. Others called a set proportion of cellphones. <a href="http://www.publicpolicypolling.com/">Public Policy Polling</a> stuck with calling 80 percent landline and 20 percent cellphone throughout the election cycle. <a href="https://www.monmouth.edu/polling-institute/">Monmouth University</a> tended closer to a 50-50 split.</p>
<p>Other firms gave up on the telephone altogether. <a href="https://blog.electiontracking.surveymonkey.com/2016/10/19/surveymonkey-election-tracking-methodology/">Survey Monkey</a> relied on their large database of online users. The <a href="http://graphics.latimes.com/usc-presidential-poll-dashboard/">University of Southern California</a> created a panel of approximately 3,000 people and polled the same group online throughout the cycle.</p>
<p>Be assured that in the weeks ahead, polling analysts will be looking at these different methods to determine which gave estimates closest to the eventual result. We can already draw some preliminary conclusions. One is that <a href="http://cesrusc.org/election/">the LA Times/USC poll</a>, which polled the same panel of people online over time, seems to have overestimated Trump support. Their final estimates were 48.2 percent Clinton and 51.8 percent Trump (as a proportion of the two-party vote). The current popular vote is split 50.1 percent Clinton to 49.9 percent Trump.</p>
<p>A second takeaway is that the polls from Marist College, which contacted a blend of landline and cellphone users, may have come closest at the national level. <a href="http://www.mcclatchydc.com/news/politics-government/election/article112635048.html">Their last estimates on November 3</a> had the national race at 50.6 percent Clinton and 49.4 percent Trump, as a proportion of the two-party vote. </p>
<h2>The interpretation</h2>
<p>Once the polling firms produce their estimates, interpretation is in the hands of the various users.</p>
<p>From the standpoint of researchers, the polls gave what we wanted: data from which to gauge public opinion. After an excellent 2012 season, analyst Nate Silver put his reputation on the line with some decisions he made about <a href="http://fivethirtyeight.com/features/a-users-guide-to-fivethirtyeights-2016-general-election-forecast/">his estimation process</a>: he adjusted a smoothing parameter late in the election cycle, making his forecast more responsive to changes in the polls. Statistically speaking, this means Silver assumed that people are less likely to change position early in the election cycle, but may change more easily later. </p>
<p>Among others, the Huffington Post accused him of “<a href="http://www.huffingtonpost.com/entry/nate-silver-election-forecast_us_581e1c33e4b0d9ce6fbc6f7f">putting his thumb on the scales</a>” in favor of Trump. However, Silver made his adjustments <a href="http://fivethirtyeight.com/features/heres-proof-some-pollsters-are-putting-a-thumb-on-the-scale/">to reflect observed human nature and action</a>. The results support him. Where the Huffington Post had predicted a <a href="http://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html">Clinton victory with 98 percent confidence</a>, Silver’s FiveThirtyEight gave her <a href="http://fivethirtyeight.com/features/final-election-update-theres-a-wide-range-of-outcomes-and-most-of-them-come-up-clinton/">only a 71 percent chance of winning</a>.</p>
<p>Using the various polls, most major sites had the <a href="http://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html">probability of a Clinton victory around 90 percent</a>. My own model put the probability at 80 percent.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/145272/original/image-20161109-19051-h3nl7u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Estimated electoral vote count at each point in the election cycle. Blue and red represent Clinton and Trump, respectively. Dark colors indicate a strong likelihood of that electoral vote belonging to the candidate, light colors indicate a lower probability. Gray indicates tossup electoral votes.</span>
<span class="attribution"><span class="source">Ole J. Forsberg</span></span>
</figcaption>
</figure>
<p>From the standpoint of the media, the polls provided a great narrative, a story to tell and motivate their readers. Most major news organizations included <a href="http://www.cbsnews.com/news/clinton-trump-even-in-ohio-and-florida-two-days-before-election-cbs-poll/">standard boilerplate</a> about the polls being estimates, that they have a margin of error and that the margin of error holds 95 percent of the time.</p>
<p>However, in many cases, journalists didn’t seem to understand what those words meant. If the margin of error is +/- 2.5 and support for Clinton drops 2 percent, that’s not a statistically significant change; there is no evidence that it is anything more than background noise. If the margin of error is +/- 2.5 and support for Trump rises 3 percent, that is a statistically significant change. However, because the margin of error is measured at the 95 percent level of confidence, even those “significant” changes will be spurious about 5 percent of the time.</p>
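<p>A minimal sketch of the screen just described; treating a single poll’s margin of error as the threshold for a change between two polls is itself a simplification, since both estimates contribute sampling error to the difference:</p>
<pre><code>def significant_change(change_points: float, moe_points: float) -> bool:
    """Crude screen: is a poll-to-poll change larger than the quoted
    margin of error? Even this understates the noise in a difference
    of two estimates."""
    return abs(change_points) > moe_points

print(significant_change(-2.0, 2.5))  # False: indistinguishable from noise
print(significant_change(+3.0, 2.5))  # True -- yet spurious ~5% of the time
</code></pre>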
<p>To help solve these problems, I think journalists covering elections should take a statistics course or a polling course. There is information in the numbers, and it behooves us all to understand what it does and does not say.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/145312/original/image-20161109-19089-1pq02wg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Processing news that the polls didn’t seem to predict.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/SIPA-Anthony-Behar-Sipa-USA-A-PA-USA-18842987/585ec9e07a6441e29db1b7fd20373603/258/0">Sipa USA via AP</a></span>
</figcaption>
</figure>
<p>Finally, as with the media, the polls gave the public a great story, one that could support their views – as long as they chose the “right” polls and ignored the “wrong” ones. In 2012, many on the right <a href="http://thehill.com/homenews/campaign/251413-gop-takes-aim-at-skewed-polls">claimed the polls were skewed</a>. Once the election was over and the postmortems done, we found out they actually were, just <a href="https://www.campaignsandelections.com/campaign-insider/study-tells-pollsters-call-more-cellphones">not in the direction Republicans had claimed</a>.</p>
<p>The story line of skewed polls was never rebutted in the minds of the general population. As a result, <a href="http://www.politico.com/story/2016/06/donald-trump-polls-bias-224903">confidence in polls</a> remains very low. It’s becoming more common for people to see polling as unethical and as a tool that <a href="http://www.breitbart.com/radio/2016/10/12/pat-caddell-nbcwsj-poll-unprecedented-unethical-intended-push-trump-done-narrative/">advances a particular narrative</a>. </p>
<p>And in fact, many polls are performed to push a political view. The <a href="http://www.vanityfair.com/news/2004/11/mccain200411">push polling in South Carolina</a> by Bush supporters in 2000 is the most notorious example of this. In the days leading up to the South Carolina primary, a group supporting George W. Bush “polled” residents, <a href="http://www.vanityfair.com/news/2004/11/mccain200411">asking inflammatory questions about his opponent John McCain</a>. The responses of those contacted were never recorded and analyzed. The sole purpose of a push poll like this is to disseminate information and influence respondents. Is it any wonder many do not trust polls?</p>
<h2>Is polling dead?</h2>
<p>Today, many people are talking about <a href="http://www.huffingtonpost.com/jeanne-zaino/the-death-of-polling-midt_b_6129654.html">the death of polling</a>. We seem to forget that probabilities attach themselves to polling at every step in the process. The sample is a random sample from a sampled population. The target population does not exist until election day. People change their minds about voting. Everywhere in polling, there is probability.</p>
<p>Nate Silver’s model gave Trump a 29 percent chance of winning the presidency. My model gave him a 20 percent chance. What do those probabilities actually mean? Flip a coin twice. If it comes up heads both times, you just elected President Trump – two coin tosses in a row coming up heads has roughly the same probability that many of these models gave Trump of moving into the White House.</p>
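<p>The arithmetic behind the analogy is just that independent events multiply:</p>
<pre><code># Two fair coin tosses both landing heads:
print(0.5 * 0.5)  # 0.25 -- between the 20% and 29% the models gave Trump
</code></pre>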
<p>And yet, polling is a science; we can always learn more. As we move forward, there are many things to learn from this election. Which polling organization was best in terms of its weighting formula? How can we best contact people? What proportion should be cellphones? How can we use online polls to get good estimates?</p>
<p>Those will be the questions at the forefront of polling research over the next couple years as we grapple with the causes of several recent high-profile polling “failures.”</p><img src="https://counter.theconversation.com/content/68504/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ole J. Forsberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>People around the world were shocked when Hillary Clinton, ahead in many polls, didn’t end up the U.S.‘ president-elect. But that doesn’t mean the polls themselves were wrong.Ole J. Forsberg, Assistant Professor of Mathematics, Knox CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/685472016-11-09T17:24:40Z2016-11-09T17:24:40ZThe madness of crowds, polls and experts confirmed by Trump victory<p>Since records began in 1868, no clear favourite for the White House has lost, except in the case of the 1948 election, when 8 to 1 longshot Harry Truman defeated his Republican rival, Thomas Dewey.</p>
<p>We can now add 2016 to that list, thanks to Donald Trump, who has beaten the 5 to 1 on favourite, Hillary Clinton, to take the presidency. In so doing, he also defied the polls, the experts and the wisdom of crowds. </p>
<p>I have been tracking various forecasting methodologies and prognosticators over the past few months, right up to election day, and can confirm that the rout of conventional wisdom was almost total.</p>
<h2>Odds on</h2>
<p>On the morning of the election, the best price available about Hillary Clinton was 7 to 2 on, equal to an implied win probability of about 78%. The spread betting markets made her a little over an 80% favourite, and gave her a head start over Trump of more than 80 electoral votes. The <a href="https://www.predictit.org/">PredictIt</a> prediction market assigned her a 79% chance of victory, and estimated her likely advantage as 323 electoral votes to 215 for Trump. Meanwhile, the Predictwise crowd wisdom platform assessed her chance of winning at a solid 89%, compared to 75% by the Hypermind prediction site.</p>
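<p>For readers unused to betting jargon, a minimal sketch of the odds-to-probability conversion behind these figures, ignoring the bookmaker’s built-in margin:</p>
<pre><code>def implied_probability(stake: float, winnings: float) -> float:
    """Implied win probability from fractional odds. '7 to 2 on' means
    staking 7 to win 2; '5 to 1 on' means staking 5 to win 1. The
    bookmaker's built-in margin is ignored here."""
    return stake / (stake + winnings)

print(f"{implied_probability(7, 2):.0%}")  # 78% -- Clinton's price that morning
print(f"{implied_probability(5, 1):.0%}")  # 83% -- a '5 to 1 on' favourite
</code></pre>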
<p>The polling aggregation services fared no better. The RealClearPolitics and HuffPost Pollster polling averages gave Hillary Clinton a lead of between 3% and 6%. The <a href="http://fivethirtyeight.com/">FiveThirtyEight</a> platform, which removes bias from polls based on their previous performance, gave her a popular vote lead on the day of 3.6% and an electoral vote advantage of 67 over Trump. Her chance of winning was assessed as 71.9% based on this polling. </p>
<p>Perhaps the biggest failure of the night, however, was Sam Wang’s <a href="http://election.princeton.edu/">Princeton Election Consortium</a>, which gave Clinton more than a 99% chance of victory. Still, it must be said that his topline figures (an electoral college advantage of 307 to 231 for Clinton, and 2.5% in the popular vote) were less far off than a number of the other forecasting methodologies.</p>
<p>The <a href="http://www.nytimes.com/section/upshot">New York Times Upshot</a> elections model, which bases its estimates on state and national polls, gave Clinton an 84% chance of victory, which it helpfully compared to the chance of an NFL kicker making a 38-yard field goal. Kickers miss those about 16% of the time – the same chance, it suggested, of Hillary Clinton losing.</p>
<h2>Talking heads</h2>
<p>Expert opinion was also woefully off. One of the most high-profile providers of expert political opinion is the <a href="http://www.centerforpolitics.org/crystalball/">Sabato Crystal Ball</a>, run by Larry Sabato of the University of Virginia’s Center for Politics. This service has a very good track record. Yet, in line with the polls and the markets, the Crystal Ball got it badly wrong this time. Its final prediction was a win for Hillary Clinton by 322 electoral votes to 216. </p>
<p>The PollyVote election forecasting service, however, provides perhaps the most broad-based expert opinion survey, calling on its own panel of political experts to periodically update its forecast of the likely two-way vote share of the main candidates. The final expert panel survey, conducted on the eve of the election, put Clinton 4.4 points ahead of Trump (52.2% to 47.8%).</p>
<p>In attempting to estimate the final vote share tallies of the candidates, PollyVote provides not just the estimates of experts, but also evidence gathered from a range of other methodologies, including prediction markets, poll aggregators, econometric models, citizen forecasts and index models. The idea is that aggregating and combining the wisdom of each and taking an average should provide a better estimate than any in isolation. It is a methodology which has served well over the past three election cycles.</p>
<p>This time the methodology broke down as badly as any of the main forecasting methodologies in isolation. Taking them in turn, the prediction market indicator (based on the trading in the Iowa electronic markets) gave Hillary Clinton a lead of 54.6% to 45.4%. Using data from RealClearPolitics and HuffPost Pollster to construct its poll aggregation metric, it gave the lead to Clinton by 52% to 48%.</p>
<p>PollyVote also highlights the various econometric forecasting models available, which typically use variables such as growth, unemployment, incumbency, and so on, to provide an aggregated estimate. That estimate was, this time, quite successful, giving Clinton the advantage in the popular vote of 50.2% to 49.8%. Winning the popular vote is, however, not the same thing as winning the electoral college, as Democrats in particular have learned in recent years.</p>
<p>The final two methodologies used to make up the PollyVote forecast are index models, which use information about the candidates, and citizen forecasts, which ask people whom they expect to win. The index models this time gave Clinton the edge over Trump by 53.5% to 46.5%, and the citizen forecasts by 52.2% to 47.8%. Combining all these methodologies produced an estimated advantage for Clinton over Trump of 52.5% to 47.5%.</p>
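<p>The combination step itself is a plain average of the component estimates listed above, as this minimal sketch shows:</p>
<pre><code># Clinton's estimated two-party vote share from each PollyVote
# component, as reported above.
components = {
    "prediction markets (Iowa)": 54.6,
    "poll aggregation":          52.0,
    "econometric models":        50.2,
    "index models":              53.5,
    "citizen forecasts":         52.2,
}

combined = sum(components.values()) / len(components)
print(f"Combined: {combined:.1f}%")  # 52.5% -- the published PollyVote figure
</code></pre>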
<p>The bottom line, therefore, is that most of the tried and tested forecasting methodologies failed this time. Election 2016 truly demonstrated, on a grand scale, the madness of crowds, polls and experts.</p><img src="https://counter.theconversation.com/content/68547/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Leighton Vaughan Williams does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Forecasters have been left reeling once again after failing to predict the outcome of the US election.Leighton Vaughan Williams, Professor of Economics and Finance and Director, Betting Research Unit & Political Forecasting Unit, Nottingham Trent UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/659562016-09-27T00:31:25Z2016-09-27T00:31:25ZOne in two favour Muslim immigration ban? Beware the survey panel given an all-or-nothing choice<figure><img src="https://images.theconversation.com/files/139147/original/image-20160926-2437-xrilma.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Pauline Hanson claimed poll results showing high levels of opposition to Muslim immigration were understated.</span> <span class="attribution"><span class="source">AAP/Mick Tsikas</span></span></figcaption></figure><p>An <a href="http://www.essentialvision.com.au/wp-content/uploads/2016/09/Essential-Report_160802_immigration.pdf">Essential Report poll</a> finding that 49% of Australians want to ban Muslim immigration received <a href="http://www.theaustralian.com.au/national-affairs/immigration/muslim-migrant-ban-backed-by-almost-half-australians-poll-shows/news-story/fe65dc9cc7018e545539e32b11029385">extensive</a> <a href="http://www.adelaidenow.com.au/news/breaking-news/poll-suggests-49-back-muslim-migrant-ban/news-story/892bc9b802ec08deab0d2769362ba927">media</a> <a href="http://time.com/4502458/australia-ban-muslim-immigration/">coverage</a> last week. In addition to general reporting, Essential’s executive director, Peter Lewis, <a href="https://www.theguardian.com/commentisfree/2016/sep/21/progressives-can-attract-hanson-supporters-but-not-by-insulting-them">wrote in The Guardian</a>:</p>
<blockquote>
<p>The result floored me.</p>
</blockquote>
<p>Less surprised was commentator Ray Hadley <a href="https://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiv_pCVrqzPAhUCKJQKHVLjDncQFggbMAA&url=http%3A%2F%2Fwww.dailytelegraph.com.au%2Fnews%2Fopinion%2Fpoll-knocks-the-socks-off-the-lattesipping-lefties%2Fnews-story%2F04e4c1b1b78bb702580ebaccaff34e51&usg=AFQjCNHDW5m47NUKK7skBibGj45uLfYV_w&bvm=bv.133700528,d.dGo">in The Daily Telegraph</a>:</p>
<blockquote>
<p>The left-leaning café latte sippers were left scratching their heads this week when an Essential poll revealed …</p>
</blockquote>
<p>Senior journalists, including from Fairfax Media, and politicians took the findings at face value. Labor’s deputy leader, Tanya Plibersek, <a href="https://www.theguardian.com/australia-news/2016/sep/22/muslim-immigration-poll-result-due-to-poor-leadership-says-tanya-plibersek">saw the survey</a> as proof that:</p>
<blockquote>
<p>We’re not doing a good enough job as national leaders to bring harmony and cohesion to our community.</p>
</blockquote>
<p>Among the few to question the result was new Labor MP Anne Aly. <a href="http://www.smh.com.au/federal-politics/political-news/this-is-not-the-australia-i-know-first-muslim-woman-mp-hits-back-at-immigration-poll-20160922-grm6w3.html">She asked</a> whether public opinion was really so adverse.</p>
<p>A second questioner was One Nation senator Pauline Hanson, <a href="http://www.smh.com.au/federal-politics/political-news/pauline-hanson-says-people-too-afraid-to-tell-of-muslim-immigration-fears-20160922-grmi8d.html">who said</a> the poll understated the degree of opposition:</p>
<blockquote>
<p>I believe it’s a lot higher than that. Because people … have been in fear to answer the question … because they don’t know who’s taking the call.</p>
</blockquote>
<h2>Surveying methodology</h2>
<p>Some aspects of the Essential findings are worthy of critical scrutiny. One relates to methodology. </p>
<p>There are two main approaches to surveying. One is a sampling of the population based on randomly generated telephone numbers. The other utilises an online panel of respondents who complete surveys out of interest and for reward.</p>
<p>Contrary to Hanson’s claims, no-one was “taking the call” in the Essential survey: it utilised an online panel. </p>
<p>Surveys employing online panels are much cheaper and quicker to run. They have a proven record on a number of issues, notably predicting election outcomes, as over a period of years they develop weighting formulas for their panel calibrated against election results. But there are no formulas of the same level of precision when surveys deal with social issues. </p>
<p>An <a href="https://www.aapor.org/AAPOR_Main/media/MainSiteFiles/AAPOROnlinePanelsTFReportFinalRevised1.pdf">extensive review</a> of online survey methodologies found that:</p>
<blockquote>
<p>Computer administration yields more reports of socially undesirable attitudes and behaviours than oral interviewing, but no evidence that directly demonstrates that the computer reports are more accurate.</p>
</blockquote>
<p>Major organisations seeking the highest level of reliability continue to employ random population sampling, despite the cost involved.</p>
<p>To test the impact of different methodologies, in 2014 the Scanlon Foundation <a href="http://www.monash.edu/__data/assets/pdf_file/0017/134711/mapping-social-cohesion-national-report-2014.pdf">administered the same questionnaire</a> to both a random sample of the population and an online panel. It found 44% of Australia-born online panel respondents whose parents were born in Australia indicated they held “very negative” or “negative” views toward Muslims. The same demographic in the random sample had a much lower percentage (28%).</p>
<p>There is a second issue, just as important, with the Essential finding. </p>
<p>Surveys do not simply identify a rock-solid public opinion; they explore it, and the questions asked have the potential to distort the answers. Essential chose not to present respondents with a range of options on Muslim immigration. Rather, it was a yes/no choice: </p>
<blockquote>
<p>Would you support or oppose a ban on Muslim immigration to Australia?</p>
</blockquote>
<p>The product was easy-to-understand copy for the media, but arguably also a gross simplification. Public opinion on social issues defies binary categorisation. It is more accurately understood in terms of a continuum, with a middle ground on some issues in excess of half the population.</p>
<p>For example, <a href="http://www.aph.gov.au/%7E/media/05%20About%20Parliament/54%20Parliamentary%20Depts/544%20Parliamentary%20Library/Pub_archive/Goot.pdf?la=en">with regard to asylum seekers</a>, nine polls between 2001 and 2010 using various methodologies asked respondents if they favoured or opposed the turning back of boats. The average for these surveys was 67% in favour of turnbacks.</p>
<p>But, in 2010, the Scanlon Foundation survey <a href="http://scanlonfoundation.org.au/wp-content/uploads/2014/07/mapping-social-cohesion-summary-report-2010.pdf">tested opinion on this topic</a> by offering four policy options, ranging from eligibility for permanent settlement to turning back of boats. In this context, a minority of just 27% supported turnbacks. </p>
<h2>Minorities and Australian opinion</h2>
<p>Survey findings are typically considered in isolation in the media, with no understanding of context, of what is within the expected and what is beyond it. </p>
<p>The Essential survey of attitudes to Muslims is hardly the first in the field. Several random population samples since 2010 have found that when respondents are asked for attitudes to minorities, by far the highest level of negative opinion is towards Muslims. </p>
<p>In a <a href="https://www.vichealth.vic.gov.au/%7E/media/ResourceCentre/PublicationsandResources/Discrimination/LEAD-community-attitudes-survey.pdf">2013 VicHealth survey</a>, 22% of respondents indicated they were negative towards Muslims. This number was 22%-26% in six <a href="http://scanlonfoundation.org.au/research/surveys/">Scanlon Foundation surveys</a> between 2010 and 2015. </p>
<p>A random population sample by <a href="http://www.roymorgan.com/findings/6507-australian-immigration-population-october-2015-201510200401">Roy Morgan Research</a> in October 2015 asked respondents if they “support or oppose Muslim immigration”. It found a minority, 36%, opposed; 55% in support. Of Greens-voting respondents in the Morgan poll, just 1% indicated they were opposed. This is a marked contrast with the Essential finding of 35%.</p>
<p>A last issue concerns broad context. If the Essential finding is a sound reflection of Australian opinion, is it beyond the realm of previous findings? We cannot be certain, because past surveys rarely raised the zero option – the banning of a specific group – without establishing the range of opinion.</p>
<p>Between 1984 and 1988, however, when there was considerable public discussion of Asian immigration, ten surveys <a href="https://books.google.com.au/books/about/Australian_Multiculturalism_for_a_New_Ce.html?id=vr1yAAAAMAAJ&redir_esc=y">asked</a> if the number of Asian immigrants was too high. On average the surveys found 58% were of that opinion, with a peak of 77% obtained by Newspoll in 1988. </p>
<p>And, in 1996 – at the time of Hanson’s <a href="http://www.smh.com.au/federal-politics/political-news/pauline-hansons-1996-maiden-speech-to-parliament-full-transcript-20160914-grgjv3.html">first maiden speech</a> in the federal parliament – an AGB McNair telephone poll found 53% of respondents agreed that Asian immigration “should be reduced”.</p><img src="https://counter.theconversation.com/content/65956/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew Markus has received grants to research Australian public opinion from the Scanlon Foundation, the Australian Research Council and the Australian government.</span></em></p>Survey findings are typically considered in isolation in the media, with no understanding of context, of what is within and what is beyond the expected.Andrew Markus, Pratt Foundation Research Chair of Jewish Civilisation, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/616392016-06-26T10:19:12Z2016-06-26T10:19:12ZEU referendum – how the polls got it wrong again<figure><img src="https://images.theconversation.com/files/128138/original/image-20160625-28349-e43gjt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">PA/Stefan Rousseau</span></span></figcaption></figure><p>Estimating the likely outcome of the <a href="https://theconversation.com/uk/eu-referendum-2016">referendum</a> on the UK’s membership of the EU was always going to be a challenge for the opinion polls. In a general election, pollsters have years of experience of what does and does not work to draw on when estimating the level of support for the various political parties. They still make mistakes, as was <a href="https://theconversation.com/revealed-why-the-polls-got-it-so-wrong-in-the-british-general-election-53138">evident last year</a>, but at least they can learn from them. In a one-off referendum they have no previous experience on which to draw — and there is certainly no guarantee that what has worked in a general election will prove effective in what is a very different kind of contest.</p>
<p>Meanwhile, the subject matter of this referendum raised a particular challenge. General elections in the UK are primarily about left and right. The question is whether the government should be doing a little more or a little less. The social division underlying this debate tends to be between the middle class and the working class.</p>
<p>But this referendum was about something different. With <a href="https://theconversation.com/after-this-miserable-and-divisive-campaign-we-need-to-talk-about-immigration-61577">immigration</a> featuring as one of the central issues, it was a division between “social liberals” and “social conservatives”. The former tend to be comfortable with the diversity that comes with immigration, while the latter prefer a society in which people share the same customs and culture. Social liberals were <a href="http://lordashcroftpolls.com/2016/06/how-the-united-kingdom-voted-and-why/">inclined to vote in favour of remaining</a> in the EU, while social conservatives were more inclined to vote to leave.</p>
<p>The principal social division behind this debate is not social class but education. Graduates tend to be social liberals, while those with few, if any, educational qualifications are inclined to be social conservatives. Age matters too, with younger people tending to be more socially liberal.</p>
<p>Pollsters in the UK have less experience of measuring this dimension of politics. They do not, for example, necessarily collect information on the educational background of their respondents as a matter of routine. Yet any poll that contained too many or too few graduates was certainly at risk of over- or under-estimating the level of support for staying in the EU.</p>
<p>Meanwhile, we do not know whether there is any reason to anticipate availability bias — that is, whether Remain or Leave supporters are easier for pollsters to find than those of the opposite view. Equally, the pollsters have less idea what those who say they don’t know how they are going to vote will eventually do.</p>
<h2>Online or on the phone?</h2>
<p>The pollsters’ difficulties in estimating referendum vote intentions were all too obvious during the referendum campaign. In particular, polls conducted by phone <a href="https://theconversation.com/can-you-trust-the-eu-referendum-polls-59841">systematically diverged</a> from those done via the internet in their estimate of the relative strength of the two sides. For much of the campaign, phone polls reckoned that Remain was on 55% and Leave on 45%. The internet polls were scoring the contest at 50% each — a fact that often seemed to be ignored by those who were confident that the Remain side would win. This divergence alone was clear evidence of the potential difficulty of estimating referendum vote intention correctly.</p>
<p>In the event, that difficulty was all too evident when the ballot boxes were eventually opened. Eight polling companies published “final” estimates of referendum voting intention based on interviewing that concluded no more than four days before polling day.</p>
<p>Although two companies did anticipate that Leave would win, and one reckoned the outcome would be a draw, the remaining five companies all put Remain ahead. No company even managed to estimate Leave’s share exactly, let alone overestimate it. In short, the polls (and especially those conducted by phone) collectively underestimated the strength of Leave support.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=326&fit=crop&dpr=1 600w, https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=326&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=326&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=409&fit=crop&dpr=1 754w, https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=409&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/128136/original/image-20160625-28362-5rm8p3.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=409&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>There is little doubt that the companies are disappointed with this outcome. Some have already issued statements that they will be investigating what went wrong. The <a href="http://www.britishpollingcouncil.org/category/press-releases/">British Polling Council</a> has indicated that it will be asking its members to undertake such investigations and may have the findings externally reviewed. It will inevitably take a while before we get to the bottom of what went wrong. However, it is already clear that there is one issue that will be worthy of investigation.</p>
<p>As the pollsters worked out their final estimates of the eventual outcome, many of them made different decisions from those that they had made previously about how to deal with the possible impact of turnout and the eventual choice made by the “don’t knows”. In the event those decisions did not improve their polls’ accuracy.</p>
<p>On average, the eight polls between them anticipated that Remain would win with 52%, and Leave would end up with 48%. If all the pollsters had stuck to what they had been doing earlier in the campaign (and Populus had not adjusted its figures in the way that it did), the average score of the polls would have been Remain 50%, Leave 50%. In short, at least half of the error in the polls may be a consequence of the decisions that the pollsters made about how to adjust their final figures. Polling a referendum truly is a tough business.</p>
<p class="fine-print"><em><span>John Curtice is Professor of Politics, Strathclyde University, Senior Research Fellow, NatCen Social Research and Chief Commentator, whatukthinks.org/eu. He is also President of the British Polling Council.
</span></em></p>The polling industry struggled to predict the last British election, and referendums are even harder.John Curtice, Senior Research Fellow, National Centre for Social ResearchLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/531382016-01-14T06:12:37Z2016-01-14T06:12:37ZRevealed: why the polls got it so wrong in the British general election<figure><img src="https://images.theconversation.com/files/108054/original/image-20160113-10414-1wij2eg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Absolutely definitely Labour? Ok thanks bye!</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Since the surprise result of the <a href="https://theconversation.com/britains-election-is-over-so-what-does-it-all-mean-41261">British election</a> in May 2015, there has been plenty of speculation about why the opinion polls ahead of the vote were so wrong. On average, <a href="http://www.bbc.co.uk/news/uk-politics-32751993">they put the Conservatives and Labour neck and neck</a>, when in fact the Conservatives were seven points ahead. </p>
<p>Hard evidence on the reasons for their failure, however, has so far been less plentiful. But a new report published today provides important evidence on what really happened.</p>
<p>The report presents the results obtained by the latest instalment of NatCen’s annual <a href="http://www.bsa.natcen.ac.uk/">British Social Attitudes survey</a>, which was conducted face to face between the beginning of July and the beginning of November last year. All 4,328 respondents to the survey were asked whether or not they voted in the May election and, if so, for which party.</p>
<p>What we found suggests that the main reason for the disparity between the polls and the actual election outcome is unlikely to have been failure by voters to be honest about how they planned to vote. Instead it is more likely that the problem lay in the failure of the pollsters to interview the right mix of voters in the first place.</p>
<h2>A different approach</h2>
<p>The British Social Attitudes survey is conducted in a very different way from the polls. Not only does interviewing take place over an extended period of four months, but during that time repeated efforts are made, as necessary, to make contact with those who have been selected for interview.</p>
<p>At the same time, potential respondents are selected using random probability sampling. This means that more or less anyone in Britain can be selected for interview, while their chances of being selected can also be calculated.</p>
<p>Political opinion polls, by contrast, are typically conducted over just two or three days. That means they are more likely to represent the views of people who are easily contactable. True, polls conducted by phone select the numbers they ring at random, but the person who actually answers and agrees to be interviewed is not chosen at random. Pollsters often find their calls go unanswered or that the person on the other end of the line does not want to talk.</p>
<p>At the same time, polls conducted over the internet are typically done by drawing interviewees from a panel of people who have either previously volunteered to take part in internet surveys or have been successfully recruited into membership. They are certainly not drawn from the population at random. So in both methods there is bound to be a degree of self-selection. And this appears to favour Labour.</p>
<p>Meanwhile, not only did the 2015 polls underestimate Conservative support and overestimate Labour’s before election day, they also came up with much the same result when they <a href="https://twitter.com/YouGov/status/596427188645326848">went back</a> to interview the same people after the result was in – that is Conservative and Labour more or less neck and neck with each other.</p>
<p>In other words, the polls were still wrong even when the election was over. That means we cannot simply lay the blame for their difficulties on such possibilities as “late swing” or a failure by those who said they would vote for Labour to make it to the polling station. Instead it points to the likelihood that the polls were simply interviewing too many Labour voters in the first place.</p>
<h2>How it happened</h2>
<p>The British Social Attitudes survey helps shed some light on this. If, in contrast to the polls, it did manage more or less to replicate the election result, that would add considerably to the evidence that the polls were led astray because their samples were not fully representative. </p>
<p>Indeed, the survey did replicate the result relatively successfully. Its Conservative lead of 6.1 points comes close to matching the actual Conservative lead over Labour of 6.6 points.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=313&fit=crop&dpr=1 600w, https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=313&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=313&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=393&fit=crop&dpr=1 754w, https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=393&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/108031/original/image-20160113-10417-hew95u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=393&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Reported vote in the 2015 British Social Attitudes survey compared with the actual election result.</span>
<span class="attribution"><span class="source">NatCen</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Moreover, this is not the only survey to have found plenty more Conservative voters in the election than Labour ones. Face-to-face interviews conducted for the <a href="http://www.britishelectionstudy.com/">British Election Study</a> (also undertaken using random probability sampling) put the Conservatives as much as eight points ahead of Labour.</p>
<p>That two random probability samples have both succeeded where the polls largely failed strongly suggests that the problems that beset the polls did indeed lie in the character of the samples they obtained.</p>
<h2>Lessons for the future</h2>
<p>The British Social Attitudes data also provide some clues as to why those interviewed by the polls were not necessarily representative of Britain as a whole.</p>
<p>First, those who participated in polls were much more interested in the election than voters in general. The polls pointed to as much as a 90% turnout, far above the 66% that eventually did vote.</p>
<p>By contrast, just 70% of those who participated in the British Social Attitudes survey in 2015 said that they made it to the polling station. More detailed analysis suggests that many a poll overestimated how many younger people, in particular, would vote. And because younger voters were more Labour inclined than older ones, this created a risk that Labour’s strength would be overestimated among those who were actually going to vote.</p>
<p>Second, those who are contacted most easily by polls and survey researchers appear to be more likely to have voted Labour than those who are more difficult to find. In the British Social Attitudes survey, no less than 41% of those who gave an interview the first time an interviewer knocked on their door said that they voted Labour, while just 35% said that they voted Conservative.</p>
<p>Only among those where a second or (especially) a third call had to be made are Conservative voters more plentiful than Labour ones. Meanwhile, Labour’s lead among first-call interviewees cannot be accounted for by their demographic profile, which perhaps helps explain why the pollsters’ attempts to weight their data to match Britain’s known demographic profile failed to eliminate the pro-Labour bias in their samples.</p>
<p>Of course nobody is ever going to suggest that a poll should be conducted over a period of four months, though maybe taking a little longer would prove to be in the pollsters’ own best interests, even when their role is to generate tomorrow’s newspaper headline.</p>
<p>But if the objective is to conduct serious, long-term and in-depth research to enhance our understanding of the public mood in Britain, the lesson is clear. Time-consuming and expensive though it may be, random probability sampling is still the most robust way of measuring public opinion. Hopefully it is a lesson that will now be appreciated by those who fund opinion research.</p>
<p class="fine-print"><em><span>John Curtice is a Senior Research Fellow at NatCen Social Research and is a co-editor of the British Social Attitudes report series. He is also President of the British Polling Council, a representative organisation of polling companies that aims to uphold standards of transparency in the industry and has co-sponsored the inquiry into the performance of the polls in the 2015 general election.</span></em></p>New survey information puts paid to ‘shy Tories’ theory.John Curtice, Senior Research Fellow, National Centre for Social ResearchLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/416932015-05-12T14:46:52Z2015-05-12T14:46:52ZIn defence of pollsters: they never said they could predict the UK election<figure><img src="https://images.theconversation.com/files/81392/original/image-20150512-22545-7yixa4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Talk is cheap before ballots are cast.</span> <span class="attribution"><span class="source">TCmakephoto/Shutterstock</span></span></figcaption></figure><p>In the wake of the 2015 election result, the media has quickly thrown pollsters under the bus for <a href="http://www.economist.com/news/europe/21650937-favoured-incumbent-comes-second-and-ex-rock-star-seen-fringe-candidate-draws-21-another">getting it wrong</a>. Indeed, some pollsters have <a href="http://www.telegraph.co.uk/news/general-election-2015/11591306/Why-the-opinion-polls-got-it-so-wrong-YouGov-president-explains.html">jumped off the curb</a> with no help at all from their friends in the press.</p>
<p>Polls consistently suggested the Conservatives and Labour were neck and neck, so when the former came away with a majority large enough to govern alone, the critics had a field day – even though every mention of the election in the media before polling day had begun with the opening phrase “it’s the most unpredictable election in years”.</p>
<p>Not to be outdone, the <a href="http://www.britishpollingcouncil.org/general-election-7-may-2015/">British Polling Council</a> has commissioned a special inquiry into the causes of the alleged debacle.</p>
<p>But we believe the criticisms, mea culpas and rush to judgement are misdirected. Polling is not the same as predicting – especially not under the electoral system that operates in the UK.</p>
<h2>The trouble with polling</h2>
<p>There are two fundamental points to keep in mind in the wake of this election. First, polls provide raw material for forecasts – they do not seek to predict the outcome of an election. Second, national-level vote share forecasts may be very poor guides to the number of seats parties are actually likely to win, especially when votes are tallied in a first-past-the-post electoral system with widely varying patterns of constituency-level competition.</p>
<p>If you want to use vote intention numbers in polls to estimate popular vote shares, you have to take into account whether or not people will actually vote. One of the reasons why <a href="https://theconversation.com/john-curtice-how-we-called-the-election-right-on-polling-night-more-or-less-41556">the exit poll was so accurate</a> in this election is that we can be sure the people surveyed definitely voted, because they were leaving the polling station when they were interviewed. When you ask them before polling day, you can’t tell for certain they will show up on the day.</p>
<p>With a survey conducted before the election, the numbers should be filtered by the likelihood that survey respondents actually will vote. The graph below illustrates the point. Using data from the Essex Continuous Monitoring Survey, conducted in the third week of April 2015, we use an 11-point likelihood of voting scale. Respondents who are very unlikely to vote rate themselves zero and those very likely to vote ten. We then combine that rating with information about whether they voted in the 2010 general election to select the likely voters.</p>
<p>For first-time eligible voters, we use a well-researched measure – whether they consider voting to be a civic duty – since those who strongly agree are more likely to vote. The resulting adjusted vote shares deviate from the parties’ actual vote shares (in Great Britain) by less than 1% on average.</p>
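<p>As an illustration of the mechanics, here is a minimal sketch of this kind of likely-voter filter in Python. Everything specific in it is assumed rather than taken from the Essex survey: the pandas layout, the hypothetical column names and the cut-off of nine on the 11-point scale.</p>
<pre><code>import pandas as pd

def filter_likely_voters(df):
    """Keep respondents judged likely to vote.

    Hypothetical columns (not the survey's real variable names):
      likelihood   - self-rated 0-10 likelihood of voting
      voted_2010   - True if the respondent reported voting in 2010
      first_timer  - True if newly eligible to vote in 2015
      civic_duty   - True if they strongly agree voting is a civic duty
    """
    returning = (~df["first_timer"]) & (df["likelihood"] >= 9) & df["voted_2010"]
    new_voters = df["first_timer"] & df["civic_duty"]
    return df[returning | new_voters]

def adjusted_vote_shares(df):
    """Vote-intention percentages among likely voters only."""
    likely = filter_likely_voters(df)
    return likely["vote_intention"].value_counts(normalize=True) * 100
</code></pre>
<p>The point is the two-stage logic – screen on stated likelihood plus past behaviour (or civic duty for first-timers), then tabulate intentions among those who remain – not the particular thresholds, which would have to be validated against real turnout.</p>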
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=427&fit=crop&dpr=1 600w, https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=427&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=427&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=537&fit=crop&dpr=1 754w, https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=537&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/81360/original/image-20150512-22557-p57isr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=537&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">polls and votes.</span>
<span class="attribution"><span class="source">Essex Continuous Monitoring Survey</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>A second important point is that a survey’s reported vote intentions are based on samples and as such they are subject to sampling errors, which makes them necessarily uncertain. This means that the samples do not precisely reproduce the characteristics of voters in general – just by chance there are perhaps too many middle-aged men or not enough young people in the sample.</p>
<p>So we construct what are called confidence intervals around the vote intention figures. These are measures of how uncertain we are about the figures, and allow us to say things like: “there is a 95% chance that the Labour vote will vary from 29% to 33%”. These tend to get forgotten when the polls are reported, but they should be both calculated and heeded. This also means that if the pollsters do everything right, then by chance they are going to get it wrong about one out of every 20 elections. </p>
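<p>For readers who want to see where such an interval comes from, here is a minimal sketch using the standard normal approximation. The 2,000-person sample size is our assumption, chosen because it roughly reproduces the 29% to 33% example above.</p>
<pre><code>import math

def vote_share_ci(share, n, z=1.96):
    """Approximate 95% confidence interval for a vote share.

    share - estimated proportion (0.31 for 31%); n - sample size.
    Uses the normal (Wald) approximation to the binomial.
    """
    se = math.sqrt(share * (1 - share) / n)
    return share - z * se, share + z * se

low, high = vote_share_ci(0.31, 2000)        # a hypothetical 31% Labour share
print(f"95% CI: {low:.1%} to {high:.1%}")    # roughly 29.0% to 33.0%
</code></pre>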
<p>Our April survey data shows that, with the exception of the Conservatives, all estimated vote shares are not significantly different from the parties’ actual vote totals. That means the difference between the survey outcome and the actual vote is due to chance, and not to any systematic difference between the two measures. In the Conservative case, the survey number is only 0.04 points outside the confidence band, which makes it an unusual but not really a rogue figure.</p>
<p>Keeping sampling error in mind and developing well-validated likely voter filters will improve the use of survey data for forecasting popular vote totals.</p>
<h2>The first-past-the-post problem</h2>
<p>Those who insist on using survey data to forecast election outcomes will need to go further, since elections in Britain are decided by the number of seats won rather than by the actual number of votes. Polls could show one party getting significantly more support but that doesn’t necessarily mean they will secure the most seats. The party that won the most votes did not win the most seats in the elections of 1929, 1951 and February 1974.</p>
<p>Even with appropriate adjustments, national vote intention percentages from surveys are likely to be insufficient in an era of voter volatility and multi-party competition. When voters have more choice, or think that their preferred party has no chance in their constituency, they may very well change their vote. This happens more now that loyalty towards the top two parties is weaker.</p>
<p>In the run-up to the 2015 election, Lord Ashcroft tried to address the problem by conducting a large number (nearly 170!) of <a href="http://lordashcroftpolls.com/constituency-polls/">constituency polls</a>. Since some of these were conducted nearly a year before the election, there was a risk of missing possible changes in voter intentions in various constituencies. Moreover, constituency-level polls (like their national counterparts) need to correct for the likelihood of people voting – and as always, sampling errors make predictions of close races a hazardous enterprise, because the confidence intervals overlap.</p>
<h2>Can we fix it?</h2>
<p>Although we could imagine building on the Ashcroft approach, it is very expensive and expanding it is not a realistic option. Developing new, less expensive election forecasting tools that make better use of the kinds of survey data that are likely to be available should be a top priority.</p>
<p>One approach, advocated by economists, is to jettison polls entirely and use betting markets. We are sceptical about this – the final <a href="http://www.may2015.com/featured/tories-lead-by-20-seats-and-are-now-on-course-to-win-this-election/">Ladbrokes</a> numbers were just as far off the mark as the polls, putting the Conservatives on 286 seats, Labour on 267 and the Liberal Democrats on 26. These figures are not surprising; after all, many punters search the media for forecasts based on polling data to help them decide how to place their bets.</p>
<p>Other forecasting methods, using trends in unemployment, interest rates, housing prices and other variables also have their advocates. We are confident that economic conditions affect electoral choice, but decades of <a href="http://www.cambridge.org/tn/academic/subjects/politics-international-relations/comparative-politics/economy-and-vote-economic-conditions-and-elections-fifteen-countries">research</a> indicate that models translating trends in economic sentiments into election outcomes frequently fail to perform as advertised. </p>
<p>Ultimately, the media, voters and pollsters alike would do well to recognise the distinction between polls and forecasts. Raw poll numbers are not adequate for forecasting parties’ vote shares, let alone their seat totals.</p>
<p>Polls are most useful for providing information about the attitudes and reported behaviour of people. With large sample sizes and well-validated likely voter filters, high-quality pre- and post-election polls can tell us a lot about the who and why of electoral choice and what parties’ popular vote totals are likely to be on election day. If the filters to identify who actually votes were better, then the polls would have been more accurate.</p>
<p>Such polls also provide valuable inputs for seat-share forecasting models. But they are not substitutes for such models. The latter are still in the development stage. A lot of interesting work has been done and more remains. But for the moment we shouldn’t automatically assume that the polls were wrong – even if the election result took many people by surprise.</p>
<p class="fine-print"><em><span>Paul Whiteley receives funding from the ESRC.</span></em></p><p class="fine-print"><em><span>Harold D Clarke was a member of the British Election Study Team for the 2001, 2005 and 2010 general election project funded by the ESRC.</span></em></p>Did anyone tell you this was the “most unpredictable election in years”? There’s a reason for that.Paul Whiteley, Professor, Department of Government, University of EssexHarold D Clarke, Ashbel Smith Professor, University of Texas at DallasLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/415302015-05-08T15:36:24Z2015-05-08T15:36:24ZWhy the polls got it so wrong in the British election<p>If the opinion polls had proved accurate, we would have woken up on the morning of May 8 to a House of Commons in which the Labour Party had a chance to form government. By the end of the day, the country would have had a new prime minister called Ed Miliband.</p>
<p>This didn’t happen. Instead the Conservative Party was returned with almost 100 more seats than Labour and a narrow majority. So what went wrong? Why were the polls so far off? And why has <a href="http://www.britishpollingcouncil.org/general-election-7-may-2015/">the British Polling Council announced an inquiry</a>?</p>
<p>We have been here before. The polls <a href="http://www.independent.co.uk/news/uk/exclusive-how-did-labour-lose-in-92-the-most-authoritative-study-of-the-last-general-election-is-published-tomorrow-here-its-authors-present-their-conclusions-and-explode-the-myths-about-the-greatest-upset-since-1945-1439286.html">were woefully inaccurate in the 1992 election</a>, predicting a Labour victory, only for John Major’s Conservatives to win by a clear seven percentage points. While the polls had performed a bit better since, history repeated itself this year.</p>
<h2>Facing realities</h2>
<p>A big issue at hand is the methodology used. On the whole, pollsters simply do not make any effort to duplicate the real polling experience. Even as election day approaches, they very rarely identify who the candidates are in their survey questions, instead simply prompting party labels. This tends to miss a lot of late tactical vote switching. The filter they use to determine who will actually vote, as opposed to who merely says they will vote, is clearly faulty, as can be seen if we compare the actual voter turnout figures with those projected in most of the polling numbers.</p>
<p>Almost invariably, they over-estimate how many of those who say they will vote do actually vote. Finally, the raw polls do not make allowance for what we can learn from past experience as to what happens when people actually make the cross on the ballot paper compared to their stated intention. </p>
<p>We know that there tends to be a late swing to the incumbents in the privacy of the polling booth. For this reason, it is wise to adjust raw polls for this late swing. </p>
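<p>What might such an adjustment look like in practice? A minimal sketch, assuming – purely for illustration – a two-point swing to the incumbent taken proportionally from the other parties; a real adjustment would estimate the swing from past elections rather than assume it.</p>
<pre><code>def adjust_for_late_swing(raw_shares, incumbent, swing_points=2.0):
    """Shift swing_points of support to the incumbent, taken
    proportionally from all other parties (shares in percent).
    swing_points=2.0 is an illustrative assumption, not an estimate."""
    others_total = sum(v for p, v in raw_shares.items() if p != incumbent)
    adjusted = {}
    for party, share in raw_shares.items():
        if party == incumbent:
            adjusted[party] = share + swing_points
        else:
            adjusted[party] = share - swing_points * share / others_total
    return adjusted

print(adjust_for_late_swing({"Con": 34, "Lab": 34, "Other": 32}, "Con"))
# {'Con': 36.0, 'Lab': 32.97..., 'Other': 31.03...}
</code></pre>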
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=368&fit=crop&dpr=1 600w, https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=368&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=368&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=462&fit=crop&dpr=1 754w, https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=462&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/81021/original/image-20150508-22785-1dtn68q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=462&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Swing when you’re winning.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/widnr/6521323531/in/photolist-aWguhB-kv4MJ-5pMo9F-dWRWFX-e7gr3Q-6WfwLJ-gA3BJn-239g8-avLh6-hYPdcn-7NT92W-5nW69Z-eGpSUU-4TJSfU-doe3jQ-91hp6s-ocyouX-bmq6Wa-a9uYm8-9q1WcM-mksE2k-gGU2Xs-cLasUq-4Cgokb-rh4avL-2CU96-4FojUc-5tMzyF-inBhJ-avk3xj-7956x-azSCo3-azPY4v-wc2ps-aF52t3-5iXJ4n-dA3EP-5Z4pjW-XpfTS-5txDLW-cKqjq-cKmnU-4GsLaS-83XFsn-fuAm9W-hrtKYV-cKpYz-cKpAe-yTvjb-2MQgou">Wisconsin Department of Natural Resources</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Of all these factors, which was the main cause of the polling meltdown? For the answer, I think we need just <a href="http://www.theguardian.com/politics/2015/may/07/election-2015-how-do-exit-polls-work">look to the exit poll</a>, which was conducted at polling stations with people who had actually voted. </p>
<h2>Exit, pursued by a pollster</h2>
<p>This exit poll, as in 2010, was pretty accurate. Similar exit-style polls conducted during polling day over the telephone or online, with those who declared they had voted or were going to vote, failed pretty much as spectacularly as the other final polls. The explanation for this difference can, I believe, be traced to the significant gap between the number of people who declare that they have voted or will vote and the number who actually do vote.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=284&fit=crop&dpr=1 600w, https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=284&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=284&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=356&fit=crop&dpr=1 754w, https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=356&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/81026/original/image-20150508-22785-6mzak1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=356&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Exit strategy. How polls at the polling station win out.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/taymazvalley/4760836638/in/photolist-8fGwSj-8mSRSp-mEck2x-8Zpj8p-qLUdwS-7zPEAm-jko7dp-EnBBB-5n5u7v-9Au3hi-CSQLp-fSegTj-dS47n5-dK3Qsm-8wCWwb-8fEiUm-8qXBwc-2LkSRj-e5fayJ-aZJapv-5UHFMh-pwh5Lf-4v5rQq-4ygqTm-dQcynU-eezi5q-aL9Scz-d8jyzC-6EG71F-qPMsJE-owLyQN-96qEhC-dxw51-jNu65P-cjY2zY-76JKnu-6ow6yn-c7mppy-aH3DRk-8CHifr-8kqUgt-2USHeh-5jFsE5-4hpZQc-p7aLdZ-5g51P1-aMyBta-8kqT5X-ab7v48-4ukgmr">Taymaz Valley</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>If this difference works particularly to the detriment of one party compared to another, then that party will under-perform in the actual vote tally relative to the voting intentions declared on the telephone or online. </p>
<p>In this case, it seems a very reasonable hypothesis that rather more of those who declared they were voting Labour failed to actually turn up at the polling station than was the case with declared Conservatives. Add to that late tactical switching and the well-established <a href="http://www.telegraph.co.uk/news/general-election-2015/11464555/Campaign-Calculus-will-the-Tories-benefit-from-the-time-honoured-incumbent-swing.html">late swing in the polling booth to incumbents</a> and we have, I believe, a large part of the answer. </p>
<h2>Skin in the game</h2>
<p>Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout. Polls went up, polls went down, but the betting markets had made their mind up. The Tories, they were convinced, were going to win significantly more seats than Labour. </p>
<p>I have interrogated huge data sets of polls and betting markets over many, many elections stretching back years and this is part of a well-established pattern. Basically, when the polls tell you one thing, and the betting markets tell you another, follow the money. Even if the markets do not get it spot on every time, they will usually get it a lot closer than the polls. </p>
<p>So what can we learn going forward? If we want to predict the outcome of the next election, the first thing we need to do is to accept the weaknesses in the current methodologies of the pollsters, and seek to correct them, even if it proves a bit more costly. With a limited budget, it is better to produce fewer polls of higher quality than a plethora of polls of lower quality. Then adjust for known biases. Or else, just look at what the betting is saying. It’s been getting it right since 1868, before polls were even invented, and continues to do a pretty good job.</p>
<p class="fine-print"><em><span>Leighton Vaughan Williams does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While pre-election polls got their sums wrong, and seemed to ignore biases in the rush to publish, a far more accurate call was being made in the betting shops of Britain.Leighton Vaughan Williams, Professor of Economics and Finance, Nottingham Trent UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/412042015-05-05T14:14:37Z2015-05-05T14:14:37ZExplainer: how do you read an election poll?<figure><img src="https://images.theconversation.com/files/80319/original/image-20150504-8401-1fggird.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Confused yet?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/streetmatt/15083719955">Matthew G</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>The first published opinion poll seems to have been in 1824, when the <a href="http://www.fandm.edu/uploads/files/271296109834777015-the-first-political-poll-6-18-2002.pdf">Harrisburg Pennsylvanian</a> newspaper correctly predicted the result of the US presidential election. Things have moved on a long way since then, and opinion polls have become a permanent part of the election landscape in most democratic countries.</p>
<p>But do the polls tell us anything useful? Well, up to a point, yes, they do.</p>
<p>The first thing about polls that puzzles people is how they can produce anything like accurate information when they canvass the voting intentions of only about 1,000 people – a tiny proportion of the population.</p>
<p>I like to use the analogy of cooking a large pan of soup. If you want to know how it tastes, you don’t have to eat the whole lot – a spoonful will do, as long as you’ve stirred the pot up properly. (And indeed the same size spoonful will do, whether you’re tasting a small pan or a huge vat.)</p>
<p>If you can get a sample of electors that’s representative of the whole electorate, you can ask them how they will vote and that will tell you, to a pretty good approximation, how the whole electorate will vote.</p>
<p>But you do have to do the equivalent of stirring up the soup. If you just skim a spoonful off the top, you’ll get whatever floats, and that might not represent the whole of the pan.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5O-wk3Snbak?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Find out more.</span></figcaption>
</figure>
<p>The poll conducted by US magazine <a href="http://historymatters.gmu.edu/d/5168/">Literary Digest for the 1936 presidential election</a> is a classic example of what happens if you don’t stir. The publication asked about 10m people how they intended to vote in the contest between Alfred Landon and Franklin D Roosevelt, and about 2.4m replied – a sample far bigger than the average opinion poll. Their results showed Landon well in the lead, but in fact Roosevelt won by a landslide. The problem was that the Literary Digest had asked (mainly) just its own readers, who were far from typical of the US electorate.</p>
<p>Since then, pollsters have been much more careful. These days, a typical opinion poll will involve asking maybe 1,000 or 2,000 carefully chosen people which party they intend to vote for. The people are chosen, as far as possible, to match the population of the area or country being polled, in terms of age, gender, some measure of social class, and probably other features such as work status (employed, unemployed, retired, and so on) and the region of the country where they live.</p>
<p>Most of the polling organisations speak to their sample of electors either by telephone interviews or through websites.</p>
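<p>When the achieved sample still misses those demographic targets, pollsters typically weight the responses so that each group counts in proportion to its population share. Here is a minimal sketch of that cell-weighting step, with made-up cells and figures:</p>
<pre><code>def demographic_weights(sample_counts, population_shares):
    """Weight each demographic cell so the sample matches the population.

    sample_counts     - cell name to number of respondents
    population_shares - cell name to population proportion
    A weight above 1 boosts an under-represented cell; below 1 shrinks
    an over-represented one.
    """
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] * n / count
            for cell, count in sample_counts.items()}

# Say 18-34s are 20% of the population but only 10% of a 1,000 sample:
print(demographic_weights({"18-34": 100, "35plus": 900},
                          {"18-34": 0.2, "35plus": 0.8}))
# {'18-34': 2.0, '35plus': 0.888...}
</code></pre>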
<h2>The margin of error</h2>
<p>There are several kinds of opinion poll results for an election. The commonest kind in the UK give the voting intentions for the whole of Great Britain. (Often Northern Ireland is left out of these “national” results, because politics works differently there.)</p>
<p>The pollster will publish the percentage of electors that say they will vote for each of the political parties, and will also give some idea of the <a href="https://theconversation.com/the-margin-of-error-explained-16393">margin of error</a> attached to these percentages.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=416&fit=crop&dpr=1 600w, https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=416&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=416&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=523&fit=crop&dpr=1 754w, https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=523&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/80449/original/image-20150505-16646-vxevx4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=523&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The latest for election 2015.</span>
<span class="attribution"><a class="source" href="https://yougov.co.uk/news/2015/05/05/conservatives-and-labour-tied/">YouGov</a></span>
</figcaption>
</figure>
<p>The margin of error is a way of reporting what’s called sampling error in the statistical trade. The point is that the pollsters didn’t ask everyone. They may have been unlucky with their sampling and just happened, by chance, to get rather fewer SNP supporters in their sample than are found in the whole electorate.</p>
<p>For a typical poll with 1,000 people, the margin of error would be about 3 percentage points. The exact meaning of this is a bit complicated – if a party is reported in a poll as having 34% of the national vote, with a margin of error of 3 percentage points, that means that there’s a high chance that the true percentage is somewhere between 31% and 37%, but it doesn’t even entirely exclude percentages outside that range.</p>
<p>Increasing the number of people polled will obviously tend to reduce the margin of error, but not as strongly as you might think. If you polled 2,000 people instead of 1,000, for instance, then (other things being equal) the margin of error would go down only from about 3 percentage points to about 2.</p>
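<p>The square-root arithmetic behind that is easy to check. A minimal sketch using the standard worst-case formula (a 50% share gives the widest interval, which is the usual headline figure):</p>
<pre><code>import math

def margin_of_error(n, share=0.5, z=1.96):
    """Headline margin of error, in percentage points, for a sample of n."""
    return 100 * z * math.sqrt(share * (1 - share) / n)

print(round(margin_of_error(1000), 1))   # about 3.1 points
print(round(margin_of_error(2000), 1))   # about 2.2 points - not half
print(round(margin_of_error(4000), 1))   # about 1.5 - halving the error
                                         # needs four times the sample
</code></pre>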
<h2>A mug’s game?</h2>
<p>Much more of a problem is that even a perfect set of poll results wouldn’t tell you the election result. To this extent it stops being like the soup. If you did eat the whole panful, you certainly would know exactly how the soup tastes, though the guests you’d cooked it for might be less than pleased.</p>
<p>If you could go out today and ask every elector in the UK how they are going to vote, that would give you some idea of the election result, and there would be no margin of error, but you still wouldn’t know the exact result. Some people will change their mind between now and the election. Some people will just not turn out to vote on election day. Some people won’t tell you the truth. Pollsters try to allow for these effects, but it’s not easy.</p>
<p>And that’s not even the biggest problem in countries like the UK, which has a <a href="https://theconversation.com/the-2015-election-could-reignite-the-debate-about-electoral-reform-in-britain-37449">first-past-the-post electoral system</a>.</p>
<p>Even if you knew exactly what the national shares of the vote would be, that doesn’t tell you how many parliamentary seats each party will win. In the 2010 general election, for instance, Labour got 29% of the country’s votes, and almost 40% of the parliamentary seats, while the Liberal Democrats got 23% of the votes and only 9% of the seats. UKIP got 3% of the votes but no seats, while the SNP got 1.7% of the votes and six seats.</p>
<p>The process of translating opinion poll vote shares to forecasts of seats in parliament can therefore be very complicated, and different polling companies and political analysts do it in very different ways. Until recently the most common way to do it used something called “uniform national swing”, which basically updates the results from the previous election in each constituency by incorporating how the overall percentages of votes changed nationally.</p>
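<p>A minimal sketch of uniform national swing, with made-up constituency figures: each seat’s previous result is shifted by the national change in each party’s vote share, and the local winner recomputed.</p>
<pre><code>def uniform_national_swing(constituencies, old_national, new_national):
    """Project seats by applying the national change in each party's
    share uniformly to every constituency's previous result."""
    change = {p: new_national[p] - old_national[p] for p in old_national}
    seats = {}
    for result in constituencies.values():
        projected = {p: share + change.get(p, 0.0)
                     for p, share in result.items()}
        winner = max(projected, key=projected.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# Two imaginary seats; nationally Labour up 2 points, Conservatives down 2:
print(uniform_national_swing(
    {"Seat A": {"Con": 41.0, "Lab": 40.0},
     "Seat B": {"Con": 45.0, "Lab": 35.0}},
    {"Con": 37.0, "Lab": 30.0},
    {"Con": 35.0, "Lab": 32.0},
))
# {'Lab': 1, 'Con': 1} - Seat A flips, Seat B does not
</code></pre>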
<p>That often worked pretty well in the past, but can’t take into account the specific characteristics of different constituencies, and it also can’t take into account data from opinion polls run in single constituencies, which are becoming more common. So a wide range of different methods is being used to produce seat forecasts for the 2015 election – and we won’t really know which is best until after all the votes are counted and we can compare the forecasts with reality.</p>
<h2>The exit poll</h2>
<p>An exit poll is a special kind of opinion poll that, as the name might suggest, involves asking people who they voted for as they exit from a polling station. You might wonder why anyone bothers – after all, the true results will be known pretty soon. One reason becomes obvious when you look at who actually pays for exit polls – it’s media and news organisations. An exit poll can provide a prediction, and encourage people to watch the media channel that provides it. They also provide material to fill up those lengthy election night programmes.</p>
<p>Exit polls are often very accurate compared to polls taken before polling day, because they ask people who are known to have at least gone to the polling station and therefore probably voted, and because they aren’t affected by last-minute changes of voting intention.</p>
<h2>What to look out for</h2>
<p>So what should you look for when you’re reading poll results? Certainly, look at the size of the margin of error. It will be reported somewhere – if you can’t find it in the news report, it will be given on the polling organisation’s website. If the news story is making a big fuss about, say, a 2% change in one party’s fortunes, well, that’s probably less than the margin of error and may simply be down to chance variation from one sample of electors to another. The same goes if it’s making a big play about a difference of just one or two percentage points between two parties.</p>
<p>Something else to bear in mind is that there can be systematic differences in the results between different polling organisations. These so-called “house effects” might be due to differences in the way they choose their samples or do the weighting. Most of the polling organisations whose polls you see in the mainstream media are members of a trade organisation, the <a href="http://www.britishpollingcouncil.org/">British Polling Council</a>. This imposes rules of good conduct on its members, so they will not be doing something deliberately deceptive, and will be reporting what they did adequately.</p>
<p>But perhaps the most important advice is never to read too much into the results of just one opinion poll. It may be a fluke – it may have had, entirely by chance or bad luck, a particularly unrepresentative sample. Many organisations produce <a href="http://www.bbc.co.uk/news/politics/poll-tracker">summaries</a> of most or all of the published opinion polls, which is likely to be more accurate than a single poll because it averages out chance variations and house effects.</p>
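<p>The simplest version of such a summary is a plain average across recent polls – a minimal sketch is below; real trackers add recency weighting and house-effect corrections, which we omit here.</p>
<pre><code>from collections import defaultdict

def poll_average(polls):
    """Average each party's share across a list of polls.

    polls - a list of dicts mapping party name to share, e.g.
            [{"Con": 34, "Lab": 33}, {"Con": 32, "Lab": 34}, ...]
    Averaging across pollsters damps both sampling noise and house
    effects, provided no single polling house dominates the list.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for poll in polls:
        for party, share in poll.items():
            totals[party] += share
            counts[party] += 1
    return {party: totals[party] / counts[party] for party in totals}

print(poll_average([{"Con": 34, "Lab": 33}, {"Con": 32, "Lab": 34}]))
# {'Con': 33.0, 'Lab': 33.5}
</code></pre>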
<p>Other organisations combine poll results and (possibly) other information in a more sophisticated way, and may use them to produce <a href="http://electionforecast.co.uk/">forecasts</a> of numbers of parliamentary seats.</p>
<p>But, for the 2015 UK general election, we won’t know who predicted best until the results are all out – and as usual, by that time we won’t need a forecast any more. Not until the next time, anyway.</p>
<p class="fine-print"><em><span>Kevin McConway does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Polling is a central part of any election these days. If only it weren’t such a complicated business.Kevin McConway, Professor of Applied Statistics, The Open UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/401302015-04-16T15:52:34Z2015-04-16T15:52:34ZThe numbers that explain the British election polling deadlock<p>Recent polls have put Labour and the Conservatives in what seems to be an unshakable dead heat. The latest <a href="http://lordashcroftpolls.com/2015/04/ashcroft-national-poll-con-33-lab-33-lib-dem-9-ukip-13-green-6/">Ashcroft poll</a> puts them both on 33%. This takes the parties back to exactly where they were in the <a href="http://lordashcroftpolls.com/2015/03/ashcroft-national-poll-con-33-lab-33-lib-dem-8-ukip-12-green-5/">March 23 Ashcroft poll</a>.</p>
<p>It seems as though neither party can do anything to nudge ahead, be it knocking on doors or making major pledges to voters. To understand why, we need to look at the key factors that drive voting behaviour.</p>
<p>There are four that are particularly important – approval of the government’s record, partisanship, evaluations of the leaders, and perceptions of party performance in relation to the most important issues in the election.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=391&fit=crop&dpr=1 600w, https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=391&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=391&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=491&fit=crop&dpr=1 754w, https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=491&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/77793/original/image-20150413-24307-p30nwk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=491&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trends over time.</span>
<span class="attribution"><span class="source">Essex Continuous Monitoring Survey</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The chart shows trends in voting intentions in Britain over time, using data from the Essex Continuous Monitoring Survey. The current data shows that neither Labour nor the Conservatives has a significant advantage over the other; the two are running pretty much neck-and-neck in the polls.</p>
<p>Approval of the government’s record provides a broad brush measure of how the governing party or parties are doing. The basic story here is that roughly a third of the voters approve and just over a half disapprove of the government’s record. This hasn’t changed over the past year.</p>
<p>Partisanship, or party identification, measures the brand loyalty that voters feel towards the parties. Many people do not have this, but it is an important factor in voting intention when they do. Currently, Labour and the Conservatives are neck-and-neck in partisanship. Labour had a slight advantage for a time but it had more or less disappeared by March 2015.</p>
<p>One interesting development is that UKIP has acquired loyal partisan supporters and they are comparable in number to the Liberal Democrats. Partisanship helps to provide a core or bedrock vote – since party identifiers are inclined to stay loyal even when a party is unpopular. </p>
<h2>Getting personal</h2>
<p>The third factor that drives the vote is public evaluations of the political leaders. This is a quick and easy way for voters to judge whether or not a government will be competently led – and even if the government will be effective. It turns out that the <a href="https://theconversation.com/voter-survey-shows-miliband-panic-is-overblown-33975">likeability</a> of the leaders is a good summary measure of this factor since it is associated with other desirable leadership traits such as competence, trustworthiness and decisiveness.</p>
<p>The data shows that none of the four party leaders is particularly popular. They all score less than five on a ten-point likeability scale. However, Cameron does have an advantage over Miliband and both of these leaders have an advantage over Nick Clegg and Nigel Farage.</p>
<p>The Conservatives will undoubtedly pick up votes from having this advantage but not in huge numbers. On average, the distance between the two leaders is less than half a point on the ten-point scale. Another interesting development is that Nigel Farage appears to be losing ground on the scale, although he was ahead of Nick Clegg for most of 2014.</p>
<h2>The big three</h2>
<p>The fourth key factor which explains voting is issue perception, but only a handful of issues matter to most people. The surveys show that the economy is the leading issue, followed by the NHS and then immigration. Looking at changes over time, the economy has become slightly less important and immigration remains about the same, while the NHS has been growing in importance as the election approaches.</p>
<p>When asked if the government has done a good job or a bad job in managing these issues, respondents give the Conservatives good marks for the economy but poor marks for the NHS and abysmal marks for immigration. This is one of the reasons why the improvement in the economy observed over the last year has not greatly increased Conservative voting support.</p>
<p>Any extra credit the party receives for its performance on the economy is offset by its performance on immigration and to a lesser extent the NHS. Analysis shows that as far as these issues are concerned Conservatives are the focus of attention rather than the coalition government – this helps to explain why Liberal Democrat support has not improved over time.</p>
<p>There is another reason why an improved economic performance has not put the Conservatives ahead in the polls. Close to a half of survey respondents think that the national economy has done well over the previous year, but only about a fifth of them think that their own personal finances have improved. Thus many people feel that the economic recovery does not apply to them, and in addition about 30% think that no party is good at managing the economy. This scepticism about economic recovery is making it hard for the Conservatives to claim credit for the economy and win additional support. </p>
<p>Other factors are at work in influencing the vote but these are the main ones. And this is why Labour and the Conservatives are so close together in the polls. Neither of them has a significant advantage over the other on the key drivers of vote intentions. The election campaign might shift this, but it is looking increasingly unlikely that it will change things enough to avoid another hung parliament.</p>
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Paul Whiteley received funding from the ESRC. This article does not reflect the views of the research councils.</span></em></p>Recent polls have put Labour and the Conservatives in what seems to be an unshakable dead heat. The latest Ashcroft poll puts them both on 33%. This takes the parties back to exactly where they were in…Paul Whiteley, Professor, Department of Government, University of EssexLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/382292015-03-25T03:26:50Z2015-03-25T03:26:50ZFinding new ways to track voters’ moods, beyond polls and punters<p>The recent Queensland election result surprised everyone – including the professional pollsters and punters. Sportsbet <a href="http://www.sportsbet.com.au/content/articles/lnp-now-into-1-01-shortest-odds-possible-to-win-queensland-election">declared the result</a> and paid out for a win to the Liberal National government one day before the January 31 poll.</p>
<blockquote>
<p>We were so confident yesterday that we decided to pay out early on the Liberals as it looks a foregone conclusion. The punters obviously agree with us as they have moved to $1.01.</p>
</blockquote>
<p>In that case, the punters got it wrong. Yet that hasn’t stopped the betting agency <a href="http://www.sportsbet.com.au/blog/home/sportsbet-pays-out-early-on-nsw-election">doing the same thing</a> again ahead of the March 28 polling day in NSW.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"580155822543544320"}"></div></p>
<p>The most <a href="http://www.abc.net.au/news/2015-03-23/nsw-election-liberals-mike-baird-beats-labors-luke-foley-polls/6339672">recent Fairfax/Ipsos poll</a> shows the Baird government on track to victory, with a 54% to 46% two-party preferred lead over Labor. But as many people paying close attention to the polls have warned – including <a href="https://theconversation.com/nsw-premier-mike-baird-on-health-privatisation-and-abbotts-shadow-38093">NSW Premier Mike Baird</a> and ABC election analyst <a href="http://blogs.abc.net.au/antonygreen/2015/03/why-the-baird-government-is-vulnerable.html#more">Antony Green</a> – the election could be tighter than the polls show.</p>
<p>So beyond traditional polls and betting markets, how else could we try to gauge how people feel ahead of future elections? Social media is a goldmine of real-time information on public sentiment – and there are new ways to tap into how people really feel, including with a “social mood reader”.</p>
<h2>Polling is getting harder to do well</h2>
<p>Relying too heavily on traditional political polling or even the usually reliable betting market is a risky strategy for political parties and pundits.</p>
<p>Audience measurement for ratings and polling began in the 1930s. The two men behind those measures – <a href="http://www.nytimes.com/1985/05/02/nyregion/archibald-crosley-dies-at-88-helped-develop-scientific-polling.html">Archibald Crossley</a> and <a href="http://www.gallup.com/corporate/178136/george-gallup.aspx">George Gallup</a> – were close colleagues.</p>
<p>Audience ratings and political polling were created to provide accurate samples and also to stop “hypoing”, an industry term that means distortion by vested interests wanting to control what public opinion looked like.</p>
<p>Ratings and polling rely on robust samples, which properly represent the defined populations from which they are drawn. Modern political and audience ratings pollsters have always tried to draw an accurate statistical sample for their surveys. </p>
<p>But experts such as the former head of the American Association for Public Opinion Research, <a href="http://poq.oxfordjournals.org/content/72/5/831.full">Peter Miller</a>, have found that maintaining the quality of traditional and web polls around the world is getting harder by the day. </p>
<p>Some of the reasons include the shift away from landlines to <a href="http://www.smh.com.au/comment/ten-things-polls-never-tell-you-20150322-1m3ug2.html">mobile phones</a>; the challenge of web identity <a href="http://transparency.aapor.org/index.php/transparency">transparency</a>; the unwillingness of many people, especially <a href="http://www.smh.com.au/comment/ten-things-polls-never-tell-you-20150322-1m3ug2.html">young people</a>, to participate in surveys; and, not least, increased privacy concerns.</p>
<h2>Tapping into the public mood on social media</h2>
<p>What people post and share on social media – such as on Twitter, Instagram and Facebook – can show political moods in real time and it provides qualitatively rich data.</p>
<p>In surveys and focus groups, people are asked questions about their views or behaviour. In social media networks, people’s views are on display as they are expressed. Those views, in turn, are often re-posted elsewhere and opinions are built up on particular issues.</p>
<p>As QUT’s Social Media Research team has covered <a href="https://theconversation.com/nswvotes-twitter-chatter-shows-the-power-of-incumbency-39110">in more detail in The Conversation</a>, among the most popular topics on Twitter in the #nswvotes campaign have been #nswnotforsale and #csg (coal seam gas). </p>
<p>Similar themes dominated the Queensland election, particularly privatisation, which was often discussed with the hashtags #assetssales or #Not4Sale. Those topics’ popularity was driven partly by widespread public concern but also by a well-organised <a href="http://www.brisbanetimes.com.au/queensland/antigovernment-billboards-in-the-sights-of-bleijie-20140301-33soe.html">Queensland union</a>-led <a href="https://www.facebook.com/Not4SaleQLD">Not4Sale</a> campaign, working closely with the Labor Party. (Interestingly, if you try to follow the Queensland not4sale.org.au link, it now automatically redirects you to stoptheselloff.org.au – the NSW unions’ anti-privatisation website.)</p>
<p>What people said and shared about “asset sales” in the Queensland election clearly indicated their voting intentions.</p>
<h2>How a ‘social mood reader’ works</h2>
<p>The risk with social media is that it can sound like an echo chamber, depending on whether you only follow conversations you agree with or actively seek out different views.</p>
<p>So how can a pollster, or anyone, be confident that a negative or a positive “mood” in social media networks will translate into voting behaviour? <a href="https://theconversation.com/nswvotes-twitter-chatter-shows-the-power-of-incumbency-39110">New methods</a> are emerging that try to capture social mood from what people say in their social media networks.</p>
<p>Dr Brett Adams’s work at Curtin University is one example: his <a href="http://dl.acm.org/citation.cfm?id=2598633">“social mood reader”</a>, originally developed for autistic groups, quickly identifies the mood of different networks.</p>
<p>Here’s how it works. After getting permission from an individual, the social mood reader can access not only what they say and share publicly, but also their private information: who they follow and like (such as whether they follow Mike Baird on Facebook, or Stop the Sell Off on Twitter), where they get their information from (for instance, liking The Conversation or ABC News on Facebook), who they are friends with, and what they are seeing in their feeds. It also goes beyond social media, taking in other online services the person uses, such as email.</p>
<p>There are some key differences between traditional polls or surveys (which take people time to participate in and which are often done on behalf of political parties or commercial interests) and the social mood reader, including:</p>
<ul>
<li>it takes a participant no time to be involved with the social mood reader, as they are simply giving access to what they are already saying and doing on social media</li>
<li>people like the chance to have their say on matters they care about, especially when they know it is for independent research, rather than for commercial purposes.</li>
</ul>
<p>An individual participant who agrees to share their information with the social mood reader gets to nominate some of the key circles they belong to: for instance, family, friends, work, community groups, political groups etc. You can see an example of how that looks below.</p>
<p>When there is crossover between those groups, you see one or more lines criss-crossing between them.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=325&fit=crop&dpr=1 600w, https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=325&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=325&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=408&fit=crop&dpr=1 754w, https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=408&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/73583/original/image-20150303-31860-5xpynt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=408&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What the social mood reader platform currently looks like, with researcher Mark Balnaves at the centre and his various social connections all around him, on an issue where the community was largely fairly happy (or green).</span>
<span class="attribution"><span class="source">Mark Balnaves</span></span>
</figcaption>
</figure>
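<p>To make the structure concrete, here is a minimal sketch of the kind of consent-based participant record the platform implies. Every class and field name is hypothetical – Dr Adams’s system is not publicly documented – so this only illustrates the idea of opt-in access to posts and follows, plus self-nominated circles and their crossovers.</p>
<pre><code>
# Hypothetical sketch of an opt-in participant record for a social
# mood reader. Names are invented; the real platform is not public.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    consented: bool = False                          # explicit opt-in first
    follows: set[str] = field(default_factory=set)   # pages, politicians etc.
    posts: list[str] = field(default_factory=list)   # text the reader may scan
    circles: dict[str, set[str]] = field(default_factory=dict)

    def add_circle(self, label: str, members: set[str]) -> None:
        """Participants nominate their own circles (family, work, etc.)."""
        self.circles[label] = members

    def crossovers(self) -> set[str]:
        """People in more than one nominated circle - drawn as the
        criss-crossing lines in the visualisation."""
        seen: set[str] = set()
        overlap: set[str] = set()
        for members in self.circles.values():
            overlap.update(seen.intersection(members))
            seen.update(members)
        return overlap
</code></pre>
<p>A participant who lists a colleague under both “work” and “community groups”, for example, would show up in <code>crossovers()</code>, producing one of the connecting lines in the figure above.</p>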
<h2>Red for angry, green for happy</h2>
<p>I used Dr Adams’s social mood reader in a 2011 trial with the City of Geraldton-Greenough in Western Australia, to see how the local community felt about a range of issues: from the rise of fly-in, fly-out mining and what it meant for the community, right down to more local issues such as bike safety and the future of a playground roundabout. </p>
<p>Different moods are represented by different colours, based on the Affective Norms for English Words (<a href="http://csea.phhp.ufl.edu/media/anewmessage.html">ANEW</a>). For example, a very deep red represents “enraged”, while a deep green indicates great “happiness”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=458&fit=crop&dpr=1 600w, https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=458&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=458&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=575&fit=crop&dpr=1 754w, https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=575&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/73584/original/image-20150303-31825-7zg5gb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=575&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The colours attributed to various moods in the social mood reader.</span>
<span class="attribution"><span class="source">Brett Adams</span></span>
</figcaption>
</figure>
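<p>ANEW rates English words on a 1–9 pleasantness (valence) scale, which maps naturally onto a red-to-green gradient. The sketch below shows one plausible way to do that mapping; the word ratings here are a tiny invented sample, and the mood reader’s actual formula is not published.</p>
<pre><code>
# Illustrative mood scoring: average ANEW-style valence ratings for the
# rated words in a post, then map the score onto a red-green colour.
# Ratings here are an invented sample; real ANEW covers ~1,000 words
# on a 1 (very unpleasant) to 9 (very pleasant) scale.

VALENCE = {
    "enraged": 2.0, "angry": 2.9, "worried": 3.1,
    "fine": 6.0, "happy": 8.2, "delighted": 8.3,
}

def post_valence(text):
    """Mean valence of rated words in a post, or None if none match."""
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else None

def valence_to_colour(v):
    """Interpolate from deep red (v=1) to deep green (v=9) as hex RGB."""
    t = (v - 1.0) / 8.0                  # normalise 1-9 onto 0-1
    red, green = round(255 * (1 - t)), round(255 * t)
    return f"#{red:02x}{green:02x}00"

post = "residents are angry and worried about the roundabout"
v = post_valence(post)
if v is not None:
    print(f"valence {v:.1f} -> {valence_to_colour(v)}")  # valence 3.0 -> #bf4000
</code></pre>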
<p>Nearly 5,000 people participated in the Geraldton-Greenough council study. The social mood reader revealed that the single most controversial issue was fly-in, fly-out mining, with the overall community mood red, or “angry”, at what it meant for their community.</p>
<p>The Geraldton locals were also angry when asked about the prospect of losing their playground roundabout; when the council saw that result, they kept it.</p>
<p>We also used Brian Sullivan’s <a href="http://civicevolution.org/">CivicEvolution platform</a> in the study, an easy way for people to pass on their ideas. So as well as testing how people felt about issues, we gave participants the chance to propose their ideas directly to the council. </p>
<p>On bike safety, for instance, the council not only found out it was a widespread community concern via the social mood reader, but also got some constructive suggestions about what could be done to improve safety.</p>
<h2>Giving people easier ways to have their say</h2>
<p>As far as I know, Dr Adams’s social mood reader has not been used in any Australian elections yet. But how could it help politicians if it were used in an election?</p>
<p>A good MP should always have a general sense of how her or his electorate feels. But rather than relying just on polls, focus groups, door knocking and gut instinct, a social mood reader could also be used to reveal how a community really feels.</p>
<p>The Geraldton-Greenough council was able to see how red and angry locals were about the prospect of losing their playground roundabout <em>before</em> it happened – and that changed the council’s mind, keeping the community happier. </p>
<p>We know that Australian politics is already highly poll-driven, with everyone from the prime minister down closely watching opinion polls and focus groups. So do we really want one more method of telling politicians their reforms are unpopular, when sometimes those policies might be the right thing to do?</p>
<p>There is no easy answer to that. But the upside of technology like the social mood reader is that it does give the community an immediate opportunity to have their say, participate in the democratic process and feel as if they are really being heard.</p>
<hr>
<p><em>Read more of The Conversation’s coverage of the <a href="https://theconversation.com/au/topics/nsw-election-2015">2015 NSW election</a>.</em></p>
<p class="fine-print"><em><span>Mark Balnaves has received and receives funding from the Australian Research Council on the history of audience ratings in Australia, development of fear of terrorism metrics, history of popular music in Perth, social games, social media and e-governance, and mapping the creative industries in Newcastle NSW. The social media mood reader research discussed in this article was conducted as part of the Australian Research Council Linkage study Transitions to a Sustainable City - Geraldton WA: An applied study into co-creating sustainability though civic deliberation and social media.</span></em></p>Beyond polls and betting markets, how else can we gauge how people feel ahead of future elections? Social media is a goldmine, and one of the newer ways to tap into it is with a “social mood reader”.Mark Balnaves, Professor of Communication, University of NewcastleLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/106922012-11-13T03:11:55Z2012-11-13T03:11:55ZRepublicans trust voter modelling - why not climate modelling?<figure><img src="https://images.theconversation.com/files/17551/original/x9b9262x-1352770360.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">After defeat in the 2012 election, it is back to the drawing board for the Republican Party. But will they address the true concerns of the electorate?</span> <span class="attribution"><span class="source">EPA/STEPHAN SAVOIA/POOL</span></span></figcaption></figure><p>Tuesday, November 6 was a game changer. The Republican Party in the United States has come to understand that the political environment has been altered. White males can no longer dictate the results of an election. The dynamics of the voting electorate have changed dramatically, and they will only continue to do so. </p>
<p>It is safe to assume that conservatives who drive the agenda of the Republican Party “get it”. They are not stupid; indeed, they are quite sophisticated. They understand politics and they will respond to a changing environment.</p>
<p>So why are they so resistant to the idea of a changing climate? This may seem an odd question at this point. But ignoring the evidence about a changing climate that puts increasing numbers of people at risk is folly – just as ignoring the changing demographics in critical states has already proven to be. </p>
<p>I can only hope that the Republican Party has learnt something and that those lessons will inform their opinions of how to cope, and not just with a dynamic electorate. My question is: why not apply the same lessons to respond to the dynamic components of the global environment?</p>
<p>Let me be specific.</p>
<p>A new wave of pollsters, informed by a very sophisticated understanding of social science, worked on their models over the past year or so. They understood that the political environment was dynamic, deciphered why, and then incorporated those insights into their projections of the future. </p>
<p>They took quite a bit of heat for being biased in their projections, but they turned out to be right! And that is the standard to be applied. Their advantage was that they had a definitive date when reality could be compared with model projections, and they did very well.</p>
<p>Climate scientists have also been working on their own models of the climate – the physical rather than the political. They have come to understand that the portrait of a warming planet is displayed most graphically in the distribution of extreme weather events: not just hurricanes like Sandy, but also droughts, wildfires and extreme precipitation events like the four and a half inches that fell on my deck in 35 minutes in the summer of 2011. </p>
<p>They too have been criticised, but they do not have the benefit of a date certain in the foreseeable future when all will be revealed. That is to say, they do not have an “election day”.</p>
<p>Even so, why don’t Republicans recognise the parallel of changing environments – political on the one hand and climatic on the other? Republicans are beginning to reorganise their party on the basis of election results that can be attributed to demographic changes. </p>
<p>So why not also begin to reorganise their approach on the basis of observed changes in the frequency of extreme weather events that can be attributed to climate change? There are “confounding factors” in both correlations, but why do they believe one conclusion more than another? Probably because the consequences to the Party are larger in the political case than they are in the climate case, at least in the short run.</p>
<p>But why can they not recognise that the climate system is just as dynamic as the political environment, and that the “old normals” are broken in both places?</p>
<p>That would be a first step in working toward perhaps the most significant compromise with a re-elected President, who included in his acceptance speech the statement, “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.”</p>
<p class="fine-print"><em><span>Gary W Yohe is the co-chair of the National Climate Assessment 2013 (<a href="http://www.globalchange.gov/what-we-do/assessment/people/nca-author-teams">http://www.globalchange.gov/what-we-do/assessment/people/nca-author-teams</a>).</span></em></p>Tuesday, November 6 was a game changer. The Republican Party in the United States has come to understand that the political environment has been altered. White males can no longer dictate the results of…Gary W. Yohe, Huffington Foundation Professor of Economics and Environmental Studies, Wesleyan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/82012012-07-12T04:20:06Z2012-07-12T04:20:06ZWhat we really care about, and how to lift sustainability’s real appeal<figure><img src="https://images.theconversation.com/files/12845/original/km94b6cy-1342042289.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">More pressing matters: people can be "concerned' about many things, but what really matters to them are problems close to daily life.</span> <span class="attribution"><span class="source">AAP/April Fonti</span></span></figcaption></figure><p>New polls frequently announce that a significant proportion of the population is concerned about an issue or willing to sacrifice for a cause, from environmental sustainability to Third World debt. These polls create the sense that there is a public mandate for action on these issues - one that businesses and governments need to follow. However, standard polls don’t accurately measure people’s true beliefs.</p>
<p><strong>Polls fail for two reasons</strong></p>
<p>First, many issues are subject to what’s called social response bias. Vague questions about an issue will generally elicit responses that reflect what the respondent believes is seen positively by society or the surveying organization.</p>
<p>Second, there is no cost to responding to a poll. Answering “I am concerned” forces no real cognitive choice on the individual. There is no consequence associated with the opinion.</p>
<p>Case in point: surveys always indicated that Australians were very concerned about the environment. But after the government proposed a carbon tax, support for this environmental issue quickly <a href="http://www.smh.com.au/environment/climate-change/nation-now-indifferent-to-environment-20120429-1xt25.html">declined</a>. Suddenly, people realised that their support had consequences.</p>
<p>My colleagues and I developed a polling methodology that makes consequences real and gets at people’s true beliefs. Our approach assesses the relative value that people place on different issues, forcing individuals to make realistic tradeoffs. In other words, rather than being asked their opinion of an issue generally, individuals have to choose among issues in a way that reveals what truly matters when something must be taken off the table.</p>
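<p>The article doesn’t name the exact technique, but one standard way to force such tradeoffs is best-worst scaling: respondents repeatedly see small sets of issues and must pick the most and least important in each set. The Python sketch below, with invented issues and responses, shows how a simple best-minus-worst salience score falls out of those forced choices.</p>
<pre><code>
# Hedged sketch of best-worst scaling, one common way to force the
# tradeoffs described above. Issues and responses are invented.
from collections import Counter

# Each trial: (issues shown, picked as MOST important, picked as LEAST).
trials = [
    (("healthcare", "crime", "climate change"), "healthcare", "climate change"),
    (("crime", "cost of living", "climate change"), "cost of living", "climate change"),
    (("healthcare", "cost of living", "crime"), "healthcare", "crime"),
]

best, worst, shown = Counter(), Counter(), Counter()
for subset, most, least in trials:
    shown.update(subset)
    best[most] += 1
    worst[least] += 1

# Best-minus-worst score, normalised by how often each issue appeared.
# Unlike "are you concerned?", picking one issue costs another its slot.
for issue in sorted(shown):
    score = (best[issue] - worst[issue]) / shown[issue]
    print(f"{issue:15s} {score:+.2f}")
</code></pre>
<p>In this toy data, proximate issues such as healthcare score high and “climate change” scores lowest – exactly the pattern the real surveys found once respondents had to give something up.</p>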
<p>We have polled almost 10,000 people in Australia, Germany, the UK, and the US. We’ve found basically identical results across countries. Our findings for Australia are below.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=517&fit=crop&dpr=1 600w, https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=517&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=517&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=650&fit=crop&dpr=1 754w, https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=650&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/12844/original/rtrrkstx-1342038392.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=650&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This graph outlines the relative issue salience, or importance, for Australians of 16 general categories of social, economic and political issues (underlying these categories are 113 individual issues). The graph is read as indicating the likelihood (from 0% to 100%) that when an item appears it is considered to be salient (the ratio of the numbers indicates the odds that one issue dominates another).</span>
<span class="attribution"><span class="source">T. Devinney</span></span>
</figcaption>
</figure>
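<p>To read the odds off the graph, with made-up numbers: if “crime and public safety” registered 60% and “environmental sustainability” 20%, the 60:20 ratio would mean odds of 3 to 1 that crime dominates the environment when the two are pitted against each other.</p>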
<p><strong>Our findings and what they mean for sustainability</strong></p>
<p>We find proximity matters: people care about issues close to their daily lives. These issues of high concern include food and health, crime and public safety, access to services, equality of opportunity, and individual economic well-being. Issues that seem more distant are lower priority.</p>
<p>For those of us focused on sustainability, the results are reason for concern. Social and environmental sustainability are among the lower-priority issues, especially when they are framed as global rather than local. In addition, people’s concern for environmental sustainability has declined dramatically over the last five years: in 2007, environmental sustainability ranked 4th out of 16 issues in terms of level of concern; by 2011, it had slipped to 8th.</p>
<p>These trends are the same in the other countries we studied. A minor difference is that Americans are slightly less environmentally concerned than Australians, and Germans are slightly more concerned, with the UK in between.</p>
<p>It’s not clear why environmental concern is decreasing in the countries studied, although we do know the change is not related solely to the global financial crisis. It is conceivable that the high concern in 2007, the first year we studied, was unusual. In 2007, Al Gore won the Nobel Prize and an Oscar for his climate-change work; the year was a public relations watershed for the environmental movement. A real possibility is that the lower results in 2011 represent a more realistic view of people’s long-term values.</p>
<p>What is clear is that the global environmental movement is facing an uphill battle to keep vital sustainability issues on the agenda, against issues that appear more relevant to the individual members of our societies.</p>
<p>The lesson for sustainability advocates is that they need to make environmental issues as relevant to people as things like “access to medicine” or “freedom from discrimination”. Individuals will prioritise sustainability issues if they see their relevance to everyday life. With climate change, for example, people care more about record heat locally than about the fate of Arctic polar bears. They care about potential toxins in household products more than international pollution treaties. Companies should look for ways to emphasise how sustainability issues hit home.</p>
<p>Advocates should be wary of speaking in grand terms – as with, for example, Greenpeace’s recent declaration that it is moving to a “<a href="http://www.guardian.co.uk/environment/2012/jun/19/greenpeace-rio-20-civil-disobedience">war footing</a>” to protect the world’s oceans. Unfortunately, ordinary people are motivated to act, not by a higher noble cause, but when they feel their basic rights and livelihood, and that of those around them, are affected.</p>
<p><strong>More about our research</strong></p>
<p>Our first report examines the results from over 3,000 people in Australia in 2007 and 2011. The full report is available for download <a href="http://www.modern-cynic.org/social-economic-and-political-values-reports-2/">here</a>. The reports on the US, Germany and UK will be available beginning in September 2012.</p>
<p><em>Comments welcome below.</em></p>
<p class="fine-print"><em><span>Timothy Devinney receives funding from The Australian Research Council.</span></em></p>New polls frequently announce that a significant proportion of the population is concerned about an issue or willing to sacrifice for a cause, from environmental sustainability to Third World debt. These…Timothy Devinney, Professor of Strategy, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.