<h1>Fake viral footage is spreading alongside the real horror in Ukraine. Here are 5 ways to spot it</h1>
<p>Amid the alarming images of <a href="https://theconversation.com/russia-invades-ukraine-5-essential-reads-from-experts-177815">Russia’s invasion of Ukraine</a> over the past few days, millions of people have also seen <a href="https://www.politico.com/news/2022/02/24/social-media-platforms-russia-ukraine-disinformation-00011559">misleading, manipulated or false information</a> about the conflict on social media platforms such as Facebook, Twitter, TikTok and Telegram.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of fake news TikTok video" src="https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=891&fit=crop&dpr=1 600w, https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=891&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=891&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1119&fit=crop&dpr=1 754w, https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1119&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/448664/original/file-20220226-31488-1blhz2o.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1119&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Old footage, rebadged on TikTok as the latest from Ukraine.</span>
<span class="attribution"><span class="source">TikTok</span></span>
</figcaption>
</figure>
<p>One example is this <a href="https://www.tiktok.com/@notimundo/video/7068170668507974918?_t=8Q8LwdZRa8s&_r=1">video of military jets posted to TikTok</a>, which is historical footage but captioned as live video of the situation in Ukraine.</p>
<p>Visuals, because of their persuasive <a href="https://link.springer.com/article/10.1007/s12525-019-00345-y">potential</a> and attention-grabbing nature, are an especially potent choice for those seeking to mislead. Where creating, editing or sharing inauthentic visual content isn’t satire or art, it is usually <a href="https://www.tandfonline.com/doi/pdf/10.1080/21670811.2017.1345645?casa_token=t8LANzDiQGUAAAAA:3vZ76fwtwpHt82jeB3mFJXPOpfsks4aRZHhDiCpcNVgJtDFIFcqskhUL796_P609UZm2KVwxeHy8xM4">politically or economically motivated</a>. </p>
<p>Disinformation campaigns aim to distract, confuse, manipulate and sow division, discord, and uncertainty in the community. This is a common strategy for <a href="http://repository.ou.ac.lk/bitstream/handle/94ousl/928/journalism_fake_news_disinformation_print_friendly_0%20(1).pdf?sequence=1">highly polarised nations</a> where socioeconomic inequalities, disenfranchisement and propaganda are prevalent. </p>
<p>How is this fake content created and spread, what’s being done to debunk it, and how can you ensure you don’t fall for it yourself?</p>
<h2>What are the most common fakery techniques?</h2>
<p>Using an existing photo or video and claiming it came from a different time or place is one of the most common forms of misinformation in this context. This requires no special software or technical skills – just a willingness to upload an old video of a missile attack or other arresting image, and describe it as new footage.</p>
<p>Another low-tech option is to <a href="https://www.grid.news/story/misinformation/2022/02/23/autopsied-bodies-and-false-flags-how-pro-russian-disinformation-spreads-chaos-in-ukraine/">stage or pose</a> actions or events and present them as reality. This was the case with destroyed vehicles that Russia claimed were bombed by Ukraine.</p>
<p>Using a particular lens or vantage point can also change how the scene looks and can be used to deceive. A tight shot of people, for example, can make it hard to gauge how many were in a crowd, compared with an aerial shot.</p>
<p>Taking things further still, Photoshop or equivalent software can be used to add or remove people or objects from a scene, or to crop elements out of a photograph. An example of object addition is the photograph below, which purports to show construction machinery outside a kindergarten in eastern Ukraine. The satirical text accompanying the image jokes about the “calibre of the construction machinery”, with the author suggesting that reports of damage to buildings from military ordnance are exaggerated or untrue. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1494617413245091853&quot;}"></div></p>
<p>Close inspection reveals this image was <a href="https://www.reuters.com/article/factcheck-ukraine-alteredmachinery-idUSL1N2UT2W0">digitally altered</a> to include the machinery. The tweet could be seen as an attempt to downplay the extent of damage from a Russian-backed missile attack and, in a wider context, to create confusion and doubt about the veracity of other images emerging from the conflict zone. </p>
<h2>What’s being done about it?</h2>
<p>European organisations such as <a href="https://www.bellingcat.com/news/2022/02/23/documenting-and-debunking-dubious-footage-from-ukraines-frontlines/">Bellingcat</a> have begun compiling lists of dubious social media claims about the Russia-Ukraine conflict and debunking them where necessary. </p>
<p>Journalists and fact-checkers are also working to verify content and <a href="https://twitter.com/AricToler/status/1494738571483353092?s=20&t=bndDHkpko9nibN9LjRmaWw">raise awareness</a> of known fakes. Large, well-resourced news outlets such as the BBC are also <a href="https://www.bbc.com/news/60513452">calling out misinformation</a>.</p>
<p>Social media platforms have added new <a href="https://help.twitter.com/en/rules-and-policies/state-affiliated">labels</a> to identify state-run media organisations or provide more <a href="https://9to5mac.com/2018/04/03/facebook-newsfeed-update/">background information</a> about sources or people in your networks who have also shared a particular story. </p>
<p>They have also <a href="https://www.politico.com/news/2022/02/24/social-media-platforms-russia-ukraine-disinformation-00011559">tweaked their algorithms</a> to change what content is amplified and have hired staff to spot and flag misleading content. Platforms are also doing some work behind the scenes to detect and <a href="https://transparency.twitter.com/en/reports/information-operations.html">publicly share</a> information on state-linked information operations.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-can-the-west-do-to-help-ukraine-it-can-start-by-countering-putins-information-strategy-177912">What can the West do to help Ukraine? It can start by countering Putin's information strategy</a>
</strong>
</em>
</p>
<hr>
<h2>What can I do about it?</h2>
<p>You can attempt to <a href="https://www.tandfonline.com/doi/full/10.1080/17512786.2020.1832139">fact-check images</a> for yourself rather than taking them at face value. An <a href="https://www.aap.com.au/factcheck-resources/how-do-you-fact-check-an-image/">article</a> we wrote late last year for the Australian Associated Press explains the fact-checking process at each stage: image creation, editing and distribution.</p>
<p>Here are five simple steps you can take:</p>
<p><strong>1. Examine the metadata</strong></p>
<p>This <a href="https://t.me/nm_dnr/6192">Telegram post</a> claims Polish-speaking saboteurs attacked a sewage facility in an attempt to place a tank of chlorine for a “<a href="https://theconversation.com/what-are-false-flag-attacks-and-did-russia-stage-any-to-claim-justification-for-invading-ukraine-177879">false flag</a>” attack.</p>
<p>But the video’s metadata – the details about how and when the video was created – <a href="https://twitter.com/EliotHiggins/status/1495356701717020681?s=20&t=DSIyWgyKfPu2vKvVQLjnOw">show</a> it was filmed days before the alleged date of the incident. </p>
<p>To check metadata for yourself, you can download the file and use software such as Adobe Photoshop or Bridge to examine it. Online <a href="http://metapicz.com/#landing">metadata viewers</a> also exist that allow you to check by using the image’s web link.</p>
<p>One hurdle to this approach is that social media platforms such as Facebook and Twitter often strip the metadata from photos and videos when they are uploaded to their sites. In these cases, you can try requesting the original file or consulting fact-checking websites to see whether they have already verified or debunked the footage in question.</p>
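<p>Because platforms typically strip metadata on upload, a useful first check on a downloaded file is whether it contains an EXIF block at all. Below is a minimal, illustrative Python sketch for JPEG files; the function name and the baseline-JPEG marker-walking logic are our own simplification, not taken from any particular verification tool.</p>

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment."""
    if not data.startswith(b"\xff\xd8"):  # JPEG files begin with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # no longer at a marker (e.g. entropy-coded data)
            break
        marker = data[i + 1]
        # APP1 segment whose payload starts with the "Exif\0\0" signature
        if marker == 0xE1 and data[i + 4 : i + 10] == b"Exif\x00\x00":
            return True
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        i += 2 + length  # skip marker plus its payload
    return False
```

<p>If this returns False for footage you downloaded from a platform, the metadata was likely stripped and you will need the original file or a fact-checking site instead.</p>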
<p><strong>2. Consult a fact-checking resource</strong></p>
<p>Organisations such as the <a href="https://www.aap.com.au/factcheck/">Australian Associated Press</a>, <a href="https://www.rmit.edu.au/about/schools-colleges/media-and-communication/industry/factlab/debunking-misinformation">RMIT/ABC</a>, <a href="https://factcheck.afp.com/">Agence France-Presse (AFP)</a> and <a href="https://www.bellingcat.com/news/2022/02/23/documenting-and-debunking-dubious-footage-from-ukraines-frontlines/">Bellingcat</a> maintain lists of fact-checks their teams have performed. </p>
<p>The AFP has already <a href="https://factcheck.afp.com/doc.afp.com.323W3V8">debunked</a> a video that claims to show an explosion from the current conflict in Ukraine; the footage actually comes from the <a href="https://theconversation.com/what-is-ammonium-nitrate-the-chemical-that-exploded-in-beirut-143979">2020 port disaster</a> in Beirut.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1496858320182804493&quot;}"></div></p>
<p><strong>3. Search more broadly</strong></p>
<p>If old content has been recycled and repurposed, you may be able to find the same footage used elsewhere. You can use <a href="https://www.google.com/imghp?hl=EN">Google Images</a> or <a href="https://tineye.com/">TinEye</a> to “reverse image search” a picture and see where else it appears online.</p>
<p>But be aware that simple edits such as reversing the left-right orientation of an image can fool search engines and make them think the flipped image is new.</p>
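<p>The reason a flip defeats these engines can be seen with the kind of perceptual “difference hash” many near-duplicate detectors use: mirroring an image reverses every left-to-right brightness comparison, so the fingerprint changes almost completely. A toy sketch follows; real implementations first resize the image to a small grayscale grid, and the function names here are illustrative only.</p>

```python
def dhash_bits(gray):
    """Difference hash: one bit per horizontally adjacent pixel pair."""
    return [1 if a > b else 0
            for row in gray
            for a, b in zip(row, row[1:])]

def hamming(bits_a, bits_b):
    """Count differing bits between two hashes (0 means identical)."""
    return sum(x != y for x, y in zip(bits_a, bits_b))

# A tiny 2x3 "image": mirroring it flips every brightness comparison,
# so the two hashes disagree on every single bit.
grid = [[0, 50, 100], [10, 60, 110]]
flipped = [row[::-1] for row in grid]
distance = hamming(dhash_bits(grid), dhash_bits(flipped))  # maximal distance
```

<p>A search engine comparing such fingerprints would treat the mirrored copy as a different picture, which is exactly how recycled footage evades detection.</p>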
<p><strong>4. Look for inconsistencies</strong></p>
<p>Does the purported time of day match the direction of light you would expect at that time, for example? Do <a href="https://twitter.com/Forrest_Rogers/status/1496254107660738568?s=20&t=KSr6GYxwMhqW719GhZPvlA">watches</a> or clocks visible in the image correspond to the alleged timeline claimed?</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1496254107660738568&quot;}"></div></p>
<p>You can also compare other data points, such as politicians’ schedules or verified sightings, <a href="https://earth.google.com/static/9.157.0.0/app_min.html">Google Earth</a> vision or <a href="https://www.google.com/maps">Google Maps</a> imagery, to try to triangulate claims and check whether the details are consistent.</p>
<p><strong>5. Ask yourself some simple questions</strong></p>
<p>Do you know <em>where</em>, <em>when</em> and <em>why</em> the photo or video was made? Do you know <em>who</em> made it, and whether what you’re looking at is the <em>original</em> version?</p>
<p>Using online tools such as <a href="https://www.invid-project.eu/">InVID</a> or <a href="https://29a.ch/photo-forensics/#forensic-magnifier">Forensically</a> can potentially help answer some of these questions. Or you might like to refer to this list of <a href="https://drive.google.com/file/d/1kRfo1ToexG8dEiMqurXKqzEeXdyHn7Ic/view">20 questions</a> you can use to “interrogate” social media footage with the right level of healthy scepticism.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/3-2-billion-images-and-720-000-hours-of-video-are-shared-online-daily-can-you-sort-real-from-fake-148630">3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake?</a>
</strong>
</em>
</p>
<hr>
<p>Ultimately, if you’re in doubt, don’t share or repeat claims that haven’t been published by a reputable source such as an international news organisation. And consider using some of these <a href="https://www.aap.com.au/factcheck-resources/how-do-you-know-what-information-sources-to-trust/">principles</a> when deciding which sources to trust.</p>
<p>By doing this, you can help limit the influence of misinformation, and help clarify the true situation in Ukraine.</p>
<p class="fine-print"><em><span>T.J. Thomson has received funding from the AAP, the Australian Academy of the Humanities, and from the Australian Research Council through Discovery Project DP210100859. He is also a past contributor to the Australian Associated Press.</span></em></p>
<p class="fine-print"><em><span>Daniel Angus receives funding from Australian Research Council through Discovery Projects DP200100519 ‘Using machine vision to explore Instagram’s everyday promotional cultures’, DP200101317 ‘Evaluating the Challenge of ‘Fake News’ and Other Malinformation’, and Linkage Project LP190101051 'Young Australians and the Promotion of Alcohol on Social Media'.</span></em></p>
<p class="fine-print"><em><span>Paula Dootson has received funding from the Bushfire and Natural Hazards CRC, Queensland Government, and Natural Hazards Research Australia.</span></em></p>
<p class="fine-print"><em>T.J. Thomson, Senior Lecturer in Visual Communication & Media, Queensland University of Technology; Daniel Angus, Professor of Digital Communication, Queensland University of Technology; Paula Dootson, Senior Lecturer, Queensland University of Technology. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<hr>
<h1>3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake?</h1>
<figure><img src="https://images.theconversation.com/files/366370/original/file-20201029-13-q049a9.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C2479&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Twitter screenshots/Unsplash</span>, <span class="license">Author provided</span></span></figcaption></figure><p>Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd. </p>
<p>Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330”. </p>
<p>The Associated Press’s fact check <a href="https://apnews.com/article/joe-biden-video-altered-58124115393828f85cd496514bba4726">confirmed</a> the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, <a href="https://www.theguardian.com/us-news/2020/nov/02/joe-biden-manipulated-video-mixing-up-states-twitter-removed">The Guardian</a> reports.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1323048954662182913&quot;}"></div></p>
<p>If you use social media, the chances are you see (and forward) some of the more than <a href="https://www.brandwatch.com/blog/amazing-social-media-statistics-and-facts/">3.2 billion</a> images and <a href="https://www.tubefilter.com/2019/05/07/number-hours-video-uploaded-to-youtube-per-minute/">720,000 hours</a> of video <a href="https://www.sciencedaily.com/releases/2020/10/201021112337.htm">shared daily</a>. When faced with such a glut of content, how can we know what’s real and what’s not?</p>
<p>While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you. </p>
<h2>Seeing shouldn’t always be believing</h2>
<p>Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can <a href="https://theconversation.com/deepfake-videos-could-destroy-trust-in-society-heres-how-to-restore-it-110999">erode trust in civil institutions</a> such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.</p>
<p>For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.</p>
<p>Only <a href="https://www.icfj.org/our-work/state-technology-global-newsrooms">11-25%</a> of journalists globally use social media content verification tools, according to the International Centre for Journalists. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-is-tilting-the-political-playing-field-more-than-ever-and-its-no-accident-148314">Facebook is tilting the political playing field more than ever, and it's no accident</a>
</strong>
</em>
</p>
<hr>
<h2>Could you spot a doctored image?</h2>
<p>Consider this photo of Martin Luther King Jr.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;413918195456966656&quot;}"></div></p>
<p>This <a href="https://www.snopes.com/fact-check/mlk-flip-off/">altered image</a> clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on <a href="https://twitter.com/HistoryInPics/status/400762777964646400">Twitter</a>, <a href="https://www.reddit.com/r/OldSchoolCool/comments/2t0z4t/the_man_the_legend_mlkj_early_50s/">Reddit</a> and <a href="https://archive.is/POvXf">white supremacist websites</a>.</p>
<p>In the <a href="https://civilrights.flagler.edu/digital/collection/p16000coll3/id/103/">original</a> 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill. </p>
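<p>Clone-style edits like this one can sometimes be caught automatically: copying one region over another leaves two identical blocks of pixels, which a “copy-move” detector can find by hashing every small block and flagging repeats. Below is a deliberately simplified sketch over a grayscale grid; production detectors use larger, noise-tolerant blocks and robust features, and the function name is ours.</p>

```python
from collections import defaultdict

def find_duplicate_blocks(gray, bs=2):
    """Map each bs-by-bs pixel block to its locations; keep blocks seen twice.

    Two identical blocks at different positions are a hint (not proof) that
    one region was cloned over another.
    """
    seen = defaultdict(list)
    h, w = len(gray), len(gray[0])
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            block = tuple(tuple(gray[y + i][x + j] for j in range(bs))
                          for i in range(bs))
            seen[block].append((y, x))
    return {b: locs for b, locs in seen.items() if len(locs) > 1}
```

<p>On a real photograph, isolated matches in flat regions (sky, walls) are common, so practical tools additionally require matched blocks to form a coherent shifted region before raising an alarm.</p>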
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1087425407052337153&quot;}"></div></p>
<p>Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together. </p>
<p>Earlier this year, a <a href="https://www.gettyimages.com.au/detail/news-photo/volunteer-works-security-at-an-entrance-to-the-so-called-news-photo/1219247529?uiloc=thumbnail_more_from_this_event_adp">photo</a> of an armed man was photoshopped by <a href="https://www.seattletimes.com/seattle-news/politics/fox-news-runs-digitally-altered-images-in-coverage-of-seattles-protests-capitol-hill-autonomous-zone/">Fox News</a>, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times <a href="https://www.seattletimes.com/seattle-news/politics/fox-news-runs-digitally-altered-images-in-coverage-of-seattles-protests-capitol-hill-autonomous-zone/">reported</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1271620044837941250&quot;}"></div></p>
<p>Similarly, the <a href="https://perma.cc/XK5E-LFA3">image</a> below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check <a href="https://factcheck.afp.com/virtual-image-was-created-artist-new-south-wales-australia-its-not-real-photo">confirmed</a> it is not authentic and is actually a combination of <a href="https://unsplash.com/photos/EerxztHCjM8">several</a> <a href="https://unsplash.com/photos/lzcDi7-MWL4">separate</a> <a href="https://unsplash.com/photos/hLUTRzcVkqg">photos</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1214222190117376003&quot;}"></div></p>
<h2>Fully and partially synthetic content</h2>
<p>Online, you’ll also find sophisticated “<a href="https://www.abc.net.au/triplej/programs/hack/in-event-of-moon-disaster-nixon-deepfake/12656698">deepfake</a>” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps <a href="https://www.theverge.com/2019/9/2/20844338/zao-deepfake-app-movie-tv-show-face-replace-privacy-policy-concerns">such as Zao</a> and <a href="https://techcrunch.com/2020/08/17/deepfake-video-app-reface-is-just-getting-started-on-shapeshifting-selfie-culture/">Reface</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/yaq4sWFvnAY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)</span></figcaption>
</figure>
<p>Or, if you don’t want to use your photo for a profile picture, you can default to one of several <a href="https://generated.photos/">websites</a> offering hundreds of thousands of AI-generated, photorealistic images of people. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="AI-generated faces." src="https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=198&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=198&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=198&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=249&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=249&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365166/original/file-20201023-17-4s2gtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=249&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">These people don’t exist, they’re just images generated by artificial intelligence.</span>
<span class="attribution"><a class="source" href="https://generated.photos/faces">Generated Photos</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Editing pixel values and the (not so) simple crop</h2>
<p>Cropping can greatly alter the context of a photo, too. </p>
<p>We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to <a href="https://www.theguardian.com/world/2018/sep/06/donald-trump-inauguration-crowd-size-photos-edited">The Guardian</a>. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=191&fit=crop&dpr=1 600w, https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=191&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=191&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=240&fit=crop&dpr=1 754w, https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=240&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/367129/original/file-20201103-23-1ko5gze.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=240&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right).</span>
<span class="attribution"><span class="source">AP</span></span>
</figcaption>
</figure>
<p>But what about edits that only alter pixel values such as colour, saturation or contrast?</p>
<p>One historical example illustrates the consequences of this. In 1994, Time magazine’s <a href="http://content.time.com/time/magazine/0,9263,7601940627,00.html">cover</a> of OJ Simpson considerably “darkened” Simpson in his <a href="https://en.wikipedia.org/wiki/O._J._Simpson_murder_case#/media/File:Mug_shot_of_O.J._Simpson.jpg">police mugshot</a>. This added fuel to a case already plagued by racial tension, to which the magazine <a href="https://www.nytimes.com/1994/06/25/us/time-responds-to-criticism-over-simpson-cover.html">responded</a>: </p>
<blockquote>
<p>No racial implication was intended, by Time or by the artist.</p>
</blockquote>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1323345865164132352&quot;}"></div></p>
<h2>Tools for debunking digital fakery</h2>
<p>For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent <a href="https://www.tandfonline.com/doi/full/10.1080/17512786.2020.1832139">paper</a>).</p>
<p>Invisible <a href="https://www.bbc.co.uk/mediacentre/latestnews/2020/trusted-news-initiative">digital watermarking</a> has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.</p>
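<p>As a toy illustration of the watermarking idea, a fragile mark can be hidden in the least significant bit of each pixel value, changing the image imperceptibly. Real schemes must also survive recompression and editing, which this sketch does not; the function names are ours.</p>

```python
def embed_bits(pixels, bits):
    """Hide one bit in the least significant bit of each leading pixel."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return marked

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]
```

<p>Each pixel value shifts by at most 1, so the mark is invisible to the eye; the catch, and the reason publisher buy-in matters, is that any re-encoding of the image silently destroys it.</p>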
<p>Reverse image search (such as <a href="https://www.google.com/imghp?hl=EN">Google’s</a>) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:</p>
<ul>
<li>relies on unedited copies of the media already being online</li>
<li>doesn’t search the <em>entire</em> web</li>
<li>doesn’t always allow filtering by publication time. Some reverse image search services such as <a href="https://tineye.com/">TinEye</a> support this function, but Google’s doesn’t.</li>
<li>returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one.</li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/instead-of-showing-leadership-twitter-pays-lip-service-to-the-dangers-of-deep-fakes-127027">Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes</a>
</strong>
</em>
</p>
<hr>
<h2>Most reliable tools are sophisticated</h2>
<p>Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.</p>
<p>Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “<a href="https://www.snopes.com/fact-check/category/photos/">fauxtography</a>”.</p>
<p>Computer vision and machine learning also offer relatively advanced detection capabilities for images and <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them">videos</a>. But they too require technical expertise to operate and understand. </p>
<p>Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news. </p>
<p>If you use an image verification tool such as the REVEAL project’s <a href="http://reveal-mklab.iti.gr/reveal/">image verification assistant</a>, you might need an expert to help interpret the results.</p>
<p>The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:</p>
<ul>
<li>was it originally made for social media?</li>
<li>how widely and for how long was it circulated?</li>
<li>what responses did it receive?</li>
<li>who were the intended audiences?</li>
</ul>
<p>Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, <a href="https://datajournalism.com/read/handbook/verification-3/investigating-actors-content/5-verifying-and-questioning-images">here</a>.</p>
<p class="fine-print"><em><span>The research underlying this article received funding support from the Bushfire and Natural Hazards Cooperative Research Centre.
</span></em></p>
<p class="fine-print"><em><span>Daniel Angus receives funding from Australian Research Council through Discovery projects DP200100519 ‘Using machine vision to explore Instagram’s everyday promotional cultures’, and DP200101317 ‘Evaluating the Challenge of ‘Fake News’ and Other Malinformation’.</span></em></p>
<p class="fine-print"><em><span>Paula Dootson receives funding from the Bushfire and Natural Hazards Cooperative Research Centre.</span></em></p>
<p class="fine-print"><em>T.J. Thomson, Senior Lecturer in Visual Communication & Media, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Paula Dootson, Senior Lecturer, Queensland University of Technology. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<hr>
<h1>Living with the train wreck: how research can harness the power of visual storytelling</h1>
<figure><img src="https://images.theconversation.com/files/363853/original/file-20201016-15-1dlgddm.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1000%2C666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Image: Daniel Ray</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Mesmerised by the cats of YouTube? Tumbled down the rabbit holes that are Insta Stories? Horrified by the US presidential debate, but kept watching regardless? </p>
<p>You are not alone. </p>
<p>Visual narratives have a powerful hold over us and, like the metaphoric train wreck, we are finding it increasingly difficult to look away. We tend to bring a level of healthy scepticism and questioning to the stories we read or hear. But if we “see” the story, we are far less critical, and far more likely to jump on board and go along for the ride.</p>
<p>As the train continues to run away, we need to pay significantly more attention. We need to <a href="https://theconversation.com/uk-election-2019-after-fake-keir-starmer-clip-how-much-of-a-problem-are-doctored-videos-126897">question the value and quality of the visuals</a> that constantly filter through our feeds and devices. </p>
<h2>Reclaiming documentary from the dark side</h2>
<p>The genre of documentary has a particularly important role to play. Thanks especially to the prolific work of David Attenborough and the like, we are now hardwired to connect with real-life stories as a form of indisputable truth. </p>
<p>At the same time, we need to acknowledge the <a href="https://theconversation.com/in-era-of-fake-news-honest-documentary-makers-have-never-mattered-more-80595">darker side of documentary</a> and its ability to misinform. To have any hope of preventing conspiracies derailing the train, we need to sharpen the focus on quality documentary processes.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/uk-election-2019-after-fake-keir-starmer-clip-how-much-of-a-problem-are-doctored-videos-126897">UK election 2019: after fake Keir Starmer clip, how much of a problem are doctored videos?</a>
</strong>
</em>
</p>
<hr>
<p>We first used documentary filmmaking as a process to inform an educational research project in 2018. We supported five graduate teachers to record their lived experiences by creating video journals as they embarked on their first year in the profession. The journals were curated as a <a href="https://vimeo.com/300092767">documentary film</a>, Mapping the Messiness, and provide compelling insights into their individual journeys.</p>
<figure class="align-center ">
<img alt="Young woman talking" src="https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=316&fit=crop&dpr=1 600w, https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=316&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=316&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=397&fit=crop&dpr=1 754w, https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=397&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/366300/original/file-20201028-21-e6e98u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=397&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Applying quality criteria in the making of Mapping the Messiness ensured the documentary presents five graduate teachers’ stories with integrity.</span>
<span class="attribution"><a class="source" href="https://vimeo.com/300092767">Screenshot from Mapping the Messiness (Magnolia Lowe/Vimeo)</a></span>
</figcaption>
</figure>
<p>Predictably, the visual product that evolved draws the viewer in and strongly connects them with the experiences of the graduates. It is difficult to avoid being deeply moved by their stories. Yet beneath this compelling surface lies a <a href="https://journals.sagepub.com/doi/10.1177/1609406920957462?icid=int.sj-abstract.citing-articles.1">rigorous application of quality criteria</a> that guided our interactions with the graduates. </p>
<p>This experience taught us that a quality visual story depends on two key factors: first, supporting the storytellers to voluntarily share their own stories and, second, ensuring their input is clearly valued and conveyed in the final product. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/in-era-of-fake-news-honest-documentary-makers-have-never-mattered-more-80595">In era of fake news, honest documentary makers have never mattered more</a>
</strong>
</em>
</p>
<hr>
<h2>The ethics of visual storytelling</h2>
<p>We have entered an era where it is vital to apply ethical standards in the capture and curation of visual stories. By applying quality criteria, we introduce a framework that invites peer review, which strengthens the ethical basis of the approach. The opinions and feedback of others provide a way to ensure the credibility and authenticity of the documentary. </p>
<p>Awareness of the need for such an approach is increasing. Changes to <a href="https://www.stuff.co.nz/about-stuff/300106664/stuff-editorial-code-of-practice-and-ethics">ethical codes and practices to counter fake news</a> in our visual streams are being seen in countries like, for example, New Zealand. Collectively, these are steps to avert the consequences of the runaway train. </p>
<p>A recent <a href="https://www.newsroom.co.nz/crown-opposes-baby-uplift-video-being-official">legal case in New Zealand</a> dismissed an attempt to block the use of a documentary film, developed by an independent current affairs organisation, as evidence. This legal precedent confirms visual storytelling is a legitimate means of delivering evidence and should be considered a credible source. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/vi7N5jknS8c?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">This documentary was accepted as evidence at a New Zealand inquiry into the removal of Māori children from their families.</span></figcaption>
</figure>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/where-are-the-in-depth-documentaries-calling-to-account-the-institutions-that-are-failing-us-111075">Where are the in-depth documentaries calling to account the institutions that are failing us?</a>
</strong>
</em>
</p>
<hr>
<p>We will continue to be faced with train wrecks in our visual world and will continue to find it hard to draw our eyes away. That is OK. It is part of human nature. But, if we are to have any hope of minimising the wreckage, we need to be reassured that visual stories can be credible and honest. To achieve this, we need to continually question and challenge the quality of the visual content we consume. </p>
<p>All aboard.</p><img src="https://counter.theconversation.com/content/147459/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In the age of fake news and deep fake videos, how can documentary making be used for research and other purposes that demand authenticity and credibility?Ange Fitzgerald, Associate Professor of Science Curriculum and Pedagogy, University of Southern QueenslandMagnolia Lowe, Adjunct Research Fellow, School of Education, University of Southern QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1465362020-10-09T12:28:02Z2020-10-09T12:28:02ZIn a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda<figure><img src="https://images.theconversation.com/files/362252/original/file-20201007-16-1x1f5g5.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1920%2C1077&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI-powered detectors are the best tools for spotting AI-generated fake videos.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/comparison-of-an-original-and-deepfake-video-of-facebook-news-photo/1167464772?adppopup=true">The Washington Post via Getty Images</a></span></figcaption></figure><p>An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. 
In fact, it’s a “<a href="https://www.popularmechanics.com/technology/security/a28691128/deepfake-technology/">deepfake</a>,” a video made using artificial intelligence with <a href="https://www.mathworks.com/discovery/deep-learning.html">deep learning</a>. </p>
<p>Journalists all over the world could soon be using a tool like this. In a few years, a tool like this could even be used by everyone to root out fake content in their social media feeds.</p>
<p>As <a href="https://scholar.google.com/citations?user=12j0HoYAAAAJ&hl=en">researchers</a> <a href="https://scholar.google.com/citations?user=icDo19sAAAAJ&hl=en">who have been studying deepfake detection</a> and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.</p>
<h2>The problem with deepfakes</h2>
<p>Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have gotten used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to make up a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks to spend could make something almost as true to life.</p>
<p>Deepfakes make it possible to put people into movie scenes they were never in – <a href="https://www.youtube.com/watch?v=iDM69UEyM3w">think Tom Cruise playing Iron Man</a> – which makes for entertaining videos. Unfortunately, it also makes it possible to create <a href="https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/">pornography without the consent</a> of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.</p>
<p>Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality nondeepfake but still phony video of <a href="https://www.politico.eu/article/spa-donald-trump-belgium-paris-climate-agreement-belgian-socialist-party-circulates-deep-fake-trump-video/">President Trump insulting Belgium</a>, which got enough of a reaction to show the potential risks of higher-quality deepfakes. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/poSd2CyDpyA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">University of California, Berkeley’s Hany Farid explains how deepfakes are made.</span></figcaption>
</figure>
<p>Perhaps <a href="https://www.nytimes.com/2019/08/14/opinion/deepfakes-adele-disinformation.html">scariest of all</a>, they can be used to create <a href="https://www.newsweek.com/congressional-candidates-tweet-calling-floyds-death-deepfake-removed-1512916">doubt about the content of real videos</a>, by suggesting that they could be deepfakes.</p>
<p>Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic. </p>
<h2>Spotting fakes</h2>
<p>Deepfake detection as a field of research was begun a little over <a href="https://muse.jhu.edu/article/715916">three years ago</a>. Early work focused on detecting visible problems in the videos, such as <a href="https://www.fastcompany.com/90230076/the-best-defense-against-deepfakes-ai-might-be-blinking">deepfakes that didn’t blink</a>. With time, however, the <a href="https://www.theverge.com/2019/6/27/18715235/deepfake-detection-ai-algorithms-accuracy-will-they-ever-work">fakes have gotten better</a> at mimicking real videos and become harder to spot for both people and detection tools. </p>
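<p>The blink cue mentioned above can be illustrated with a toy heuristic. This is a hedged sketch, not any real detector’s code: it assumes you already have a per-frame “eye openness” score (for instance from facial landmarks), and simply flags clips whose blink rate falls far below human norms.</p>

```python
# Toy blink-rate heuristic (illustrative only; real detectors are far more
# sophisticated). Assumes a list of per-frame eye-openness scores in [0, 1].

def count_blinks(eye_openness, closed_thresh=0.2):
    """Count transitions from open to closed eyes across frames."""
    blinks = 0
    was_closed = False
    for score in eye_openness:
        is_closed = score < closed_thresh
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=2):
    """Flag clips whose blink rate is far below typical human rates."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# A real face blinks every few seconds; an early deepfake might never blink.
real = ([1.0] * 90 + [0.1] * 3) * 20   # a blink roughly every 3 seconds
fake = [1.0] * (93 * 20)               # eyes always open
```

<p>As the article notes, later deepfakes learned to blink, which is exactly why such single-cue heuristics stopped working on their own.</p>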
<p>There are two major categories of deepfake detection research. The first involves <a href="https://www.youtube.com/watch?v=poSd2CyDpyA">looking at the behavior of people</a> in the videos. Suppose you have a lot of video of someone famous, such as President Obama. Artificial intelligence can use this video to learn his patterns, from his hand gestures to his pauses in speech. It can then <a href="https://www.youtube.com/watch?v=cQ54GDm1eL0">watch a deepfake of him</a> and notice where it does not match those patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gsv1OsCEad0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">SRI International’s Aaron Lawson describes one approach to detecting deepfakes.</span></figcaption>
</figure>
<p>Other researchers, <a href="https://defake.app/about">including our team</a>, have been focused on <a href="https://theconversation.com/examining-a-videos-changes-over-time-helps-flag-deepfakes-120263">differences</a> that <a href="https://theconversation.com/detecting-deepfakes-by-looking-closely-reveals-a-way-to-protect-against-them-119218">all deepfakes have</a> compared to real videos. Deepfake videos are often created by merging individually generated frames to form videos. Taking that into account, our team’s methods extract the essential data from the faces in individual frames of a video and then track them through sets of concurrent frames. This allows us to detect inconsistencies in the flow of the information from one frame to another. We use a similar approach for our fake audio detection system as well.</p>
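<p>The frame-consistency idea can be caricatured in a few lines. The sketch below is a simplified illustration under invented assumptions, not the team’s actual pipeline: each frame’s face is reduced to a feature vector, and frames whose change from their neighbour is an outlier relative to the clip’s typical motion are flagged.</p>

```python
import numpy as np

# Simplified frame-to-frame consistency check (illustrative assumptions only).
# Individually generated frames that don't flow smoothly from their
# neighbours show up as outlier jumps in feature space.

def temporal_inconsistency(features, z_thresh=4.0):
    """Return indices of frames whose change from the previous frame is an
    outlier relative to the clip's typical frame-to-frame motion."""
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    mu, sigma = diffs.mean(), diffs.std()
    if sigma == 0:
        return []
    z = (diffs - mu) / sigma
    return [int(i) + 1 for i in np.flatnonzero(z > z_thresh)]

rng = np.random.default_rng(0)
# 300 frames of 64-dim face features drifting naturally...
smooth = np.cumsum(rng.normal(0, 0.01, size=(300, 64)), axis=0)
tampered = smooth.copy()
tampered[150] += 5.0  # ...with one frame pasted in from elsewhere
```

<p>Flagging frame 150 also implicates the transition out of it, so both frames 150 and 151 surface; a real system would aggregate such evidence across the whole clip rather than trusting any single jump.</p>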
<p>These subtle details are hard for people to see, but show how deepfakes are not quite <a href="https://doi.org/10.1109/CVPR42600.2020.00505">perfect yet</a>. Detectors like these can work for any person, not just a few world leaders. In the end, it may be that both types of deepfake detectors will be needed.</p>
<p>Recent detection systems perform very well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do <a href="https://www.darkreading.com/analytics/d/d-id/1338953">poorly on videos found online</a>. Improving these tools to be more robust and useful is the key next step.</p>
<p>[<em>Get facts about coronavirus and the latest research.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=coronavirus-facts">Sign up for The Conversation’s newsletter.</a>]</p>
<h2>Who should use deepfake detectors?</h2>
<p>Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Researchers need to improve the tools and protect them against hackers before releasing them broadly.</p>
<p>At the same time, though, the tools to make deepfakes are available to anybody who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation. </p>
<p>Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, like checking with sources and getting more than one person to verify key facts. So by putting the tool into their hands, we give them more information, and we know that they will not rely on the technology alone, given that it can make mistakes. </p>
<h2>Can the detectors win the arms race?</h2>
<p>It is encouraging to see teams from <a href="https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/">Facebook</a> and <a href="https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/">Microsoft</a> investing in technology to understand and detect deepfakes. This field needs more research to keep up with the speed of advances in deepfake technology. </p>
<p>Journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that <a href="https://www.psychologytoday.com/us/blog/words-matter/201807/when-correcting-lie-dont-repeat-it-do-instead-2">people remember the lie</a>, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “Deepfake” in the title might not be enough to counter some kinds of disinformation.</p>
<p>Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.</p><img src="https://counter.theconversation.com/content/146536/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>John Sohrawardi receives funding from Ethics and Governance of AI Initiative and the National Science Foundation.</span></em></p><p class="fine-print"><em><span>Matthew Wright receives funding from the Ethics and Governance of AI Initiative and the National Science Foundation. </span></em></p>Fake videos generated with sophisticated AI tools are a looming threat. Researchers are racing to build tools that can detect them, tools that are crucial for journalists to counter disinformation.John Sohrawardi, Doctoral Student in Computing and Informational Sciences, Rochester Institute of TechnologyMatthew Wright, Professor of Computing Security, Rochester Institute of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1270272019-11-19T19:20:19Z2019-11-19T19:20:19ZInstead of showing leadership, Twitter pays lip service to the dangers of deep fakes<figure><img src="https://images.theconversation.com/files/302366/original/file-20191119-12535-1ibjq98.jpg?ixlib=rb-1.1.0&rect=33%2C22%2C3648%2C2047&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Neural networks can generate artificial representations of human faces, as well as realistic renderings of actual people.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/success?u=http%3A%2F%2Fdownload.shutterstock.com%2Fgatekeeper%2FW3siZSI6MTU3NDE2OTY2MiwiYyI6Il9waG90b19zZXNzaW9uX2lkIiwiZGMiOiJpZGxfMTQzMDU3MTg2OSIsImsiOiJwaG90by8xNDMwNTcxODY5L21lZGl1bS5qcGciLCJtIjoxLCJkIjoic2h1dHRlcnN0b2NrLW1lZGlhIn0sIjJXSFNVZFhvUDRWVnFjUHdSZE9VSis3MVFGOCJd%2Fshutterstock_1430571869.jpg&ir=true&pi=41133566&m=1430571869&src=1958ce5f-79dc-4c00-a7d9-a80249171913-1-0">Shutterstock</a></span></figcaption></figure><p>Fake videos and doctored photographs, often based on events such as the <a href="https://www.space.com/apollo-11-moon-landing-hoax-believers.html">Moon landing</a> and supposed UFO 
appearances, have been the subject of fascination for decades.</p>
<p>Such imagery is often <a href="https://www.forbes.com/sites/chenxiwang/2019/11/01/deepfakes-revenge-porn-and-the-impact-on-women/#4f721c5e1f53">deep fake content</a>, so called because it uses deep learning associated with neural networks and digital image processing. </p>
<p>Last week, Twitter <a href="https://www.reuters.com/article/us-twitter-deepfakes/twitter-wants-your-feedback-on-its-deepfake-policy-plans-idUSKBN1XL2C6">revealed</a> plans to introduce a <a href="https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html">new policy</a> governing deep fake videos on its platform. </p>
<p>The company proposed it would warn users about deep fake content by flagging tweets with “synthetic or manipulated media”. Twitter says media may be removed in cases where it could lead to serious harm, but has stopped short of enforcing a strict removal stance. Users have until November 27 to provide feedback. </p>
<p>In adopting this warning-only approach towards deep fakes, the social media giant has shown poor judgement. </p>
<h2>Why deep fakes are dangerous</h2>
<p>With advances in computer science, deep fakes are becoming an increasingly powerful tool to deceive people using social media.</p>
<p>Deep fake clips of celebrities and politicians are realistic enough to trick users into making financial, political and personal decisions based on the fake testimony of others. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/VWrhRBb-1Ig?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">This Youtube clip featuring actor Bill Hader shows how realistic deep fake content can be.</span></figcaption>
</figure>
<p>Whether it’s a David Koch <a href="https://www.dailymail.co.uk/tvshowbiz/article-6204111/David-Koch-unwillingly-face-erectile-dysfunction-advertising-scam.html">erectile dysfunction cream</a> scam, an announcement by Donald Trump that <a href="https://shots.net/news/view/has-donald-trump-eradicated-aids">AIDs has been eradicated</a>, or a fake interview with Andrew Forrest leading to a <a href="https://www.commerce.wa.gov.au/announcements/scammers-use-fake-twiggy-forrest-investment-fleece-woman-out-670000">finance scam</a>, deep fakes present a serious risk to our ability to trust what we view online. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/people-who-spread-deepfakes-think-their-lies-reveal-a-deeper-truth-119156">People who spread deepfakes think their lies reveal a deeper truth</a>
</strong>
</em>
</p>
<hr>
<p>Social media companies have so far taken a sloppy approach to this threat. They have even promoted the use of photo algorithms letting users experiment with animated face masks, and provided tutorials on how to use editing programs. </p>
<p>Deep fake production is the <a href="https://www.sciencealert.com/deepfake-ai-algorithms-can-now-take-text-and-turn-it-into-words-spoken-in-a-video">professional version</a> of this practice. At its worst, it can even <a href="https://intelligence.house.gov/news/documentsingle.aspx?DocumentID=657">threaten democracy</a>.</p>
<p>Twitter’s latest draft policy on deep fakes sets a dangerous precedent. It allows social media platforms to handball away their responsibility to protect customers from manipulated videos and imagery. </p>
<h2>Twitter should be just as accountable as television</h2>
<p>It’s time social media giants such as Twitter started seeing themselves as the 21st century version of free-to-air television. With TV, there are clear guidelines about what cannot be broadcast. </p>
<p>Since 1992, Australians have been protected by the <a href="https://www.legislation.gov.au/Details/C2018C00060">1992 Broadcasting Services Act</a>, which requires “fair and accurate coverage” of what is broadcast. The act <a href="http://www5.austlii.edu.au/au/legis/cth/consol_act/cca1995115/sch1.html">protects</a> viewers with regard to the origin and authenticity of television content.</p>
<p>The same principles should apply to social media. Americans now spend <a href="https://www.socialmediatoday.com/news/people-are-now-spending-more-time-on-smartphones-than-they-are-watching-tv/556405/">more time on social media</a> than they do watching television, and Australia isn’t far behind.</p>
<p>By suggesting they only need to flag tweets with deep fake content, Twitter’s proposed policy downplays the seriousness of the threat. </p>
<h2>Sending the wrong message</h2>
<p>Twitter’s draft policy is dangerous on two fronts. </p>
<p>Firstly, it suggests the company is somehow doing its part in protecting its users. In reality, Twitter’s decision is akin to watching a child struggle to swim in heavy surf while nearby authorities wave a sign saying “some waves may be hard to judge”, instead of actually helping.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lies-fake-news-and-cover-ups-how-has-it-come-to-this-in-western-democracies-102041">Lies, 'fake news' and cover-ups: how has it come to this in Western democracies?</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.theguardian.com/technology/2019/jun/23/what-do-we-do-about-deepfake-video-ai-facebook">Senior citizens</a> and inexperienced social media users are particularly vulnerable to deep fakes. This is because they’re predisposed to <a href="https://ro.ecu.edu.au/ecuworkspost2013/5709/">trust online content</a> that looks authentic.</p>
<p>The second reason Twitter’s proposition is dangerous is because social media trolls and <a href="https://ro.ecu.edu.au/ecuworkspost2013/665/">sock puppet armies</a> enjoy surprising online audiences. Sock puppets are specialists in deceiving users into believing they’re a single fake person (or multiple fake people) by means of false posts and online identities.</p>
<p>Basically, content that has been signposted as deep fake will be exploited by people wanting to amplify its spread. It’s unrealistic to suppose this won’t happen. </p>
<p>If Twitter flags posts that are fake, yet leaves them up, the likely outcome will be a popularity surge in this content. Because of how social media algorithms work, this means a greater number of fake videos and images will be “<a href="https://business.twitter.com/en/help/overview/what-are-promoted-tweets.html">promoted</a>” rather than retracted. </p>
<p>Twitter has an opportunity to take a leadership role in preventing the spread of deep fake content, by identifying and removing deep fakes from its platform. All major social media platforms have the responsibility to present a unified approach to the prevention and removal of manipulated and fake imagery.</p>
<p>The circulation of a <a href="https://fortune.com/2019/06/12/deepfake-mark-zuckerberg/">Nancy Pelosi deep fake</a> video earlier this year revealed social media’s inconsistency in the handling of deceitful imagery. YouTube removed the clip from its platform, Facebook flagged it as false, and Twitter let it remain. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-now-create-fake-porn-making-revenge-porn-even-more-complicated-92267">AI can now create fake porn, making revenge porn even more complicated</a>
</strong>
</em>
</p>
<hr>
<p>Twitter is in the business of helping users repost links and content as many times as possible. It creates profit by generating repeated referrals, commentary, and the acceptance of its content through <a href="https://fourweekmba.com/how-does-twitter-make-money/">promoted trends</a>. </p>
<p>If deep fakes aren’t removed from Twitter, their growth will be exponential. </p>
<h2>A looming threat</h2>
<p><a href="https://www.schneier.com/blog/archives/2018/10/detecting_fake_.html">Early versions</a> of such spurious content were relatively easy to spot. People in the first deep fake clips appeared unrealistic. Their eyes wouldn’t blink and their facial gestures wouldn’t sync with the words being spoken. </p>
<p>There are also examples of harmless image manipulation. These include web apps on <a href="https://www.pocket-lint.com/apps/news/facebook/139756-facebook-messenger-here-s-how-to-use-those-new-snapchat-like-lenses">Snapchat and Facebook</a> that let users alter their photos (usually selfies) to add backgrounds, or resemble characters such as cute animals.</p>
<p>However, this new generation of altered imagery is often hard to distinguish from reality. And as criminals and pranksters improve their production of deep fakes, the other side of this double-edged sword could swing at any time.</p><img src="https://counter.theconversation.com/content/127027/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dr David Cook is affiliated with Edith Cowan University as a lecturer in the School of Science, and is a Fellow of the Australian Computer Society </span></em></p>Twitter’s proposed policy would result in the prolific spread of fabricated, but highly realistic images and videos. This could allow widespread misinformation on the platform.David Cook, Lecturer, Computer and Security Science,Edith Cowan University, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1229272019-09-11T14:51:28Z2019-09-11T14:51:28ZThe election’s on: Now Canadians should watch out for dumbfakes and deepfakes<figure><img src="https://images.theconversation.com/files/291847/original/file-20190910-190007-r4rq0f.jpg?ixlib=rb-1.1.0&rect=0%2C260%2C9144%2C5270&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This image made from a fake video featuring former U.S. president Barack Obama shows elements of facial mapping that lets anyone make videos of real people appearing to say things they've never said.</span> <span class="attribution"><span class="source">(AP Photo)</span></span></figcaption></figure><p>Dumbfakes and deepfakes are edited or altered videos. In just the past few years, the capability to produce and share these videos has increased exponentially due, in part, to artificial intelligence. </p>
<p>These fake videos are already present in Canadian politics and are even more likely to be created and disseminated during Canada’s ongoing election campaign. </p>
<p>Dumbfakes are videos edited through traditional video editing techniques. They use technology that is readily accessible on most computers and smartphones. Political dumbfakes that have already shown up in the lead-up to the election include a video that <a href="https://www.apnews.com/afs:Content:6026590105">falsely made it appear as though Prime Minister Justin Trudeau was snubbed</a> by Brazilian President Jair Bolsonaro at the G20 Summit in Japan in June 2019.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=443&fit=crop&dpr=1 600w, https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=443&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=443&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=557&fit=crop&dpr=1 754w, https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=557&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/291788/original/file-20190910-190044-11w0tm8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=557&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Trudeau shakes hands with Bolsonaro before the start of a plenary session at the G20 Summit in Osaka, Japan, in June 2019.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Adrian Wyld</span></span>
</figcaption>
</figure>
<p>In comparison, deepfakes typically use what are known as <a href="https://skymind.ai/wiki/generative-adversarial-network-gan">generative adversarial networks</a>, a type of machine learning, to swap the face of one individual onto the body of someone else or to manipulate the features of someone’s face.</p>
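<p>The adversarial setup behind these networks can be caricatured in a few lines of code. The sketch below is a deliberately simplified, one-parameter stand-in for a real GAN (no neural networks or gradients are involved, and every name and number is illustrative): a "generator" adjusts its output until a "discriminator", which is simultaneously learning to separate real samples from generated ones, can no longer reliably reject it.</p>

```python
# Toy caricature of adversarial training (not a real GAN or any deepfake tool).
import random

random.seed(0)

REAL_MEAN = 5.0        # the "real" data the generator tries to imitate
gen_mean = 0.0         # generator's single learnable parameter
disc_threshold = 2.5   # discriminator's single learnable parameter

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

def fake_sample():
    return random.gauss(gen_mean, 0.5)

def looks_real(x):
    # the discriminator accepts a sample as "real" if it exceeds its threshold
    return x > disc_threshold

for _ in range(200):
    # Discriminator step: keep the threshold between real and fake averages
    disc_threshold += 0.05 * ((real_sample() + fake_sample()) / 2 - disc_threshold)
    # Generator step: nudge output upward whenever it fails to fool the discriminator
    if not looks_real(fake_sample()):
        gen_mean += 0.1

print(f"generator now produces values near {gen_mean:.1f}")
```

<p>In an actual GAN, both players are neural networks trained by gradient descent on each other's errors, but the push-and-pull dynamic is the same: the generator's forgeries improve precisely because the discriminator keeps improving at detecting them.</p>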
<p>Deepfakes may also include audio manipulation by using a voice actor or voice-mimicking technology. Technology to create deepfakes has quickly spread, including <a href="https://theconversation.com/zaos-deepfake-face-swapping-app-shows-uploading-your-photos-is-riskier-than-ever-122334">a new Chinese app called Zao</a>. </p>
<p>An example of a deepfake is the video of Canadian Conservative Party Leader Andrew Scheer as comic Pee-wee Herman in an old public service announcement about crack cocaine:</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5BviVlHf-3A?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">This fake video was edited to morph Scheer into Pee-wee Herman.</span> <span class="attribution"><span class="source">fancyscientician</span></span></figcaption>
</figure>
<p>Dumbfakes and deepfakes are distinct from other forms of false news in their use of video manipulation. They offer moving visual depictions of supposed events, as opposed to words or still images, and are therefore closest to how events are actually experienced. Indeed, <a href="https://doi.org/10.3758/MC.37.4.414">studies on doctored videos</a> have found them to be an effective tool for producing false memories. </p>
<p>While dumbfakes and deepfakes have been picked up by traditional news outlets, they are most likely to be shared on social media. This is concerning because false news, specifically political false news, <a href="http://doi.org/10.1126/science.aap9559">spreads exponentially faster and further than accurate news on Twitter</a>.</p>
<h2>Impact on election</h2>
<p>As we ponder what impact dumbfakes and deepfakes might have on the election, it’s important to note that they are not likely to affect all people equally.</p>
<p>They are most likely to have an impact on people who are marginalized and already face barriers to political engagement. <a href="https://theconversation.com/another-barrier-for-women-in-politics-violence-113637">Women, for instance, face barriers to running and staying in politics</a>. Deepfakes are likely to exacerbate that because, since their inception, they have been used to abuse women (for example, by incorporating female celebrities into pornographic films). See <a href="https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn">this reporting on deepfake pornography</a>. </p>
<p>Understanding how dumbfakes and deepfakes could affect the election therefore requires an intersectional lens.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-intersectionality-all-of-who-i-am-105639">What is intersectionality? All of who I am</a>
</strong>
</em>
</p>
<hr>
<p>The three areas in which dumbfakes and deepfakes are most likely to have an impact are political representation, participation and discussion. </p>
<h2>Representation</h2>
<p>While everyone has a right to run for office, dumbfakes and deepfakes may make it more difficult for people to do so. Fake videos could be produced to blackmail politicians into not running or to discredit their campaigns by spreading false information.</p>
<p>Political candidates stepping into the public eye must now consider whether they are prepared for dumbfakes and deepfakes that target them, both during the campaign itself and because of the growing number of photos and videos of them available online that could be used as raw material for such fakes. </p>
<p>Political campaigns may furthermore be derailed by a damaging fake video.</p>
<h2>Participation</h2>
<p>Dumbfakes and deepfakes may also be used more broadly against the public to silence citizens. </p>
<p>Organizations and activists who are in the public eye may be particularly targeted due to their online presence. Citizens may be silenced through the release of a harmful fake video. Even the possibility of a fake video can promote political self-censorship, especially for individuals already facing online discrimination (for example, <a href="https://www.twitterracism.com/">racist tweets</a>).</p>
<p>Dumbfakes and deepfakes may also aim to discredit important work that promotes political accountability by criticizing the government and oppressive practices. </p>
<h2>Discussion</h2>
<p>Fake videos create an environment of distrust that further hinders the ability of citizens to operate on the basis of shared information. They could also hinder discussion by playing into and worsening existing social tensions domestically and internationally. </p>
<p>We’ve seen this before. Russian disinformation efforts during the 2016 American presidential election stirred conflict on a number of issues including immigration, gun control and the Black Lives Matter movement. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-the-russian-government-used-disinformation-and-cyber-warfare-in-2016-election-an-ethical-hacker-explains-99989">How the Russian government used disinformation and cyber warfare in 2016 election – an ethical hacker explains</a>
</strong>
</em>
</p>
<hr>
<p>Another issue with dumbfakes and deepfakes is that they undermine the credibility of video evidence overall, including real videos that may depict politicians or others engaging in compromising or morally reprehensible behaviour. </p>
<h2>Protecting Canadian democracy</h2>
<p>Legal means of addressing dumbfakes and deepfakes — copyright infringement and defamation laws, for example — are <a href="https://mcmillan.ca/What-Can-The-Law-Do-About-Deepfake">currently being explored</a>. <a href="https://theconversation.com/detecting-deepfakes-by-looking-closely-reveals-a-way-to-protect-against-them-119218">Detection technology is also being advanced</a>.</p>
<p>But these approaches may not be effective if a dumbfake or a deepfake is strategically released just a few days before the election. Canadian citizens must therefore take on the responsibility of checking information and videos, especially around election time. </p>
<p>Dumbfakes and deepfakes have clearly changed the medium of video. The best way to protect against the disinformation they spread is <a href="https://www.youtube.com/watch?v=dDgPFk2u0E0">by being aware of their existence</a>.</p>
<p>[ <em><a href="https://theconversation.com/ca/newsletters?utm_source=TCCA&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=expertise">Expertise in your inbox. Sign up for The Conversation’s newsletter and get a digest of academic takes on today’s news, every day.</a></em> ]</p>
<p class="fine-print"><em><span>Dianne Lalonde does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Dianne Lalonde, PhD Candidate, Political Science, Western University. Licensed as Creative Commons – attribution, no derivatives.</em></p>