tag:theconversation.com,2011:/africa/topics/visual-effects-7906/articlesVisual effects – The Conversation2023-06-09T13:26:08Ztag:theconversation.com,2011:article/2074392023-06-09T13:26:08Z2023-06-09T13:26:08ZJurassic Park at 30 and the special-effects revolution that followed the film<p>This month marks 30 years since a film that changed cinema forever. 1993’s Jurassic Park used pioneering computer-generated imagery (CGI) to bring dinosaurs to life in Steven Spielberg’s adaptation of the novel of the same name.</p>
<p>The film quickly became a must-see event and audiences were stunned by the spectacle of seeing believable dinosaurs roam across the big screen for the first time. Not only did Jurassic Park make <a href="https://books.google.co.uk/books?hl=en&lr=&id=uWiWCwAAQBAJ&oi=fnd&pg=PT6&dq=jurassic+park+cgi&ots=2GhA2wlixw&sig=lhUvmRpL2KYrbQWDfE1fRizz7FE&redir_esc=y#v=onepage&q=jurassic%20park%20cgi&f=false">giant leaps</a> in special-effects filmmaking, it also paved the way for myriad subsequent productions featuring beasts of all shapes and sizes.</p>
<p>Jurassic Park was born in 1983 as a screenplay by Michael Crichton. He was the writer and director of the film Westworld (1973), which told the story of an amusement park where androids malfunctioned and ran amok. But his dinosaur-themed story was first published as the novel Jurassic Park, released in 1990, which became a bestseller.</p>
<p>That’s when it came to the attention of Steven Spielberg. By the early 1990s, Spielberg was no stranger to making big-budget science-fiction films. Films such as Jaws (1975), Close Encounters of the Third Kind (1977), Raiders of the Lost Ark (1981) and E.T. the Extra-Terrestrial (1982) had shown that he had a track record of making hugely successful films. Jurassic Park, then, was perfect for his next production.</p>
<p>Spielberg’s adaptation, written by Crichton and David Koepp, changed a number of aspects of the novel to give the film a satisfying ending, while leaving enough loose ends to be explored further in other films.</p>
<p>Of course, Jurassic Park wasn’t the first time dinosaurs had featured on the big screen. King Kong (1933) is an early example of a film that pushed the boundaries of what was then possible by including scenes of the giant gorilla fighting dinosaurs.</p>
<p>Creatures were brought to life before cinema audiences by combining stop-motion animation with rear projection (where previously shot film is projected onto a backdrop and actors are recorded performing in front of it). Other films such as Journey to the Center of the Earth (1959), The Lost World (1960) and The Land That Time Forgot (1974) had tried alternative ways of bringing dinosaurs to the screen, including puppetry and even fitting live reptiles with prosthetics.</p>
<p>Of these methods, Spielberg initially chose a combination of stop-motion animation for long shots and animatronic puppets for close-ups for Jurassic Park.</p>
<h2>CGI and animation</h2>
<p>Stop-motion tests produced good results, especially in developing what is known as “go-motion”, a technique which blurred models to provide a sense of movement similar to live action. But Spielberg and his team were still keen to push further with what was possible. Dennis Muren of the visual effects company Industrial Light and Magic (ILM) provided an alternative approach using CGI modelling and animation.</p>
<p>Off the back of pioneering CGI work in The Abyss (1989) and Terminator 2: Judgment Day (1991), Muren and his team produced a test sequence of skeletal dinosaurs. Tests featuring a <em>Tyrannosaurus rex</em> with added skin further cemented the realisation that this was the way forward for the film. This technique built the model of the dinosaur from bones, added muscle and then, finally, the skin.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Rc_i5TKdmhs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Golygfa'r T. rex yn dianc.</span></figcaption>
</figure>
<p>It seemed the assembled stop-motion team had been made extinct by this innovative technology. However, the model makers and animators were the experts on dinosaurs and their movements. They retrained as computer animators to continue to use their skills on the production.</p>
<p>Jurassic Park features 15 minutes of on-screen dinosaurs, of which around nine minutes feature Stan Winston’s animatronics and six minutes ILM’s CGI animation. The success of this combination can be seen in the iconic <em>T. rex</em> scene. A number of animatronic shots provide close-ups of the <em>T. rex</em>, while the full-height shots convey the creature’s threat and power.</p>
<p>The way Spielberg directs the scene – from building the atmospheric tension of the rainstorm, through the initial reveal and reactions, the prolonged attack and the subsequent escape – takes the audience through a range of emotions. Although the CGI sections are relatively short, they have a huge impact on the story, not to mention the believability that the event is really happening in front of us. It is a true representation of the power of cinema.</p>
<h2>Impact</h2>
<p>On release, Jurassic Park became a runaway success. It was also the perfect opportunity to develop and showcase the latest advances in CGI. The thrill of the <em>Gallimimus</em> stampede, the horror of the <em>T. rex</em> attack and the terror of the <em>Velociraptor</em> hunt captivated audiences across the globe. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/8hjB6UJ2kMU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Golygfa rhuthr y Gallimimus yn Jurassic Park.</span></figcaption>
</figure>
<p>Jurassic Park inspired a number of similarly themed films such as Disney’s Dinosaur (2000) and the BBC television series Walking with Dinosaurs (1999). But more than that, it helped bring about a revolution in the use of CGI effects in filmmaking.</p>
<p>From those six minutes of animated dinosaurs, CGI has become so integrated into the industry that nearly all film and television productions now feature some form of CGI. This can range from simply cleaning up aspects of the filmed image digitally with removals and replacements, set extensions, and the addition of CGI set models or animated vehicles and props, to filming with green screens and compositing images, or merging actors into full CGI environments.</p>
<p>The film remains a significant point in the history of cinema. It announced that CGI creatures had arrived, paving the way for the following thirty years of fantasy filmmaking.</p><img src="https://counter.theconversation.com/content/207439/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Hodges does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rhyddhawyd Jurassic Park ar y sgrin fawr ym mis Mehefin 1993 a newidiodd sinema am byth.Peter Hodges, Lecturer in Contextual and Critical Studies for Visual Effects and Motion Graphics, University of South WalesLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2045922023-06-08T16:28:34Z2023-06-08T16:28:34ZJurassic Park at 30: how its CGI revolutionised the film industry<p><em>You can also read this article <a href="https://theconversation.com/jurassic-park-yn-30-ar-chwyldro-effeithiau-arbennig-ddigwyddodd-yn-sgil-y-ffilm-207439">in Welsh</a>.</em></p>
<p>This month marks the 30th anniversary of a film that changed cinema forever. 1993’s Jurassic Park used pioneering computer-generated imagery (CGI) to bring dinosaurs to life in Steven Spielberg’s adaptation of the novel of the same name. </p>
<p>The film quickly became a must-see event and audiences were left amazed by the spectacle of seeing believable dinosaurs grace the big screen for the first time. Jurassic Park not only <a href="https://books.google.co.uk/books?hl=en&lr=&id=uWiWCwAAQBAJ&oi=fnd&pg=PT6&dq=jurassic+park+cgi&ots=2GhA2wlixw&sig=lhUvmRpL2KYrbQWDfE1fRizz7FE&redir_esc=y#v=onepage&q=jurassic%20park%20cgi&f=false">made giant leaps</a> in special-effects filmmaking, but it also paved the way for myriad subsequent productions that featured beasts of all shapes and sizes.</p>
<p>Jurassic Park originated in 1983 as a screenplay by Michael Crichton, whose previous foray into film as writer and director of Westworld (1973) featured an immersive amusement park where androids malfunctioned and caused havoc. But his dinosaur-themed story first found publication as the novel Jurassic Park, which was released in 1990 and became a bestseller. </p>
<p>That’s when it came to the attention of Steven Spielberg. By the early 1990s, Spielberg was no stranger to big-budget science-fiction filmmaking. The likes of Jaws (1975), Close Encounters of the Third Kind (1977), Raiders of the Lost Ark (1981) and E.T. the Extra-Terrestrial (1982) had demonstrated that he had a track record of making extremely successful effects-heavy but story-led films. That made Jurassic Park perfect for his next production.</p>
<p>Spielberg’s adaptation, written by Crichton and David Koepp, changed a number of aspects of the novel’s ending to provide a satisfactory conclusion to the film, yet leave enough loose ends for further exploration in the franchise.</p>
<p>Of course, Jurassic Park wasn’t the first time dinosaurs had been featured on the big screen. 1933’s King Kong is an early example of a film that pushed the boundaries of what was then possible by including sequences of the eponymous giant gorilla fighting with dinosaurs. </p>
<p>Creatures were brought to life for cinema goers by combining stop-motion animation with rear projection (where previously shot film is projected onto a backdrop and actors are recorded performing in front of it). Other feature films such as Journey to the Center of the Earth (1959), The Lost World (1960) and The Land That Time Forgot (1974) had attempted alternative ways of bringing dinosaurs to the screen, including puppetry and even fitting live reptiles with prosthetics. </p>
<p>Of these methods, a combination of stop-motion animation for long shots and animatronic puppets for close ups were initially chosen by Spielberg for Jurassic Park.</p>
<h2>CGI and animation</h2>
<p>Stop-motion tests produced good results, especially in the development of go-motion, a technique which blurred models to provide a sense of movement similar to that of live action. But Spielberg and his team were still keen to go further with what was possible. Dennis Muren from the visual effects company, Industrial Light and Magic (ILM), provided an alternative approach by using CGI modelling and animation.</p>
<p>Off the back of pioneering CGI work in The Abyss (1989) and Terminator 2: Judgment Day (1991), Muren and his team produced a test sequence of skeletal dinosaurs. Additional tests featuring a <em>Tyrannosaurus rex</em> with added skin further cemented the realisation that this was the way to go for the film. This technique built the model of the dinosaur from bones, added muscle and then finally, the skin. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Rc_i5TKdmhs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The T. rex escapes its paddock.</span></figcaption>
</figure>
<p>It seemed the assembled stop-motion team had been made extinct by this innovative technology. However, the model makers and animators were the experts on dinosaurs and their movement, and they retrained as computer animators to continue to use their skills on the production.</p>
<p>Jurassic Park features 15 minutes of on-screen dinosaurs, of which approximately nine minutes feature Stan Winston’s animatronics and six minutes ILM’s CGI animation. The success of this combination is seen in the iconic <em>T. rex</em> attack scene. A number of animatronic shots feature close-ups of the <em>T. rex</em> before the full-height shots convey the creature’s threat and power. </p>
<p>How Spielberg orchestrates the scene – from the atmospheric tension-building of the rainstorm, through the initial reveal and reactions, the prolonged attack and subsequent escape – takes the audience through a range of emotions. Although the CGI sections are relatively short, they have a huge impact on the overall storytelling, not to mention the believability that the event is actually happening in front of us. It’s a true representation of the power of cinema. </p>
<h2>Impact</h2>
<p>On release, Jurassic Park was an instant box office success, becoming the highest-grossing film ever at that time. It also presented the perfect opportunity to develop and showcase the latest advances in CGI. The thrill of the <em>Gallimimus</em> stampede, the horror of the <em>T. rex</em> attack and the suspense of the <em>Velociraptor</em> hunt captivated audiences across the globe. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/8hjB6UJ2kMU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">“They’re flocking this way” - Jurassic Park’s Gallimimus chase scene.</span></figcaption>
</figure>
<p>Jurassic Park inspired a number of similarly themed movies such as Disney’s Dinosaur (2000) and the award-winning BBC television series Walking with Dinosaurs (1999). But more than that, it helped bring about a revolution in the use of CGI in filmmaking. </p>
<p>From those six minutes of animated dinosaurs, CGI has become so integrated into the industry that nearly all film and television productions feature some form of CGI practice. This can range from simply cleaning up aspects of the filmed image digitally with removals and replacements, set extensions, and the addition of CGI set models or animated vehicles and props, to filming with green screens and compositing images, or merging actors into full CGI environments. </p>
<p>The film remains a significant point in the history of cinema, one that announced CGI creatures had arrived, paving the way for the following thirty years of fantasy filmmaking.</p><img src="https://counter.theconversation.com/content/204592/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Hodges does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Jurassic Park was released on the big screen in June 1993 and changed cinema for good.Peter Hodges, Lecturer in Contextual and Critical Studies for Visual Effects and Motion Graphics, University of South WalesLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1518642020-12-17T11:54:45Z2020-12-17T11:54:45ZVisual illusion that may help explain consciousness – new study<figure><img src="https://images.theconversation.com/files/375371/original/file-20201216-21-qtywpt.jpg?ixlib=rb-1.1.0&rect=39%2C49%2C3255%2C2415&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The brain is a mystery.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-conceptual-image-large-stone-shape-1032541603">Orla/Shutterstock</a></span></figcaption></figure><p>How much are you conscious of right now? Are you conscious of just the words in the centre of your visual field or all the words surrounding it? We tend to assume that our visual consciousness gives us a rich and detailed picture of the entire scene in front of us. The truth is very different, as our discovery of a visual illusion, <a href="https://journals.sagepub.com/doi/abs/10.1177/0956797619847166">published in Psychological Science</a>, shows.</p>
<p>To illustrate how limited the information in our visual field is, get a deck of playing cards. Pick a spot on the wall in front of you and stare at it. Then take a card at random. Without looking at its front, hold it far out to your left with a straight arm, until it’s on the very edge of your visual field. Keep staring at the point on the wall and flip the card round so it’s facing you. </p>
<p>Try to guess its colour. You will probably find it extremely difficult. Now slowly move the card closer to the centre of your vision, while keeping your arm straight. Pay close attention to the point at which you can identify its colour. </p>
<p>It’s amazing how central the card needs to be before you’re able to do this, let alone identify its suit or value. What this little experiment shows is how undetailed (and often inaccurate) our conscious vision is, especially outside the centre of our visual field.</p>
<h2>Crowding: how the brain gets confused</h2>
<p>Here is another example that brings us a little closer to how these phenomena are investigated scientifically. Please focus your eyes on the + sign on the left, and try to identify the letter on the right of it (of course you know already what it is, but pretend for the moment that you do not):</p>
<figure class="align-center ">
<img alt="Image of a plus sign on the left and an A on the right." src="https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=254&fit=crop&dpr=1 600w, https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=254&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=254&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=319&fit=crop&dpr=1 754w, https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=319&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/375363/original/file-20201216-21-j0k2iw.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=319&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Illusion 1.</span>
<span class="attribution"><span class="source">TCUK</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>You might find this a bit tricky, but you can probably still identify the letter as an “A”. But now focus your eyes on the following +, and try to identify the letters on the right:</p>
<figure class="align-center ">
<img alt="Image of a plus sign on the left and JRWTS on the right." src="https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=242&fit=crop&dpr=1 600w, https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=242&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=242&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=305&fit=crop&dpr=1 754w, https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=305&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/375365/original/file-20201216-21-2r5tet.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=305&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Illusion 2.</span>
<span class="attribution"><span class="source">TCUK</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In this case, you’ll probably struggle to identify the letters. It probably looks like a mess of features to you. Or maybe you feel like you can see a jumble of curves and lines, without being able to say precisely what’s there. This is called “crowding”. Our visual system sometimes does OK at identifying objects in our peripheral vision, but when those objects are placed near other objects, it struggles. This is a shocking limitation on our conscious vision. The letters are clearly presented right in front of us. But still our conscious mind gets confused. </p>
<p>Crowding is a hotly debated topic in <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/tht3.28">philosophy</a>, <a href="https://www.sciencedirect.com/science/article/pii/S0042698907005561">psychology</a> and <a href="https://www.sciencedirect.com/science/article/pii/S1053811914001207?casa_token=Otxpma-h-OkAAAAA:nx1W6cQmP_J7CA39qrpNMz0JobXbEhV4FOdWPkvAv894pvSI6Gaxvcq8wE2LjKKqFiFrIvc">neuroscience</a>. We’re still not sure why crowding happens. One popular theory is that it’s a failure of what’s called “<a href="https://jov.arvojournals.org/article.aspx?articleid=2192655">feature integration</a>”. To understand feature integration, we will need to pick apart some of the jobs that your visual system does. </p>
<p>Imagine you are looking at a blue square and a red circle. Your visual system does not just have to detect the properties out there (blueness, redness, circularity, squareness). It also has to work out which property belongs to which object. This might not seem like a complicated task to us. However, in the visual brain, this is no trivial matter. </p>
<p>It takes a lot of complicated computation to work out that circularity and redness are properties of one object at the same location. The visual system needs to “glue” together the circularity and the redness as both belonging to the same object, and do the same with blueness and squareness. This gluing process is feature integration. </p>
<figure class="align-center ">
<img alt="Image of a road with autumn trees in the periphery." src="https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&rect=49%2C90%2C5472%2C3284&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/375338/original/file-20201216-21-5exemd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">How much of the periphery do we perceive?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/road-through-algonquin-provincial-park-fall-492095068">Inga Locmele/Shutterstock</a></span>
</figcaption>
</figure>
<p>According to this theory, what happens in crowding is that the visual system detects the properties out there, but it can’t work out which properties belong to which object. As a result, what you see is a big mess of features, and your conscious mind cannot differentiate one letter from the others.</p>
<h2>New illusion</h2>
<p>Recently, we have discovered a new visual illusion that has raised a host of new questions for fans of crowding. We tested what happens when three of the objects are identical, for example in the following case:</p>
<figure class="align-center ">
<img alt="Image of a plus sign on the left and TTT on the right." src="https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=213&fit=crop&dpr=1 600w, https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=213&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=213&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=267&fit=crop&dpr=1 754w, https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=267&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/375364/original/file-20201216-13-pslac.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=267&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Illusion 3.</span>
<span class="attribution"><span class="source">TCUK</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>What do you see when you look at the +? We found that more than half of people said that there were only two letters there, rather than three. Indeed, follow-up work seems to indicate that they’re pretty confident about this incorrect judgment.</p>
<p>This is a surprising result. Unlike normal crowding, it’s not that you see a jumble of features. Rather, one whole letter neatly drops away from consciousness. This result fits poorly with the feature integration theory. It’s not that the visual system is detecting all of the properties out there, but just getting confused about which properties belong to which objects. Rather, one whole object has just disappeared.</p>
<p>We don’t think that a failure of feature integration is what’s going on. Our theory is that this illusion is due to what we call “redundancy masking”. In our view, the visual system can detect that there are several of the same letter out there, but it doesn’t seem to calculate correctly how many there are. Maybe it’s just not worth the energy to work out the number of letters with high precision.</p>
<p>When we open our eyes, we effortlessly get a conscious picture of our environment. However, the underlying processes that go into creating this picture are anything but effortless. <a href="https://theconversation.com/three-visual-illusions-that-reveal-the-hidden-workings-of-the-brain-80875">Illusions</a> like redundancy masking help us unpick how these processes work, and ultimately will help us explain consciousness itself.</p><img src="https://counter.theconversation.com/content/151864/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bilge Sayim receives funding from the French Agence Nationale de la Recherche (ANR), I-SITE ULNE, and the Swiss National Science Foundation (SNSF).</span></em></p><p class="fine-print"><em><span>Henry Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How we perceive what’s going on in the periphery can reveal a lot about our conscious minds.Henry Taylor, Birmingham Fellow in Philosophy, University of BirminghamBilge Sayim, Research Scientist in Psychology, Université de LilleLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1438132020-08-03T20:01:27Z2020-08-03T20:01:27ZThat’ll do, pig, that’ll do: Babe at 25, a trailblazing cinematic classic<figure><img src="https://images.theconversation.com/files/350738/original/file-20200802-24-1dn76f1.jpg?ixlib=rb-1.1.0&rect=0%2C2%2C1769%2C957&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Universal Pictures</span></span></figcaption></figure><p>It wasn’t the first time I’d crashed a film set. My first was the 1984 film <a href="https://www.imdb.com/title/tt0087085/">The Coolangatta Gold</a>. You’ll recognise me in one of the shots because I’m the only person wearing jeans on a beach. </p>
<p>Many years, and many film sets later, I heard from a friend who was working on a movie that mixed cute furry farm animals with state of the art technology. I knew instantly I had to get smuggled on this one. </p>
<p>That film is now one of the most cherished children’s classics of the 90s – <a href="https://www.imdb.com/title/tt0112431">Babe</a>. It has been 25 years since the talking pig first appeared on international screens, and revisiting the film today shows it has not lost any of its charm or wonder. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/yuzXPzgBDvo?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Directed by Chris Noonan and starring James Cromwell and Magda Szubanski as farmers Mr and Mrs Hoggett, Babe made <a href="https://www.boxofficemojo.com/release/rl390432257/weekend/">over US$250 million</a> (A$351 million) at the international box office. It spawned a (<a href="https://film.avclub.com/the-new-cult-canon-babe-pig-in-the-city-1798213543">much less successful</a>) sequel, <a href="https://www.imdb.com/title/tt0120595/">Babe: Pig in the City</a> in 1998.</p>
<p>The story of a young piglet raised by a Border Collie to become a sheep herding star was a crowd favourite not just because of the cute animals, but because of the dazzling technology, which makes these animals look like they really are speaking. </p>
<p>With its story of achieving in the face of adversity, coupled with simple lessons – cooperation is better than intimidation, you can aspire to more than what others expect of you – Babe still appeals to both adults and children alike.</p>
<h2>Hollywood is here</h2>
<p>I’d never heard of the book the film was based on, <a href="https://www.goodreads.com/book/show/25384384-the-sheep-pig">The Sheep Pig</a> by Dick King-Smith, so at the time I went looking for the Babe film set I knew next to nothing about the story apart from the fact that it featured talking animals. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/auteur-vs-computer-the-frightening-complexity-of-visual-effects-131458">Auteur vs computer: the frightening complexity of visual effects</a>
</strong>
</em>
</p>
<hr>
<p>When I got to Bowral, NSW, no one could resist telling me Hollywood had come to the region. Ask anyone where Babe was filming and they pointed along the road to Robertson, a 25-minute drive east, through gentle hills and valleys surrounding a small village. </p>
<figure class="align-center ">
<img alt="Babe peers in the window of the farmhouse." src="https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=326&fit=crop&dpr=1 600w, https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=326&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=326&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=410&fit=crop&dpr=1 754w, https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=410&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/350742/original/file-20200803-15-qy54wc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=410&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Hoggetts’ farmhouse looked like it sat perfectly in the landscape. But, of course, it was only a set.</span>
<span class="attribution"><span class="source">Universal Pictures</span></span>
</figcaption>
</figure>
<p>It was not long before the conglomeration of film trucks, tents and trailers came into view. But the thing that really caught my eye was the pens full of animals – <a href="https://ew.com/article/1995/08/18/real-pigs-steal-scene-babe/">dozens</a> of sheep, chickens, cows, dogs and pigs. </p>
<p>Because filming took months, the production needed to constantly bring in new baby animals – the ones they filmed one week would soon grow too large and need replacing. </p>
<p>I was used to the hustle and bustle of a <a href="https://nevadafilm.com/production-notes-hot-set/">hot set</a>, but this was like a circus.</p>
<h2>Taking risks</h2>
<p>Why was Babe such a massive worldwide hit? It wasn’t the first film to feature talking animals. Disney had made an industry out of this with <a href="https://www.imdb.com/title/tt0061852/">The Jungle Book</a>, <a href="https://www.imdb.com/title/tt0110357/">The Lion King</a> and, of course, Mickey, Minnie and Donald. </p>
<p>The combination of computer graphics and <a href="https://entertainment.howstuffworks.com/animatronic.htm">animatronics</a> used in Babe is common now, but 25 years ago it was cutting edge and known only to a few specialists: the movie even beat <a href="https://www.imdb.com/title/tt0112384/">Apollo 13</a> for the Oscar for <a href="https://www.indiewire.com/2020/08/babe-revolutionized-talking-animal-movie-25th-anniversary-1234577309/">best visual effects</a>.</p>
<p>Still, while Babe was a film big on technology, it didn’t let the multi-million-dollar special effects get in the way of the story and characters. The heart of the story is always the animals. </p>
<p>Children feel a strong bond with the animals in the film because they are portrayed as child-like themselves. In Babe, the central character is a piglet who speaks with a child’s voice. He has a sense of innocence and wonder. </p>
<figure class="align-center ">
<img alt="Farm animals look inside through a window." src="https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/350744/original/file-20200803-21-1dbk6o6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Babe wasn’t a film about what it means to be an animal – it was about what it means to be a child.</span>
<span class="attribution"><span class="source">Universal Pictures</span></span>
</figcaption>
</figure>
<p>As a kids’ movie, Babe also took big risks. The pigs, sheep and cattle on Hoggett’s farm were explicitly meat animals, and at one point, wild dogs kill a sheep – dark concepts for small children.</p>
<p>The scene where Babe is told humans eat pigs is particularly disturbing for younger viewers. But these risks are handled sensitively, and pay off. </p>
<p>It has even been said these scenes sparked <a href="https://animalstudiesrepository.org/cgi/viewcontent.cgi?article=1014&context=acwp_sata">positive arguments</a> for animal rights: Cromwell <a href="https://www.thevintagenews.com/2017/12/19/james-cromwell-vegan/">became vegan</a> after filming, and PETA calls it “<a href="https://www.peta.org/teachkind/humane-classroom/animal-friendly-class-movies/">a classic animal rights movie</a>”.</p>
<h2>Out of time</h2>
<p>I think the film was embraced internationally because it isn’t what we would classify as an “Australian” film. There’s very little in it that defines it as “Aussie”.</p>
<p>Hoggett’s farm looks more like Dorset or Devon in the south of England. There are no kangaroos or koalas, there’s no red-dirt “outback” in sight. Even the accents are a global mish-mash. Babe doesn’t belong to any one time or place. </p>
<figure class="align-center ">
<img alt="Hoggett, Babe, and a sheepdog look over the competition field." src="https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=326&fit=crop&dpr=1 600w, https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=326&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=326&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=410&fit=crop&dpr=1 754w, https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=410&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/350745/original/file-20200803-17-2gvgpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=410&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The film resonates because it stands apart from time and place.</span>
<span class="attribution"><span class="source">Universal Pictures</span></span>
</figcaption>
</figure>
<p>Audiences of all nationalities could enjoy it because it was truly international.
Ultimately, though, Babe stays in our hearts and minds because it was a good story, well told. Do we really need anything else? That’ll do, pig, that’ll do.</p><img src="https://counter.theconversation.com/content/143813/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daryl Sparkes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Babe combined cutting edge visual effects with a moving story of success in the face of adversity that still resonates today.Daryl Sparkes, Senior Lecturer (Media Studies and Production), University of Southern QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1314582020-07-06T05:02:07Z2020-07-06T05:02:07ZAuteur vs computer: the frightening complexity of visual effects<figure><img src="https://images.theconversation.com/files/318996/original/file-20200306-106579-17g9h03.jpg?ixlib=rb-1.1.0&rect=0%2C2%2C1777%2C936&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Marvel Studios</span></span></figcaption></figure><p>When Star Wars was awarded the Academy Award for Best Visual Effects in 1978 it marked the first time the visual component of effects was <a href="https://muse.jhu.edu/article/479266/pdf">differentiated from sound</a>. </p>
<p>Yet, even in this moment when visual effects (VFX) was first recognised by the Academy, it was already being <a href="https://www.amazon.com/Riders-Raging-Bulls-Sex-Drugs-Rock/dp/0684857081/ref=sr_1_1?keywords=biskind+easy+riders&qid=1582943161&sr=8-1">pointed to</a> as the destroyer of the auteur renaissance: a Hollywood era in which directors like Martin Scorsese, Stanley Kubrick and even George Lucas himself enjoyed unprecedented freedom to make the films they wanted to make with full studio backing.</p>
<p>The financial success of films like Star Wars turned studios towards a strategy of <a href="https://www.vox.com/2015/12/15/10119474/how-star-wars-changed-hollywood">event films</a>. These productions didn’t rely on specific directors, but on spectacle and the worldwide distribution only a prominent studio could mount. The high costs of event films ensured studios more tightly controlled production, and the tension between director-driven filmmaking and VFX was born. </p>
<p>As technology has progressed, VFX has only become more profitable, complex and difficult for directors to control. 2019’s <a href="https://www.imdb.com/title/tt4154796/">Avengers: Endgame</a> contains 2,500 VFX shots and is the <a href="https://www.vox.com/2019/7/22/20703487/avengers-endgame-avatar-biggest-movie-all-time-box-office">highest grossing</a> film of all time.</p>
<h2>What exactly are VFX?</h2>
<p>It is hard to land on an agreed-upon definition for VFX, and there are several terms to unpack before you get there. </p>
<p><em>Effects</em> is the catchall term for the visual tricks in film and television.</p>
<p><em>Practical effects</em> or <em>special effects</em> are solutions accomplished in camera using animatronics like <a href="https://www.imdb.com/title/tt0083866/?ref_=fn_al_tt_1">E.T.</a>; miniatures like the flying cars in <a href="https://www.imdb.com/title/tt0083658/?ref_=fn_al_tt_1">Blade Runner</a>; prosthetics like the hobbit feet in <a href="https://www.imdb.com/title/tt0120737/?ref_=fn_al_tt_1">The Lord of the Rings</a>; and pyrotechnics like the explosions in <a href="https://www.imdb.com/title/tt1392190/?ref_=fn_al_tt_1">Mad Max: Fury Road</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ent02yItm60?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Behind-the-scenes video shows the practical effects used in Mad Max: Fury Road.</span></figcaption>
</figure>
<p><em>Visual effects</em> create the required imagery off-set using computers. VFX might be as simple as compositing one image onto another – like when an actor filmed in front of a green screen is placed into a different environment – or as complicated as creating a completely digital environment, like the world of Pandora in <a href="https://www.imdb.com/title/tt0499549/?ref_=fn_al_tt_2">Avatar</a>.</p>
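<p>At its simplest, that green-screen compositing step is just a per-pixel choice between two images. The sketch below is a toy illustration only, assuming NumPy and a crude “green dominates” test (the function name and threshold are made up for this example; production keyers are far more sophisticated):</p>

```python
import numpy as np

def chroma_key_composite(foreground, background, threshold=1.3):
    """Toy green-screen composite: wherever the green channel clearly
    dominates red and blue, show the background instead of the actor.
    Both images are float arrays of shape (H, W, 3), values in [0, 1]."""
    r, g, b = foreground[..., 0], foreground[..., 1], foreground[..., 2]
    # A pixel counts as "green screen" if green outweighs both other channels.
    is_screen = (g > threshold * r) & (g > threshold * b)
    mask = is_screen[..., np.newaxis]  # broadcast the mask over the RGB axis
    return np.where(mask, background, foreground)

# A 1x2-pixel test frame: one pure-green pixel, one red "actor" pixel.
fg = np.array([[[0.0, 1.0, 0.0], [0.9, 0.1, 0.1]]])
bg = np.array([[[0.2, 0.2, 0.8], [0.2, 0.2, 0.8]]])
out = chroma_key_composite(fg, bg)
```

<p>A real keyer also handles soft edges, motion blur and green “spill” on the actor; the hard on/off mask here only shows the basic idea of compositing one image onto another.</p>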
<p>A dozen or more artists with individualised skill sets might touch a complex shot. Different kinds of artists create geometric models of characters or props, create the textures for those models, place those models in the scene, animate the characters, simulate the costumes, and render the final images. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/iPDTSYR853U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Different artists would create each of the VFX layers in The Great Gatsby.</span></figcaption>
</figure>
<p>VFX production generally takes place at independent studios, with studios like Disney or Universal acting as clients. </p>
<p>This creates a paradigm in which the client studio serves as an intermediary between the director and the VFX artists. The director rarely talks to or even sees the hundreds of artists producing this critical part of the film. </p>
<p>To further complicate things, the number of VFX shots in a blockbuster is often so large a single vendor cannot take on all of them. It is <a href="https://www.nytimes.com/2017/05/04/magazine/why-hollywoods-most-thrilling-scenes-are-now-orchestrated-thousands-of-miles-away.html">common practice</a> to spread VFX sequences between multiple vendors in multiple countries.</p>
<h2>An invisible job</h2>
<p>For directors, VFX become ephemeral and hard to pin down. </p>
<p>These created worlds do not exist until they do, and the processes by which they materialise rely on massive distributed systems of highly specialised and anonymous artists in concert with complex computer processes.</p>
<p>It is no wonder filmmakers go on to speak about this essential aspect of their productions in a way that reflects their alienation. </p>
<p>Popular directors like Christopher Nolan and JJ Abrams have extensively used VFX while decrying them as inferior to in-camera effects. While Abrams touted 2015’s Star Wars: The Force Awakens as a return to the <a href="https://screenrant.com/star-wars-7-jj-abrams-interview/">practical aesthetic</a> of the original trilogy, roughly <a href="https://www.fxguide.com/fxfeatured/the-force-returns/">2,100 shots</a> in the film used VFX.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HgzxrwXHCoU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Some of the VFX for Star Wars: The Force Awakens.</span></figcaption>
</figure>
<p>In reference to 2017’s Dunkirk, Nolan <a href="https://variety.com/2017/film/news/christopher-nolan-dunkirk-oscars-movies-tv-spielberg-1202607836/">said</a>: “The older techniques are working better. With visual effects, after a while the contemporary tricks look cheaper.”</p>
<p>While Dunkirk did use practical effects, the film <a href="https://www.indiewire.com/2018/01/dunkirk-christopher-nolans-visual-effects-oscar-1201913630/">relied heavily</a> on visual effects to augment and enhance the action. </p>
<p>Influenced by the language from directors, critics often cite bad effects as evidence that Hollywood is ruining movies. Writing for Variety on Avengers and “<a href="https://variety.com/2015/film/news/avengers-age-of-ultron-cgi-special-effects-1201487125/#!">the age of CGI overkill</a>”, Brian Lowry said:</p>
<blockquote>
<p>While the results can be visually astounding, the movies regularly feel as lifeless and mechanized as the technology responsible.</p>
</blockquote>
<p>Yet when effects are good, they can be virtually undetectable. When a medium’s success is predicated on its self-erasure, we are left with a discourse which only ever identifies it as a problem – or never acknowledges it at all.</p>
<h2>Worth doing well</h2>
<p>There is no disputing VFX are often used in the service of what critic Jonathan Romney calls “<a href="https://aeon.co/essays/can-the-creative-weirdness-of-cgi-be-recovered-from-cliche">the permanent apocalypse</a>” of blockbuster films: an unending cycle of computer-generated mayhem.</p>
<p>But if these movies are bad, it’s not because they use VFX. It’s because they didn’t know how to use them. </p>
<p>While 2019’s <a href="https://www.imdb.com/title/tt5697572/?ref_=fn_al_tt_1">Cats</a> was definitely disturbing, “bad VFX” are not the reason the film bombed, nor were the VFX “bad” because of the skill of the artists who made them. As the Visual Effects Society <a href="https://collider.com/vfx-society-responds-to-oscars-cats-joke/">said</a>: “the best visual effects in the world will not compensate for a story told badly.”</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=250&fit=crop&dpr=1 600w, https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=250&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=250&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=314&fit=crop&dpr=1 754w, https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=314&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/318261/original/file-20200303-66106-1871spv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=314&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Don’t blame the VFX artists for Cats.</span>
<span class="attribution"><span class="source">Warner Bros.</span></span>
</figcaption>
</figure>
<p>VFX is a powerful medium. It can be used in ways that are predictable, or in ways that expand the boundaries of our collective imagination. </p>
<p>But until VFX production becomes a better integrated part of the creative process, it will rarely be used in the service of a better kind of film.</p><img src="https://counter.theconversation.com/content/131458/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sonya Teich is affiliated with Women in Film and TV.</span></em></p>Avengers: Endgame contained 2,500 visual effects – or VFX – shots. So what are visual effects, and how do they get to the big screen?Sonya Teich, Lecturer in Design, Visual Effects Projects, Te Herenga Waka — Victoria University of WellingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1310992020-02-05T11:31:36Z2020-02-05T11:31:36ZOscars 2020: Why people are talking about visual effects<figure><img src="https://images.theconversation.com/files/313513/original/file-20200204-41476-18q7c68.jpg?ixlib=rb-1.1.0&rect=5%2C0%2C1763%2C752&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Universal Pictures</span></span></figcaption></figure><p>As the presentation of the 2020 Academy Awards approaches, there has been a <a href="https://www.goldderby.com/article/2020/1917-visual-effects-oscar-predictions/">lot of buzz</a> around the visual effects category. Two films – Sam Mendes’s 1917 and Martin Scorsese’s The Irishman – have, in particular, <a href="https://www.washingtonexaminer.com/news/the-irishman-and-1917-among-oscar-nominations-for-best-picture">attracted a lot of attention</a> for the tricks they use to immerse the viewer in the characters and storyline.</p>
<p>The first film to win an award for visual effects, in the first ever Oscars ceremony in 1929, also won best picture. American special effects artist and film director <a href="http://americanpomeroys.blogspot.com/2013/02/roy-jobbins-pomeroy-oscar-winner-and.html">Roy Pomeroy</a> won for Wings, a first world war movie featuring breathtakingly realistic dogfight sequences. His work still looks amazing, given the tools he had to work with. In the 90 years since he won his award, though, visual effects have become ever more sophisticated.</p>
<h2>Big bangs theory</h2>
<p>If we take a look at the films that are nominated for Best Visual Effects in this year’s Academy Awards, we see five very different types of film.
Star Wars: The Rise of Skywalker is the continuing sci-fi saga of the battle between the Jedi and the Sith. A set of tried-and-tested visual effects techniques were <a href="https://screenrant.com/star-wars-rise-skywalker-visual-effects-interview/">used in the film</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BdHCp62jC84?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>This included the return of a <a href="https://www.insider.com/how-carrie-fisher-was-in-star-wars-the-rise-of-skywalker-2020-1">fully digital replacement for Princess Leia</a> using pieces of old footage of the late Carrie Fisher and computer-generated elements to create a complete character that blended seamlessly into the new narrative. Most of the environments were created in the computer and then composited with actors’ performances against a green screen that allows backgrounds to be replaced with digital sets.</p>
<p>Avengers: Endgame is the final episode of a comic book-based world of superheroes and their enemies, brought together in one final, epic battle. Green screens played a huge part in this film as well, allowing intricate digital environments to play their part in the storytelling.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Jxn8oIPT444?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>As you’d expect, there are plenty of pyrotechnics, explosions and battle scenes that were made with animated digital characters.</p>
<h2>Rumble in the jungle</h2>
<p>The Lion King is a computer-generated remake of the Disney classic, which was originally animated largely by hand in 2D. Many of the techniques used in this movie were originally developed for the making of the 2016 remake of The Jungle Book which, like The Lion King, was reworked as a fully digital film – apart from Mowgli, who was played by a real boy. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rBZ7s6sa71Q?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>In The Lion King, director Jon Favreau developed a technique that he felt would inform the animation of the animals in a far more realistic way than how animation is traditionally created. Rather than simply recording voice actors in a sound booth, he put them in a studio and filmed them acting together so that animators had nuanced reference to work with to ensure the tiniest of reactions were captured in the creatures’ performances.</p>
<p>Virtual reality also played a big part in the making of the film. Camera operators were able to use digital sets to see the environments and move digital cameras in a realistic way.</p>
<h2>Forever young</h2>
<p>The Irishman jumps between present-day action and as far back as the 1950s, made more complicated by the fact that the characters are played by the same actors. The point of difference is that prosthetics and makeup weren’t used; instead, stars including Robert De Niro and Al Pacino were “de-aged” using computers, with images of the actors from photographs and previous films used to build “digital masks” that replaced the actors’ real faces.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OF-lElIlZM0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>This meant that De Niro, who plays the lead role, was, at 74 when filming began, playing the role of a man in his 30s and, by the end of the film, the same man in his 80s. How <a href="https://www.wired.com/story/the-irishman-netflix-ilm-de-aging/">successfully</a> they achieved this has been <a href="https://www.vanityfair.com/hollywood/2019/12/the-de-aging-technology-that-could-change-acting-forever">hotly debated</a> – but nobody can doubt the expertise with which the artists carried out their task.</p>
<h2>Spot the joins</h2>
<p>The final film nominated is the first world war epic 1917, co-written, produced and directed by Sam Mendes. Loosely based on a story Mendes was told by his grandfather, the film relies on a <a href="https://www.wired.co.uk/article/1917-sam-mendes-film-one-shot-vfx">single shot depiction of the entire narrative</a>, following the main character on his journey to get a message to the front line. This technique, also used in 2015’s best picture winner, Birdman, required meticulous planning to ensure that the cuts that occurred were invisible to the viewer. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OsbeS2O24SA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Camera moves were choreographed to allow two scenes that were filmed in the same location at different times to be taken into the computer and “stitched” together as if they were one complete shot. Doing this over and over enabled the illusion of one continuous sequence.</p>
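<p>One way to picture that “stitch” is as a cross-fade over the frames where the two takes overlap. The following is a minimal sketch under simplifying assumptions – frames reduced to grey-scale NumPy arrays, a plain linear fade, and an invented function name:</p>

```python
import numpy as np

def stitch_takes(take_a, take_b, overlap):
    """Toy join of two takes into one apparent continuous shot:
    cross-fade the last `overlap` frames of take_a into the first
    `overlap` frames of take_b so there is no hard cut.
    Each take is an array of frames with shape (n_frames, H, W)."""
    # Fade weights rise from 0 (all take_a) to 1 (all take_b).
    alpha = np.linspace(0.0, 1.0, overlap)[:, None, None]
    blended = (1 - alpha) * take_a[-overlap:] + alpha * take_b[:overlap]
    return np.concatenate([take_a[:-overlap], blended, take_b[overlap:]])

# Two 4-frame "takes" of 2x2 frames: one all-black, one all-white,
# joined with a 2-frame overlap.
a = np.zeros((4, 2, 2))
b = np.ones((4, 2, 2))
joined = stitch_takes(a, b, overlap=2)
```

<p>A real stitch also warps, retimes and colour-matches the two takes so the join is genuinely invisible; the linear fade here just illustrates the blend at the heart of it.</p>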
<p>Like many films though, 1917 used a host of other visual effects techniques that were unseen. This is often regarded as the pinnacle of success in visual effects – an effect that can’t be seen versus one that is smacking you in the face with a large, wet fish.</p>
<h2>Appliance of science</h2>
<p>Some of the nominated movies need visual effects to create worlds and creatures that don’t exist, while some employ tricks to enhance the cinematic experience and the ability of the filmmaker to tell their story. All of them use the technical expertise of visual effects artists to bring the director’s vision to the screen.</p>
<p>And there’s a great deal of scientific knowhow that goes into creating cinematic illusion. Interstellar, which won the visual effects Oscar for 2014, involved recreating the appearance of a black hole. To do this, visual effects artists worked with scientists to accurately model the phenomenon. The results were so advanced that scientists have since <a href="https://www.wired.com/2014/10/astrophysics-interstellar-black-hole/">cited its importance</a> to their ongoing work.</p>
<p>This scientific knowledge underpins flawless visual effects production. Not only does a visual effects artist need to know how their tools work, they need to be able to understand the science that informs the visuals we see on the screen. Human and animal anatomy, lighting, pyrotechnics, fluid simulation, mechanical engineering and robotics are just a few of the scientific disciplines that add strings to a visual effects artist’s bow.</p>
<p>So, when we talk about visual effects and the people who create them, remember the science that supports almost everything they do. Every frame is examined in minute detail – so much so that the casual viewer might never grasp the hours that go into making these films look the way they do, allowing us to sit back and enjoy the story.</p><img src="https://counter.theconversation.com/content/131099/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Chris Williams does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The range of movies in the visual effects category shows how advanced this science has become.Chris Williams, Senior Principal Academic, National Centre for Computer Animation, Bournemouth UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1265592019-11-21T13:58:54Z2019-11-21T13:58:54ZWhen de-aging De Niro and Pacino, ‘Irishman’ animators tried to avoid pitfalls of the past<p>If you thought 76-year-old Robert De Niro and 79-year-old Al Pacino were done starring in blockbuster gangster films, think again.</p>
<p>Both assume lead roles in Martin Scorsese’s “The Irishman,” which chronicles the life of hitman Frank Sheeran and labor union leader Jimmy Hoffa over several decades. </p>
<p>Different actors weren’t cast to play the younger versions of Sheeran and Hoffa. Instead, Scorsese and his production team utilized “de-aging” technology to make De Niro and Pacino appear younger.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/zbPKbT2B7bE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Moshe Mahler talks about animators’ struggle to avoid the uncanny valley.</span></figcaption>
</figure>
<p>To de-age actors, a visual effects team creates a computer-generated, younger version of an actor’s face and then replaces the actor’s real face with the synthetic, animated version. </p>
<p>Human beings are actually quite good at picking up on even the <a href="https://doi.org/10.1016/j.visres.2017.05.011">smallest of details of the human face</a>. For this reason, we had several project lines devoted to advancing these types of digital human technologies at <a href="https://www.disneyresearch.com/">Disney Research</a>, where I spent nearly a decade of my career.</p>
<p>Animators need to avoid what’s called “the uncanny valley” – a pitfall in realistic, computer-generated animation that animators have been struggling to overcome for decades.</p>
<h2>Into the uncanny valley</h2>
<p>In 2010, I was a contributing author to a paper titled “<a href="http://graphics.cs.cmu.edu/projects/MMM/">The Saliency of Anomalies in Animated Human Characters</a>.” </p>
<p>In the paper, we found that audiences are much more sensitive to distortions in computer-generated faces, even when larger, seemingly more obvious distortions are present on the body. In other words, there’s more room for error when creating computer-generated bodies and a much smaller margin for error when creating computer-generated faces. </p>
<p>This brings us to the uncanny valley. The term refers to the uncomfortable feeling viewers might experience when they see computer-generated faces that “aren’t quite right.” </p>
<p><a href="https://web.ics.purdue.edu/%7Edrkelly/MoriTheUncannyValley1970.pdf">The term was coined in 1970</a> by robotics professor Masahiro Mori. Mori hypothesized that as a humanoid becomes more lifelike, an audience’s “familiarity” toward it increases until a point where the humanoid is almost lifelike, but not perfectly lifelike. At this point, subtle imperfections lead to responses of repulsion or rejection. </p>
<p>The term “uncanny valley” comes from visualizing this idea on two axes.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=311&fit=crop&dpr=1 600w, https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=311&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=311&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=390&fit=crop&dpr=1 754w, https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=390&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/302719/original/file-20191120-467-1lgxomc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=390&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The hypothesized graph for the uncanny valley, redrawn from Masahiro Mori’s 1970 article on the subject.</span>
<span class="attribution"><a class="source" href="https://web.ics.purdue.edu/~drkelly/MoriTheUncannyValley1970.pdf">J. Hodgins et al.</a>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The x-axis describes “human likeness” or realism, while the y-axis describes “familiarity,” empathy or emotional engagement. The steep falloff in the graph represents the uncanny valley – the point at which people recoil and feel less empathy. The effect is stronger if the humanoid is moving. </p>
<h2>Animating appealing people</h2>
<p>While the hypothesis originated in the robotics community, the concept of the uncanny valley gained popularity in the animation industry. For animators, the word “appeal” may be the closest relative we have to Mori’s familiarity.</p>
<p>Appeal is one of the 12 basic principles of animation that animators Frank Thomas and Ollie Johnston outline in their book, “<a href="https://books.google.com/books?id=2x0RAQAAMAAJ&q=the+illusion+of+life&dq=the+illusion+of+life&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiD49-2pu3lAhWQdd8KHUx9DuEQ6AEwAHoECAUQAg">The Illusion of Life</a>.”</p>
<p>In animation, appeal has to do with the character’s magnetism – whether he or she is beautiful, cuddly and kind, or ugly, disgusting and mean. Animated human characters, like <a href="https://d.newsweek.com/en/full/455187/elsa-frozen.jpg">Elsa</a> in “Frozen,” tend to be stylized in a way that caricatures human features, which allows animators to caricature their motion as well. </p>
<p>Two computer-animated films from 2004, “The Polar Express” and “The Incredibles,” highlight this quandary. </p>
<p>“<a href="https://www.imdb.com/title/tt0317705/?ref_=nv_sr_1?ref_=nv_sr_1">The Incredibles</a>” was the first Pixar film that starred actual human beings instead of toys, bugs, fish or monsters. But the animation team didn’t try to make them look like real humans: They had larger eyes, soft, rounded silhouettes and simplified features. These types of design decisions work toward the “magnetism” of a character that most audiences ultimately find appealing. </p>
<p>“<a href="https://www.imdb.com/title/tt0338348/">The Polar Express</a>,” on the other hand, used performance capture technology so Tom Hanks could play five lifelike characters, including the 9-year-old protagonist. </p>
<p>Mapping a 50-year-old’s facial movements onto a 9-year-old boy’s face ended up creating a whole host of problems. For example, how should a moment where Hanks is bursting with excitement be transferred to a 9-year-old’s face? In order to use performance capture data to transplant an actor’s expressions onto an animated character, animators need to do what’s called “motion retargeting.” Because this was new territory for animators – and due to the technological limitations of the time – the nuanced facial expressions that make Hanks a talented actor were lost. </p>
<p>In retrospect, this is a fairly extreme example of de-aging – and one that didn’t sit well with most viewers. </p>
<p><a href="https://www.youtube.com/watch?v=ve_fMwJ1GJY">The animated boy</a> seemed “off,” with audiences and critics <a href="https://www.rollingstone.com/movies/movie-reviews/the-polar-express-253058/">disturbed</a> by what Rolling Stone’s Peter Travers described as the film’s “spooky” and “lifeless” animation.</p>
<h2>Adapting to the technology</h2>
<p>Not all trips into the uncanny valley end up fruitless. Animators can learn from experience.</p>
<p>For example, in 1988, Pixar released the short film “<a href="https://www.imdb.com/title/tt0096273/">Tin Toy</a>,” in which an animated baby torments a group of toys. At the time, Pixar hadn’t developed the technology needed to depict appealing humanoid characters. <a href="https://www.imdb.com/title/tt0096273/mediaviewer/rm3826469376">The baby</a> almost evokes <a href="https://hips.hearstapps.com/digitalspyuk.cdnds.net/18/38/1537686437-chucky-doll.jpg">Chucky</a> from the horror film “Child’s Play.”</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=362&fit=crop&dpr=1 600w, https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=362&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=362&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=455&fit=crop&dpr=1 754w, https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=455&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/302725/original/file-20191120-515-1ylx4xt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=455&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The baby in Pixar’s ‘Tin Toy’ is unsettling, to say the least.</span>
<span class="attribution"><a class="source" href="https://pixar-planet.fr/wp-content/uploads/2010/04/billy-personnage-tin-toy-04.jpg">Pixar</a></span>
</figcaption>
</figure>
<p>The film’s shiny plastic and metal toys, on the other hand, worked well within the constraints of the era’s computer animation technology. This is largely why the ensuing “Toy Story” franchise ended up featuring toys, not humans, as the protagonists.</p>
<p>It also helps to apply performance capture technology on computer-generated characters who aren’t fully human. That’s what James Cameron did in his 2009 blockbuster, “<a href="https://www.imdb.com/title/tt0499549/?ref_=fn_al_tt_2">Avatar</a>.”</p>
<p>The film’s Na’vi species are humanlike but remain an alien species. They’re blue. They have large, radiant eyes. The bridge of their nose is wide and stiff, while the tip of their nose is catlike. </p>
<p>Importantly, however, the animated characters of the film still look somewhat like the actors who played them. <a href="https://i.ytimg.com/vi/hXej4xfDfhM/maxresdefault.jpg">Sigourney Weaver’s avatar</a> looks very much like Sigourney Weaver, which helps avoid the “retargeting” problem that occurred in “Polar Express.” Audiences don’t expect the alien race to look or move exactly like humans. </p>
<h2>Surmounting the valley</h2>
<p>While the technology continues to improve, recreating realistic human faces remains one of the most difficult tasks for animators. </p>
<p>A strong example of de-aging technology can be seen in “<a href="https://www.imdb.com/title/tt1856101/">Blade Runner 2049</a>.” The shot of a de-aged Sean Young is a stunning technical feat, but the scene also doesn’t ask too much of the computer-generated performance. In fact, the computer-generated version of Young only says a couple of sentences. Most of all, the use of the technology actually serves the story. The moment is designed to be eerie; audiences are supposed to be unsettled.</p>
<p>Because “The Irishman” is based on a true story, with realistic characters and realistic faces, audiences are much more sensitive to the use of de-aging technologies. </p>
<p>My guess is that some viewers won’t notice the technology, some will marvel at it and others will find it distracting. I usually fall into the latter two categories. It is incredibly distracting to me despite the impressive quality of the de-aging. </p>
<p>I often teach my students that when working with new technology, just because we can, that doesn’t always mean we should. </p>
<p>Interestingly, De Niro won his first Academy Award for his portrayal of a young Vito Corleone in “<a href="https://www.imdb.com/title/tt0071562/?ref_=nv_sr_2?ref_=nv_sr_2">The Godfather: Part II</a>,” while Marlon Brando had played the older Vito Corleone in the original film.</p>
<p>If Francis Ford Coppola had today’s technology and could have simply “de-aged” Brando, would he have done so? And how would that have changed one of the most memorable gangster films of all time?</p>
<p class="fine-print"><em><span>Moshe Mahler is also the owner of BIG eMotion Technologies.
The study "The Saliency of Anomalies in Animated Human Characters" was supported in part by Disney Research, the Irish Research Council for Science, Engineering and
Technology (IRCSET), and NSF CCF-0811450.</span></em></p>For decades, animators have attempted to recreate realistic human faces without entering what’s called the ‘uncanny valley.’Moshe Mahler, Special Faculty, Carnegie Mellon UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/929892018-03-13T18:57:13Z2018-03-13T18:57:13ZI’ve always wondered: why is a green screen green?<figure><img src="https://images.theconversation.com/files/209862/original/file-20180312-30983-jyp3kp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Green screen technology has become a common feature of film and TV production.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/vancouverfilmschool/33048640241/in/photolist-SmoVm6-YVV78a-5sKCX3-cigNZj-cigm2w-csaFfm-c8DoZU-csaQAq-csaxMu-9DJ2Pp-5JCskT-5ZLA9u-csaQ55-66K8JS-6ay4Ps-8s5LN7-csaxoC-csaXcw-5JC9aL-5t1koc-5yo9Jm-bXnYVB-gdHih-cthTVY-csbc4y-bWxD4X-bvzpex-ceL4wu-ahqS6j-ccpXp7-ctackE-ceL3jE-5hEAqB-coiCfC-8DSkm9-5sZGsV-bVQBET-abvxTy-caR8vw-ctdsX5-ax7DPQ-cuDQMo-ccpojQ-bXoiJg-bWexu6-cdBG9f-ax7DK9-ctjc1J-ax7EFA-goXzqv">Vancouver Film School/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p><em>This is an article from I’ve Always Wondered, a series where readers send in questions they’d like an expert to answer. Send your question to alwayswondered@theconversation.edu.au</em></p>
<hr>
<p><strong>I’ve always wondered why is a green screen green in TV and film making, as opposed to blue or white or beige? – Misha from Brunswick East (The Conversation’s Editor)</strong></p>
<p>If you’ve ever watched a modern blockbuster film, then you’ve almost certainly seen the magic of green screen compositing – or chroma keying – in action. The technique enables film and TV producers to record actors in front of a plain green backdrop, then replace the backdrop with special effects. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DVvR7s5D22Y?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Green screens were originally blue when chroma keying was first used in 1940 by Larry Butler on The Thief of Bagdad – which won him the Academy Award for special effects. Since then, green has become more common.</p>
<p>Why? The really short answer is that green screens are green because people are not green. In order for the effect to work, the background must use a colour that isn’t used elsewhere in the shot – and green is nothing like human skin tone. Of course, people wear green clothes, green jewellery and occasionally have green hair or green makeup, but all those things can be changed in a way that skin colour can’t be. </p>
<p>If you are lit by white light, from the sun or a bulb, the light hitting you contains the full visible spectrum of wavelengths. And human skin reflects broadly similar ratios of each colour of the spectrum. If we reflected one colour much more than the others, we’d appear to be a saturated colour. </p>
<p>We’re used to describing skin colour with colour-words, such as brown, pink, white, black or even yellow, but from a colour science perspective, we’re all orange. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/does-colour-really-affect-our-mind-and-body-a-professor-of-colour-science-explains-84382">Does colour really affect our mind and body? A professor of colour science explains</a>
</strong>
</em>
</p>
<hr>
<h2>The elements of colour</h2>
<p>Colour is defined by our perception, not by physics. Humans have three types of colour-sensitive cells in the retinas of our eyes, which have different colour sensitivities. We can think of them as being “red”, “green” and “blue” sensors, although their sensitivities overlap considerably and are closer to yellow, blue-ish green and blue. </p>
<p>To fully describe a colour, it’s helpful to think of it using three numbers. This could be the red, green, blue intensities (RGB) or the following representation known as HSV. “Hue” (H) corresponds closely to what we loosely call colour, “saturation” (S) corresponds to how rich a colour is, and “value” (V) loosely corresponds to the brightness. These three colour coordinates explain how we might describe a colour as a “dark grey green” or a “light rich blue”. </p>
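<p>To make the HSV idea concrete, here is a minimal sketch using Python’s standard-library <code>colorsys</code> module. The function name and the example colour are illustrative, not from the article:</p>

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert RGB (0-1 floats) to hue in degrees, saturation and value."""
    # colorsys reports hue as a fraction of a full turn (0-1).
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360, s, v

# A typical skin tone turns out to be a low-saturation orange,
# whatever its brightness.
hue, sat, val = rgb_to_hsv_degrees(0.85, 0.62, 0.48)  # hue lands near 20 degrees
```

<p>Describing the same colour as “a light, desaturated orange” (HSV) rather than three RGB intensities is much closer to how we talk about it.</p>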
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/209856/original/file-20180312-30954-444n55.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Figure 1: Representing colour as a hue, saturation and value (brightness) is closer to how we perceive colour, describe it and remember it.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:HSV_color_solid_cylinder_alpha_lowgamma.png">Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-genuinely-believable-cgi-actor-it-wont-be-long-71410">A genuinely believable CGI actor? It won't be long</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0921889001001221">Human skin</a> ranges in brightness (or “value” as it’s shown in the diagram above), but the hue and saturation don’t vary much at all. There are some good physiological reasons for this. In essence, our outer skin layer (epidermis) behaves optically as a neutrally coloured filter over our dermis, which is red largely due to the colour of blood that perfuses it. </p>
<h2>Cameras mimic the human eye</h2>
<p>Most still and video cameras work a little like our eyes, with a grid of sensors – or pixels – which detect red, green or blue. </p>
<p>But rather like we perceive things as having a brightness and a colour, most video electronics and video recorders convert these inputs into separate brightness and colour information, called luminance (or luma) and chrominance (or chroma) in video jargon.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=218&fit=crop&dpr=1 600w, https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=218&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=218&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=274&fit=crop&dpr=1 754w, https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=274&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/210013/original/file-20180312-30989-9q5iel.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=274&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Figure 2. A full-colour image (right) can be decomposed into a luminance (brightness) component (left), which has no colour information, and a chrominance (colour) component (centre) which has no brightness information. The luminance image is what a black-and-white camera records.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Luma_Chroma_both.png">Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The luminance is basically the brightness, while the chrominance is the location in the hue/saturation colour circle. </p>
<p>When colour TV was introduced, sending the chroma component on a separate sub-channel allowed existing black-and-white TVs to receive the luma channel only and work with the new colour signal. Analogue TV is extinct, but digital TV and internet video still encode luma and chroma separately. This is partly for data compression reasons, but also because it is a more natural representation for correcting colour, and for playing video tricks with green screens.</p>
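<p>As a rough sketch of that luma/chroma split, the Rec. 601 conversion used by standard-definition video derives one luma and two colour-difference channels from RGB. The coefficients below are the standard Rec. 601 values, though real encoders also add offsets and quantisation:</p>

```python
def rgb_to_ycbcr(r, g, b):
    """Rec. 601 split of an RGB pixel (0-1 floats) into luma + two chromas."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: perceived brightness
    cb = 0.564 * (b - y)                   # blue-difference chroma
    cr = 0.713 * (r - y)                   # red-difference chroma
    return y, cb, cr

# A neutral grey carries brightness but no colour information at all:
y, cb, cr = rgb_to_ycbcr(0.5, 0.5, 0.5)   # cb and cr come out (near) zero
```

<p>A black-and-white TV effectively just displays <code>y</code>; the chroma channels carry everything a colour set needs on top.</p>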
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/tupacs-rise-from-the-dead-was-sadly-not-holography-6641">Tupac's rise from the dead was, sadly, not holography</a>
</strong>
</em>
</p>
<hr>
<h2>How green screens work</h2>
<p>The other name for a green screen – chroma key – gives away how it works. Video production equipment called a chroma keyer looks at the chrominance data.</p>
<p>Pixels that fall in a narrow pie-slice of the hue-saturation circle, centred on the green hue, are deemed to be the green screen. A video switch replaces them with pixels from the background video channel – for example, a weather map. Pixels with all other hues – orange (skin tones), red, yellow, magenta and blue – coming from the camera are let through. </p>
<p>The resulting video output is the weatherperson superposed in front of the weather map. It doesn’t matter at all if the background video has green in it, but if the person on camera is wearing any green, the background will be keyed through that area, and they will appear transparent!</p>
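<p>The keyer’s core per-pixel decision can be sketched in a few lines of Python. The thresholds below are illustrative, not values from any real broadcast keyer:</p>

```python
import colorsys

GREEN_HUE = 120 / 360      # centre of the keyed pie-slice (green)
HUE_TOLERANCE = 40 / 360   # half-width of the slice
MIN_SATURATION = 0.4       # washed-out pixels are never keyed

def is_key_colour(r, g, b):
    """True if an RGB pixel (0-1 floats) falls in the green pie-slice."""
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    hue_dist = min(abs(h - GREEN_HUE), 1 - abs(h - GREEN_HUE))  # hue wraps
    return hue_dist < HUE_TOLERANCE and s > MIN_SATURATION

def composite(fg, bg):
    """Per-pixel switch: key-coloured foreground pixels show the background."""
    return bg if is_key_colour(*fg) else fg

skin = (0.85, 0.62, 0.48)       # orange-ish: passes through
screen = (0.10, 0.90, 0.20)     # saturated green: keyed out
weather_map = (0.20, 0.30, 0.80)
```

<p>Run over every pixel of a frame, this reproduces the weatherperson-over-map effect – including the flaw that a green tie would turn transparent.</p>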
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ypndvpEjyHg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Blue screens work almost as well. Because green and blue are both well away from orange-red on the hue circle, both are suitable for chroma-keying people. If Kermit needed to be keyed on top of a background, a blue screen would be essential, whereas Superman needs a green screen. </p>
<p>Film-based compositing methods preferred blue screens, due to the availability of blue-sensitive films. Green screen works slightly better for video as there are more green-sensitive pixels in common camera designs than red or blue. And blue coloured clothes are harder to avoid than green ones. </p>
<p>All sorts of other colours have been used, including magenta, and even white screens lit with <a href="https://en.wikipedia.org/wiki/Sodium_vapor_process">bright yellow sodium lamps</a> used to superpose Mary Poppins over London. But as digital cameras take over feature film production, it’s increasingly easy being green.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>From Superman to Jurassic Park, green screen technology is what makes the jaw-dropping effects you see in blockbuster movies possible. But how does it work?Lincoln Turner, Researcher, Atomic, Molecular and Optical Physics, Monash UniversityRussell Anderson, Lecturer, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/724412017-02-08T12:43:38Z2017-02-08T12:43:38ZKristen Stewart co-wrote a paper about AI in filmmaking – here’s what an academic expert thought<p>Actor, director, model, teen vampire movie star – now Kristen Stewart can add research author to her list of credentials. Stewart, best known for her portrayal of Bella Swan in the Twilight Saga films, has co-written an article published in Cornell University’s online library <a href="https://arxiv.org">arXiv</a>. It explains the artificial intelligence technique used in her new short film, Come Swim, that enables footage to take on the visual appearance of a painting.</p>
<p>Many popular software programs, <a href="https://helpx.adobe.com/uk/photoshop/how-to/turn-photo-into-painting.html">such as Adobe Photoshop</a> <a href="https://docs.gimp.org/en/filters-artistic.html">and GIMP</a>, already provide filters that can make photographs take on the general style of oil paintings, pen sketches, screen-prints or chalk drawings, for example. The algorithms needed to accomplish this first began to produce aesthetically pleasing results <a href="http://delivery.acm.org/10.1145/290000/280951/p453-hertzmann.pdf?ip=194.82.45.1&id=280951&acc=ACTIVE%20SERVICE&key=BF07A2EE685417C5%2E53532537238F1269%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&CFID=725424965&CFTOKEN=21256213&__acm__=1486467739_165505894faa79a40b3c32e099af5e0d">in the early 1990s</a>.</p>
<p>The techniques that Stewart and her colleagues examine in their article take this idea much further. The new methods they employed enable photographs or video sequences to take on not just a general style but also the look of a specific painting. This could be as abstract or impressionistic as desired, enabling the film-maker to take inspiration from anyone from Pollock to Picasso. The technique is known as style transfer.</p>
<p>Come Swim was reportedly inspired by a painting by Stewart, which seems to have provided the impetus for the project that the article describes. The short film uses the style transfer technique to generate dream-like sequences for the opening and closing scenes. The paper, co-written with producer David Ethan Shapiro (Starlight Studios) and visual effects artist and research engineer Bhautik Joshi (Adobe), describes the experience of creating these sequences.</p>
<p><a href="https://arxiv.org/pdf/1701.04928v1.pdf">The article</a> is relatively brief, at just three pages including references. It doesn’t provide rigorous testing or give a thorough explanation of the algorithms that underpin the software used. Instead, it is more of a case study that gives an anecdotal account of the filmmakers’ experiences and how they fine-tuned the style transfer system. It highlights the progress that has recently been made in using artificial intelligence software to apply style transfer techniques to video sequences, making it possible to use them in professional film-making.</p>
<p>Movie frames, along with the painting to be imitated (known as the style image), are fed into what’s known as a <a href="https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721#.9foyw4qqq">convolutional neural network</a>. This is an artificial intelligence system that is inspired, both in what it does and how it is structured, by the way that neurons are interconnected in the visual cortex of the brain. The neural network was trained to process the movie frames using the example of Stewart’s painting (the style image) to generate the stylised footage.</p>
<p>The software parameters were manipulated to process the images quickly with minimal computing power to produce footage that was “in service of the story”. The parameters experimented with included colour and texture, the source image’s resolution, the number of iterations the algorithm ran for, and the style transfer ratio (the degree to which the style image was imitated).</p>
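<p>At the heart of most style transfer systems is the Gram matrix of a network layer’s feature maps – the channel-to-channel correlations that capture texture while discarding spatial layout. The sketch below illustrates that general technique in plain Python; it is not the filmmakers’ actual code. The “style transfer ratio” above is essentially how heavily a loss like this is weighted against a separate content loss:</p>

```python
def gram_matrix(features):
    """features: a list of C channels, each a flat list of H*W activations.
    Returns the C-by-C matrix of channel-pair inner products."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices."""
    n = len(gram_a) * len(gram_a[0])
    return sum((a - b) ** 2
               for row_a, row_b in zip(gram_a, gram_b)
               for a, b in zip(row_a, row_b)) / n
```

<p>During optimisation, each generated frame is nudged to shrink its style loss against the painting’s Gram matrices, while the content loss keeps it recognisably the original footage.</p>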
<h2>Smoother sequences</h2>
<p>Until recently, style transfer in video sequences has been difficult because each frame would be converted into an individual image that didn’t necessarily look like the others in the sequence. This meant that when processed frames were combined into a video, it produced jarring changes in the image on screen, compromising the aesthetic effect.</p>
<p>These limitations have now been overcome to some degree by software that can smoothly blend between two fixed images, enabling <a href="https://arxiv.org/abs/1610.07629">smooth style-transferred video sequences to be created</a>. These sequences can even be generated <a href="https://research.googleblog.com/2016/10/supercharging-style-transfer.html">in real time</a> and using multiple style images (for example several paintings) to produce videos that are a blended pastiche of numerous visual styles.</p>
<p>These exciting developments will enable video-based art installations, short films and perhaps feature-length films to be manipulated to adopt a particular visual style. This means entire films could be given the impression of an animated painting or another source that the production team wishes to imitate. Stewart and her colleagues have made use of pioneering new production techniques that open the door to an infinite number of imaginative visual styles.</p>
<p class="fine-print"><em><span>Ian van der Linde does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Twilight star used a pioneering technique for turning videos into animations which resemble stylised paintings.Ian van der Linde, Reader, Vision and Eye Research Unit, Anglia Ruskin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/652352016-09-14T04:36:37Z2016-09-14T04:36:37ZHow Game of Thrones’ Emmy-award-winning battle scene was made<figure><img src="https://images.theconversation.com/files/137683/original/image-20160914-4936-1tl8xez.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Seventy real horses mixed with the fake to create the chaos of battle. </span> <span class="attribution"><span class="source">Iloura</span></span></figcaption></figure><p>Melbourne-based visual effects provider Iloura – also known for creating ghosts for the 2016 reboot of Ghostbusters – has won its first Emmy for the Game of Thrones (GoT) season six episode “Battle of the Bastards”. </p>
<p>Iloura’s first time working on GoT was on the biggest, most spectacular episode the series has attempted to date. Its work on Battle of the Bastards is astonishing in both its scope and flawless realisation. The 22 minutes of the eponymous battle are gritty and visceral, giving the viewer a real sense of the chaos of men and horses fighting in the mud. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OKA4fcY3eeo?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The battle is a showdown between Jon Snow (Kit Harington) and Ramsay Bolton (Iwan Rheon) over control of the North. Many of the narrative threads of the last six seasons were brought together in a single high-impact collision, and it required a mammoth effort to pull it off. </p>
<p>Iloura worked for eight months with 120 people to create a battle that combined 70 real horses, hundreds of extras and computer-generated images. Precise planning was required to keep the entire production on the same page, from meticulously plotting action sequences, to keeping track of what kind of light was used on-set. Visual Effects Supervisor Glenn Melenhorst told me,</p>
<blockquote>
<p>Thrones is very tightly controlled from a creative perspective. The sequence was all in the preplanning, necessarily so as there were so many stunts and horses charging about on set.</p>
</blockquote>
<p>Sequences like Battle of the Bastards require just about every trick in the VFX book. Actors and stunt performers are filmed on location or against <a href="https://www.videomaker.com/article/c10/17026-how-does-green-screen-work">greenscreen</a> but must then be integrated into a scene with CG characters, props and other elements.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=677&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=677&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=677&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=850&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=850&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137582/original/image-20160913-4983-flcj19.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=850&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">‘Massive’ armies generated by computer.</span>
<span class="attribution"><span class="source">iloura.com.au</span></span>
</figcaption>
</figure>
<p>Animation staff are often broken into teams, with some working on detailed character animation while others deal with effects animation such as smoke and fire. The various elements are combined into a composite image by specialists called compositors, who adjust each element to ensure the finished image looks like a complete whole rather than a patchwork of bits. </p>
<p>For GoT, Iloura used software called <a href="https://en.wikipedia.org/wiki/MASSIVE_(software)">Massive</a>, pioneered by Peter Jackson’s VFX company. Massive allows for simulations of large crowds, with each individual character moving and interacting with their environment according to a predetermined set of possible actions. </p>
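<p>The core idea behind crowd tools like Massive can be illustrated in miniature: each agent is an independent decision-maker choosing from a predetermined action set based on its local situation. The sketch below is a toy illustration only; Massive's real agents use fuzzy-logic "brains" driving motion-capture clip libraries, and all rules and names here are invented.</p>

```python
import random

# Toy sketch of agent-based crowd simulation in the spirit of Massive.
# Each agent decides for itself every frame, so no two soldiers behave
# identically even though they share the same small rule set.

def choose_action(agent, nearest_enemy_dist):
    """Pick from a predetermined set of actions based on local conditions."""
    if agent["morale"] < 0.2:
        return "flee"
    if nearest_enemy_dist < 1.0:
        # In close combat, vary behaviour so the crowd doesn't look cloned.
        return random.choice(["swing_sword", "raise_shield"])
    return "advance"

# Thousands of independent agents evaluated once per frame.
army = [{"id": i, "morale": random.uniform(0.1, 1.0)} for i in range(5000)]
frame = [choose_action(a, random.uniform(0.5, 10.0)) for a in army]
```

<p>Scaled up, this is what lets a wide shot contain thousands of soldiers without an animator hand-keying each one.</p>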
<p>Using Massive, Iloura’s 120-strong team was able to augment the 500 extras and 70 real horses, creating three separate armies, each comprising thousands of virtual soldiers and horses. Melenhorst said:</p>
<blockquote>
<p>This was really one of those ‘how the hell are we going to do that?’ type of jobs. Everything was hard, particularly as we had to generate totally photo-real humans and horses acting in close-up as well as simulate and animate thousand-strong armies in wide shots. It was all terribly unforgiving work…</p>
</blockquote>
<p>Iloura had to build every element in the battle, with the exception of the castle Winterfell and a raven that had appeared in an earlier episode, both of which had already been created. </p>
<p>Virtual actors didn’t just appear in the sweeping overhead wide shots. The signature shot of the Battle of the Bastards shows Jon Snow, on foot, fighting his way through a chaotic melee of charging horses and men while dodging spears and flights of arrows. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=677&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=677&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=677&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=850&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=850&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137581/original/image-20160913-4936-1yo595j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=850&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Kit Harington battles imaginary enemies.</span>
<span class="attribution"><span class="source">iloura.com.au</span></span>
</figcaption>
</figure>
<p>Actors are far too valuable to risk, so for one shot where Jon Snow leaps from his horse into the fray, Harington’s head was digitally composited onto a CG body. Once on the ground and apparently in the midst of battle, Harington had to dodge and parry imaginary targets, which were later added as CG elements. Still, Melenhorst stresses, </p>
<blockquote>
<p>We used live elements and green screen bits and pieces as often as we could. My mantra has always been to use live action as often as possible. Nothing beats reality.</p>
</blockquote>
<p>To build a scene such as this takes time and incredible attention to detail. For maximum realism, virtual creatures are created from the inside out. First, a detailed skeleton is built and rigged with the same range of motion as a real horse. This is fitted with anatomically correct musculature, carefully crafted to stretch and deform accurately, and then a skin that hugs the musculature is applied.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=677&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=677&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=677&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=850&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=850&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137585/original/image-20160913-4936-1krzxsu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=850&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">CG horse and rider are integrated with live action.</span>
<span class="attribution"><span class="source">iloura.com.au</span></span>
</figcaption>
</figure>
<p>The skin moves and deforms realistically over the muscles as the creature is manipulated by an animator. The skin is textured, hair is applied in the places and at the density it should be, and the skin and hair are given colours and textures that react to virtual light sources just as the real thing would react to light on location.</p>
<p>Clothing, armour, saddles and tack also need to be added, which must not only be built to scale but also appropriately aged and marked to appear well worn, with increasing layers of mud and muck added to all the CG assets as the fight rages on.</p>
<p>Hair and bits of hanging cloth and leather are given physical properties that allow them to swing and react naturally according to the laws of physics – but these elements also need to be manually controllable to allow animators to control the aesthetics of a shot.</p>
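<p>The physics driving that secondary motion is, at its simplest, a damped spring: the simulated point lags behind its attachment and swings naturally, while an artist weight blends the result toward a hand-keyed pose. The sketch below is illustrative only and not any particular studio's rig; all values are made up.</p>

```python
# Toy damped-spring "follow" for secondary motion such as hair or straps.

def step(pos, vel, target, dt=1 / 24, stiffness=40.0, damping=6.0):
    """One simulation step: spring pull toward the attachment, plus drag."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt          # semi-implicit Euler keeps the sim stable
    pos += vel * dt
    return pos, vel

def blend(sim_pos, keyed_pos, artist_weight):
    """artist_weight=0 -> pure physics; 1 -> pure hand animation."""
    return (1 - artist_weight) * sim_pos + artist_weight * keyed_pos

# The attachment jumps to 1.0; the simulated point swings and settles.
pos, vel = 0.0, 0.0
for _ in range(200):
    pos, vel = step(pos, vel, target=1.0)
```

<p>The <code>blend</code> step is the "manually controllable" part: it lets an animator override the simulation wherever the shot's aesthetics demand it.</p>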
<p>A television program, unlike most feature films, usually has an established visual style that must be replicated. Iloura had to ensure their completed shots matched the established look of the show. The effects were so successful that, according to Melenhorst, </p>
<blockquote>
<p>The producers of the show began to have to check the original plates to work out what was CG and what was not.</p>
</blockquote>
<p>Watch a breakdown of the entire battle here:</p>
<figure>
<iframe src="https://player.vimeo.com/video/172374044" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</figure>
<hr>
<p><em>Iloura is currently working on Underworld: Blood Wars, due for release in 2017, and the television series Outcast (2016).</em></p>
<p class="fine-print"><em><span>Peter Allen has previously worked for Iloura as a visual effects artist.</span></em></p>An Australian VFX company has won an Emmy for its work on season six of Game of Thrones. Over eight months a team of 120 pulled out every trick in the book to create the visceral ‘Battle of the Bastards’.Peter Allen, Lecturer in Film and Television, Victorian College of the Arts, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/296552014-07-29T20:32:22Z2014-07-29T20:32:22ZOscars for animals? Andy Serkis should be beating his chest<figure><img src="https://images.theconversation.com/files/55144/original/b299bvzw-1406611575.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Caesar (Andy Serkis) is the leader of the ape nation in a scene from Dawn of the Planet of the Apes.
</span> <span class="attribution"><span class="source">Weta/20th Century Fox</span></span></figcaption></figure><p>The notion that a chimpanzee could win an Academy Award for acting (or anything else) seems farcical at first glance but, of course, it’s not an actual chimpanzee being discussed in the case of the latest role by <a href="http://www.serkis.com/">Andy Serkis</a>.</p>
<p>Rather, it’s an incredibly sophisticated amalgam of the actor and the very latest computational visualisation techniques from <a href="https://www.wetafx.co.nz/">Weta Digital</a>. </p>
<p>Serkis’ performance as Caesar, the leader of the fledgling ape society in the recently released <a href="http://www.imdb.com/title/tt2103281/">Dawn of the Planet of the Apes</a> (2014), once again has Hollywood commentators pondering the possibility of <a href="http://www.theatlantic.com/entertainment/archive/2014/07/the-dawn-of-movies-without-movie-stars/374930/">an Oscar nod for a synthespian</a> – a synthetic thespian or virtual actor – but this is far from the first time this question has been raised. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=595&fit=crop&dpr=1 600w, https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=595&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=595&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=747&fit=crop&dpr=1 754w, https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=747&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/55151/original/vd5zrmvv-1406611956.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=747&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Gollum in The Lord of the Rings, voiced and performed by Andy Serkis.</span>
<span class="attribution"><span class="source">Wikimedia Commons</span></span>
</figcaption>
</figure>
<p>Andy Serkis has been behind some of the most memorable cinematic faces of the last decade, but it’s not quite his face. Rather, Serkis has held pioneering roles utilising performance capture technology. </p>
<p>Performance capture features the real-time recording and digitisation of an actor’s movements, which are then used to drive a complex digital model. </p>
<p>With the digital powerhouse of Weta Digital behind him, Serkis’ performances have driven Gollum from <a href="http://www.imdb.com/title/tt0120737/">The Lord of the Rings</a> (2001, 2002, 2003) (and now <a href="http://www.imdb.com/title/tt0903624/?ref_=nv_sr_3">The Hobbit</a> - 2012, 2013, 2014) films, the titular ape in Peter Jackson’s <a href="http://www.imdb.com/title/tt0360717/?ref_=fn_al_tt_1">King Kong</a> (2005), and the role of Caesar in <a href="http://www.imdb.com/title/tt1318514/?ref_=fn_al_tt_1">Rise of the Planet of the Apes</a> (2011) and the new sequel, Dawn. </p>
<p>For many, the question of where the acting ends and the computer-generated imagery begins undermines the authenticity of a performance captured role <em>as a performance</em>, but no performance exists in a vacuum. Every actor’s appearance is constructed through costume, make-up and lighting, their dialogue taken from a script, the eventual role on screen painstakingly led by a director, and carefully filtered and refined during the editing process. </p>
<p>Performance capture is similar in many ways, but with the additional digital processing to translate the motion and facial expressions of an actor onto an often non-human character. </p>
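<p>At the heart of that translation step is retargeting: captured joint rotations from the actor drive a character rig whose proportions differ, with per-joint calibration adapting the human pose to a non-human skeleton. The sketch below is a deliberately minimal illustration; the function and offset names are invented, not a real pipeline's API.</p>

```python
# Minimal sketch of motion retargeting: captured human joint angles
# (in degrees) are offset per joint to suit a non-human character.

def retarget(actor_angles, joint_offsets):
    """Apply per-joint calibration offsets to captured rotations."""
    return {joint: angle + joint_offsets.get(joint, 0.0)
            for joint, angle in actor_angles.items()}

captured = {"elbow": 45.0, "knee": 10.0}   # one frame of capture data
ape_offsets = {"knee": 25.0}               # apes stand with bent knees
pose = retarget(captured, ape_offsets)
```

<p>Real systems work with full 3D rotations and facial blend shapes rather than single angles, but the principle is the same: the actor's motion is the input, and the character's anatomy shapes the output.</p>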
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_xFLNImXaTI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Dawn of the Planet of the Apes | From Rise to Dawn - Technological Advancements.</span></figcaption>
</figure>
<p>In a <a href="https://www.youtube.com/watch?v=_xFLNImXaTI">brief promotional featurette</a>, Serkis explains how the performance capture technology has developed, with scenes now able to be shot outdoors where once they had to be on a soundstage against a green screen. </p>
<p>Most significant, though, for Serkis is the fidelity with which the performance capture cameras and software can directly map an actor’s face and performance onto the digital character they are playing. </p>
<p>And given that technology has always been part of acting, the authenticity of performance captured roles speaks to the symbiotic relationship between fleshy, embodied actors and the informatic machines that enhance and facilitate those performances. </p>
<p>Early <a href="http://www.zdnet.com/news/virtual-actors-cheaper-better-faster-than-humans/99749">industry fears that synthespians might replace “real” actors</a> reveal an insecurity about the relationship between people and technology. If a character can simply be created by a computer, the millions of dollars spent on A-list stars might seem a little unnecessary. </p>
<p>The reality of performance capture, though, shows the opposite to be true: it takes a huge team to bring a single performance-capture character to the screen. The actor remains integral, filmed in excruciating detail, but software engineers, digital artists and a range of other effects personnel are then needed to preserve the best of the performance and use it to drive a state-of-the-art digital model. </p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=808&fit=crop&dpr=1 600w, https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=808&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=808&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1015&fit=crop&dpr=1 754w, https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1015&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/54954/original/t2nxqqyr-1406442873.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1015&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">For Your Consideration: Best Supporting Actor Campaign for Andy Serkis as Gollum.</span>
</figcaption>
</figure>
<p>Yet the directors and crews who have worked with Serkis since his days as Gollum, along with Serkis himself, have spent over a decade arguing for the legitimacy of performance capture as “real” acting. </p>
<p>After the pivotal role of Serkis in <a href="http://www.imdb.com/title/tt0167261/?ref_=fn_al_tt_1">The Lord of the Rings: The Two Towers</a> (2002), New Line Cinema and director Peter Jackson led the first attempt to get a role driven by performance capture <a href="http://www.telegraph.co.uk/culture/film/3589692/Can-Gollum-get-the-precious-Oscar-nod.html">acknowledged at the Academy Awards</a>. </p>
<p>In his first outing as Caesar, Serkis was widely applauded, with 20th Century Fox mounting <a href="http://articles.latimes.com/2011/nov/05/entertainment/la-et-apes-oscar-20111105">a campaign for a best actor nomination</a>. Co-star <a href="http://www.theguardian.com/film/2012/jan/09/james-franco-andy-serkis-oscar">James Franco was particularly vocal</a> in arguing that Serkis’ performance was integral to the character, worthy of critical attention and praise. </p>
<p>And with the success of Dawn, the director and co-stars are once again <a href="http://www.independent.co.uk/arts-entertainment/films/news/andy-serkis-dawn-of-the-planet-of-the-apes-oscars-chance-in-doubt-says-gary-oldman-9611673.html">lining up to applaud Serkis’ performance</a>.</p>
<p>In terms of literally performing animals, Serkis and the team playing the various apes in the film do a remarkable job in evoking empathy without sacrificing the specificities of chimpanzees and other apes. </p>
<p>It is noteworthy that Rise received <a href="http://www.peta.org/features/rise-planet-apes/">a specific commendation from PETA (People for the Ethical Treatment of Animals)</a> for the way animals were portrayed and filmed. Having a human actor behind the animal performances not only guarantees that no animals will be harmed on set but, at a deeper level, also raises questions about the relationship between humans and animals. </p>
<p>Such questions are at the heart of Dawn, wherein the similarities between apes and humans drive the plot rather than intrinsic differences.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/hlWyAePmAYM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Dawn Of The Planet Of The Apes – Visual Effects – Motion Capture.</span></figcaption>
</figure>
<p>Andy Serkis’ role as Caesar is central to Dawn, and as numerous online features emphasise, this is <em>his</em> acting, and <em>his</em> performance. Whether this is the year that such a digital performance is captured by the Oscars remains to be seen.</p>
<p class="fine-print"><em><span>Tama Leaver receives funding from the Australian Research Council (ARC).</span></em></p>The notion that a chimpanzee could win an Academy Award for acting (or anything else) seems farcical at first glance but, of course, it’s not an actual chimpanzee being discussed in the case of the latest…Tama Leaver, Senior Lecturer in Internet Studies, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/264492014-05-23T02:11:31Z2014-05-23T02:11:31ZLet there be light: behind the trend of illuminating cities for art<figure><img src="https://images.theconversation.com/files/49209/original/pxhhsysp-1400725991.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Bright lights, big city - in the 1950s lighting for productivity and security was overtaken by lighting for spectacle, mood and advertising.</span> <span class="attribution"><span class="source">Lighting the Sails - 59 Productions Vivid LIVE 2014.</span></span></figcaption></figure><p>If you’re in Melbourne or Sydney over the next couple of weeks, you can enjoy the nightly transformation of some familiar urban landmarks. How should we understand this growing global enthusiasm for spectacular urban illumination? </p>
<p>Tonight in Sydney, the Opera House will again become an urban canvas for <a href="http://www.vividsydney.com/events/lighting-of-the-sails/">Lighting The Sails</a>, now an established part of the annual <a href="http://www.vividsydney.com/">Vivid</a> festival (May 23-June 1). Instead of reflecting the harbour’s ambient light, the sails will be shrink-wrapped with a large-scale video projection created by <a href="http://59productions.co.uk/">59 Productions</a>, the team responsible for the video design of the London 2012 Olympic <a href="https://www.youtube.com/watch?v=4As0e4de-rI">Opening Ceremony</a>. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/49195/original/65rz9phg-1400721464.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">White Night Melbourne, February 2014.</span>
<span class="attribution"><span class="source">AAP/ Newzulu/ Dhiren Velu</span></span>
</figcaption>
</figure>
<p>In Melbourne in June, a new interactive light-based installation <a href="http://www.fedsquare.com/events/radiant-lines-by-asif-khan/">Radiant Lines</a> by renowned London-based light artist <a href="http://www.asif-khan.com/">Asif Khan</a> will open in Federation Square as part of its long-running <a href="http://www.fedsquare.com/events/the-light-in-winter/program/">Light in Winter</a> festival. </p>
<p>Radiant Lines is a sculpture comprising 40 rings of raw aluminium suspended in space and illuminated by hundreds of LED lights pulsing in a rhythm that mimics bioluminescence. As visitors approach, they are able to trigger new patterns that immerse them.</p>
<p>Urban illumination projects of this kind are increasingly popular, not only for festivals such as Vivid, Light in Winter, <a href="http://enlightencanberra.com.au/">Enlighten</a> in Canberra and various “<a href="http://en.wikipedia.org/wiki/White_Night_festivals">White Nights</a>” around the world, but for all kinds of transformations of urban space — both temporary and permanent. </p>
<p>New forms of public and commercial lighting, large-scale projection, urban screens and media facades have profoundly transformed the look and ambiance of cities around the world. In Hong Kong, the city’s skyscrapers even perform <a href="http://www.tourism.gov.hk/symphony/english/details/details.html">a nightly choreography of light</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/49203/original/nkzf9w2q-1400723456.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Symphony of Lights, Hong Kong.</span>
<span class="attribution"><span class="source">Francisco Diez</span></span>
</figcaption>
</figure>
<h2>A short history of lighting cities</h2>
<p>While much urban lighting has a functional dimension, there is a long history of lighting cities for spectacle and pleasure. Gas-lit Paris was proclaimed the world’s first “city of light” in the 1820s. From the 1870s, World’s Fairs regularly showcased new developments in electric lighting, paving the way for the twentieth-century “electropolis”, as cities such as Chicago, Berlin and New York came to be defined by the intensity of their illumination. </p>
<p>The “<a href="https://www.youtube.com/watch?v=5eHfWwSIjuI">bright lights, big city</a>” of which Jimmy Reed sang in the 1950s became a dominant image of the modern city, establishing a new rhetoric of urban space in which lighting for productivity and security was overtaken by lighting for spectacle, mood and advertising.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/49198/original/4dvvfwyw-1400722612.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Curtis Perry</span></span>
</figcaption>
</figure>
<p>For a century, urban lighting depended on incandescent bulbs and various fluorescent tubes, such as the neon tube that became synonymous with advertising from the 1920s. </p>
<p>The current explosion of urban illumination projects has been sparked by a range of new technologies, including the maturation of digital projection systems, light-emitting diode (LED) video screens and LED lighting from the mid-90s. </p>
<p>The capacity to program LED lights individually, down to the pixel, means that lighting designers can create complex sequences and rhythms, such as the light narrative designed by Bruce Ramus, former lighting designer for rock luminaries U2, that plays on the LED skin wrapping Melbourne’s AAMI Park stadium. </p>
<p>Bringing computer-aided design together with high-precision large-scale digital projection has also created the distinctive new art form of projection mapping. Projection mapping enables real structures, such as the curvaceous Sydney Opera House, to be transformed into a screen on which images can play without distortion. </p>
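<p>For a flat surface, the mathematics of projection mapping reduces to a planar homography: a 3×3 matrix that warps each source pixel so that, once projected, the image lands undistorted on the target. Curved structures like the Opera House sails need full 3D models, but the planar case captures the idea. The sketch below is a minimal illustration with made-up matrix values.</p>

```python
# Projection mapping in miniature: map a source pixel through a 3x3
# planar homography H using homogeneous coordinates.

def apply_homography(H, x, y):
    """Return the warped position of source pixel (x, y)."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w   # divide out the homogeneous coordinate

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # leaves pixels in place
SCALE_2X = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]   # doubles the image size
```

<p>In practice, the matrix is solved from four or more measured point correspondences between the projector and the surface, which is the calibration step mapping teams perform on site.</p>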
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/o5ZvCv7yUKk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">URBANSCREEN lighting the Sydney Opera House, 2012.</span></figcaption>
</figure>
<p>Precise integration of mutable light with material structures can result in astonishing contemporary retakes on the old <em>son et lumière</em> show, as solid buildings seem to come alive through virtuoso combinations of light and shadow. For Lighting the Sails in 2012, German company <a href="http://www.urbanscreen.com/">URBANSCREEN</a> took the sails metaphor of Utzon’s famous structure literally, using projection mapping to make them appear to undulate and ripple. </p>
<h2>Public space debate</h2>
<p>New lighting technologies inevitably raise new questions about access and control over public space. The complexity and cost of large-scale projection mapping means it requires the deep pockets of large festivals. But it is also becoming an increasingly common promotional tool for high profile commercial events. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/49199/original/z7mcp5tf-1400722851.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Bruce Ramus’ Helix Tree.</span>
<span class="attribution"><span class="source">PatM</span></span>
</figcaption>
</figure>
<p>Lighting the modern city has always had a strong commercial bias, and has been a key factor in tilting the contemporary city towards becoming a “brandscape”, a spectacle that is consumed rather than inhabited in other ways. However, new technologies can carry other possibilities for public communication. </p>
<p>The Helix Tree installation (pictured) that <a href="http://www.ramus.com.au/">Bruce Ramus</a> designed for Light in Winter in 2013 lit up in response to people singing. Light became a medium for congregation and collective public participation.</p>
<p>There are also myriad examples of “unauthorised” illumination projects around the world, variously termed digital graffiti, photon bombing and mobile guerrilla projection. </p>
<p>During the Occupy Wall Street protests in 2011, artists projected “99%” and “Occupy Together” across the front of buildings such as City Hall. And predating Lighting the Sails was the memorable 2006 guerrilla projection of the “We are all boat people” logo on to the Opera House sails. </p>
<p>More than ever before, lighting has become integral to debate over public space.</p>
<p><em><a href="http://www.vividsydney.com/events/the-city-as-a-canvas-transformations-through-light">The City as a Canvas: Transformations Through Light</a>, a public talk at the Museum of Contemporary Art in Sydney, takes place on Saturday May 23 at 3.30pm.</em></p>
<p class="fine-print"><em><span>Scott McQuire was part of an ARC linkage project, Large-screens and the transnational public sphere 2009-2013, in which Fed Square P/L was one of three industry partners. </span></em></p>If you’re in Melbourne or Sydney over the next couple of weeks, you can enjoy the nightly transformation of some familiar urban landmarks. How should we understand this growing global enthusiasm for spectacular…Scott McQuire, Head, Media and Communication Program, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/247222014-04-03T19:46:50Z2014-04-03T19:46:50ZDrawing inspiration from DreamWorks animation<figure><img src="https://images.theconversation.com/files/45519/original/x9thgq2r-1396504651.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Character drawings are encoded with cues to help animators understand how the character should move.
</span> <span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span></figcaption></figure><p>When I first started working as an animator on the <a href="http://www.imdb.com/title/tt0158983/">South Park</a> (1999) feature film in California, I found it remarkable that every Tuesday evening the studio would hold life-drawing classes. It seemed odd that someone who was animating a simple paper cut-out character (albeit a digital simulation of one) would even need to think about drawing, let alone the drawing of realistic human forms. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1278&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1278&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1278&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1606&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1606&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45153/original/ffkpj8sm-1396244914.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1606&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Kung Fu Panda (2008), artists Christophe Lautrette and Bill Kaufmann.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>But to truly understand how figures move (even the most abstracted representations), a solid understanding of drawing is indispensable. The better you understand the form and construction of a character or object, the better you will understand how it moves.</p>
<p>The <a href="http://www.acmi.net.au/dreamworks.aspx">DreamWorks Animation exhibition</a>, which opens at the Australian Centre for the Moving Image (<a href="http://www.acmi.net.au/default.aspx">ACMI</a>) in Melbourne on April 10, features some remarkable examples of pre-production concept art from a number of the studio’s animated films – including the character designs and other concept artwork explored here. </p>
<p>Drawing has always been an important part of animation. Up to 10,000 individual drawings of Walt Disney’s Mickey Mouse and friends (each in a slightly different pose) might be required to produce a seven-minute traditionally animated film. </p>
<p>Many animators will make a habit of sketching the world around them and, when this is not practical, they might find themselves “mentally drawing” their surroundings – “sketching over forms” in their mind and mentally “deconstructing” them. </p>
<p>Today, many animation productions – such as the contemporary 3D animations of Californian film studio <a href="http://www.dreamworksstudios.com/">DreamWorks</a> – no longer directly involve sequential drawings to create the animation effect. Yet drawing continues to play an integral role in the pre-production process – and by extension it hovers over the whole of the production process as well. </p>
<h2>Character designs</h2>
<p>A character design drawing initially defines the form and look of the character. Will it be a lion with an angular body, or a panda with a very round body? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=392&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=392&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=392&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=493&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=493&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45137/original/ytw8bhzv-1396240722.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=493&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Kung Fu Panda (2008), artist Nicolas Marlet.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>But these character design drawings go well beyond this: they also determine, to a large extent, the personality and attitude of the character; they even help to determine how the character will ultimately move. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=890&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=890&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=890&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1119&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1119&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45143/original/yzw5w5n8-1396241360.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1119&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Madagascar (2005), artist Craig Kellman.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>Although the character designs at this developmental stage are merely images, they are vessels of potential movement. They contain all sorts of movement cues and possibilities – such as a character’s mass, weight, construction, proportions and balance. </p>
<p>A large unbalanced character, such as Po from <a href="http://www.imdb.com/title/tt0441773/">Kung Fu Panda</a> (2008), will move quite differently to one that is thinner and more poised, such as Alex the lion in <a href="http://www.imdb.com/title/tt0351283/?ref_=fn_al_tt_1">Madagascar</a> (2005). </p>
<p>The drawings also convey a good idea of a character’s temperament, which helps to predict how it might react and move in any given situation.</p>
<p>These character drawings are encoded with all sorts of information that will help the animator understand how the character should move. It could be argued that if the animators also have a good understanding of drawing, they will be very adept at decoding these movement cues.</p>
<h2>Environmental concept drawings</h2>
<p>Environmental drawings are concept art works that describe the spaces and settings in which the animation will take place. In addition to describing the location and the various components of the scene, they also set the overall tone, colouring, mood and atmosphere. </p>
<p>These drawings communicate to everyone involved in the production what the look and feel of the film will be. In a sense they concretise the original ideas – they bring to a reality what might previously have been merely an idea or some words on a page. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=255&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=255&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=255&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=320&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=320&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45141/original/v4pkg5bj-1396241277.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=320&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How to Train Your Dragon (2010), artist Pierre-Olivier Vincent.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>But drawing can be a distinctly exploratory activity. Initially, the artist may have a strong vision of what the scene should look like, but through the process of sketching, working and reworking an image the idea will invariably change and the final image might look quite different.</p>
<p>Many larger animation studios will have a number of artists producing concept artwork and, while each artist will have a very different interpretation of how a scene might look, collectively they will all contribute something of importance to the final film.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=469&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=469&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=469&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45154/original/c9k7jx42-1396245220.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How to Train Your Dragon (2010), artist Pierre-Olivier Vincent.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>Most of these images are indeed <em>conceptual</em>, so they may not refer directly to any particular scene in the movie. Instead they tend to convey more general themes. The dramatic development image above unmistakably communicates the adventurous and mythical qualities that are inherent in much of the <a href="http://www.imdb.com/title/tt0892769/?ref_=fn_al_tt_1">How To Train Your Dragon</a> (2010) movie. </p>
<p>Depicting characters within the environmental concept drawings is an important part of the development process. Particularly in 3D animations, characters do not merely perform in front of backdrops, but need to be fully integrated within a space. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=260&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=260&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=260&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=326&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=326&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45140/original/w74scwcm-1396241228.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=326&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Croods (2013), artists: Takao Noguchi (character), Dominique Louis (colour).</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>In many of these images it is clear that the character can have an effect on the environment, and that the environment has a very profound effect on the character. The above concept image for <a href="http://www.imdb.com/title/tt0481499/?ref_=fn_al_tt_1">The Croods</a> (2013) shows the character Grug Crood placed in a rather rugged environment where he undoubtedly faces a rather rugged existence.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=292&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=292&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=292&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=367&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=367&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45139/original/r7kcrx2w-1396241187.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=367&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Croods (2013), artist Margaret Wuller.</span>
<span class="attribution"><span class="source">Courtesy DreamWorks Animation SKG</span></span>
</figcaption>
</figure>
<p>By contrast, this second image shows a much gentler environment and indicates a far more pleasant existence for the character Eep Crood.</p>
<p>As the DreamWorks exhibition makes clear, traditional drawings and images still play a very important role in the production of an animated film. </p>
<p>Without conceptual drawings, the animated movie would remain just that: a concept.</p>
<p><br>
<em><a href="http://www.acmi.net.au/dreamworks.aspx">DreamWorks Animation: The Exhibition</a> runs from April 10 to October 5 at the Australian Centre for the Moving Image.</em></p>
<p class="fine-print"><em><span>Dan Torre does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Dan Torre, Lecturer in Animation and Interactive Media, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/202622013-12-18T01:02:56Z2013-12-18T01:02:56ZVisual effects are changing cinema – but can the industry keep up?<figure><img src="https://images.theconversation.com/files/37405/original/hh29w5wj-1386727110.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Oscar contender Gravity, starring Sandra Bullock, relied heavily on visual effects – but the industry itself is struggling.</span> <span class="attribution"><span class="source">AAP Image/ Warner Bros Pictures</span></span></figcaption></figure><p>It’s the season to be making predictions about 2014 Oscar wins, and although Australian audiences are yet to see Best Picture favourites from the likes of Martin Scorsese (<a href="http://www.imdb.com/title/tt0993846/?ref_=nv_sr_1">The Wolf of Wall Street</a>), David O. Russell (<a href="http://www.imdb.com/title/tt1800241/?ref_=nm_flmg_dr_1">American Hustle</a>) and the Coen Brothers (<a href="http://www.imdb.com/title/tt2042568/?ref_=nm_flmg_dr_1">Inside Llewyn Davis</a>), there is one film that has at least one Oscar sewn up. </p>
<p>When Alfonso Cuaron’s <a href="http://www.imdb.com/title/tt1454468/?ref_=fn_al_tt_1">Gravity</a> inevitably, deservedly, takes the Visual Effects Oscar in March 2014, it will cap off a vintage year for an industry that looks to be creatively coming of age.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/0-eP-W0DNmY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Extended trailer for Gravity (2013).</span></figcaption>
</figure>
<p>Between last year’s visual effects-laden favourite, Ang Lee’s <a href="http://www.imdb.com/title/tt0454876/?ref_=fn_al_tt_1">Life of Pi</a>, and the release of Gravity, we have seen films as varied in substance and style as Baz Luhrmann’s <a href="http://www.imdb.com/title/tt1343092/?ref_=nv_sr_1">The Great Gatsby</a> and Jacques Audiard’s <a href="http://www.imdb.com/title/tt2053425/?ref_=nv_sr_1">Rust and Bone</a> tie their fates to visual effects (VFX) work.</p>
<p>Visual effects facilities are increasingly responsible for subtle effects in films lauded by serious-minded critics.</p>
<h2>VFX virtuosity in Gravity</h2>
<p>Nowhere has this faith in VFX paid better dividends than in Cuaron’s long-gestating space opus, which won over audiences and critics with its visual virtuosity.</p>
<p>Restrained and elegant in its use of 3D aesthetics, Gravity is an innovative work of animation: it will surprise many to know that 98% of its shots were digitally created, frame by frame, by a team of more than 400 visual effects artists.</p>
<p>Over three years, this team seamlessly blended elements filmed in live action – consisting mainly of the actors’ faces – with computer-generated images that include space suits and shuttles, 30 million stars in the vast space landscape, fragments of debris hurtling through zero gravity, and even the condensation caused by Sandra Bullock’s frantic breathing as she struggles untethered in space.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=257&fit=crop&dpr=1 600w, https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=257&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=257&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=323&fit=crop&dpr=1 754w, https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=323&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/37406/original/bhjpm7ky-1386727237.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=323&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Visual effects artists created everything from Gravity’s grandiose spacescapes to the smallest specks of debris.</span>
<span class="attribution"><span class="source">AAP Image/ Warner Bros Pictures</span></span>
</figcaption>
</figure>
<p>As visually impressive as this achievement is, the film is also quietly ground-breaking in what it proposes for the future of visual effects in cinema. </p>
<p>Often hurled as an epithet and long synonymous with the kind of explosives-laden, logic-defying fare that fills multiplexes, computer-generated imagery (CGI) took centre stage in Gravity as an essential function of the creative process.</p>
<h2>Post-production before the cameras roll</h2>
<p>Unable to film the actors in a real or staged environment, Cuaron, his cinematographer <a href="http://www.imdb.com/name/nm0523881/?ref_=fn_al_nm_1">Emmanuel Lubezki</a>, and VFX supervisor <a href="http://www.imdb.com/name/nm0916449/?ref_=fn_al_nm_2">Tim Webber</a> inverted the model of visual effects as a post-production activity – instead rendering the film’s environments in a virtual space long before a camera had rolled.</p>
<p>Collaboration of this sort, particularly between Lubezki and Webber, signals a blurring of the boundaries between strictly delineated roles in film production. It breaks down the Fordist approach that has dominated the assembly of films in the Hollywood production system. </p>
<p>The film is notable for another small but significant industry first – when the credits roll, Webber is acknowledged alongside Lubezki as a key architect of Gravity’s visual style.</p>
<p>To the casual observer, this would appear to be an industry enjoying the first flush of serious creative recognition.</p>
<p>But the VFX artists protesting outside the Oscars in February 2013, industry crisis talks, and calls for unionisation tell a different story. </p>
<h2>An industry in crisis?</h2>
<p><a href="http://www.rhythm.com/home/">Rhythm and Hues</a>, the company responsible for the visual effects of Life of Pi, <a href="http://articles.latimes.com/2013/feb/25/entertainment/la-et-ct-visual-effects-protest-20130225">won their Oscar</a> not two weeks after declaring bankruptcy, following in the footsteps of James Cameron’s Oscar-winning <a href="http://digitaldomain.com/">Digital Domain</a> facility, which went bankrupt five months earlier.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/j9Hjrs6WQ8M?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for Life of Pi (2013).</span></figcaption>
</figure>
<p>The roots of the crisis in the visual effects industry can be traced back to the 1980s, when the practice of fixed-price bidding on visual effects work was established in an environment in which expectations of what effects could do were greatly tempered.</p>
<p>Spielberg’s 1989 blockbuster <a href="http://www.imdb.com/title/tt0097576/?ref_=fn_al_tt_1">Indiana Jones and the Last Crusade</a> featured just 80 visual effects shots – in comparison with 500 for <a href="http://www.imdb.com/title/tt0120338/?ref_=nv_sr_1">Titanic</a> eight years later, and more than 3,000 for <a href="http://www.imdb.com/title/tt0499549/?ref_=nv_sr_1">Avatar</a> in 2009.</p>
<h2>A globalised nomadic workforce</h2>
<p>Globalisation and the exponential increase in computing power have also fundamentally changed the ecosystem of what was once a California-centric industry. Competition has cropped up in the developing world, where labour costs are cheaper, and in government-subsidised hubs in countries such as Canada, New Zealand and Australia, putting downward pressure on bidding for tenders.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WGpq-LDV6jU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Steven Spielberg’s 1989 movie Indiana Jones and the Last Crusade was relatively light on visual effects shots.</span></figcaption>
</figure>
<p>The situation isn’t helped by a studio system under pressure to chase bottom-dollar work even as it increasingly relies on VFX work to do everything from creating entire characters and worlds to sky replacement to pimple removal. </p>
<p>For their part, the studios are now so concerned about this instability – which they helped to foster – that they are opting to hedge their bets by spreading the work across many facilities and paying as they go.</p>
<p>The knock-on effect is a globalised, nomadic workforce forced to work longer for less. Even top-tier facilities face an uncertain future. </p>
<p>The VFX industry itself is far from blameless: facilities seem only too willing to oblige the studios, mystifying the VFX process while keeping the real cost of business hidden by absorbing much of it themselves. </p>
<p>As <a href="http://www.visualeffectssociety.com/The-State-of-the-Global-VFX-Industry-2013">a white paper</a> released earlier this year by peak industry body the Visual Effects Society observed: “no other provider to the film industry works in this manner”.</p>
<h2>Where to next?</h2>
<p>If the issues are plain for all to see, the solutions are less so. </p>
<p>While there are renewed calls from visual effects artists to unionise, there is also awareness that the budget blow-outs that would ensue from artists being properly compensated would force many facilities to close. </p>
<p>Similarly, the Visual Effects Society white paper calls on facilities to foster greater business acumen and floats the possibility of companies diversifying into other aspects of media delivery – but concedes that they are largely at the mercy of market forces out of their control.</p>
<p>While the film industry dithers over a solution, the threat to filmmaking is real and perhaps best summed up by <a href="http://www.imdb.com/name/nm0922543/?ref_=fn_al_nm_1">Bill Westenhofer</a>, the VFX supervisor on Life of Pi. </p>
<p>Having suffered the indignities of bankruptcy, the perceived indifference of his own director, and being played offstage at the Oscars to the Jaws theme music, Westenhofer <a href="http://articles.latimes.com/2013/feb/25/entertainment/la-et-ct-visual-effects-protest-20130225">articulated</a> many artists’ frustrations:</p>
<blockquote>
<p>Visual effects is not just a commodity that’s being done by people pushing buttons. We’re artists, and if we don’t find a way to fix the business model, we start to lose the artistry.</p>
</blockquote>
<p>If the reward for creative and technical excellence is both an Oscar and bankruptcy, VFX facilities will surely be careful what they wish for – and this is to the detriment of cinema.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Amelia Olsen-Boyd, Postgraduate research student, University of SydneyBruce Isaacs, Lecturer in Film Studies, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/201842013-11-12T18:02:32Z2013-11-12T18:02:32ZThe truth of an illusion - Tom Cruise in Oblivion<figure><img src="https://images.theconversation.com/files/35048/original/f3x7r8tv-1384298918.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Tom Cruise at the Austrian premiere of Oblivion (2013).</span> <span class="attribution"><span class="source">EPA/HERBERT NEUBAUER</span></span></figcaption></figure>
<p>Take a very ordinary blockbuster of the summer of 2013. Joseph Kosinski, hot from rebooting the <a href="http://www.imdb.com/title/tt1104001/">Tron</a> franchise, directs Tom Cruise as a clone seeking some lost vestige of humanity (so nothing difficult there then) in <a href="http://www.imdb.com/title/tt1483013/">Oblivion</a>. </p>
<p>In the pre-title sequence, Jack (Cruise) sets up the situation, as he moves through a house floating in the clouds, before walking out through its immaculate glass doors towards his waiting air transport. So far so ordinary.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/XmIIgE7eSak?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for Oblivion (2013).</span></figcaption>
</figure>
<p>But looking behind the scenes with the aid of the special effects journal <a href="http://www.cinefex.com/">Cinefex</a> we learn that things are not quite what they seem. Pretty much everyone I’ve spoken to started off believing the set of Oblivion was digital. </p>
<p>It wasn’t. </p>
<p>It was a physical set, surrounded by an immense series of screens on which a battery of high-definition projectors beamed footage of clouds shot from the top of Haleakala in Hawaii. On the other hand, most people also thought that when we cut to the exterior, we were still looking at the actor Tom Cruise, when in fact we were watching a digital double.</p>
<p>It begins to be a bit ripe when you are tricked into believing a physical set is a digital illusion. And the old guard is a bit grumpy about substituting a digital double for a physical actor. It’s even odder when you consider that the physical set and the digital one we see in the exterior shot are both built, in their different physical and digital forms, from the same model designed in architectural software.</p>
<p>So what is so different between digitising a scanned human and physically constructing a digital model?</p>
<p>What is unique about human beings, we like to believe, is that they are unique. Without giving away the plot, this isn’t the case with Cruise’s character. He is in search of an identity, but discovers he is not the only one who has it. He is only one of several iterations of the same source code: a projection of an underlying dataset.</p>
<p>So it is entirely appropriate that the character Jack should appear ambiguously as both human and <a href="http://en.wikipedia.org/wiki/Virtual_actor">synthespian</a>. After all, the more we learn about genetic engineering, the NSA and social programming, the more paranoid we get, or ought to.</p>
<p>Popular entertainment sometimes gives us a deeper insight into what it is to be human than a phalanx of government reports.</p>
Sean Cubitt, Professor of Film and Television, Goldsmiths, University of London. Licensed as Creative Commons – attribution, no derivatives.