Visual perception – The Conversation
Our brains take rhythmic snapshots of the world as we walk – and we never knew (2024-03-07)<figure><img src="https://images.theconversation.com/files/577820/original/file-20240226-16-psaujb.jpg?ixlib=rb-1.1.0&rect=52%2C9%2C3089%2C2123&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/woman-hiking-mountains-adventure-exercising-legs-105847466">Blazej Lyjak/Shutterstock</a></span></figcaption></figure><p>For decades, psychology departments around the world have studied human behaviour in darkened laboratories that restrict natural movement.</p>
<p>Our new study, <a href="https://www.nature.com/articles/s41467-024-45780-4">published today in Nature Communications</a>, challenges the wisdom of this approach. With the help of virtual reality (VR), we have revealed previously hidden aspects of perception that happen during a simple everyday action – walking. </p>
<p>We found the rhythmic movement of walking changes how sensitive we are to the surrounding environment. With every step we take, our perception cycles through “good” and “bad” phases. </p>
<p>This means your smooth, continuous experience of an afternoon stroll is deceptive. Instead, it’s as if your brain takes rhythmic snapshots of the world – and they are synchronised with the rhythm of your footfall.</p>
<h2>The next step in studies of human perception</h2>
<p>In psychology, the study of visual perception refers to how our brains use information from our eyes to create our experience of the world.</p>
<p>Typical psychology experiments that investigate visual perception involve darkened laboratory rooms where participants are asked to sit motionless in front of a computer screen.</p>
<p>Often, their heads will be fixed in position with a chin rest, and they will be asked to respond to any changes they might see on the screen. </p>
<p>This approach has been invaluable in building our knowledge of human perception, and the foundations of how our brains make sense of the world. But these scenarios are a far cry from how we experience the world every day.</p>
<p>This means we might not be able to <a href="https://www.sciencedirect.com/science/article/pii/S1571064523000830">generalise</a> the results we discover in these highly restricted settings to the real world. It would be a bit like trying to understand fish behaviour, but only by studying fish in an aquarium.</p>
<p>Instead, we went out on a limb. Motivated by the fact our brains have evolved to support action, we set out to test vision during walking – one of our most frequent and everyday behaviours.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A row of students in a uni computer lab looking at screens." src="https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579126/original/file-20240301-16-354ica.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Doing tests in a lab isn’t quite the same as seeing and interacting with things in the real world.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/side-view-students-using-computer-lab-122284963">sirtravelalot/Shutterstock</a></span>
</figcaption>
</figure>
<h2>A walk in a (virtual) forest</h2>
<p>Our key innovation was to use a wireless VR environment to test vision continuously while walking. </p>
<p>Several previous studies have examined the effects of light exercise on perception, but used <a href="https://doi.org/10.3389/fnhum.2010.00202">treadmills</a> or <a href="https://doi.org/10.1162/jocn_a_01082">exercise bikes</a>. While these methods are better than sitting still, they <a href="https://doi.org/10.1152/japplphysiol.01380.2006">don’t match the ways</a> we naturally move through the world.</p>
<p>Instead, we simulated an open forest. Our participants were free to roam, yet unknown to them, we were carefully tracking their head movement with every step they took. </p>
<figure>
<iframe src="https://player.vimeo.com/video/917787370" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Participants walked in a virtual forest while trying to detect brief visual ‘flashes’ in the moving white circle.</span></figcaption>
</figure>
<p>We tracked head movement because as you walk, your head bobs up and down. Your head is lowest when both feet are on the ground and highest when swinging your leg in-between steps. We used these changes in head height to mark the phases of each participant’s “step-cycle”.</p>
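As an illustration of how this phase-marking can work (a sketch only, with invented variable names — not the study's actual analysis pipeline), the step-cycle phase can be read off a head-height trace by finding its minima and interpolating linearly between them:

```python
import numpy as np

def step_cycle_phase(head_height):
    """Assign a phase in [0, 1) to each sample of a head-height trace.

    Minima of the trace (head lowest, both feet on the ground) mark the
    start of each step-cycle; phase then rises linearly to the next minimum.
    Illustrative sketch only -- not the published study's pipeline.
    """
    h = np.asarray(head_height, dtype=float)
    # a sample is a local minimum if it is strictly lower than both neighbours
    minima = [i for i in range(1, len(h) - 1) if h[i] < h[i - 1] and h[i] < h[i + 1]]
    phase = np.full(len(h), np.nan)  # samples outside a complete cycle stay NaN
    for a, b in zip(minima[:-1], minima[1:]):
        phase[a:b] = np.arange(b - a) / (b - a)
    return phase

# synthetic head-bob: two steps per second, sampled at 100 Hz
fs = 100
t = np.arange(0, 3, 1 / fs)
height = 1.70 + 0.02 * np.cos(2 * np.pi * 2 * t)
phase = step_cycle_phase(height)
```

Here each cycle runs footfall to footfall, so phase 0 corresponds to both feet on the ground and phase 0.5 to mid-swing.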
<p>Participants also completed our visual task while they walked, which required detecting brief visual “flashes” as quickly as possible. </p>
<p>By aligning performance on our visual task to the phases of the step-cycle, we found visual perception was not consistent.</p>
<p>Instead, it oscillated like the ripples of a pond, cycling through good and bad periods with every step. We found that depending on the phases of their step-cycle, participants were more likely to sense changes in their environment, had faster reaction times, and were more likely to make decisions.</p>
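The alignment described above can be sketched as simple phase-binning: group trials by the step-cycle phase at which each flash appeared, then average performance per bin. The toy data and names below are invented for illustration, not taken from the study:

```python
import numpy as np

def accuracy_by_phase(phases, correct, n_bins=8):
    """Mean detection accuracy within each step-cycle phase bin.

    phases  : stimulus-onset phases in [0, 1)
    correct : boolean array, whether each flash was detected
    """
    phases = np.asarray(phases) % 1.0
    correct = np.asarray(correct, dtype=float)
    bins = np.floor(phases * n_bins).astype(int)
    return np.array([correct[bins == b].mean() for b in range(n_bins)])

# toy data: detection better mid-swing (phase ~0.5) than at footfall (phase ~0)
rng = np.random.default_rng(0)
ph = rng.random(4000)
p_detect = 0.6 - 0.3 * np.cos(2 * np.pi * ph)  # worst at footfall, best mid-swing
hits = rng.random(4000) < p_detect
curve = accuracy_by_phase(ph, hits)
```

An oscillation like the one reported would show up as a rise and fall of `curve` across the eight bins.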
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/everything-we-see-is-a-mash-up-of-the-brains-last-15-seconds-of-visual-information-175577">Everything we see is a mash-up of the brain's last 15 seconds of visual information</a>
</strong>
</em>
</p>
<hr>
<h2>Oscillations in nature, oscillations in vision</h2>
<p>Oscillations in vision have been <a href="https://doi.org/10.1016/j.tics.2016.07.006">shown before</a>, but this is the first time they have been linked to walking.</p>
<p>Our key new finding is that these oscillations slowed down or sped up to match the rhythm of a person’s step-cycle. On average, perception was best when swinging between steps, but the timing of these rhythms varied between participants. This new link between the body and mind offers clues as to how our brains coordinate perception and action during everyday behaviour. </p>
<p>Next, we want to investigate how these rhythms impact different populations. For example, certain psychiatric disorders can lead to people having <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2922365/">abnormalities</a> in their gait.</p>
<p>There are further questions we want to answer: are slips and falls more common for those with stronger oscillations in vision? Do similar oscillations occur for our perception of sound? What is the optimal timing for presenting information and responding to it when a person is moving?</p>
<p>Our findings also hint at broader questions about the nature of perception itself. How does the brain stitch together these rhythms in perception to give us our seamless experience of an evening stroll?</p>
<p>These questions were once the domain of philosophers, but we may be able to answer them, as we combine technology with action to better understand natural behaviour.</p>
<p class="fine-print"><em><span>Matthew Davidson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Psychology researchers have used virtual reality to find our brains oscillate with each step – an intriguing finding to better understand how we see the world.Matthew Davidson, Postdoctoral research fellow, lecturer, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1870512022-08-17T12:38:53Z2022-08-17T12:38:53ZFlies evade your swatting thanks to sophisticated vision and neural shortcuts<figure><img src="https://images.theconversation.com/files/478807/original/file-20220811-20-9x44dv.jpg?ixlib=rb-1.1.0&rect=22%2C29%2C4966%2C3485&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Fly brains can process images very quickly.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/house-fly-royalty-free-image/535501923">www.shutterexperiments.com/Moment via GettyImages</a></span></figcaption></figure><p>Sitting outside on a summer evening always sounds relaxing until flies and mosquitoes arrive – then the swatting begins. Despite their minuscule eyes and a <a href="https://www.fruitflybrain.org/#/">brain</a> roughly <a href="https://www.hhmi.org/news/complete-fly-brain-imaged-at-nanoscale-resolution">1 million times</a> smaller than yours, flies can evade almost every swat. </p>
<p>Flies can thank their fast, sophisticated eyesight and some neural quirks for their ability to escape swats with such speed and agility.</p>
<p><a href="https://scholar.google.com/citations?hl=en&user=4i4wRGgAAAAJ">Our lab</a> <a href="https://scholar.google.com/citations?user=WBxN0p4AAAAJ&hl=en&oi=ao">investigates insect flight and vision</a>, with the goal of finding out how such tiny creatures can process visual information to perform challenging behaviors, such as escaping your swatter so quickly.</p>
<h2>Faster vision</h2>
<p>Flies have compound eyes. Rather than collecting light through a single lens that makes the whole image – the strategy of human eyes – flies form images built from multiple <a href="https://doi.org/10.1007/978-3-642-74082-4_3">facets</a>, lots of individual lenses that focus incoming light onto clusters of photoreceptors, the light-sensing cells in their eyes. Essentially, each facet produces an individual pixel of the fly’s vision. </p>
<p>A fly’s world is fairly low resolution, because small heads can house only a limited number of facets – usually <a href="https://www.jstor.org/stable/24954051">hundreds to thousands</a> – and there is no easy way to sharpen their blurry vision up to the millions of pixels people effectively see. But despite this coarse resolution, flies see and process fast movements very quickly.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="An illustration of a fly eye, showing tiny hexagonal facets and the photoreceptor layer under these facets." src="https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=378&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=378&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=378&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=475&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=475&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478805/original/file-20220811-23-9ejxvf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=475&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Tiny hexagonal ‘facets’ take in light, and the photoreceptors beneath them process it in quick flashes.</span>
<span class="attribution"><a class="source" href="https://www.epfl.ch/labs/lis/research/completed/curvace/">Ecole Polytechnique Fédérale de Lausanne, Switzerland</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>We can infer how animals perceive fast movement from how quickly their photoreceptors can process light. Humans discern a maximum of <a href="https://www.mdpi.com/1648-9144/57/10/1096/htm">about 60 discrete flashes</a> of light per second. Any faster usually appears as steady light. The ability to see discrete flashes depends on the lighting conditions and which part of the retina you use. </p>
<p>Some LED lights, for example, emit discrete flashes of light quickly enough that they appear as steady light to humans – unless you turn your head. In your peripheral vision you may notice a flicker. That’s because your peripheral vision processes light more quickly, but at a lower resolution, like fly vision. </p>
<p>Remarkably, some flies can see as many as <a href="https://doi.org/10.2307/1539540">250 flashes per second</a>, around four times more flashes per second than people can perceive. </p>
<p>If you took one of these flies to the cineplex, the smooth movie you watched made up of 24 frames per second would, to the fly, appear as a series of static images, like a slide show. But this fast vision allows it to react quickly to prey, obstacles, competitors and your attempts at swatting.</p>
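A back-of-envelope calculation with the figures quoted above shows why: at 24 frames per second, each movie frame sits on screen long enough for a fast fly eye to sample it many times.

```python
# Illustrative arithmetic only, using the approximate figures from the text
human_fusion_hz = 60    # humans resolve roughly 60 discrete flashes per second
fly_fusion_hz = 250     # some flies resolve up to about 250 flashes per second
movie_fps = 24

frame_duration = 1 / movie_fps                               # ~42 ms per movie frame
human_samples_per_frame = human_fusion_hz * frame_duration   # 2.5 samples per frame
fly_samples_per_frame = fly_fusion_hz * frame_duration       # ~10.4 samples per frame
```

With roughly ten perceptual samples per frame, successive frames register as distinct still images rather than continuous motion.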
<p><a href="https://faculty.fiu.edu/%7Etheobald/">Our research</a> shows that flies <a href="https://doi.org/10.1016/j.visres.2020.02.007">in dim light lose some ability to see fast movements</a>. This might sound like a good opportunity to swat them, but humans also lose their ability to see quick, sharp features in the dark. So you may be just as handicapped your target. </p>
<p>When they do fly in the dark, flies and mosquitoes <a href="https://doi.org/10.1016/j.cub.2022.01.078">fly erratically</a>, with twisty flight paths to escape swats. They can also rely on <a href="https://doi.org/10.1016/bs.aiip.2016.04.007">nonvisual cues</a>, such as information from small hairs on their body that sense changes in the air currents when you move to strike.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/9wqZ7Jt3thg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Flight of a mosquito. Source: Intellectual Ventures.</span></figcaption>
</figure>
<h2>Neural tricks</h2>
<p>But why do flies see more slowly in the dark? You may have noticed your own vision becoming sluggish and blurry in the dark, and much less colorful. The process is similar for insects. Low light means <a href="https://doi.org/10.1098/rstb.2016.0062">fewer photons</a>, and just like cameras and telescopes, eyes depend on photons to make images. </p>
<p>But unlike a nice camera, which allows you to switch to a larger lens and gather more photons in dark settings, animals can’t swap out the optics of their eyes. Instead, they rely on <a href="https://doi.org/10.1016/S0042-6989(98)00262-4">summation</a>, a neural strategy that adds together the inputs of neighboring pixels, or increases the time they sample photons, to form an image.</p>
<p>Big pixels and longer exposures capture more photons, <a href="https://faculty.fiu.edu/%7Etheobald/visual-pooling/">but at the cost of sharp images</a>. Summation is equivalent to taking photographs with grainy film (higher ISO) or slow shutter speeds, which produce blurrier images, but avoid <a href="https://www.youtube.com/watch?v=eAhEatlueXA">underexposing</a> your subjects. Flies, <a href="https://doi.org/10.1016/j.visres.2018.05.007">especially small ones</a>, can’t see quickly in the dark because, in a sense, they are waiting until enough photons arrive to be sure of what they are seeing.</p>
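Spatial summation amounts to pooling neighbouring "pixels". A minimal NumPy sketch of the idea (an analogy for the trade-off described, not a model of real fly neurons):

```python
import numpy as np

def spatial_summation(image, block=2):
    """Pool each block x block neighbourhood of pixels into one value.

    Summing neighbouring photoreceptor inputs gathers more photons per
    output pixel at the cost of spatial resolution -- the trade-off
    described in the text. Illustrative analogy only.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block          # trim to a multiple of block
    trimmed = image[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "photon count" image
pooled = spatial_summation(img, block=2)
# each pooled pixel now gathers 4x the photons of an input pixel,
# but the image has dropped from 4x4 to 2x2 resolution
```

The same logic applies in time: sampling photons over a longer window brightens the signal while blurring fast motion.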
<h2>Flight maneuverability</h2>
<p>In addition to rapidly perceiving looming threats, flies need to be able to fly away in a split second. This requires preparation for takeoff and <a href="https://doi.org/10.1126/science.1248955">quick flight maneuvers</a>. After visually detecting a looming threat, fruit flies, for example, adjust their posture in <a href="https://doi.org/10.1016/j.cub.2008.07.094">one-fifth of a second</a> before takeoff. Predatory flies, such as <a href="https://doi.org/10.1159/000435944">killer flies</a>, coordinate their legs, wings and halteres – dumbbell-shaped remnants of wings used for sensing in-air rotations – to quickly catch their prey midflight. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tkK63pHFML0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Flight of a fly. Notice how they adjust their posture before takeoff. Source: The New York Times.</span></figcaption>
</figure>
<h2>How best to swat a fly</h2>
<p>To outmaneuver a fly, you must strike faster than it can detect your approaching hand. With practice, you may improve at this, but flies have honed their escapes over hundreds of millions of years. So instead of swatting, you are better off managing flies in other ways, such as installing fly traps and keeping your backyard clean. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OEIk_68miZc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Escape behavior of a fly in slow motion. Source: Florian Muijres et al, 2014 Science.</span></figcaption>
</figure>
<p>You can lure certain flies into a narrow neck bottle filled with apple cider vinegar and beer. Placing a funnel in the bottle neck makes it easy for them to enter, but difficult to escape. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A simple home-made fruit fly trap" src="https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=854&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=854&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=854&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1074&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1074&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478976/original/file-20220812-1219-4t7yzv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1074&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Apple cider vinegar and beer trap to control fruit flies in your kitchen or backyard.</span>
<span class="attribution"><span class="source">Ravindra Palavalli-Nettimi</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>As for mosquitoes, some commercial repellents may work, but removing stagnant water around the house – in some plants, pots or any open containers – will help <a href="https://www.cdc.gov/mosquitoes/mosquito-control/athome/outside-your-home/index.html">eliminate their egg-laying sites</a> and reduce the number of mosquitoes around from the start. Avoid insecticides, as they also <a href="https://environment-review.yale.edu/deadlier-intended-pesticides-might-be-killing-beneficial-insects-beyond-their-targets-0">harm useful insects</a> such as bees and butterflies.</p>
<p class="fine-print"><em><span>Jamie Theobald receives funding from the National Science Foundation (IOS-1750833). </span></em></p><p class="fine-print"><em><span>Ravindra Palavalli-Nettimi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Why is it so difficult to swat a fly? A team of insect experts explains how a fly’s sophisticated vision allows it to quickly react to visual cues.Jamie Theobald, Associate Professor of Biological Sciences, Florida International UniversityRavindra Palavalli-Nettimi, Postdoctoral Research Associate, Florida International UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1821002022-06-30T12:23:16Z2022-06-30T12:23:16ZPeople vary a lot in how well they recognize, match or categorize the things they see – researchers attribute this skill to an ability they call ‘o’<figure><img src="https://images.theconversation.com/files/471229/original/file-20220627-14-d5x5pz.jpg?ixlib=rb-1.1.0&rect=30%2C100%2C5854%2C4365&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some people are inherently better at tasks like reading X-rays.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/pediatrician-shows-concerned-father-foot-x-rays-royalty-free-image/1061001356">SDI Productions/E+ via Getty Images</a></span></figcaption></figure><p>Like snowflakes, no two people are exactly the same. You’re probably used to the idea that people differ substantially in personality and in cognitive abilities – skills like problem-solving or remembering information.</p>
<p>In contrast, there’s <a href="https://doi.org/10.1177/0963721417737151">a widely held intuition</a> that people vary far less in their ability to recognize, match or categorize objects. Many everyday tasks, hobbies and even critical jobs – like interpreting satellite imagery, matching fingerprints or diagnosing medical conditions – rely on these perceptual skills. The common expectation is that smart and motivated people who receive the appropriate training should eventually be able to excel at occupations that require hundreds of perceptual decisions every day.</p>
<p><a href="https://scholar.google.com/citations?user=zMFcCjEAAAAJ&hl=en&oi=ao">We</a> <a href="https://scholar.google.com/citations?user=dxEzLKAAAAAJ&hl=en&oi=ao">are</a> psychologists who measure how people compare on challenging perceptual tasks. Our research has found that this intuition that everyone has the same capacity for perceptual skills is not supported by the evidence. </p>
<p>It’s not a problem if you choose to spend every weekend bird-watching without ever getting very good at it – you may still get some fresh air and have fun. But when perceptual decisions influence safety, health or legal outcomes, there’s a case for seeking people who can achieve the best possible performance. Our research suggests some people are just better than others at learning to discriminate things perceptually, whatever the objects may be.</p>
<h2>A general ability to recognize things</h2>
<p><a href="https://doi.org/10.1037/11491-006">Classic psychological studies</a> at the turn of the 20th century discovered that performance across a range of cognitive tasks designed to test memory, math and verbal skills is correlated. In real life, this means someone who is great at sudoku is also likely to be good at memorizing their shopping list. This finding led to the modern notion of general intelligence, describing a collection of faculties that together predict a wide range of outcomes, from <a href="https://doi.org/10.1037/a0015497">income</a> to <a href="https://doi.org/10.1111/j.0963-7214.2004.01301001.x">health and longevity</a>.</p>
<p>In a similar way, our studies reveal that those who are the <a href="https://doi.org/10.1037/xge0001100">best at bird recognition may also excel at plane identification</a>,
and they may also be the best at learning to spot tumors in <a href="https://doi.org/10.1002/acp.3460">chest X-rays</a>. In other research, the same ability predicted better performance in <a href="https://doi.org/10.3758/s13414-021-02349-3">reading musical notation</a> or <a href="https://www.visionsciences.org/presentation/?id=4101">recognizing images of prepared food</a>.</p>
<p>Of course, people vary in their experience with birds or medical images. The more familiar you are with them, the <a href="https://doi.org/10.1186/s41235-017-0073-4">better you are at recognizing them</a>. Experience and training have an important role in how people make decisions based on visual information. But does everyone start on the same footing when they begin training?</p>
<h2>Does everyone start at square one?</h2>
<p>We were interested in whether everyone starts at about the same baseline of perceptual talent. To investigate this question, we measured people’s abilities with artificial objects they had never seen, to prevent any advantage due to different levels of experience.</p>
<p>In <a href="https://doi.org/10.1037/rev0000129">one large study</a>, we assessed 246 people for 13 hours each, testing them on several tasks with six categories of computer-generated artificial objects. We asked people to remember and recognize objects, to match them, or to make judgments about some of their parts.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="images of abstract objects, a chest X-ray, four versions of a prepared food and four imaginary robots" src="https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=581&fit=crop&dpr=1 600w, https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=581&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=581&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=730&fit=crop&dpr=1 754w, https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=730&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/469066/original/file-20220615-9175-6vr9hj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=730&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Examples of tasks that tap into o, from top left: 1) Are these two objects identical despite the change in viewpoint? 2) Which lung has a tumor? 3) Which of these dishes is the oddball? 4) Which option is the average of the four robots on the right?
Answers: 1) no 2) left 3) third 4) fourth.</span>
<span class="attribution"><span class="source">Isabel Gauthier</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Our results across tasks like these repeatedly reveal that people vary as much in perceptual abilities as they do in cognitive skills. Using <a href="https://doi.org/10.1146/annurev.psych.53.100901.135239">statistical methods</a> historically applied to intelligence and personality tests, we found that over 89% of the differences between people in their performance with these different tasks and categories could be explained by a general ability. We called this ability “o” for object recognition and in honor of the “g” factor, which stands for similar statistical evidence for general intelligence. </p>
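The logic behind a single general factor can be illustrated on toy data (invented here for illustration — the study itself used formal latent-variable models, not this shortcut): when one latent ability drives many task scores, the first principal component of the standardised scores captures most of the shared variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_tasks = 500, 6

# toy model: one latent ability 'o' drives every task, plus task-specific noise
o = rng.normal(size=n_people)
loadings = rng.uniform(0.7, 1.0, n_tasks)   # how strongly each task reflects 'o'
scores = o[:, None] * loadings + 0.4 * rng.normal(size=(n_people, n_tasks))

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardise each task
eigvals = np.linalg.eigvalsh(np.cov(z.T))[::-1]          # largest eigenvalue first
share = eigvals[0] / eigvals.sum()                       # variance a single factor explains
```

A dominant first eigenvalue, with the remaining ones small and similar, is the statistical signature that one underlying ability accounts for most individual differences across tasks.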
<p>In <a href="https://doi.org/10.1037/xge0001100">follow-up studies</a>, we’ve found that o applies in the same way to artificial and real objects, and that people with high o are better at computing summary statistics for groups of objects (such as estimating the “average” of several objects) and also better at <a href="https://doi.org/10.1007/s00426-021-01560-z">recognizing objects by touch</a>. You can compare yourself to others in <a href="https://jasonkchow.github.io/ov_demo/">this short demo</a>.</p>
<h2>o is a distinct ability</h2>
<p>Since it is so general, is o just another name for general intelligence? We don’t think so.</p>
<p>In one study, we found that <a href="https://doi.org/10.1016/j.cognition.2017.05.019">neither IQ nor SAT scores predict recognition</a> of novel objects. <a href="https://doi.org/10.1037/rev0000129">In other work</a>, we found that o was distinct from g, but also from the personality trait of conscientiousness. This means that book smarts may not be enough to excel in domains that rely heavily on perceptual abilities.</p>
<p>We tested this idea by measuring how good people with or without expertise in radiology were at detecting lung nodules in chest X-rays. Those with the highest o were better at this task, even after controlling for intelligence and experience in radiology. This finding demonstrates the added value of measuring o. Even when medical students are selected to be smart and provided with training, it may not guarantee the highest levels of performance in specializations that rely on perceptual skills.</p>
<p>Many doors open when you demonstrate that you’re cognitively talented, which seems only fair. But it is fair only to the extent that general intelligence is the best way – or even a sufficient way – to predict success in a given domain. Many have raised warnings that intelligence testing can lead to inequities in hiring or career placement tied to race, gender or socioeconomic status.</p>
<p>Over the years, many thinkers have downplayed innate talents to emphasize environmental influences. They argued that success can be shaped through years of <a href="https://doi.org/10.1111/j.1553-2712.2008.00227.x">deliberate practice</a>, programs to change one’s <a href="https://doi.org/10.1080/00461520.2012.722805">attitudes about learning</a>, or even <a href="https://doi.org/10.1038/nrn3135">hours of playing video games</a>. </p>
<p>But the evidence in favor of the influence of innate talents remains strong, and denying them or overpromising on the efficacy of environmental factors <a href="https://doi.org/10.1177/0963721418797300">may sometimes be harmful</a>. People can waste time and resources that could be better invested, and may run the risk of experiencing stigma if their efforts do not succeed because of factors they cannot control.</p>
<p>One answer to this problem is to learn more about talents beyond those related to intelligence and then to make better use of them. Classical notions of intelligence may be just one factor of many that determine overall ability. An increased focus on perceptual abilities, specifically those that are general, could help reduce inequities. For instance, while differences in experience can drive <a href="https://doi.org/10.1016/j.visres.2016.10.003">sex differences in the recognition of objects in some familiar categories</a>, we’ve found <a href="https://doi.org/10.1037/xge0001100">no such differences in the general ability o</a>.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>To achieve perceptual expertise, you may need more than smarts and hard work. Research suggests there’s a general ability that may help you succeed in jobs that depend on perceptual decisions.Isabel Gauthier, David K. Wilson Professor of Psychology, Vanderbilt UniversityJason Chow, Ph.D. Student in Psychological Sciences, Vanderbilt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1516502021-02-05T13:06:37Z2021-02-05T13:06:37ZDo you see red like I see red?<figure><img src="https://images.theconversation.com/files/382552/original/file-20210204-14-a5xafl.jpg?ixlib=rb-1.1.0&rect=752%2C783%2C6122%2C3968&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's disconcerting to think the way two people perceive the world might be totally different.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/man-and-woman-standing-in-a-gallery-space-with-royalty-free-image/839180292">Mads Perch/Stone via Getty Images</a></span></figcaption></figure><p>Is the red I see the same as the red you see?</p>
<p>At first, the question seems confusing. Color is an inherent part of visual experience, as fundamental as gravity. So how could anyone see color differently than you do?</p>
<p>To dispense with the seemingly silly question, you can point to different objects and ask, “What color is that?” The initial consensus apparently settles the issue.</p>
<p>But then you might uncover troubling variability. A rug that some people call green, others call blue. A <a href="https://en.wikipedia.org/wiki/The_dress">photo of a dress</a> that <a href="https://doi.org/10.1016/j.cub.2015.04.053">some people call blue and black, others say is white and gold</a>.</p>
<p>You’re confronted with an unsettling possibility. Even if we agree on the label, maybe your experience of red is different from mine and – shudder – could it correspond to my experience of green? How would we know?</p>
<p>Neuroscientists, <a href="https://scholar.google.com/citations?user=LNgp00MAAAAJ">including</a> <a href="https://scholar.google.com/citations?user=6I_zDKUAAAAJ">us</a>, have tackled <a href="https://mitpress.mit.edu/books/color-ontology-and-color-science">this age-old puzzle</a> and are starting to come up with some answers to these questions. One thing that is becoming clear is the reason individual differences in color are so disconcerting in the first place. </p>
<h2>Colors add meaning to what you see</h2>
<p>Scientists often explain why people have color vision in cold, analytic terms: Color is <a href="https://doi.org/10.1146/annurev-vision-091517-034231">for object recognition</a>. And this is certainly true, but it’s not the whole story.</p>
<p>The <a href="https://doi.org/10.1167/18.11.1">color statistics of objects</a> are not arbitrary. The parts of scenes that people choose to label (“ball,” “apple,” “tiger”) are not any random color: They are more likely to be warm colors (oranges, yellows, reds), and less likely to be cool colors (blues, greens). This is true even for artificial objects that could have been made any color.</p>
<p>These observations suggest that your brain can use color to help recognize objects, and might explain <a href="https://theconversation.com/languages-dont-all-have-the-same-number-of-terms-for-colors-scientists-have-a-new-theory-why-84117">universal color naming patterns across languages</a>. </p>
<p>But recognizing objects is not the only, or maybe even the main, job of color vision. In <a href="https://doi.org/10.1038/s41467-019-10073-8">a recent study</a>, neuroscientists Maryam Hasantash and Rosa Lafer-Sousa showed participants real-world stimuli illuminated by low-pressure-sodium lights – the energy-efficient yellow lighting you’ve likely encountered in a parking garage.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="people and fruit lit by yellow low sodium lights" src="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=911&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=911&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=911&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1145&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1145&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1145&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The eye can’t properly encode color for scenes lit by monochromatic light.</span>
<span class="attribution"><span class="source">Rosa Lafer-Sousa</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The yellow light prevents the eye’s retina from properly encoding color. The researchers reasoned that if they temporarily knocked out this ability in their volunteers, the impairment might point to the normal function of color information. </p>
<p>The volunteers could still recognize objects like strawberries and oranges bathed in the eerie yellow light, implying that color isn’t critical for recognizing objects. But the fruit looked unappetizing. </p>
<p>Volunteers could also recognize faces – but they looked green and sick. Researchers think that’s because your expectations about normal face coloring are violated. The green appearance is a kind of error signal telling you that something’s wrong. This phenomenon is an example of how <a href="https://doi.org/10.1111/j.1933-1592.2010.00481.x">your knowledge can affect your perception</a>. Sometimes what you know, or think you know, influences what you see. </p>
<p>This research builds the case that color isn’t so much for telling you what stuff is as for signaling what it likely means. Color doesn’t tell you the kind of fruit, but rather whether a piece of fruit is probably tasty. And for faces, color is literally a vital sign that helps us identify emotions like anger and embarrassment, <a href="https://www.sciencedirect.com/science/article/pii/S0889159116304986">as well as sickness</a>, as any parent knows. </p>
<p>It might be color’s importance for telling us about meaning, especially in social interactions, that makes variability in color experiences between people so disconcerting. </p>
<h2>Looking for objective, measurable colors</h2>
<p>Another reason variability in color experience is troubling has to do with the fact that we can’t easily measure colors.</p>
<p>Having an objective metric of experience gets us over the quandary of subjectivity. With shape, for instance, we can measure dimensions using a ruler. Disagreements about apparent size can be settled dispassionately.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="spectral power distribution of various wavelengths of light" src="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The spectral power distribution of a 25-watt incandescent lightbulb illustrates the wavelengths of light it emits.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Spectral_power_distribution_of_a_25_W_incandescent_light_bulb.png">Thorseth/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>With color, we can measure proportions of different wavelengths across the rainbow. But these “spectral power distributions” do not by themselves tell us the color, even though they are <a href="https://doi.org/10.1017/S0140525X03000013">the physical basis for color</a>. A given distribution can appear different colors depending on context and assumptions about materials and lighting, as <a href="https://doi.org/10.1167/17.12.25">#thedress proved</a>.</p>
<p>Perhaps color is a <a href="https://aardvark.ucsd.edu/color/hatfield.html">“psychobiological” property</a> that emerges from the brain’s response to light. If so, could an objective basis for color be found not in the physics of the world but rather in the human brain’s response? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="cross section of retina with different cell types" src="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=389&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=389&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=389&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=488&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=488&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=488&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Cone cells in the eye’s retina encode messages about color vision.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/eye-anatomy-rod-cells-and-cone-cells-royalty-free-illustration/1091261988">ttsz/iStock via Getty Images Plus</a></span>
</figcaption>
</figure>
<p>To compute color, your brain engages <a href="https://doi.org/10.1146/annurev-vision-091517-034202">an extensive network of circuits</a> in the cerebral cortex that <a href="https://doi.org/10.1146/annurev-vision-121219-081801">interpret the retinal signals</a>, taking into account <a href="https://journals.sagepub.com/doi/full/10.1177/1073858419882621">context and your expectations</a>. Can we measure the color of a stimulus by monitoring brain activity?</p>
<h2>Your brain response to red is similar to mine</h2>
<p>Our group used magnetoencephalography – MEG for short – to monitor the tiny magnetic fields created when nerve cells in the brain fire to communicate. We were able to classify the response to various colors using machine learning and then <a href="https://doi.org/10.1016/j.cub.2020.10.062">decode from brain activity the colors</a> that participants saw.</p>
<p>So, yes, we can determine color by measuring what happens in the brain. Our results show that each color is associated with a distinct pattern of brain activity.</p>
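As a rough illustration of this kind of decoding (not the authors’ actual analysis pipeline, and using synthetic data in place of real MEG recordings), a standard classifier can recover which color was shown on each trial from noisy sensor patterns, provided each color evokes its own characteristic pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_colors = 80, 50, 4

# Synthetic MEG-like data: each color evokes its own mean sensor pattern,
# and every trial is that pattern plus sensor noise.
templates = rng.normal(size=(n_colors, n_sensors))
labels = rng.integers(0, n_colors, size=n_trials)
data = templates[labels] + rng.normal(scale=1.0, size=(n_trials, n_sensors))

# Decode the color of each trial from the pattern of sensor activity.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, data, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_colors:.2f})")
```

Cross-validated accuracy well above chance is what licenses the claim that the brain response carries color information; the specific classifier and numbers here are illustrative only.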
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person seated in MEG machine looking at screen with color projection" src="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=378&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=378&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=378&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=475&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=475&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=475&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Researchers measured volunteers’ brain responses with magnetoencephalography (MEG) to decode what colors they saw.</span>
<span class="attribution"><span class="source">Bevil Conway</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>But are the patterns of brain response similar across people? This is a hard question to answer: it requires a way of perfectly matching the anatomy of one brain to another, which remains technically out of reach. For now, we can sidestep that challenge by asking a related question. Does my relationship between red and orange resemble your relationship between red and orange? </p>
<p>The MEG experiment showed that two colors that are perceptually more similar, as assessed by how people label the colors, give rise to more similar patterns of brain activity. So your brain’s response to color will be fairly similar when you look at something light green and something dark green but quite different when looking at something yellow versus something brown. What’s more, these similarity relationships are preserved across people. </p>
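The “similar colors, similar patterns” logic can be sketched with toy data: if we build activity patterns that share more structure for nearby hues, their correlation tracks perceptual similarity (all names and numbers below are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 100

# Toy patterns: each hue's pattern is a mix of a shared template and noise,
# so neighbouring hues overlap more than distant ones - mimicking
# "similar colors give rise to more similar patterns of brain activity".
base = rng.normal(size=n_sensors)
hues = np.linspace(0, 1, 5)
patterns = [base * (1 - h) + rng.normal(size=n_sensors) * h for h in hues]

def pattern_similarity(a, b):
    """Pearson correlation between two activity patterns."""
    return float(np.corrcoef(a, b)[0, 1])

# Nearby hues yield higher pattern correlation than distant ones.
print(pattern_similarity(patterns[0], patterns[1]))
print(pattern_similarity(patterns[0], patterns[4]))
```

Comparing these similarity structures across people, rather than raw patterns, is what lets researchers sidestep the brain-matching problem described above.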
<p>Physiological measurements are unlikely to ever resolve metaphysical questions such as “what is redness?” But the MEG results nonetheless provide some reassurance that color is a fact we can agree on.</p>
<p class="fine-print"><em><span>Bevil R. Conway receives funding from the Intramural Research Program (IRP) of the National Eye Institute (NEI). </span></em></p><p class="fine-print"><em><span>Danny Garside receives funding from the Intramural Research Program (IRP) of the National Eye Institute (NEI). </span></em></p>Neuroscientists tackling the age-old question of whether perceptions of color hold from one person to the next are coming up with some interesting answers.Bevil R. Conway, Senior Investigator at the National Eye Institute, Section on Perception, Cognition, and Action, National Institutes of HealthDanny Garside, Visiting Fellow in Sensation, Cognition & Action, National Institutes of HealthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1314382020-07-02T12:26:07Z2020-07-02T12:26:07ZDo dogs really see in just black and white?<figure><img src="https://images.theconversation.com/files/343579/original/file-20200623-188900-3set3z.jpg?ixlib=rb-1.1.0&rect=286%2C557%2C4464%2C3080&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Don't worry that your dog's world is visually drab.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/high-angle-view-of-dog-walking-on-colorful-striped-royalty-free-image/677142241">Kevin Short/EyeEm via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>Do dogs really see in just black and white? – Oscar V., age 9, Somerville, Massachusetts</strong></p>
</blockquote>
<hr>
<p>Dogs definitely see the world differently than people do, but it’s a myth that their view is <a href="https://www.hillspet.com/dog-care/resources/dog-myths">just black, white and grim shades of gray</a>. </p>
<p>While most people see a full spectrum of colors from red to violet, dogs lack some of the light receptors in their eyes that allow human beings to see certain colors, particularly in the red and green range. But canines can still see yellow and blue.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=353&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=353&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=353&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=444&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=444&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=444&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Different wavelengths of light register as different colors in an animal’s visual system. Top is the human view; bottom is a dog’s eye view.</span>
<span class="attribution"><a class="source" href="https://dog-vision.andraspeter.com/tool.php">Top: iStock/Getty Images Plus via Getty Images. Bottom: As processed by András Péter's Dog Vision Image Processing Tool</a></span>
</figcaption>
</figure>
<p>What you see as red or orange may, to a dog, be just another shade of tan. To my dog, Sparky, a bright orange ball lying in the green grass may look like a tan ball in another shade of tan grass. But his bright blue ball will look similar to both of us. <a href="https://dog-vision.andraspeter.com/tool.php">An online image processing tool</a> lets you see for yourself what a particular picture looks like to your pet.</p>
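For a very crude sense of how red-green confusion might flatten an image, one can average each pixel’s red and green channels while keeping blue. This is only a cartoon of dichromacy, not the calibrated simulation the linked tool performs, and the function name is ours:

```python
import numpy as np

def rough_dog_view(rgb):
    """Crude dichromat cartoon: collapse the red-green axis by averaging
    the R and G channels, leaving blue untouched. Illustrative only."""
    rgb = np.asarray(rgb, dtype=float)
    rg = rgb[..., :2].mean(axis=-1)          # merge red and green
    out = np.stack([rg, rg, rgb[..., 2]], axis=-1)
    return out.astype(np.uint8)

orange = np.array([[[255, 128, 0]]], dtype=np.uint8)   # bright orange pixel
green = np.array([[[100, 200, 50]]], dtype=np.uint8)   # grass-green pixel
print(rough_dog_view(orange), rough_dog_view(green))   # both drift toward tan
```

Run on the orange-ball-in-green-grass photo described above, both regions end up in the same yellowish-tan band, which is the intuition the article is conveying.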
<p>Animals can’t use spoken language to describe what they see, but researchers have trained dogs to touch a lit-up color disc with their nose to get a treat. They then trained the dogs to pick out the one disc that was a different color from the others. When the well-trained dogs couldn’t figure out which disc to press, the scientists knew the dogs couldn’t see the difference in color. These experiments showed that <a href="https://doi.org/10.1017/s0952523800004430">dogs could see only yellow and blue</a>.</p>
<p>In the back of our eyeballs, human beings’ retinas contain three types of special cone-shaped cells that are responsible for all the colors we can see. When scientists used a technique called electroretinography to measure the way dogs’ eyes react to light, they found that <a href="https://doi.org/10.1017/S0952523800003291">canines have fewer kinds of these cone cells</a>. Compared to people’s three kinds, dogs only have two types of cone receptors.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Light travels to the back of the eyeball, where it registers with rod and cone cells that send visual signals on to the brain.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/eye-anatomy-rod-cells-and-cone-cells-royalty-free-illustration/1091261988">iStock/Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<p>Not only can dogs see fewer colors than we do, they probably don’t see as clearly as we do either. Tests show that both the structure and function of the dog eye lead them to <a href="https://ucdavis.pure.elsevier.com/en/publications/vision-in-dogs">see things at a distance as more blurry</a>. While we think of perfect vision in humans as being 20/20, typical vision in dogs is probably closer to 20/75. This means that what a person with normal vision could see from 75 feet away, a dog would need to be just 20 feet away to see as clearly. Since dogs don’t read the newspaper, their visual acuity probably doesn’t interfere with their way of life.</p>
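The 20/75 figure is just a ratio of viewing distances, so the conversion above can be sketched in a couple of lines (the helper name `equivalent_distance` is ours, not a term from vision science):

```python
def equivalent_distance(human_distance_ft, dog_acuity=(20, 75)):
    """Distance at which a dog with roughly 20/75 acuity sees a target
    as clearly as a person with 20/20 vision sees it from
    human_distance_ft away."""
    numerator, denominator = dog_acuity
    return human_distance_ft * numerator / denominator

# What a person sees from 75 ft, a dog must view from about 20 ft.
print(equivalent_distance(75))
```

The same ratio applies at any distance: halve the human viewing distance and the dog’s equivalent distance halves too.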
<p>There’s likely a lot of difference in visual ability between breeds. Over the years, breeders have selected sight-hunting dogs like greyhounds to have better vision than dogs like bulldogs.</p>
<p>But that’s not the end of the story. While people have a tough time seeing clearly in dim light, scientists believe dogs can probably see as well at dusk or dawn as they can in the bright middle of the day. This is because, compared with human retinas, dog retinas have a <a href="https://ucdavis.pure.elsevier.com/en/publications/vision-in-dogs">higher proportion of another kind of visual receptor</a>. Called rod cells because of their shape, they function better in low light than cone cells do.</p>
<p>Dogs also have a reflective tissue layer at the back of their eyes that <a href="https://doi.org/10.3758/s13423-017-1404-7">helps them see in less light</a>. This mirror-like tapetum lucidum collects and concentrates the available light to help them see when it’s dark. The tapetum lucidum is what gives dogs and other mammals that glowing eye reflection when caught in your headlights at night or when you try to take a flash photo.</p>
<p>Dogs share their type of vision with many other animals, <a href="https://www.hillspet.com/cat-care/behavior-appearance/cat-vision">including cats</a> <a href="https://doi.org/10.1017/S0952523800003291">and foxes</a>. Scientists think it’s important for these hunters to be able to detect the motion of their nocturnal prey, and that’s why their vision <a href="https://www.theguardian.com/science/2016/aug/03/did-t-rex-make-your-dog-colour-blind">evolved in this way</a>. As many mammals developed the ability to forage and hunt in twilight or dark conditions, they <a href="https://doi.org/10.1016/j.devcel.2016.05.023">gave up the ability to see the variety of colors</a> that most birds, reptiles and primates have. People didn’t evolve to be active all night, so we kept the color vision and better visual acuity. </p>
<p>Before you feel sorry that dogs aren’t able to see all the colors of the rainbow, keep in mind that some of their other senses are much more developed than yours. They can <a href="https://www.akc.org/expert-advice/lifestyle/sounds-only-dogs-can-hear/">hear higher-pitched sounds from farther away</a>, and their <a href="https://www.pbs.org/wgbh/nova/article/dogs-sense-of-smell/">noses are much more powerful</a>.</p>
<p>Even though Sparky might not be able to easily see that orange toy in the grass, he can certainly smell it and find it easily when he wants to. </p>
<hr>
<p><em>Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to <a href="mailto:curiouskidsus@theconversation.com">CuriousKidsUS@theconversation.com</a>. Please tell us your name, age and the city where you live.</em></p>
<p><em>And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.</em></p>
<p class="fine-print"><em><span>Nancy Dreschel does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Your faithful friend’s view of the world is different than yours, but maybe not in the way you imagine.Nancy Dreschel, Associate Teaching Professor of Small Animal Science, Penn StateLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/928302018-06-20T20:08:07Z2018-06-20T20:08:07ZHow eye disorders may have influenced the work of famous painters<figure><img src="https://images.theconversation.com/files/222494/original/file-20180610-191981-1v2dtiv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's been argued the Impressionists were short sighted.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Camille_Pissarro,_The_Boulevard_Montmartre_at_Night,_1897.jpg">The Boulevard Montmartre at Night, Camille Pissarro/Wikimedia Commons</a></span></figcaption></figure><p><a href="https://theconversation.com/eran-los-impresionistas-miopes-99555"><em>Leer en español</em></a>.</p>
<p>Vision is an essential tool for creating a painted artwork. The artist uses it to survey a scene, guide movements over the canvas and judge the colour and form of the work in progress. However, disease and disorders can alter an artist’s visual perception.</p>
<p>There is a <a href="http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=article&did=HISTSCITECH.NATURE18720321.I0007&id=HistSciTech.Nature18720321&isize=M">long history</a> of <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1869328/">scientists and clinicians</a> arguing <a href="https://www.ncbi.nlm.nih.gov/pubmed/8510952">particular artists</a> were affected by <a href="https://www.ncbi.nlm.nih.gov/pubmed/26563659">vision disorders</a>, based on signs in their works. Some argued the <a href="https://www.ncbi.nlm.nih.gov/pubmed/8510952">leaders of the Impressionist movement were short-sighted</a>, for instance, and that their blurry distance vision when not using spectacles may explain their broad, impetuous style.</p>
<p>Supporting evidence of such disorders and their influence on artworks is often speculative, and hampered by a lack of clinical records to support the diagnosis. A particular challenge to verifying these speculations is that artists are, of course, free to represent the world in whatever fashion they like. </p>
<p>So, is a particular style the result of impoverished vision, or rather a conscious artistic choice? Here are three artists who, it has been claimed, suffered vision impairments.</p>
<h2>El Greco</h2>
<p>Architect, painter and sculptor of the Spanish Renaissance, El Greco (1541-1614) is known for vertically elongating certain figures in his paintings. In 1913, ophthalmologist <a href="https://pdfs.semanticscholar.org/5059/8e2c07220d1bb76b52f02508ee7f09ce0077.pdf">Germán Beritens argued</a> this elongation was due to astigmatism. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=750&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=750&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=750&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=943&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=943&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222488/original/file-20180610-191965-18866is.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=943&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Evidence suggests El Greco’s elongated shapes were a conscious artistic choice, rather than the result of an eye disorder.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:El_Greco_031.jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Astigmatism typically results when the cornea – the front surface of the eye and the principal light-focusing element – is not spherical, but shaped more like a watermelon. </p>
<p>This means light bends by different amounts depending on the direction in which it passes through the eye. Lines and contours of a particular orientation will therefore be less in focus than others.</p>
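<p>This direction-dependent blur can be sketched numerically. The following is a minimal, illustrative simulation in Python, not an optical model of a real eye: the sigma values and the simple separable Gaussian blur are assumptions chosen purely for demonstration.</p>

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1-D Gaussian kernel."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_axis(image, sigma, axis):
    """Blur a 2-D image along a single axis (zero-padded edges)."""
    kernel = gaussian_kernel(sigma)
    return np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode="same"), axis, image)

def astigmatic_blur(image, sigma_vertical=3.0, sigma_horizontal=0.5):
    """Blur far more strongly along one meridian than the other,
    mimicking how uncorrected astigmatism defocuses contours of one
    orientation more than the perpendicular ones."""
    return blur_axis(blur_axis(image, sigma_vertical, 0), sigma_horizontal, 1)

# Test pattern: one horizontal and one vertical line.
img = np.zeros((64, 64))
img[32, :] = 1.0  # horizontal line
img[:, 32] = 1.0  # vertical line

out = astigmatic_blur(img)
# The horizontal line is smeared by the large vertical sigma and loses
# far more contrast than the vertical line does.
```

<p>Run on a cross-shaped test pattern, the vertical line stays crisp while the horizontal line fades – contours of one orientation remain in focus while the perpendicular ones do not.</p>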
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=829&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=829&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=829&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1041&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1041&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222487/original/file-20180610-191947-1815wvs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1041&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Astigmatism occurs when the cornea is shaped like a watermelon, instead of being spherical.</span>
<span class="attribution"><span class="source">from shutterstock.com</span></span>
</figcaption>
</figure>
<p>Beritens would demonstrate his astigmatism theory to house guests using a special lens that produced El Greco-like vertical elongations.</p>
<p>But there are several problems with Beritens’ theory. A common objection is that any vertical stretching should have affected El Greco’s view of both the subject being painted <em>and</em> the canvas being painted on. This would mean the astigmatism effects <a href="https://www.ncbi.nlm.nih.gov/pubmed/24577418">should largely cancel out</a>. Possibly <a href="https://www.ncbi.nlm.nih.gov/pubmed/24577418">more problematic</a> is that uncorrected astigmatism mainly causes blurry vision, rather than a change in image size.</p>
<p>Plus, <a href="https://www.ncbi.nlm.nih.gov/pubmed/26563659">other evidence suggests</a> El Greco’s use of vertical elongation was a deliberate artistic choice. For example, in his 1610 painting, St Jerome as Scholar (above), the horizontally oriented hand of the saint is also elongated, just like the figure. If El Greco’s elongated figures were due to a simple vertical stretching in his visual perception, we would expect the hand to look comparatively stubby.</p>
<h2>Claude Monet</h2>
<p>Elsewhere, the influence of eye anomalies in artworks is more compelling. Cataracts are a progressive cloudiness of the lens inside the eye, producing blurred and dulled vision that can’t be corrected with spectacles. </p>
<p>Cataracts are often brown, filtering the light that passes through them and impairing colour discrimination. In severe cases, blue light is almost completely blocked.</p>
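<p>The resulting colour shift can be illustrated with a toy filter. In this Python sketch, the per-channel transmission values and the severity parameter are invented for illustration – they are not clinical measurements.</p>

```python
import numpy as np

def brown_cataract_filter(rgb, severity=0.8):
    """Toy model of a brown cataract as a colour filter: blue light is
    attenuated most, green somewhat, red least. The transmission
    values and the 0-1 `severity` parameter are illustrative
    assumptions, not clinical data."""
    transmission = np.array([0.9, 0.6, 0.2])  # R, G, B at full severity
    factors = 1.0 - severity * (1.0 - transmission)
    return np.clip(np.asarray(rgb, dtype=float) * factors, 0.0, 1.0)

sky_blue = np.array([0.3, 0.5, 0.9])
seen = brown_cataract_filter(sky_blue)
# Blue is suppressed far more than red, so colours drift towards
# murky browns rather than simply dimming.
```

<p>Even this crude filter shows why blues vanish first while reds merely dull – roughly the shift visible in Monet’s later paintings, discussed below.</p>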
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-are-cataracts-63699">Explainer: what are cataracts?</a>
</strong>
</em>
</p>
<hr>
<p>Claude Monet was <a href="https://www.ncbi.nlm.nih.gov/pubmed/26563659">diagnosed with cataracts in 1912</a> and advised to undergo surgery. He refused. Over the subsequent decade, his ability to see critical detail declined, as documented in his medical records. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=750&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=750&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=750&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=942&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=942&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222490/original/file-20180610-191951-1tqfsxl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=942&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This version of Monet’s bridge over a pond of water lillies was painted in 1899, ten years before his cataracts diagnosis.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Bridge_Over_a_Pond_of_Water_Lilies,_Claude_Monet_1899.jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Importantly, his colour vision also suffered. In 1914, he <a href="https://www.ncbi.nlm.nih.gov/pubmed/26563659">noted how reds appeared dull and muddy</a>, and by 1918 he was reduced to selecting colours from the label on the paint tube.</p>
<p>The visual impact of his cataracts is demonstrated in two paintings of the same scene: the Japanese footbridge over his garden’s lily pond. The first, painted thirteen years before his cataract diagnosis, is full of detail and subtle use of colour. </p>
<p>In contrast, the second – painted the year prior to his eventually relenting to surgery – shows colours to be dark and murky, with a near absence of blue, and a dramatic reduction in the level of painted detail.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=572&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=572&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=572&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=719&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=719&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222491/original/file-20180610-191971-jtopu4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=719&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Japanese Footbridge was painted in 1922, a year before Monet’s cataract surgery.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Claude_Monet_-_The_Japanese_Footbridge,_Giverny_-_Google_Art_Project.jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>There is good evidence such changes were not a conscious artistic choice. In a 1922 <a href="https://psyc.ucalgary.ca/PACE/VA-Lab/AVDE-Website/Monet.html">letter to author Marc Elder</a>, Monet confided he recognised his visual impairment was causing him to spoil paintings, and that his blindness was forcing him to abandon work despite his otherwise good health.</p>
<p>One of <a href="https://www.ncbi.nlm.nih.gov/pubmed/26563659">Monet’s fears</a> was that surgery would alter his colour perception, and indeed after surgery he complained of the world appearing too yellow or sometimes too blue. It was two years before he felt his colour vision had returned to normal. </p>
<p>Experimental work <a href="https://www.ncbi.nlm.nih.gov/pubmed/15518204">has confirmed</a> colour perception is measurably altered for months after cataract surgery, as the eye and brain adapt to the increased blue light previously blocked by the cataract.</p>
<h2>Clifton Pugh</h2>
<p>In addition to eye disease, colour vision can be altered by inherited deficiencies. Around <a href="http://www.colourblindawareness.org/colour-blindness/">8% of men and 0.5% of women</a> are born with abnormal colour vision – sometimes erroneously called “colour blindness”. </p>
<p>In one of its most common severe forms, people see colours purely in terms of various levels of blue and yellow. They can’t distinguish colours that vary only in their redness or greenness, and so have trouble distinguishing ripe from unripe fruit, for example. </p>
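<p>A crude way to illustrate this collapse of the red-green axis (it is not a proper colorimetric simulation of dichromacy) is to replace the red and green components of a colour with their mean, so that only blue-yellow variation survives. All values in this Python sketch are illustrative.</p>

```python
import numpy as np

def collapse_red_green(rgb):
    """Crude illustration of severe red-green colour deficiency:
    red and green are replaced by their mean, so colours differing
    only in redness/greenness become indistinguishable, while
    blue-yellow differences survive. Not a colorimetric model."""
    rgb = np.asarray(rgb, dtype=float)
    out = rgb.copy()
    shared = (rgb[..., 0] + rgb[..., 1]) / 2.0
    out[..., 0] = shared
    out[..., 1] = shared
    return out

ripe = np.array([0.8, 0.2, 0.1])    # reddish fruit
unripe = np.array([0.2, 0.8, 0.1])  # greenish fruit
# Both fruits collapse onto the same colour, while a yellow-blue pair
# such as [0.8, 0.8, 0.1] and [0.1, 0.1, 0.8] remains distinct.
```

<p>The ripe and unripe fruit from the example above map to an identical colour under this transformation, which is exactly the everyday difficulty described.</p>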
<p>It has been argued that no major artist is known to have had <a href="https://www.ncbi.nlm.nih.gov/pubmed/11274694">abnormal colour vision</a>. But <a href="https://www.ncbi.nlm.nih.gov/pubmed/19515095">subsequent research</a> argues against this. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-is-colour-blindness-7651">Explainer: what is colour blindness?</a>
</strong>
</em>
</p>
<hr>
<p>Australian artist <a href="https://www.portrait.gov.au/portraits/2006.56/kate-hattam/31931/">Clifton Pugh</a> can readily lay claim to the title of “major artist”: he was a three-time winner of the Archibald Prize for portraiture, is well represented in national galleries, and even won a bronze medal for painting at the Olympics (back when such things were possible). </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=480&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=480&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=480&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=603&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=603&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222493/original/file-20180610-191978-1kwm0k6.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=603&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It seems Pugh’s colour vision impairment didn’t noticeably influence the colours used in his artworks.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Gough_Whitlam_by_Clifton_Pugh_1972.png">Low res version of Gough Whitlam, 1972/Wikimedia Commons.</a></span>
</figcaption>
</figure>
<p>His abnormal colour vision is <a href="https://www.ncbi.nlm.nih.gov/pubmed/19515095">well documented</a> in biographical information. Owing to the inherited nature of colour vision deficiencies, researchers were able to test the colour vision of surviving family members to support their case that Pugh almost certainly had a severe red-green colour deficiency. </p>
<p>But an analysis of the colours used in Pugh’s paintings failed to reveal any signatures that would suggest a colour vision deficiency. This is consistent with <a href="https://academic.oup.com/bjaesthetics/article-abstract/7/2/132/117619?redirectedFrom=fulltext">previous work</a> demonstrating that it is not possible to reliably diagnose a colour vision deficiency from an artist’s work alone.</p>
<p class="fine-print"><em><span>Andrew Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Disease and disorders can affect how we see. Can the images in painted artworks tell us something about the state of an artist’s vision?Andrew Anderson, Associate Professor, Department of Optometry & Vision Sciences, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.