<h1>Bees can learn, remember, think and make decisions – here’s a look at how they navigate the world</h1>
<figure><img src="https://images.theconversation.com/files/526312/original/file-20230515-24407-1yxhj8.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2286%2C1560&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A bumblebee lands on the flowers of a white sloe bush. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/april-2022-saxony-anhalt-kathendorf-a-bumblebee-lands-on-news-photo/1240227459">Soeren Stache/picture alliance via Getty Images</a></span></figcaption></figure>
<p>As trees and flowers blossom in spring, bees emerge from their winter nests and burrows. For many species it’s <a href="https://theconversation.com/spring-signals-female-bees-to-lay-the-next-generation-of-pollinators-134852">time to mate</a>, and some will start new solitary nests or colonies. </p>
<p>Bees and other pollinators are essential to human society. They provide about one-third of the <a href="https://theconversation.com/a-bee-economist-explains-honey-bees-vital-role-in-growing-tasty-almonds-101421">food we eat</a>, a service with a global value estimated at <a href="https://doi.org/10.1038/nature20588">up to US$577 billion annually</a>.</p>
<p>But bees are interesting in many other ways that are less widely known. In my new book, “<a href="https://islandpress.org/books/what-bee-knows">What a Bee Knows: Exploring the Thoughts, Memories, and Personalities of Bees</a>,” I draw on my experience <a href="https://scholar.google.com/citations?user=tqms8REAAAAJ&hl=en">studying bees for almost 50 years</a> to explore how these creatures perceive the world and their amazing abilities to navigate, learn, communicate and remember. Here’s some of what I’ve learned.</p>
<h2>It’s not all about hives and honey</h2>
<p>Because people are widely familiar with honeybees, many assume that all bees are social and live in hives or colonies with a queen. In fact, only about 10% of bees are social, and most types don’t make honey.</p>
<p>Most bees lead solitary lives, digging nests in the ground or finding abandoned beetle burrows in dead wood to call home. Some bees are cleptoparasites, <a href="https://www.sciencefriday.com/articles/death-and-thievery-in-the-colony/">sneaking into unoccupied nests to lay eggs</a>, in the same way that cowbirds lay their eggs in other birds’ nests and let the unknowing foster parents <a href="https://madisonaudubon.org/blog/2018/8/9/into-the-nest-cowbirds-everybodys-favorite-villain">rear their chicks</a>.</p>
<p>A few species of tropical bees, known as vulture bees, survive by <a href="https://doi.org/10.1128/mBio.02317-21">eating carrion</a>. Their guts contain acid-loving bacteria that enable the bees to digest rotting meat. </p>
<p><div data-react-class="InstagramEmbed" data-react-props="{"url":"https://www.instagram.com/p/CGWgbHdgmBB/?utm_source=ig_web_copy_link\u0026igshid=MzRlODBiNWFlZA==","accessToken":"127105130696839|b4b75090c9688d81dfd245afe6052f20"}"></div></p>
<h2>Busy brains</h2>
<p>The world looks very different to a bee than it does to a human, but bees’ perceptions are hardly simple. Bees are intelligent animals that <a href="https://doi.org/10.1016/j.anbehav.2016.05.005">likely feel pain</a>, remember patterns and odors and even <a href="https://doi.org/10.1242/jeb.01929">recognize human faces</a>. They <a href="https://doi.org/10.1006/nlme.1996.0069">can solve mazes</a> and other problems and use simple tools. </p>
<p>Research shows that bees <a href="https://doi.org/10.1016/j.cub.2020.12.027">are self-aware</a> and may even have a <a href="https://doi.org/10.1016/j.cub.2017.08.008">primitive form of consciousness</a>. During the six to 10 hours bees spend <a href="https://doi.org/10.7717/peerj.9583">sleeping daily</a>, <a href="https://doi.org/10.1016/j.neubiorev.2014.09.020">memories are consolidated</a> within their amazing brains – organs the size of a poppy seed that contain 1 million nerve cells. There are some indications that <a href="https://doi.org/10.1016/j.cub.2015.09.001">bees might even dream</a>. I’d like to think so. </p>
<h2>An alien sensory world</h2>
<p>Bees’ sensory experience of the world is markedly different from ours. For example, humans see the world through the primary colors of <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/vision/colcon.html">red, green and blue</a>. Primary colors for bees are <a href="https://doi.org/10.1007/978-3-642-71496-2_15">green, blue and ultraviolet</a>.</p>
<p>Bees’ vision is <a href="https://doi.org/10.1146/annurev.ento.010908.164537">60 times less sharp than that of humans</a>: A flying bee can’t see the details of a flower until it is about 10 inches away. However, bees can see hidden ultraviolet floral patterns that are invisible to us, and those patterns lead the bees to flowers’ nectar.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/kQ8GRJp8bVg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Naturalist David Attenborough uses ultraviolet light to show how flowers may appear different to bees than to humans.</span></figcaption>
</figure>
<p>Bees also can spot flowers by detecting color changes at a distance. When humans watch film projected at 24 frames per second, the individual images appear to blur into motion. The rate at which separate images fuse into seemingly continuous motion, called the <a href="https://www.britannica.com/science/movement-perception/Apparent-movement#ref488126">flicker-fusion frequency</a>, indicates how quickly a visual system can resolve moving images. Bees have a much higher flicker-fusion frequency – you would have to play the film about 10 times faster for it to blur into motion for them – so they can fly over a flowering meadow and <a href="https://doi.org/10.1007/BF00610583">see bright spots of floral color</a> that wouldn’t stand out to humans.</p>
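<p>As a back-of-the-envelope illustration of that playback claim – with thresholds that are simplified stand-ins, not measured values – the arithmetic looks like this:</p>
<pre><code># Toy calculation: what frame rate would film need for a bee to see
# fused motion, if a bee's flicker-fusion threshold is ~10x a human's?
# The numeric thresholds below are illustrative, not measured values.

HUMAN_FUSION_HZ = 24      # film frame rate that blurs into motion for us
BEE_FUSION_RATIO = 10     # bees resolve flicker roughly 10x faster

bee_fusion_hz = HUMAN_FUSION_HZ * BEE_FUSION_RATIO

def looks_fused(frame_rate_hz, fusion_threshold_hz):
    """A display appears as continuous motion once its frame rate
    reaches or exceeds the viewer's flicker-fusion threshold."""
    return frame_rate_hz >= fusion_threshold_hz

print(looks_fused(24, HUMAN_FUSION_HZ))   # True: film fuses for humans
print(looks_fused(24, bee_fusion_hz))     # False: a bee sees separate frames
print(looks_fused(240, bee_fusion_hz))    # True: ~10x faster playback fuses
</code></pre>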
<p>From a distance, bees detect flowers by scent. A honeybee’s sense of smell is <a href="https://doi.org/10.1371/journal.pone.0009110">100 times more sensitive</a> than ours. Scientists have used bees to sniff out chemicals <a href="https://entomologytoday.org/2013/11/25/can-trained-bees-detect-cancer-in-patients/">associated with cancer</a> and <a href="https://www.cbsnews.com/boston/news/boston-researchers-train-bees-to-detect-diabetes/">with diabetes</a> on patients’ breath and to detect the presence of <a href="https://www.technologyreview.com/2006/12/07/227361/using-bees-to-detect-bombs/">high explosives</a>. </p>
<p>Bees’ sense of touch is also highly developed: They can feel tiny fingerprint-like ridges <a href="https://doi.org/10.1073/pnas.82.14.4750">on the petals of some flowers</a>. Bees are <a href="https://doi.org/10.1080/0005772X.1995.11099233">nearly deaf</a> to most airborne sounds unless they are very close to the source, but they are keenly sensitive to vibrations in surfaces they are standing on. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1348645052134944771"}"></div></p>
<h2>Problem solvers</h2>
<p>Bees <a href="https://doi.org/10.1080/0005772X.1995.11099233">can navigate mazes</a> as well as mice can, and studies show that they are aware of their own body dimensions. For example, when fat bumblebees were trained to fly and then walk through a slit in a board to get to food on the other side, the bees <a href="https://doi.org/10.1073/pnas.2016872117">turned their bodies sideways and tucked in their legs</a>. </p>
<p>Experiments by Canadian researcher Peter Kevan and Lars Chittka in England demonstrated remarkable feats of bee learning. Bumblebees were trained to pull a string – in other words, to use a tool – connected to a plastic disk with hidden depressions filled with sugar water. They could see the sugar wells but couldn’t get the reward <a href="https://doi.org/10.1371/journal.pbio.1002564">except by tugging at the string</a> until the disk was uncovered.</p>
<p>Other worker bees were placed nearby in a screen cage where they could see what their trained hive mates did. Once released, this second group also pulled the string for the sweet treats. This study demonstrated what scientists term <a href="https://www.britannica.com/science/social-learning">social learning</a> – acting in ways that reflect the behavior of others.</p>
<h2>Pollinating with vibrations</h2>
<p>Even pollination, one of bees’ best-known behaviors, can be much more complicated than it seems. </p>
<p>The basic process is similar for all types of bees: Females carry pollen grains, the sex cells of plants, on their bodies from flower to flower as they collect pollen and nectar to feed themselves and their developing grubs. When pollen rubs off onto <a href="https://www.amnh.org/learn-teach/curriculum-collections/biodiversity-counts/plant-identification/plant-morphology/parts-of-a-flower">a flower’s stigma</a>, the result is pollination. </p>
<p>My favorite area of bee research examines a method called <a href="https://doi.org/10.1016/j.pbi.2013.05.002">buzz pollination</a>. Bees use it on the roughly 10% of the world’s 350,000 flowering plant species that have special <a href="https://www.amnh.org/learn-teach/curriculum-collections/biodiversity-counts/plant-identification/plant-morphology/parts-of-a-flower">anthers</a> – structures that produce pollen. </p>
<p>For example, a tomato blossom’s five anthers are pinched together, like the closed fingers of one hand. Pollen is released through one or two small pores at the end of each anther. </p>
<p>When a female bumblebee lands on a tomato flower, she bites one anther at the middle and contracts her flight muscles from <a href="https://doi.org/10.1093/jxb/erab428">100 to 400 times per second</a>. These powerful vibrations eject pollen from the anther pores in the form of a cloud that strikes the bee. It all happens in just a few tenths of a second. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/SZrTndD1H10?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Bumblebees demonstrate buzz pollination on a Persian violet blossom.</span></figcaption>
</figure>
<p>The bee hangs by one leg and scrapes the pollen into “baskets” – structures on her hind legs. Then she repeats the buzzing on the remaining anthers before moving to different flowers.</p>
<p>Bees also use buzz pollination on the flowers of blueberries, cranberries, eggplant and kiwi fruits. My colleagues and I are conducting experiments to determine the biomechanics of <a href="https://doi.org/10.1098/rsif.2022.0040">how bee vibrations eject pollen from anthers</a>. </p>
<h2>Planting for bees</h2>
<p>Many species of bees are <a href="https://theconversation.com/bees-face-many-challenges-and-climate-change-is-ratcheting-up-the-pressure-190296">declining worldwide</a>, thanks to stresses including <a href="https://doi.org/10.1126/science.1255957">parasites, pesticides and habitat loss</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Wood cubes filled with twigs and bricks." src="https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/526311/original/file-20230515-18664-v796la.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A backyard ‘insect hotel’ for solitary bees and other nesting insects, made from stems, bricks and wood blocks.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/insect-hotel-for-solitary-bees-and-artificial-nesting-place-news-photo/601067110">Arterra/Universal Images Group vis Getty Images</a></span>
</figcaption>
</figure>
<p>Whether you have an apartment window box or several acres of land, you can do a few <a href="https://theconversation.com/to-help-insects-make-them-welcome-in-your-garden-heres-how-153609">simple things to help bees</a>. </p>
<p>First, plant native wildflowers so that blooms are available in every season. Second, try to avoid using insecticides or herbicides. Third, provide open ground where burrowing bees can nest. With luck, soon you’ll have some buzzing new neighbors.</p>
<p class="fine-print"><em><span>Stephen Buchmann does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Scientists are learning amazing things about bees’ sensory perception and mental capabilities.Stephen Buchmann, Adjunct Professor of Entomology and of Ecology and Evolutionary Biology, University of ArizonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1652342021-08-02T14:03:24Z2021-08-02T14:03:24ZOur brains perceive our environment differently when we’re lying down<figure><img src="https://images.theconversation.com/files/413896/original/file-20210730-23-162v0dk.jpg?ixlib=rb-1.1.0&rect=17%2C17%2C5734%2C3811&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When we lie down, our brains rely more on touch and pressure to figure out our surroundings.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>You’re agitated by the sound of a mosquito buzzing around your head. The buzzing stops. You feel the tiny pinprick and locate the target. Whack! It’s over. </p>
<p>It’s a simple sequence, but it demands complex processing. How did you know where the mosquito was before you could even see it? </p>
<p>The human body is covered in about two square metres of skin, but somehow even before looking you knew the precise location of the spindly predator. After visual confirmation, your hand found its way to the scene of the crime and applied fatal force to the bug, but you didn’t hurt yourself in the process.</p>
<p>What did it take for all that to happen? Good question.</p>
<p>For all the advancements the world has seen in every field of science, including neuroscience, the mechanics of perception and thinking still elude complete understanding.</p>
<p>Even the list of basic human senses is still up for debate: beyond the five traditional senses, many argue that balance — the body’s mechanism for orienting itself in space — should have been included long ago. </p>
<h2>Balance while lying down</h2>
<p><a href="https://vislab.mcmaster.ca/facilities/the-eeg-lab-for-vision-multisensory-studies">My colleagues and I at McMaster University</a> have recently uncovered a wrinkle in our perception, which is allowing us to learn more about how that sense of balance works and how much it contributes to our perception.</p>
<p>The wrinkle is this: when we lie on our sides, the brain appears to dial down its reliance on information related to the external world and instead increase reliance on internal perceptions generated by touch.</p>
<p>For example, when we cross our arms, <a href="https://doi.org/10.1016/S0926-6410(02)00070-8">we have more difficulty sorting out whether a vibration was delivered first to our right or left hand</a>. Somewhat surprisingly, when we close our eyes, <a href="https://doi.org/10.1163/22134808-00002423">performance improves</a>. Blindfolding degrades our representation of the external world, which allows our internal body-centred perception to dominate.</p>
<p>When people lie on their sides, their performance also improves with the hands crossed. </p>
<p>On its own, this information is not likely to affect daily life. But the fact that this difference exists is very meaningful in our quest to understand how we orient ourselves to the spaces we inhabit, and it may open avenues for discovery in other areas, including sleep, for example.</p>
<p>Our experiment was, in a way, quite straightforward. </p>
<h2>Blindfold experiments</h2>
<p>We tested research participants’ ability – both with eyes open and blindfolded – to identify which hand we stimulated first, with their hands crossed and uncrossed. We have been doing similar experiments in our lab for about 20 years.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A blindfolded woman stands in a park with her hands held out in front of her" src="https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/413796/original/file-20210729-21-s45hlq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Experiments show that we perceive sensations differently when we’re blindfolded.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>In this case, the results were consistent with what we’d seen in other experiments: participants performed worse when their hands were crossed. The novel manipulation here was having participants lie on their sides: when their hands were crossed, we saw a drastic improvement in their ability to localize the touch. Like blindfolding, lying on the side reduced the influence of the external representation of the world and allowed participants to pay attention to their body-centred signals. </p>
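<p>For readers curious how results like these are summarized, here is a minimal sketch of tabulating accuracy in a two-factor design of this kind (posture by hand arrangement). The trial records and field names are hypothetical, not the study’s actual data or analysis code.</p>
<pre><code>from collections import defaultdict

# Hypothetical trial records from a temporal-order judgment task:
# each trial notes the posture, whether hands were crossed, and
# whether the participant correctly judged which hand was touched first.
trials = [
    {"posture": "upright", "hands": "crossed",   "correct": False},
    {"posture": "upright", "hands": "uncrossed", "correct": True},
    {"posture": "lying",   "hands": "crossed",   "correct": True},
    {"posture": "lying",   "hands": "uncrossed", "correct": True},
    # ... many more trials per participant in a real experiment
]

def percent_correct_by_condition(trials):
    """Aggregate accuracy for each (posture, hands) cell of the design."""
    counts = defaultdict(lambda: [0, 0])   # cell -> [n_correct, n_total]
    for t in trials:
        cell = (t["posture"], t["hands"])
        counts[cell][0] += t["correct"]
        counts[cell][1] += 1
    return {cell: 100.0 * c / n for cell, (c, n) in counts.items()}

print(percent_correct_by_condition(trials))
</code></pre>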
<p>This difference, between doing the task upright and lying down, which we describe in <em>Scientific Reports</em>, <a href="https://doi.org/10.1038/s41598-021-92192-1">leads us to wonder if the brain deliberately dials down the most active orientation functions</a> — the external representation — when we lie down, possibly as a way to help us sleep. </p>
<h2>Environmental awareness</h2>
<p>The discovery leads us to want to know more about the role the body’s <a href="https://www.neuroscientificallychallenged.com/blog/know-your-brain-vestibular-system">vestibular system</a> appears to play in shaping our overall perception. In our inner ear, we have <a href="https://nobaproject.com/modules/the-vestibular-system">a little bit of ocean that came with us when we evolved from the sea</a>. We carry it around to assess gravity, so we can tell which way is up. Issues with this system can cause disorders such as vertigo.</p>
<p>Showing that the brain shifts its reference point toward interior perceptions whenever we lie on our sides suggests that the brain deliberately dims the vestibular system, highlighting the importance of its contribution to our normal perception. </p>
<p>There has been surprisingly little research on how the vestibular system influences input from other senses. Consider this: MRI machines scan people lying down, and MRI results inform conclusions about what is happening in people’s brains – when in fact their brain activity might look quite different if they were sitting or standing. </p>
<p>This discovery indicates that the vestibular system shapes perception in other senses as well. This raises questions that border on the philosophical: How are we aware of the environment around us? What are the components of consciousness? </p>
<p>We can take our bodies for granted, regarding them as machines for carrying us around, but our bodies themselves actually shape the way we perceive and understand the world. </p>
<p>Try thinking about that next time you’re lying down.</p>
<p class="fine-print"><em><span>David I. Shore consults to The multisensory Mind Inc. and has received funding, through McMaster University, from the Natural Science and Engineering Council of Canada. </span></em></p>Learning that our brains process information differently when we’re standing up or lying down has implications for how we study and assess brain function.David I. Shore, Professor, Psychology, Neuroscience & Behaviour, McMaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1516502021-02-05T13:06:37Z2021-02-05T13:06:37ZDo you see red like I see red?<figure><img src="https://images.theconversation.com/files/382552/original/file-20210204-14-a5xafl.jpg?ixlib=rb-1.1.0&rect=752%2C783%2C6122%2C3968&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's disconcerting to think the way two people perceive the world might be totally different.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/man-and-woman-standing-in-a-gallery-space-with-royalty-free-image/839180292">Mads Perch/Stone via Getty Images</a></span></figcaption></figure><p>Is the red I see the same as the red you see?</p>
<p>At first, the question seems confusing. Color is an inherent part of visual experience, as fundamental as gravity. So how could anyone see color differently than you do?</p>
<p>To dispense with the seemingly silly question, you can point to different objects and ask, “What color is that?” The initial consensus apparently settles the issue.</p>
<p>But then you might uncover troubling variability. A rug that some people call green, others call blue. A <a href="https://en.wikipedia.org/wiki/The_dress">photo of a dress</a> that <a href="https://doi.org/10.1016/j.cub.2015.04.053">some people call blue and black, others say is white and gold</a>.</p>
<p>You’re confronted with an unsettling possibility. Even if we agree on the label, maybe your experience of red is different from mine and – shudder – could it correspond to my experience of green? How would we know?</p>
<p>Neuroscientists, <a href="https://scholar.google.com/citations?user=LNgp00MAAAAJ">including</a> <a href="https://scholar.google.com/citations?user=6I_zDKUAAAAJ">us</a>, have tackled <a href="https://mitpress.mit.edu/books/color-ontology-and-color-science">this age-old puzzle</a> and are starting to come up with some answers to these questions. One thing that is becoming clear is the reason individual differences in color are so disconcerting in the first place. </p>
<h2>Colors add meaning to what you see</h2>
<p>Scientists often explain why people have color vision in cold, analytic terms: Color is <a href="https://doi.org/10.1146/annurev-vision-091517-034231">for object recognition</a>. And this is certainly true, but it’s not the whole story.</p>
<p>The <a href="https://doi.org/10.1167/18.11.1">color statistics of objects</a> are not arbitrary. The parts of scenes that people choose to label (“ball,” “apple,” “tiger”) are not any random color: They are more likely to be warm colors (oranges, yellows, reds), and less likely to be cool colors (blues, greens). This is true even for artificial objects that could have been made any color.</p>
<p>These observations suggest that your brain can use color to help recognize objects, and might explain <a href="https://theconversation.com/languages-dont-all-have-the-same-number-of-terms-for-colors-scientists-have-a-new-theory-why-84117">universal color naming patterns across languages</a>. </p>
<p>But recognizing objects is not the only, or maybe even the main, job of color vision. In <a href="https://doi.org/10.1038/s41467-019-10073-8">a recent study</a>, neuroscientists Maryam Hasantash and Rosa Lafer-Sousa showed participants real-world stimuli illuminated by low-pressure-sodium lights – the energy-efficient yellow lighting you’ve likely encountered in a parking garage.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="people and fruit lit by yellow low sodium lights" src="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=911&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=911&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=911&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1145&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1145&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382625/original/file-20210204-20-zdq64j.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1145&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The eye can’t properly encode color for scenes lit by monochromatic light.</span>
<span class="attribution"><span class="source">Rosa Lafer-Sousa</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The yellow light prevents the eye’s retina from properly encoding color. The researchers reasoned that if they temporarily knocked out this ability in their volunteers, the impairment might point to the normal function of color information. </p>
<p>The volunteers could still recognize objects like strawberries and oranges bathed in the eerie yellow light, implying that color isn’t critical for recognizing objects. But the fruit looked unappetizing. </p>
<p>Volunteers could also recognize faces – but they looked green and sick. Researchers think that’s because your expectations about normal face coloring are violated. The green appearance is a kind of error signal telling you that something’s wrong. This phenomenon is an example of how <a href="https://doi.org/10.1111/j.1933-1592.2010.00481.x">your knowledge can affect your perception</a>. Sometimes what you know, or think you know, influences what you see. </p>
<p>This research builds up the idea that color isn’t so critical for telling you what stuff is but rather about its likely meaning. Color doesn’t tell you about the kind of fruit, but rather whether a piece of fruit is probably tasty. And for faces, color is literally a vital sign that helps us identify emotions like anger and embarrassment, <a href="https://www.sciencedirect.com/science/article/pii/S0889159116304986">as well as sickness</a>, as any parent knows. </p>
<p>It might be color’s importance for telling us about meaning, especially in social interactions, that makes variability in color experiences between people so disconcerting. </p>
<h2>Looking for objective, measurable colors</h2>
<p>Another reason variability in color experience is troubling has to do with the fact that we can’t easily measure colors.</p>
<p>Having an objective metric of experience gets us over the quandary of subjectivity. With shape, for instance, we can measure dimensions using a ruler. Disagreements about apparent size can be settled dispassionately.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="spectral power distribution of various wavelengths of light" src="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382374/original/file-20210203-16-14psnr2.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The spectral power distribution of a 25-watt incandescent lightbulb illustrates the wavelengths of light it emits.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Spectral_power_distribution_of_a_25_W_incandescent_light_bulb.png">Thorseth/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>With color, we can measure proportions of different wavelengths across the rainbow. But these “spectral power distributions” do not by themselves tell us the color, even though they are <a href="https://doi.org/10.1017/S0140525X03000013">the physical basis for color</a>. A given distribution can appear different colors depending on context and assumptions about materials and lighting, as <a href="https://doi.org/10.1167/17.12.25">#thedress proved</a>.</p>
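<p>One well-established reason the spectrum alone cannot fix the color, worth making concrete, is that the eye collapses every spectral power distribution into just three cone responses, so physically different spectra can produce identical cone signals – so-called metamers. The toy calculation below illustrates this; its sensitivity numbers are invented for illustration, not real cone fundamentals.</p>
<pre><code># Toy metamerism demo: the eye reduces a full spectrum to three cone
# responses, so two different spectra can be indistinguishable.
# All numbers are illustrative, not real cone sensitivities.

WAVELENGTHS = [450, 500, 550, 600]   # nm, a coarse toy sampling

CONE_SENSITIVITY = {                  # response per unit power
    "L": [0, 1, 2, 2],
    "M": [0, 2, 2, 1],
    "S": [2, 1, 0, 0],
}

def cone_responses(spectrum):
    """Sum power x sensitivity across wavelengths for each cone type."""
    return {
        cone: sum(p * s for p, s in zip(spectrum, sens))
        for cone, sens in CONE_SENSITIVITY.items()
    }

flat_spectrum  = [5, 5, 5, 5]   # equal power everywhere
spiky_spectrum = [6, 3, 8, 3]   # physically quite different light

print(cone_responses(flat_spectrum))    # {'L': 25, 'M': 25, 'S': 15}
print(cone_responses(spiky_spectrum))   # {'L': 25, 'M': 25, 'S': 15} -- a metamer
</code></pre>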
<p>Perhaps color is a <a href="https://aardvark.ucsd.edu/color/hatfield.html">“psychobiological” property</a> that emerges from the brain’s response to light. If so, could an objective basis for color be found not in the physics of the world but rather in the human brain’s response? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="cross section of retina with different cell types" src="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=389&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=389&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=389&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=488&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=488&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382371/original/file-20210203-20-1agq2g7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=488&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Cone cells in the eye’s retina encode messages about color vision.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/eye-anatomy-rod-cells-and-cone-cells-royalty-free-illustration/1091261988">ttsz/iStock via Getty Images Plus</a></span>
</figcaption>
</figure>
<p>To compute color, your brain engages <a href="https://doi.org/10.1146/annurev-vision-091517-034202">an extensive network of circuits</a> in the cerebral cortex that <a href="https://doi.org/10.1146/annurev-vision-121219-081801">interpret the retinal signals</a>, taking into account <a href="https://journals.sagepub.com/doi/full/10.1177/1073858419882621">context and your expectations</a>. Can we measure the color of a stimulus by monitoring brain activity?</p>
<h2>Your brain response to red is similar to mine</h2>
<p>Our group used magnetoencephalography – MEG for short – to monitor the tiny magnetic fields created when nerve cells in the brain fire to communicate. We were able to classify the response to various colors using machine learning and then <a href="https://doi.org/10.1016/j.cub.2020.10.062">decode from brain activity the colors</a> that participants saw.</p>
<p>So, yes, we can determine color by measuring what happens in the brain. Our results show that each color is associated with a distinct pattern of brain activity.</p>
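<p>To give a flavor of how such decoding works, here is a minimal sketch in Python – a generic simulation with an off-the-shelf classifier, not the study’s actual pipeline or data. Simulated sensor patterns stand in for real MEG recordings, and cross-validated decoding accuracy is compared against chance.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_colors, trials_per_color, n_sensors = 4, 50, 30

# Simulated "MEG" data: each color evokes its own mean sensor pattern,
# buried in trial-to-trial noise. Real recordings would replace this.
color_patterns = rng.normal(size=(n_colors, n_sensors))
X = np.vstack([
    color_patterns[c] + rng.normal(scale=2.0, size=(trials_per_color, n_sensors))
    for c in range(n_colors)
])
y = np.repeat(np.arange(n_colors), trials_per_color)

# Cross-validated decoding: accuracy above chance (here 1/4) means the
# sensor patterns carry information about which color was seen.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_colors:.2f})")
</code></pre>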
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person seated in MEG machine looking at screen with color projection" src="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=378&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=378&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=378&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=475&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=475&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382764/original/file-20210205-13-17w8sz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=475&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Researchers measured volunteers’ brain responses with magnetoencephalography (MEG) to decode what colors they saw.</span>
<span class="attribution"><span class="source">Bevil Conway</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>But are the patterns of brain response similar across people? This is a hard question to answer, because one needs a way of perfectly matching the anatomy of one brain to another, which is really tough to do. For now, we can sidestep the technical challenge by asking a related question. Does my relationship between red and orange resemble your relationship between red and orange? </p>
<p>The MEG experiment showed that two colors that are perceptually more similar, as assessed by how people label the colors, give rise to more similar patterns of brain activity. So your brain’s response to color will be fairly similar when you look at something light green and something dark green but quite different when looking at something yellow versus something brown. What’s more, these similarity relationships are preserved across people. </p>
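<p>The logic of that cross-person comparison can also be sketched: build each person’s color-dissimilarity profile from their response patterns, then correlate the profiles. This is a simplified illustration of representational similarity analysis using made-up patterns, not the study’s analysis code.</p>
<pre><code>import numpy as np

def dissimilarity_profile(patterns):
    """Pairwise dissimilarity (1 - correlation) between the brain
    patterns evoked by each color, flattened to a vector."""
    corr = np.corrcoef(patterns)
    # keep each color pair once (upper triangle, no diagonal)
    iu = np.triu_indices(len(patterns), k=1)
    return 1 - corr[iu]

rng = np.random.default_rng(1)
shared = rng.normal(size=(4, 20))   # 4 colors x 20 "sensors"

# Two subjects whose responses share structure but differ in detail.
subject_a = shared + rng.normal(scale=0.3, size=shared.shape)
subject_b = shared + rng.normal(scale=0.3, size=shared.shape)

# If similarity relations are preserved across people, the two
# profiles should correlate strongly.
profile_a = dissimilarity_profile(subject_a)
profile_b = dissimilarity_profile(subject_b)
print(np.corrcoef(profile_a, profile_b)[0, 1])
</code></pre>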
<p>Physiological measurements are unlikely to ever resolve metaphysical questions such as “what is redness?” But the MEG results nonetheless provide some reassurance that color is a fact we can agree on.</p>
<p class="fine-print"><em><span>Bevil R. Conway receives funding from the Intramural Research Program (IRP) of the National Eye Institute (NEI). </span></em></p><p class="fine-print"><em><span>Danny Garside receives funding from the Intramural Research Program (IRP) of the National Eye Institute (NEI). </span></em></p>Neuroscientists tackling the age-old question of whether perceptions of color hold from one person to the next are coming up with some interesting answers.Bevil R. Conway, Senior Investigator at the National Eye Institute, Section on Perception, Cognition, and Action, National Institutes of HealthDanny Garside, Visiting Fellow in Sensation, Cognition & Action, National Institutes of HealthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1541522021-02-02T15:29:09Z2021-02-02T15:29:09ZWhy your kids know when you’re trying to put on a brave face<figure><img src="https://images.theconversation.com/files/381977/original/file-20210202-15-ai0i2e.jpg?ixlib=rb-1.1.0&rect=16%2C8%2C5590%2C3724&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/cute-young-boy-holds-magnifying-glass-112048604">Prixel Creative/Shutterstock</a></span></figcaption></figure><p>It’s 7:30am on a Monday morning and you’re trying to get your little darlings out of the house for school. The week has only just begun but already you can feel your temper being tested: your children appear to be physically incapable of getting dressed. You put on a nice faux smile and implore them through gritted teeth to “get dressed <em>right</em> now”. Despite your best efforts, though, somehow your real emotions have shone through: your children have started to cry.</p>
<iframe id="noa-web-audio-player" style="border: none" src="https://embed-player.newsoveraudio.com/v4?key=x84olp&id=https://theconversation.com/why-your-kids-know-when-youre-trying-to-put-on-a-brave-face-154152&bgColor=F5F5F5&color=D8352A&playColor=D8352A" width="100%" height="110px"></iframe>
<p>This situation will be familiar to many parents – myself included. Numerous times, I’ve tried to conceal how I’m really feeling when talking to my daughter by “putting on a brave face” that I hope masks my true feelings. However, my team’s <a href="https://www.sciencedirect.com/science/article/abs/pii/S0022096520305221">new research</a> suggests that all of this effort might actually be in vain. </p>
<p>We’ve found that children prioritise sound over sight when identifying emotions – which means that the emotion you carry in your voice’s tone, volume and pitch registers with your kids despite the careful physical mask you put up to hoodwink them. As such, rather than putting on a brave face in difficult moments, parents should perhaps try to “put on a brave voice” instead.</p>
<h2>The reverse Colavita effect</h2>
<p>Our research was inspired by the esteemed psychologist <a href="https://www.ncbi.nlm.nih.gov/books/NBK92851/">Francis Colavita</a>, who ran an experiment in the 1970s that produced a curious result. When presented with flashes of light (visual stimuli) and tones (auditory stimuli) at the same time, adults tended to ignore the auditory stimuli and only report the visual ones. </p>
<p>This was coined the “Colavita effect” and was taken as evidence of visual dominance in adults. More recently, the opposite was found in <a href="https://srcd.onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8624.2012.01856.x">children</a>. Under the same conditions, children – those up to the age of around eight – tended to report the auditory stimuli and ignore the visual. This was dubbed the “reverse-Colavita effect”, a case of auditory dominance.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/should-you-hide-negative-emotions-from-children-104710">Should you hide negative emotions from children?</a>
</strong>
</em>
</p>
<hr>
<p>Since this research was published, the limits of the effect on children have been tested. Instead of simple flashes and tones, more complex stimuli – like pictures of animals and the sounds they make – <a href="https://www.sciencedirect.com/science/article/abs/pii/S0022096515001824?via%3Dihub">have been used</a>. For instance, these studies found that when shown a picture of a dog accompanied by the sound of a cow, children would only report what they heard – not what they saw. </p>
<p>This demonstrated that the reverse-Colavita effect wasn’t simply due to a preference for tones over flashes like in the original study, but instead appeared to be a preference for any auditory stimuli, even complex and meaningful sounds. These sounds were so dominant that they were all the children would report perceiving.</p>
<h2>Sounding out</h2>
<p>We wanted to push this effect further and try and find out whether children show an auditory dominance for emotionally meaningful stimuli. We created an experiment to test this, using <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2011.00181/full">emotional bodies</a> (photos of people’s bodies looking scared, sad, happy, or angry) and emotional <a href="https://link.springer.com/article/10.3758/BRM.40.2.531">voices</a> (recordings of people sounding scared, sad, happy, or angry).</p>
<p>We presented adults and children (aged between 6 and 11) with these images and sounds in different combinations, both matching and mismatched. A happy body and a happy voice made for a matching pair of stimuli, whereas a sad body with an angry voice would be a mismatched pair of stimuli.</p>
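<p>In design terms, the stimuli form a simple factorial crossing of body emotion with voice emotion, which can be enumerated in a few lines (an illustrative sketch, not the experiment’s code):</p>
<pre><code>from itertools import product

EMOTIONS = ["scared", "sad", "happy", "angry"]

# Cross every body emotion with every voice emotion: 16 pairs,
# 4 matching (body == voice) and 12 mismatched.
pairs = [
    {"body": body, "voice": voice, "matching": body == voice}
    for body, voice in product(EMOTIONS, EMOTIONS)
]

matching = [p for p in pairs if p["matching"]]
mismatched = [p for p in pairs if not p["matching"]]
print(len(matching), len(mismatched))   # 4 12
</code></pre>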
<p>We asked our participants two things. First, we asked them to ignore what they saw, telling us instead the person’s emotion based on the voice. Adults and children could do that no problem. Then we showed exactly the same stimuli but this time asked them to ignore what they heard and tell us how the person felt based on their body. Here again the adults could do this with no difficulty, but the children found this extremely difficult.</p>
<p>When viewing a picture of a person cowering in fear, for example, children in our study would tell us that person was happy if they heard a laugh at the same time. In effect, children could not ignore auditory stimuli when judging emotion. Our study is the first evidence of an auditory dominance in children when detecting and recognising emotion.</p>
<h2>Loud and clear</h2>
<p>If children have an auditory dominance when it comes to emotional information, it is the emotion in the parent’s voice that will “override” any visual emotional information in their body language. That means an angry voice is likely to be detected by a child, even if it’s hidden behind a forced smile.</p>
<figure class="align-center ">
<img alt="A child folds her arms in a grumpy posture while her mother looks on concerned" src="https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/381782/original/file-20210201-17-1mla13t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">‘It’s not what you said – it’s how you said it’.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/mom-psychologist-talking-counseling-upset-offended-1282522006">fizkes/Shutterstock</a></span>
</figcaption>
</figure>
<p>The implications of these findings go beyond just avoiding tantrums. Teachers have made huge efforts to make online learning as engaging as possible for children schooled at home during the pandemic. Given our findings, perhaps lesson design should focus less on the visual elements, and more on auditory elements. </p>
<p>If a child’s perception of what they see can be so influenced by what they hear, then their sensory environment may matter a great deal. Our findings suggest that, for remote lessons at least, children may actually benefit from working with headphones on or earphones in – to avoid competing, confusing auditory stimuli.</p>
<p>In any case, next time you want to conceal how you really feel from your child, it may be worth remembering that it’s your voice that will betray you – not your face or your body language.</p>
<p class="fine-print"><em><span>Paddy Ross does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rather than putting on a ‘brave face’, parents might be better putting on a ‘brave voice’ to conceal their emotions.Paddy Ross, Assistant Professor, Department of Psychology, Durham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1452762020-09-03T12:40:13Z2020-09-03T12:40:13Z‘Curing blindness’: why we need a new perspective on sight rehabilitation<figure><img src="https://images.theconversation.com/files/356355/original/file-20200903-16-1ifm79u.jpg?ixlib=rb-1.1.0&rect=50%2C0%2C5637%2C3755&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Many blind people want sight rehabilitation technologies to increase their independence.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/blind-visually-impaired-childkidtoddlerpreschoolerboy-walking-through-1192950880">Tracy Spohn/ Shutterstock</a></span></figcaption></figure><p>In a society <a href="https://www.smh.com.au/opinion/the-challenges-of-using-technology-when-youre-blind-20170517-gw6jp5.html">focused on visual communication</a>, being blind can have severe disadvantages. In fact, research shows blind people are at <a href="https://www.rnib.org.uk/sites/default/files/Employment%20status%20and%20sight%20loss%202017.pdf">higher risk of unemployment</a>, <a href="https://hqlo.biomedcentral.com/articles/10.1186/s12955-019-1096-y">social isolation, and lower quality of life</a> than sighted people. Given the huge impact blindness has on society and those without vision, the drive to find a “cure” for blindness has become a profitable market. </p>
<p>Many new, cutting-edge developments that “<a href="https://www.nationalgeographic.com/magazine/2016/09/blindness-treatment-medical-science-cures/">cure blindness</a>” build on promises they often <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1772467/">cannot keep</a>, leaving many blind people and their families feeling disappointed and disillusioned. But what does “curing blindness” actually mean – and how can it be achieved in a way that it most benefits the blind person? </p>
<p>When we think of “curing blindness”, we often think about <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6350159/">restoring the lost sense</a> – for example, through vision-enhancing technology, bionic eyes, or gene therapy. This is because we typically treat an impaired sense by focusing on the damaged sensory organ. But while our eyes deliver the sensory input, by transforming light into electrical impulses that our brains can use, most visual perception <a href="https://theconversation.com/how-do-our-brains-reconstruct-the-visual-world-49276">happens in the brain</a>. </p>
<p>The perception of a visual object, a coffee cup, say, is created across different <a href="https://www.youtube.com/watch?v=P-7mO2FhaVE">hierarchical levels</a> in the visual cortex of our brain. Simple two-dimensional features, such as edges and colours, are combined into more complex shapes, which are in turn combined into the perception of whole objects, like our coffee cup. Across these different levels, our previous visual and non-visual experiences strongly influence <a href="http://people.psych.cornell.edu/%7Ejec7/pubs/ostrovskyetal.pdf">how we perceive</a> the final object.</p>
<figure class="align-center ">
<img alt="Steaming coffee cup on table." src="https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/356352/original/file-20200903-24-1vtyc9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Most of what we ‘see’ happens in the brain itself.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/lighting-window-hot-white-coffee-cup-348244271">NOPPHARAT42896395/ Shutterstock</a></span>
</figcaption>
</figure>
<p>Because of the complex nature of visual perception, sight is incredibly difficult to restore, and achieving a satisfactory level of visual function is not easy. Despite significant advances in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6350159/">visual restoration technology</a>, even the best visual implants typically only allow visual acuity of 1/60, which is technically still classed as <a href="https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment">blindness</a> by the World Health Organization. While this minimal form of light perception is already great progress, it’s <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0134369">not enough</a> to allow a person to live independently. </p>
<p>While every blind person has their own ideas of what sight rehabilitation should do for them, what resonates with most is the aim to <a href="https://theconversation.com/to-provide-eyes-or-not-to-provide-eyes-36408">increase independence</a> by allowing blind people to gain more access to visual information. </p>
<p>But does the brain need vision for that? Not necessarily. This is why we need to adopt a different perspective on sensory rehabilitation – one that views vision as part of a greater multisensory experience. After all, perception is rarely based on one sense alone but on a combination of multisensory experiences in which our senses influence each other. </p>
<h2>Multisensory perspective</h2>
<p>The brain has the remarkable ability to compensate for sensory loss by <a href="https://theconversation.com/do-blind-people-have-better-hearing-102282">reorganizing</a> how it processes information. In fact, the brain learns to perceive through the sensory experiences it has during childhood. If all sensory experience is non-visual, perception will develop around these experiences. So if a person was born without sight, or lost their sight early on in childhood, their perception will develop around the non-visual senses.</p>
<p>This is why, on some tasks, blind people perform better than sighted people, while, on other tasks, they may perform worse. This dichotomy seems to reflect a simple principle: is the sense that is typically used for this task the <a href="https://brill.com/view/journals/msr/28/1-2/article-p71_5.xml">best suited</a> for accessing this information? For example, we are well able to locate a buzzing phone using either our vision or hearing. In this case, more experience finding objects through sound will lead to <a href="https://academic.oup.com/brain/article/137/1/6/365182">superior performance in blind people</a> when only hearing is used. However, given that our vision is much better suited to perceiving people’s faces, blind people usually perform worse than sighted people when <a href="https://pubmed.ncbi.nlm.nih.gov/23399994/">recognising others’ faces through touch</a>.</p>
<p>We know that the brain learns best about the environment when it can access the same information <a href="https://faculty.ucr.edu/%7Easeitz/pubs/Shams_Seitz08.pdf">through multiple senses</a>. This benefits our perception by <a href="https://www.semanticscholar.org/paper/Merging-the-senses-into-a-robust-percept-Ernst-B%C3%BClthoff/66ed3eaa6ae8e7a0e48c268d579c890a2968c061">enhancing accuracy and precision</a>. But if we want to make use of this perceptual benefit in vision rehabilitation, we need to know whether the blind brain actually learned to generate it. </p>
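<p>The ‘accuracy and precision’ benefit has a standard formalization in the cue-combination literature linked above: an ideal observer weights each sense by its reliability (inverse variance), and the combined estimate is more precise than either sense alone. Here is a minimal sketch of that model, with arbitrary example numbers:</p>
<pre><code>def combine_cues(estimate_1, var_1, estimate_2, var_2):
    """Maximum-likelihood combination of two sensory estimates:
    weight each cue by its reliability (inverse variance)."""
    w1 = (1 / var_1) / (1 / var_1 + 1 / var_2)
    w2 = 1 - w1
    combined = w1 * estimate_1 + w2 * estimate_2
    combined_var = 1 / (1 / var_1 + 1 / var_2)  # smaller than var_1 and var_2
    return combined, combined_var

# Example: judging an object's size by touch (more reliable here)
# and hearing (less reliable). Units are arbitrary.
size, variance = combine_cues(10.0, 1.0, 12.0, 4.0)
print(size, variance)   # 10.4, 0.8 -- more precise than either sense alone
</code></pre>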
<p>It turns out that this <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/desc.13001">depends on the age</a> a person goes blind. Blindness before the age of eight or nine years influences how touch and hearing are used together to estimate object size. But blindness after this age impairs the ability to enhance perception through multisensory combination. </p>
<p>So what does that mean for sensory rehabilitation? We know that there’s not one best solution for all, but we also know that the age of blindness-onset can provide important clues. If a person has been blind since birth or early childhood, the brain does not know how to process visual information, so vision restoration may not bring much benefit. If, however, sight was lost later in life, the brain is best wired to perceive its surroundings through vision.</p>
<p>But there’s still good news for the congenitally and early blind: the enhanced perceptual abilities in the remaining senses can be used to <a href="https://www.sciencedirect.com/science/article/pii/S0149763413002765">substitute vision</a>. In fact, visual information does not have to be taken up through the eyes – it can also reach the brain through our <a href="https://theconversation.com/camera-mobile-headphones-the-low-cost-set-up-that-can-help-blind-people-see-35936">other senses</a>. In order for that to happen, it first needs to be translated into a different “sensory language”. For example, visual information can be directly <a href="https://www.seeingwithsound.com/">translated into sound</a>. Through training, the brain then learns to use this <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2020.01443/full">new sensory language</a>, opening up the visual world through the use of another sense. </p>
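<p>To make the idea of a ‘sensory language’ concrete, here is a toy sketch of a visual-to-auditory mapping loosely inspired by systems such as The vOICe: the image is scanned left to right, vertical position maps to pitch and brightness to loudness. The specific frequencies and timings are invented for illustration, not the parameters of any real device.</p>
<pre><code># Toy visual-to-auditory substitution: scan a grayscale image left to
# right; each row maps to a pitch (higher rows = higher pitch) and each
# pixel's brightness maps to that pitch's loudness. Parameters invented.

image = [                    # tiny 4x4 image, 0 = black, 1 = white
    [0.0, 0.0, 1.0, 0.0],    # top row (will sound highest-pitched)
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 1.0],    # bottom row (lowest pitch)
]

BASE_HZ = 400          # pitch of the bottom row
STEP_HZ = 200          # pitch increase per row upward
COLUMN_SECONDS = 0.25  # how long each column of the scan lasts

def image_to_tone_schedule(image):
    """Return (start_time, frequency_hz, loudness) triples describing
    the soundscape for one left-to-right scan of the image."""
    n_rows = len(image)
    schedule = []
    for col in range(len(image[0])):
        start = col * COLUMN_SECONDS
        for row in range(n_rows):
            loudness = image[row][col]
            if loudness > 0:
                freq = BASE_HZ + STEP_HZ * (n_rows - 1 - row)
                schedule.append((start, freq, loudness))
    return schedule

for start, freq, loud in image_to_tone_schedule(image):
    print(f"t={start:.2f}s  {freq} Hz  loudness {loud}")
</code></pre>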
<p>While sensory restoration has come a long way, we are still far from an optimal solution that allows blind people to access visual information and participate equally in society. By realising that perception depends on individual experience, we can better develop solutions that will most benefit each person – whether that means restoring their sight or making use of their other senses instead.</p><img src="https://counter.theconversation.com/content/145276/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Meike Scheller's work has been carried out at the University of Bath with support from the University, the British Academy, and the NIHR. </span></em></p>Perception is multisensory.Meike Scheller, Research Fellow, University of AberdeenLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1414372020-07-08T12:13:26Z2020-07-08T12:13:26ZSynthetic odors created by activating brain cells help neuroscientists understand how smell works<figure><img src="https://images.theconversation.com/files/346129/original/file-20200707-194405-awzgsl.jpg?ixlib=rb-1.1.0&rect=767%2C8%2C4838%2C3242&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When you sniff a particular scent, your brain cells fire in a recognizable pattern.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/young-woman-smelling-perfume-from-bottle-at-royalty-free-image/953961844">Maskot via Getty Images</a></span></figcaption></figure><p>When you experience something with your senses, it evokes complex patterns of activity in your brain. One important goal in neuroscience is to decipher how these neural patterns drive the sensory experience.</p>
<p>For example, can the smell of chocolate be represented by a single brain cell, by groups of cells firing at the same time, or by cells firing in some precise symphony? The answers to these questions will lead to a broader understanding of how our brains represent the external world. They also have implications for treating disorders where the brain fails to represent the external world: for example, in the loss of sight or smell.</p>
<p>To understand how the brain drives sensory experience, <a href="https://rinberglab.com">my colleagues and I</a> focus on the sense of smell in mice. We directly control a mouse’s neural activity, <a href="https://doi.org/10.1126/science.aba2357">generating “synthetic smells”</a> in the olfactory part of its brain in order to learn more about how the sense of smell works.</p>
<p>Our latest experiments revealed that scents are represented by very specific patterns of activity in the brain. Like the notes of a melody, the cells fire in a unique sequence with particular timing to represent the sensation of smelling a particular odor.</p>
<h2>Scents produced by light projections</h2>
<p>Using mice to study smell is appealing to researchers because the <a href="https://doi.org/10.1016/j.conb.2018.04.008">relevant brain circuits have been mapped out</a>, and modern tools allow us to directly manipulate these brain connections.</p>
<p>The mice we use are genetically engineered so we can activate individual brain cells simply by shining light of specific wavelengths onto them – <a href="https://doi.org/10.1038/nn1525">a technique known as optogenetics</a>. Early uses of optogenetics involved light delivered through implanted optical fibers, letting researchers control coarse patches of brain cells. More recent uses of optogenetics allow <a href="https://doi.org/10.1126/science.aaw5202">more sophisticated control</a> of <a href="https://doi.org/10.1016/j.cell.2019.05.045">precise patterns of brain activity</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346080/original/file-20200707-22-1rfavl2.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A simplified image of a mouse brain, looking down from the top. The olfactory bulb (left) is at the front of the brain and receives connections from receptor cells in the nose.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Mouse_brain_top_view.png">Database Center for Life Science/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>For our study, we projected light patterns onto the surface of the brain, targeting a region known as the olfactory bulb. Previous research has found that when mice sniff different scents, cells in the olfactory bulb appear to fire in a sort of patterned symphony, with a <a href="https://doi.org/10.1152/jn.90902.2008">unique pattern formed in response to each distinct smell</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=617&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=617&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=617&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=776&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=776&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346101/original/file-20200707-194405-1ojhb4o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=776&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rather than receiving sensory signals from the nose, the olfactory bulb was activated by light projections.</span>
<span class="attribution"><span class="source">Edmund Chong</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>When we shined light patterns onto a mouse’s olfactory bulb, it generated corresponding patterns of cellular activity. We called these patterns “synthetic smells.” Rather than arising from a mouse sniffing a real odor, the neural activity of a “synthetic smell” was triggered directly by our light projections.</p>
<p>Next we trained each individual mouse to recognize a randomly generated synthetic smell. Since they can’t describe to us in words what they’re perceiving, we rewarded each mouse with water if it licked a water spout whenever it detected its assigned smell. Over weeks of training, mice learned to lick when their assigned smell was activated, and not to lick for other randomly generated synthetic smells. </p>
<p>[<em><a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=experts">Expertise in your inbox. Sign up for The Conversation’s newsletter and get expert takes on today’s news, every day.</a></em>]</p>
<p>We cannot say for sure that these synthetic smells correspond to any known odor in the world, nor do we know what they smell like to the mouse. But we did calibrate our synthetic patterns to broadly resemble olfactory bulb patterns observed when actual scents are present. Furthermore, mice learned to discriminate synthetic smells about as quickly as they learn real ones.</p>
<h2>Tweaking the pattern of a synthetic smell</h2>
<p>Once each mouse learned to recognize its assigned synthetic smell, we measured how much they still licked when we modified the assigned smell. Within each synthetic pattern, we altered which cells were activated or when they activated.</p>
<p>Imagine taking a familiar song, changing individual notes in the song, and asking whether you still recognized the song after each change. By testing many different changes, one can eventually understand which precise composition of notes is essential to the song’s identity and which tweaks are extreme enough to make the song unrecognizable.</p>
<p>Likewise, by measuring how mice changed their licking as we modified their projected light patterns, we were able to understand which combinations of cells within the pattern were important for identifying the synthetic smell.</p>
<p>The precise combination of cells activated was crucial. But just as important was when they were activated: the cells had to fire in an ordered sequence, akin to timed notes in a melody. For example, changing the order of cells in the sequence would render the smell unrecognizable.</p>
<p>It turned out that the cells activated earlier in the sequence were more important for recognition – changing the sequence later in the pattern seemed to have negligible effects.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/lJ2bof_fWgM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Watch an animation of how these sequences in the brain work.</span></figcaption>
</figure>
<p>Changes in recognition were graded, and not drastic: When we changed small parts of the pattern, the smell did not become completely unrecognizable. In fact, the degree to which the smell was recognized was proportional to the degree of change in the pattern. This implies that if I slightly modify the brain activity that represents an orange, you would still smell something similar – maybe recognizing it as citrus, or fruity.</p>
<p>So while the brain has huge capacity to store many different smells in unique timed sequences of cell activity, you can still recognize similar smells by the similarity in their patterns: The smell of coffee is still distinctly recognizable even with a splash of vanilla added to it. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=506&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=506&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346134/original/file-20200707-194418-1oc455r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=506&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">You know the smell of coffee even if it’s served with a dash of fragrant vanilla.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/hot-espresso-royalty-free-image/981402468">Roland Beerli/500px Prime via Getty Images</a></span>
</figcaption>
</figure>
<p>The next step in this research is to bring the synthetic approach to real smells. To do so, we would need to record brain activity in response to a real smell, then reactivate the very same cells using optogenetics. The synthetic re-creation of real objects in the brain is the current focus of research in <a href="https://doi.org/10.1126/science.aaw5202">multiple</a> <a href="https://doi.org/10.1016/j.cell.2019.05.045">labs</a> <a href="https://doi.org/10.1364/BRAIN.2019.BM3A.2">including ours</a>.</p>
<p>Addressing this issue is exciting because it opens up possibilities not just for understanding how the brain works, but also for developing brain implants that may one day restore lost sensory experiences.</p><img src="https://counter.theconversation.com/content/141437/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Edmund Chong does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Brains recognize a smell based on which cells fire, in what order – the same way you recognize a song based on its pattern of notes. How much can you change the ‘tune’ and still know the smell?Edmund Chong, Ph.D. Student in Neuroscience, New York UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1314382020-07-02T12:26:07Z2020-07-02T12:26:07ZDo dogs really see in just black and white?<figure><img src="https://images.theconversation.com/files/343579/original/file-20200623-188900-3set3z.jpg?ixlib=rb-1.1.0&rect=286%2C557%2C4464%2C3080&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Don't worry that your dog's world is visually drab.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/high-angle-view-of-dog-walking-on-colorful-striped-royalty-free-image/677142241">Kevin Short/EyeEm via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>Do dogs really see in just black and white? – Oscar V., age 9, Somerville, Massachusetts</strong></p>
</blockquote>
<hr>
<p>Dogs definitely see the world differently than people do, but it’s a myth that their view is <a href="https://www.hillspet.com/dog-care/resources/dog-myths">just black, white and grim shades of gray</a>. </p>
<p>While most people see a full spectrum of colors from red to violet, dogs lack some of the light receptors in their eyes that allow human beings to see certain colors, particularly in the red and green range. But canines can still see yellow and blue.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=353&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=353&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=353&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=444&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=444&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344613/original/file-20200629-155299-i6prbp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=444&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Different wavelengths of light register as different colors in an animal’s visual system. Top is the human view; bottom is a dog’s eye view.</span>
<span class="attribution"><a class="source" href="https://dog-vision.andraspeter.com/tool.php">Top: iStock/Getty Images Plus via Getty Images. Bottom: As processed by András Péter's Dog Vision Image Processing Tool</a></span>
</figcaption>
</figure>
<p>What you see as red or orange, to a dog may just be another shade of tan. To my dog, Sparky, a bright orange ball lying in the green grass may look like a tan ball in another shade of tan grass. But his bright blue ball will look similar to both of us. <a href="https://dog-vision.andraspeter.com/tool.php">An online image processing tool</a> lets you see for yourself what a particular picture looks like to your pet.</p>
<p>Animals can’t use spoken language to describe what they see, but researchers easily trained dogs to touch a lit-up color disc with their nose to get a treat. Then they trained the dogs to touch a disc that was a different color from the others. When the well-trained dogs couldn’t figure out which disc to press, the scientists knew that they couldn’t see the differences in color. These experiments showed that <a href="https://doi.org/10.1017/s0952523800004430">dogs could see only yellow and blue</a>.</p>
<p>In the back of our eyeballs, human beings’ retinas contain three types of special cone-shaped cells that are responsible for all the colors we can see. When scientists used a technique called electroretinography to measure the way dogs’ eyes react to light, they found that <a href="https://doi.org/10.1017/S0952523800003291">canines have fewer kinds of these cone cells</a>. Compared to people’s three kinds, dogs only have two types of cone receptors.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344635/original/file-20200629-155334-1ktj47u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Light travels to the back of the eyeball, where it registers with rod and cone cells that send visual signals on to the brain.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/eye-anatomy-rod-cells-and-cone-cells-royalty-free-illustration/1091261988">iStock/Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<p>Not only can dogs see fewer colors than we do, they probably don’t see as clearly as we do either. Tests show that both the structure and function of the dog eye lead them to <a href="https://ucdavis.pure.elsevier.com/en/publications/vision-in-dogs">see things at a distance as more blurry</a>. While we think of perfect vision in humans as being 20/20, typical vision in dogs is probably closer to 20/75. This means that what a person with normal vision could see from 75 feet away, a dog would need to be just 20 feet away to see as clearly. Since dogs don’t read the newspaper, their visual acuity probably doesn’t interfere with their way of life.</p>
<p>There’s likely a lot of difference in visual ability between breeds. Over the years, breeders have selected sight-hunting dogs like greyhounds to have better vision than dogs like bulldogs.</p>
<p>But that’s not the end of the story. While people have a tough time seeing clearly in dim light, scientists believe dogs can probably see as well at dusk or dawn as they can in the bright middle of the day. This is because, compared to humans’, dog retinas have a <a href="https://ucdavis.pure.elsevier.com/en/publications/vision-in-dogs">higher percentage of another kind of visual receptor</a>. Called rod cells because of their shape, they function better in low light than cone cells do.</p>
<p>Dogs also have a reflective tissue layer at the back of their eyes that <a href="https://doi.org/10.3758/s13423-017-1404-7">helps them see in less light</a>. This mirror-like tapetum lucidum collects and concentrates the available light to help them see when it’s dark. The tapetum lucidum is what gives dogs and other mammals that glowing eye reflection when caught in your headlights at night or when you try to take a flash photo.</p>
<p>Dogs share their type of vision with many other animals, <a href="https://www.hillspet.com/cat-care/behavior-appearance/cat-vision">including cats</a> <a href="https://doi.org/10.1017/S0952523800003291">and foxes</a>. Scientists think it’s important for these hunters to be able to detect the motion of their nocturnal prey, and that’s why their vision <a href="https://www.theguardian.com/science/2016/aug/03/did-t-rex-make-your-dog-colour-blind">evolved in this way</a>. As many mammals developed the ability to forage and hunt in twilight or dark conditions, they <a href="https://doi.org/10.1016/j.devcel.2016.05.023">gave up the ability to see the variety of colors</a> that most birds, reptiles and primates have. People didn’t evolve to be active all night, so we kept the color vision and better visual acuity. </p>
<p>Before you feel sorry that dogs aren’t able to see all the colors of the rainbow, keep in mind that some of their other senses are much more developed than yours. They can <a href="https://www.akc.org/expert-advice/lifestyle/sounds-only-dogs-can-hear/">hear higher-pitched sounds from farther away</a>, and their <a href="https://www.pbs.org/wgbh/nova/article/dogs-sense-of-smell/">noses are much more powerful</a>.</p>
<p>Even though Sparky might not be able to easily see that orange toy in the grass, he can certainly smell it and find it easily when he wants to. </p>
<hr>
<p><em>Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to <a href="mailto:curiouskidsus@theconversation.com">CuriousKidsUS@theconversation.com</a>. Please tell us your name, age and the city where you live.</em></p>
<p><em>And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.</em></p><img src="https://counter.theconversation.com/content/131438/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nancy Dreschel does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Your faithful friend’s view of the world is different than yours, but maybe not in the way you imagine.Nancy Dreschel, Associate Teaching Professor of Small Animal Science, Penn StateLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1235062019-12-09T13:43:33Z2019-12-09T13:43:33ZWhat makes wine dry? It’s easy to taste, but much harder to measure<figure><img src="https://images.theconversation.com/files/303844/original/file-20191126-112517-1ctucpx.jpg?ixlib=rb-1.1.0&rect=786%2C0%2C2781%2C1752&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A lot of of chemistry and physics are behind how you perceive a sip of wine.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/pouring-wine-647742124">GANNA MARTYSHEVA/Shutterstock.com</a></span></figcaption></figure><p>When you take a sip of wine at a family meal or celebration, what do you notice?</p>
<p>First, you probably note the visual characteristics: the color is generally red, rosé or white. Next, you smell the aromatic compounds wafting up from your glass.</p>
<p>And then there’s the sensation in your mouth when you taste it. White wine and rosé are usually described as refreshing, because they have brisk acidity and little to moderate sweetness. Those <a href="https://www.winemag.com/2017/09/21/why-calling-a-wine-dry-or-sweet-can-be-simply-confusing/">low levels of sugar</a> may lead you to perceive these wines as “dry.”</p>
<p>People also describe wines as dry when alcohol levels are high, usually over about 13%, mostly because the ethanol leads to hot or burning sensations that <a href="https://doi.org/10.1021/acs.jafc.6b03767">cover up other sensations</a>, especially sweetness. People also perceive red wines as dry or astringent because they contain a class of molecules called polyphenols. </p>
<p><a href="https://www.scopus.com/authid/detail.uri?authorId=55360215200">As an enologist</a> – a wine scientist – I’m interested in how all the chemistry in a glass of wine adds up to this perception of dryness. People are good at evaluating a wine’s dryness with their senses. Can we eventually come up with a way to automatically assess this dryness or astringency without relying on human tasters?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/303845/original/file-20191126-112522-dmwsr6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Molecules in grapes give them their various properties.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ripe-red-wine-grape-ready-harvest-705572797">barmalini/Shutterstock.com</a></span>
</figcaption>
</figure>
<h2>The chemistry at the vineyard</h2>
<p>Everything starts with the grapes. If you taste a mature grape skin or seed at harvest, it will seem dry or astringent to you, thanks to a number of chemical compounds it contains.</p>
<p>Large molecules called condensed <a href="https://www.wineaustralia.com/getmedia/df422991-82ed-4125-b0f7-8395a63d438f/201005-tannin-management-in-the-vineyard.pdf">tannins</a> are mostly responsible for the astringency perception. These compounds are made up of varying types and numbers of <a href="https://doi.org/10.1021/bk-2002-0825.ch015">smaller chemical units called flavanols</a>. Tannins are in the same family of molecules, the polyphenols, that give grapes their red or black color. They tend to be larger in grape skins than in grape seeds, and consequently the skins tend to be more astringent, while the seeds are more bitter.</p>
<p><a href="https://doi.org/10.1021/bk-2002-0825.ch015">Grape varieties differ in how much</a> of each of these compounds they contain. In <em>Vitis vinifera</em> cultivars, like Pinot noir and Cabernet sauvignon, the tannin concentration varies from a relatively high 1 to 1.5 mg/berry. In cold-hardy hybrid grapes found in the Midwestern United States, <a href="https://doi.org/10.3390/fermentation3030047">like Frontenac and Marquette</a>, the concentrations are much lower, ranging from 0.3 to 0.7 mg/berry.</p>
<p><a href="https://www.wineaustralia.com/getmedia/df422991-82ed-4125-b0f7-8395a63d438f/201005-tannin-management-in-the-vineyard.pdf">Factors in the vineyard</a> – including site, soil qualities and amount of sun – affect the final concentration of tannins in the fruit.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/303614/original/file-20191126-84262-htad7o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Extracting tannins from red wines in the lab to characterize their chemical structure.</span>
<span class="attribution"><span class="source">Aude Watrelot</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>The chemistry in your mouth</h2>
<p>Basically, the more tannin there is in a wine, the more astringent it will be.</p>
<p>When you take a sip, the large tannin molecules <a href="https://doi.org/10.1016/j.tifs.2014.08.001">interact with proteins from your saliva</a>. They combine and form complexes, reducing the number of salivary proteins available to help lubricate your mouth. This leaves your mouth with a dry sensation – much as a snail would dry out if it lost its mucus layer.</p>
<p>Because everyone has a different composition and concentration of saliva proteins, and because the flow rate of saliva as you bring wine into your mouth varies, your perceptions of an astringent or dry wine won’t be the same as those of your friends or family. The alcohol level, pH and <a href="https://doi.org/10.1016/j.aca.2011.12.042">aroma of the wine</a> also influence how intensely and for how long you perceive a red wine’s dryness.</p>
<p>Since wine dryness is a perception, the most appropriate tool to appraise it is sensory evaluation. It requires panelists trained to assess wine aroma, taste and mouthfeel using prepared standards and reference wines.</p>
<p>But winemakers would love to have a quick, simple way to objectively measure astringency without relying on human tasters. That way, they could easily compare this year’s wine to last year’s, or to another wine that is not available to be tested.</p>
<h2>Can we scientifically evaluate dryness?</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=787&fit=crop&dpr=1 600w, https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=787&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=787&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=989&fit=crop&dpr=1 754w, https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=989&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/303846/original/file-20191126-112522-wmdoz4.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=989&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Part of the apparatus the author and Tonya Kuhl used at UC Davis to measure the friction between two surfaces.</span>
<span class="attribution"><span class="source">Aude Watrelot</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The challenge for me and my colleagues was to <a href="https://doi.org/10.1021/acs.jafc.9b01480">see if we could match up</a> the quantified chemical <a href="https://doi.org/10.1016/j.foodres.2018.09.043">and physical properties</a> in a wine to the trained panelists’ perceptions.</p>
<p>First, we used analytical methods to figure out the different sizes of tannins present in particular wines, and their concentrations. We investigated how these tannins interacted and formed complexes with standard salivary proteins.</p>
<p>My collaborators and I also used a physical approach, relying on a piece of equipment with two surfaces that are able to mimic and measure the forces of friction that occur in a drinker’s mouth between the tongue and the palate as wine and saliva interact. The friction forces increase between drier surfaces and decrease between more lubricated surfaces.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=630&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=630&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=630&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=791&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=791&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304111/original/file-20191127-112484-xas3ab.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=791&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Researchers at Iowa State University’s Sensory Evaluation Lab passing wines to trained volunteers so they can report how dry they found particular wines.</span>
<span class="attribution"><span class="source">Aude Watrelot</span></span>
</figcaption>
</figure>
<p>Then, we trained human panelists to evaluate the intensity of dryness in the same wines and in a wine containing no tannins. </p>
<p>People perceived the wine containing the higher concentration of larger tannins as drier for a longer time than the wine without tannins. That made sense based on what we already knew about these compounds and how people sense them.</p>
<p>We were surprised, though, by our physical measurements in the lab, because they gave the opposite result from our human tasters’ perceptions. When wines contained larger or more abundant tannins, we recorded lower friction forces than in wines low in tannins. Based on the mechanical surfaces test, it seemed like there would be less dry mouthfeel than we’d expect in high-tannin wines. </p>
<p>My colleagues and I are planning to investigate this unexpected result in future research to improve our understanding of the dryness perception.</p>
<p>All these chemical and physical variables are part of what makes drinking wine a richly personal and ever-changing experience. Considering the impact of astringency on how individuals perceive a particular wine, a quick measure could be very helpful to winemakers as they do their work. So far, we haven’t been able to create a simple scale that tells a winemaker that tannins at one certain level match up with a particular dryness perception. But we enologists are still trying.</p>
<p>[ <em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/123506/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Aude Watrelot has previously received funding from the American Vineyard Foundation.</span></em></p>Researchers would like to find a way to relate the human perception of dryness to the chemical and physical properties of the wine.Aude Watrelot, Assistant Professor of Enology, Iowa State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/850662017-12-05T04:07:42Z2017-12-05T04:07:42ZA new collaborative approach to investigate what happens in the brain when it makes a decision<figure><img src="https://images.theconversation.com/files/197377/original/file-20171202-5392-1edrpfm.jpg?ixlib=rb-1.1.0&rect=1319%2C238%2C2973%2C2330&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What's going on in there when you decide?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businesswoman-making-decision-360687236">Sergey Nivens/Shutterstock.com</a></span></figcaption></figure><p>Decisions span a vast range of complexity. There are really simple ones: Do I want an apple or a piece of cake with my lunch? Then there are much more complicated ones: Which car should I buy, or which career should I choose?</p>
<p>Neuroscientists like me have identified some of the individual parts of the brain that contribute to making decisions like these. Different areas <a href="https://doi.org/10.1038/nature12077">process sounds</a>, <a href="https://doi.org/10.1523/JNEUROSCI.0105-17.2017">sights</a> or pertinent <a href="https://doi.org/10.7554/eLife.05457">prior knowledge</a>. But understanding how these individual players work together as a team is still a challenge, not only in understanding decision-making, but for the whole field of neuroscience.</p>
<p>Part of the reason is that until now, neuroscience has operated in a traditional science research model: Individual labs work on their own, usually focusing on one or a few brain areas. That makes it challenging for any researcher to interpret data collected by another lab, because we all have slight differences in how we run experiments.</p>
<p>Neuroscientists who study decision-making set up all kinds of different games for animals to play, for example, and we collect data on what goes on in the brain when the animal makes a move. When everyone has a different experimental setup and methodology, we can’t determine whether the results from another lab are a clue about something interesting that’s actually going on in the brain or merely a byproduct of equipment differences.</p>
<p><a href="https://www.braininitiative.nih.gov/">The BRAIN Initiative</a>, which the Obama administration launched in 2013, started to encourage the kind of collaboration that neuroscience needs. I just think it hasn’t gone far enough. So I co-founded a project called the <a href="https://www.internationalbrainlab.com/">International Brain Laboratory</a> – a virtual mega-laboratory composed of many labs at different institutions – to show that the proverb “alone we go fast, together we go far” holds true for neuroscience. The first question the collaboration is tackling focuses on decision-making by the brain.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/193460/original/file-20171106-1046-ehjqn2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">We know a lot, but not enough, about how the cogs all fit together.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/p_revagar/28777007826">Piyushgiri Revagar</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>The brain’s decision team</h2>
<p>Individual neuroscience labs have already uncovered a lot about how particular brain areas contribute to decision-making.</p>
<p>Say you’re choosing between an apple or a piece of cake to go with lunch. First, you need to know that apples and cake are the two options. That requires action from brain areas that process sensory information – your eyes see the apple’s bright red skin, while your nose takes in the sweet smell of cake.</p>
<p>Those sensory areas often connect to what we call association areas. Researchers have traditionally thought they play a role in <a href="https://doi.org/10.1038/nn.3865">putting different pieces of information</a> together. By collating information from the eyes, the ears and so on, the association areas may give a more coherent, <a href="https://doi.org/10.1038/nature14066">big-picture view</a> of what’s happening in the world. </p>
<p>And why choose one action over another? That’s a question for the brain’s <a href="https://doi.org/10.1016/j.conb.2008.08.003">reward circuitry</a>, which is critical in <a href="https://doi.org/10.1038/nrn2357">weighing the value of different options</a>. You know that the cake will taste sweetly delicious now, but you might regret it when you’re heading to the gym later.</p>
<p>Then, there’s the frontal cortex, which is believed to play a <a href="https://doi.org/10.1038/35036228">role in controlling voluntary action</a>. Research suggests it’s involved in committing to a particular action once enough incoming information has arrived. It’s the part of the brain that might tell you the piece of cake smells so good that it’s worth all of the calories.</p>
<p>Understanding how these different brain areas typically work together to make decisions could help with understanding what happens in diseased brains. Patients with disorders such as autism, schizophrenia and Parkinson’s disease often use sensory information in an unusual way, especially if it’s complex and uncertain. Research on decision-making may also inform treatment of patients with other disorders, such as substance abuse and addiction. Indeed, <a href="https://archives.drugabuse.gov/NIDA_Notes/NNVol18N4/DirRepVol18N4.html">addiction is perhaps a prime example</a> of how decision-making can go very wrong.</p>
<h2>A lab collaborative spread around the world</h2>
<p>Right now, neuroscientists are taking lots of closeup snapshots of what happens in particular areas of the brain when it makes a decision. But they aren’t coordinating with each other much, so these closeup pieces don’t fit together to give us the big picture of decision-making that we need. </p>
<p>That’s why a team of us joined up to form the International Brain Laboratory. With support from the International Neuroinformatics Coordinating Facility, the Wellcome Trust, and the Simons Foundation (also a funder of The Conversation US), we aim to create that big picture by designing one large-scale experiment that uses the exact same approach to study many different brain areas. Because the brain is so complex, we need the expertise of many different labs that each specialize in particular brain areas. But we need them to coordinate and use the same approach so that we can put all of their different pieces of the picture together. </p>
<p>We’re bringing together a team of 21 scientists who will work very closely to understand how the millions of neurons in a single brain work together to make decisions. About a dozen different labs will each do part of one big experiment by measuring neuron activity in animals engaged in exactly the same game. Our team members will record activity from hundreds of neurons in each animal’s brain. We’ll collect tens of thousands of neuronal recordings that we can analyze together.</p>
<h2>Keep it simple</h2>
<p>In real-world decisions, you’re combining lots of different pieces of information – your sensory signals, your internal knowledge about what’s rewarding, what’s risky. But implementing that in a laboratory context is pretty hard.</p>
<p>We’re hoping to recreate a mouse’s natural foraging experience. In real life, there are many different paths an animal can take as it navigates the world looking for something to eat. It wants to find food, because food is rewarding. It uses incoming sensory cues, like, “Oh, I see a cricket over there!” An animal might combine that with a memory of reward, like, “I know this area has lush berry bushes, I remember that from yesterday, so I’ll go there.” Or, “I know over here there was a cat last time, so I’d better avoid that area.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=525&fit=crop&dpr=1 754w, https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=525&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/189663/original/file-20171010-17462-7i2day.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=525&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Imagining the world from a mouse’s perspective is essential for International Brain Laboratory scientists when picking a lab task that mimics a real-world decision.</span>
<span class="attribution"><span class="source">Elena Nikanorovna</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>At first pass, the setup we’re using for the International Brain Laboratory doesn’t look very natural at all. The mouse has a little device that it uses to report decisions – it’s actually a wheel from a Lego set. For example, it might learn that when it sees an image of a vertical grating and turns the wheel until the image is centered, it gets a reward. If you think about what foraging is – exploring the environment, trying to find rewards, making use of sensory signals and prior knowledge – this simple Lego wheel activity does capture its essence.</p>
<p>We really had to think about the trade-off between having a behavior that was complex enough to give us insight into interesting neural computations, and one that was simple enough that it could be implemented in the same way in many different experimental laboratories. The balance we struck was a decision-making task that starts simple and becomes more and more complex as an individual animal achieves different stages of training. </p>
<p>Even in the simplest, very earliest stage we’re looking at, where the animals are just making voluntary movements, they’re deciding when to make a movement to harvest a reward. I’m sure we can go much further, but even if that’s as far as we get, having neural measurements from all over the brain during a simple behavior like this will be very interesting. We don’t know how it happens in the brain that you decide when to take a particular action and how to execute that action. Having neural measurements from all over the brain of what happened just before the animal spontaneously decided to go and get a reward will be a huge step forward.</p><img src="https://counter.theconversation.com/content/85066/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anne Churchland receives funding from NIH, Simons Foundation, The Office of Naval Research, the Pew Trusts and the Klingenstein-SImons Foundation. </span></em></p>A new initiative called the International Brain Laboratory is tackling this fundamental mystery of neuroscience in an unusual way.Anne Churchland, Associate Professor of Neuroscience, Cold Spring Harbor LaboratoryLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/861722017-10-31T14:21:28Z2017-10-31T14:21:28ZCan you train yourself to develop ‘super senses’?<figure><img src="https://images.theconversation.com/files/192486/original/file-20171030-18693-nt624r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Did I just hear 'danger'...or 'container'?</span> <span class="attribution"><span class="source">Kues/Shutterstock</span></span></figcaption></figure><p>Wouldn’t it be great to be able to hear what people whispered behind your back? Or to read the bus timetable from across the street? We all differ dramatically in our perceptual abilities – for all our senses. But do we have to accept what we’ve got when it comes to sensory perception? Or can we actually do something to improve it?</p>
<p>Differences in perceptual ability are most obvious for the more valued senses – hearing and vision. But some people have enhanced abilities for the other senses too. For example, there are “<a href="http://www.sciencedirect.com/science/article/pii/0031938494903611">supertasters</a>” among us mere mortals who perceive stronger tastes from various sweet and bitter substances (a trait linked with a <a href="http://www.sciencedirect.com/science/article/pii/0031938494903611">greater number of taste receptors</a> on the tip of the tongue). It’s not all good news for the supertasters though – they also perceive more burn from oral irritants like alcohol and chilli.</p>
<p>Women have been shown to be <a href="https://www.ncbi.nlm.nih.gov/pubmed/20016091">better at feeling touch than men</a>. Interestingly, this turns out not to really be a gender thing at all, but rather down to having smaller fingers. This means touch receptors that are more closely packed together, and therefore the possibility for perception at a finer resolution. Thus, if a man and woman have the same sized fingers, they will have equivalent touch perception.</p>
<h2>Perceptual learning</h2>
<p>The sensory receptors on our body largely set a limit on what we can perceive. However, this is not the end of the story. Our perception is much more malleable than you might expect. The scientific field of “<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3821996/">perceptual learning</a>” is helping us to understand perception and, therefore, how we can enhance it.</p>
<p>This research reveals that, in the same way we can train to improve skills such as sports or languages, <a href="http://psycnet.apa.org/record/1969-35014-000">we can train to improve what we can see, hear, feel, taste and smell</a>. In a typical sensory training session, the trainee is presented with a range of sensory stimuli that vary in how easy they are to perceive. Taking touch as an example, these might be bursts of vibrations on the fingerpads that vary in frequency (how fast they pulse).</p>
<p>The trainee usually has to make a judgement about the two stimuli, such as whether they are the same or different. Typically, this <a href="http://www.jstor.org/stable/1419876">starts with easy comparisons</a> (very different stimuli) and gets successively harder. Feedback on whether a response is correct or not <a href="http://www.sciencedirect.com/science/article/pii/S0042698997000436">significantly improves learning</a>, as it allows people to match what they see/feel with the properties of the actual stimuli.</p>
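<p>For readers who like to see the mechanics, here is a minimal Python sketch of such a training procedure, using a common “2-down/1-up” adaptive staircase. The starting difference, step sizes and simulated trainee are invented for illustration and aren’t drawn from any particular study.</p>
<pre><code>import random

def staircase(true_threshold=5.0, start_diff=40.0, trials=60):
    """Simulate 2-down/1-up adaptive training on a 'same or different?'
    vibration task: the frequency difference (in Hz) shrinks after two
    consecutive correct answers and grows after every error."""
    diff, streak = start_diff, 0
    for _ in range(trials):
        # A simulated trainee: more likely to answer correctly when the
        # difference is large relative to their hidden threshold.
        p_correct = min(0.99, 0.5 + 0.5 * diff / (diff + true_threshold))
        if random.random() < p_correct:   # correct response
            streak += 1
            if streak == 2:               # two in a row: make it harder
                diff, streak = diff * 0.8, 0
        else:                             # error: make it easier
            diff, streak = diff * 1.25, 0
    return diff  # ends up hovering near the trainee's threshold

print(f"estimated threshold: {staircase():.1f} Hz")
</code></pre>
<p>A staircase like this delivers the “starts easy, gets successively harder” progression automatically, because the task difficulty tracks the trainee’s own performance from trial to trial.</p>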
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=288&fit=crop&dpr=1 600w, https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=288&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=288&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=361&fit=crop&dpr=1 754w, https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=361&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/192734/original/file-20171031-18683-8rfimv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=361&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">FMRI scanner.</span>
<span class="attribution"><span class="source">John Cairns/Oxford University</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>It was long thought that you could only improve your perception by this explicit training, but it is also possible to boost perception <a href="http://www.sciencedirect.com/science/article/pii/S1364661305001506">without actively doing anything</a> or even realising it is happening. <a href="http://science.sciencemag.org/content/334/6061/1413">In one incredible example</a>, scientists trained participants in a brain scanner to generate a pattern of brain activity matching what would be seen if they were looking at particular visual stimuli. They gave them feedback on how well they were generating this pattern – a process known as “<a href="https://en.wikipedia.org/wiki/Neurofeedback">neurofeedback</a>”. </p>
<p>By the end of training, participants were asked to identify various visual stimuli including the one they had “seen” in training. It turned out they were faster and more accurate in reporting the stimulus from the training despite having not physically seen it. Talk about inception.</p>
<h2>Dramatic results</h2>
<p>But how much can we expect our senses to improve? That largely depends on how long and hard you train, and how effective your training is. It can be substantial: in our studies, touch training has produced <a href="http://jn.physiology.org/content/115/3/1088">improvements of up to about 42%</a> of participants’ original acuity, from just two hours of training. What is surprising is that some studies report enhancements of perception into a range beyond what the sensory receptors should allow – into the “<a href="https://en.wikipedia.org/wiki/Hyperacuity_(scientific_term)">hyperacuity</a>” range.</p>
<p>For example, in vision, people are actually able to <a href="http://www.jstor.org/stable/pdf/2877128.pdf?seq=1#page_scan_tab_contents">see at a finer resolution than the spacing between individual receptors</a> in the eye. You can think about this in terms of pixels in a photo – the more pixels you have, the more detail you can see. In the case of hyperacuity, people can see better than the pixel resolution should permit (with similar findings across the senses, including <a href="https://www.ncbi.nlm.nih.gov/pubmed/23516304">touch</a> and <a href="https://www.nature.com/nature/journal/v410/n6829/abs/410686a0.html">audition</a>).</p>
<p>So how on Earth can this occur? It’s due to <a href="https://www.ncbi.nlm.nih.gov/pubmed/961819">clever processing in the brain</a>: our brains look across the whole grid of receptors to determine where the “centre of gravity” of the image falls – revealing position and shape by the spatial clustering of information on the grid. In fact, a surprising amount of perception turns out to be determined less by the receptor organ than by the brain.</p>
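<p>A toy Python example makes the “centre of gravity” idea concrete: even with receptors spaced a full unit apart, pooling their graded responses recovers a stimulus position to a small fraction of that spacing. The blur width and noise level here are arbitrary choices for illustration.</p>
<pre><code>import numpy as np

# A one-dimensional "retina": receptors spaced 1.0 unit apart.
positions = np.arange(0.0, 20.0, 1.0)
true_spot = 9.37                      # falls between two receptors

# Optics blur the point of light across several receptors (Gaussian),
# and each receptor adds a little noise of its own.
responses = np.exp(-0.5 * ((positions - true_spot) / 1.5) ** 2)
responses += np.random.normal(0.0, 0.01, positions.size)

# Centre of gravity of the responses: much finer than the spacing.
estimate = (positions * responses).sum() / responses.sum()
print(f"true position {true_spot:.2f}, estimate {estimate:.2f}")
</code></pre>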
<p>For instance, training your vision to improve does not do anything to alter the photoreceptors in your eye. While all the same sensory information is getting into the system through these receptors, the training <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1756-8765.2009.01044.x/full">allows the brain</a> to filter out noise and more effectively “tune into” the sensory signal.</p>
<p>Another piece of evidence that learning can’t be happening at the level of sensory receptors is that sensory learning <em>spreads</em>. For instance, if you train perception to improve on one finger of the hand, this learning <a href="http://psycnet.apa.org/record/2013-25329-001">miraculously spreads to other fingers</a> that are <a href="http://www.pnas.org/content/96/13/7587.short">linked in the brain</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/192631/original/file-20171031-18683-8r2oow.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Could brain training make up for vision loss?</span>
<span class="attribution"><span class="source">Tyler Olson/Shutterstock</span></span>
</figcaption>
</figure>
<p>The fact that we can train our brains to improve the way we extract sensory information from the world really is good news for all of us. Not least because our sensory perception <a href="https://www.ncbi.nlm.nih.gov/pubmed/7571941">declines as we age</a>.</p>
<p>On the upside, savvy tech developers and scientists alike have been hard at work franchising this idea – using concepts of perceptual learning to create brain training apps. These apps cannot <em>overcome</em> the problems of sensory degradation caused by faulty or ageing receptors (and some are ineffective or based on dubious science). However, if designed correctly, they can give you a significant boost. There is even some evidence that such sensory training programmes can translate to real world benefits, such as <a href="http://www.sciencedirect.com/science/article/pii/S0960982214000050">visual training boosting baseball performance</a>.</p>
<p>Some are already available on the web, such as <a href="https://ultimeyesvision.com/">UltimEyes</a> – an app designed by perceptual learning researchers at the University of California, Riverside. They also have an <a href="https://experiment.com/projects/can-brain-training-help-soldiers-with-brain-injury-regain-hearing">auditory training prototype</a> in crowdfunding, and <a href="http://www.sciencedirect.com/science/article/pii/S0960982217311788">other groups</a> are following suit. Maybe soon we will have the power to modify our own sensory perception in the palm of our hand (well, in the phone in the palm of our hand).</p>
<p>With rapid scientific progress we move towards fantastic opportunities to maximise the function of our senses, aid rehabilitation for people who’ve experienced sensory loss and just generally become more awesome.</p>
<p class="fine-print"><em><span>Harriet Dempsey-Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We can see at a finer resolution than the spacing between individual photo-receptors in the eye – and it’s all down to our brains.Harriet Dempsey-Jones, Postdoctoral Researcher in Clinical Neurosciences, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/574102016-07-27T02:05:42Z2016-07-27T02:05:42ZGambling on limited information: our visual system and probabilistic inference<figure><img src="https://images.theconversation.com/files/131990/original/image-20160726-7041-1ygv854.jpg?ixlib=rb-1.1.0&rect=218%2C336%2C5170%2C3119&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What makes your brain go all-in on what it thinks you're seeing?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=101012161">Chips image via www.shutterstock.com.</a></span></figcaption></figure><p>Imagine walking along in the African savanna. Suddenly you notice a moving bush partially obscuring a large yellow object. From this limited information, you need to figure out if you’re in danger and decide how to react. Is it a pile of dry grass? Or a hungry lion?</p>
<p>In situations like this, our brains must use complex and uncertain visual information to make split-second decisions. The inferences and subsequent decisions we make based on what we see can be the difference between responding appropriately to a threat and becoming a lion’s next meal.</p>
<p>Traditionally, neuroscientists have thought about visual information processing as a chain of events that happen one after another, filtering the input signal (from the eyes) that changes over space and time. But more recently, we’ve started to think of the process as much more dynamic and interactive. As the visual system tries to resolve uncertainty in the sensory information it receives, it uses both prior knowledge and current evidence to make informed guesses about what’s going on.</p>
<h2>Visual system: much more than eyes</h2>
<p>The eyes are of course crucial for how we see what’s happening around us. But the bulk of the intensively studied human visual system lies within the brain.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=720&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=720&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=720&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=905&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=905&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131993/original/image-20160726-7028-kc0sw1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=905&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The eyes collect the data that get fed into the visual processing parts of the brain.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=101012161">Visual system image via www.shutterstock.com.</a></span>
</figcaption>
</figure>
<p>The retinas at the back of your eyes contain photoreceptors that sense and respond to light in the environment. These photoreceptors, in turn, activate neurons which transmit information to the visual cortex of the brain, located at the back of your head. The visual cortex then processes the raw data so we can make decisions about how to respond and behave appropriately based on the original input to the eyes.</p>
<p>The visual cortex is organized in an anatomical and functional hierarchy. Each stage is distinct from every other one, both in terms of its microscopic anatomy and its functional role and physiology – that is, how the neurons respond to different stimuli.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=835&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=835&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=835&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1049&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1049&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131994/original/image-20160726-7045-ceclyr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1049&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Old way: information funnels higher and higher through the visual cortex and a precise picture emerges at the top.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=395039689">Pyramid image via www.shutterstock.com.</a></span>
</figcaption>
</figure>
<p>Traditionally, researchers thought this hierarchy filtered the information in sequence, stage by stage, from bottom to top. They believed each processing level of the visual brain passes upward a more refined form of the visual signal it received from the lower levels. For instance, at one stage of the hierarchy, high-contrast edges are extracted from the scene in order to form boundaries for shapes and objects later on.</p>
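<p>As a rough illustration of this classical stage-by-stage picture, the sketch below pulls high-contrast edges out of an image with a simple local-difference filter. It is a caricature of early visual filtering, not a model of real neurons.</p>
<pre><code>import numpy as np

def edge_map(image):
    """Respond wherever neighbouring intensities differ sharply --
    a cartoon of the 'edge extraction' stage of the visual hierarchy."""
    gx = np.abs(np.diff(image, axis=1))   # horizontal contrast
    gy = np.abs(np.diff(image, axis=0))   # vertical contrast
    return gx[:-1, :] + gy[:, :-1]        # combined edge strength

# A dark square on a bright background: responses trace its outline.
img = np.ones((8, 8))
img[2:6, 2:6] = 0.0
print(edge_map(img).round(1))
</code></pre>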
<p>The original thinking held that, in the end, the highest levels of the visual cortex would contain in their patterns of neuron activity a meaningful representation of the world that we could then act upon. But several more recent developments in neuroscience have turned this view on its head.</p>
<p>The world – and therefore, the visual environment – is highly uncertain from moment to moment. Furthermore, we know <a href="http://doi.org/10.1016/j.tics.2005.04.010">from many studies</a> that the capacity of the visual brain is strikingly limited. The brain relies on processes such as <a href="https://theconversation.com/how-do-our-brains-reconstruct-the-visual-world-49276">visual attention and visual memory</a> to help it efficiently make use of these limited resources.</p>
<p>So how exactly does the brain navigate effectively in a highly uncertain environment with a limited amount of information? The answer is, it plays the odds and gambles. </p>
<h2>Taking a chance on best guesstimates</h2>
<p>The brain needs to use limited inputs of ambiguous and variable information to make an informed guess at what is happening in its surroundings. If these guesses are accurate, they can form the basis of good decisions.</p>
<p>In order to do this, the brain essentially gambles on the subset of information it has. Based on a small sliver of sensory information, it bets on what the world is telling it in order to get the best payoff behaviorally.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131777/original/image-20160725-31190-1hzo42f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Is that what I think it is?</span>
<span class="attribution"><span class="source">Maggie Villiger</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Consider the example of the moving bush in the savanna. You see a blurry, large yellow object obscured by the bush. Did this object cause the bush to move? What is the yellow blob? Is it a threat?</p>
<p>These questions are relevant to what we choose to do next in terms of our behavior. Using the limited visual information (moving bush, large yellow object) in an effective way is behaviorally important. If we infer that the yellow object is indeed a lion or some other predator, we may decide to move quickly in the opposite direction.</p>
<p>Inference can be defined as a conclusion based on both evidence and reasoning. In this instance, the inference (that’s a lion) is based on both evidence (large yellow object, moving bush) and reasoning (lions are large and present in the savanna). Neuroscientists think of <a href="http://doi.org/10.1146/annurev-neuro-071013-014017">probabilistic inference as a computation</a> involving the combination of prior information and current evidence.</p>
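<p>That combination is exactly what Bayes’ rule computes. A short worked example – with made-up probabilities – shows how even a rare event like a lion can quickly become a serious possibility once the evidence arrives:</p>
<pre><code># All probabilities are invented for illustration.
p_lion = 0.02                # prior: lions are rare here
p_blob_if_lion = 0.90        # lions usually look like big yellow blobs
p_blob_if_grass = 0.10       # dry grass occasionally does too

p_blob = p_blob_if_lion * p_lion + p_blob_if_grass * (1 - p_lion)
p_lion_if_blob = p_blob_if_lion * p_lion / p_blob
print(f"P(lion | yellow blob) = {p_lion_if_blob:.2f}")   # about 0.16
</code></pre>
<p>The evidence raises the probability of a lion roughly eightfold. And because the cost of ignoring a real lion vastly outweighs the cost of fleeing from a pile of grass, even this modest posterior can justify running.</p>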
<h2>Two-way brain connections</h2>
<p>Neuroanatomical and neurophysiological evidence over the past two decades has shown that the hierarchy in the visual cortex contains large numbers of connections <a href="http://doi.org/10.1016/S0959-4388(98)80042-1">going from lower to higher and higher to lower</a> at each and every level. Rather than information making its way through an inverted funnel, getting refined as it goes higher and higher, it seems like the visual system is more an interactive hierarchy. It apparently works to resolve the uncertainty inherent in the world through a constant feedback and feed-forward cycle. This allows the <a href="http://doi.org/10.1364/JOSAA.20.001434">combination of <em>bottom-up</em> current evidence and <em>top-down</em> prior information</a> at all levels of the hierarchy. </p>
<p>The anatomical and physiological evidence indicating a more interconnected visual brain is nicely complemented by behavioral experiments. On a range of visual tasks – <a href="https://escholarship.org/uc/item/0qr3185s#page-1">recognizing objects</a>, <a href="http://www.sciencedirect.com/science/article/pii/S0042698910002348">searching for a particular object among irrelevant objects</a> and <a href="http://csjarchive.cogsci.rpi.edu/proceedings/2011/papers/0039/paper0039.pdf">remembering briefly presented visual information</a> – human beings perform in line with expectations generated from the rules of probabilistic inference. Our behavioral predictions based on an assumption that probabilistic inference underlies these capacities correspond nicely to the actual experimental data.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=500&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=500&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131787/original/image-20160725-24908-1ovztg3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=500&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Natural selection means individuals who bet on the wrong visual idea might not have made it too long.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/84780407@N04/11396865265">Ali Moradmand</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Informed guesses, minimizing error</h2>
<p>Neuroscientists have suggested that the brain has evolved, through natural selection, to actively minimize the moment-to-moment disparity between what is perceived and what is expected. Minimizing this discrepancy necessarily involves using probabilistic inference to predict aspects of the incoming information based on prior knowledge of the world. Neuroscientists have named this process <a href="https://global.oup.com/academic/product/the-predictive-mind-9780199682737?cc=us&lang=en">predictive coding</a>.</p>
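<p>At its simplest, the logic of predictive coding can be caricatured in a few lines of Python: an internal estimate is repeatedly nudged by the prediction error until the input stops being surprising. The signal and learning rate are arbitrary, and real cortical circuits are vastly more elaborate.</p>
<pre><code>sensory_input = 10.0    # the actual state of the world
estimate = 0.0          # the brain's initial prediction
learning_rate = 0.3     # arbitrary step size

for step in range(8):
    error = sensory_input - estimate    # prediction error ("surprise")
    estimate += learning_rate * error   # adjust prediction to reduce it
    print(f"step {step}: estimate {estimate:5.2f}, error {error:5.2f}")
</code></pre>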
<p>Much of the data supporting the predictive coding approach have come from studying the visual system. However, researchers are now <a href="http://doi.org/10.1007/978-1-4939-2236-9_11">starting to generalize the idea</a> and apply it to other aspects of information processing in the brain. This approach has yielded many potential future directions for modern neuroscience, including understanding the relationship between <a href="http://doi.org/10.1177/107385840100700605">low-level responses of individual neurons and higher-level neuronal dynamics</a> (such as the group activity recorded in an electroencephalogram or EEG).</p>
<p>While the idea that perception is a process of inference is <a href="https://en.wikipedia.org/wiki/Unconscious_inference">not new</a>, modern neuroscience has revitalized it in recent years – and it’s changed the field dramatically. Furthermore, the approach promises to increase our understanding of information processing not just for visual information, but all forms of sensory information as well as higher level processes such as decision making, memory and conscious thought.</p>
<p class="fine-print"><em><span>Alex Burmester does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How does your brain deal with the ambiguous and variable visual information your eyes collect? Neuroscientists think it bets on what’s the most likely version of reality.Alex Burmester, Research Associate in Perception and Memory, New York UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/523682016-01-05T11:09:42Z2016-01-05T11:09:42ZSilicon soul: the vain dream of electronic immortality<figure><img src="https://images.theconversation.com/files/105932/original/image-20151215-23166-1593knw.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Our bodies set some requirements for what it would mean to upload our brains.</span> <span class="attribution"><span class="source">Nicolas Rougier</span>, <span class="license">Author provided</span></span></figcaption></figure><blockquote>
<p>In just over 30 years, humans will be able to upload their entire minds to computers and become digitally immortal. – Ray Kurzweil, Global Futures 2045 International Congress (2013)</p>
</blockquote>
<p>Without even considering the ethical, philosophical, social or legal scope of such a statement, it’s important to consider if it actually makes any sense. To try to give an educated guess, we have to move away from computer science and look at what biology and neuroscience can teach us.</p>
<h2>The sensible world</h2>
<p>In his book <a href="https://mitpress.mit.edu/books/being-there">Being There: Putting Brain, Body and World Together Again</a>, Andy Clark explains that:</p>
<blockquote>
<p>The biological mind is, first and foremost, an organ for controlling the biological body. Minds make motions, and they must make them fast – before the predator catches you, or before your prey gets away from you. Minds are not disembodied logical reasoning devices.</p>
</blockquote>
<p>To better understand this assertion, it’s essential to know that our bodies are literally covered with sensors – chemical, mechanical, visual, thermal, proprioceptive (perception of the body), nociceptive (perception of pain). All of them inform the brain about the outside world (exteroception) and the inner world (interoception), allowing it to regulate the body. The majority of our brain is actually dedicated to the processing of sensory information, and the largest part of that is devoted to visual information, occupying the entire occipital lobe and large parts of the temporal and parietal lobes. We are mostly visual beings who, incidentally, think.</p>
<p>If someone wants to “upload his entire mind to a computer,” the problem of sensors must be solved. A quick and dirty solution could be to not worry about them and just pretend that all sensory neurons would remain silent forever. However, 50 years ago, <a href="https://en.wikipedia.org/wiki/Donald_O._Hebb">Donald O. Hebb</a> conducted a series of experiments to study the effects of <a href="http://psycnet.apa.org/psycinfo/1958-00206-001">sensory deprivation</a>. He generously paid students to recline 24/7, taking care to deprive them of most of their senses using glasses, helmets, gloves and so on. The majority of the students abandoned the experiment after two or three days because they were no longer able to develop coherent thoughts and began to suffer from auditory and visual hallucinations. The experiments evoked considerable interest from the CIA (which financed the original study), and the agency later “improved” the process to the point where it became an instrument of <a href="http://original.antiwar.com/engelhardt/2009/06/07/pioneers-of-torture/">psychological torture</a>.</p>
<h2>The body electric</h2>
<p>Consequently, if we want to “upload our brain” without going insane, it’s imperative for the uploaded brain to be connected to an artificial body that can perceive the outside world and act on it. But what kind of artificial body do we have today? Robotic bodies where retinas are replaced by cameras and muscles by motors? To some extent, yes, but this solution would be only a pale replica, far from the complexity and intelligence of the human body, as nicely explained in Rolf Pfeifer and Josh Bongard’s book <a href="https://mitpress.mit.edu/books/how-body-shapes-way-we-think">How the Body Shapes the Way We Think</a>.</p>
<p>During childhood, a brain learns through experience to control its body and to leverage its specifics. For example, consider the fingertips, which are sufficiently soft and sensitive that we can easily grasp small objects. There’s no need for the brain to send a precise command – the intelligence is in the body itself. Imagine trying to do the same thing with thimbles over each finger, and you will understand how your body automatically solves a number of problems all by itself.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=267&fit=crop&dpr=1 600w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=267&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=267&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=336&fit=crop&dpr=1 754w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=336&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=336&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Diagram of the retina, in its sensory complexity.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Retina-diagram.svg">Cajal</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>What about the artificial eyes we’d need? Even though high-resolution cameras now exist, the eyes we’re born with each have approximately five million cones and 100 million rods, plus several “preprocessing” stages performed by the horizontal, bipolar, amacrine and ganglion cells. We are indeed very far from being able to reproduce a full artificial retina, even though some <a href="http://www.institut-vision.org">amazing research</a> in Paris has succeeded in helping the vision-impaired see again.</p>
<p>As a first step, we could therefore use those simplified robotic bodies with their reduced sensory and motor skills. Would it affect our minds? Yes. Our cognition depends on the interactions we have with the world, and this interaction is conveyed by both our perceptions and our actions. If you change them, you also change the sensory experience of the world as well as its underlying logic. Cognition is embodied.</p>
<p><em>We further explore this question in the <a href="https://theconversation.com/why-youll-never-be-able-to-upload-your-brain-to-the-cloud-52408">second part</a> of this article.</em></p>
<p class="fine-print"><em><span>Nicolas P. Rougier ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>Uploading one’s mind to a computer in order to attain digital immortality has long been the fantasy of geeks and billionaires. So what’s stopping us?Nicolas P. Rougier, Chargé de Recherche, InriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/455372015-08-21T09:44:17Z2015-08-21T09:44:17ZEvery song has a color – and an emotion – attached to it<figure><img src="https://images.theconversation.com/files/92462/original/image-20150819-10847-mjtthu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The electronic band STS9 is known for having intoxicating light shows accompany their live performances.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/shannon_tompkins/9551462104/in/photolist-fy2JV7-fy2J4E-fy2Dum-h7QjrK-abUdxq-fDcTJc-eeyTjP-6ziVer-h7QwhE-fvMejG-h7QP55-h7QG8b-h7QuFU-h7Qu1p-E4dL6-4c55P-6zo1QQ-6ziXqi-6zo3cd-6zo2VS-6zo2Ed-6zo2tu-fxf1Q4-oYhkzV-oYgCW3-fDMJN3-6zo1Gf-4BstKB-7DxkVh-7DxkUq-7DxkTG-7DtxaM-7DxkRo-7DxkQN-5irtg8-6ziWfv-6ziVAH-6zo1oq-6ziXER-6ziY4X-5ivKcA-6ziUDa-7DxkWb-6ziUNH-DFaTd-h7R2Bj-h7QHZX-h7S8TV-h7S8Vt-h7QHAa">Shannon Tompkins/flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>Imagine yourself as a graphic designer for New Age musician <a href="https://www.youtube.com/watch?v=2zkjQVh5KmQ">Enya</a>, tasked with creating her next album cover. Which two or three colors from the grid below do you think would “go best” with her music?</p>
<p>Would they be the same ones you’d pick for an album cover or music video for the heavy metal band <a href="https://www.youtube.com/watch?v=xnKhsTXoKCI">Metallica</a>? Probably not. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=575&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=575&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=575&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=723&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=723&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92161/original/image-20150817-5083-1wjtbhd.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=723&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>For years, my collaborators and I have been studying <a href="http://www.pnas.org/content/110/22/8836">music-to-color associations</a>. From our results, it’s clear that emotion plays a crucial role in how we interpret and respond to any number of external stimuli, including colors and songs. </p>
<h2>The colors of songs</h2>
<p>In one study, we asked 30 people to listen to four music clips, and simply choose the colors that “went best” with the music they were hearing from a 37-color array. </p>
<p>In fact, you can listen to the clips yourself. Think about which two to three colors from the grid you would choose that “go best” with each selection. </p>
<p><audio preload="metadata" controls="controls" data-duration="52" data-image="" data-title="Selection A" data-size="1645336" data-source="" data-source-url="" data-license="" data-license-url="">
<source src="https://cdn.theconversation.com/audio/193/a-bach-major-fast-short.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Selection A.
</div></p>
<p><audio preload="metadata" controls="controls" data-duration="51" data-image="" data-title="Selection B" data-size="1637864" data-source="" data-source-url="" data-license="" data-license-url="">
<source src="https://cdn.theconversation.com/audio/194/b-bach-minor-slow-short.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Selection B.
</div></p>
<p><audio preload="metadata" controls="controls" data-duration="15" data-image="" data-title="Selection C" data-size="2646090" data-source="" data-source-url="" data-license="" data-license-url="">
<source src="https://cdn.theconversation.com/audio/195/c-classic-rock.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Selection C.
</div></p>
<p><audio preload="metadata" controls="controls" data-duration="15" data-image="" data-title="Selection D" data-size="2658932" data-source="" data-source-url="" data-license="" data-license-url="">
<source src="https://cdn.theconversation.com/audio/196/d-piano.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Selection D.
</div></p>
<p>The image below shows the participants’ first-choice colors to the four musical selections provided above. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=123&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=123&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=123&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=155&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=155&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92463/original/image-20150819-10863-141u1ja.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=155&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Selection A, from Bach’s Brandenburg Concerto Number 2, caused most people to pick colors that were bright, vivid and dominated by yellows. Selection B, a different section of the very same Bach concerto, caused participants to pick colors that are noticeably darker, grayer and bluer. Selection C was an excerpt from a 1990s rock song, and it caused participants to choose reds, blacks and other dark colors. Meanwhile, selection D, a slow, quiet, “easy listening” piano piece, elicited selections dominated by muted, grayish colors in various shades of blue.</p>
<h2>The mediating role of emotion</h2>
<p>But why do music and colors match up in this particular way? </p>
<p>We believe that it’s because music and color have common emotional qualities. Certainly, most music conveys emotion. In the four clips you just heard, selection A “sounds” happy and strong, while B sounds sad and weak. C sounds angry and strong, and D sounds sad and calm. (Why this might be the case is something we’ll explore later.)</p>
<p><a href="http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=1995-08699-001">If colors have similar emotional associations</a>, people should be able to match colors and songs that contain overlapping emotional qualities. They may not know that they’re doing this, but the results corroborate this idea. </p>
<p><a href="http://www.pnas.org/content/110/22/8836">We’ve tested our theory</a> by having people rate each musical selection and each color on five emotional dimensions: happy to sad, angry to calm, lively to dreary, active to passive, and strong to weak. </p>
<p>We compared the results and found that they were almost perfectly aligned: the happiest-sounding music elicited the happiest-looking colors (bright, vivid, yellowish ones), while the saddest-sounding music elicited the saddest-looking colors (dark, grayish, bluish ones). Meanwhile, the angriest-sounding music elicited the angriest-looking colors (dark, vivid, reddish ones). </p>
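<p>One way to picture this mediation account is as nearest-neighbour matching in a shared emotional space: rate every song and every color on the same dimensions, then pair each song with the closest color. The Python sketch below does exactly that; the stimulus names and ratings are invented for illustration and are not our experimental data.</p>
<pre><code>import math

# Invented ratings on three shared emotional dimensions,
# each scaled from -1 to 1: (happy-sad, angry-calm, strong-weak).
music = {"fast major-key Bach": (0.8, -0.6, 0.7),
         "slow minor-key Bach": (-0.7, -0.4, -0.5),
         "1990s rock": (-0.2, 0.9, 0.8)}
colors = {"vivid yellow": (0.9, -0.5, 0.6),
          "dark gray-blue": (-0.8, -0.3, -0.4),
          "dark red": (-0.1, 0.8, 0.7)}

def best_color(piece):
    """Choose the color whose emotional profile lies nearest the music's."""
    return min(colors, key=lambda c: math.dist(music[piece], colors[c]))

for piece in music:
    print(piece, "-&gt;", best_color(piece))
</code></pre>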
<p>To study possible cultural differences, we repeated the very same experiment in Mexico. To our surprise, the Mexican and US results were virtually identical, which suggests that music-to-color associations might be universal. (We’re currently testing this possibility in cultures, such as Turkey and India, where the traditional music differs more radically from Western music.)</p>
<p>These results support the idea that music-to-color associations in most people are indeed mediated by emotion.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=300&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=300&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=300&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=377&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=377&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92183/original/image-20150817-25727-1gknyud.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=377&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The album cover designers for Enya’s Shepherd Moons and Metallica’s Master of Puppets may have subconsciously chosen colors that matched the emotional qualities of the respective artists’ music.</span>
</figcaption>
</figure>
<h2>People who actually see colors when listening to music</h2>
<p>There’s a small minority of people – maybe one in 3,000 – who have even stronger connections between music and colors. They are called chromesthetes, and they spontaneously “see” colors as they listen to music. </p>
<p>For example, a clip from the 2009 film The Soloist shows the complex, internally generated “light show” that the lead character – a chromesthetic street musician – might have experienced while listening to Beethoven’s Third Symphony.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/PTLdTP-gJeA?wmode=transparent&start=50" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Chromesthesia is just one form of a more general condition called <a href="https://mitpress.mit.edu/books/wednesday-indigo-blue">synesthesia</a>, in which certain individuals experience incoming sensory information both in the appropriate sensory dimension and in some other, seemingly inappropriate, sensory dimension. </p>
<p>The most common form of synesthesia is <a href="http://otherthings.com/uw/syn/">letter-to-color synesthesia</a>, in which the synesthete experiences color when viewing black letters and digits. There are many other forms of synesthesia, including chromesthesia, that affect a surprising number of different sensory domains. </p>
<p><a href="http://cbc.ucsd.edu/pdf/Synaesthesia%20-%20JCS.pdf">Some theories</a> propose that synesthesia is caused by direct connections between different sensory areas of the brain. <a href="http://www.ncbi.nlm.nih.gov/pubmed/21038232">Other theories</a> propose that synesthesia is related to brain areas that produce emotional responses. </p>
<p>The former theory implies little or no role for emotion in determining the colors that chromesthetes experience, whereas the latter theory implies a strong role for emotion. </p>
<p>Which theory is correct? </p>
<p>To find out, we repeated the music-color association experiment with 11 chromesthetes and 11 otherwise similar non-chromesthetes. The non-chromesthetes chose the colors that “went best” with the music (as described above), but the chromesthetes chose the colors that were “most similar to the colors they experienced while listening to the music.” </p>
<p>The left side of the image below shows the first choices of the synesthetes and non-synesthetes for fast-paced classical music in a major key (like selection A), which tends to sound happy and strong. The right side shows the color responses for slow-paced classical music in a minor key (like selection B), which tends to sound sad and weak. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=414&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=414&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=414&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=520&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=520&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92173/original/image-20150817-5127-1bphakg.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=520&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The color choices of synesthetes and non-synesthetes after listening to fast, major key music and slow, minor key music.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The color experiences of chromesthetes (Figure B) turned out to be remarkably like the colors that non-chromesthetes chose as going best with the same music (Figure A). </p>
<p>But we mainly wanted to know how the non-chromesthetes and chromesthetes would compare in terms of emotional effects. The results are depicted in Figure C.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92176/original/image-20150817-5117-1jodo3r.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Interestingly, the emotional effects for chromesthetes were as strong as those for non-chromesthetes on some dimensions (happy/sad, active/passive and strong/weak), but weaker on others (calm/agitated and angry/not-angry).</p>
<p>The fact that chromesthetes exhibit emotional effects at all suggests that music-to-color synesthesia depends, at least in part, on neural connections that include emotion-related circuits in the brain. That they’re decidedly weaker in chromesthetes than non-chromesthetes for some emotions further suggests that chromesthetic experiences also depend on direct, <em>non-emotional connections</em> between the auditory and visual cortex. </p>
<h2>Musical anthropomorphism</h2>
<p>The fact that music-to-color associations are so strongly influenced by emotion raises further questions. For example, why is it that fast, loud, high-pitched music “sounds” angry, whereas slow, quiet, low-pitched music “sounds” calm? </p>
<p>We don’t know the answers yet, but one intriguing possibility is what we like to call “musical anthropomorphism” – the idea that sounds are emotionally interpreted as being analogous to the behavior of people. </p>
<p>For example, faster, louder, high-pitched music might be perceived as angry because people tend to move and speak more quickly and raise their voices in pitch and volume when they’re angry, while doing the opposite when they’re calm. Why music in a major key sounds happier than music in a minor key, however, remains a mystery. </p>
<p>Artists and graphic designers can certainly use these results when they’re creating light shows for concerts or album covers for bands – so that “listening” to music can become richer and more vivid by “seeing” and “feeling” it as well.</p>
<p>But on a deeper level, it’s fascinating to see how effective and efficient the brain is at coming up with abstract associations. </p>
<p>To find connections between different perceptual events – such as music and color – our brains try to find commonalities. Emotions emerge dramatically because so much of our inner lives are bound up with them. They are central not only to how we interpret incoming information, but also to how we respond to it. </p>
<p>Given the myriad connections from perceptions to emotions and from emotions to actions, it seems quite natural that emotions emerge so strongly – and perhaps unconsciously – in finding the best colors for a song.</p>
<p class="fine-print"><em><span>Stephen Palmer receives funding from the National Science Foundation that has, in part, supported this research.</span></em></p><p class="fine-print"><em><span>Karen B Schloss does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Why do certain songs and colors make us feel a certain way?Stephen Palmer, Professor of the Graduate School, University of California, BerkeleyKaren B Schloss, Assistant Professor of Research, Brown UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/408952015-05-19T20:02:58Z2015-05-19T20:02:58ZSome people with bipolar struggle to communicate – and here’s why<figure><img src="https://images.theconversation.com/files/82137/original/image-20150519-25403-11wsp8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Accurate perception of emotional information is crucial for social communication.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/matusfi/6810854382/">Matus Laslofi/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Every day we’re confronted with information that stimulates many of our senses at the same time, but we don’t perceive this information in its component parts. Rather, we perceive it as a whole without being conscious of doing so. But people with bipolar disorder struggle with this integration process, and this might make it hard for them to communicate.</p>
<p>Think about an explosion: the sight of the fireball might be sufficient to signal that an explosion has occurred, but we will know that it definitely has if we also hear a loud bang, smell smoke and feel heat from the fire.</p>
<p>It’s the integration of all these different kinds of sensory information that enables us to experience the world around us in all its glory. And this integration is vital if we are to understand other people.</p>
<h2>Working together</h2>
<p>We are inherently social beings and the emotional expressions of the people we encounter reach us through different sensory channels. Accurate perception of this emotional information is crucial for social communication.</p>
<p>To communicate effectively, we need to appreciate the meaning behind other people’s words, decipher the tones and inflections with which they are spoken, and relate these to the speaker’s facial expressions and body language. More importantly, we need to integrate the information quickly so our perceptions are coherent.</p>
<p>Most of the time, the information we see and hear has the same meaning. In some cases, perception from only one of the senses may be enough for emotional understanding. But cues from only one sense can often be ambiguous. </p>
<p>It would be difficult for us to tell that what we are encountering is an explosion, for instance, based on the smell of burning alone. And because of this occasional ambiguity, <a href="http://www.tandfonline.com/doi/abs/10.1080/026999300378824">integration of information from different senses</a> is vital for helping us better judge incoming information, and key to understanding others.</p>
<p>Sensory integration is particularly important when input to one of the senses is distorted, such as when someone’s voice is muffled. It can then be difficult to be certain what the speaker is thinking and feeling, but coupling the muffled voice with information from their face can help us figure it out.</p>
<p>Integration is clearly handy any time we communicate with other people. </p>
<h2>A curious difference</h2>
<p>But some people face a lasting communication disadvantage that can affect them socially. <a href="http://www.ncbi.nlm.nih.gov/pubmed/20667057">An increasing number of studies</a> show that in some people with psychiatric conditions such <a href="http://www.ncbi.nlm.nih.gov/pubmed/11240572">as bipolar disorder</a>, interpersonal functioning is compromised. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=332&fit=crop&dpr=1 600w, https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=332&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=332&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=417&fit=crop&dpr=1 754w, https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=417&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/82135/original/image-20150519-25428-y45mw8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=417&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The sight of the fireball might be sufficient to signal that an explosion has occurred, but we will know that it definitely has if our other senses are also stimulated.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/puppydogbites/2629895369/">Adam Howarth/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>It’s not clear why this is, but we think it may be related to potential difficulties in integrating different kinds of social information. Just as intact sensory integration can help improve our understanding of others, abnormal integration can impair it. Research shows at least some people with bipolar disorder have difficulty <a href="http://www.ncbi.nlm.nih.gov/pubmed/24423084">recognising other people’s facial</a> and <a href="http://www.ncbi.nlm.nih.gov/pubmed/24012143">vocal emotional expressions</a>.</p>
<p>A <a href="http://www.swinburne.edu.au/health-arts-design/staff-profiles/view.php?who=srossell">colleague</a> and I wondered whether, beyond these seemingly separate impairments, the way the systems governing these different emotion-perception skills interact might also be disrupted in people with bipolar disorder. We <a href="http://www.ncbi.nlm.nih.gov/pubmed/24725656">tested the idea</a> by looking at how people with the disorder integrate emotional signals.</p>
<p>We asked a group of people with bipolar disorder and a control group without it to quickly identify a series of facial expressions while ignoring sentences spoken in an emotional tone that played at the same time.</p>
<p>Sometimes, the auditory information matched the visual information – a happy face paired with a happy voice, for instance – and sometimes it didn’t. Because conflicting information is likely to interfere with how well a stimulus is recognised, we expected sensory integration would be best reflected in quicker responses when the visual and auditory information were the same.</p>
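<p>To make this design concrete, here is a minimal sketch in Python of how such congruent and incongruent face–voice trials could be assembled. The emotion labels and trial counts are invented for illustration only; they are not the actual materials or code used in the study.</p>
<pre><code># Hypothetical sketch of a face-voice congruence design
# (illustrative only -- not the study's actual stimuli or code).
import itertools
import random

EMOTIONS = ["happy", "sad", "angry", "fearful"]  # invented label set

def build_trials(n_repeats=10, seed=1):
    """Pair every facial expression with every vocal tone.

    A trial is congruent when face and voice carry the same emotion,
    and incongruent otherwise.
    """
    pairs = list(itertools.product(EMOTIONS, EMOTIONS))
    trials = [{"face": f, "voice": v, "congruent": f == v} for f, v in pairs]
    trials = trials * n_repeats          # repeat each pairing several times
    random.Random(seed).shuffle(trials)  # present in random order
    return trials

trials = build_trials()
print(sum(t["congruent"] for t in trials), "congruent trials of", len(trials))
</code></pre>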
<h2>What we found</h2>
<p>We found that people without bipolar disorder showed this effect, but the bipolar group failed to integrate emotional signals to the same extent. Whether the audio and visual information matched made no difference to the responses of the bipolar group. They were, in fact, consistently and significantly slower than the control group at recognising facial emotional expressions. </p>
<p>In effect, the bipolar group didn’t show the usual information-processing boost that occurs when information of the same meaning is presented to different senses. This suggests that for people with bipolar disorder, meaningful information from different sensory channels isn’t integrated in the usual way.</p>
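<p>In reaction-time terms, that boost can be expressed as a simple difference score: mean response time on incongruent trials minus mean response time on congruent trials. The Python sketch below, with invented numbers purely for illustration, shows how such a facilitation score might be computed for each group.</p>
<pre><code># Hypothetical facilitation score: mean reaction time (RT) on incongruent
# trials minus mean RT on congruent trials. A positive value means a
# matching voice sped up face recognition. All numbers are invented.
from statistics import mean

def facilitation(rt_congruent_ms, rt_incongruent_ms):
    """Return the congruency benefit in milliseconds."""
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

control = facilitation([612, 598, 630], [671, 655, 690])  # invented data
bipolar = facilitation([748, 760, 755], [755, 750, 760])  # invented data

print(f"control group facilitation: {control:.0f} ms")  # a clear benefit
print(f"bipolar group facilitation: {bipolar:.0f} ms")  # little or none
</code></pre>
<p>Note that in this invented example the bipolar group is also slower overall, mirroring the pattern we observed: the absence of a congruency benefit sits on top of generally longer response times.</p>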
<p>It seems that when people with bipolar disorder encounter ambiguous facial expressions, related auditory information is unlikely to help them identify the underlying emotional state of the person with whom they’re communicating. This could mean that the brain regions that process information from the senses do not communicate well in people who have this disorder. </p>
<p>The idea needs further investigation but, regardless of underlying causes, our results highlight a potential reason why some people with bipolar disorder may experience interpersonal difficulties.</p>
<p class="fine-print"><em><span>Tamsyn Van Rheenen currently receives funding from the NHMRC and the Barbara Dicker Brain Sciences Foundation. She has received funding from the Helen McPherson Smith Trust and Swinburne University in the past.</span></em></p>Social communication requires us to integrate information from all our senses. But it seems the systems that govern different emotion perception skills may be impaired in people with bipolar disorder.Tamsyn Van Rheenen, NHMRC Early Career Research Fellow, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.