Optics – The Conversation (updated 2023-07-12)
<h1>A new, thin-lensed telescope design could far surpass James Webb – goodbye mirrors, hello diffractive lenses</h1>
<figure><img src="https://images.theconversation.com/files/536371/original/file-20230707-21-kxopc5.jpeg?ixlib=rb-1.1.0&rect=44%2C44%2C1209%2C599&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A light, cheap space telescope design would make it possible to put many individual units in space at once.</span> <span class="attribution"><span class="source">Katie Yung, Daniel Apai /University of Arizona and AllThingsSpace /SketchFab</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Astronomers have discovered more than <a href="https://exoplanets.nasa.gov/discovery/exoplanet-catalog/">5,000 planets outside of the solar system</a> to date. The grand question is whether <a href="https://theconversation.com/to-search-for-alien-life-astronomers-will-look-for-clues-in-the-atmospheres-of-distant-planets-and-the-james-webb-space-telescope-just-proved-its-possible-to-do-so-184828">any of these planets are home to life</a>. To find the answer, astronomers will likely need <a href="https://nap.nationalacademies.org/catalog/26141/pathways-to-discovery-in-astronomy-and-astrophysics-for-the-2020s">more powerful telescopes</a> than exist today.</p>
<p>I am an <a href="https://scholar.google.com/citations?user=2SCIYjIAAAAJ&hl=en&oi=ao">astronomer who studies astrobiology</a> and planets around distant stars. For the last seven years, I have been co-leading a team that is developing a new kind of space telescope that could collect a hundred times more light than the <a href="https://theconversation.com/the-most-powerful-space-telescope-ever-built-will-look-back-in-time-to-the-dark-ages-of-the-universe-169603">James Webb Space Telescope</a>, the biggest space telescope ever built.</p>
<p>Almost all space telescopes, including Hubble and Webb, collect light using mirrors. Our proposed telescope, the <a href="https://nautilus-array.space/">Nautilus Space Observatory</a>, would replace large, heavy mirrors with a novel, thin lens that is much lighter, cheaper and easier to produce than mirrored telescopes. Because of these differences, it would be possible to launch many individual units into orbit and create a powerful network of telescopes.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A blue planet with clouds." src="https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536355/original/file-20230707-21-3gvtcx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Exoplanets, like TOI-700d shown in this artist’s conception, are planets beyond our solar system and are prime candidates in the search for life.</span>
<span class="attribution"><a class="source" href="https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA23408_hires.jpg">NASA's Goddard Space Flight Center</a></span>
</figcaption>
</figure>
<h2>The need for larger telescopes</h2>
<p>Exoplanets – planets that orbit stars other than the Sun – are prime targets in the search for life. Astronomers need to use giant space telescopes that collect huge amounts of light to <a href="https://exoplanets.nasa.gov/discovery/missions/#first-planetary-disk-observed">study these faint and faraway objects</a>. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A massive circular gold mirror with people standing in the foreground." src="https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=899&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=899&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=899&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1130&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1130&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536356/original/file-20230707-23-pdn1e5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1130&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The James Webb Space Telescope is just barely able to search exoplanets for signs of life.</span>
<span class="attribution"><a class="source" href="http://jwst.nasa.gov/multimedia.html">NASA</a></span>
</figcaption>
</figure>
<p>Existing telescopes can detect exoplanets as small as Earth. However, it takes a lot more sensitivity to begin to learn about the chemical composition of these planets. Even Webb is just barely powerful enough to search <a href="https://doi.org/10.3847/1538-3881/ab21e0">certain exoplanets for clues of life</a> – namely <a href="https://theconversation.com/to-search-for-alien-life-astronomers-will-look-for-clues-in-the-atmospheres-of-distant-planets-and-the-james-webb-space-telescope-just-proved-its-possible-to-do-so-184828">gases in the atmosphere</a>. </p>
<p>The James Webb Space Telescope cost more than <a href="https://www.gao.gov/products/gao-18-273">US$8 billion and took over 20 years to build</a>. The next flagship telescope is not expected to fly before 2045 and is estimated to <a href="https://www.science.org/content/article/nasa-unveils-initial-plan-multibillion-dollar-telescope-find-life-alien-worlds">cost $11 billion</a>. These ambitious projects are always expensive and laborious, and each produces a single powerful – but very specialized – observatory.</p>
<h2>A new kind of telescope</h2>
<p>In 2016, aerospace giant <a href="https://www.northropgrumman.com">Northrop Grumman</a> invited me and 14 other professors and NASA scientists – all experts on exoplanets and the search for extraterrestrial life – to Los Angeles to answer one question: What will exoplanet space telescopes look like in 50 years?</p>
<p>In our discussions, we realized that a major bottleneck preventing the construction of more powerful telescopes is the challenge of making larger mirrors and getting them into orbit. To bypass this bottleneck, a few of us came up with the idea of revisiting an old technology called diffractive lenses. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A cross section of two lenses, with the one on the left showing a jagged surface and the one on the right a rounded surface." src="https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=897&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=897&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=897&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1127&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1127&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536361/original/file-20230707-29-i85svw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1127&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Diffractive lenses, left, are much thinner compared to similarly powerful refractive lenses, right.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Fresnel_lens#/media/File:Fresnel_lens.svg">Pko/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Conventional lenses use refraction to focus light. <a href="https://theconversation.com/can-rainbows-form-in-a-circle-fun-facts-on-the-physics-of-rainbows-202952">Refraction is when light changes direction</a> as it passes from one medium to another – it is the reason light bends when it enters water. In contrast, diffraction is when light bends around corners and obstacles. A cleverly arranged pattern of steps and angles on a glass surface can form a diffractive lens. </p>
<p>The first such lenses were invented by the French scientist Augustin-Jean Fresnel in 1819 to provide lightweight lenses for <a href="https://wwnorton.com/books/9780393350890">lighthouses</a>. Today, similar diffractive lenses can be found in many small-sized consumer optics – from <a href="https://global.canon/en/v-square/34.html">camera lenses</a> to <a href="https://doi.org/10.1889/1.2206112">virtual reality headsets</a>. </p>
<p>Thin, simple diffractive lenses are <a href="http://cplire.ru:8080/2902/1/OGRW_2014_Proceedings.pdf#page=77">notorious for their blurry images</a>, so they have never been used in astronomical observatories. But if you could improve their clarity, using diffractive lenses instead of mirrors or refractive lenses would allow a space telescope to be much cheaper, lighter and larger.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A person holding a round, thin piece of glass." src="https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=389&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=389&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=389&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=488&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=488&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536359/original/file-20230707-17-kdihhg.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=488&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">One of the benefits of diffractive lenses is that they can remain thin while increasing in diameter.</span>
<span class="attribution"><span class="source">Daniel Apai/University of Arizona</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>A thin, high-resolution lens</h2>
<p>After the meeting, I returned to the University of Arizona and decided to explore whether modern technology could produce diffractive lenses with better image quality. Lucky for me, <a href="https://profiles.arizona.edu/person/milster">Thomas Milster</a> – one of the world’s leading experts on diffractive lens design – works in the building next to mine. We formed a team and got to work.</p>
<p>Over the following two years, our team invented a new type of diffractive lens that required new manufacturing technologies to etch a complex pattern of tiny grooves onto a piece of clear glass or plastic. The specific pattern and shape of the cuts focuses incoming light to a single point behind the lens. The new design produces a <a href="https://doi.org/10.1364/OSAC.410187">near-perfect quality image</a>, far better than previous diffractive lenses. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A triangular piece of glass with subtle etchings reflecting in the light." src="https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536358/original/file-20230707-25-gj9ryc.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A diffractive lens bends light using etchings and patterns on its surface.</span>
<span class="attribution"><span class="source">Daniel Apai/University of Arizona</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Because it is the surface texture of the lens that does the focusing, not the thickness, you can easily make the lens bigger while <a href="https://doi.org/10.1364/FIO.2020.JTu7A.1">keeping it very thin and lightweight</a>. Bigger lenses collect more light, and low weight means <a href="https://doi.org/10.3847/1538-3881/ab2631">cheaper launches to orbit</a> – both great traits for a space telescope.</p>
<p>In August 2018, our team produced the first prototype, a 2-inch (5-centimeter) diameter lens. Over the next five years, we further improved the image quality and increased the size. We are now completing a 10-inch (24-cm) diameter lens that will be more than 10 times lighter than a conventional refractive lens would be.</p>
<h2>Power of a diffraction space telescope</h2>
<p>This new lens design makes it possible to rethink how a space telescope might be built. In 2019, our team published a concept called the <a href="https://doi.org/10.3847/1538-3881/ab2631">Nautilus Space Observatory</a>. </p>
<p>Using the new technology, our team thinks it is possible to build a 29.5-foot (8.5-meter) diameter lens that would be only about 0.2 inches (0.5 cm) thick. The lens and support structure of our new telescope could weigh around 1,100 pounds (500 kilograms) – less than a third of the weight of a similarly sized Webb-style mirror – while the lens itself would be bigger than Webb’s 21-foot (6.5-meter) diameter mirror. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A spherical object in space with a lens on one side." src="https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536353/original/file-20230707-21-pbljxz.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The thin lens allowed the team to design a lighter, cheaper telescope, which they named the Nautilus Space Observatory.</span>
<span class="attribution"><span class="source">Daniel Apai/University of Arizona</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The lenses have other benefits, too. First, they are <a href="https://doi.org/10.1117/12.2633573">much easier and quicker</a> <a href="https://theconversation.com/how-do-you-build-a-mirror-for-one-of-the-worlds-biggest-telescopes-49927">to fabricate than mirrors</a> and can be made en masse. Second, lens-based telescopes work well even when not aligned perfectly, making these telescopes easier to <a href="https://doi.org/10.1117/12.2633760">assemble</a> and fly in space than mirror-based telescopes, which require extremely precise alignment.</p>
<p>Finally, since a single Nautilus unit would be light and relatively cheap to produce, it would be possible to put dozens of them into orbit. Our current design is in fact not a single telescope, but a constellation of 35 individual telescope units.</p>
<p>Each individual telescope would be an independent, highly sensitive observatory able to collect more light than Webb. But the real power of Nautilus would come from turning all the individual telescopes toward a single target. </p>
<p>By combining data from all the units, Nautilus’ light-collecting power would equal a telescope nearly 10 times larger than Webb. With this powerful telescope, astronomers could search hundreds of exoplanets for atmospheric gases that may <a href="https://theconversation.com/to-search-for-alien-life-astronomers-will-look-for-clues-in-the-atmospheres-of-distant-planets-and-the-james-webb-space-telescope-just-proved-its-possible-to-do-so-184828">indicate extraterrestrial life</a>.</p>
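These figures can be checked with a back-of-the-envelope calculation. The sketch below treats each aperture as an unobstructed circle – a simplification on my part; Webb’s segmented mirror has a somewhat smaller effective collecting area, which is why the equivalent-diameter estimate here lands a bit below the article’s “nearly 10 times” figure.

```python
import math

def collecting_area(diameter_m):
    """Light-collecting area of a circular aperture, ignoring obstructions."""
    return math.pi * (diameter_m / 2) ** 2

webb = collecting_area(6.5)   # Webb's 6.5 m mirror, treated as a full circle
unit = collecting_area(8.5)   # one 8.5 m Nautilus lens
array = 35 * unit             # the full 35-unit constellation

print(f"One Nautilus unit vs Webb: {unit / webb:.1f}x the collecting area")
print(f"Equivalent single-aperture diameter of the array: {math.sqrt(35) * 8.5:.0f} m")
```

Even with this crude model, a single 8.5-meter unit out-collects the idealized Webb aperture, and the 35-unit array behaves like a single aperture roughly 50 meters across.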
<p>Although the Nautilus Space Observatory is still a long way from launch, our team has made a lot of progress. We have shown that all aspects of the technology work in small-scale prototypes and are now focusing on building a 3.3-foot (1-meter) diameter lens. Our next step is to send a small version of the telescope to the edge of space on a high-altitude balloon.</p>
<p>With that, we will be ready to propose a revolutionary new space telescope to NASA and, hopefully, be on the way to exploring hundreds of worlds for signatures of life.</p>
<p class="fine-print"><em><span>Daniel Apai receives funding from NASA, NSF, and the Gordon and Betty Moore Foundation. He works for The University of Arizona.</span></em></p>
<p class="fine-print"><em>Space telescopes are limited in size due to the difficulties and cost of getting into orbit. By revamping an old optical technology, researchers are working on a lightweight and thin telescope design.</em></p>
<p class="fine-print"><em>Daniel Apai, Associate Dean for Research and Professor of Astronomy and Planetary Sciences, University of Arizona. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>New nanoparticle source generates high-frequency light</h1>
<p class="fine-print"><em>2023-04-27</em></p>
<figure><img src="https://images.theconversation.com/files/523123/original/file-20230427-26-fls8hc.jpeg?ixlib=rb-1.1.0&rect=8%2C0%2C5982%2C3997&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>High-frequency light is useful. The higher the frequency of light, the shorter its wavelength – and the shorter the wavelength, the smaller the objects and details the light can be used to see.</p>
<p>So violet light can show you smaller details than red light, for example, because it has a shorter wavelength. But to see really, really small things – down to the scale of billionths of a metre, thousands of times less than the width of a human hair – to see those things, you need <em>extreme ultraviolet light</em> (and a good microscope).</p>
<p>Extreme ultraviolet light, with wavelengths between 10 and 120 nanometres, has many applications in medical imaging, studying biological objects, and deciphering the fine details of computer chips during their manufacture. However, producing small and affordable sources of this light has been very challenging.</p>
<p>We have found a way to make nanoparticles of a common semiconductor material emit light with a frequency up to seven times higher than the frequency of light sent to it. We generated blue-violet light from infrared light, and it will be possible to generate extreme ultraviolet light from red light with the same principles. Our research, carried out with colleagues from the University of Brescia, the University of Arizona and Korea University, is <a href="https://www.science.org/doi/10.1126/sciadv.adg2655">published in Science Advances</a>.</p>
<h2>The power of harmonics</h2>
<p>Our system starts out with an ordinary laser that produces long-wavelength infrared light. This is called the pump laser, and there’s nothing special about it – such lasers are commercially available, and they can be compact and affordable.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram illustrating the setup of the light-emitting system" src="https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=507&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=507&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=507&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=637&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=637&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523143/original/file-20230427-18-xja1n3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=637&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Incoming laser light hitting a nanoparticle which then emits higher frequency light.</span>
<span class="attribution"><span class="source">Zalogina et al. / Science Advances</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>But next we fire short pulses of light from this laser at a specially engineered nanoparticle of a material called aluminium gallium arsenide, and that’s where things get interesting.</p>
<p>The nanoparticle absorbs energy from the laser pulses, and then emits its own burst of light. By carefully engineering the size and shape of the nanoparticle, we can create powerful resonances to amplify certain harmonics of the emitted light.</p>
<p>What does that mean, exactly? Well, we can make a useful analogy with sound.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram showing the first seven harmonics of a guitar string." src="https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=666&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=666&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=666&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=837&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=837&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523144/original/file-20230427-28-fgl3ea.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=837&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Harmonics in a guitar string: in the fundamental frequency, the wavelength is the length of the whole string, but in the higher harmonics multiple shorter wavelengths fit within the length of the string.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Harmonic">Wikimedia / Y Landman</a></span>
</figcaption>
</figure>
<p>When you pluck a string on a guitar, it vibrates with what’s called its <em>fundamental frequency</em> – which makes the main note you hear – plus small amounts of higher frequencies called harmonics, which are multiples of the fundamental frequency. The body of the guitar is designed to produce resonances that amplify some of these harmonics and dampen others, creating the overall sound you hear.</p>
<p>Light and sound share similar physics: both are propagating waves (acoustic waves in the case of sound, and electromagnetic waves in the case of light).</p>
<figure class="align-center ">
<img alt="A close up of a hand strumming an acoustic guitar" src="https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523126/original/file-20230427-18-a14ek3.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Just as the body of a guitar dampens some frequencies and amplifies others, carefully designed nanoparticles can boost high-frequency harmonics of laser light.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>In our light source, the pump laser is like the main note of the string, and the nanoparticles are like the guitar body. Except what’s special about the nanoparticles is that they massively amplify those higher harmonics of the pump laser, producing light with a higher frequency (up to seven times higher in our case, and a wavelength correspondingly seven times shorter).</p>
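In numbers: the nth harmonic has n times the pump frequency, and therefore one-nth of its wavelength. The 3,000-nanometre pump wavelength below is an illustrative mid-infrared value chosen for this sketch, not a figure from the study.

```python
# Harmonic n of a pump has n times the frequency, so 1/n the wavelength.
# The 3,000 nm mid-infrared pump is an illustrative value, not from the article.
pump_nm = 3000

for n in (1, 3, 5, 7):
    print(f"harmonic {n}: wavelength {pump_nm / n:.0f} nm")
```

The seventh harmonic of a roughly 3,000 nm pump lands near 430 nm, which is blue-violet light.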
<h2>What it’s good for</h2>
<p>This technology allows us to create new sources of light in parts of the electromagnetic spectrum such as the extreme ultraviolet, where there are no natural sources of light and where current engineered sources are too large or too expensive.</p>
<p>Conventional microscopes using visible light can only study objects down to a size of about a ten-millionth of a metre. The resolution is limited by the wavelength of the light: violet light has a wavelength of about 400 nanometres (one nanometre is one billionth of a metre). </p>
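A standard rule of thumb (the Abbe diffraction limit, which the text alludes to without naming) puts the smallest resolvable feature near half the wavelength, which makes the appeal of shorter wavelengths concrete. The 13.5 nm figure below is the wavelength commonly used in extreme ultraviolet chip lithography, included here for comparison.

```python
# Rough Abbe diffraction limit: smallest resolvable feature ~ wavelength / (2 * NA).
# Assumes a numerical aperture near 1 - a textbook rule of thumb, not from the article.
def abbe_limit_nm(wavelength_nm, numerical_aperture=1.0):
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit_nm(400))   # violet light: about 200 nm
print(abbe_limit_nm(13.5))  # extreme ultraviolet, as used in chip lithography
```

Moving from violet light to extreme ultraviolet shrinks the resolvable scale from roughly 200 nanometres down to a handful of nanometres.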
<p>But there are plenty of applications, such as biological imaging and electronics manufacturing, where being able to see down to a billionth of a metre or so would be a huge help.</p>
<p>At present, to see at those scales you need “super-resolution” microscopy, which lets you see details smaller than the wavelength of the light you are using, or electron microscopes, which do not use light at all and instead create an image using a beam of electrons. However, such methods are quite slow and expensive.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-quantum-hack-for-microscopes-can-reveal-the-undiscovered-details-of-life-161182">A quantum hack for microscopes can reveal the undiscovered details of life</a>
</strong>
</em>
</p>
<hr>
<p>To understand the advantages of a light source like ours, consider computer chips: they are made of very tiny components with feature sizes almost as small as a billionth of a metre. During the production process, it would be useful for manufacturers to use extreme ultraviolet light to monitor the process in real time.</p>
<p>This would save resources and time on bad batches of chips. The scale of the industry is such that even a 1% increase in chip yields could save billions of dollars each year. </p>
<p>In future, nanoparticles like ours could be used to produce tiny, inexpensive sources of extreme ultraviolet light, illuminating the world of extremely small things.</p><img src="https://counter.theconversation.com/content/204618/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sergey Kruk receives funding from the Australian Research Council (DE210100679). </span></em></p><p class="fine-print"><em><span>Anastasiia Zalogina does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A new way to make high-frequency light could make it easier to look at things 10 times smaller than conventional microscopes can see.Anastasiia Zalogina, Postdoctoral researcher, Australian National UniversitySergey Kruk, ARC DECRA Fellow, Research School of Physics, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2030602023-04-04T00:57:42Z2023-04-04T00:57:42ZFamous double-slit experiment recreated in fourth dimension by physicists<figure><img src="https://images.theconversation.com/files/519143/original/file-20230403-14-9ynrlj.jpeg?ixlib=rb-1.1.0&rect=897%2C997%2C3569%2C2123&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Tobias Carlsson / Unsplash</span></span></figcaption></figure><p>More than 200 years ago, the English scientist Thomas Young carried out a famous test known as the “<a href="https://royalsocietypublishing.org/doi/10.1098/rstl.1804.0001">double-slit experiment</a>”. He shone a beam of light at a screen with two slits in it, and observed the light that passed through the apertures formed a pattern of dark and bright bands.</p>
<p>At the time, the experiment was understood to demonstrate that light was a wave. The “interference pattern” is caused by light waves passing through both slits and interfering with each other on the other side, producing bright bands where the peaks of the two waves line up and dark bands where a peak meets a trough and the two cancel out.</p>
<p>In the 20th century, physicists realised the experiment could be adapted to demonstrate that light not only behaves like a wave, but also like a particle (called a photon). In quantum mechanical theory, this particle still has wave properties – so the wave associated with even a single photon passes through both slits, and creates interference.</p>
<p>In a new twist on the classic experiment, we replaced the slits in the screen with “slits” in time – and discovered a new kind of interference pattern. Our results are <a href="https://www.nature.com/articles/s41567-023-01993-w">published today</a> in Nature Physics.</p>
<h2>Slits in time</h2>
<p>Our team, led by Riccardo Sapienza at Imperial College London, fired light through a material that changes its properties in femtoseconds (quadrillionths of a second), only allowing light to pass through at specific times in quick succession. </p>
<p>We still saw interference patterns – but instead of showing up as bands of bright and dark, they showed up as changes in the frequency or colour of the beams of light.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-is-light-a-wave-or-a-particle-162514">Curious Kids: is light a wave or a particle?</a>
</strong>
</em>
</p>
<hr>
<p>To carry out our experiment, we devised a way to switch on and off the reflectivity of a screen incredibly quickly. We had a transparent screen that became a mirror for two brief instants, creating the equivalent of two slits in time. </p>
<h2>Colour interference</h2>
<p>So what do these slits in time do to light? If we think of light as a particle, a photon sent at this screen might be reflected by the first increase of reflectivity or by the second, and reach a detector.</p>
<p>However, the wave nature of the process means the photon is in a sense reflected by both temporal slits. This creates interference, and a varying pattern of colour in the light that reaches the detector. </p>
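<p>As a toy model of this effect (with made-up numbers, not the experiment’s actual parameters), the spectrum of light passing through two brief windows in time shows fringes whose spacing is one over the separation between the slits:</p>

```python
import numpy as np

# Toy model of a double slit in time: light transmitted through two brief
# windows separated by dt interferes with itself in the frequency domain,
# producing spectral fringes spaced by 1/dt. All numbers are illustrative,
# not the experiment's actual parameters.
dt = 10e-15      # separation between the two time slits (assumed)
width = 1e-15    # duration of each slit (assumed)

t = np.linspace(-50e-15, 50e-15, 4096)
field = (np.exp(-((t - dt / 2) / width) ** 2)
         + np.exp(-((t + dt / 2) / width) ** 2))

spectrum = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))

def value_at(f_hz):
    """Spectral intensity at the frequency bin nearest f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# Bright fringe at f = 1/dt, dark fringe midway at f = 1/(2*dt)
print(value_at(1 / dt) > 100 * value_at(1 / (2 * dt)))  # True
```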
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-is-wave-particle-duality-7414">Explainer: what is wave-particle duality</a>
</strong>
</em>
</p>
<hr>
<p>The amount of change in colour is related to how fast the mirror changes its reflectivity. These changes must happen on timescales comparable with the duration of a single cycle of a light wave, which is measured in femtoseconds.</p>
<p>Electronic devices cannot function quickly enough for this. So we had to use light to switch on and off the reflectivity of our screen. </p>
<p>We took a screen of indium tin oxide, a transparent material used in mobile phone screens, and made it reflective with a brief pulse of laser light.</p>
<h2>From space to time</h2>
<p>Our experiment is a beautiful demonstration of wave physics, and also shows how we can transfer concepts such as interference from the domain of space to the domain of time.</p>
<p>The experiment has also helped us in understanding materials that can minutely control the behaviour of light in space and time. This will have applications in signal processing and perhaps even light-powered computers.</p><img src="https://counter.theconversation.com/content/203060/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stefan Maier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In an update of one of the most famous experiments in physics, scientists have used ‘slits in time’ to explore the properties of light and ultrafast optical materials.Stefan Maier, Head of School of Physics and Astronomy, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1963862022-12-29T20:56:44Z2022-12-29T20:56:44ZThe sky isn’t just blue – airglow makes it green, yellow and red too<figure><img src="https://images.theconversation.com/files/502097/original/file-20221220-6052-nvokjs.JPG?ixlib=rb-1.1.0&rect=0%2C7%2C4920%2C3268&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://eol.jsc.nasa.gov/SearchPhotos/photo.pl?mission=ISS043&roll=E&frame=143486">NASA</a></span></figcaption></figure><p>Look up on a clear sunny day and you will see a blue sky. But is this the true colour of the sky? Or is it the only colour of the sky? </p>
<p>The answers are a little complicated, but they involve the nature of light, atoms and molecules and some quirky parts of Earth’s atmosphere. And big lasers too – for science!</p>
<h2>Blue skies?</h2>
<p>So first things first: when we see a blue sky on a sunny day, what are we seeing? Are we seeing blue nitrogen or blue oxygen? The simple answer is no. Instead the blue light we see is scattered sunlight. </p>
<p>The Sun produces a broad <a href="https://science.nasa.gov/ems/09_visiblelight">spectrum of visible light</a>, which we see as white but which includes all the colours of the rainbow. When sunlight passes through the air, atoms and molecules in the atmosphere scatter blue light in all directions, far more than red light. This is called <a href="https://theconversation.com/curious-kids-why-is-the-sky-blue-and-where-does-it-start-81165">Rayleigh scattering</a>, and results in a white Sun and blue skies on clear days.</p>
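<p>Rayleigh scattering strengthens steeply as the wavelength shrinks, roughly as one over the wavelength to the fourth power. A quick back-of-the-envelope comparison of blue and red light:</p>

```python
# Rayleigh scattering intensity scales as 1 / wavelength**4, so shorter
# (bluer) wavelengths are scattered much more strongly than longer (redder) ones.
def rayleigh_ratio(wl_short_nm, wl_long_nm):
    """How much more strongly the shorter wavelength is scattered."""
    return (wl_long_nm / wl_short_nm) ** 4

print(round(rayleigh_ratio(450, 650), 1))  # blue vs red: ~4.4x more scattering
```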
<p>At sunset we can see this effect dialled up, because sunlight has to pass through more air to reach us. When the Sun is close to the horizon, almost all the blue light is scattered (or absorbed by dust), so we end up with a red Sun with bluer colours surrounding it. </p>
<p>But if all we are seeing is scattered sunlight, what is the true colour of the sky? Perhaps we can get an answer at night.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-why-is-the-sky-blue-and-where-does-it-start-81165">Curious Kids: Why is the sky blue and where does it start?</a>
</strong>
</em>
</p>
<hr>
<h2>The colour of dark skies</h2>
<p>If you look at the night sky, it is obviously dark, but it isn’t perfectly black. Yes, there are the stars, but the night sky itself glows. This isn’t light pollution, but the atmosphere glowing naturally. </p>
<p>On a dark moonless night in the countryside, away from city lights, you can see the trees and hills silhouetted against the sky. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/502094/original/file-20221220-13-trarsj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trees are silhouetted against the glowing night sky.</span>
<span class="attribution"><span class="source">Rodney Campbell / flickr</span></span>
</figcaption>
</figure>
<p>This glow, called <a href="https://theconversation.com/beautiful-green-airglow-spotted-by-aurora-hunters-but-what-is-it-68188">airglow</a>, is produced by atoms and molecules in the atmosphere. In visible light, oxygen produces green and red light, hydroxyl (OH) molecules produce red light, and sodium produces a sickly yellow. Nitrogen, while far more abundant in the air than sodium, does not contribute much to airglow.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/beautiful-green-airglow-spotted-by-aurora-hunters-but-what-is-it-68188">Beautiful green 'airglow' spotted by aurora hunters – but what is it?</a>
</strong>
</em>
</p>
<hr>
<p>The distinct colours of airglow are the result of atoms and molecules releasing particular amounts of energy (quanta) in the form of light. For example, at high altitudes ultraviolet light can split oxygen molecules (O₂) into pairs of oxygen atoms, and when these atoms later <a href="https://theconversation.com/beautiful-green-airglow-spotted-by-aurora-hunters-but-what-is-it-68188">recombine into oxygen molecules they produce a distinct green light</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/EwqWdy_ZUX8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">You can see airglow at dark sites, such as the European Southern Observatory in Chile.</span></figcaption>
</figure>
<h2>Yellow light, shooting stars and sharp images</h2>
<p>Sodium atoms make up a minuscule fraction of our atmosphere, but they make up a big part of airglow, and have a very unusual origin – shooting stars. </p>
<p>You can see shooting stars on any clear dark night, if you’re willing to wait. They are teensy tiny meteors, produced by grains of dust heating up and vaporising in the upper atmosphere as they travel at over 11 kilometres per second. </p>
<p>As shooting stars blaze across the sky, at roughly 100 kilometres altitude, they leave behind a trail of atoms and molecules. Sometimes you can see shooting stars with distinct colours, resulting from the atoms and molecules they contain. Very bright shooting stars can even leave visible smoke trails. And among those atoms and molecules is a smattering of sodium.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A shooting star and airglow seen from the International Space Station." src="https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/502235/original/file-20221220-6028-umwpul.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A shooting star and airglow seen from the International Space Station.</span>
<span class="attribution"><span class="source">NASA</span></span>
</figcaption>
</figure>
<p>This high layer of sodium atoms is actually useful to astronomers. Our atmosphere is perpetually in motion and turbulent, and that turbulence blurs images of planets, stars and galaxies. Think of the shimmering you see when you look along a long road on a summer’s afternoon. </p>
<p>To compensate for the turbulence, astronomers take quick images of bright stars and measure how the stars’ images are distorted. A special deformable mirror can be adjusted to remove the distortion, producing images that can be sharper than the ones from space telescopes. (Although space telescopes still have the advantage of not peering through airglow.)</p>
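<p>The correction step described above can be sketched in a few lines of Python. This is only an idealised illustration: the random array stands in for a measured atmospheric distortion, and the deformable mirror simply takes the opposite shape:</p>

```python
import numpy as np

# Toy adaptive-optics correction: measure the wavefront error on a guide
# star, then command the deformable mirror to take the opposite shape
# (phase conjugation). The random array stands in for real turbulence.
rng = np.random.default_rng(0)
measured_error_um = rng.normal(0.0, 0.5, size=(8, 8))  # wavefront error, microns

mirror_command_um = -measured_error_um       # mirror applies the opposite shape
residual_um = measured_error_um + mirror_command_um

print(np.abs(residual_um).max())  # 0.0: perfect correction in this idealised toy
```

In reality the measurement is noisy and the mirror has a finite number of actuators, so the residual is small but never exactly zero.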
<p>This technique – called “adaptive optics” – is powerful, but there’s a big problem. There are not enough natural bright stars for adaptive optics to work over the whole sky. So astronomers make their own artificial stars in the night sky, called “laser guide stars”. </p>
<p>Those sodium atoms are high above the turbulent atmosphere, and we can make them glow brightly by firing a powerful laser at them, tuned to the distinct yellow of sodium. The resulting artificial star can then be used for adaptive optics. The shooting stars you see at night help us see the Universe with sharper vision.</p>
<p>So the sky isn’t blue, at least not always. It is a glow-in-the-dark night sky too, coloured a mix of green, yellow and red. Its colours result from scattered sunlight, oxygen, and sodium from shooting stars. And with a little bit of physics, and some big lasers, we can make artificial yellow stars to get sharp images of our cosmos. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/mEJnEMtGYD8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Sodium laser guide stars at ESO’s Very Large Telescope in Chile.</span></figcaption>
</figure><img src="https://counter.theconversation.com/content/196386/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael J. I. Brown receives research funding from the Australian Research Council and Monash University.
</span></em></p><p class="fine-print"><em><span>Matthew Kenworthy receives research funding from the Nederlandse Organisatie voor Wetenschappelijk (Dutch Science Council) and has previously received funding from NASA, the National Science Foundation and the Nederlandse Onderzoekschool voor Astronomie (NOVA). </span></em></p>The sky looks blue on a sunny day – but at night we can see the faint glow of its true colour.Michael J. I. Brown, Associate Professor in Astronomy, Monash UniversityMatthew Kenworthy, Associate professor in Astronomy, Leiden UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1931922022-10-25T20:22:51Z2022-10-25T20:22:51ZDevelopment of vision in early childhood: No screens before age two<figure><img src="https://images.theconversation.com/files/491691/original/file-20221025-22-wx4aqi.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C979%2C655&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Electronic devices are not, in and of themselves, a source of visual problems. Using these devices inappropriately can interfere with the natural development of the eye, as well as reading and learning skills. </span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Things are busy on a rainy Saturday afternoon when I make a trip to the mall to finalize some back-to-school shopping. I pass by a lot of people, including several parents with young children under two years old, in strollers, and am struck by the fact that all of the children have a tablet or phone in their hands. Has technology become the ultimate tool for keeping children calm?</p>
<p>As an optometrist and eye health expert, this observation saddens me every time I see it, since I know all the harmful effects such exposure to electronic tools can have on children.</p>
<p>These effects are all the more critical during the first years of life, both on the <a href="https://pubmed.ncbi.nlm.nih.gov/34625399/">visual level</a> and on the <a href="https://pubmed.ncbi.nlm.nih.gov/36190219/">cognitive and social development of children</a>.</p>
<h2>Visual development of children</h2>
<p>The human eye develops <a href="https://www.nationwidechildrens.org/family-resources-education/health-wellness-and-safety-resources/helping-hands/infant-vision-birth-to-one-year">through stimulation</a>. The quality of the optical stimulus influences the growth of the eyeball via a complex and balanced mechanism. At birth, the eye is hyperopic, that is to say, its power is not perfectly adjusted to its size. A child sees at short distances and is barely able to distinguish a shadow when grandpa comes to the bedroom door.</p>
<p>In the first few weeks, the eye grows, the retina matures and a balance is established between the growth of the eyeball and the power of the inner lens. At six months of age, each of the toddler’s two eyes has the vision of an adult eye. From this moment on, the eyes will develop their coordination, in order to generate vision in three dimensions. It is also from the age of six months that communication between the eyes develops in the visual brain.</p>
<p>Billions of neurological connections will have to be made during the <a href="https://opto.umontreal.ca/clinique/pdf/EFFETS%20DES%20ECRANS%20SUR%20LE%20D%C3%89VELOPPEMENT%20VISUEL%20DES%20ENFANTS.pdf">first eight years of life</a>. This maturation time is long, but necessary, considering that <a href="https://www.sciencedirect.com/science/article/pii/S0149763413001917">more than a third of the brain’s neurons are dedicated to vision</a>.</p>
<h2>A question of distance</h2>
<p>Electronic devices are not, in themselves, a source of visual problems. Rather, the inappropriate use of these devices can interfere with the natural development of the eye, as well as reading and learning skills.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two small children with glasses sitting on white chairs : a boy with a tablet computer, a girl with a cell phone" src="https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/489407/original/file-20221012-17-g43eu3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">For normal visual development, it is recommended that exposure to electronic devices be avoided between the ages of zero and two years.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>The first thing to consider is viewing distance. The eye is designed to look at a near distance that is about equal to the length of the forearm (distance from the elbow to the fingertips of the hand). That means about <a href="https://www.sciencedirect.com/science/article/pii/S0042698913000795">30 cm for a young child, and 40 cm for an adult</a>. However, tablets and phones are held on average 20-30 cm from the eye, and this distance <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/cxo.12453">becomes shorter with prolonged exposure</a>. The visual effort required to maintain a clear image at this distance is therefore doubled.</p>
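<p>A simple way to quantify this effort: the accommodative demand on the eye, measured in dioptres, is the reciprocal of the viewing distance in metres. Halving the distance doubles the demand:</p>

```python
# Accommodative demand in dioptres = 1 / viewing distance in metres.
def demand_dioptres(distance_cm):
    return 100.0 / distance_cm

print(demand_dioptres(40))  # 2.5 D at an adult's forearm length
print(demand_dioptres(20))  # 5.0 D at a typical phone-holding distance
```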
<p>A distance that is too short influences the quality of the retinal image (and therefore visual development) and causes <a href="https://books.google.ca/books?hl=fr&lr=&id=jGGROHBFYt8C">excessive eye fatigue</a>. It is also important to understand that when the eyes focus at short distances, they automatically converge towards the nose, as they do at a normal reading distance. Extra effort spent focusing at a short distance is therefore accompanied by greater than normal convergence. Since the eye cannot sustain this effort over a long period, it will relax, and the perceived image will become blurred for a while, a sensory penalty that we want to avoid. After a period of rest, the eye resumes its effort, and this alternation between clarity and blur continues as long as attention to the close image is required. Ideally, then, the tablet or phone should always be kept at the distance of the forearm.</p>
<h2>Constant stimulation is not recommended</h2>
<p>The use of electronic tools, with games or videos, requires constant attention, without breaks. This is the second factor to consider. When a child draws in a notebook or reads a paper book, they will instinctively stop at some point, look elsewhere, far away, and become interested in something else around them. These pauses and breaks are beneficial <a href="https://www.aoa.org/healthy-eyes/eye-and-vision-conditions/computer-vision-syndrome?sso=y">for the visual system to recover from its effort</a>. Focusing on targets at a distance is also beneficial to the child’s visual development. With electronic tablets, it is not uncommon to see children doing sessions of more than two to three hours continuously, without looking up from the screen.</p>
<p>The visual apparatus of children from zero to two years old is simply not sufficiently developed and robust to undergo such stress from constant stimulation in front of the screen. In particular, the structural elements of the sclera (the deep layer of the eye), which give the eye rigidity and determine its size, develop between zero and two years of age and then stabilize. Visual stimuli at these ages can interfere with this development and therefore <a href="https://www.researchgate.net/publication/335108098_Scleral_structure_and_biomechanics">influence the development of visual defects and pathology in later life</a>.</p>
<p>It is also important to note that the screen can emit blue light. Children’s eyes do not filter these rays like those of an adult. This means that children are exposed to more blue light, which may stimulate nearsightedness and disrupt the secretion of melatonin, <a href="https://www.myopiainstitute.com/eye-care/how-blue-light-affects-your-vision-and-overall-health/">which regulates our biological clock</a>. This can disrupt the naps necessary for children of this age, as well as sleep during the night. Sleep loss can also lead to myopia.</p>
<h2>Let’s learn about electronics</h2>
<p>For normal visual development, it is therefore recommended to <a href="https://publications.aap.org/pediatrics/article/128/5/1040/30928/Media-Use-by-Children-Younger-Than-2-Years?_ga=2.208746386.1459529850.1665228699-655911314.1665228699?autologincheck=redirected?nfToken=00000000-0000-0000-0000-000000000000">avoid all exposure to electronic devices between the ages of zero and two</a>. The exception would be occasional video conversations, under the supervision of a parent, to say hello to a grandparent who lives far away, for a few minutes.</p>
<p>From the age of two years on, an hour of exposure per day can be considered, especially to consult educational sites, always accompanied by a parent or an educator.</p>
<p>When the visual system is mature, around the age of six to eight, exposure can be increased gradually, without exceeding two to three hours per day, with 10-minute breaks every hour. Electronic device use should be avoided during meals, family activities, and at least one hour before sleep.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Young mother holding her cute, crying baby daughter, looking at a tablet during a virtual video call business or family meeting at a distance" src="https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/489410/original/file-20221012-24-ip7l62.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rare video conversations, with parental supervision, to wave to a grandparent from a distance, for a few minutes, can be considered.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Let’s play outside!</h2>
<p>The best advice for successful visual development is to encourage exposure to outdoor light for <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6678505/#:%7E:text=Each%20additional%20hour%20of%20daily,by%2013%25%20%5B23%5D.">at least one hour per day, ideally two hours</a>. We are talking about playing, walking, and activities that are done outside. The amount of light is then much greater than indoors, which would stimulate the production of dopamine, a chemical mediator essential to regulating the growth of the eye. This is the most effective way to prevent the onset of myopia in children.</p>
<p>It is also important to make sure that a child’s visual system is normal and developing naturally. Therefore, the first examination by an optometrist should be done at six months of age (to validate that the eye has normal optics and that there are no congenital defects), and then at three years of age to evaluate eye coordination. If everything is normal, the next examination will take place at five years of age, and annually thereafter, <a href="http://nada.ca/wp-content/uploads/2018/09/BK-ChildrenAndTheirVision-2018-EN.pdf">considering that vision can change rapidly</a>.</p>
<p>In the case of an abnormality, the earlier we intervene in the process, the easier it is to restore normal oculo-visual function, either by exercise or by optical means.</p>
<p>By following these recommendations for visual hygiene, we will protect children’s visual system and ensure their normal development.</p>
<p>And let’s not forget that the most beautiful screen in the world is nature! We should offer it to our children more often.</p><img src="https://counter.theconversation.com/content/193192/count.gif" alt="La Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Langis Michaud ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>The impact of using electronic devices is critical during the first years of life, both visually and on the cognitive and social development of the child.Langis Michaud, Professeur Titulaire. École d'optométrie. Expertise en santé oculaire et usage des lentilles cornéennes spécialisées, Université de MontréalLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1848282022-07-14T12:34:04Z2022-07-14T12:34:04ZTo search for alien life, astronomers will look for clues in the atmospheres of distant planets – and the James Webb Space Telescope just proved it’s possible to do so<figure><img src="https://images.theconversation.com/files/473980/original/file-20220713-20-g1f04j.png?ixlib=rb-1.1.0&rect=34%2C116%2C691%2C572&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">TRAPPIST-1e is a rocky exoplanet in the habitable zone of a star 40 light-years from Earth and may have water and clouds, as depicted in this artist's impression.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:TRAPPIST-1e_artist_impression_2018.png#/media/File:TRAPPIST-1e_artist_impression_2018.png">NASA/JPL-Caltech/Wikimedia Commons</a></span></figcaption></figure><p>The ingredients for life are <a href="https://doi.org/10.1073/pnas.98.3.805">spread throughout the universe</a>. 
While Earth is the only known place in the universe with life, detecting life beyond Earth is a <a href="https://www.planetary.org/articles/the-2020-astrophysics-decadal-survey-guide">major goal</a> of <a href="https://www.planetary.org/space-policy/what-is-the-decadal-survey">modern astronomy</a> and <a href="https://www.planetary.org/space-policy/what-is-the-decadal-survey">planetary science</a>.</p>
<p>We are two scientists who study <a href="https://scholar.google.com/citations?user=2SCIYjIAAAAJ&hl=en&oi=ao">exoplanets</a> and <a href="https://scholar.google.com/citations?user=OrRLRQ4AAAAJ&hl=en&oi=ao">astrobiology</a>. Thanks in large part to next-generation telescopes like James Webb, researchers like us will soon be able to measure the chemical makeup of atmospheres of planets around other stars. The hope is that one or more of these planets will have a chemical signature of life.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram showing green bands around stars." src="https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473981/original/file-20220713-24-ei1562.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">There are many known exoplanets in habitable zones – orbits not too close to a star that the water boils off but not so far that the planet is frozen solid – as marked in green for both the solar system and Kepler-186 star system with its planets labeled b, c, d, e and f.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Kepler186f-ComparisonGraphic-20140417_improved.jpg#/media/File:Kepler186f-ComparisonGraphic-20140417_improved.jpg">NASA Ames/SETI Institute/JPL-Caltech/Wikimedia Commons</a></span>
</figcaption>
</figure>
<h2>Habitable exoplanets</h2>
<p>Life <a href="https://doi.org/10.1073/pnas.1816535115">might exist in the solar system</a> where there is liquid water – like the subsurface aquifers on Mars or in the oceans of Jupiter’s moon Europa. However, searching for life in these places is incredibly difficult, as they are hard to reach and detecting life would require sending a probe to return physical samples.</p>
<p>Many astronomers believe there’s a <a href="https://exoplanets.nasa.gov/news/1675/life-in-the-universe-what-are-the-odds/">good chance that life exists on planets orbiting other stars</a>, and it’s possible that’s where <a href="https://doi.org/10.1016/j.actaastro.2022.03.019">life will first be found</a>.</p>
<p>Theoretical calculations suggest that there are around <a href="https://www.technologyreview.com/2020/11/06/1011784/half-milky-way-sun-like-stars-home-earth-like-planets-kepler-gaia-habitable-life/">300 million potentially habitable planets</a> in the Milky Way galaxy alone and <a href="https://doi.org/10.3847/1538-3881/abc418">several habitable Earth-sized planets</a> within only 30 light-years of Earth – essentially humanity’s galactic neighbors. So far, astronomers have <a href="https://exoplanets.nasa.gov/">discovered over 5,000 exoplanets</a>, including hundreds of potentially habitable ones, using <a href="https://sci.esa.int/web/exoplanets/-/60655-detection-methods">indirect methods</a> that measure how a planet affects its nearby star. These measurements can give astronomers information on the mass and size of an exoplanet, but not much else.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A chart showing two lines each with two peaks in the blue and red wavelengths." src="https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473983/original/file-20220713-17654-sd7qoy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Every material absorbs certain wavelengths of light, as shown in this diagram depicting the wavelengths of light absorbed most easily by different types of chlorophyll.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Chlorophyll_ab_spectra-en.svg#/media/File:Chlorophyll_ab_spectra-en.svg">Daniele Pugliesi/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Looking for biosignatures</h2>
<p>To detect life on a distant planet, astrobiologists will study starlight that has <a href="https://doi.org/10.1089/ast.2017.1729">interacted with a planet’s surface or atmosphere</a>. If the atmosphere or surface was transformed by life, the light may carry a clue, called a “biosignature.”</p>
<p>For the first half of its existence, Earth sported an atmosphere without oxygen, even though it hosted simple, single-celled life. Earth’s biosignature was very faint during this early era. That changed abruptly <a href="https://asm.org/Articles/2022/February/The-Great-Oxidation-Event-How-Cyanobacteria-Change">2.4 billion years ago</a> when cyanobacteria, a new family of photosynthetic microbes, evolved. They used a form of photosynthesis that produces free oxygen – oxygen that isn’t chemically bonded to any other element. From that time on, Earth’s oxygen-filled atmosphere has left a strong and easily detectable biosignature on light that passes through it.</p>
<p>When light bounces off the surface of a material or passes through a gas, certain wavelengths of the light are more likely to remain trapped in the gas or material’s surface than others. This selective trapping of wavelengths of light is why objects are different colors. Leaves are green because chlorophyll is particularly good at absorbing light in the red and blue wavelengths. As light hits a leaf, the red and blue wavelengths are absorbed, leaving mostly green light to bounce back into your eyes.</p>
<p>The pattern of missing light is determined by the specific composition of the material the light interacts with. Because of this, astronomers can learn something about the composition of an exoplanet’s atmosphere or surface by, in essence, measuring the specific color of light that comes from a planet. </p>
<p>This method can be used to recognize the presence of certain atmospheric gases that are associated with life – such as oxygen or methane – because these gases leave very specific signatures in light. It could also be used to detect peculiar colors on the surface of a planet. On Earth, for example, chlorophyll and the other pigments that plants and algae use for photosynthesis capture specific wavelengths of light. These pigments <a href="https://doi.org/10.1073/pnas.1304213111">produce characteristic colors</a> that can be detected by using a sensitive infrared camera. If you were to see this color reflecting off the surface of a distant planet, it would potentially signify the presence of chlorophyll.</p>
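As a rough illustration of this matching step, the sketch below compares the wavelengths of observed absorption dips against approximate band centers for a few biosignature gases. The band positions and the 0.05-micrometer tolerance are simplified placeholders for illustration, not a real spectroscopic line list:

```python
# Toy illustration of how absorption features identify atmospheric gases.
# Band centers (in micrometers) are approximate placeholders.
REFERENCE_BANDS = {
    "oxygen (O2)": [0.76],          # the O2 A-band region
    "methane (CH4)": [1.65, 2.3],   # near-infrared CH4 bands
    "water (H2O)": [1.4, 1.9],      # near-infrared H2O bands
}

def identify_gases(dip_wavelengths, tolerance=0.05):
    """Match observed absorption dips (in micrometers) to reference bands."""
    detected = []
    for gas, bands in REFERENCE_BANDS.items():
        if any(abs(dip - band) <= tolerance
               for dip in dip_wavelengths for band in bands):
            detected.append(gas)
    return detected

# A spectrum with dips near 0.76 and 1.41 micrometers would suggest
# both oxygen and water vapor in the atmosphere:
print(identify_gases([0.76, 1.41]))
```

Real pipelines fit physical atmosphere models to the full spectrum rather than matching individual dips, but the underlying idea – each gas removes light at characteristic wavelengths – is the same.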
<h2>Telescopes in space and on Earth</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A giant gold mirror in a lab." src="https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=899&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=899&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=899&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1130&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1130&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473985/original/file-20220713-17654-d5rtyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1130&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The James Webb Space Telescope is the first telescope able to detect chemical signatures from exoplanets, but it is limited in its capabilities.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:JWST_Full_Mirror.jpg#/media/File:JWST_Full_Mirror.jpg">NASA/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>It takes an incredibly powerful telescope to detect these subtle changes to the light coming from a potentially habitable exoplanet. For now, the only telescope capable of such a feat is the new <a href="http://jwst.nasa.gov/">James Webb Space Telescope</a>. As it <a href="https://blogs.nasa.gov/webb/2022/07/11/nasas-webb-telescope-is-now-fully-ready-for-science/">began science operations</a> in July 2022, James Webb took a reading of the spectrum of the <a href="https://www.nytimes.com/2022/07/12/science/wasp-96b-exoplanet-webb-telescope.html">gas giant exoplanet WASP-96b</a>. The spectrum showed the presence of water and clouds, but a planet as large and hot as WASP-96b is unlikely to host life.</p>
<p>However, this early data shows that James Webb is capable of detecting faint chemical signatures in light coming from exoplanets. In the coming months, Webb is set to turn its mirrors toward <a href="https://www.space.com/42512-trappist-1-planet-could-host-life.html">TRAPPIST-1e</a>, a potentially habitable Earth-sized planet a mere 39 light-years from Earth.</p>
<p>Webb can look for biosignatures by studying planets as they pass in front of their host stars and capturing <a href="https://www.physics.uu.se/research/astronomy-and-space-physics/research/planets/exoplanet-atmospheres/">starlight that filters through the planet’s atmosphere</a>. But Webb was not designed to search for life, so the telescope is only able to scrutinize a few of the nearest potentially habitable worlds. It also can only detect changes to <a href="https://doi.org/10.3847/1538-3881/ab21e0">atmospheric levels of carbon dioxide, methane and water vapor</a>. While certain combinations of these gases <a href="https://doi.org/10.1038/s41550-021-01579-7">may suggest life</a>, Webb is not able to detect the presence of unbonded oxygen, which is the strongest signal for life.</p>
<p>Leading concepts for future, even more powerful, space telescopes include plans to block the bright light of a planet’s host star to reveal starlight reflected back from the planet. This idea is similar to using your hand to block sunlight to better see something in the distance. Future space telescopes could use small, internal masks or large, external, umbrella-like spacecraft to do this. Once the starlight is blocked, it becomes much easier to study light bouncing off a planet.</p>
<p>There are also three enormous, ground-based telescopes currently under construction that will be able to search for biosignatures: the <a href="http://gmto.org/">Giant Magellan Telescope</a>, the <a href="https://www.tmt.org/">Thirty Meter Telescope</a> and the <a href="https://www.eso.org/sci/facilities/eelt/">European Extremely Large Telescope</a>. Each is far more powerful than existing telescopes on Earth, and despite the handicap of Earth’s atmosphere distorting starlight, these telescopes might be able to probe the atmospheres of the closest worlds for oxygen.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A cow and its calf standing in a field." src="https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=406&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=406&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=406&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=511&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=511&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473982/original/file-20220713-12-4xssot.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=511&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Animals, including cows, produce methane, but so do many geologic processes.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Cows_eating_grass_(42882305160).jpg#/media/File:Cows_eating_grass_(42882305160).jpg">Jernej Furman/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Is it biology or geology?</h2>
<p>Even using the most powerful telescopes of the coming decades, astrobiologists will only be able to detect strong biosignatures produced by worlds that have been completely transformed by life.</p>
<p>Unfortunately, most gases released by terrestrial life can also be produced by nonbiological processes – cows and volcanoes both release methane. Photosynthesis produces oxygen, but sunlight does, too, when it splits water molecules into oxygen and hydrogen. There is a <a href="https://doi.org/10.1089/ast.2017.1727">good chance astronomers will detect some false positives</a> when looking for distant life. To help rule out false positives, astronomers will need to understand a planet of interest well enough to understand whether its <a href="https://doi.org/10.1089/ast.2017.1737">geologic or atmospheric processes could mimic a biosignature</a>. </p>
<p>The next generation of exoplanet studies has the potential to pass the bar of the <a href="https://quoteinvestigator.com/2021/12/05/extraordinary/">extraordinary evidence</a> needed to prove the existence of life. The first data release from the James Webb Space Telescope gives us a sense of the exciting progress that’s coming soon.</p><img src="https://counter.theconversation.com/content/184828/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Chris Impey receives funding from the National Science Foundation.</span></em></p><p class="fine-print"><em><span>Daniel Apai receives funding from NASA and the Gordon and Betty Moore Foundation.</span></em></p>Life on Earth has dramatically changed the chemistry of the planet. Astronomers will measure light that bounces off distant planets to look for similar clues that they host life.Chris Impey, University Distinguished Professor of Astronomy, University of ArizonaDaniel Apai, Professor of Astronomy and Planetary Sciences, University of ArizonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1666712021-09-19T18:47:12Z2021-09-19T18:47:12ZThe mysterious optical device Jan van Eyck may have used to paint his masterpieces – new research<figure><img src="https://images.theconversation.com/files/421412/original/file-20210915-20-gsikb7.png?ixlib=rb-1.1.0&rect=2%2C8%2C1794%2C1015&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Reconstruction of the execution of the Arnolfini portrait. Top: Postures of the painter during the painting process. Bottom: views obtained from the four lenses.</span> <span class="attribution"><span class="source">Université de Lorraine</span>, <span class="license">Fourni par l'auteur</span></span></figcaption></figure><p>For centuries, the work of Flemish painter <a href="https://fr.wikipedia.org/wiki/Jan_van_Eyck">Jan van Eyck</a> (c. 1390-1441) has perplexed art historians. Van Eyck is famed for his empirical use of perspective, yet many have struggled to find geometrical coherence in his representation of space.</p>
<p>In one of his most celebrated works, the Arnolfini Portrait, which depicts a wealthy Italian married couple, it is seemingly impossible to find a single vanishing point – the point, farthest from the viewer, at which the parallel lines in a painting appear to converge.</p>
<p>In 1905, mathematician Karl Doehlemann demonstrated in a journal article that the parallel lines in the Arnolfini Portrait do not converge toward a single point, but rather toward a circular zone of many vanishing points. The Doehlemann interpretation is still widely accepted today, but a handful of art historians have continued to search for a hidden order behind the painting’s apparent disorder.</p>
<p>Since the early 1990s, researchers have used computer analysis to try to understand the use of perspective in the painting. But the Arnolfini Portrait continues to present difficulties to those who try to analyse it with algorithms.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=268&fit=crop&dpr=1 600w, https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=268&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=268&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=337&fit=crop&dpr=1 754w, https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=337&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/415870/original/file-20210812-23-an8jmr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=337&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">From left to right: reconstructions proposed by J.G. Kern in 1912, J. Elkins in 1991, and P. H. Jansen and Z. Ruttkay in 2007.</span>
<span class="attribution"><span class="license">Fourni par l'auteur</span></span>
</figcaption>
</figure>
<p>Designed primarily for processing photographs, current algorithms do not take certain important factors into account, namely the fact that there are often fewer parallel lines in a painting than in a photograph. As such, computer vision specialists do not typically use paintings as test subjects.</p>
<h2>Finding van Eyck’s vanishing points</h2>
<p><a href="https://hal.univ-lorraine.fr/hal-03287031">Our new research</a> into van Eyck’s work takes into account the inherent uncertainty in the accepted understanding of parallel lines and posits an <em>a contrario</em> reasoning.</p>
<p>A well-known concept in computer vision, <a href="https://www.springer.com/gp/book/9780387726359"><em>a contrario</em> methods</a> rely on a psychological concept known as the Helmholtz principle, which states that “we perceive immediately what cannot be due to chance” or, reinterpreted mathematically, “our algorithm will detect what cannot be due to chance”.</p>
<p>When the Helmholtz principle is applied to a probability map of the vanishing points in the <em>Arnolfini Portrait</em>, a surprisingly ordered structure appears, comprising four main points aligned periodically along a slightly inclined vertical axis.</p>
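The a contrario criterion behind such a probability map can be sketched concretely. If k of the N edge lines in a painting pass near a candidate vanishing point, while a random line would do so with probability p, the point is declared meaningful when the expected number of equally strong chance configurations over all tested points – the "number of false alarms" – falls below 1. A minimal sketch under those assumptions (the numbers in the example are invented, not taken from the painting):

```python
from math import comb

def binomial_tail(n, k, p):
    """Probability that at least k of n independent trials succeed."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def is_meaningful(n_lines, k_votes, p_chance, n_tests, epsilon=1.0):
    """A contrario detection: flag a candidate when the expected number of
    chance configurations at least this strong (the NFA) is below epsilon."""
    nfa = n_tests * binomial_tail(n_lines, k_votes, p_chance)
    return nfa < epsilon

# 40 of 60 edges converging on one candidate point, when a random edge
# would do so only 10% of the time, cannot plausibly be chance:
print(is_meaningful(n_lines=60, k_votes=40, p_chance=0.1, n_tests=10_000))
# Whereas 8 of 60 edges is unremarkable:
print(is_meaningful(n_lines=60, k_votes=8, p_chance=0.1, n_tests=10_000))
```

This is the mathematical reading of the Helmholtz principle quoted above: the algorithm keeps only those vanishing-point candidates whose support among the edges cannot be explained by chance.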
<p>Similar structures are found in the painter’s other works, such as <a href="https://en.wikipedia.org/wiki/Saint_Jerome_in_His_Study_(after_van_Eyck)"><em>Saint Jerome in His Study</em></a>, the <a href="https://en.wikipedia.org/wiki/Lucca_Madonna"><em>Lucca Madonna</em></a>, the <a href="https://en.wikipedia.org/wiki/Dresden_Triptych"><em>Dresden Triptych</em></a> and <a href="https://en.wikipedia.org/wiki/Madonna_in_the_Church"><em>Madonna in the Church</em></a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/415873/original/file-20210812-17-187wxmr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Application of the <em>a contrario</em> method to the <em>Arnolfini Portrait</em>. Left: probability map of vanishing points taking into account an uncertainty at the ends of the extracted edges (visible in red in the right-hand image). Right: application of the <em>a contrario</em> method to this probability map. The extracted edges relate to their corresponding vanishing point, while the color of the link indicates its consistency, from dark blue (0) through to light yellow (1). The edges are grouped into horizontal strips, as marked out here with white lines.</span>
<span class="attribution"><span class="source">Université de Lorraine</span>, <span class="license">Fourni par l'auteur</span></span>
</figcaption>
</figure>
<p>Each of these works may be partitioned into several horizontal strips equal to the number of vanishing points, with each strip containing all the edges associated with one particular point.</p>
<p>When the painting is split into parts, we can see that van Eyck’s perspectives were far from disordered. In fact, they were rigorously exact.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=719&fit=crop&dpr=1 600w, https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=719&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=719&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=903&fit=crop&dpr=1 754w, https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=903&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/415874/original/file-20210812-21-spw70v.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=903&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Reconstruction of the vanishing points in <em>Madonna in the Church</em>.</span>
<span class="attribution"><span class="source">Université de Lorraine</span>, <span class="license">Fourni par l'auteur</span></span>
</figcaption>
</figure>
<p>The case of <em>Madonna in the Church</em> is particularly interesting. Measuring just 14 x 31 cm, this quasi-miniature painting makes use of extremely precise converging lines.</p>
<p>More surprisingly still, the positions of the vanishing points found in the upper strip of the painting are in perfect coherence with the half-decagon geometry of a church choir gallery. This was an unexpected finding, as no one at the time could have known how to place a vanishing point on the horizon line according to its direction in three-dimensional space.</p>
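That geometric coherence can be made precise with a pinhole perspective model: a horizontal 3D direction at angle θ from the viewing axis vanishes on the horizon at horizontal offset f·tan(θ) from the image center, so the 36-degree spacing of a half-decagon’s walls fixes where each vanishing point must fall. A sketch with an arbitrary focal length (the angles illustrate the geometry, not measurements from the painting):

```python
import math

def vanishing_point_x(angle_deg, focal_length=1.0):
    """Horizontal position on the horizon of the vanishing point for a
    horizontal direction at angle_deg from the viewing axis (pinhole model).
    Directions approaching 90 degrees vanish toward infinity."""
    return focal_length * math.tan(math.radians(angle_deg))

# A half-decagon choir has wall directions spaced 36 degrees apart.
# For a viewer facing the central wall, the side-wall directions are:
for angle in (-72, -36, 0, 36, 72):
    print(f"{angle:+d} deg -> x = {vanishing_point_x(angle):+.3f}")
```

Placing vanishing points consistently with this rule amounts to solving the projection geometry correctly, which is why finding such consistency in a 15th-century painting points toward an optical aid rather than construction by eye.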
<p>Our argument based on this finding is that van Eyck used an optical device to produce his works.</p>
<h2>A perspective machine</h2>
<p>Almost half a century after van Eyck’s death, <a href="https://en.wikipedia.org/wiki/Leonardo_da_Vinci">Leonardo da Vinci</a> sketched a simplified version of what is called a <a href="https://drawingmachines.org/results.php?tags=Linear%20perspective&order_by=date">“perspective machine”</a>.</p>
<p>Da Vinci’s sketch depicts the artist drawing out the visible objects using a pane of glass, while gazing through an eyepiece.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Leonardo Da Vinci Drawing Device, pictured in his Codex Atlanticus" src="https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=570&fit=crop&dpr=1 600w, https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=570&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=570&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=716&fit=crop&dpr=1 754w, https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=716&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/421385/original/file-20210915-20-18rd0cm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=716&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Da Vinci’s ‘perspective machine’ from the Codex Atlanticus, 1478-1519.</span>
</figcaption>
</figure>
<p>Van Eyck’s device would have been more elaborate, with several eyepieces equally spaced out along an inclined axis, just like the vanishing points in the Arnolfini Portrait. Using it, he could have outlined parts of reality strip by strip (eyepiece by eyepiece) with a carbon ink that he then transferred to a primed wood panel before painting it.</p>
<p>The glass pane – <a href="https://hal.univ-lorraine.fr/hal-03287031">probably a mirror</a> – could itself be moved within its plane such that the edge of the previously drawn image strip could be joined to the actual image <a href="https://youtu.be/pARXlP82sPI">as seen through the eyepiece</a>.</p>
<p>This crucial step enabled the painter to produce smooth transitions between the strips, which would have been difficult to perceive with the naked eye alone. In the video below, we have illustrated how this might have worked in practice.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/pARXlP82sPI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Painting reality as we perceive it</h2>
<p>Our reconstruction of the painting of the <em>Arnolfini Portrait</em> lets us see what van Eyck would have seen through the eyepieces; for instance, the rise in the ceiling between the view from below and the view from above, which was the one he finally chose (and vice versa for the floor), perhaps to avoid distortion around the painting’s edges.</p>
<p>From an optics viewpoint, amplified perspective distortions on the edges of a painting are not technically incorrect, but we are unaccustomed to them. This is because the visual field of the human eye is more restricted than what can be achieved in a short-distance artificial perspective or, perhaps, through a glass pane.</p>
<p>For the <em>Arnolfini Portrait</em>, our analysis suggests that the horizontal distance between the eyepieces placed at each end of the view axis was the same as the distance between the pupils of an adult man.</p>
<p>It is up to individuals to decide whether this was a coincidence, but I would wager that it was not. I imagine that van Eyck would have alternately closed his left and right eyes, observing how this action affected the perception of his own hand and deciding then to equip his device with both viewing options.</p>
<h2>Focusing on the important aspects</h2>
<p>With regard to the <em>Arnolfini Portrait</em>, researchers have underlined the importance of properly representing hands and feet in this era, both in terms of symbolism and aesthetics. Although most of the objects in the painting were drawn only once through the perspective of the eyepiece placed farthest forward, our models suggest the male figure’s feet and raised hand were drawn using other eyepieces.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wM6d9BOj4Ww?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Given that the painting was divided into strips of varying thickness, one might suggest that van Eyck focused his attention on four zones of interest: the ceiling, the male figure’s head and hat, his raised hand, and his lower body. It would seem that he placed particular care on producing the patron’s portrait, perhaps even more so than the surrounding architecture.</p>
<p>Van Eyck’s polyscopic (multi-lensed) device could well have evolved from an earlier monoscopic one, like the device drawn by da Vinci. This may have coincided with the need to produce a full-length portrait of Adam on his masterpiece, the <a href="http://closertovaneyck.kikirpa.be/ghentaltarpiece/#viewer/rep1=1&id1=1">Ghent Altarpiece</a>, following his earlier completion of several head-and-shoulders portraits.</p>
<hr>
<p><em>Translated from the French by Enda Boorman for <a href="http://www.fastforword.fr/en">Fast ForWord</a></em></p>
<p class="fine-print"><em><span>Gilles Simon does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.</span></em></p>Researchers have long tried to unravel the puzzle of Jan van Eyck’s use of perspective in his masterpiece, the Arnolfini Portrait. New research suggests he may have had help from a novel machine.Gilles Simon, Associate Professor (HDR) in Computer Science, Université de LorraineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1631672021-07-01T19:52:43Z2021-07-01T19:52:43ZThe scientific genius who eschewed fame: remembering Thomas Harriot, 400 years on<figure><img src="https://images.theconversation.com/files/409214/original/file-20210701-21065-1n6x87b.JPG?ixlib=rb-1.1.0&rect=3%2C0%2C2584%2C3237&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Harriot_at_Syon_Park.JPG">Rita Greer</a>, <a class="license" href="http://artlibre.org/licence/lal/en">FAL</a></span></figcaption></figure><p>Four hundred years ago, on July 2 1621, a remarkable Englishman named Thomas Harriot died in London. He left behind some 8,000 pages of scientific research, but it is only in recent decades that scholars have uncovered their treasures. </p>
<p>And what they show is that Harriot independently made many significant discoveries now attributed to other, more famous scientists. Some scholars have called him “the English Galileo” and “the greatest British mathematical scientist before Newton”. </p>
<p>Yet Harriot died without publishing a single word of this extraordinary output. His tale reminds us that, while we may sometimes think science progresses through a series of famous pioneers who single-handedly overturn entrenched beliefs, the story is rarely so simple.</p>
<h2>What did Harriot discover?</h2>
<p>For instance, we learn in school that Galileo Galilei initiated telescopic astronomy and discovered the law of falling motion. But Harriot independently did both of these things. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A pen and ink drawing of the surface of the Moon." src="https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=585&fit=crop&dpr=1 600w, https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=585&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=585&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=735&fit=crop&dpr=1 754w, https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=735&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/409209/original/file-20210701-21056-104m7cb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=735&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Thomas Harriot’s 1609 map of the Moon, drawn by observing through a telescope.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Thomas_Harriot#/media/File:Harriot_Lunar_Map.jpg">Wikimedia</a></span>
</figcaption>
</figure>
<p>He also deduced fledgling general laws governing the motion of everyday objects, again independently of Galileo, and before René Descartes. (Half a century later, Isaac Newton developed the definitive laws of motion.)</p>
<p>Harriot studied light, too, discovering the secret of colour and the nature of the rainbow before Newton, and finding the law of refraction (which we know today as Snell’s law) before the Dutch astronomer Willebrord Snell.</p>
<p>He also made a mathematical study of population growth before Thomas Malthus, developed a <a href="https://www.ems-ph.org/books/book.php?proj_nr=94&srch=series%7Chem">completely symbolic</a> form of sophisticated <a href="https://www.springer.com/gp/book/9780387495118">algebra</a> before Descartes, discovered binary arithmetic before Gottfried Leibniz, and took steps on the road to calculus with his work on infinite series. </p>
<h2>The law of falling bodies</h2>
<p>It wasn’t until 2008 that Harriot’s work on gravity was <a href="https://www.springer.com/gp/book/9781402054983">fully reconstructed</a>, by the German scholar Matthias Schemmel. </p>
<p>As Schemmel pointed out, Harriot and his contemporary Galileo were heirs to essentially the same body of knowledge. It’s perhaps not so surprising, then, that they made some of the same breakthroughs. There are plenty of examples of independent co-discoveries in history, most famously that of calculus by Newton and Leibniz.</p>
<p>The law of falling motion says that without air resistance all objects, no matter their size or mass, fall from the same height at the same rate. </p>
<p>Legend has it Galileo dropped balls from the Leaning Tower of Pisa to study how they fell. Nobody knows if this is true, but Harriot had the same idea: he recorded the time, in pulse beats, that it took for different objects falling from as high as 55½ feet (about 17 metres) to reach the ground. </p>
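The law Harriot and Galileo were converging on can be checked with modern physics. The sketch below is illustrative, not Harriot's method: it applies the formula h = ½gt², which implies the fall time depends only on the height, not on the object's mass.

```python
import math

G = 9.81  # gravitational acceleration at Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time (seconds) for an object to fall height_m metres,
    ignoring air resistance: h = (1/2) g t^2  =>  t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / G)

# Harriot's tallest recorded drop: 55.5 feet, about 16.9 metres
t = fall_time(55.5 * 0.3048)
print(f"{t:.2f} s")  # just under two seconds, whatever the object's mass
```

At roughly one pulse beat per second, a drop of under two seconds shows how coarse Harriot's first timing method was, and why both men moved on to more accurate experiments.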
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/copernicus-revolution-and-galileos-vision-our-changing-view-of-the-universe-in-pictures-60103">Copernicus' revolution and Galileo's vision: our changing view of the universe in pictures</a>
</strong>
</em>
</p>
<hr>
<p>Both Harriot and Galileo devised more accurate experiments, however, from which they derived a mathematical understanding of how things fall. </p>
<p>This combination of experiment and mathematics is now the accepted way to derive a law of nature. Quantifying observations means others can test the results, and use them to make useful predictions. </p>
<p>Harriot and Galileo were not the first to understand the role of observation and mathematics in this context, of course. But they were among the most successful of the pre-Newtonian pioneers.</p>
<p>Galileo didn’t publish his work on gravity until after Harriot had died, and there’s no evidence that the two men ever met or corresponded.</p>
<h2>The law of refraction and the shape of the rainbow</h2>
<p>The German astronomer Johannes Kepler, however, did correspond briefly with Harriot. Kepler had been working on the nature of light and vision when word reached him that Harriot had unravelled two mysteries: the law of refraction, and why the rainbow has its magical colours and its unique shape.</p>
<p>The law of refraction describes how light bends when it passes from one medium into another, which explains how an image can be focused by a glass lens or why your leg looks wobbly when you dip it in a swimming pool. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=864&fit=crop&dpr=1 600w, https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=864&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=864&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1086&fit=crop&dpr=1 754w, https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1086&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/409217/original/file-20210701-21135-wg5m9l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1086&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A diagram from Ibn Sahl’s 10th-century treatise on optics showing the path of light refracted by a lens.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Ibn_Sahl_manuscript.jpg">Wikimedia</a></span>
</figcaption>
</figure>
<p>Harriot derived this law 20 years before Snell, but there’s a popular belief that the 10th-century Baghdad-based scholar Abū Saʿd al-ʿAlāʾ ibn Sahl beat even Harriot. This is not quite right: Ibn Sahl is a notable pioneer whose geometrical diagram of light focussed by a lens gives, in hindsight, the correct refractive path. But there’s no evidence he deduced his result from experiment, or that he understood the general properties of refraction.</p>
<p>Judging from his surviving papers, even Snell failed to generalise his result, which he, like Ibn Sahl, never wrote as the neat trigonometric equation we use today. Harriot, by contrast, did: his derivation of the general law of refraction is another example of his rigorous blend of experiment and mathematics.</p>
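The "neat trigonometric equation" referred to here is Snell's law, n₁ sin θ₁ = n₂ sin θ₂. As a minimal illustration (the numbers are typical textbook values, not figures from Harriot's papers):

```python
import math

def refraction_angle(theta_incident_deg: float, n1: float, n2: float) -> float:
    """Angle of the refracted ray from Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
print(round(refraction_angle(45, 1.00, 1.33), 1))  # ~32.1: bends toward the normal
```

The same relation explains the focusing lens in Ibn Sahl's diagram and the "wobbly leg" in the swimming pool.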
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-why-are-rainbows-round-81187">Curious Kids: Why are rainbows round?</a>
</strong>
</em>
</p>
<hr>
<h2>Harriot’s other adventures</h2>
<p>If only Harriot had published! In the early stage of his career, though, he was bound by commercial secrecy, for his first patron was the controversial statesman and entrepreneur Sir Walter Raleigh. Harriot was also busy dodging heretic hunters and sailing the high seas as Raleigh’s navigational advisor. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A heavily decorated title page reading 'A briefe and true report of the new found land of Virginia'." src="https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=805&fit=crop&dpr=1 600w, https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=805&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=805&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1011&fit=crop&dpr=1 754w, https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1011&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/409210/original/file-20210701-21296-xaa1cu.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1011&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Thomas Harriot published only one work in his lifetime: a report on his stay in North America in the 1580s.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:English_title_page,_A_Briefe_and_True_Report_of_the_Newfound_Land_of_Virginia.JPG">Wikimedia</a></span>
</figcaption>
</figure>
<p>Raleigh had delusions of empire and glory, and wanted to establish a trading colony in what is now the United States before the Spanish beat him to it. The one work Harriot did publish in his lifetime was “a brief and true report” on the economic potential of Raleigh’s chosen American site.</p>
<p>Harriot’s contribution to colonialism has justly attracted its share of criticism. Nonetheless, his report is still widely praised for its sympathetic depiction of the way of life of the North Carolina Algonquian people, as it was when Europeans first set foot on their land. Harriot learned the local language, and enjoyed much about the year he spent living with the Algonquians.</p>
<p>What he loved doing most, though, was mathematics and physics. He was neither flamboyant nor ambitious, and when he was wrongfully imprisoned through an unlucky connection with the Gunpowder Plot (a failed attempt to assassinate King James I), he told his jailers he just wanted</p>
<blockquote>
<p>to live a private life for the love of learning that I might study freely. </p>
</blockquote>
<h2>Conclusion</h2>
<p>In the late 1590s Harriot had found a second patron, Henry Percy, the ninth earl of Northumberland. It was then that he was able to study the mysteries of nature and the marvels of mathematics for their own sakes, rather than the “applied” work he had done for Raleigh.</p>
<p>Having two generous patrons meant Harriot did not need to publicise his discoveries to attract funding, the way Galileo did. Nor did he care about fame, despite being urged by friends to claim his priority. His manuscripts do contain several almost finished treatises, but it seems he was so busy doing science that he never managed to put his results together for the printer.</p>
<p>After his death, well-meaning scholars carved up his manuscripts in an attempt to study and publish them. In the process, however, all the papers disappeared, seemingly lost forever. Then, 150 years later, the Hungarian astronomer Franz Xaver Zach discovered them, locked safely away in Northumberland’s castle.</p>
<p>Most of the papers were then given to the British Museum. They are now in the British Library, where I had the privilege of studying them. (They’re also available <a href="https://echo.mpiwg-berlin.mpg.de/content/scientific_revolution/harriot">online</a>.)</p>
<p>As for Harriot, no-one knows much about him as a person – not even his birthday. Nevertheless, he has fascinated scholars for the past half century (as I discovered some years ago when I set out to bring his story to a <a href="https://global.oup.com/academic/product/thomas-harriot-9780190271855?cc=us&lang=en&#">wider, non-specialist</a> readership). </p>
<p>That’s because despite the lack of biographical data, those precious manuscripts show that what mattered most to Harriot himself was mathematics and science. Four hundred years on, his mix of genius and dedication is something to honour.</p>
<p class="fine-print"><em><span>Robyn Arianrhod does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The English astronomer and navigator Thomas Harriot died in 1621, leaving behind 8,000 pages of notes containing a trove of unpublished scientific discoveries.Robyn Arianrhod, Affiliate, School of Mathematics, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1474362020-10-26T18:46:49Z2020-10-26T18:46:49ZReimagining the laser: new ideas from quantum theory could herald a revolution<figure><img src="https://images.theconversation.com/files/361830/original/file-20201006-20-k5owis.jpg?ixlib=rb-1.1.0&rect=0%2C617%2C1346%2C1068&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Original artwork by Ludmila Odintsova</span>, <span class="license">Author provided</span></span></figcaption></figure><p>Lasers were created 60 years ago this year, when three different laser devices were unveiled by independent laboratories in the United States. A few years later, one of these inventors called the unusual light sources “<a href="https://www.nytimes.com/1964/05/06/archives/developer-of-the-laser-calls-it-a-solution-seeking-a-problem.html">a solution seeking a problem</a>”. Today, the laser has been applied to countless problems in science, medicine and everyday technologies, with a market of more than <a href="https://www.photonics.com/Articles/A_History_of_the_Laser_1960_-_2019/a42279">US$11 billion</a> per year.</p>
<p>A crucial difference between lasers and <a href="https://arxiv.org/abs/1510.04805">traditional sources of light</a> is the “temporal coherence” of the light beam, or just coherence. The coherence of a beam can be measured by a number <em>C</em>, which takes into account the fact light is <a href="https://theconversation.com/explainer-what-is-wave-particle-duality-7414">both a wave and a particle</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-is-wave-particle-duality-7414">Explainer: what is wave-particle duality</a>
</strong>
</em>
</p>
<hr>
<p>From even before lasers were created, physicists thought they knew exactly how coherent a laser could be. Now, two new studies (one by myself and colleagues in Australia, the other by a team of American physicists) have shown <em>C</em> can be much greater than was previously thought possible. </p>
<h2>How coherent can a laser get?</h2>
<p>The coherence <em>C</em> is roughly the number of photons (particles of light) emitted consecutively into the beam with the same phase (all waving together). For typical lasers, <em>C</em> is very large. Billions of photons are emitted into the beam, all waving together.</p>
<p>This high degree of coherence is what makes lasers suitable for high-precision applications. For example, in many <a href="https://theconversation.com/explainer-quantum-computation-and-communication-technology-7892">quantum computers</a>, we will need a highly coherent beam of light at a specific frequency to control a large number of <a href="https://theconversation.com/double-or-nothing-could-quantum-computing-replace-moores-law-362">qubits</a> over a long period of time. Future quantum computers may need light sources with even greater coherence.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-quantum-computation-and-communication-technology-7892">Explainer: quantum computation and communication technology </a>
</strong>
</em>
</p>
<hr>
<p>Physicists have long thought the maximum possible coherence of a laser was governed by an iron rule known as the Schawlow-Townes limit. It is named after the two American physicists who derived it <a href="https://journals.aps.org/pr/abstract/10.1103/PhysRev.112.1940">theoretically in 1958</a> and went on to win Nobel prizes for their laser research. They stated that the coherence <em>C</em> of the beam cannot be greater than the square of <em>N</em>, the number of energy-excitations inside the laser itself. (These excitations could be photons, or they could be atoms in an excited state, for example.)</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/364934/original/file-20201022-19-mxv0j2.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Laser beams contain huge numbers of photons all waving together.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Laser#/media/File:Lasers.JPG">Peng Jiajie / Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Raising the limit</h2>
<p>Now, however, two theory papers have appeared that overturn the Schawlow-Townes limit by reimagining the laser. Basically, Schawlow and Townes made assumptions about how energy is added to the laser (gain) and how it is released to form the beam (loss). </p>
<p>The assumptions made sense at the time, and still apply to lasers built today, but they are not required by quantum mechanics. With the amazing advances that have occurred in quantum technology in the past decade or so, our imagination need not be limited by standard assumptions. </p>
<p>The first paper, published this week in <a href="https://www.nature.com/articles/s41567-020-01049-3">Nature Physics</a>, is by my group at Griffith University and a collaborator at Macquarie University. We introduced a new model, which differs from a standard laser in both gain and loss processes, for which the coherence <em>C</em> is as big as <em>N</em> to the fourth power. </p>
<p>In a laser containing as many photons as a regular laser, this would allow <em>C</em> to be much bigger than before. Moreover, we show a laser of this kind could in principle be built using the technology of superconducting qubits and circuits which is used in the currently <a href="https://theconversation.com/why-are-scientists-so-excited-about-a-recently-claimed-quantum-computing-milestone-124082">most successful quantum computers</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-are-scientists-so-excited-about-a-recently-claimed-quantum-computing-milestone-124082">Why are scientists so excited about a recently claimed quantum computing milestone?</a>
</strong>
</em>
</p>
<hr>
<p>The second paper, by a team at the University of Pittsburgh, has not yet been published in a peer-reviewed journal but recently appeared on the physics <a href="https://arxiv.org/abs/2009.03333">preprint archive</a>. These authors use a somewhat different approach, and end up with a model in which <em>C</em> increases like <em>N</em> to the third power. This group also propose building their laser using superconducting devices. </p>
<p>It is important to note that, in both cases, the laser would not produce a beam of visible light, but rather microwaves. But, as the authors of this second paper note explicitly, this is exactly the type of source required for superconducting quantum computing.</p>
<h2>Can we get even higher?</h2>
<p>The standard limit has <em>C</em> proportional to <em>N</em> ²; the Pittsburgh group achieved <em>C</em> proportional to <em>N</em> ³, and our model has <em>C</em> proportional to <em>N</em> ⁴. Could some other model achieve an even higher coherence? </p>
<p>No, at least not if the laser beam has the ideal coherence properties we expect from a laser beam. This is another of the results proven in our Nature Physics paper. Coherence proportional to the fourth power of the number of photons is the best that quantum mechanics allows, and we believe it is physically achievable. </p>
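To get a feel for how far apart these scalings are, here is a toy comparison. The excitation number N is arbitrary, chosen only to make the gap visible; real lasers store far more excitations.

```python
# Toy comparison of the three coherence scalings discussed above.
N = 10_000  # energy excitations inside the laser (illustrative)

c_standard = N**2    # Schawlow-Townes limit
c_cubic = N**3       # the Pittsburgh model
c_heisenberg = N**4  # the Heisenberg limit

print(f"C ~ N^2: {c_standard:.0e}")
print(f"C ~ N^3: {c_cubic:.0e}")
print(f"C ~ N^4: {c_heisenberg:.0e}")
print(f"Heisenberg gain over the standard limit: {c_heisenberg // c_standard:.0e}x")
```

Because the gain over the standard limit is itself N², it grows without bound as the laser stores more excitations.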
<p>An ultimate achievable limit that surpasses what standard methods can reach is known as a Heisenberg limit. This is because it is related to <a href="https://theconversation.com/explainer-heisenbergs-uncertainty-principle-7512">Heisenberg’s uncertainty principle</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-heisenbergs-uncertainty-principle-7512">Explainer: Heisenberg’s Uncertainty Principle</a>
</strong>
</em>
</p>
<hr>
<p>A Heisenberg-limited laser, as we call it, would not be just a revolution in the design and performance of lasers. It also requires a fundamental rethinking of what a laser is: not restricted to the current kinds of devices, but any device which turns inputs with little coherence into an output of very high coherence. </p>
<p>It is the nature of revolutions that it is impossible to tell whether they will succeed when they begin. But if this one does, and standard lasers are supplanted by Heisenberg-limited lasers, at least in some applications, then these two papers will be remembered as the first shots.</p>
<p class="fine-print"><em><span>Howard Wiseman receives funding from the Australian Research Council. </span></em></p>For 60 years, physicists thought they knew exactly how coherent a laser could get. Now the ultimate quantum limit to laser coherence has been found, and it’s much much bigger than anybody thought.Howard Wiseman, Director, Centre for Quantum Dynamics, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1263442019-12-05T18:33:25Z2019-12-05T18:33:25ZWe’re using lasers and toaster-sized satellites to beam information faster through space<figure><img src="https://images.theconversation.com/files/305310/original/file-20191205-16520-78opnr.jpg?ixlib=rb-1.1.0&rect=63%2C0%2C4185%2C2828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The electromagnetic spectrum we can access with current technologies is completely occupied. This means experts have to think of creative ways to meet our rocketing demands for data.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/nasa2explore/14812017458/">NASA Johnson/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>Satellites are becoming increasingly important in our lives, as they help us meet a demand for more data, exchanged at higher speeds. This is why we are exploring new ways of improving satellite communication.</p>
<p>Satellite technology is used to navigate, forecast the weather, monitor Earth from space, receive TV signals from space, and connect to remote places through tools such as satellite phones and <a href="https://www.nbnco.com.au/learn/network-technology/sky-muster-explained">NBN’s Sky Muster satellites</a>. </p>
<p>All these communications use radio waves. These are electromagnetic waves that propagate through space and, to a certain degree, through obstacles such as walls.</p>
<p>Each communication system uses a frequency band allocated for it, and each band makes up part of the <a href="https://imagine.gsfc.nasa.gov/science/toolbox/emspectrum1.html">electromagnetic spectrum</a> – which is the name given to the range of all types of electromagnetic radiation.</p>
<p>But the electromagnetic spectrum we are able to use with current technology is a finite resource, and is now completely occupied. This means old services have to make room for new ones, or higher frequency bands have to be used. </p>
<p>While this poses technological challenges, one promising way forward is optical communication. </p>
<h2>Communication with lasers</h2>
<p>Instead of using radio waves to carry the information, we can use light from lasers as the carrier. While technically still part of the electromagnetic spectrum, optical frequencies are significantly higher, which means we can use them to transfer data at higher speeds.</p>
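As a rough illustration of why optical carriers help, the carrier frequency is f = c/λ. The 1550 nm telecom wavelength and the 30 GHz Ka-band figure below are common example values, used here as assumptions rather than parameters of any particular system:

```python
C_LIGHT = 3.0e8  # speed of light, m/s

def carrier_frequency_hz(wavelength_m: float) -> float:
    """Carrier frequency f = c / wavelength."""
    return C_LIGHT / wavelength_m

radio = 30e9                             # Ka-band radio carrier, 30 GHz (illustrative)
optical = carrier_frequency_hz(1550e-9)  # 1550 nm telecom laser, ~1.9e14 Hz

print(f"optical carrier is ~{optical / radio:,.0f}x the radio carrier frequency")
```

A carrier thousands of times faster leaves correspondingly more room for modulating data onto the beam.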
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/twisted-light-could-dramatically-boost-internet-speeds-57340">Twisted light could dramatically boost internet speeds</a>
</strong>
</em>
</p>
<hr>
<p>However, one disadvantage is that laser light cannot pass through walls, and can even be blocked by clouds. While this is problematic on Earth, and for communication between satellites and Earth, it’s no problem for communication between satellites.</p>
<p>On Earth, optical communication via fibre optic cables connects continents and provides enormous data exchanges. This is the technology that allows <a href="https://www.vox.com/2015/4/30/11562024/too-embarrassed-to-ask-what-is-the-cloud-and-how-does-it-work">the cloud</a> to exist, and online services to be provided. </p>
<p>Optical communication between satellites doesn’t use fibre optic cables, but involves light propagating through space. This is called “free space optical communication”, and can be used to not only deliver data from satellites to the ground, but also to connect satellites in space. </p>
<p>In other words, free space optical communication will provide the same massive connectivity in space we already have on Earth. </p>
<p>Some systems such as the <a href="https://artes.esa.int/edrs-global">European Data Relay System</a> are already operational, and others like SpaceX’s <a href="https://www.space.com/see-spacex-starlink-satellites-in-night-sky.html">Starlink</a> continue to be developed.</p>
<p>But there are still many challenges to overcome, and we’re limited by current technology. My colleagues and I are working on making optical, as well as radio-frequency, data links even faster and more secure.</p>
<h2>CubeSats</h2>
<p>So far, a lot of effort has gone into the research and development of radio-frequency technology. Because the technology is so mature, we know its data rates are at their physical limit and can’t be increased further. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/305308/original/file-20191205-16538-drnyo5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The first CubeSats were launched in 2003 on a Russian Rockot launch vehicle.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/alloyjared/13278111165/">Jared/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>While a single radio-frequency link can provide data rates of 10 Gbps with large antennas, an optical link can achieve rates 10 to 100 times higher, using antennas that are 10 to 100 times smaller.</p>
<p>These small antennas are in fact optical lenses, and their compact size allows them to be integrated into small satellites called CubeSats. </p>
<p>CubeSats are no larger than a shoebox or a toaster, yet they can carry high-speed data links to other satellites or to the ground.</p>
<p>They are currently used for a wide range of tasks including Earth observation, communications and scientific experiments in space. And while they can’t provide every service from space, they play an important role in current and future satellite systems.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-problems-with-small-satellites-and-what-australias-space-agency-can-do-to-help-108156">The problems with small satellites – and what Australia's Space Agency can do to help</a>
</strong>
</em>
</p>
<hr>
<p>Another advantage of optical communication is increased security. The light from a laser forms a narrow beam, which has to be pointed from a sender to a receiver. Because the beam is so narrow, the communication doesn’t interfere with other receivers, and it’s very hard, if not impossible, to eavesdrop on. This makes optical systems more secure than radio-frequency systems. </p>
<p>Optical communication can also be used for <a href="https://qt.eu/understand/underlying-principles/quantum-key-distribution-qkd/">Quantum Key Distribution</a>. This technology allows the secure exchange of the encryption keys needed for safe communications.</p>
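The idea behind Quantum Key Distribution can be sketched with a toy model of the BB84 protocol (the classic QKD scheme, named here as an illustration; the sizes, the random seed and the absence of noise or an eavesdropper are all simplifying assumptions):

```python
import numpy as np

# Toy sketch of the BB84 quantum key distribution protocol (illustrative
# only: real QKD encodes bits in photon states sent over an optical link,
# and needs an authenticated classical channel; none of that is modelled).
rng = np.random.default_rng(0)
n = 1000
alice_bits = rng.integers(0, 2, n)    # Alice's random raw bits
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal basis
bob_bases = rng.integers(0, 2, n)     # Bob measures in randomly chosen bases

# When the bases match, Bob reads Alice's bit; otherwise his outcome is random.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: publicly compare bases and keep only the matching positions.
key_a, key_b = alice_bits[match], bob_bits[match]
print(f"sifted key: {len(key_a)} of {n} raw bits")  # about half survive
```

An eavesdropper who measures the photons in transit would disturb them, raising the error rate in the sifted key and revealing the intrusion, which is where the security guarantee comes from.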
<h2>What can we expect from this?</h2>
<p>While it’s exciting to develop systems for space, and to launch satellites, the real benefit of satellite systems is felt on Earth. </p>
<p>High-speed communication provided by optical data links will improve connectivity for all of us. Notably, remote areas that currently have relatively slow connections will gain better access to remote health care and remote learning. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-new-technologies-are-shaking-up-health-care-42318">How new technologies are shaking up health care</a>
</strong>
</em>
</p>
<hr>
<p>Better data links will also let us deliver images and videos from space with less delay and higher resolution. This will improve the way we manage our resources, including <a href="https://www.ga.gov.au/scientific-topics/community-safety/flood/wofs">water</a>, agriculture and forestry. </p>
<p>They will also <a href="https://www.ga.gov.au/scientific-topics/earth-obs/case-studies/mapping-bushfires">provide vital real-time information in disaster scenarios such as bushfires</a>. The potential applications of optical communication technology are vast.</p>
<h2>Banding knowledge together</h2>
<p>Working in optical satellite communication is challenging, as it combines many different fields and research areas including telecommunication, photonics and manufacturing. </p>
<p>Currently, our technology is far from achieving what is theoretically possible, and there’s great room for improvement. This is why there’s a strong focus on collaboration. </p>
<p>In Australia, there are two major programs facilitating this: the Australian Space Agency, run by the federal government, and the <a href="https://smartsatcrc.com/">SmartSat Cooperative Research Centre</a> (CRC), also supported by the federal government.</p>
<p>Through the SmartSat CRC program, my colleagues and I will spend the next seven years tackling a range of applied research problems in this area.</p>
<p class="fine-print"><em><span>Gottfried Lechner works for the University of South Australia and the SmartSat CRC. He receives funding from the Australian Research Council, Defence and the Department of Industry, Innovation and Science. </span></em></p>Free space optical communication will allow the same connectivity in space we already have on Earth. And this will provide benefits across a number of sectors.Gottfried Lechner, Associate Professor and Director of the Institute for Telecommunications Research, University of South Australia, University of South AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1042822018-10-03T05:14:24Z2018-10-03T05:14:24ZArthur Ashkin’s optical tweezers: the Nobel Prize-winning technology that changed biology<figure><img src="https://images.theconversation.com/files/238983/original/file-20181002-85608-q4o4py.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/abstract-red-laser-beam-isolated-on-631003721?src=ynKx9mYkRVMRbxz64yP0sg-1-5">Maryna Stamatova/Shutterstock</a></span></figcaption></figure><p>The 2018 Nobel Prize in Physics has been awarded to three pioneers of the laser technology that has made a big impact on the world. Gérard Mourou and Donna Strickland were recognised for their method of generating high-intensity, ultra-short optical pulses, which today is used in laser eye surgery. The other recipient was Arthur Ashkin for his groundbreaking work on optical tweezers. This method of using light to capture and manipulate tiny objects has changed the way we’re able to study microscopic life. </p>
<p>But how can light be used to move matter? The energy carried by light is fundamental to life on our planet. But as well as energy, light also carries momentum, which gives rise to <a href="https://phys.org/news/2018-08-momentum-year-mystery.html">radiation pressure</a>. This means that if I shine a laser pointer at you, in addition to making you ever so slightly hotter, it will push you away with a very small force.</p>
<p>To use this force to lift something as big as, say, an apple would be almost impossible. The laser power required would run to many megawatts, probably enough to vaporise the apple before it got off the ground. But when an object gets ten times smaller in each direction it also gets 1,000 times lighter. So moving from something the size of an apple to a single cell means that the laser power needed to lift it falls from megawatts to milliwatts, a similar power to that of a laser pointer.</p>
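The scaling argument above can be checked with a quick back-of-envelope calculation, assuming a fully absorbing object so the radiation force is simply F = P/c (the masses below are illustrative guesses, not figures from the article):

```python
# Back-of-envelope check: laser power whose radiation pressure balances
# gravity, assuming full absorption so F = P / c. The masses (a 100 g
# apple, a ~10 micrometre cell of roughly water density) are assumed.
c = 3.0e8   # speed of light, m/s
g = 9.81    # gravitational acceleration, m/s^2

def power_to_levitate(mass_kg):
    """Laser power (W) whose radiation force P/c equals the weight m*g."""
    return mass_kg * g * c

apple = power_to_levitate(0.1)    # a 100 g apple
cell = power_to_levitate(1e-12)   # ~10 um cube of water: 1e-15 m^3 x 1000 kg/m^3

print(f"apple: {apple / 1e6:.0f} MW")   # hundreds of megawatts
print(f"cell:  {cell * 1e3:.1f} mW")    # a few milliwatts
```

The thousand-fold drop in mass at each tenfold reduction in size is what carries the answer from power-station territory down to laser-pointer territory.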
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/238998/original/file-20181002-85620-rx6l89.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Arthur Ashkin.</span>
<span class="attribution"><span class="source">Nobel Foundation</span></span>
</figcaption>
</figure>
<p>As long ago as 1970, Ashkin (working at the world famous Bell Telephone Laboratories) began studying how you could use radiation pressure to <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.24.156">accelerate and trap</a> individual particles. Over the next 15 years he refined his ideas, brilliantly making the laser systems involved ever less complicated as time went on.</p>
<p>In 1986, working with Steven Chu (who later won his own Nobel Prize in Physics in 1997 for work on trapping atoms and ultimately became US secretary for energy) he published his <a href="https://www.osapublishing.org/ol/abstract.cfm?uri=ol-11-5-288">seminal paper</a> on what we now call optical tweezers. In this paper, Ashkin showed that if the laser beam was focused very tightly using a microscope then, rather than pushing objects away with radiation pressure, it would counter-intuitively attract particles towards it. When the laser beam was then moved, the particles would follow it, held in the focus of the beam at all times. </p>
<p>Since then, optical tweezers have been used by many physicists and engineers, who have extended the technique so that it can <a href="https://www.sciencedirect.com/science/article/abs/pii/S0030401807008784">trap many particles at once</a> and even transform the tweezers into <a href="https://link.springer.com/article/10.1023/A:1006911428303">optical spanners</a> that cause the objects to spin. This latter area is one of my own research interests, and I remember, as a young researcher, the thrill of Ashkin asking me for a copy of my talk at a conference.</p>
<h2>Impact in biology</h2>
<p>Perhaps the greatest impact of optical tweezers has been in biophysics. Optical tweezers can be used to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2408388/">sort healthy cells</a> from infected ones, or identify those that <a href="https://www.nature.com/articles/s41598-017-13205-6">might be cancerous</a>. It is also possible to use optical tweezers to measure both the <a href="https://arxiv.org/abs/1507.05321">minute movements</a> of a trapped object (equivalent to a few atoms in diameter) and <a href="https://link.springer.com/chapter/10.1007/978-3-642-02525-9_32">similarly tiny forces</a>. </p>
<p>Turning optical tweezers from a manipulation tool into a measurement device has allowed biologists to study the workings of the <a href="https://pubs.acs.org/doi/full/10.1021/acs.chemrev.6b00638">individual molecular motors</a> which are responsible for movement in the biological world. Such motors transport chemical cargo within cells, allow cells to swim and, when acting collectively, allow whole creatures to move.</p>
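As a rough sketch of how a trap becomes a force sensor, one standard approach (our choice for illustration, not a method described in the article) is equipartition calibration: near its centre the trap acts as a tiny spring, the bead's thermal jitter reveals the spring stiffness, and force then follows from Hooke's law. All numbers below are assumed, typical values:

```python
import numpy as np

# Equipartition calibration of an optical trap: for a bead in a harmonic
# trap, k_B * T = kappa * <x^2>, so the variance of the bead's thermal
# position fluctuations gives the trap stiffness kappa. Illustrative only;
# kappa_true and the displacement are assumed, typical values.
kB, T = 1.380649e-23, 293.0            # Boltzmann constant (J/K), room temp (K)
rng = np.random.default_rng(1)

kappa_true = 5e-5                      # assumed trap stiffness, N/m
sigma = np.sqrt(kB * T / kappa_true)   # thermal position spread, m
x = rng.normal(0.0, sigma, 100_000)    # simulated bead position record

kappa = kB * T / x.var()               # stiffness recovered from the data
force = kappa * 10e-9                  # Hooke's law: force at 10 nm displacement
print(f"stiffness ~ {kappa:.1e} N/m, force at 10 nm ~ {force * 1e12:.2f} pN")
```

Sub-piconewton forces measured this way are exactly the scale at which single molecular motors operate.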
<p>Ashkin showed us all just what can be done by having an idea and then seeing it through to completion. For years he worked in a niche field, pioneering and then refining his ideas, inventing techniques that scientists now use as essential tools of their trade - thank you, Arthur.</p>
<p class="fine-print"><em><span>Miles Padgett receives funding from the Engineering and Physical Sciences Research Council and the European Union
Miles Padgett is employed by the University of Glasgow</span></em></p>Using lasers to trap and move particles changed the way we’re able to study microscopic life.Miles Padgett, Kelvin Chair of Natural Philosophy (Physics and Astronomy), University of GlasgowLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1022722018-08-29T16:36:07Z2018-08-29T16:36:07ZAnish Kapoor’s “Cloud Gate”: playing with light and returning to Earth, our finite world<figure><img src="https://images.theconversation.com/files/233825/original/file-20180828-75993-1hc8e30.jpg?ixlib=rb-1.1.0&rect=8%2C0%2C1488%2C750&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Anish Kapoor, _Cloud Gate_, 2004. Stainless steel, 1,006 x 2,012 x 1,280 cm.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/--mike--/6143335396/">Mike Warot/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Every day thousands of people play with Anish Kapoor’s <a href="http://anishkapoor.com/210/cloud-gate"><em>Cloud Gate</em></a> in Chicago’s Millennium Park. Affectionately nicknamed “the bean” for obvious reasons, the immense mirrored sculpture is made up of 168 stainless steel plates welded together and placed on the ground. It’s about 10 meters tall, with a base of about 20 by 13 meters. That’s what it takes – smaller is not possible. Such an impressive size is needed for the sculpture to impose itself in the Millennium Park and to achieve the ambition of the project, to play with the light and mix all the reflections of Chicago and its environment, day and night.</p>
<p>It’s striking: the city of Chicago is the right place for <em>Cloud Gate</em>, with a very simple surrounding geography. On one side is the Midwest, the flat American plain stretching over seemingly infinite distances. Unlike France, where the landscape can change completely within just a few kilometres, in the Midwest you can drive for hours and nothing will change. On the other side of <em>Cloud Gate</em> is Lake Michigan, its surface equivalent to 10% of the area of France. Water stretches to the horizon on one side and endless cornfields on the other. In this flat world, the city’s skyscrapers point to the sky. More than 100 rise above 150 meters, the tallest reaching 442 meters. At night, it’s just magical.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/GBHrpd26JIw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p><em>Cloud Gate</em> is in Millennium Park, between downtown Chicago and Lake Michigan. Depending on the position you take, <em>Cloud Gate</em> is the mirror you choose for yourself. You can mix the elements as you wish: the sky, the city, the space above the lake and, of course, the skyscrapers. It is gorgeous night or day, when the weather is good and when it is not. You can be in the image you construct or not. If you want, you can even walk underneath to hide everything. Thousands of photos on the Web show all the possibilities.</p>
<h2>Geometric optics rules the game with light</h2>
<p>Mirrors and lenses are two pillars of geometric optics, the basic tools for changing the direction of light rays. <em>Transmission</em>, <em>reflection</em> and <em>refraction</em> are the associated words. What we want to look at and how we observe determine how we build and assemble these optical elements. Professionals of this game are astronomers and microscopists. Anish Kapoor does the very same thing with <em>Cloud Gate</em>. But anyone who plays with light does not do so without consequences. Light brings into our eyes pictorial information about the world at the speed… of light. Having this information, seeing it, determines our lives. Nothing less.</p>
<p>Mirrors and lenses allow us to see at different scales: from the infinitely large with the <a href="http://hubblesite.org/">Hubble Space Telescope</a> to the invisibly small all around us with optical microscopes. Imaging at different scales has always been one of the major issues in science. Anish Kapoor approached sculpture with this idea, so he ended up building curved mirrors. This is no surprise for a physicist, but what Kapoor does with it artistically is amazing. Explaining why, through a physicist’s eyes, is the subject of this article.</p>
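The kind of shrunken, upright reflection a convex mirror produces can be estimated with the thin-mirror equation of geometric optics; the 10 m radius of curvature below is a made-up illustrative value, not <em>Cloud Gate</em>'s actual geometry:

```python
# Thin-mirror sketch of why a convex mirror fits a whole city into view.
# Mirror equation: 1/i + 1/o = 1/f, with f = -R/2 for a convex surface.
# The radius of curvature is an assumed, illustrative value.
def convex_mirror_image(obj_dist_m, radius_m):
    """Return (image distance, magnification); negative distance = virtual."""
    f = -radius_m / 2.0
    i = 1.0 / (1.0 / f - 1.0 / obj_dist_m)
    return i, -i / obj_dist_m

i, m = convex_mirror_image(4.0, 10.0)  # a viewer standing 4 m from the mirror
print(f"virtual image {abs(i):.2f} m behind the surface, magnification {m:.2f}")
```

The image is virtual, upright and reduced, which is why the viewer, the skyline and the sky can all appear in the sculpture at once.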
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=839&fit=crop&dpr=1 600w, https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=839&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=839&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1054&fit=crop&dpr=1 754w, https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1054&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/228084/original/file-20180717-44103-k75w16.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1054&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Moon phases drawn by Galileo in 1616. (Wikimedia).</span>
</figcaption>
</figure>
<h2>Galileo’s telescope changes the view of the universe</h2>
<p>The most famous example of this game, in which one looks beyond the visible with an optical instrument, is Galileo’s observation of the surface of the Moon with a telescope in 1609. Two references help to situate the importance of the event. First of all, in the 1610 treatise <a href="https://en.wikipedia.org/wiki/Sidereus_Nuncius">“Sidereus Nuncius”</a> (Messenger of the Stars), Galileo makes it evident on the very first page:</p>
<blockquote>
<p>“Great, certainly, are the subjects that in this thin treatise I propose to each of those who observe Nature, so that they examine and contemplate them. Great, I say, first because of the importance of matter itself, then because of its unprecedented novelty over the centuries, finally, also because of the Instrument through which these subjects have offered themselves to our perception.”</p>
</blockquote>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=725&fit=crop&dpr=1 600w, https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=725&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=725&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=911&fit=crop&dpr=1 754w, https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=911&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/228085/original/file-20180717-44079-1kwhkub.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=911&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"><em>Y’a quelqu’un ?</em> Painting by Pierre Rouillon 2013 (image with permission of artist).</span>
</figcaption>
</figure>
<p>As an experimental physicist, I particularly appreciate the capital I of the word <em>Instrument</em>. As a teacher, I have always noted the phrase “have offered themselves to our perception”. The result of the observation, in other words, is that any viewer in Galileo’s time, even one previously convinced that the Moon is an ideal celestial sphere, would see it like the Earth, “covered on all sides with enormous protuberances, deep hollows, and sinuosities”. Too late, you should not have looked… Believe what you may, but take one look at the Moon through a telescope and it will immediately lose its status as the ideal sphere and become like the Earth itself. Your vision of the world and the universe will be irremediably transformed, as Galileo emphasised in his book.</p>
<p>The philosopher Hannah Arendt, in <a href="https://en.wikipedia.org/wiki/The_Human_Condition_(book)"><em>The Human Condition</em></a>, underlines the radical break that this Galileo experience introduces. For her, three events founded modernity: the discovery by Europeans of the New World, the Reformation and the invention of the astronomical telescope:</p>
<blockquote>
<p>“What Galileo did and what nobody had done before was to use the telescope in such a way that the secrets of the universe were delivered to human cognition ‘with the certainty of sense-perception’, that is, he put within the grasp of an earth-bound creature and its body-bound senses what had seemed forever beyond his reach, at best open to the uncertainties of speculation and imagination.”</p>
</blockquote>
<p>We do not play with light with impunity – it opens doors and transports us far away. Today, the images of the Hubble telescope show us the incredible diversity of deep space.</p>
<h2>With <em>Cloud Gate</em> of Anish Kapoor, a return to Earth</h2>
<p>In this context, which combines the technical capacity of observation and its overwhelming implications for our vision of our situation on Earth and in the universe, how can Anish Kapoor’s mirror sculpture be approached? In <a href="http://the-talks.com/interview/anish-kapoor/">“I have nothing to say”</a>, Anish Kapoor insists on the link between his mirrors and their place of installation, the space they open by deciphering the world around them.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/228086/original/file-20180717-44079-sdd6gs.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"><em>Cloud Gate</em> at night (Dave Wilson).</span>
</figcaption>
</figure>
<p>Chicago, with its elementary geography of an immense plain, lake and magnificent man-made urban landscape, is made for <em>Cloud Gate</em>. The sculpture, in a dialogue with the skyscrapers that create verticality in this flat space, is an instrument that allows everyone to play with light and propose all the mixtures, all the distortions, all the reconstructions of this landscape. It is a new scope which, by distortion and dilatation, changes how we see ourselves in the world. This sculpture processes the elements of the landscape around the viewer, and changes his or her perception. The skyline is now in front of us. We are in immediate proximity to the massive buildings behind Michigan Avenue. This new kind of scale focuses our attention on what is immediately around us.</p>
<p>The contrast with Hannah Arendt’s analysis of Galileo’s observation of the Moon through his telescope is radical:</p>
<blockquote>
<p>“We always handle nature from a point in the universe outside the earth. Without actually standing where Archimedes wished to stand, still bound to the earth through the human condition, we have found a way to act on the earth and within terrestrial nature as though we dispose of it from outside, from the Archimedean point. And even at the risk of endangering the natural life process we expose the earth to universal, cosmic forces alien to nature’s household.”</p>
</blockquote>
<p>You don’t play with light without consequences…</p>
<p>We’re now back on Earth, but times have changed. With the exception of Elon Musk, perhaps, we all now know our planet is a finite one and the only one.</p>
<h2>Sculpting one’s environment</h2>
<p>In its simplicity, <em>Cloud Gate</em> is just a giant mirror, after all. Its curved shape provides all kinds of mirrors with different curves and viewing angles. This is how everyone builds their own image of Chicago. Unlike Galileo’s telescope, the viewer sets up his or her own presence, choosing a place for themselves and for others during the construction of the image. For a moment, we all become sculptors of our own environment and its ephemeral representation. And one day I will have to write about how the discovery of exoplanets is again changing everything… after an artist seizes it.</p>
<p class="fine-print"><em><span>Joël Chevrier does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research organisation.</span></em></p>Anish Kapoor made “Cloud Gate”, a giant bean-shaped mirror in Chicago. Visitors play with the light in the city and its surroundings, where our future lies.Joël Chevrier, Professor of Physics, Université Grenoble Alpes (UGA)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/966942018-06-12T19:59:58Z2018-06-12T19:59:58ZIn physics, a famous paradox that hangs by a thread of light…<figure><img src="https://images.theconversation.com/files/219379/original/file-20180517-155573-15imsa5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> </figcaption></figure><p>Imagine a metal bar that has been heated at one end. Instead of the heat gradually spreading over its entire length, the bar eventually becomes hot again where it was originally heated. The fact that, paradoxically, a complex system returns to its original state instead of evolving toward equilibrium has drawn the attention of physicists for more than 60 years. Thanks to a series of advances with optical fibres, our French-Italian team of researchers has just taken a crucial step toward better understanding this phenomenon, with observations much richer and more complete than before.</p>
<p><a href="https://www.researchgate.net/publication/324162686_Fibre_multi-wave_mixing_combs_reveal_the_broken_symmetry_of_Fermi-Pasta-Ulam_recurrence">Our publication</a>, which describes this progress, was featured on the cover of <em>Nature Photonics</em>. These are not only important results in fundamental physics but also of real interest to the general public – the process in question is at the heart of phenomena such as the formation of rogue ocean waves and the design of high-precision optical clocks.</p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=744&fit=crop&dpr=1 600w, https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=744&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=744&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=934&fit=crop&dpr=1 754w, https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=934&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/217226/original/file-20180502-153891-xotn7t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=934&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Enrico Fermi.</span>
<span class="attribution"><a class="source" href="https://fr.wikipedia.org/wiki/Enrico_Fermi#/media/File:Enrico_Fermi_1943-49.jpg">Department of Energy/Wikipedia</a></span>
</figcaption>
</figure>
<h2>The Manhattan Project at the origin of the paradox</h2>
<p>The paradox was first discovered in 1954 by leading scientists, some of whom had been involved in the <a href="https://www.britannica.com/event/Manhattan-Project">Manhattan Project</a>, which provided the United States with the atomic bomb. They were <a href="https://en.wikipedia.org/wiki/Stanislaw_Ulam">Stanislaw Ulam</a>, <a href="https://en.wikipedia.org/wiki/John_Pasta">John Pasta</a> and <a href="https://en.wikipedia.org/wiki/Mary_Tsingou">Mary Tsingou</a>, together with <a href="https://en.wikipedia.org/wiki/Enrico_Fermi">Enrico Fermi</a>, winner of the 1938 Nobel Prize in physics. Fermi had the idea of using one of the first-ever computers to explore complex physical phenomena that could not be solved by hand calculation. This marked the beginning of a revolution – numerical simulation – that has become essential in all areas of physics.</p>
<p>But for Fermi and his colleagues, the results of the first computer experiment revealed completely unexpected behaviour: the system they were studying returned to its initial state.</p>
<p>Since then, the problem has been studied and written about extensively. The repeated efforts of physicists to solve it have been particularly fruitful for the many branches of physics where it can be observed. In particular, they led to the discovery of <a href="https://en.wikipedia.org/wiki/Soliton">solitons</a>: pulses that propagate without deformation, which can be observed in oceans, plasma physics and optics.</p>
<p>Some models predicted that the Fermi-Pasta-Ulam phenomenon was actually cyclical, with the system returning to its initial state several times. But the experiments that had demonstrated the phenomenon had never detected more than a single return to the original state: the system’s intrinsic losses damped it too quickly.</p>
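The 1954 numerical experiment is easy to reproduce on a laptop today. Below is a minimal sketch of the so-called alpha-model (32 masses, springs with a quadratic correction, fixed ends); the time step and step count are our own illustrative choices, and the run is deliberately short, far less than the time needed to see a full recurrence:

```python
import numpy as np

# Minimal sketch of the Fermi-Pasta-Ulam(-Tsingou) "alpha" lattice:
# 32 unit masses, force law d + alpha*d^2 per spring extension d, fixed
# ends. All energy starts in the lowest normal mode, as in 1954.
N, alpha, dt = 32, 0.25, 0.05
i = np.arange(1, N + 1)
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(i, i) * np.pi / (N + 1))
omega = 2.0 * np.sin(i * np.pi / (2 * (N + 1)))  # normal-mode frequencies

def forces(q):
    qe = np.concatenate(([0.0], q, [0.0]))  # fixed boundary conditions
    d = np.diff(qe)                         # spring extensions
    f = d + alpha * d**2                    # linear + quadratic force law
    return f[1:] - f[:-1]

def mode_energy(q, p):
    Q, P = modes @ q, modes @ p             # project onto normal modes
    return 0.5 * (P**2 + (omega * Q)**2)

q = np.sin(i * np.pi / (N + 1))             # all energy placed in mode 1
p = np.zeros(N)
E0 = mode_energy(q, p)

for _ in range(20000):                      # velocity-Verlet integration
    p += 0.5 * dt * forces(q)
    q += dt * p
    p += 0.5 * dt * forces(q)

E = mode_energy(q, p)
print("share of energy in mode 1 at start:", E0[0] / E0.sum())
print("share of energy in mode 1 at end:  ", E[0] / E.sum())
```

Integrated long enough, the energy that leaks into higher modes flows back, and the mode-1 share climbs back toward one instead of spreading evenly over all modes as equilibrium would demand.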
<h2>Optical fibres observe the paradox</h2>
<p>Our research team, based at the University of Lille’s <a href="http://www.phlam.univ-lille1.fr/">PHLAM Laboratory</a> and joined by an Italian theorist from the University of Ferrara, has managed to find a way to compensate for these losses over more than 8 kilometres of optical fibre by adding a light source of a very different colour that served as an energy reservoir. This unprecedented approach allowed us to observe, for the first time, a second return to the initial state. The experiment took place at the <a href="http://fibertech.univ-lille.fr/presentation">FiberTech Lille</a> facility, part of the <a href="http://www.ircica.univ-lille1.fr/">IRCICA</a> research institution.</p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/217185/original/file-20180502-153914-1q248c6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Light scattering in an optical fibre.</span>
</figcaption>
</figure>
<p>Thanks to an ingenious device that exploited the scattering of light by impurities within the fibre, known as <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/atmos/blusky.html">Rayleigh scattering</a>, we were able to measure not only the intensity of the light but also what optics specialists call its phase, along the whole length of the fibre. We then observed an unprecedented behaviour: the recurrences shifted from one cycle to the next, with the maxima taking the place of the minima.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=607&fit=crop&dpr=1 600w, https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=607&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=607&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=762&fit=crop&dpr=1 754w, https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=762&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/217259/original/file-20180502-153888-lgno7r.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=762&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Several Fermi-Pasta-Ulam recurrences, with alternating maxima (red) and minima (light blue).</span>
</figcaption>
</figure>
<p>This result, predicted by some models, opens a new avenue for understanding this phenomenon, which is at the root of many other complex processes, such as <a href="https://en.wikipedia.org/wiki/Frequency_comb">frequency combs</a>. These “laser rulers”, which have advanced swiftly in recent years, bring light into a large number of new applications, ranging from distance measurement for autonomous cars to the discovery of exoplanets, to name just a few.</p><img src="https://counter.theconversation.com/content/96694/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Arnaud Mussot has received funding from the Agence Nationale de la Recherche (ANR), the Labex Centre Europeen pour les Mathematiques, la Physique et leurs Interactions (CEMPI), the FLUX team and the Hauts-de-France region. He is a member of the Institut Universitaire de France.</span></em></p><p class="fine-print"><em><span>Matteo Conforti has received funding from Agence Nationale de la Recherche (ANR) under projects NoAWE and CEMPI, and the Hauts-de-France region.
</span></em></p><p class="fine-print"><em><span>Stefano Trillo receives funding from Italian Ministry of University and Research (MIUR) under PRIN action, and from University of Ferrara under FAR action.</span></em></p>In 1954, three scientists observed a paradox to which they gave their name: the Fermi-Pasta-Ulam recurrence. Now, fibre optics are on the way to finally providing an explanation.Arnaud Mussot, Professeur au Laboratoire de Physique des Lasers Atomes et Molécules (PHLAM), CNRS UMR8523, IRCICA, Université de LilleMatteo Conforti, Chercheur au Laboratoire de Physique des Lasers, Atomes et Molécules, Université de LilleStefano Trillo, Professor of Optics, University of FerraraLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/942822018-05-18T10:41:43Z2018-05-18T10:41:43Z75 years of instant photos, thanks to inventor Edwin Land’s Polaroid camera<figure><img src="https://images.theconversation.com/files/219447/original/file-20180517-26274-1f6mmvc.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2618%2C2070&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Edwin Land, on the left, invented and commercialized a number of technologies, most of which centered on light.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Watchf-AP-A-OH-USA-APHS150797-Polaroid-Land-Camera/155ca24494f748d3aae778e1db3f8755/2/0">AP Photo</a></span></figcaption></figure><p>It probably happens every minute of the day: A little girl demands to see the photo her parent has just taken of her. Today, thanks to smartphones and other digital cameras, we can see snapshots immediately, whether we want to or not. 
But in 1943 when <a href="https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/land-instant-photography.html">3-year-old Jennifer Land</a> asked to see the family vacation photo that her dad had just taken, the <a href="https://www.library.hbs.edu/hc/polaroid/instant-photography/the-idea-of-instant-photography/">technology didn’t exist</a>. So her dad, <a href="https://www2.rowland.harvard.edu/book/export/html/16141">Edwin Land, went to work inventing it</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Polaroid camera faces the viewer" src="https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=884&fit=crop&dpr=1 600w, https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=884&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=884&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1111&fit=crop&dpr=1 754w, https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1111&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/218832/original/file-20180514-100703-7r2u85.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1111&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The original Polaroid camera freed users from needing to trek to a darkroom to develop their images.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/cNomGxIq6MI">Lindsay Moe/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Three years later, after plenty of scientific development, Land and his Polaroid Corp. realized the miracle of nearly instant imaging. The film exposure and processing hardware are contained within the camera; there’s no muss or fuss for the photographer, who just points and shoots and then watches the image materialize on the photo once it spools out of the camera. Land demonstrated his new technology publicly for the first time on <a href="https://mobile.twitter.com/OpticaWorldwide/status/1098613395765501955">Feb. 21, 1947, at a meeting</a> of the Optical Society of America.</p>
<p>Land is probably best known for the “instant photo” – or the spiritual progenitor of today’s <a href="http://www.dailymail.co.uk/sciencetech/article-3619679/What-vain-bunch-really-24-billion-selfies-uploaded-Google-year.html">ubiquitous selfie</a>. His Polaroid camera was first released commercially in 1948 at retail locations and prices aimed at the postwar middle class. But this is just one of a host of technological breakthroughs Land invented and commercialized, most of which centered on light and how it interacts with materials. The technology used to show a 3D movie and the goggles we wear in the theater were made possible by Land and his colleagues. The camera aboard the U-2 spy plane, as featured in the movie “<a href="https://www.imdb.com/title/tt3682448/">Bridge of Spies</a>,” was a Land product, as were even some aspects of the plane’s mechanics. He also worked on theoretical problems, drawing on a deep understanding of both chemistry and physics.</p>
<p><a href="https://scholar.google.com/citations?user=8hzH2SoAAAAJ&hl=en&oi=ao">I’m a vision scientist</a> who has touched many of the fields in which Land made great advances, through my own work on new imaging methods, image processing techniques and human color vision. As the 2018 recipient of the <a href="https://www.osa.org/en-us/awards_and_grants/awards/award_description/edwinland/">Edwin H. Land Medal</a>, awarded by the Optical Society of America and the <a href="https://www.optica.org//en-us/about/newsroom/news_releases/2018/the_optical_society_and_society_for_imaging_scienc/">Society for Imaging Science and Technology</a>, my own work relies on Land’s technological innovations that made modern imaging possible.</p>
<h2>Controlling light’s properties</h2>
<p>Edwin Land had his first optics breakthrough as a young man, when he figured out a convenient and affordable method to control one of the fundamental properties of light: polarization.</p>
<p>You can think of light as waves propagating from a source. Most light sources produce a mixture of waves with all different physical properties, such as wavelength and amplitude of vibration. Light is considered polarized when its waves vibrate in a consistent orientation perpendicular to the direction the wave is traveling.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="diagram of only vertical lightwaves passing through filter" src="https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=280&fit=crop&dpr=1 600w, https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=280&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=280&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=352&fit=crop&dpr=1 754w, https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=352&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/219275/original/file-20180516-155569-1a1sjoe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=352&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A polarizing filter can block all the light waves that don’t match its orientation.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/ko/image-vector/polarization-light-waves-421267105">Fouad A. Saad/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>Given the right material for the light waves to pass through, the light waves may be rotated into another plane, slowed down or blocked. Modern 3D goggles work because one eye receives light waves vibrating along the horizontal plane while the other eye receives the light vibrating along the vertical plane. </p>
<p>Before Land, researchers built components to control polarization from rock crystals, which were assigned almost magical names and properties, though they merely decreased the velocity or amplitude of light waves traveling at specific orientations. Land created “polarizers” by growing small crystals and embedding them in plastic sheets, altering the light passing through depending on its orientation relative to the rows of crystals. His inexpensive polarizer made it possible to reliably and practically filter light so that only waves with a particular orientation would pass through.</p>
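<p>The filtering a sheet polarizer performs is described by Malus’s law: an ideal polarizer transmits a fraction cos²(θ) of the incident intensity, where θ is the angle between the light’s polarization and the filter’s axis. A quick illustrative sketch, not taken from the article itself:</p>

```python
import math

def transmitted_intensity(i0, angle_deg):
    """Malus's law: intensity an ideal polarizer passes when the
    incoming light is polarized at angle_deg to the filter's axis."""
    theta = math.radians(angle_deg)
    return i0 * math.cos(theta) ** 2

print(transmitted_intensity(1.0, 0))   # aligned: all light passes
print(transmitted_intensity(1.0, 45))  # half the intensity passes
print(transmitted_intensity(1.0, 90))  # crossed: essentially none
```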
<p>Land founded the Polaroid Corp. in 1937 to commercialize his new technology. His sheet polarizers found applications ranging from the identification of chemical compounds to adjustable sunglasses. Polarizing filters became standard in photography to reduce glare. Today the principles of polarized light are used in most computer and cellphone screens to enhance contrast, decrease glare and even turn on or off individual pixels.</p>
<p><a href="https://doi.org/10.1167/iovs.03-0124">Polarizing filters help researchers visualize structures</a> that might not be seen otherwise – from astronomical features to biological structures. In my own field of vision science, polarization imaging localizes classes of chemicals, such as <a href="https://doi.org/10.1364/JOSAA.24.001468">protein molecules leaking from blood vessels</a> in diseased eyes. Polarization is also combined with high-resolution imaging techniques to detect <a href="https://doi.org/10.1038/s41598-017-03529-8">cellular damage</a> beneath the reflective retinal surface. </p>
<h2>A new way to get the data out</h2>
<p>Before the days of high-speed digital data capture, affordable high-resolution displays or videotape, Polaroid photography was the method of choice for obtaining output in many scientific labs. Experiments or medical tests needed graphical or pictorial output for interpretation, often from an analog oscilloscope that plotted a voltage or current change over time. The oscilloscope was fast enough to capture key features of the data – but recording the output for later analysis was a challenge before Land’s instant camera came along.</p>
<p>A common example in vision science is the recording of eye movements. A research study reported in 1960 plotted light reflected from an observer’s moving eye on an oscilloscope screen, which was photographed with a <a href="https://doi.org/10.1364/JOSA.50.000245">mounted Polaroid camera</a> – not unlike the consumer Polaroid camera a family might pull out at a birthday party. For decades, research labs and medical facilities used <a href="https://www.ebay.com/p/Tektronix-C-5c-Oscilloscope-Camera-for-Polaroid-Film-B054450/1437576020">setups consisting of a Polaroid camera and a mounting rig</a> to collect electrical signals displayed on oscilloscope screens. The format sizes are less than dazzling compared to modern digital resolutions, but they were revolutionary at the time.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=599&fit=crop&dpr=1 600w, https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=599&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=599&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=753&fit=crop&dpr=1 754w, https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=753&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/218867/original/file-20180514-100693-jtafii.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=753&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Land’s inventions led to the widespread use of polarized light to characterize tissues and objects, as in this pseudo-color image of a diabetic patient’s retina that unmasks irregular structures caused by edema.</span>
<span class="attribution"><span class="source">Ann Elsner</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>When I founded my retinal imaging laboratory in 1987, there was no inexpensive method to produce shareable output of our <a href="https://doi.org/10.1016/0042-6989(95)00100-E">novel images</a>. After a few years of struggling to obtain high-quality output for conferences and publications, the Polaroid Corp. came to our rescue by donating a printer, allowing our scientific contributions to reach an audience beyond our lab.</p>
<h2>Eyes are not cameras</h2>
<p>Land’s contributions go beyond patenting over 500 innovations and inventing products that millions purchased. His understanding of the interaction of light and matter promoted novel ways of characterizing chemicals with polarized light. And he provided insights into the workings of the human visual system that had seemed to defy the laws of physics, coming up with what he called the <a href="https://pdfs.semanticscholar.org/8b2a/d82ce40117417fa36ba16941ce022f2185f3.pdf">Retinex theory</a> of color vision to explain how people perceive a broad range of color <a href="https://doi.org/10.1364/JOSAA.3.000916">without the expected wavelengths</a> being present in the room.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Polaroids clipped to a string agains brick wall" src="https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/219101/original/file-20180515-195311-6j3cax.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Quick prints can be shared and displayed.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/hillaryandanna/760585681">Hillary Hartley</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Despite his brilliance, Land’s Polaroid Corp. eventually hit hard times in the decades after his death in 1991. Heavily invested in its film sales, Polaroid wasn’t prepared as all tiers of the imaging market went digital, with everyone from consumer photographers to high-end medical and optical imagers abandoning film and processing.</p>
<p>But rather than sink with the film market, Polaroid reinvented itself with new products that could help output the new world of digital images. And in a case of history repeating itself, <a href="https://us.polaroid.com/collections/instant-cameras">Polaroid</a> and other manufacturers of instant cameras are enjoying renewed popularity with younger generations who had no exposure to the original versions. Just like little Jennifer Land, plenty of people today still want a tangible version of their pictures, right now.</p>
<p><em>This is an updated version of an article originally published on May 18, 2018. It corrects the year Jennifer Land inspired her father’s invention.</em></p><img src="https://counter.theconversation.com/content/94282/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ann Elsner receives funding from NIDILRR and NIH. She owns shares in Aeon Imaging, LLC.</span></em></p>Whether at a family gathering or in a research lab, getting access to images immediately was a game-changer. And Land’s innovations went far beyond the instant photo.Ann Elsner, Professor of Optometry, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/897262018-04-23T10:40:48Z2018-04-23T10:40:48ZDelivering VR in perfect focus with nanostructure meta-lenses<figure><img src="https://images.theconversation.com/files/213474/original/file-20180405-189824-cn6r37.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Could there be a future with smaller, less bulky VR headsets?</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Innovation-Day-USA-2018/34b2f157d4c241879634f9abf5309858/1/0">Jean-Marc Giboux/AP Images for Siemens</a></span></figcaption></figure><p>If wearing a virtual reality or augmented reality headset is ever to become commonplace, hardware manufacturers will need to figure out how to make the devices small and lightweight while ensuring their images are sharp and clear. Unfortunately, this task faces a key limitation in optics: Conventional lenses are curved glass objects that focus different wavelengths of light in different locations, which would show viewers blurry images. As a result, pretty much anything with a lens – from tiny smartphone cameras to large-scale projectors – uses multiple lenses, which add weight, thickness and complexity, increasing cost.</p>
<p>We’ve figured out a <a href="http://doi.org/10.1038/s41565-017-0034-6">new way to manufacture fully transparent, ultracompact lenses</a> capable of properly focusing every color in the spectrum to the same point. Because our lens focuses light with specially designed nanostructures that do not exist in nature, we call it a “meta-lens.” It is ultracompact yet delivers higher-quality imaging across a wider spectrum of light than most traditional lenses, without requiring multiple lens elements.</p>
<h2>Bending light</h2>
<p>For centuries, most lenses for telescopes, glasses and <a href="https://www.nikoninstruments.com/Learn-Explore/Nikon-Craftsmanship/Lens-Polishing-Hand-polishing-spherical-front-lenses-for-microscopes">other optical equipment have been manufactured</a> by grinding glass into a rough curved shape and then polishing it to cleanly and clearly bend light. However, these lenses can’t focus light of every color on the same point.</p>
<p>It is a basic property of glass that different colors – or frequencies – of light travel through it at different speeds, so a lens bends them by different amounts and they focus at different points, resulting in blurred images.</p>
<figure>
<img src="https://cdn.theconversation.com/static_files/files/72/Light_dispersion_conceptual_waves.gif?1522955022">
<figcaption><span class="caption">Different frequencies of light bend and travel differently in a lens. <a href="https://commons.wikimedia.org/wiki/File:Light_dispersion_conceptual_waves.gif">Lucas V. Barbosa</a></span></figcaption>
</figure>
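<p>This effect can be put in rough numbers: a glass’s refractive index rises toward blue wavelengths (modeled here with an illustrative Cauchy approximation, n = A + B/λ²; the coefficients and lens curvature below are assumptions, not values from the article), and the focal length of a simple lens depends directly on that index.</p>

```python
def refractive_index(wavelength_um, a=1.5046, b=0.0042):
    """Cauchy approximation n = A + B / wavelength^2 for a BK7-like
    glass; the coefficients here are illustrative assumptions."""
    return a + b / wavelength_um ** 2

def focal_length_mm(n, r_mm=100.0):
    """Thin symmetric biconvex lens: 1/f = (n - 1) * (2 / R)."""
    return r_mm / (2.0 * (n - 1.0))

f_blue = focal_length_mm(refractive_index(0.486))  # blue (486 nm)
f_red = focal_length_mm(refractive_index(0.656))   # red (656 nm)
# Blue light bends more, so it focuses closer to the lens than red:
print(round(f_blue, 1), round(f_red, 1))
```

<p>That millimetre-scale gap between the blue and red focal points is the chromatic aberration that multi-element lenses – and meta-lenses – are designed to remove.</p>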
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=419&fit=crop&dpr=1 754w, https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=419&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/213469/original/file-20180405-189804-14rz8tv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=419&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Even a smartphone camera has many intricate components layered together.</span>
<span class="attribution"><a class="source" href="http://laptopmedia.com/smartphone-review/apple-iphone-6-review-the-phone-weve-been-expecting-for-years/#camera">Laptop Media</a></span>
</figcaption>
</figure>
<p>To reduce this effect, commercial lens manufacturers construct complicated optical devices with many separate lenses, each precisely ground into curves and aligned to focus its range of wavelengths in just the right place. However, they end up with large, heavy and complex lenses – nothing that would be easy to wear comfortably as part of a VR experience.</p>
<h2>The power of nanostructures</h2>
<p>To replace these enormous and expensive precision-engineered products, we start with a millimeter-thick sheet of regular flat glass. On it, we place a layer of carefully designed rectangular nanostructures, a million times thinner than the glass layer, made of titanium dioxide, which is totally transparent to visible light.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=427&fit=crop&dpr=1 600w, https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=427&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=427&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=537&fit=crop&dpr=1 754w, https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=537&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/215280/original/file-20180417-163971-zzlzp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=537&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The nanostructures as viewed by a scanning electron microscope.</span>
<span class="attribution"><a class="source" href="https://www.seas.harvard.edu/capasso/">Capasso Group, Harvard University</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The nanostructures are designed to bend incoming light rays by increasingly greater angles the farther they hit the meta-lens from its center so that all rays are focused in the same spot. To secure the nanostructures onto the glass substrate, we use <a href="https://www.bloomberg.com/news/articles/2016-06-09/how-intel-makes-a-chip">lithography</a>, a technique widely used to mass-produce computer chips.</p>
<p>In 2016, we showed that using flat glass with nanostructures could <a href="https://doi.org/10.1126/science.aaf6644">focus light of one specific color</a> just as well as a traditional curved lens. But in that research, what we made suffered from the same age-old problem as curved glass: Each color focused on a different location. To have our flat lenses form high-quality images, all the light – regardless of its color – must focus on the same point.</p>
<h2>Including all colors</h2>
<p>In our latest work, we design a more sophisticated set of nanostructures, which even on a flat surface can do much more than a traditional curved lens. The nanostructures still bend the light at higher angles the farther from the center they are, but with an important modification inspired by a key insight. After leaving the meta-lens, the light has to travel to the focus point, which is farther from the edges than it is from the center of the lens. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=789&fit=crop&dpr=1 600w, https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=789&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=789&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=992&fit=crop&dpr=1 754w, https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=992&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/215281/original/file-20180417-163978-tf7nzk.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=992&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A diagram of how a meta-lens can focus all colors of light on a single point.</span>
<span class="attribution"><a class="source" href="https://www.seas.harvard.edu/capasso/">Capasso Group, Harvard University</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>To travel a longer distance in the same period of time, that light has to travel faster. So we built some nanostructures that transmit the light more quickly, and others that do so more slowly. We put the faster-transmitting nanostructures at the edges of the lens, so <a href="http://doi.org/10.1038/s41565-017-0034-6">light travels through them faster</a> than in those in the middle. This effectively helps the light from the meta-lens edges catch up with light at the center, so that all the rays focus together.</p>
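<p>The geometry behind this catch-up argument is simple to check: from a flat lens, a ray striking at radius r must cover a distance √(r² + f²) to reach a focal point a distance f away on the axis, so rays from the edge always have farther to go. A hypothetical sketch – the focal length and radii below are made-up values, not from the paper:</p>

```python
import math

def extra_path_mm(r_mm, f_mm):
    """Extra distance, versus the on-axis ray, that light entering a
    flat lens at radius r travels to reach a focal point f away."""
    return math.hypot(r_mm, f_mm) - f_mm

F = 10.0  # assumed focal length in mm
for r in (0.0, 1.0, 2.0, 3.0):
    print(r, round(extra_path_mm(r, F), 4))
# The extra path grows with r, so nanostructures near the edge must
# impose less delay ("transmit faster") for all rays to arrive together.
```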
<p>This approach can be modified for any number of specialized situations, allowing construction of meta-lenses that have a wide range of properties, such as the ability to affect certain colors but not others: A custom-designed nanostructure can make that adjustment relatively simply, without the constraints or complexities of polishing curved glass lenses to highly precise specifications.</p>
<p>Once designed, meta-lenses can be created as part of a wider mass production process: for instance, of VR headsets or augmented reality glasses. They can also be used in place of more expensive ground-glass camera lenses on smartphones and laptops, reducing weight, thickness and cost of portable devices.</p>
<p>It may seem surprising that the centuries-old challenge of multi-color focusing can be solved by a thin piece of glass underneath nanostructures barely visible to the human eye. But indeed, the meta-lens approach can provide what all those bulky traditional lenses cannot: a clear image across a broad range of colors.</p><img src="https://counter.theconversation.com/content/89726/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Federico Capasso owns shares in and is a board member of a startup, METALENZ, which he co-founded in 2015. The research described in this article is funded in part by the Air Force Office of Scientific Research and DARPA.</span></em></p><p class="fine-print"><em><span>Alexander Yutong Zhu and Wei-Ting Chen do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Using nanostructures on a flat piece of glass can make lenses smaller, lighter and much cheaper – while providing better image quality.Federico Capasso, Professor of Applied Physics, Senior Research Fellow in Electrical Engineering, Harvard UniversityAlexander Yutong Zhu, Ph.D. Candidate in Applied Physics, Harvard UniversityWei-Ting Chen, Postdoctoral Fellow in Applied Physics, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/902582018-01-22T12:25:46Z2018-01-22T12:25:46ZThe next generation of cameras might see behind walls<figure><img src="https://images.theconversation.com/files/202620/original/file-20180119-110121-1wggumj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><span class="license">Author provided</span></span></figcaption></figure><p>You might be really pleased with the camera technology in your latest smartphone, which can recognise your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.</p>
<p>The latest camera research is shifting away from increasing the number of mega-pixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modelling of how light travels through the scene or the camera.</p>
<p>This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like see through fog, inside the human body and even behind walls.</p>
<h2>Single pixel cameras</h2>
<p>One extreme example is the <a href="https://dx.doi.org/10.1098%252Frsta.2016.0233">single pixel camera</a>, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel. </p>
<p>To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image. </p>
<p>Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors <a href="https://www.nature.com/articles/ncomms12010">cannot be made into cameras</a>.</p>
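<p>The single-pixel principle can be sketched in a few lines. This toy example (our own illustration, not code from the research) uses the simplest possible patterns, one illumination spot at a time, so each single-pixel reading directly recovers one pixel of the scene; real systems use structured patterns, such as Hadamard patterns, and can get away with fewer measurements:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((8, 8))      # the unknown scene (a reflectance map)

image = np.zeros_like(scene)
for i in range(8):
    for j in range(8):
        pattern = np.zeros((8, 8))
        pattern[i, j] = 1.0                    # illuminate one spot
        image[i, j] = np.sum(pattern * scene)  # total reflected light
                                               # seen by the one pixel

assert np.allclose(image, scene)  # the raster scan recovers the scene
```

<p>Note that the detector never resolves the scene spatially: all the spatial information comes from the illumination patterns, which is what lets the detector be a single pixel of any exotic technology.</p>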
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-amazing-camera-that-can-see-around-corners-51948">The amazing camera that can see around corners</a>
</strong>
</em>
</p>
<hr>
<p>These cameras could be used to take photos through <a href="https://www.osapublishing.org/oe/abstract.cfm?uri=oe-23-11-14424">fog or thick falling snow</a>. Or they could <a href="http://advances.sciencemag.org/content/3/4/e1601782">mimic the eyes of some animals</a> and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.</p>
<p>It is even possible to capture images from light particles that have <a href="https://www.nature.com/articles/nature13586">never even interacted</a> with the object we want to photograph. This would take advantage of the idea of “quantum entanglement”, that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?</p>
<h2>Multi-sensor imaging</h2>
<p>Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information of which traditional techniques only collect a small part.</p>
<p>This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. <a href="https://www.nasa.gov/mission_pages/hubble/multimedia/index.html">The Hubble telescope</a> was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the <a href="https://www.lytro.com/255D">Lytro camera</a>, which collects information about light intensity and direction on the same sensor to produce images that can be refocused after they have been taken.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/202582/original/file-20180119-80168-3gleod.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Light L16.</span>
<span class="attribution"><span class="source">Light</span></span>
</figcaption>
</figure>
<p>The next-generation camera will probably look something like the <a href="https://light.co/camera">Light L16 camera</a>, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50Mb, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.</p>
<p>Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, <a href="https://www.nature.com/articles/ncomms1747">seeing behind walls</a>, and even imaging deep inside the <a href="https://www.nature.com/articles/nphoton.2014.107">human body and brain</a>.
All of these techniques rely on combining images with models that explain how light travels through or around different substances.</p>
<p>Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to <a href="https://www.osapublishing.org/optica/abstract.cfm?uri=optica-4-9-1117">recognise objects from the data</a>. These techniques are inspired by learning processes in the human brain and are likely to play a major role in <a href="https://arxiv.org/abs/1709.07244">future imaging systems</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cDbGFT5rM0I?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Single-photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and record videos at incredibly fast speeds, reaching a trillion frames per second. This is enough even to capture images <a href="https://www.nature.com/articles/ncomms7021">of light itself</a> travelling across a scene.</p>
<p>Some of these applications might require a little time to fully develop but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.</p><img src="https://counter.theconversation.com/content/90258/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniele Faccio receives funding from EPSRC, QuantIC - The Quantum Hub for Imaging, The Leverhulme Trust, DSTL.</span></em></p><p class="fine-print"><em><span>Stephen McLaughlin receives funding from EPSRC for a variety of research grants which analyse data which require the computational imaging methods described in the article</span></em></p>Single-pixel cameras, multi-sensor imaging and quantum technologies will change the way we take photos.Daniele Faccio, Professor of Quantum Technologies, University of GlasgowStephen McLaughlin, Head of School of Engineering and Physical Sciences, Heriot-Watt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/896892018-01-09T16:01:26Z2018-01-09T16:01:26ZSuper-black feathers can absorb virtually every photon of light that hits them<figure><img src="https://images.theconversation.com/files/201228/original/file-20180108-142334-1h044en.jpg?ixlib=rb-1.1.0&rect=0%2C54%2C925%2C708&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Super-black feathers on these guys are like looking into a dark cave.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/sdnatasha/4514108926">Natasha Baucas</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>What do birds and aerospace engineers have in common? Both have invented incredibly dark, “super-black” surfaces that absorb almost every last bit of light that strikes them. </p>
<p>Of course scientists worked intentionally to devise these materials. It’s evolution that brought this amazing trait about in birds. My co-lead author <a href="http://vertebrates.si.edu/birds/birds_staff_pages/TeresaFeo_staffpage.html">Teresa Feo</a>, our colleagues <a href="http://www.graphics.cornell.edu/%7Etodd/pcg/Home.html">Todd A. Harvey</a> and <a href="https://prumlab.yale.edu/">Rick Prum</a> and I <a href="http://nature.com/articles/doi:10.1038/s41467-017-02088-w">investigated the super-black feathers</a> in some of the most outlandish animals on earth: <a href="http://www.birdsofparadiseproject.org/">the Birds of Paradise</a>.</p>
<p>These are resplendent birds native to Papua New Guinea and surrounding areas. Males are brilliantly colored, with complicated mating dances. Females, who are drab and brown in comparison, carefully inspect the ornaments and dances of males before choosing their mate.</p>
<p>We wanted to know more about these birds’ super-black plumage and how it works. What mechanism do these feathers employ to be so effective at absorbing light?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/201237/original/file-20180108-83581-d443ug.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A male Superb Bird of Paradise displays his super-black and brilliant blue plumage to an onlooking female.</span>
<span class="attribution"><span class="source">Ed Scholes</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Fanciest feathers, under the microscope</h2>
<p>The Birds of Paradise have evolved many remarkable traits, but none are more mysterious than the males’ velvety black plumage.</p>
<p>This black is so dark that your eyes cannot focus on its surface; it looks like a cave, or a fuzzy black hole in space. Using optical measurements, we found that these feather patches <a href="http://nature.com/articles/doi:10.1038/s41467-017-02088-w">absorb up to 99.95 percent of directly incident light</a>. That’s comparable to human-made very black materials such as solar panels, the lining of space telescopes, and even the “blackest black” material: <a href="http://www.cnn.com/2017/11/15/world/vantablack-blackest-black-material/index.html">Vantablack</a>, which absorbs 99.96 percent of light.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=298&fit=crop&dpr=1 600w, https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=298&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=298&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=374&fit=crop&dpr=1 754w, https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=374&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/201235/original/file-20180108-83567-ish8tm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=374&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">On the left, a normal black feather from a Lesser Melampitta. On the right, a super-black feather from the Paradise Riflebird.</span>
<span class="attribution"><span class="source">Dakota McCoy</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Normal feathers are flat, and look like fractals; when you zoom in using a microscope, each branch of the feather looks like a tiny, flat feather. Under a powerful scanning electron microscope, we were surprised to see that the super-black feathers look like miniature coral reefs, bottle brushes or trees with tightly packed leaves.</p>
<p>These tiny, specially shaped bits stick up to form a jagged, complex surface; together they act as microscopic light traps. When light rays strike these surface microstructures, they repeatedly scatter around the shapes and are absorbed, rather than being reflected back to an observer. It’s an iterative process: each time a scattering event occurs, a further portion of the light is absorbed, until almost none remains.</p>
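<p>A rough back-of-the-envelope calculation shows why repeated scattering is so effective. The 60 percent absorption per event below is an illustrative assumption, not a measured property of the feathers:</p>

```python
def remaining_light(absorbed_per_event, events):
    """Fraction of light still unabsorbed after a number of scattering
    events, assuming each event absorbs the same fraction."""
    return (1 - absorbed_per_event) ** events

# A flat surface absorbing 60% per event still reflects 40% of the light.
# A light-trapping microstructure that forces several events in sequence
# absorbs nearly everything:
for n in (1, 3, 6):
    print(f"{n} event(s): {remaining_light(0.6, n):.4%} still reflected")
```

<p>Six such events leave only about 0.4 percent of the light, and a few more would approach the feathers’ measured 0.05 percent reflectance: the structure multiplies an ordinary pigment’s absorption into something extraordinary.</p>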
<p>Human-made super-black materials such as “<a href="https://www.pv-tech.org/guest-blog/black-silicon-theres-more-than-meets-the-eye">black silicon</a>” also rely on what materials scientists call structural absorption. Like the super-black feathers, their microscopic “<a href="https://doi.org/10.1063/1.4719108">light traps</a>” are due to a rough surface that scatters light repeatedly, but the actual surface shapes they use are different. Rather than the feathers’ bottle brush shapes, human engineers designed regularly spaced microscopic cones and pits. With almost no exposed flat surface, these structurally black materials are the opposite of a mirror.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=301&fit=crop&dpr=1 600w, https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=301&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=301&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=378&fit=crop&dpr=1 754w, https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=378&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/201236/original/file-20180108-83556-1i5x62g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=378&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Due to its unusual microstructure, the feather from the Paradise Riflebird (on the right) still appears super-black when coated with gold, as compared to a regular black feather (on the left).</span>
<span class="attribution"><span class="source">Dakota McCoy</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The Birds of Paradise’s super-black feathers are so good at absorbing light that even when we coated them in gold, a shiny metal, they still looked black. That’s because it’s not the inside of the feather making the color via pigment or ordered nanostructures; instead, just as with human-made <a href="https://doi.org/10.1039/C4EE01152J">black silicon</a>, the super black comes from the physical surface structure. Evolution and human ingenuity arrived at the same solution.</p>
<h2>Advantages of super-black feathers</h2>
<p>But why do these birds have such incredibly dark black patches? What selective advantage caused this trait to evolve? It’s tempting to think that super black somehow helps with camouflage, to keep predators away. In fact, some <a href="https://doi.org/10.1038/srep01846">snakes have super-black scales</a> that mimic shadows between leaves, helping them blend into the forest floor. The snake example illustrates evolution by natural selection – “survival of the fittest.”</p>
<p>But other factors can also influence evolution’s course, including random chance or sexual selection. As my colleague Rick Prum points out in his new book “<a href="https://www.penguinrandomhouse.com/books/224257/the-evolution-of-beauty-by-richard-o-prum/9780385537216/">The Evolution of Beauty: How Darwin’s Forgotten Theory of Mate Choice Shapes the Animal World – and Us</a>,” mate choice is a powerful force driving evolution. In Birds of Paradise, super-black feathers help male birds look more beautiful to a female’s eye.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/UYbn9R11Rrs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A Superb Bird of Paradise displays his best plumage to potential mate.</span></figcaption>
</figure>
<p>To understand how, it helps to look at Bird of Paradise mating dances. Males vigorously display their super-black patches to females, making sure that females can’t get a view from the side. This is because these feathers are highly directional, and they look darkest from straight ahead. </p>
<p>And super-black patches always sit around or next to brilliant color patches. A super-black, anti-reflective frame makes nearby colors appear brighter, almost glow. In other words, super black is an <a href="https://en.wikipedia.org/wiki/Checker_shadow_illusion">evolved optical illusion</a> that relies on the way animal eyes and brains adjust our perceptions based on ambient light.</p>
<p>In the high-stakes game of choosing a mate, a single feather that isn’t quite blue enough <a href="https://www.penguinrandomhouse.com/books/224257/the-evolution-of-beauty-by-richard-o-prum/9780385537216/">could be enough to turn off</a> a female Bird of Paradise. Clearly, female Birds of Paradise prefer males with super-black plumage. As females <a href="https://doi.org/10.1111/evo.13196">pick the most impressive males to mate with</a>, those dazzling feather genes are passed on to future generations while the genes of less splendid males, overlooked by females, are not. Sexual selection drove evolution toward super-black plumage.</p>
<p>Evolution is not an orderly, coherent process; evolutionary arms races can produce great innovation. Perhaps these super-black feathers with their unique microscopic structure could eventually inspire better solar panels, or new textiles; super-black butterfly wings <a href="https://link.springer.com/article/10.1186/s11671-015-1052-7">already have</a>. Evolution has had millions of years to tinker; we still have much to learn from its solutions.</p><img src="https://counter.theconversation.com/content/89689/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>This research was funded by the W. R. Coe Fund of Yale University, by a Sigma XI student research fellowship to D.E.M., and by a Mind, Brain, and Behavior Graduate Student Award to D.E.M. D.E.M. was supported by the Department of Defense (DoD) through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program. Tomography data collections at the Advanced Photon Source beamline 2-BM, Argonne National Laboratory were supported by the U.S. Department of Energy Office of Science (Proposal ID 41887). T.J.F. was supported by a NSF Postdoctoral Fellowship in Biology (#1523857). Richard Pfisterer of Photon Engineering graciously licensed FRED to T.A.H. for this research. This work was performed in part at the Harvard University Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF ECCS award no. 1541959.</span></em></p>Male Birds of Paradise have patches of super-black plumage that absorb 99.95 percent of light. New research identified their feathers’ microscopic structures that make them look so very dark.Dakota McCoy, PhD Student in Organismic and Evolutionary Biology, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/549842016-02-25T10:52:44Z2016-02-25T10:52:44ZFootwear forensics device could catch criminals who put a foot wrong<figure><img src="https://images.theconversation.com/files/112156/original/image-20160219-25901-136fyqy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> </figcaption></figure><p>Imagine the scene of a household burglary or murder where the criminals have left no other evidence than a series of footprints on the kitchen floor. If the perpetrators were caught, would the police have enough information to arrest and convict them, or would they simply be allowed to walk free? My colleagues and I have developed a new forensic imaging technique for analysing shoeprints that could help police to identify the criminal in such a situation.</p>
<p>How could a shoeprint be so useful to a criminal investigation? Clearly, it would not be nearly as effective for identifying a person as their DNA or fingerprints. Shoe soles are rarely unique and aren’t inherently connected to a suspect. But each person does have their own style of walking or moving, <a href="http://www.forensicmag.com/articles/2014/08/considerations-gait-crime-scenes">known as their gait</a>. This is because we each distribute our weight differently when we walk and so we wear down our shoe soles in different ways. So if two individuals were to wear an identical type of shoe, then over time they would each wear them down differently.</p>
<p>It is these differences which police forensic services hope to exploit when they find shoeprint evidence at the scene of a serious crime. The hope is that by matching contact images of shoeprints <a href="http://www.sciencedirect.com/science/article/pii/S0262885608001376">obtained in custody</a> with marks found at crime scenes, police will rapidly be able to identify or eliminate suspects in their enquiries.</p>
<h2>Collecting the evidence</h2>
<p>The main method of getting forensic footwear evidence has, until now, involved collecting the shoeprint from the suspect’s footwear using vegetable dye-based ink pads and paper in a similar way to that used to take fingerprints. This approach has a number of disadvantages. First, the impressions obtained can often get quite smudged and are sometimes useless in identifying key features on the shoe soles.</p>
<p>These impressions then have to be scanned into digital form and transmitted to a forensic footwear specialist, who can then resize the image and manually compare it to photographs of shoeprints found at crime scenes. This is quite a time-consuming and laborious process that can take many hours. </p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=688&fit=crop&dpr=1 600w, https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=688&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=688&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=865&fit=crop&dpr=1 754w, https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=865&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/112157/original/image-20160219-25861-88xnpp.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=865&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">If the shoeprint fits.</span>
<span class="attribution"><span class="source">James Sharp</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>To provide the police with a more accurate way of matching footprints, we have harnessed a technique called frustrated <a href="http://www.physicsclassroom.com/class/refrn/Lesson-3/Total-Internal-Reflection">total internal reflection</a>. Under normal conditions, light travelling through a piece of glass will be completely reflected when it hits the edge of the glass at the right angle. This is basically how optical fibres are used to transmit light signals over long distances. The light continues to bounce around inside the glass, creating a so-called waveguide, until it comes out of the other end of the fibre.</p>
<p>However, if certain materials are brought into contact with the surface of the glass, some of the light can leak out and be seen by an observer. If you place a shoe sole onto glass when total internal reflection is occurring inside it, the light that leaks out through the areas of contact is scattered by the sole and can easily be imaged using conventional digital cameras.</p>
<p>This <a href="http://www.nature.com/articles/srep21290">simple technique</a> is only sensitive to the areas where the shoe touches the surface of the glass, meaning it can be used to detect specific wear patterns or any marks or nicks in the shoe sole that may be clearly identifiable.</p>
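<p>The optics underlying this device reduce to a single number: the critical angle given by Snell’s law, beyond which light inside the glass cannot escape. A minimal sketch (the refractive indices are textbook values, not parameters from the actual device):</p>

```python
import math

def critical_angle_deg(n_glass, n_outside=1.0):
    """Angle of incidence (measured from the surface normal) beyond
    which light inside the glass is totally internally reflected,
    from Snell's law: sin(theta_c) = n_outside / n_glass."""
    return math.degrees(math.asin(n_outside / n_glass))

# For ordinary glass (n about 1.5) against air, total internal
# reflection sets in near 41.8 degrees. A shoe sole pressed onto the
# surface changes that condition locally, so light leaks out and is
# scattered exactly where the sole makes contact.
print(f"critical angle: {critical_angle_deg(1.5):.1f} degrees")
```

<p>Because the escape condition changes only at the points of contact, the camera sees a bright, high-contrast map of precisely the parts of the sole that touch the glass.</p>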
<h2>Money saving</h2>
<p>In the UK, <a href="http://www.bbc.co.uk/news/uk-england-manchester-31440038">cuts to police budgets</a> mean forensic services are being amalgamated and more efficient ways of analysing forensic evidence are required. These financial constraints are encouraging forensic science services to <a href="http://www.independent.co.uk/news/uk/crime/private-forensics-firms-use-scientific-advances-to-combat-crime-and-cuts-a6714191.html">turn to digital methods</a> of evidence gathering and to the increased automation of evidence processing.</p>
<p>The contact imaging technique that we have developed is entirely digital and will enable police to generate, transmit and compare contact images obtained in custody with those found at crime scenes via an online <a href="http://raven-technology.com/products/national-footwear-database/">database</a>. This will greatly reduce the time it takes to identify or eliminate suspects from ongoing investigations, saving the police valuable time and money.</p><img src="https://counter.theconversation.com/content/54984/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>James Sharp does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A new technique could help the police identify more criminals from just their footprints.James Sharp, Associate Professor of Physics, University of NottinghamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/499272016-01-15T11:14:14Z2016-01-15T11:14:14ZHow do you build a mirror for one of the world’s biggest telescopes?<figure><img src="https://images.theconversation.com/files/106709/original/image-20151218-27894-sl57k3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">20 tons of Ohara E6 borosilicate glass being loaded onto the mold of one of the GMT's mirrors.</span> <span class="attribution"><span class="source">Ray Bertram, Steward Observatory</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>When astronomers point their telescopes up at the sky to see distant supernovae or quasars, they’re collecting light that’s traveled millions or even billions of light-years through space. Even huge and powerful energy sources in the cosmos are unimaginably tiny and faint when we view them from such a distance. In order to learn about galaxies as they were forming soon after the Big Bang, and about nearby but much smaller and fainter objects, astronomers need more powerful telescopes. </p>
<p>Perhaps the poster child for programs that require extraordinary sensitivity and the sharpest possible images is the <a href="http://www.seti.org/seti-institute/weeky-lecture/beyond-kepler-direct-imaging-earth-planets">search for planets around other stars</a>, where the body we’re trying to detect is extremely close to its star and roughly a billion times fainter. Finding earth-like planets is one of the most exciting prospects for the next generation of telescopes, and could eventually lead to discovering extraterrestrial signatures of life.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/107834/original/image-20160111-6968-4vj025.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Size comparison of optical telescopes’ primary mirrors.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Comparison_optical_telescope_primary_mirrors.svg">Cmglee</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Detectors in research telescopes are already so sensitive that they capture almost every incoming photon, so there’s only one way to detect fainter objects and resolve structure on finer scales: build a bigger telescope. A large telescope doesn’t just capture more photons, it can also produce sharper images. That’s because the wave nature of light sets a limit to the telescope’s resolution, known as the <a href="http://www.astro.cornell.edu/academics/courses/astro201/diff_limit.htm">diffraction limit</a>; the sharpness of the image depends on the wavelength of the light and the telescope’s diameter.</p>
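<p>The diffraction limit is easy to put numbers on. A short Python sketch using the Rayleigh criterion, θ ≈ 1.22 λ/D, with an assumed visible wavelength of 550 nanometers:</p>

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600

# Visible light (550 nm) on a single 8.4 m mirror vs. the full 25 m aperture.
print(diffraction_limit_arcsec(550e-9, 8.4))   # single segment
print(diffraction_limit_arcsec(550e-9, 25.0))  # full primary, ~3x sharper
```

<p>Tripling the diameter triples the resolution, which is exactly why the next generation of telescopes is so much bigger than the current one.</p>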
<p>As optical scientists, our contribution to the next generation of telescopes is figuring out how to craft the gargantuan mirrors they rely on to collect light from far away. Here’s how we’re perfecting the technology that will enable tomorrow’s astrophysical discoveries.</p>
<h2>Multiple mirrors</h2>
<p>The question is how to build something substantially bigger than the current generation of telescopes, which have effective diameters of 8 to 12 meters (26 to 40 feet). One of the biggest challenges is making a bigger mirror to collect the light.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=566&fit=crop&dpr=1 600w, https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=566&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=566&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=711&fit=crop&dpr=1 754w, https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=711&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/107958/original/image-20160112-6964-mpe02.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=711&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Optical diagram of the Giant Magellan Telescope.</span>
<span class="attribution"><span class="source">Giant Magellan Telescope - GMTO Corporation</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>First, it helps to know the basic optical layout of a telescope, illustrated here by the Giant Magellan Telescope (<a href="http://www.gmto.org/overview/">GMT</a>) that is being built in Chile. A large <em>primary mirror</em> collects incoming light and reflects it to a focus. The light is reflected a second time by the smaller <em>secondary mirror</em>, to form an image on an instrument located at a safe, accessible place below the primary mirror, where the image is recorded.</p>
<p>A mirror much larger than eight meters, made of a single piece of glass, would be too expensive and too hard to handle. Everyone involved in building giant telescopes agrees that the solution is to make the primary mirror out of multiple smaller mirrors. Multiple pieces of glass are shaped and aligned to form one gigantic mirror, called a segmented mirror. Gaps between the segments are acceptable as long as the segments’ surfaces lie on a continuous nearly parabolic surface, called the parent surface. </p>
<p>The three extremely large telescope (ELT) projects now in development have made very different decisions about the design of this segmented primary mirror. Two of the ELTs, the <a href="http://www.eso.org/public/teles-instr/e-elt/e-elt_con/">European ELT</a> and the <a href="http://www.tmt.org/observatory">Thirty Meter Telescope</a>, have adopted the approach pioneered by the <a href="http://www.keckobservatory.org">10-meter Keck Observatory telescopes</a> in Hawaii – they’ll make a giant mirror out of hundreds of 1.5-meter segments.</p>
<p>The third project, the Giant Magellan Telescope, takes a different tack. Its 25-meter primary mirror will have only seven segments. They’re the largest single mirrors that can be made, the 8.4-meter (28-foot) honeycomb mirrors we produce here at the <a href="http://mirrorlab.as.arizona.edu">Richard F. Caris Mirror Lab</a> at the University of Arizona. The GMT’s 3-meter secondary mirror also has seven segments, each paired with one of the primary mirror segments.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/107310/original/image-20160105-28991-1bp389o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Artist’s representation of the seven giant mirrors installed in the Giant Magellan Telescope.</span>
<span class="attribution"><a class="source" href="http://www.gmto.org/gallery/">Giant Magellan Telescope – GMTO Corporation</a></span>
</figcaption>
</figure>
<h2>Large, stiff and light</h2>
<p>Big mirror segments guarantee a smooth surface over their entire large areas. The more segments there are in the primary mirror, the more its accuracy depends on their precise alignment to keep them on the parent surface. Because of the pairing of primary and secondary mirror segments in the GMT, the fine control needed to form sharp images can be done by moving the small, agile segments of the secondary mirror rather than the 8.4-meter primary segments. A second advantage of the 8.4-meter honeycomb mirrors is their strong legacy, including use in what is currently the world’s largest telescope, the <a href="http://www.lbto.org/overview.html">Large Binocular Telescope</a> here in Arizona.</p>
<p>One of the challenges of using a large mirror is that it tends to bend under its own weight and the force of wind. The mirror is exposed to wind like a sail on a yacht, but it can only bend by about 100 nanometers before its images become too blurry. The best way to overcome this problem is to make the mirror as stiff as is practical, while also limiting its weight.</p>
<p>We accomplish this feat by casting the mirror into a lightweight honeycomb structure. Each mirror has a continuous glass facesheet on top and an almost continuous backsheet, each about one inch thick. Holding the two sheets together is a honeycomb structure consisting of half-inch-thick ribs in a hexagonal pattern. Our honeycomb mirrors are 70 centimeters thick, making them stiff enough to withstand the forces of gravity and wind. But they’re 80 percent hollow and weigh about 16 tons each, light enough that they don’t bend significantly under their own weight.</p>
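<p>As a rough check on those numbers, here is a back-of-the-envelope mass estimate in Python. The glass density is an assumption (typical for borosilicate), not a figure from the article; the thickness and hollow fraction are the article’s:</p>

```python
import math

DENSITY = 2230          # kg/m^3, borosilicate glass (assumed, approximate)
DIAMETER = 8.4          # m, mirror diameter
THICKNESS = 0.70        # m, honeycomb blank thickness
HOLLOW_FRACTION = 0.80  # the honeycomb removes ~80% of the volume

# Mass of a solid blank of the same outer dimensions, then the honeycomb.
volume_solid = math.pi * (DIAMETER / 2) ** 2 * THICKNESS
mass_solid = DENSITY * volume_solid
mass_honeycomb = mass_solid * (1 - HOLLOW_FRACTION)

print(f"solid blank:     {mass_solid / 1000:.0f} tonnes")
print(f"honeycomb blank: {mass_honeycomb / 1000:.0f} tonnes")
```

<p>The estimate lands near the article’s quoted weight of about 16 tons, and shows that a solid blank of the same size would be roughly five times heavier.</p>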
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/106706/original/image-20151218-27894-4doo4f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mold for casting an 8.4-meter honeycomb mirror for the GMT. The glass will melt around the hexagonal boxes to form the honeycomb.</span>
<span class="attribution"><span class="source">Ray Bertram, Steward Observatory</span></span>
</figcaption>
</figure>
<h2>Crafting the mirror</h2>
<p>We start by melting glass into a complex mold that’s the negative of the honeycomb mirror we want to end up with. While the glass is molten, the furnace spins at five revolutions per minute; the centrifugal force pushes the glass’ surface into the concave parabolic shape that can focus light from a distant star. Watch the video below to see the construction of the honeycomb mold and the spin-casting process.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/c-lBKuHqHk0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Spin-casting the honeycomb mirror.</span></figcaption>
</figure>
<p>The spin-cast mirror surface doesn’t yet have the optical quality needed to make sharp images. But spinning gives it the right overall curvature and saves our having to grind out 14 tons of glass from a flat surface – almost as much glass as is left in the finished mirror. </p>
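<p>The curvature produced by spinning follows from basic mechanics: a rotating liquid settles into a paraboloid whose focal length depends only on the spin rate. A quick Python check using the article’s five revolutions per minute (the resulting figure is a sketch, not an official GMT specification, though it is roughly in line with the primary’s quoted focal length):</p>

```python
import math

g = 9.81                        # m/s^2, gravitational acceleration
rpm = 5                         # furnace spin rate from the article
omega = rpm * 2 * math.pi / 60  # angular speed in rad/s

# A rotating liquid surface takes the shape z = omega^2 r^2 / (2 g),
# a paraboloid whose focal length is f = g / (2 omega^2).
focal_length = g / (2 * omega ** 2)
print(f"focal length: {focal_length:.1f} m")
```

<p>Spin faster and the paraboloid deepens, shortening the focal length; the furnace speed is chosen so the cast surface is already close to the shape the finished mirror needs.</p>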
<h2>Polishing the surface</h2>
<p>Next we need to polish the surface to an accuracy of a small fraction of the light’s wavelength, so it will form the sharpest images possible. The mirror surface has to match the ideal, nearly parabolic surface to about 25 nanometers – about 3 ten-thousandths of the width of a human hair. That’s really, really smooth; if the mirror were scaled up to the size of North America, the tallest mountain would be one inch high and the deepest canyon would be one inch deep.</p>
<p>To guide our polishing, the first step is to create a superfine contour map of the mirror’s surface, with steps of less than 10 nanometers. As our “ruler,” we use red laser light; its divisions are the light’s wavelength – about 630 nanometers – and it can be read to about one hundredth of a division.</p>
<p>The measuring instrument illuminates the mirror surface, collects the reflected light, and compares the path lengths of the rays reflected by different locations on the mirror. A ray that reflects off a high spot will have a shorter path than a ray that hits a low spot. The instrument uses this information to construct the contour map of the mirror’s surface.</p>
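<p>In numbers: a round trip to a high spot of height h is shorter by 2h, so a path difference of one wavelength corresponds to half a wavelength of surface height. A minimal Python sketch of that conversion (the function name is ours, for illustration):</p>

```python
WAVELENGTH = 630e-9  # red laser wavelength, from the article

def surface_height(path_difference_fraction: float) -> float:
    """Convert a measured path difference, expressed as a fraction of a
    wavelength, into surface height. The light travels to the mirror and
    back, so the round-trip path changes by twice the height."""
    return path_difference_fraction * WAVELENGTH / 2

# Smallest readable division: one hundredth of a wavelength.
print(surface_height(0.01) * 1e9, "nm")  # ~3.15 nm height resolution
```

<p>Reading the laser “ruler” to a hundredth of a division therefore resolves surface heights of a few nanometers, consistent with the sub-10-nanometer contour steps mentioned above.</p>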
<p>The basic principle of polishing is to rub the surface with a disk-shaped tool, removing glass selectively from the spots that are too high. A fine abrasive such as rouge (iron oxide) slowly removes glass, atom by atom, through mechanical and chemical processes.</p>
<p><em>Figuring</em> is removing glass explicitly from high spots identified in the contour map, for example by having the tool rub there longer. This is effective on scales larger than about 10 centimeters. <em>Smoothing</em> is what happens when you rub a stiff tool over a rough surface: the tool naturally sits on the high spots and removes more material there, even without any guidance from a contour map. This is effective on scales smaller than 10 centimeters. Both methods are more difficult when the mirror surface is aspheric, meaning its curvature changes from point to point, which is very much the case for the GMT segments.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/107217/original/image-20160104-28966-yf299h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An 8.4-meter mirror for the Large Synoptic Survey Telescope being polished at the Richard F. Caris Mirror Lab.</span>
<span class="attribution"><span class="source">Steward Observatory</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>We’ve developed several new polishing tools to address the challenges of polishing large mirrors for telescopes. One essential feature of any polishing tool is that it match the shape of the mirror surface to an accuracy of around 1 micron. The larger tool in the background is a <a href="http://doi.org/10.1364/AO.33.008094">complex electro-mechanical system</a> that changes the shape of a stiff aluminum disk as it moves over the surface, so it always matches the local curvature of the mirror.</p>
<p>The smaller tool in the foreground is much simpler. Similar to <a href="http://dx.doi.org/10.1038/457028a">Galileo’s reinvention of a carnival toy</a> as an astronomical telescope, our <a href="http://dx.doi.org/10.1364/OE.18.002242">new idea came from Silly Putty</a> – a non-Newtonian fluid that flows like a liquid over a long period of time but acts like a solid on short timescales. We <a href="http://dx.doi.org/10.1364/OE.18.022515">harness those intrinsic properties</a> to achieve both figuring and smoothing. </p>
<p>Our tool, containing Silly Putty enclosed by a thin rubber diaphragm, slowly moves over the surface of the mirror while simultaneously rapidly orbiting around itself. The Silly Putty is stiff over the quick period of the orbit, which smooths out small-scale irregularities in the mirror surface. Over the longer time it takes to move across the mirror, the Silly Putty flows easily, so the tool always matches the surface’s shape. As a result, it removes glass at a predictable rate and in a predictable pattern that doesn’t vary as it moves across the mirror.</p>
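<p>A standard way to describe a material that acts solid on short timescales and liquid on long ones is the Maxwell viscoelastic model, in which the elastic (storage) response depends on how the deformation frequency compares with a relaxation time. The sketch below is illustrative only; the relaxation time and frequencies are assumptions, not measured values for Silly Putty or the lab’s tool:</p>

```python
import math

def storage_modulus_fraction(omega: float, tau: float = 0.1) -> float:
    """Maxwell model: fraction of the full elastic modulus seen at angular
    frequency omega, G'(w)/G = (w*tau)^2 / (1 + (w*tau)^2), for an
    assumed relaxation time tau (seconds)."""
    wt = omega * tau
    return wt ** 2 / (1 + wt ** 2)

fast_orbit = 2 * math.pi * 5      # a few hertz: the tool's rapid orbit
slow_stroke = 2 * math.pi * 0.01  # much slower traverse across the mirror

print(storage_modulus_fraction(fast_orbit))   # near 1: behaves like a solid
print(storage_modulus_fraction(slow_stroke))  # near 0: flows like a liquid
```

<p>The same material is stiff at the orbit frequency, smoothing small-scale bumps, yet compliant at the traverse frequency, conforming to the mirror’s changing shape.</p>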
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/106374/original/image-20151216-30110-7a4vts.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Giant Magellan Telescope as it will look after construction on Cerro Las Campanas in Chile.</span>
<span class="attribution"><a class="source" href="http://www.gmto.org/gallery/">Giant Magellan Telescope – GMTO Corporation</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Countdown to installation</h2>
<p>Here at the Mirror Lab, we finished making the first Giant Magellan Telescope segment in 2012. After a pause for work on two other mirrors, the lab is in the process of grinding Segments 2 and 3. Segment 4 has just finished cooling to room temperature after spin-casting in September 2015. We are well on the way to manufacturing the full 25-meter primary mirror. </p>
<p>Getting these near-perfect mirrors from our lab in Arizona to a mountaintop in Chile presents another set of challenges. They travel by tractor-trailer on land, and by freight ship from California to Chile. The keys to safe transport are distributing the weight of the mirror over hundreds of support points and having several layers of suspension between the mirror and the road or ship deck. </p>
<p>The GMT project schedule calls for a preliminary first light, with four segments installed in the telescope, in 2022. We expect all seven segments to be scanning the cosmos starting in 2024. </p>
<p>Many of us who work on the GMT see it as the way to open new windows into the universe, as the Hubble Space Telescope (HST) has done over the last 25 years. That orbiting telescope was a generous gift to the next generation from the people who worked on the project for decades before it launched. HST’s deep space images amazed, motivated and inspired many of us on Earth. The GMT project team dreams of passing on a similar gift for future generations.</p>
<p class="fine-print"><em><span>Buddy Martin works for Steward Observatory, part of the University of Arizona. He receives funding from the Giant Magellan Telescope Organization. The University of Arizona is a partner in the Giant Magellan Telescope.</span></em></p><p class="fine-print"><em><span>Dae Wook Kim works for College of Optical Sciences and Richard F. Caris Mirror Lab, part of the University of Arizona. He receives funding from the Giant Magellan Telescope Organization.</span></em></p>The laws of physics dictate that to pick out ever fainter objects from space and see them more sharply, we’re going to need a bigger telescope. And that means we need massive mirrors.Buddy Martin, Project Scientist at the Steward Observatory and Associate Research Professor of Optical Sciences, University of ArizonaDaewook Kim, Associate Professor of Optical Sciences, University of ArizonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/476632015-09-23T14:41:28Z2015-09-23T14:41:28ZTen years on, invisibility cloaks are close to becoming a manufacturable reality<figure><img src="https://images.theconversation.com/files/95534/original/image-20150921-31495-li34pr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A new invisibility cloak can hide objects using an ultrathin layer of nanoantennas that reflect off light. Are humans next?</span> <span class="attribution"><a class="source" href="http://media.eurekalert.org/multimedia_prod/pub/media/99394.jpg">Courtesy of Xiang Zhang group, Berkeley Lab/UC Berkeley</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Invisibility has long been one of the marvels in science fiction and fantasy – and more recently in physics. 
But while physicists have figured out the concept for how to make invisibility cloaks, they are yet to build a practical device that can hide human-sized objects in the way that Harry Potter’s cloak can. </p>
<p>Objects are visible to the human eye because they distort light waves according to their shape. We see the objects by registering these distortions when the light from the objects hits our eyes. In a similar way, an object can also be visible to a radar, which transmits radio waves or microwaves that bounce off objects in their path. </p>
<p>So far, most invisibility cloaks are made from engineered materials that can bend light in a way that manipulates the eye – or another device such as a radar. However, these typically only work for <a href="http://www.sciencemag.org/content/314/5801/977">tiny objects</a>. But that may be about to change. A <a href="http://www.sciencemag.org/lookup/doi/10.1126/science.aac9411">new experiment</a> has created a cloak that, for the first time, can hide small objects of any shape completely from visible light. The cloak, which is thinner and more flexible than any of its predecessors, can also be scaled up to hide bigger objects – potentially transforming the science into something that can be manufactured and sold.</p>
<h2>Messy metamaterials</h2>
<p>The <a href="http://www.sciencemag.org/content/314/5801/977">first invisibility cloak</a> was created in 2006 by British scientist John Pendry. It consisted of a material that could bend microwaves, but not visible light, around a tiny, 2D object measuring just a couple of micrometers – making it look like the waves had travelled straight on and never touched the object. Since then, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3105339/">better versions</a> that work for other wavelengths in both two and three dimensions have been created. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/95686/original/image-20150922-16695-1iyj1yd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">We may still be far away from making humans invisible, but at least we’re now one step closer.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/cdpm/3721117577/in/photolist-6EPGBB-4Po27B-4G1D4n-dC5tXn-apuhVz-wRBZ5P-4YPjDH-4qGyw9-dyGzWb-7fcfT2-ddo7g-5zUtmq-ddo6K-gKu4wk-b8iMre-avFtLM-bb7huH-7UGa95-a3TYxh-b1YoT2-dwMPsd-dwMPm9-2W1Hu-dwuDme-dwMNWw-dwMP7f-dwMPeY-dwGiw8-9YhmAu-8xLhPS-dCaVkW-dCaUQ9-a56XuD-dC5ugF-dvgzLP-dC5tRR-dC5u2M-dCaUUG-dCaVfQ-dC5tP4-doLiCf-bnHyfW-aQLTFB-9xQfTx-eBH2ZY-bwv9ZM-fNrq7-fNrqy-fNrpB-fNrpY">Charles D P Miller/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Nearly all of these cloaks rely on the use of <a href="https://theconversation.com/invisibility-cloaks-closer-thanks-to-digital-metamaterials-31562">metamaterials</a>, which are a class of material engineered to produce properties that don’t occur naturally. They typically have small internal structures built out of glass, metal, plastics or dielectrics (electrical insulators loaded with nanoparticles). In this way they can be made to interact with light in unusual ways. However, these structures are generally bulky and can be hard to scale up. </p>
<p>Another problem is that <a href="https://www.osapublishing.org/oe/abstract.cfm?uri=oe-14-25-12457">it is difficult</a> to make invisibility cloaks conceal light completely. If there’s just a little bit leaking out, the hidden object can’t be completely invisible. </p>
<h2>Promising technique</h2>
<p>The new cloak is more sophisticated than past devices. It is ultrathin and able to conceal a small three-dimensional object measuring 36 by 36 micrometers by completely reflecting a wavelength of visible light, which has not been done before. And perhaps the most important feature is that the technology could be scaled up to hide bigger objects. </p>
<p>The downside? It only works for light at a 730-nanometer wavelength, which is visible light near the infrared part of the spectrum. While this could be useful for hiding things from specific devices, such as radar, it would have to be improved to work across all wavelengths of the visible spectrum before it could hide objects from the human eye. While we are still some way away from doing this, we are getting closer.</p>
<p>The cloak hides objects by wrapping them in a layer of gold <a href="http://www.physicscentral.com/explore/action/nanoantennas.cfm">nanoantennas</a> only 80 nanometers thick. The antennas manipulate the light hitting the object so that it appears to bounce off a flat surface instead, making it impossible to see the geometry of the object.</p>
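<p>The principle can be sketched in a few lines: a bump of height h adds an extra round-trip phase of 2kh to the reflected light, and each nanoantenna applies the opposite shift so the outgoing wavefront matches that of a flat mirror. The bump profile below is hypothetical, invented for illustration:</p>

```python
import numpy as np

WAVELENGTH = 730e-9  # the cloak's operating wavelength, from the article
k = 2 * np.pi / WAVELENGTH

# Hypothetical bump on the hidden object (positions and heights in meters).
x = np.linspace(0, 36e-6, 200)
height = 1e-6 * np.exp(-((x - 18e-6) / 6e-6) ** 2)

# A ray reflecting off a bump of height h travels 2h less than one off a
# flat surface, distorting the reflected wavefront phase by 2*k*h.
distortion = 2 * k * height

# Each nanoantenna applies the opposite phase shift at its location,
# flattening the outgoing wavefront.
antenna_phase = -2 * k * height
residual = distortion + antenna_phase
print(np.max(np.abs(residual)))  # 0.0: indistinguishable from a flat mirror
```

<p>The catch, as noted above, is that k depends on wavelength, so a compensation tuned for 730 nanometers fails elsewhere in the spectrum.</p>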
<p>The technology of invisibility cloaking has <a href="http://news.discovery.com/tech/gear-and-gadgets/top-10-uses-invisibility-tech-130130.htm">many potential uses</a>, ranging from military applications to bio medicine, computing and even energy harvesting.</p>
<p>For example, it could be used to render an aircraft <a href="http://www.dailymail.co.uk/sciencetech/article-2505168/Could-invisibility-cloak-militarys-best-ally.html">invisible to radar</a>. Stealth aircraft, built to avoid detection by radar, are thought to have first been produced in Germany during World War II and use a number of technologies that reduce the reflection and emission of light. The cloak could also be used to isolate closely placed antennas, which would reduce the footprint of <a href="http://www.baesystems.com/article/BAES_166271/british-scientists-defy-the-laws-of-physics-to-create-a-flat-lens-that-thinks-its-curved;baeSessionId=l_z6WGXjj33GMkyhH8O9l5gcp_E_GXnhPETvJ4XWwUzmT65x-Nc1!-491440614?_afrLoop=879318440906000&_afrWindowMode=0&_afrWindowId=null#!%40%40%3F_afrWindowId%3Dnull%26_afrLoop%3D879318440906000%26_afrWindowMode%3D0%26_adf.ctrl-state%3D11fdl0izoq_4">antenna arrays</a> and make future communication systems more compact.</p>
<p>Meanwhile, the UK <a href="http://www.quest-spatial-transformation.org/">QUEST project</a>, led by Queen Mary, University of London to find new ways of manipulating electromagnetic fields, has challenged the fundamental physics of <a href="http://www.nature.com/articles/srep04130">thin absorbers</a>, which dissipate unwanted incoming waves. By combining graphene with metamaterials, the project aims to develop “stealthy” wallpapers that create wireless-secure environments, reduce interference between handheld devices and reuse radio spectrum to increase mobile communication capacity.</p>
<p>With so many important applications, it is surely just a matter of time before the cloaks get better and more practical. With the help of ever-emerging advanced manufacturing tools, ten years on, the future of invisibility is coming into view.</p><img src="https://counter.theconversation.com/content/47663/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Yang Hao receives funding from the EPSRC.</span></em></p>Research into invisibility cloaks has been flourishing over the past decade yet they have still not reached the market. But that may be about to change.Yang Hao, Professor of Antennas and Electromagnetics, Queen Mary University of LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/457962015-08-07T18:09:21Z2015-08-07T18:09:21ZRevealed: why animals’ pupils come in different shapes and sizes<figure><img src="https://images.theconversation.com/files/91167/original/image-20150807-27582-1kizo41.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Spot the fox, wolf, sheep and...cuttlefish.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/treehouse1977/2063709940/; https://www.flickr.com/photos/radio_free_rlyeh/17467389555/; https://www.flickr.com/photos/60740813@N04/10401207706/in/; https://www.flickr.com/photos/wwarby/4695232153/">Jim Champion (sheep); R'lyeh (wolf); Michele Lamberti (fox); William Warby (cuttlefish)</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Wolves and foxes are closely related and share many of the same characteristics. But look at their eyes – where wolves have rounded pupils like humans, foxes instead have a thin vertical line. But it isn’t just canines –across the animal kingdom, pupils come in all shapes and sizes. So why the differences?</p>
<p>It’s a question that has long interested scientists working on vision and optics. In a new study published in the journal <a href="http://advances.sciencemag.org/content/1/7/e1500391">Science Advances</a>, colleagues from Durham, <a href="http://bankslab.berkeley.edu/">Berkeley</a> and I explain why these pupil shapes have developed.</p>
<p>Goats, sheep, horses, domestic cats, and numerous other animals have pupils which vary from fully circular in faint light to narrow slits or rectangles in bright light. The <a href="https://archive.org/details/vertebrateeyeits00wall">established theory</a> for this is that elongated pupils allow greater control of the amount of light entering the eye. For instance, a domestic cat can change its pupil area by a factor of 135 from fully dilated to fully constricted, whereas humans, with a round pupil, can only change area by a factor of 15. This is particularly useful for animals that are active both day and night, allowing for much better vision in low light conditions.</p>
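The area figures above follow directly from pupil geometry. A minimal sketch, assuming a circular human pupil ranging from roughly 2 mm to 8 mm in diameter (typical textbook values, not numbers from the study itself):

```python
import math

def pupil_area(diameter_mm: float) -> float:
    """Area of a circular pupil from its diameter."""
    return math.pi * (diameter_mm / 2) ** 2

# Assumed human pupil diameters: ~2 mm fully constricted, ~8 mm fully dilated.
human_ratio = pupil_area(8.0) / pupil_area(2.0)
print(f"Human area change: {human_ratio:.0f}x")  # ~16x, in line with the article's factor of 15
```

A slit pupil can constrict in two stages (width and length), which is how a cat reaches a far larger ratio than a round pupil allows.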
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=327&fit=crop&dpr=1 600w, https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=327&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=327&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=411&fit=crop&dpr=1 754w, https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=411&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/91195/original/image-20150807-27593-1nargp4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=411&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The cat on the right has got its night-vision goggles on.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/markjsebastian/1394560975/%20;%20https://www.flickr.com/photos/kurt-b/4730333545/">Mark Sebastian (L); Kurt Bauschardt (R)</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>However, if the only reason for elongated pupils was to control the amount of light entering the eye, the orientation would not be important: horizontal, vertical, or diagonal would all offer the same advantages. Instead, the pupils are almost always horizontal or vertical, which suggests there must be other benefits which explain this orientation.</p>
<h2>Pupils fit for every niche</h2>
<p>Our work has focused on the visual benefits of vertical and horizontal pupils in mammals and snakes. One of the most interesting factors we found is that the orientation of the pupil can be linked to an animal’s ecological niche. This has been <a href="http://www.ncbi.nlm.nih.gov/pubmed/20629855">described before</a>, but we went one step further to quantify the relationship. </p>
<p>We found animals with vertically elongated pupils are very likely to be ambush predators, which hide until they strike their prey from a relatively close distance. They also tend to have eyes on the front of their heads. Foxes and domestic cats are clear examples of this. The difference between foxes and wolves comes down to the fact that wolves are not ambush predators – instead they hunt in packs, chasing down their prey.</p>
<p>In contrast, horizontally elongated pupils are nearly always found in grazing animals, which have eyes on the sides of their head. They are also very likely to be prey animals such as sheep and goats.</p>
<p>We produced a computer model of eyes which simulates how images appear with different pupil shapes, in order to explain how orientation could benefit different animals. This modelling showed that the vertically elongated pupils of ambush predators enhance their ability to judge distance accurately without having to move their head, which could give away their presence to potential prey.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=904&fit=crop&dpr=1 600w, https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=904&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=904&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1135&fit=crop&dpr=1 754w, https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1135&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/91203/original/image-20150807-27617-znst88.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1135&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sheep can usually see you coming.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/pocheco/14908306056/">Sarah Nichols</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Grazing animals have different problems to deal with. They need to check all around for predators and they need to flee rapidly in case of attack. Having eyes towards the side of their head helps them see nearly all around them. Having a horizontal pupil enhances the amount of light they can receive in front of and behind them, while reducing the amount of light from above and below. This gives them panoramic vision along the ground, helping them detect potential predators as early as possible. The horizontal pupil also enhances the image quality of horizontal planes, and this enhanced view at ground level is an advantage when running at speed to escape.</p>
<p>So, vertically elongated pupils help ambush predators capture their prey and horizontally elongated pupils help prey animals avoid their predators.</p>
<p>We realised our hypothesis predicted that shorter animals should have a greater benefit from vertical pupils than taller ones. So we rechecked the data on animals with frontal eyes and vertical pupils and found that 82% are what is considered “short” (which we defined as having a shoulder height of less than 42cm) compared with only 17% of animals with circular pupils. </p>
<p>We also realised that there is a potential problem with the theory for horizontal elongation. If horizontal pupils are such an advantage to grazing animals, what happens when they bend their head down to graze? Wouldn’t the pupil then no longer be horizontally aligned with the ground?</p>
<p>We checked this by observing animals in both a zoo and on farms. We found that eyes of goats, deer, horses, and sheep rotate as they bend their head down to eat, keeping the pupil aligned with the ground. This remarkable eye movement, which is in opposite directions in the two eyes, is known as cyclovergence. Each eye in these animals rotates by 50 degrees, possibly more (we can only make the same movement by a few degrees).</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ViXBQsirTeY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Cyclovergence, explained.</span></figcaption>
</figure>
<p>There are still some unexplained pupils in nature. For example, mongooses have forward-facing eyes but horizontal pupils, geckos have huge circular pupils when dilated which reduce down to several discrete pinholes when constricted and <a href="http://www.sciencedirect.com/science/article/pii/S0042698913000539">cuttlefish have “W”-shaped pupils</a>. Understanding all these variations is an interesting challenge for the future.</p><img src="https://counter.theconversation.com/content/45796/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Gordon Love receives funding from the Engineering and Physical Sciences Research Council for some of this work.</span></em></p>Study shows how eyes that work for hunters are no use for the hunted.Gordon Love, Professor of Physics, Durham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/450722015-07-27T15:00:25Z2015-07-27T15:00:25ZWe transformed living cells into tiny lasers<figure><img src="https://images.theconversation.com/files/89538/original/image-20150723-22821-ernjsq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Green lasers glowing within cells.</span> <span class="attribution"><span class="source">Matjaž Humar and Seok Hyun Yun</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>In the last few decades, lasers have become an important part of our lives, with applications ranging from laser pointers and CD players to medical and research uses. Lasers typically have a very well-defined direction of propagation and very narrow and well-defined emission color. We usually imagine a laser as an electrical device we can hold in our hands or as a big box in the middle of a research laboratory.</p>
<p>Fluorescent dyes have also become commonplace, routinely used in research and diagnostics to identify specific cell and tissue types. Illuminating a fluorescent dye makes it emit light with a distinctive color. The color and intensity are used as a measure, for example, of concentrations of various chemical substances such as DNA and proteins, or to tag cells. The intrinsic disadvantage of fluorescent dyes is that only a few tens of different colors can be distinguished. </p>
<p>Combining the two technologies, researchers have long known that placing a dye inside an optical cavity – a device that confines light, such as a pair of facing mirrors – creates a laser.</p>
<p>Taking it a step further, our research, described in the journal Nature Photonics, shows we can create a miniature laser that can <a href="http://nature.com/articles/doi:10.1038/nphoton.2015.129">emit light inside a single live cell</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/SHbXDlnLIYA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Tiny, tiny lasers</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=583&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=583&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=583&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=732&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=732&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89689/original/image-20150724-8478-c0ljzm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=732&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Green laser bead in a cell.</span>
<span class="attribution"><span class="source">Matjaž Humar and Seok Hyun Yun</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>We made our lasers out of solid polystyrene beads ten times smaller than the diameter of a human hair. The beads contain a fluorescent dye and the surface of the bead confines light, creating an optical cavity. We fed these laser beads to live cells in culture, which eat the lasers within a few hours. After that, we can operate the lasers by illuminating them with external light without any harm to the cells.</p>
<p>Then we capture the light emitted from the cells via a spectrometer and analyze the spectrum. The lasers can act as very sensitive sensors, enabling us to better understand cellular processes. For example, we measured the change in the refractive index – the way light travels through the cell – while varying the concentration of salt in the medium surrounding the cells. The refractive index is directly related to the concentration of chemical constituents within the cells, such as DNA, proteins and lipids.</p>
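The sensing principle can be sketched with the standard whispering-gallery approximation (not spelled out in the article): resonances occur roughly where a whole number of wavelengths fits around the bead’s circumference, so the resonant wavelength shifts in proportion to any change in refractive index. The bead radius, mode number and index values below are illustrative assumptions only:

```python
import math

# Approximate whispering-gallery resonance condition for a dielectric bead:
#   m * wavelength = 2 * pi * n * R
def resonance_wavelength_um(n: float, radius_um: float, m: int) -> float:
    return 2 * math.pi * n * radius_um / m

R = 5.0    # bead radius in micrometres (hypothetical)
m = 60     # resonance mode number (hypothetical)
lam0 = resonance_wavelength_um(1.59, R, m)  # polystyrene, n ~ 1.59
lam1 = resonance_wavelength_um(1.60, R, m)  # after a small effective-index change
shift_nm = (lam1 - lam0) * 1000
print(f"Resonance shift: {shift_nm:.1f} nm for a 0.01 index change")
```

The key point is the proportionality: a tiny index change produces a measurable wavelength shift, which a spectrometer can resolve.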
<p>Further, lasers can be used for cell tagging. Each laser within a cell emits light with a slightly different fingerprint that can be easily detected and used as a bar code to tag the cell. Since a laser has a very narrow spectral emission, a huge number of unique bar codes can be produced, something that was impossible before. </p>
<p>With careful laser design, up to a trillion cells (1,000,000,000,000) could be uniquely tagged. That’s comparable to the total number of cells in the human body. So in principle, it could be possible to individually tag and track every single cell in the human body. This is a huge leap from cell-tagging methods demonstrated until now, which can tag at most a few hundred cells. So far we’ve tagged cells only in Petri dishes, but there’s no reason it shouldn’t also work for cells within a living body.</p>
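To see where a trillion barcodes could come from, one can treat each bead as emitting one of N spectrally distinguishable lines and count the unordered combinations of k beads per cell. The numbers below are hypothetical, chosen only to illustrate the scale, not taken from the study:

```python
from math import comb

# Unordered selections with repetition ("stars and bars"): C(N + k - 1, k)
def n_barcodes(n_lines: int, beads_per_cell: int) -> int:
    return comb(n_lines + beads_per_cell - 1, beads_per_cell)

# Assuming ~1000 distinguishable spectral lines and 5 beads per cell:
print(n_barcodes(1000, 5))  # about 8.4e12, comfortably over a trillion
```

The combinatorics explains why narrow laser lines beat fluorescent dyes: with only tens of distinguishable dye colors, the same calculation yields far fewer codes.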
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89691/original/image-20150724-8451-xrye2l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Green cells with their blue nuclei were injected with red oil droplets that act as deformable lasers.</span>
<span class="attribution"><span class="source">Matjaž Humar and Seok Hyun Yun</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Alternative materials for cellular lasers</h2>
<p>Instead of a solid bead, we also used a droplet of oil as a laser inside cells. Using a micropipette, we injected a tiny drop of oil containing fluorescent dyes into a cell. In contrast to the solid bead, forces acting inside the cell can deform the droplet. By analyzing the light emitted by a droplet laser, we can measure that deformation and calculate the force acting on the droplet. It’s a way to get a very precise picture of the kinds of mechanical forces exerted within cells by processes such as cellular migration and division.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89693/original/image-20150724-8457-1qsc6cy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Yellow lipid cells within subcutaneous fat tissue, which can be used as natural lasers.</span>
<span class="attribution"><span class="source">Matjaž Humar and Seok Hyun Yun</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Finally, we realized that fat cells already contain lipid droplets that can work as natural lasers. They don’t need to eat or be injected with lasers, just supplied with a nontoxic fluorescent dye. That means each of us already has millions of lasers inside our fat tissue that are just waiting to be activated to produce laser light. Next time you’re thinking about trimming down, you could just reconceptualize your body fat as a huge number of tiny lasers.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=547&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=547&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=547&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=687&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=687&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89696/original/image-20150724-8457-11ux4ju.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=687&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Inserting an optical fibre into a piece of pig’s skin to excite and extract the laser light generated by subcutaneous fat cells.</span>
<span class="attribution"><span class="source">Matjaž Humar and Seok Hyun Yun</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Our new cell laser technology will help us understand cellular processes and improve medical diagnosis and therapies. It could eventually provide remote sensing inside the human body without the need for sample collection. A cell is a smart machine, equipped with a computer with “DNA Inside.” Specialized cells, such as immune cells, can find disease and sites of inflammation, carrying the laser to the target for laser-based diagnosis and therapies. Imagine, rather than a biopsy of a lump that doctors suspect to be cancer, cell lasers helping determine what it’s made of. Cell lasers also hold promise as a way of delivering laser light for therapies, for example to activate a photosensitive drug at the target site to kill microbes or cancerous cells.</p><img src="https://counter.theconversation.com/content/45072/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Matjaž Humar receives funding from Marie Curie International Outgoing Fellowship within the 7th European Community Framework Programme.</span></em></p><p class="fine-print"><em><span>Seok-Hyun Yun receives funding from National Science Foundation and National Institutes of Health.</span></em></p>Using fluorescent dye, researchers figured out how to turn cells into lasers – with applications for cell tagging and tracking as well as medical diagnoses and therapies.Matjaž Humar, Research Fellow in Dermatology, Harvard UniversitySeok-Hyun Yun, Associate Professor of Dermatology, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/359832015-01-08T06:10:01Z2015-01-08T06:10:01ZTwisted light beams are coming – and they will boost your internet speed<figure><img src="https://images.theconversation.com/files/68393/original/image-20150107-1995-79x1n6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Wicked fast communication.</span> <span class="attribution"><span class="source">Universities of Bristol and Dundee</span></span></figcaption></figure><p>Your home internet connection works in one of two ways. One involves using a copper wire, probably your telephone line, to send electrical signals from the internet provider to your home and back. This technology hasn’t changed much since the days of the telegraph. The other technology involves the use of optical cables, which convert electrical signals into light and back. This increases the speed of data transfer because light signals can travel longer distances without distortion.</p>
<p>Now scientists are exploring a new way of <a href="http://www.bbc.co.uk/news/science-environment-29953239">transmitting data using “twisted” light beams</a>. The light waves in these beams form helical patterns – similar to the structure of DNA – rotating about a central axis. Because this is different from the standard light beam used for communication, twisted light could provide a new communication channel in optical cables. </p>
<p>Data transmission across optical cables involves the transfer of 1s and 0s from one point to the other. Standard methods use the presence and absence of a light beam to represent those 1s and 0s. However, this method puts a physical limitation on how many beams of light can travel through a single optical cable.</p>
<h2>Boosting data transfer</h2>
<p>Twisted light beams offer a different way to transfer the same data. Because their properties are unlike those of normal light beams, they can be coded differently. And because this coding won’t interfere with the standard methods, it could dramatically increase the data-carrying capacity of the same optical cables. Despite recent progress, however, the methods for generating twisted light are few and, crucially, they are slow.</p>
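One way to see the capacity gain: if a transmitter can choose among M mutually distinguishable twist states per symbol, each symbol carries log2(M) bits, versus 1 bit for simple presence/absence signalling. A toy calculation with an assumed mode count (the article gives no specific figure):

```python
import math

# Bits carried per symbol when choosing among n_modes distinguishable states.
def bits_per_symbol(n_modes: int) -> float:
    return math.log2(n_modes)

print(bits_per_symbol(8))  # 3.0 bits per symbol, versus 1 for on/off keying
```

Because twist states are orthogonal to ordinary intensity modulation, the two codings can in principle run over the same fiber simultaneously.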
<p>To address these limitations, our team of engineers has developed a new acousto-optic device that can twist beams of light at speeds never before achieved. The device is also able to form a wide range of patterns, shaping and steering light beams with more dexterity than was previously possible. The results of the study were published in the journal <a href="http://www.opticsinfobase.org/oe/home.cfm">Optics Express</a>.</p>
<p>The new device consists of 64 tiny “piezoelectric” sound sources arranged in a ring, each of which act as high frequency loudspeakers. Piezoelectric materials convert electrical signals into vibrations – and thus sound. Together these sources are used to generate carefully controlled sound fields. </p>
<h2>Let sound do the work</h2>
<p>The key to twisting the light is that the presence of sound subtly changes the refractive index of the material through which it travels. This changing refractive index slightly alters the direction of the light, changing the shape of the beam. With knowledge of the link between sound-wave intensity and its effect on light, almost any beam shape can be created. In our device, we used twisting sound waves to imprint their helical pattern onto a laser beam, forming twisted light. </p>
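The “twist” itself is a helical phase wrapped around the beam axis. A minimal numpy sketch of the phase profile such a device imprints on a plane wave (the twist number l = 3 is an arbitrary illustrative choice):

```python
import numpy as np

# Helical phase profile of a twisted (orbital-angular-momentum) beam:
# exp(i * l * theta), where l counts the twists around the beam axis.
l = 3
size = 256
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
theta = np.arctan2(y, x)            # azimuthal angle at each pixel
phase = np.exp(1j * l * theta)      # phase mask applied to the beam

# Tracing any loop around the axis, the phase winds through l full cycles of 2*pi.
```

Multiplying an incoming beam's field by this mask is, in effect, what the ring of sound sources does optically, and changing the drive pattern changes l.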
<p>While this form of twisting has been achieved before, our device can produce the results extremely quickly. All we need to do is change the electrical patterns on the piezoelectric material, which changes the sound field, eventually causing the shape of the light beam to change. </p>
<p>We can achieve millions of different patterns per second. This means that in the future, laser beam-based devices will be able to be reconfigured much faster than is currently possible. Previously, the fastest rate achieved was a few thousand refreshes per second. This difference matters because, without such a rapid refresh rate, the technology won’t be able to compete with standard optical communication technology. In some ways this difference in speed is like that between dial-up internet and broadband.</p>
<p>In addition to communication, the ability to shape, steer and twist laser beams is useful for many other optical applications, such as optical tweezers, which are light beams that can hold onto cells or similar tiny objects. In many of these applications speed is the key so we are hoping that our new device opens up some exciting possibilities well beyond what we have imagined.</p><img src="https://counter.theconversation.com/content/35983/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bruce Drinkwater receives funding from the Engineering and Physical Sciences Research Council (EPSRC).</span></em></p>Your home internet connection works in one of two ways. One involves using a copper wire, probably your telephone line, to send electrical signals from the internet provider to your home and back. This…Bruce Drinkwater, Professor of Ultrasonics, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/348372014-12-18T10:34:57Z2014-12-18T10:34:57ZWelcome to Politics4K<p>While much of the 2014 midterm election analysis centered on the Republican takeover of the Senate, the pundits may have overlooked an important development: the end of a time when politicians looked a little less lifelike, even to viewers in HD.</p>
<p>Thanks to bigger and better processors inside journalists’ cameras, and, especially, a fourfold increase in resolution on viewers’ digital displays, the next era in political campaigning – let’s call it “Politics4K” – has arrived. </p>
<p>Earlier this fall, New York Times technology columnist Molly Wood <a href="http://www.nytimes.com/2014/10/09/technology/personaltech/sharper-image-4k-tv-gimmick-worth-having.html?_r=0">explained 4K</a>:</p>
<blockquote>
<p>From a technical perspective, the term 4K refers to displays with twice the vertical resolution and twice the horizontal resolution of high-definition TVs. The UHD designation combines the higher pixel count of 4K with improvements to on-screen colors that make the on-screen picture brighter and more realistic.</p>
</blockquote>
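The pixel arithmetic behind the “fourfold increase” is straightforward: doubling the resolution in each direction quadruples the total pixel count (using the consumer UHD figures; cinema “4K” is slightly wider):

```python
# Pixel counts for "Full HD" (1080p) versus a consumer 4K/UHD panel.
hd = 1920 * 1080     # 2,073,600 pixels
uhd = 3840 * 2160    # 8,294,400 pixels
print(uhd // hd)     # 4: twice the pixels in each dimension, four times overall
```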
<p>So by the 2016 presidential election, voters will be able to screen their candidates in unprecedented clarity and color. With nothing less than the White House in the balance, campaigns of all political stripes now need to rethink their campaign optics – or watch their rivals come shining through.</p>
<h2>A milestone moment in campaign optics</h2>
<p>Presidential campaign adviser William P. Wilson – who died last week – may have been the first to understand the importance of campaign optics; <a href="http://www.nytimes.com/2014/12/12/us/william-p-wilson-kennedys-tv-aide-for-historic-1960-debate-is-dead-at-86.html">according to his obituary</a>:</p>
<blockquote>
<p>In 1960 little was understood about the potential reach of television in American politics. Still, though he was just 32 at the time, Mr. Wilson was as experienced with the medium as anyone in the field. He already had the distinction of being the first television consultant ever hired by a presidential campaign.</p>
</blockquote>
<p>In his classic 1979 media study “The Powers That Be,” David Halberstam explains how Wilson – minutes before Senator John F. Kennedy’s first debate against sitting Vice President Richard M. Nixon – convinced a reluctant Kennedy that his face needed some touching-up.</p>
<blockquote>
<p>…Wilson insisted he needed some kind of makeup, mostly to close the pores and keep the shine down, and Kennedy asked if Wilson could do it, and Wilson, who knew the neighborhood, ran two blocks to a pharmacy, bought Max Factor Creme Puff, and made Kennedy up very lightly… On such decisions – Max Factor Creme Puff instead of Shavestick – rode the future leadership of the United States and the free world.</p>
</blockquote>
<p>The Kennedy-Nixon debates in 1960 launched presidential politics into the television age; the medium became a game-changer, even though network broadcasts were black and white, analog and low-resolution by contemporary standards.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/QazmVHAO0os?wmode=transparent&start=48" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The first televised Kennedy-Nixon debate in 1960 was a milestone moment in campaign optics.</span></figcaption>
</figure>
<h2>Image control key for pols</h2>
<p>As the 20th century progressed, camera and television technology improved significantly – and became increasingly unforgiving.</p>
<p>Trust me: as a documentary filmmaker who has worked on a number of political films, I’ve come to realize that nothing correlates with campaign control more than optics. </p>
<p>The staff of former President Gerald Ford expressed displeasure with the close-up I framed up before a two-camera interview in his library studio. After we wrapped, his staff apologized; somehow, the videotape of his preferred wide-shot (think “White House Briefing”) had been perfectly recorded, but my tight-shot (think “60 Minutes”) suffered “technical difficulties” throughout.</p>
<p>In the middle of another interview – this one with sitting Vice President Al Gore – a staffer looking over my shoulder sucker-punched me when I quietly asked my cinematographer to “push in” for an extreme close-up.</p>
<p>Nothing like a shot to the kidney to prove how politics remains a perpetual exercise in control. </p>
<p>During the final year of the Clinton Administration, High-Definition television was in its infancy. After the White House granted me the first access to the Oval Office by a documentary filmmaker since the Kennedy administration, I was awarded a grant to produce my project in HD.</p>
<p>When I showed President Clinton’s special assistant some of our footage on (what was then) Washington’s only HD display, her jaw dropped: never before had she seen her boss depicted so vividly on screen.</p>
<p>In that instant we both realized the game had changed again; politicians would appear even more life-like on television.</p>
<p>Fifteen years later – as the prospects for another Clinton White House loom – another digital technology has reached new heights.</p>
<h2>Optics influences outcomes</h2>
<p>As of October 2014, the market penetration of Ultra HD television was only 7% of American homes. But due to steadily dropping prices for 4K displays – along with the availability of more 4K media – that number <a href="http://www.nytimes.com/2014/10/09/technology/personaltech/sharper-image-4k-tv-gimmick-worth-having.html?_r=0">is expected to grow exponentially</a> by the next presidential election.</p>
<p>Following Netflix’s lead, Amazon Prime commenced streaming 4K media in December. Election Night 2016 broadcast coverage in 4K should be a foregone conclusion. Furthermore, reasonably priced 4K camcorders are already available to the reporters who will be embedded inside the 2016 primary campaigns.</p>
<p>While journalists wielding bulky cameras could once be corralled, the proliferation of these camcorders will make it impossible for aides to shield their candidates from unflattering, high-resolution shots.</p>
<p>There’s a reason why this makes political operatives anxious. <a href="http://www.politico.com/news/stories/0810/40590.html">Study</a> after <a href="https://www.uni-muenster.de/imperia/md/content/psyifp/aeechterhoff/wintersemester2011-12/vorlesungkommperskonflikt/efranpatterson_effphysappnationelect_canadjbehsc1974.pdf">study</a> has shown that to voters, the candidates’ looks matter – in many cases, more than their party affiliation or policy stances. </p>
<p>In the world of politics, optics reign.</p>
<p>So while the next set of presidential candidates can run, they can’t hide from revealing 4K coverage – under all kinds of conditions, indoors and out, many less-than-flattering.</p>
<p>The likeliest prediction is that the Politics4K era will usher in plenty of unintended political consequences. With an electorate getting younger and more tech-savvy every year, how will politicians manage to maintain a youthful, energetic image?</p>
<p>Will the adage “the camera adds 10 pounds” become “the UltraHD camera adds 20 years” for certain candidates?</p>
<p>And will Politics4K become the great equalizer – or will age, gender and racial differences emerge in sharper contrast?</p>
<p>Too bad William P. Wilson didn’t live to see the day that UltraHD politics could be practiced in earnest. My guess is he’d already be working with the younger, more telegenic candidate, just as he did in 1960.</p><img src="https://counter.theconversation.com/content/34837/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ted Bogosian does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While much of the 2014 midterm election analysis centered on the Republican takeover of the Senate, the pundits may have overlooked an important development: the end of a time when politicians looked a…Ted Bogosian, Instructor and Visiting Filmmaker, Duke UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/153062013-06-19T09:00:12Z2013-06-19T09:00:12ZMore data storage? Here’s how to fit 1,000 terabytes on a DVD<figure><img src="https://images.theconversation.com/files/25828/original/t5gbpxcm-1371621655.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Using nanotechnology, researchers have developed a technique to increase the data storage capacity of a DVD from a measly 4.7GB to 1,000TB.</span> <span class="attribution"><span class="source">Nature Communications</span></span></figcaption></figure><p>We live in a world where digital information is exploding. Some 90% of the world’s data <a href="http://www.sciencedaily.com/releases/2013/05/130522085217.htm">was generated in the past two years</a>. The obvious question is: how can we store it all?</p>
<p>In <a href="http://www.nature.com/ncomms/2013/130619/ncomms3061/full/ncomms3061.html">Nature Communications today</a>, we, along with Richard Evans from CSIRO, show how we developed a new technique to increase the data capacity of a single DVD from 4.7 gigabytes up to one petabyte (1,000 terabytes). This is the equivalent of 10.6 years of compressed high-definition video, or 50,000 full high-definition movies. </p>
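<p>A quick sanity check on those figures (a sketch; the per-movie size is implied by the numbers above, not stated directly):</p>

```python
# Back-of-the-envelope check on the capacity figures in the text.
dvd_gb = 4.7                      # conventional single-layer DVD
petabyte_gb = 1_000_000           # 1 PB = 1,000 TB = 1,000,000 GB

factor = petabyte_gb / dvd_gb     # capacity increase over a DVD
movie_gb = petabyte_gb / 50_000   # implied size of one full-HD movie

print(round(factor))              # -> 212766 (over 200,000x more data)
print(movie_gb)                   # -> 20.0 GB per movie
```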
<p>So how did we manage to achieve such a huge boost in data storage? First, we need to understand how data is stored on optical discs such as CDs and DVDs.</p>
<h2>The basics of digital storage</h2>
<p>Although optical discs are used to carry software, films, games, and private data, and have great advantages over other recording media in terms of cost, longevity and reliability, their low data storage capacity is their major limiting factor. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/25829/original/n2yt78jq-1371621811.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Adam Foster | Codefor</span></span>
</figcaption>
</figure>
<p>The operation of optical data storage is rather simple. When you burn a CD, for example, the information is transformed to strings of binary digits (0s and 1s, also called <a href="http://en.wikipedia.org/wiki/Bit">bits</a>). Each bit is then laser “burned” into the disc, using a single beam of light, in the form of dots.</p>
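<p>The encoding step described above can be sketched in a few lines of Python (the input string is just an illustration):</p>

```python
# Sketch of the first step of burning a disc: the information is
# transformed into a string of binary digits (bits), each of which
# is then written to the disc as a dot.
data = "CD"
bits = "".join(f"{byte:08b}" for byte in data.encode("ascii"))
print(bits)  # -> 0100001101000100, one burnable dot per bit
```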
<p>The storage capacity of optical discs is mainly limited by the physical dimensions of the dots. But as there’s a limit to the size of the disc as well as the size of the dots, current optical media such as DVDs and Blu-ray discs continue to have relatively low storage density.</p>
<p>To get around this, we had to look at light’s fundamental laws.</p>
<h2>Circumnavigating Abbe’s limit</h2>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=849&fit=crop&dpr=1 600w, https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=849&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=849&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1067&fit=crop&dpr=1 754w, https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1067&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/25831/original/n42jg3cs-1371622079.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1067&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ernst Abbe.</span>
<span class="attribution"><span class="source">Wikimedia Commons</span></span>
</figcaption>
</figure>
<p>In 1873, German physicist <a href="http://en.wikipedia.org/wiki/Ernst_Abbe">Ernst Abbe</a> published a law that limits the width of light beams.</p>
<p>On the basis of this law, the diameter of a spot of light, obtained by focusing a light beam through a lens, cannot be smaller than half its wavelength - around 500 nanometres (500 billionths of a metre) for visible light.</p>
<p>And while this law plays a huge role in modern optical microscopy, it also sets up a barrier for any efforts from researchers to produce extremely small dots - in the nanometre region - to use as binary bits.</p>
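<p>In its common form, Abbe’s law puts the smallest achievable spot diameter at d = λ / (2·NA), where NA is the numerical aperture of the focusing lens. A hedged sketch (the NA value below is illustrative, not from our experiment):</p>

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Diffraction-limited spot diameter: d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# For green light (~500 nm) focused by a lens with NA = 0.5, the spot
# cannot shrink below ~500 nm - the barrier described in the text.
print(abbe_limit_nm(500, 0.5))   # -> 500.0 nanometres
```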
<p>In our study, we showed how to break this fundamental limit by using a two-light-beam method, with different colours, for recording onto discs instead of the conventional single-light-beam method.</p>
<p>Both beams must abide by Abbe’s law, so they cannot produce smaller dots individually. But we gave the two beams different functions:</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=498&fit=crop&dpr=1 600w, https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=498&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=498&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=626&fit=crop&dpr=1 754w, https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=626&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/25830/original/npmycqrs-1371621965.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=626&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Nature Communications</span></span>
</figcaption>
</figure>
<ul>
<li>The first beam (red, in the figure right) has a round shape, and is used to activate the recording. We called it the writing beam</li>
<li>The second beam - the purple donut-shape - plays an anti-recording function, inhibiting the function of the writing beam</li>
</ul>
<p>The two beams were then overlapped. As the second beam cancelled out the first in its donut ring, the recording process was tightly confined to the centre of the writing beam.</p>
<p>This new technique produces an effective focal spot of nine nanometres - or one ten thousandth the diameter of a human hair.</p>
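<p>The donut-beam inhibition trick is closely analogous to the one used in STED microscopy, where the effective spot shrinks as the inhibiting beam’s intensity grows. A hedged sketch borrowing that scaling law (the formula and intensity ratio below are illustrative assumptions, not figures from our paper):</p>

```python
import math

def effective_spot_nm(diffraction_limit_nm, inhibit_ratio):
    """STED-style confinement: d_eff = d / sqrt(1 + I/I_sat), where
    inhibit_ratio = I/I_sat is the donut beam's peak intensity
    relative to the saturation intensity of the inhibition process."""
    return diffraction_limit_nm / math.sqrt(1 + inhibit_ratio)

# Starting from a 500 nm diffraction-limited spot, an inhibition ratio
# of ~3,085 would confine recording to roughly 9 nm (illustrative).
print(round(effective_spot_nm(500, 3085)))  # -> 9
```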
<h2>The technique, in practical terms</h2>
<p>Our work will greatly impact the development of super-compact devices as well as nanoscience and nanotechnology research.</p>
<p>The exceptional penetration of light beams allows for 3D recording or fabrication, which can dramatically increase the data storage capacity - the number of dots - on a single optical device.</p>
<p>The technique is also cost-effective and portable, as only conventional optical and laser elements are used, and allows for the development of optical data storage with long life and low energy consumption, which could be an ideal platform for a Big Data centre.</p>
<p>As the rate of information generated worldwide <a href="http://www.economist.com/node/15557443">continues to accelerate</a>, the push for more storage capacity in compact devices will continue. Our breakthrough has put that target within reach.</p><img src="https://counter.theconversation.com/content/15306/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Min Gu is a Laureate Fellow of the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Yaoyu Cao and Zongsong Gan do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We live in a world where digital information is exploding. Some 90% of the world’s data was generated in the past two years. The obvious question is: how can we store it all? In Nature Communications today…Min Gu, Professor of Optoelectronics, Swinburne University of TechnologyYaoyu Cao, Postdoctoral research fellow, Swinburne University of TechnologyZongsong Gan, PhD candidate, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.