Last week, I gave a lecture at the Coastal Institute of the University of Rhode Island entitled “Auditing the Seven Plagues of Coastal Ecosystems”. This lecture was presented as the inaugural annual Scott W. Nixon lecture, celebrating the memory and achievements of Scott W. Nixon, a leading coastal ecologist – and a dear friend – who passed away in 2012. Scott’s research was dedicated to addressing the problems of the coastal ocean, helping define these problems and guide managerial and policy actions to address them around the world.
Global population growth and per capita resource use have led to mounting pressure on the biosphere.
These pressures are focused on the coastal zone, where human density is highest. Together they are driving coastal ecosystems to deteriorate globally and creating increasing demands on managers to remove these pressures, mitigate their impacts, and restore coastal ecosystems.
While these problems are real and require urgent attention, they are often conveyed in an unbalanced way, with a tendency to magnify or exaggerate some of them. The result is the risk of falling prey to a “crying wolf” syndrome.
This is unfortunate because it (a) leads to a dispersal of efforts and resources away from the major problems into addressing issues that are not clearly demonstrated, and (b) deters the public from engaging in addressing the problems, as ecologists become identified as bearers of bad news. In my talk I critically assessed the different problems that affect coastal ecosystems at a global scale and evaluated the strength of the evidence supporting each.
In my assessment I tried to sort out those problems that can be considered major threats needing urgent attention by managers and policymakers from others that remain hypothetical or may not occur at all. Through this process I identified where the narrative conveyed from science and the media to society needs to be toned down.
I believe this exercise may be of interest to the readership of this blog, and thus I will elaborate on these “plagues” in future posts.
But, let me introduce here the plagues.
The Exodus text on the plagues already refers to coastal plagues: “This is what the Lord says: By this you will know that I am the Lord: With the staff that is in my hand I will strike the water of the Nile, and it will be changed into blood. The fish in the Nile will die, and the river will stink and thus the Egyptians will not be able to drink its water…” (Exodus 7:17–18). This passage seems to describe a red tide, a bloom of dinoflagellates (photosynthetic protists) that often turns the water red and that can contain toxins affecting humans, livestock and pets. Red tides often lead to mass fish kills that, when decomposing, produce sulfide and a foul smell. Red tides have been linked to eutrophication, one of the drivers of the modern plagues, which refers to the excess production of organic matter resulting from increased nutrient (nitrogen and phosphorus) inputs.
Indeed, the drivers of the problems affecting coastal ecosystems are:
All of these pressures are well described and demonstrated, and some of them have affected the coastal ocean for centuries. The earliest evidence of overfishing dates from centuries ago, whereas ocean acidification is possibly the most recent driver and, for the most part, will only stress marine organisms some decades into the future.
The consequences of these drivers, the “seven plagues” of the coastal ocean, include:
Increased jellyfish blooms
Increased harmful algal blooms
Proliferation of invasive species
Decline of calcifiers
Decline of vegetated coastal habitats
Decline of megafauna
Some of these “plagues” are well documented as global problems, while others act at local or regional scales, and yet others need to be reconsidered, as they refer to events for which evidence of increase or deterioration is not yet sufficiently robust.
I argue that some of these “plagues” need to be reconsidered, as they may derive from increased research and observation effort producing an increase in reports of some adverse conditions and events. In addition, the severity of some of the “plagues” has been overstated by the media, leading to public concern and the rise of these “plagues” in the research agenda.
There is ample and robust evidence that vegetated coastal habitats and megafauna have declined globally and that the occurrence of invasive species has increased, but is the evidence for the other plagues equally compelling? I will dedicate the next posts to addressing this question. In my next post I will argue that whereas ocean acidification is a real problem, it is largely an open ocean syndrome, and evidence of present – and even future – impacts on coastal ecosystems needs to be reconsidered.
A healthy R&D system is one of the underpinnings, if not sufficiently recognized, of the prosperity Australia enjoys, and a likely silver bullet to keep our economy healthy in the unavoidable aftermath of the resources boom.
A healthy R&D system sits on three pillars: sufficient funding, high-quality human resources, and robust and effective processes for resource allocation.
Australian R&D funding and human resources are in healthy standing when benchmarked against international standards. However, the third pillar, effective processes, is so flawed that it has Australia’s R&D system limping, with a significant risk of falling down.
The problem is the excessive proportions that the red tape around the processes governing the Australian R&D system has reached, entangling the system so tightly as to hold it on the verge of suffocation.
This situation should be addressed in everyone’s interests: government, researchers and, eventually, taxpayers, who support all three pillars with their effort. The red-tape malady of the Australian R&D process is composed of three major syndromes:
(1) A gluttony for thick documents
The application processes of the ARC, arguably the jewel in the crown of Australia’s R&D system, require the production of inordinately thick documents, with applications typically ranging from 70 to over 500 pages across programs. I have no doubt that the NHMRC suffers from equally lengthy processes.
This is in contrast with the international trend towards parsimony and the reduction of processes to the absolute essentials. Applications to the European Research Council program are restricted to a maximum of 15 pages for individual-researcher grants of up to $4.5 million, and the US National Science Foundation requires even thinner proposals.
Much of the detail in these voluminous proposals is spurious, of marginal interest in evaluating the proposal, or redundant with information already available in the files of the ARC. Given that thousands of applications are submitted every year, with a success rate between 10% and 30%, the collective time wasted by allocating the talented minds of Australian researchers to producing and reviewing this inordinate amount of paper represents a huge loss of productivity to the R&D system, wasted in red tape.
(2) A system hostage to legal departments
Lawyers have taken a firm grip on the interactions between research partners within the Australian R&D system. Contracts regulating ordinary research interactions between partners often take a year, if not longer, to be agreed and signed. This compares to benchmarks of three to six months elsewhere, even for projects involving complex international partnerships.
This excess time is consumed by pit bull battles between the legal departments of the participant organizations, often agonizing over minute details of wording. Unnecessarily detailed discussions about deliverables and milestones also contribute to delays. If we could anticipate the outcome of the research in such detail, there would necessarily be very little room for discovery in the ideas proposed.
A particularly significant dead weight on contract negotiations is the endless discussion about IP registers. Despite my strong interest in understanding what these discussions on IP are really about, I have not yet managed to grasp the nature of this IP in my field of research, marine ecology and oceanography, nor why these matters are not handled, as they are elsewhere, by the widely accepted rules of professional ethics and best practice.
These lengthy discussions are not only prevalent in the normal mode of operation, but they take place even in national emergencies. For instance, the Montara commission of inquiry report on the handling of the Montara oil spill in 2009, reads: “The Monitoring Plan needed to be in place shortly after the 21 August 2009; that it was not in place until October 2009 is unacceptable”. The delay was due to lengthy negotiations as to which government department was to bear the costs. Yet, it is widely recognized that immediate scientific assessment of the scale of the accident is key to minimizing the possible impacts of oil spills.
The consequence is, again, a loss of productivity, as well as heavy costs resulting from a high ratio of administrative to research personnel in Australian research institutions and from large overheads consumed by the heavy administrative apparatus, which shrink the funds available to actually conduct the research.
(3) A focus on dollars over ideas
Scientific research is expensive in a high-wage economy such as Australia’s, even if researchers earn a fraction of what workers in the resources industry do. This justifies discussions on funding and monetary compensation for time being prominent in the R&D process, but only to a point.
A focus on monetary transactions and compensations has taken control of interactions between research partners and individual researchers, often dominating research workshops and meetings, with the consequence that the discussion of ideas – which ultimately set all the value research projects can possibly deliver – is subordinate to the monetary and associated operational discussions. The culture of monetary transactions has percolated academic institutions where interactions between departments are dominated by complex negotiations on the sharing of benefits and monetary compensations and flows.
I refer to this as the Australian R&D Monopoly Syndrome, where talented minds are distracted from generating ideas with the potential to bring about breakthroughs and major discoveries by being applied to bitter negotiations over small monetary exchanges. The value these transactions add to advancing R&D is no greater than that of Monopoly tokens, while they consume vast spans of valuable time and talent.
As a result, Australian research providers are migrating, in mind and soul, towards a commercial culture. In this culture, economic benefit, rather than advancing the frontiers of knowledge to address the great challenges of our times, drives the institutions, often in conflict with their mission statements. Yet non-monetary outcomes, such as excellence in research and discoveries, probably deliver value orders of magnitude larger to R&D organizations and society than the sum of all the transactions that occupy so much of their attention. Research excellence does so through prestige, recognition, branding and, eventually, disruptive technologies and solutions that ease the problems of everyday life.
I believe that the consequence of the syndromes defined above is that the Australian R&D system delivers well below the potential of its excellent researchers and the effort of Australian citizens, who appreciate the value of science and who propel scientific progress.
The Australian R&D system needs to engage in an independent red tape review benchmarked against the best international standards, to release the power of the talent currently tied up in red tape in Australian research institutions to advance science and continue to support the prosperity of Australia and the world.
A likely reply from those responsible for setting processes is that this cannot be done. This is the response that the Scientific Council of the European Research Council (ERC), of which I am a member, used to hear when making specific recommendations to cut red tape in the EU funding system, notorious for its excess. Yet persistence, common sense and a compelling rationale proved effective at cutting through red tape. The ERC processes now provide a benchmark for streamlined processes that is propagating across European R&D systems and beyond. Likewise, there is no reason why excessive red tape cannot be trimmed from the Australian R&D system.
I am only a 457 visa holder, but it seems to me that these are propositions of value to consider in an election year.
The EU prides itself on having established, in 2011, the European Union’s Emissions Trading System (ETS). The EU ETS is claimed to be the cornerstone of the European Union’s policy to combat climate change and a primary tool for reducing industrial greenhouse gas emissions cost-effectively. Involving 31 countries, it is the first and, thus far, the biggest international greenhouse gas emissions trading system. But I argue it is also a failure.
As Australia adopts a carbon tax and trading system, there is much to learn from examining the effectiveness of the EU ETS and what controls need to be implemented in a system to deliver the desired outcomes.
The price of carbon in the European Union’s Emissions Trading System (ETS) just fell below $7 a ton for the first time since the system was initiated in 2011. This represents a collapse from the price of nearly $28 per ton in 2011; in fact, by mid-2011 the price had already fallen to $10 a ton. Seven dollars a ton is comparable to the price of carbon in voluntary systems, such as those in place in the US, and should therefore be considered a failure of the trading system, as there is no point in regulating the carbon price if it ends up at the same level as in a voluntary system.
The role of any environmental tax is not to collect funds for the state to compensate for the environmental costs of actions that are not embedded in the price system (so-called environmental externalities), but to act as a deterrent promoting the development of alternative, cleaner technologies. The consequence of the collapse of the cost of carbon emissions in the ETS is that companies have no incentive to migrate their processes to low-emission technologies, as it is cheaper to just pay the tax. This is a failure of the EU ETS to meet its goals.
The EU ETS works on a ‘cap and trade’ principle, where the total amount of greenhouse gas emissions allowed is reduced over time so that total emissions fall. The target is that emissions will be 21% lower in 2020 than in 2005. The allowable emissions are distributed among companies, which can buy or sell them and can also buy limited amounts of international credits from emission-saving projects around the world.
Whereas total emissions could be regulated by law, without the need for a trading system, the trading system, by placing a price on carbon, was expected to provide incentives for companies to lower emissions and thereby achieve a greater reduction than imposed by the slowly shrinking cap.
The ambition is for the EU ETS to be expanded to other nations establishing comparable schemes, such as Australia. The European Commission and Australia have reached an agreement in principle to link the EU ETS with the Australian system in mid-2015. Should this happen at the current carbon price in the EU ETS and the targeted starting price in Australia, the consequence will be that Australian companies will massively purchase EU emission permits, quickly drawing the price of carbon down to a level as low as that in the EU.
A low greenhouse gas emission price will never provide incentives to achieve the transition to a CO2-neutral society: it will not drive investment in carbon capture technologies, nor increase efficiency, nor promote renewable energy.
Hence, the current system, which simply regulates total emissions (the cap) and leaves the cost per ton entirely to the market of trading permits, will not achieve its goals. To achieve them, the cap and trade system must be complemented with a regulated minimum price on carbon emissions; that is, it must be a cap and trade bottom-line system.
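The mechanism of such a bottom-line system can be sketched in a few lines (the $20 floor below is a hypothetical number, not a proposed policy value): emitters pay the market price or the regulated floor, whichever is higher, so the incentive to abate never collapses with the market.

```python
# Minimal sketch of the effective carbon price under a 'cap and trade
# bottom-line' scheme: the higher of the market price and a regulated floor.
def permit_price(market_price: float, floor: float) -> float:
    """Effective price per emission permit with a regulated minimum."""
    return max(market_price, floor)

print(permit_price(7.0, 20.0))   # collapsed market: the floor applies -> 20.0
print(permit_price(28.0, 20.0))  # healthy market: the market price applies -> 28.0
```

Raising the floor over time, as argued below, would steadily strengthen the abatement incentive regardless of how the permit market behaves.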
What should this bottom line be? It should use as a reference the calculated social cost of CO2 emissions. Moreover, in doing so, it should be considered that, unlike many other gases, which are relatively short-lived in the atmosphere, CO2 emitted to the atmosphere has a lifetime of centuries to millennia, so that every ton of CO2 emitted exerts impacts over a very long time and can be considered, effectively, irreversible (Solomon et al. 2009).
The social cost of CO2 emissions has been calculated at about US$43 per ton of CO2 (Glaeser and Kahn 2010), implying that a carbon price of $7 a ton covers only about one sixth of the cost the emissions impose on society. This implies that humanity is absorbing roughly 84% of the cost of the impacts caused by the emissions, as the impacts are felt globally although the benefits and advantages derived from the emitting processes are felt only within the societies trading the goods produced.
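The coverage arithmetic above is a simple ratio and can be checked directly (using the $7 market price and the $43 social-cost estimate cited above):

```python
# Check of the coverage arithmetic: a $7 carbon price against a $43 social cost.
carbon_price = 7.0   # approximate EU ETS price, US$ per ton of CO2
social_cost = 43.0   # estimated social cost (Glaeser and Kahn 2010)

covered = carbon_price / social_cost   # share of the social cost that is priced in
absorbed = 1.0 - covered               # share that society absorbs unpriced

print(f"covered: {covered:.0%}")    # about 16%, roughly one sixth
print(f"absorbed: {absorbed:.0%}")  # about 84%
```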
I argue here that the carbon tax and trade system will only achieve the desired outcomes if the cap is progressively reduced and a bottom line is implemented and progressively raised to close the huge gap between the market-regulated price of CO2 and its social cost.
The upward regulation of the bottom line should provide incentives for the migration to low-emission technologies. The proceeds collected from emission permits should be used to pay for the costs of adaptation to climate change, not only in Australia but internationally, particularly in the nations that, with very low per capita emissions, are already suffering the impacts of climate change, such as the neighbouring island states threatened by sea level rise in the Pacific and Indian Oceans.
Glaeser, E.L., and M. E. Kahn. 2010. The greenness of cities: Carbon dioxide emissions and urban development. Journal of Urban Economics 67: 404-418.
Solomon, S., G.K. Plattner, R. Knutti, and P. Friedlingstein. 2009. Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences 106: 1704-1709.
It’s turning out to be a great summer for jellyfish. Spectacular blooms of ‘blue blubbers’ are occurring in Moreton Bay, Queensland; Cable Beach near Broome was littered with tonnes of stranded ‘sea tomatoes’; and ‘jimbles’ are thriving in the waters of Sydney. Indeed, blooms of jellies are a conspicuous feature of coastal waters throughout the world and are notorious for interfering with tourism, fishing and industries that depend on seawater intakes, such as desalination and power generation.
Increased reporting of jellyfish blooms by the media has fuelled a perception that jellyfish blooms are on the rise. Indeed, even the scientific literature regularly reports that jellyfish are increasing globally as a symptom of a degrading ocean. But are blooms of jellyfish really increasing? Claims of a global increase have largely been inferred by extrapolating from several case studies indicating that jellyfish have increased in some regions of the world. Until now, however, a rigorous analysis of all available time series data on jellyfish has been missing.
The Global Jellyfish Group, a consortium of experts on gelatinous organisms, climatology, oceanography, time-series analysis and socioeconomics, met regularly over the past three years to undertake the study at the National Center for Ecological Analysis and Synthesis, a cross-disciplinary ecological and data synthesis research centre affiliated with the University of California, Santa Barbara, with funding from the NSF. The group assembled all available time series data on jellyfish from around the world to provide the first formal test of whether the available data support the hypothesis that jellyfish blooms are increasing. The data set stretched back more than 200 years and contained 1,140 observation-years of jellyfish abundance. The surprising results of the study have just been published in Proceedings of the National Academy of Sciences (Condon et al. 2012).
The key finding was that globally, jellyfish populations undergo synchronous oscillations with successive decadal periods of rise and fall, including a rising phase in the 1990s and early 2000s that contributed to the current perception of a global increase in jellyfish abundance. The previous increasing phase of jellyfish populations, which occurred in the 1970s, went largely unnoticed, probably because fewer people were studying jellyfish, there was less awareness of global-scale problems, and, without the internet, there was less capacity to share information.
There is, however, just a hint that jellyfish populations could be starting to increase, because the most recent minimum in the time series was well above the preceding minima. This slight trend, however, was countered by the observation that there is no difference in the proportion of jellyfish populations that have increased versus decreased over time, and by the uncertainty of interpreting a small baseline shift against the order-of-magnitude larger increases that are part of the cycle. Thus, confirmation of whether we are now seeing the start of an emerging trend will have to wait until we observe where the next minimum in the time series falls.
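The logic of comparing successive minima can be illustrated with a toy series (synthetic numbers, not the published dataset): a roughly 20-year oscillation with a small upward baseline drift produces a sequence of slightly rising minima, which is exactly the kind of ambiguous signal described above.

```python
import math

# Synthetic abundance index: a ~20-year oscillation plus a small upward
# baseline drift (illustrative only; not the actual jellyfish data).
years = list(range(1940, 2011))
index = [10 + 5 * math.sin(2 * math.pi * (y - 1940) / 20) + 0.02 * (y - 1940)
         for y in years]

# Local minima: points lower than both neighbours.
minima = [(y, round(v, 2))
          for y, v, prev, nxt in zip(years[1:-1], index[1:-1], index[:-2], index[2:])
          if v < prev and v < nxt]

print(minima)  # each minimum sits slightly above the previous one
```

With such a weak drift, each new minimum exceeds the last by far less than the cycle's amplitude, so distinguishing a genuine baseline shift from the oscillation itself requires waiting for further minima, as the post argues.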
Long-term cycles are not a new phenomenon in nature. North American cicadas emerge en masse every 17 years, tree rings exhibit multi-decadal growth patterns, and even oceanic oxygen concentrations generated by phytoplankton production rise and fall over 20-year periods. The most pressing question, however, is how anthropogenic pressures, such as fossil fuel burning and increased urbanization along coastlines, compound or synergistically interact with natural oscillations to cause potential shifts in these baselines. Of course, without long-term monitoring or data to analyse this is difficult to answer, but it underscores the importance of oceanic time-series programs, for they enable interpretation of the baseline over appropriate spatiotemporal scales.
The realisation that jellyfish populations synchronously rise and fall around the world should now redirect researchers to search for the long-term natural and climate drivers of jellyfish populations. Moreover, the analysis also revealed regions of the world, such as the open ocean and much of the southern hemisphere, where data are scarce, highlighting where new research efforts should be directed.
Although we found little evidence for a global rise in jellyfish, there are regions of the world where blooms have indeed increased over time, including the Sea of Japan, the North Atlantic shelf regions and parts of the Mediterranean Sea. For these regions, sustained increases in jellyfish populations continue to present problems for coastal industries, and research on how to mitigate the effects of jellyfish blooms must be prioritised, including a search for drivers such as the growth of artificial structures around the coast, which may provide habitat for the polyps that produce jellyfish, or climate change, which can alter the phenology, or timing, of seasonal blooms.
If the global oscillations in jellyfish populations that have occurred for hundreds of years persist, there will continue to be periods in the future when jellyfish abound, and coastal industries should prepare for these, as every new rising phase meets a society that is interacting with the oceans not only more intensively but also in new ways, and that is therefore more vulnerable. Importantly, however, we now have a solid baseline from which to assess future changes in jellyfish populations.
This post is coauthored by Kylie Pitt, Lecturer, Griffith School of Environment, Australia Rivers Institute at Griffith University; and Robert H. Condon, Senior Scientist, Dauphin Island Sea Lab, Alabama, USA.
Recurrent jellyfish blooms are a consequence of global oscillations
Robert H. Condon, Carlos M. Duarte, Kylie A. Pitt, Kelly L. Robinson, Cathy H. Lucas, Kelly R. Sutherland, Hermes W. Mianzan, Molly Bogeberg, Jennifer E. Purcell, Mary Beth Decker, Shin-ichi Uye, Laurence P. Madin, Richard D. Brodeur, Steven H. D. Haddock, Alenka Malej, Gregory D. Parry, Elena Eriksen, Javier Quiñones, Marcelo Acha, Michel Harvey, James M. Arthur, and William M. Graham
PNAS 2012; published ahead of print December 31, 2012. doi:10.1073/pnas.1210920110
The $4.5 billion (US) fine levied on BP for its role in the 2010 Deepwater Horizon oil spill disaster in the northern Gulf of Mexico was the “largest-ever criminal resolution in US history” and has been generally praised by environmental managers, conservationists and the public. The BP settlement has certainly set a historic precedent, and with criminal enquiries ongoing, similar fines are expected for BP under the Clean Water Act.
The environment seems to be the winner from this agreement. Too often the fate of the environment is left to legal argument, as was the case for the Exxon Valdez, where courts ruled that over 80% of the $150 million fine did not have to be paid because of the company’s cleanup efforts. As part of this recent settlement, BP will allocate $2.4 billion to the National Fish & Wildlife Foundation, an independent not-for-profit conservation group, for restoration and conservation efforts in the Gulf of Mexico. A further $350 million will be allocated to the National Academy of Sciences.
By comparison, the $510,000 fine levied on the Thai-based oil company responsible for the Montara wellhead blowout, Australia’s third largest oil spill, was dwarfed by the BP settlement and did not include funding for monitoring and research. Environment Minister Burke was quoted as saying, “When we’re talking about protecting something as precious as our oceans, no amount of money ever provides genuine compensation for environmental catastrophe, ever”. We concur, but this is no reason to settle an accident of the magnitude and consequence of Montara for just half a million dollars.
Perhaps the most refreshing aspect of the Deepwater Horizon settlement has been the overall response by BP. By all accounts, BP pleaded guilty to all the charges and took full responsibility for the incident. This is an encouraging step and restores confidence in industry and its willingness to accept a role as one of the leaders in the management of environmental resources, among other things. It also reinforces what we wrote in a recent piece about finding common ground between academia, industry and government on policy and research priorities in advance of spill events. This is a unique opportunity for all parties and the public to respond and be proactive by bridging gaps and building partnerships toward a common goal.
It has been predicted that the US will be the biggest oil producer by 2020. For political and economic reasons this is an important forecast, but there are equally if not more important consequences for the environment. Increased offshore production means expanded operations into the deep-water environments that sustain oil-rich deposits. For the US, this means less reliance on foreign oil and the opening of deep-sea drilling operations in equally vulnerable areas of Alaska and the Gulf of Mexico. For Australia, it means expanding drilling operations in the proximity of national and indigenous heritage sites such as the Ningaloo Reef (drilling by BHP Billiton at 5 miles), the Great Barrier Reef, a World Heritage Site, and the remote Rowley Shoals (proposed drilling by Woodside) off the Western Australian coast and shelf. With a vast coast spanning 21,000 km, it is hard to explain why high-risk activities, such as drilling for oil and gas near unique heritage sites harbouring vulnerable fauna, are necessary. Science-based spatial planning is needed to ensure that all outcomes, from biological conservation to resource extraction, can be achieved in ways that minimize risks while maximizing benefits.
Since Deepwater Horizon there have been at least a dozen spills of 100 tonnes or more into vulnerable and economically important marine systems, including at Christmas Island, the Rena spill in New Zealand, the Bonga field in Nigeria, the Marshall Islands, Singapore and China. Given that it took over 20 years to complete environmental impact assessments for the Exxon Valdez spill, it will be many decades before we begin to know the long-term effects of these recent spills on ecosystems; monitoring efforts should thus be financially sustained for this period. The increased demand for fossil fuels cannot be a surrogate for environmental integrity, and the quest for alternative energy sources must be given priority.
What is more concerning, though, is the increased rate of spills with unknown volumes of oil released, or in regions that simply do not keep records of oil spills, such as Russia, where an estimated 500,000 tonnes of oil are released into the Arctic annually. Furthermore, access to many spill records is restricted because they are considered “too sensitive” to release to the public. Simply knowing how much oil was released can provide all parties with vital information about the spatial and temporal scope of a spill and how best to administer cleanup efforts. The trust barriers must be broken down and a common-sense approach adopted, with information-sharing channels opened between all parties.
Oil spills are stressful for everyone involved, so open discussion and collaboration are required now, in advance of any oil spill incident. In particular, we identify the following needs:
Developing a cohesive network between science, government and industry, with common goals set for the efficient management of resources
Establishing communication channels for information and data sharing, and trust between all parties
Creating a list of research priorities and an achievable timeline to reach these goals
Establishing regular exercises testing our preparedness to react to spills and other disasters at sea, involving all parties in the response, exercises in which scientists have thus far not participated
Establishing a science-based spatial planning framework for the governance of safe and sustainable operations in the marine environment
Improving infrastructure for research and industry, in particular regional 3-D models and remote sensing technology that forecast and monitor the effects of spills, and the research platforms needed to deploy this observational capacity.
Moreover, there is a need to integrate partnerships and information sharing across borders. For example, international collaborations have been proposed between US and Australian research, government and industry partners involved in monitoring the effects of Deepwater Horizon, Montara and other oil spills on marine ecosystems. This proposed collaboration would clearly improve information sharing and allow the establishment of new research and monitoring initiatives between all parties, helping to develop relevant and adequate environmental policy and management of marine ecosystems on a case-by-case basis. The upcoming joint US-Australia Education, Science and Technology workshop in Canberra next March provides the ideal launch pad for this initiative.
Wherever there are oil drilling or shipping operations, there is always the risk of releasing hydrocarbons (and subsequently dispersants) into marine ecosystems, affecting pelagic and benthic organisms, some of which humans depend on commercially. Indeed, at least 200 tons of oil are illegally released into the water from the cleaning of oil holding tanks in large crude oil tankers.
Risk-management should not be about managing the size of fines following accidents, but taking measures to minimize the likely of such accidents to an absolute minimum and to ensure that sound and effective response plans are in place to contain the impacts of accidents, shall these happen.
We should all learn from our mistakes and heed the warnings of the Deepwater Horizon case to make sure appropriate management and industry strategies are in place through compromise and collaboration.
Co-authored by Robert Condon (Dauphin Island Sea Lab, Alabama, US).
Google News contains more than 51,000 English-language news items about the melting Arctic. Five hundred of them were generated in the 60 minutes before we decided to write this article.
This avalanche of news follows confirmation last week of catastrophic melting in the Arctic Ocean in the summer of 2012. That this summer would be particularly devastating to the stability of the ice was already evident. In July, U.S. scientists found that the portion of Greenland’s surface affected by melting ice rose from the usual 40% to more than 90%, a record extent, in just four days. The trajectory of Arctic sea ice extent already anticipated a new minimum this year, confirmed a few days ago, with an area of sea ice 700,000 square kilometres below the previous minimum registered in the summer of 2007.
Once again we are forced to revise our forecasts for the future. In 2007 the IPCC predicted that in 2100 a third of the summer ice extent of 1980 would still remain. Following the record minimum of 2007, these predictions were revised to anticipate an ice-free Arctic Ocean in the summer of 2030. Today researchers, such as my colleague Peter Wadhams, of the University of Cambridge, speculate that this could happen in just four years.
In an article published in the journal Nature Climate Change in January of this year, which Peter and I co-authored along with colleagues in the UK and Norway, we warned that the trajectory of ice dynamics pointed at an imminent abrupt change (Duarte et al. 2012), anticipating the changes that have been recorded this summer.
For us, Arctic ice dynamics is not just a line on a time series graph. For the past six years we have been conducting intense research activity in the Arctic, with three to four expeditions a year, completing seven expeditions to Greenland and fourteen to the Arctic Ocean since 2006, with the support of Danish and Norwegian colleagues and funding from Spain and the European Framework R&D Programme. This research effort is justified by the speed and significance, for the planet and for all of us, of what is happening there.
Our recurring presence in the Arctic, with Longyearbyen (Svalbard, 78°N) and Nuuk (the capital of Greenland, 64.2°N) as logistics bases, has also allowed us to observe how these changes are affecting Arctic communities. In both locations we have seen steep growth in the numbers of houses and tourists. But this year was different. This year Chinese and Australian companies arrived.
This summer the Chinese icebreaker Xuelong (Snow Dragon) sailed the Northern Sea Route along the Russian coast. Rapid ice loss allowed her to reach the North Pole. China has commissioned a second icebreaker that will be equipped with sophisticated instruments for geological and geophysical prospecting. China also operates an Arctic base, the Arctic Yellow River Station, opened in Ny-Ålesund, Svalbard, in 2004.
Meanwhile the Chinese have also arrived in Nuuk, but this time not just to do research. When travelling through Nuuk this summer we found a flurry of mining exploration companies and environmental impact studies, alongside large resource development plans and infrastructure construction. Air Greenland’s in-flight magazine anticipated what we would find: companies offering logistics for oil and gas prospecting advertised their services from its pages.
In October this year the government of Greenland has to make a decision on the opening of an iron ore mine called “Isua”, adjacent to the ice sheet. This mining project is promoted by a company based in London, but with Chinese capital. In fact, the project envisages the deployment of 3,000 Chinese workers to construct, over three years, the mine infrastructure and the pipelines to transport iron to the coast. This includes the construction of a harbour capable of accommodating the large freighters that would carry the iron back to China.
The repertoire of minerals to be exploited is vast, including iron, gold, aluminium and rare earths (whose market is 90% dominated by China). Whereas Chinese investors may have been the quickest to arrive, they were soon followed by Australian mining companies. Currently, 131 mineral deposits in Greenland are licensed for prospecting and exploration, and four for exploitation. According to the report to Inatsisartut, the Parliament of Greenland, on Activities Concerning Mineral Resources in Greenland (Greenland Government, Greenland Bureau of Minerals and Petroleum, 2012, http://bmp.gl/minerals), about 40% of the licences are in the hands of companies registered in Australia and almost 20% in Canada, with the rest held by companies from Greenland, Scandinavia (Denmark, Norway and Iceland), Switzerland and the Czech Republic.
The scale of these projects is indicated by two facts: the energy consumption of the Isua mine alone, supplied by a diesel-fuelled power plant, would increase Greenland’s total energy consumption by 80%, and this single mine would generate exploitation-licence revenues equivalent to 20% of the Greenland government’s current budget (including the large contribution from Denmark). It will also be necessary to build a new high-capacity airport to welcome the large labour force and to develop additional infrastructure, all of which will generate huge revenues for the public coffers.
The International Architecture Biennale in Venice this year featured a project by Danish and Greenland architects and engineers entitled: “A Possible Greenland”. It offered a vision of a Future Greenland packed with futuristic buildings and infrastructure to accommodate these developments.
The Greenlandic government sees the development of the resources industry as an opportunity to raise the standard of living of its inhabitants. But this vision of prosperity comes with big risks to the social and cultural integrity of the Greenlanders. With a population of only 57,000 people, Greenland has fewer inhabitants than there are employees in some of the big companies that are landing in the area (ALCOA: 61,000; Shell: 90,000; Maersk: 108,000).
If the resources boom has brought prosperity but also a number of social imbalances to Western Australia, it is no wonder that the Greenlanders, the Inuit people native to Greenland, express mixed feelings about all of this. They are worried, rightly, that the influx of workers from distant regions and the flood of money that will surely pour into the streets of Nuuk might aggravate the difficulties already experienced by the Greenlandic people. Suicide rates, which extend even into early adolescence, are triple those of the Danish inhabitants of the territory. Greenlandic government efforts have succeeded in reducing per capita alcohol consumption (population over 14 years) from 22 L per year in 1987 to just over 11 L today.
The opportunity is too attractive for Greenlanders and their government to turn away from, but the risks are significant. Will Greenlandic society be strong enough to resist this gold rush?
This post is coauthored with Dr. Núria Marbà, IMEDEA, Spanish National Research Council
Duarte, C.M., T. M. Lenton, P. Wadhams and P. Wassmann. 2012. Abrupt climate change in the Arctic. Nature Climate Change 2, 60–62.
News that the Dutch-owned “Abel Tasman” super trawler may have to leave Australian waters, after the lower house passed laws to freeze its operations for at least two years, is well received. Allowing the operation of the super trawler under the current uncertainties about its impacts would have been irresponsible.
Just last week, the publication of a paper by Spanish scientists in Nature provided evidence of major impacts and long lasting effects of trawling on the sea floor. This paper looked at bottom-trawling boats – a different method to that used by the mid-water trawler “Abel Tasman”. But it showed how little we know about the damage fishing can do.*
Trawl fisheries have a long history, but trawling is a brutal practice. This perception is not a recent sentiment of environmental softness: France banned trawling in the 15th century as a practice that destroyed productive marine ecosystems, punishing offenders with decapitation. I recommend a precautionary approach to trawling, but do not recommend such stringent penalties!
This early awareness and extreme zeal, however, did not help conserve the French seafloor ecosystems in the long run.
France, through IFREMER, is one of the leading nations in the exploration of the ocean depths. Its fleet of advanced ROVs and ultradeep submersibles able to reach 6,000 m, such as the Nautile, has been used intensively in recent years to explore deep coral ecosystems.
Deep corals have been known to grow in Norwegian fjords for over a century. However, only recently have these fascinating coral ecosystems been observed and explored in any detail. Deep-coral ecosystems grow in a cold, dark environment and generate complex habitats that provide refuge for fish and invertebrates.
As advanced technologies for deep-sea exploration developed, deep coral ecosystems were found along the continental shelves and slopes of all continents, including Antarctica, forming a belt of coral that must once have extended from 300 to 1,000 m depth along most of the 170,000 km of shelf edge in the ocean.
But the initial excitement of these discoveries soon turned into concern, as many of the corals discovered had already been devastated by trawlers. This was the fate of most of the deep corals that once paved the French Atlantic shelf edge, as French scientists discovered, with dismay, just last year.
Seagrass meadows in the Mediterranean have also been damaged by trawling boats, which left deep scars in the meadows, as the attached acoustic image illustrates.
How many seagrass meadows and coral shelves may have been lost to trawling before ever being observed or documented? Many of these ecosystems will never recover, or will take centuries to do so.
The ecological damage of trawling is devastating, and reaches far deeper than hitherto realised. The article published last week in Nature by Puig and coauthors (“Ploughing the deep sea floor”) showed that trawling in the NW Mediterranean disrupted the sea floor and, when conducted along slopes, destabilised it, leading to intense erosive impacts.
As stated by one of the coauthors in an interview, trawling of the sea floor can be compared to ploughing agricultural land. The key difference, however, is that agricultural fields are ploughed only once a year, whereas in many fishing grounds around the world the sea floor is “ploughed” every day. There is no chance of ecosystem recovery.
I do not advocate an immediate end to trawling, but I would like to see this damaging practice phased out within my lifespan. We cannot just pretend we do not see, for we now have the technology to see the damage trawling does. While trawling in the Aegean Sea during a study of Mediterranean red shrimp, my friends in the crew of the Spanish R/V García del Cid were devastated when the net emerged carrying a toilet. An ironic portrait of how we treat our seas.
Fisheries must be sustainable or they will not be at all, for our future will strongly depend on our capacity to maintain healthy marine ecosystems. Industry, regulators and scientists must work side by side to this end.
*This paragraph was updated by an editor after publication to clarify the author’s meaning.
While Europe leaves behind this boreal summer’s heat wave, the US continues to struggle with crop failure due to unusually warm temperatures and extended drought, and Caribbean and Asian nations prepare for an intense cyclone season.
I received a phone call from a journalist this week interested in my thoughts on the new minimum in Arctic sea ice, since I had published a paper earlier this year arguing that the dynamics of Arctic sea ice signal the proximity of a tipping point.
In our conversation he asked whether I was prepared to speculate on the possible causes of this new minimum. I found the question a little perplexing, and it conveyed an intense sense of déjà vu, as this has been a recurrent experience every August for at least the past six years.
In the 1993 movie Groundhog Day, Bill Murray plays a TV weather reporter who finds himself locked into the same day, waking up to the same sequence of events repeating themselves over and over on a loop.
I too have the sense that I am locked in a time loop, where journalists, perplexed, report that the Arctic is actually melting, and convey this sense of surprise to the public, despite a wealth of research that:
identifies the Arctic as the region most rapidly warming on Earth (e.g. ACIA 2004)
predicts the steepest warming rates for the Arctic as a consequence of anthropogenic greenhouse emissions (e.g. Meehl et al. 2007)
predicts an acceleration of ice loss in the Arctic (e.g. Holland et al. 2006, Velicogna and Wahr 2006).
If this were a one-off event, I could understand the surprise and the questions around the causes of Arctic ice loss and its prospects for the coming years. However, there has been a series of recurrent minima from 1996 to date. Given the close match between predicted and realised trends, perplexity at a new record melting event signals either that the robust scientific understanding of the response of Arctic ice to climate change is met with skepticism, or that this knowledge remains confined within the scientific community and does not percolate to the media or the public.
Either option is unsatisfactory, because developments in the Arctic are not locked in the realm of Arctic ice experts: every citizen can follow the status of Arctic ice daily, and even check it on a smartphone through an app. The changes in the Arctic are broadcast as if this were a remake of the movie Death Watch, but this time offering a global audience the opportunity to watch the demise of Arctic ice in real time.
Possibly, we scientists are to blame, as we cherish uncertainty so much that we deliver messages to the media crowded with caveats and cautionary alerts to possible uncertainties. The consequence is that the messages passed on are confusing, with uncertainties overplayed relative to robust understanding supported both by evidence and by the validation of predictions against observed trends.
For instance, earlier this month NASA scientists reported a surprisingly rapid spread of surface melting, from 40% to 97% of the ice-covered surface of Greenland, in only four days. However, the press release included the statement by Lora Koenig, a Goddard glaciologist and a member of the research team analysing the satellite data, that:
Ice cores from Summit show that melting events of this type occur about once every 150 years on average. With the last one happening in 1889, this event is right on time. But if we continue to observe melting events like this in upcoming years, it will be worrisome.
This suggests that:
this event was not yet worrisome, and
that it was due to occur as expected from an apparent 150 year recurrence.
No connection was made in the note, or in the associated statements, to anthropogenic climate change, recent trends, or the predictions of climate models on the destabilisation of the Greenland ice sheet. So the notion conveyed was that this was a one-off event to be expected once every 150 years, unconnected to any previous or regional climate trends.
With so much evidence there, how can we continue to ponder on the possible ultimate causes of Arctic ice loss?
The noise in the climate change debate has reached such a level that my colleagues in the US, particularly scientists within Federal agencies, tell me that they avoid taking a position on climate change in public conversations and news releases. The reluctance of the US public to accept the wealth of scientific information pointing to an ongoing and future warming of the climate due to anthropogenic greenhouse gas emissions now seems to be easing with the severe heat, drought and crop failure in the US this year, heat waves that NASA scientists now connect to anthropogenic climate change (Hansen et al. 2012).
Nevertheless, our predictions may turn out to be wrong and our models may break in the future. Indeed, the statement “all swans are white” was almost a truism until black swans were revealed to the Western world when Dutch navigators landed at the Swan River, WA.
We are certain to find “black swans”, now a synonym for the unexpected (Taleb, 2010), in Arctic ice trends; but, if anything, these “black swans” are likely to consist of an even faster acceleration of ice loss than expected, with the associated impacts on global climate regulation.
The changes in the Arctic now meet the requirements to be considered “dangerous climate change” under the UN Climate Convention (Duarte et al. 2012a). The risk of ignoring these signals, and of failing to take serious action to mitigate climate change, rests on the possibility that “dangerous climate change” will propagate, through existing tipping mechanisms in the Arctic, to the entire planet (Duarte et al. 2012a,b).
With all due consideration to uncertainty, policy makers need to accept the reality that ice loss in the Arctic is accelerating further, propelled, beyond a reasonable doubt, by anthropogenic greenhouse gas emissions, and take due action.
Taking no action to mitigate climate change will eventually get us out of Groundhog Day to experience a new, unprecedented series of events.
But only to find a new reality of dangerous climate change spread, unchecked, throughout the planet.
ACIA. 2004. Impacts of a Warming Arctic: Arctic Climate Impact Assessment, ed. S.J. Hassol. Cambridge: Cambridge University Press.
Duarte, C.M., T.M. Lenton, P. Wadhams and P. Wassmann. 2012. Abrupt climate change in the Arctic. Nature Climate Change 2: 60–62.
Duarte, C.M., S. Agustí, P. Wassmann, J.M. Arrieta, M. Alcaraz, A. Coello, N. Marbà, I.E. Hendriks, J. Holding, I. García-Zarandona, E. Kritzberg and D. Vaqué. 2012. Tipping elements in the Arctic marine ecosystem. AMBIO 41: 44–55.
Hansen, J., M. Sato and R. Ruedy. 2012. Perception of climate change. Proceedings of the National Academy of Sciences 109: E2415–E2423.
Holland, M.M., C.M. Bitz, and B. Tremblay. 2006. Future abrupt reductions in the summer Arctic sea ice. Geophysical Research Letters 33: L23503. doi:10.1029/2006GL028024.
Meehl, G.A., T.F. Stocker, W.D. Collins, P. Friedlingstein, A.T. Gaye, J.M. Gregory, A. Kitoh, R. Knutti, J.M. Murphy, A. Noda, S.C.B. Raper, I.G. Watterson, A.J. Weaver and Z.-C. Zhao. 2007. Global climate projections. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Taleb, N.N. 2010. The Black Swan: The Impact of the Highly Improbable. 2nd ed. Random House, New York.
Velicogna, I., and J. Wahr. 2006. Acceleration of Greenland ice mass loss in spring 2004. Nature 443: 329–331.
The US corn crop has failed due to the severe drought and extreme temperatures experienced in the US this year.
But the US is not alone: Ukraine, also a major grain exporter, has banned wheat exports due to a poor harvest this year.
As a consequence only 50 days’ worth of stocks are left in world’s granaries. This will lead to big food shortages and higher food prices in poorer countries in Asia and Africa. Indeed, food prices have already risen by 7% this year and cereal prices rose 17% between June and July. A rise in food prices is often linked to social and political turmoil in poor countries, where people already use most of their income to buy food.
A recent paper published in PNAS by NASA scientist James Hansen and coworkers warns that heat waves, such as those affecting the US, Greenland and Southern Europe this year, are becoming more frequent and are linked to anthropogenic climate change. Their analysis shows that extremely hot summers have affected an estimated 10% of the global land area in recent years, compared with less than 1% of the Earth’s surface between 1951 and 1980.
With climate change progressing unchecked due to the failure of global leaders to reduce greenhouse gas emissions, the global population already exceeding 7 billion and heading towards 9 billion by 2050, and a forecast 15 to 20% increase in per capita calorie intake and meat consumption, the future is looking ugly.
Nations that depend on international markets for food supply must develop a food security policy to reach food sufficiency in the likely event that the international market will run dry of food supply or will only supply food at exceedingly high prices.
Whereas Australia is self-sufficient in cereal supply, this is, again, at risk from climate change, including droughts, heat waves and floods. Moreover, Australia already depends on imports for 70% of its domestic seafood consumption. As with cereals, there is no guarantee that surplus seafood will be available for import in the international market in the future. A 70% shortfall in seafood supply seems hardly acceptable for a nation that prides itself on being a maritime nation.
The Commonwealth has announced the development of the world’s largest network of marine protected areas. At 3.1 million km2, this network will encompass more ocean surface than is protected in the rest of the world combined. This is a bold, game-changing decision in marine protection and conservation, and one that I value positively, although it poses significant challenges.
In addition to preserving key marine habitats and biodiversity, the network seeks to ensure the sustainability of fish stocks in Australian waters, thereby avoiding the depletion of fish stocks that have impacted many nations, such as Canada.
But, how will Australia be able to satisfy present seafood demands and the increase forecasted if we place a cap on domestic landings?
Whereas many, but certainly not all, Australian fisheries appear to be well managed and sustainable, pushing them closer to overfishing to help satisfy domestic demand would not alleviate the current deficit significantly. The capacity of wild fisheries to satisfy Australian demand for seafood in the future is even more meagre, given the expected increase in demand. Hence, pushing fish quotas further, as some argue, would place our fishery resources at risk while not significantly enhancing seafood supply security for Australians.
The only option for Australia to come closer to satisfying present and projected seafood requirements through domestic production is the rapid development of aquaculture. With the third largest exclusive economic zone in the world, Australia has a huge, untapped capacity to develop aquaculture. Yet it ranks poorly both as a producer of aquaculture and as a supplier of new technologies and innovation in this industry.
Australia is a giant in the extent of its exclusive economic zone, but a dwarf in its capacity to develop food from its huge expanse of productive ocean.
The development of aquaculture in Australia is an imperative and, as a key component of food security, must be introduced as a pivotal element of our national defence policy.
The development of aquaculture in Australia will fail if it simply follows the model of the leading Asian nations. In addition to being a component of food security, marine aquaculture must also be a viable business; yet aquaculture has traditionally been considered a technology of the poor, arguably unsuitable for a high-wage economy such as Australia’s. In addition, marine aquaculture has generated multiple environmental impacts through its dependence on wild fisheries and its emission of nutrients and organic detritus to the marine environment.
The pending Australian aquaculture strategy must be economically viable and environmentally sustainable. Is such a strategy possible?
The development of Australian aquaculture must be closely linked to that of biotechnological applications. I submit that an economically viable model for aquaculture in Australia must develop the combined capacity to deliver sufficient mass production to satisfy national demand, and possibly exports, while targeting high-value molecules, such as carotenoids, omega-3 fatty acids and others, for biotechnological applications. Marine biotechnology is a growing business, expanding at 12% annually and with a global value in excess of 20 billion annually. Because Australia harbours a considerable fraction of global marine biodiversity, the scope for new domestications for aquaculture and biotechnology applications is immense. Developing an “aquaculture 2.0” industry, underpinning the growth of marine biotechnology and its expanding applications, will also flip Australia’s global position from the demand side to the supply side of new technologies and innovations for aquaculture.
In developing an “aquaculture 2.0” industry, Australia must also lead the world in transforming aquaculture into an environmentally sustainable industry. In a previous blog I posed the rhetorical question of whether sustainable aquaculture is an oxymoron. It is not. To become sustainable, aquaculture must close its production cycle to abandon its present dependence on wild catches, focus on the production of macroalgae and animals low in the trophic chain, and combine them in polycultures that maximize production while minimizing detritus. Moreover, aquaculture can become a tool in conservation biology, helping catalyse the recovery of threatened species and fishery stocks and, through the mass culture of macroalgae, helping restore degraded marine ecosystems, such as many of Australia’s estuaries.
Regulators should facilitate this transformation and leave behind the prevalent attitude of considering aquaculture a threat to the marine environment. In fact, some regulators in Australia impose more stringent restrictions on licensing aquaculture farms than they impose on, for instance, the far more dangerous gas and oil industry. The impacts of aquaculture are best put in perspective when compared with the environmental costs of food production on land, which is responsible for much of global water consumption, the deterioration of our waters due to excess fertilizer application, the introduction of dangerous chemicals such as herbicides and pesticides into the environment, the loss of habitats for biodiversity, a substantial proportion of anthropogenic greenhouse gas emissions, and risks to human health.
In summary, Australia is at the crossroads.
It must now develop an all-Australian “aquaculture 2.0” model or be relegated to be on the demand side for both seafood products and know-how from Asian nations.
The development of aquaculture 2.0, combining mass production with high-value molecules for biotechnology applications and developing best practices to render it a positive force in the environment, will play a pivotal role in ensuring our seafood security in the future. As such, it should be a pillar of our defence strategy.
Australians, who suffer one of the world’s highest prevalences of health problems caused by exposure to UV radiation, are very aware of the dangers of elevated UV radiation. The prevalence of skin cancer in Australians is nearly four times that of citizens of Canada, the US and the UK.
However, the effect of elevated UVB radiation on marine biota, and its possible role as a driver of widespread declines of marine life, has not been assessed in a systematic manner, partly because of the misconception that the Montreal Protocol succeeded in returning UVB levels to their pre-disturbance values. Following the compelling demonstration of the role of CFC gases in the erosion of the stratospheric ozone layer, and hence in elevated UVB radiation, the Montreal Protocol mandated a ban on the production and emission of CFCs. The Montreal Protocol has been invoked recurrently as a success story in environmental regulation, and indeed CFC production and emissions declined sharply. However, UVB levels have not yet recovered and are not expected to do so before 2050. Hence, incident UVB levels have remained elevated for over four decades, a fact evidenced by statistics on the prevalence of UVB-induced health problems in humans.
Evidence that marine life is vulnerable to UVB radiation abounds; indeed, the Hunter action spectrum, used to evaluate damaging UVB doses, is based on the levels causing mortality in anchovy eggs (Hunter et al. 1979). However, the belief that UVB radiation would not penetrate to significant depths in the ocean led to the assumption that an elevated UVB index was not a reason for concern for marine biota. Yet the development of submarine UVB profiling instruments showed that UVB doses sufficient to cause mortality of marine plankton penetrate below 30 m in clear ocean waters (Llabrés et al. 2010).
In the light of abundant evidence on the vulnerability of marine biota to UVB radiation, a team of Australian, Spanish and Chilean researchers led by UWA Professor Susana Agustí conducted a meta-analysis of available experiments assessing the response of marine biota to elevated UVB radiation, or to the removal of UVB radiation. The results of this research have just been reported in a paper (Llabrés et al. 2012) published online in Global Ecology and Biogeography.
The analysis revealed a general deterioration of organismal performance with elevated doses of UVB radiation and a general improvement when incident UVB was removed. The marine life most affected by UVB are protists (such as algae), corals, crustaceans and fish larvae and eggs, thereby affecting marine ecosystems from the bottom to the top of the food web. Mortality was the trait most sensitive to increased UVB radiation.
The analysis also provided evidence that marine organisms in the Southern Hemisphere are more resistant to elevated UVB radiation than those in the Northern Hemisphere, and that resistance of organisms in the Southern Hemisphere has increased slowly over time. We interpreted these observations as evidence that high mortality of sensitive marine organisms in the Southern Hemisphere, where UVB levels have increased the most, has already selected for the more resistant organisms.
The experiments included in this research involve organisms and species that have survived the erosion of the ozone layer caused by CFCs. Even so, the results suggest that an increase in UVB radiation could have a heavy impact on marine biota. Clear evidence of this impact is the finding that mortality rates of the larvae of commercial fish, such as cod and anchovies, and of other organisms were reduced by up to 81 per cent when exposure to UVB was removed.
The effects of ultraviolet radiation detailed in this study mainly affect organisms growing near the ocean surface, such as eggs and larvae of invertebrates and fish, which are exposed to very high UVB levels.
Our results strongly suggest that increased UVB radiation over the past four decades may be a hidden driver of the widespread decline of marine life, from corals to fish, often attributed to other pressures, such as climate warming, overfishing and other impacts.
Most global impacts on marine biota have been documented since the 1970s, including those attributed to overfishing, ocean warming, ocean acidification and hypoxia, concurrent with elevated UVB levels at the global scale. The evidence is, to a large extent, correlational. However, because the global increase in UVB radiation, greatest at high latitudes and in the Southern Hemisphere, has occurred in parallel with all these other impacts, their effects may be linked.
It is very likely that the general decline of marine life over the past three decades reflects the compound effects of multiple stresses, including exposure to elevated UVB radiation, and not warming alone. The additional mortality, particularly of early life stages, due to elevated UVB radiation may have sufficed to push already stressed populations into negative net population growth, further accelerating their decline.
Because current models do not anticipate stratospheric ozone, and hence surface UVB levels, to recover before the middle of this century, the impacts of elevated UVB radiation will continue to operate. Understanding the role of elevated UVB radiation as an additional driver of the decline of marine life is of fundamental importance to predict, and catalyse through effective managerial actions, its recovery.
Carlos M. Duarte and Susana Agustí
The UWA Oceans Institute and School of Plant Biology, The University of Western Australia
Hunter, J.H., J.H. Taylor, and H.G. Moser. 1979. The effect of ultraviolet irradiation on eggs and larvae of the northern anchovy, Engraulis mordax, and the Pacific mackerel, Scomber japonicus, during the embryonic stage. Photochemistry and Photobiology 29: 325-338.
Llabrés, M., Agustí, S., Alonso-Laita, P. & Herndl, G. 2010. Synechococcus and Prochlorococcus cell death induced by UV radiation and the penetration of lethal UVR in the Mediterranean Sea. Marine Ecology–Progress Series, 399, 27–37.
Llabrés, M., S. Agustí, M. Fernández, A. Canepa, F. Maurin, F. Vidal, and C.M. Duarte. 2012. Impact of elevated UVB radiation on marine biota: a meta-analysis. Global Ecology and Biogeography (published online).
The fifth deadly attack in 10 months by a white shark (Carcharodon carcharias) has set off alarm bells among Western Australians. Fatalities in Western Australia have been said to exceed, by a factor of five, the average number of deaths caused by shark attacks in the entire nation. Something unusual and alarming seems to be happening, and the public demands that action be taken.
But is it really unusual? Do we know why? Do we know what to do? To whom should the public turn for answers? At this point I must declare that, although a marine ecologist, I have no particular expertise on sharks and have never published on the subject. However, we can apply scientific logic to narrow down the questions above before we call on expert opinion.
Statistics is all about testing the likelihood that a particular observation or result departs from that expected by chance alone. How unlikely an event must be before we stop attributing it to chance depends on a convention: an arbitrary threshold beyond which invoking chance is no longer convincing. For most applications scientists choose a probability of 0.05 as the threshold (technically, the alpha value) separating a result plausibly obtained by chance from a significant one. This is equivalent to considering that when the probability of an observation occurring by chance is less than one in twenty, the observation is unlikely to be the result of chance. For instance, a flipped coin must return heads five times in a row (probability by chance = 0.031) before we could consider the coin faulty or not flipped with an equal chance of either outcome; and we need to roll a die more than twice with the same outcome before suspecting the die may be loaded. The larger the number of observations around a particular event, the more likely we are to detect a departure from a result attributable to chance alone. Hence, statistics are highly unreliable when dealing with small numbers.
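The coin and die figures above follow directly from multiplying per-trial probabilities until the result drops below the conventional alpha of 0.05; a minimal sketch, using nothing beyond the standard library:

```python
# How many identical outcomes in a row before chance stops being convincing,
# given the conventional significance threshold alpha = 0.05?

ALPHA = 0.05

def p_run(p_single: float, n: int) -> float:
    """Probability of n independent repeats of an outcome with
    per-trial probability p_single (e.g. heads, or a given die face)."""
    return p_single ** n

# Coin: find the shortest run of heads with probability below alpha.
n = 1
while p_run(0.5, n) >= ALPHA:
    n += 1
print(f"Coin: {n} heads in a row, p = {p_run(0.5, n):.3f}")
# → Coin: 5 heads in a row, p = 0.031

# Die: the first roll sets the face, so n further matches have
# probability (1/6)**n; n + 1 is the total run length.
n = 1
while p_run(1/6, n) >= ALPHA:
    n += 1
print(f"Die: {n + 1} identical rolls in a row, p = {p_run(1/6, n):.3f}")
# → Die: 3 identical rolls in a row, p = 0.028
```

Both runs reproduce the numbers in the text: five heads in a row (p ≈ 0.031) and three identical die rolls (p ≈ 0.028) are the shortest runs falling under the 0.05 convention.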
Shark attacks are counted in single digits, so our capacity to infer that the larger number of shark attacks reported in the past 10 months in Western Australia, relative to the average, cannot be explained by chance is limited. Indeed, the added mortality due to shark attacks is such a minimal component of annual mortality in Australia (143,500 deaths in 2010) that it is not visible in these statistics. The International Shark Attack File, maintained at the Florida Museum of Natural History, reports fatal shark attacks in Australia to range from 0 to 3 per calendar year, with the total number of attacks, fatal and non-fatal, ranging from 7 to 21 per calendar year. Hence, the 3 fatal attacks in 2011 and the 2 attacks thus far in 2012 are toward the high side but do not represent a significant departure from those observed in Australia in previous years. Yet so much is at stake that we should make sure to rule out the possibility that we erroneously attribute these events to chance when there may be a cause – an error known in statistics as a Type II error.
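One way to make the chance-alone reasoning above concrete is to treat yearly fatal attacks as a Poisson count and ask how often a year with 3 or more fatalities would occur by chance. A minimal sketch, noting that the baseline rate used here is an assumption for illustration (the post gives only the historical range of 0–3 fatal attacks per year, not a mean):

```python
import math

def poisson_tail(k: int, lam: float) -> float:
    """P(X >= k) for a Poisson-distributed count with mean lam."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

baseline = 1.0  # ASSUMED long-run mean of fatal attacks per year (hypothetical)
p = poisson_tail(3, baseline)
print(f"P(3 or more fatal attacks in a year) = {p:.3f}")
# → P(3 or more fatal attacks in a year) = 0.080
```

Under this assumed baseline the tail probability (≈ 0.08) sits above the 0.05 threshold, consistent with the text's conclusion that the recent counts, while toward the high side, are not a significant departure from chance.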
Questions (a) and (b) require a baseline against which to assess unusual increases in shark abundance or in the number of swimmers. Despite efforts to produce fisheries statistics for Western Australia, our baselines on shark abundance are probably too thin and too weak to detect a significant increase in shark abundance even if one has occurred, so we are in a rather weak position to resolve question (a) with confidence. Question (b), however, can be resolved. Western Australia is the Australian state with the steepest population growth, and the beach remains an environment of choice for recreational activities, so it is likely that the number of people on the beach and in the water on any one day has increased over the past years. Whereas I do not have access to those data, I am sure they exist, so we should quickly assemble the data required to test the merit of question (b). Resolving whether an increased encounter probability between sharks and humans is a possible cause of the (apparent) increase in shark-derived casualties also requires high-quality data on the abundance of the regular prey of white sharks, for it is not just the probability of encounter between humans and sharks that matters but the proportion of encounters with potential prey that end up involving humans. So, even if the abundance of sharks and that of humans in their environment remain the same, a decline in the abundance or availability of shark prey would result in a higher encounter probability between sharks and humans. Hence, our search for causes needs to include possible trends in the population dynamics of seals and sea lions, the preferred prey of white sharks.
Detection systems include acoustic curtains that detect acoustically-tagged sharks transiting the area, such as that deployed between Rottnest Island and Cottesloe Beach in Perth. Once a shark is detected, a message is issued alerting of its presence, triggering beach closure among other responses. Surveys with small airplanes, a traditional system in WA, are, however, costly and not very effective, as the airplanes survey the beach area only in the early morning. New technologies, such as ocean gliders fitted with acoustic and video capabilities interfaced with target-recognition software, would represent an effective option. Australia leads the world in the use of gliders for oceanographic purposes, but new innovations are required to produce the “biological sentinels” required for this application. These are in development at the UWA Oceans Institute. Drones, developed for war and intelligence purposes, could also be adapted to survey for sharks at a much lower cost than airplanes. Both sets of autonomous vehicles would allow sustained patrolling of the beaches where humans are most likely to interact with sharks.
Systems to affect shark behavior upon an encounter, such as shark repellents, are also an effective option. These too are in development, by the Marine Neurobiology group at the UWA Oceans Institute.
Lastly, human behavior can be managed: simple advice and best practices can help reduce the risk of encounters and attacks. For instance, Warren “Starry” Starr, Boat and Diving Officer at the UWA Oceans Institute, has provided our members with a number of rules, including, among other recommendations, adherence to any beach closure declared by local authorities, no in-water activity within a 10 nautical mile radius of a great white incident or sighting for a minimum of 72 hours, and the correct use by all in-water parties of a shark shield or equivalent deterrent device. Additional advice is provided by the International Shark Attack File at the Florida Museum of Natural History.
There is, however, an additional element that needs to be managed: the social alarm triggered by the sequence of attacks. This requires the involvement of social psychologists and psychologists specialized in the management of fear. Indeed, as indicated above, the deaths caused by these attacks are, even at the current five-fold increased rate, insubstantial compared to overall death statistics in Western Australia. But no citizen is reassured by statistical detail once his or her own life is at risk. The reason our alarm at shark attacks is far greater than that associated with car accidents, which cause far greater mortality, has to do with fear. This fear is, in turn, associated with a sentiment that we lack control over shark attacks. Citizens feel, correctly, that we can reduce our chances of suffering a car accident by implementing a number of safety measures and prudent practices, which confer confidence and dissipate fear.
Hence, effective communication of the actions adopted to manage these risks, and ensuring that citizens are aware of best practices to avoid shark attacks, should be a top priority. The mass media, which in some cases play a role in escalating fear and alarm by choosing extreme headlines, can choose to play a more positive role by communicating the measures above. This communication strategy needs to be complemented with the capacity to respond to citizens' concerns. Moreover, reports and insights from the public can also guide scientists in assessing the likely causes of the perceived increase in shark attacks. A “town hall” meeting format, which the UWA Oceans Institute is currently organizing, where scientists and the public meet and engage in a dialogue about the issues above, will be an effective step towards such communication.
In summary, whereas the perceived rise in shark attacks may not be statistically significant, the case remains one of high concern among the public, as the public is interested not in statistics but in their individual well-being. Assessing the possible drivers of the increased shark attacks requires effective and sustained monitoring of sharks and their natural prey, to detect departures from baselines possibly contributing to the attacks. It also requires information on the increase in the presence, attitudes and exposure of humans in shark habitat. Effective management should combine shark detection methods, triggering appropriate responses from citizens, with the use of deterrents and protective curtains. Effective communication remains central to managing the fears the movie “Jaws” seeded in generations of citizens.
After a long silence while trying to keep afloat as my home country, Spain, sinks in a storm of greed, I am back at The Conversation!
Oceans Day, on June 8, was a springboard to ponder the status of the ocean in preparation for the Rio+20 summit, which will address the sustainability of our use of planet Earth 20 years after the Río summit.
The Río Summit, formally the United Nations Conference on Environment and Development (UNCED, Río de Janeiro, Brazil, 3-14 June 1992), led to two important conventions – the Climate Change Convention, which led to the Kyoto Protocol, and the Convention on Biological Diversity – as well as an agreement to “not carry out any activities on the lands of indigenous peoples that would cause environmental degradation or that would be culturally inappropriate”.
The Río+20 Conference (Río de Janeiro, 20-22 June 2012) will revisit the Río Summit 20 years later, focusing on a green economy in the context of sustainable development and poverty eradication, and on the institutional framework for sustainable development.
The sustainability of the oceans continues to be an unresolved problem, as we continue to see disturbing trends in the ocean: coastal sprawl, loss of water and sediment quality in the coastal ocean, ocean acidification, eutrophication, siltation, hypoxia, with the associated loss of valuable habitats and fish stocks. Things are definitely NOT going well, and little improvement – and in fact much deterioration – has occurred since the Rio summit in 1992.
We can only conclude that, despite much sweet talk, development has not been sustainable, as “development” takes priority, again and again, over “sustainability”.
In fact, I would claim that sustainable development emerged, as a concept, as an outcome of The Club of Rome's exercise leading to the book by Donella Meadows and co-workers entitled “The Limits to Growth”, whose predictions, formulated 40 years ago, we still follow closely.
“The Limits to Growth” developed the first global forecast model, which predicted ongoing growth of population and the economy until a turning point around 2030, leading to a collapse of industry, the economy and society. Avoiding this turning point required strict measures of environmental protection to achieve “sustainable development”, where both world population and wealth could be maintained. These steps towards environmental stewardship have not yet been adopted. Will they ever be?
The sustainability of the oceans is further complicated by the diffuse nature of their ownership and governance, a heritage of Roman law. In 535 AD, under the direction of Tribonian, the Corpus Iuris Civilis [Body of Civil Law] was issued in three parts, in Latin, at the order of the Emperor Justinian. Book II, Part III, on “The Division of Things”, stated that:
By the law of nature these things are common to mankind—-the air, running water, the sea, and consequently the shores of the sea. No one, therefore, is forbidden to approach the seashore, provided that he respects habitationes, monuments, and buildings which are not, like the sea, subject only to the law of nations.
The seashore extends as far as the greatest winter flood runs up. …
The public use of the seashore, too, is part of the law of nations, as is that of the sea itself; and, therefore, any person is at liberty to place on it a cottage, to which he may retreat, or to dry his nets there, and haul them from the sea; for the shores may be said to be the property of no man, but are subject to the same law as the sea itself, and the sand or ground beneath it. …
Garrett Hardin revisited this concept in his seminal paper (Hardin 1968) that led to the term “tragedy of the commons”, referring to those goods, such as the oceans (and the air, hence climate change), which belong to everyone and for which no one takes stewardship responsibility.
Slowly, nations took ownership of the shore adjacent to their land, first through the three-mile rule that still separates the statutory waters and resources under the watch of Australia's states from those, beyond three miles offshore, under the watch of the Australian Commonwealth. The basis for the three-mile rule, still very much part of Australia's marine governance framework, is none other than the furthest distance a cannon ball could be shot: the “cannonball rule” developed in the early 18th Century by the Dutch jurist Cornelius van Bynkershoek. Is this a sensible basis on which to govern our marine waters and resources?
The wish to protect fisheries resources from overexploitation by foreign fleets led US President Harry S. Truman to issue, in 1945, two proclamations that established government control of natural resources on the US continental shelf. In 1972, a small nation, Iceland, unilaterally declared an Exclusive Economic Zone (EEZ) extending beyond its territorial waters, leading to a series of incidents with British trawlers that prompted the Royal Navy to deploy warships, resulting in direct confrontations with Icelandic patrol vessels: the so-called “cod wars”. These ended in 1976 with Britain conceding to Iceland's claim to a 200 nautical mile (370 km) EEZ. All other nations followed soon after.
EEZs are regulated by the UN Law of the Sea, initiated in 1956 and still under development. Over half a century later, this law has not yet regulated the governance of living resources in the high seas, nor has the Convention on Biological Diversity. Hence, the biological resources of the high seas remain subject to the tragedy of the commons, while complex regulatory arrangements and redundancies render marine governance within exclusive economic zones cumbersome, including Australia's.
However, the situation is changing as nations take bolder steps towards effective marine conservation, bringing into the oceans the goal, originated on land through the Convention on Biological Diversity, of protecting 10% of the marine territory.
Environment Minister Tony Burke's announcement of a national network of Commonwealth marine parks, including a huge protected area in the Coral Sea, is a major step forward for effective conservation of Australia's rich marine resources, and a huge opportunity to sustainably deliver the wealth we derive from our oceans. This development brings Australia to the forefront of marine conservation.
Hardin, Garrett. 1968. The tragedy of the commons. Science 162: 1243-1248.
Codex Justinianus. Medieval Sourcebook: The Institutes, 535 CE.
Meadows, D.H., D.L. Meadows, J. Randers, and W.W. Behrens III. 1972. The limits to growth. Universe Books.
Seagrass meadows are prevalent elements of Australian coastal waters, a submarine laurel crown around our country. They are the dark patches of water, contrasting with the turquoise blue of bare sand, that we see when sailing our coastal waters, and the source of much of the material cast upon our beaches.
Australian coastal waters probably harbor much of the global area of seagrass meadows, and possibly the largest continuous seagrass meadows in the ocean. Yet seagrass meadows lack the glamour of coral reefs, and many Australians, like most people worldwide, do not think much of their seagrass. Indeed, seagrasses have been termed the “ugly ducklings of marine conservation” (Duarte et al. 2008) for the low public interest in these marine ecosystems.
Most Australians? Well, not really. Aboriginal Australians, particularly the “Salt Water People” of Australia's north, have a deep appreciation for seagrass meadows, which they recognize as good Country: the habitat and food source of dugongs and some turtle species. Salt Water People have dozens of words for the various seagrass species and the various configurations of the meadows they form, and their songlines and dreamlines contain a wealth of knowledge on the ecology of seagrass meadows.
New to Australia, I am fascinated by the deep understanding of marine ecology and ecosystems among the “Salt Water People”, from whom I have much to learn despite 25 years of professional research in marine ecology. Their effective oral transmission systems carry forward knowledge of rare events, providing a millennial repository of ecological knowledge across time scales we can otherwise access only through the paleosciences.
For example, their deep knowledge of seagrass meadows includes insights into the impacts of floods, and the subsequent plumes of turbid water that fill the coast, on the demise of seagrass meadows and the decline of the animals they harbor, a chain of events the large 2010-2011 Queensland floods are now confirming.
Aboriginal Australians continue to be effective custodians of seagrass meadows and of the now-endangered turtles and dugongs, through, for instance, the Dugong and Marine Turtle Project, where Traditional Owners and Indigenous communities contribute their deep understanding of these ecosystems to drive research and management activities towards the conservation of these species.
Seagrass meadows play an important role in the maintenance of marine biodiversity in coastal ecosystems (Hemminga and Duarte 2000), but recent research shows that they also act as intense carbon sinks. In a paper published just this Sunday in Nature Geoscience, with the participation of scientists at the Oceans Institute of the University of Western Australia (Fourqurean et al. 2012), we demonstrate that the carbon stocks under seagrass meadows exceed those associated with forests. In particular, we show, on the basis of 3640 observations of 946 distinct seagrass meadows across the globe, that coastal seagrass beds store up to 83,000 metric tons of carbon per square kilometre, compared with around 30,000 metric tons per square kilometre in a typical land forest. This adds to evidence (Duarte et al. 2005, Duarte et al. 2011, Kennedy et al. 2011) that seagrass meadows are strong CO2 sinks, with a hectare of the most effective seagrass meadows exceeding ten-fold the CO2 sink capacity of pristine Amazonian forest (Nellemann et al. 2009, McLeod et al. 2011).
Seagrasses are highly productive and store much of the excess carbon they produce in the sediments. Moreover, their canopies act as efficient filters for suspended particles, which add to the carbon loads of seagrass sediments, typically contributing 50% of the organic carbon deposited there. Unlike forest soils, whose carbon deposits are regularly lost as CO2 to the atmosphere following forest fires, the organic carbon deposits in seagrass meadows are stable and accumulate without risk of being lost to fires, which do not occur underwater! Seagrasses lock vast amounts of carbon in the several-metre-thick sediment deposits that sit under their meadows.
Yet these carbon deposits are compromised by the loss of seagrass meadows, due to dredging, coastal alterations, moorings, loss of water and sediment quality and other impacts, responsible for the loss of about one third of the world's seagrass meadows (Waycott et al. 2009), including some of Australia's. The future of seagrass meadows looks grim (Duarte 2002), as climate change has been identified as a threat conducive to the functional extinction of seagrass meadows by mid-century.
Research conducted in collaboration between scientists at Spain's National Research Council and the Oceans Institute of the University of Western Australia has coupled knowledge of the thermal threshold triggering mass mortality of Mediterranean seagrass meadows, at about 29 °C, with model projections of the probability of exceeding this threshold in the Balearic Islands (Spain) over the 21st century under a “moderately optimistic” scenario of greenhouse gas emissions (Jordà et al. 2012). These results, reported in the issue of Nature Climate Change released last Sunday, predict that seagrass meadows in the Balearic Islands will likely become functionally extinct, declining to 10% of their current density, before 2050.
Most disturbingly, this research shows that removing local impacts will increase the resistance of the seagrass to climate change only marginally, as functional extinction would be postponed by only a decade through these management actions. We conclude that there is no alternative to reducing greenhouse gas emissions if we are to conserve seagrass meadows. But this hardly comes as a surprise, as we already have plenty of evidence – despite fiddling with spurious arguments while dragging our global feet – that there is no alternative to reducing greenhouse gas emissions to keep anthropogenic climate change below dangerous levels.
It is indeed paradoxical that CO2 emissions can, through their effect on climate, do away with some of the ecosystems that play the most important roles as CO2 sinks. Indeed, realization of the benefits of conserving ecosystems with a strong CO2 sink capacity to mitigate climate change has prompted strategies to fund the conservation of these ecosystems through revenues from voluntary and mandatory carbon taxes. Such schemes, like REDD, have generally focussed on tropical forests, but realization of the strong CO2 sink capacity of seagrass meadows has led to the formulation of strategies, which we nicknamed “Blue Carbon” strategies, to mitigate climate change through the conservation and restoration of seagrass meadows (Nellemann et al. 2009). Since we made this proposal in 2009, the science of “Blue Carbon” strategies has progressed greatly (McLeod et al. 2011), although caveats and uncertainty remain.
With a significant tax on CO2 emissions, Australia now has a unique opportunity to use carbon tax revenues to support the development of the knowledge and actions required to conserve and restore seagrass meadows, with their associated carbon sinks. This would be a win-win: the scheme is likely self-funding, as the CO2 sequestered within a few decades will probably pay back the dollars invested in conservation and restoration, and restored seagrass meadows will improve biodiversity, as they are habitat to a number of endangered species, including sea horses, dugongs and turtles, protect our shoreline from sea level rise and storm surges, and improve water quality.
The CSIRO has launched an initiative, the Coastal Carbon Cluster, to provide, in collaboration with universities across Australia, the scientific underpinnings necessary to resolve the role of seagrass meadows and other coastal ecosystems in Australia's carbon budget, and to explore the potential to include “Blue Carbon” initiatives among the slate of strategies Australia is deploying to help mitigate climate change.
I hope that realization of the key role and importance of seagrass meadows will step up efforts to conserve them and restore lost and damaged meadows across Australia, so that the grandchildren of our grandchildren will be able to match the teachings about seagrass meadows in Aboriginal Australian songlines with the reality of the lush ecosystems they will enjoy while swimming at our beaches.
Duarte, C.M., J. Middelburg, and N. Caraco. 2005. Major role of marine vegetation on the oceanic carbon cycle. Biogeosciences 2: 1–8.
Duarte, C.M., W.C. Dennison, R.J.W. Orth, and T.J.B. Carruthers. 2008. The charisma of coastal ecosystems: addressing the imbalance. Estuaries and Coasts 31: 233–238.
Duarte, C.M., N. Marbà, E. Gacia, J.W. Fourqurean, J. Beggins, C. Barrón, and E.T. Apostolaki. 2011. Seagrass community metabolism: assessing the carbon sink capacity of seagrass meadows. Global Biogeochemical Cycles 24, GB4032, doi:10.1029/2010GB003793.
Fourqurean, J.W., C.M. Duarte, H. Kennedy, N. Marbà, M. Holmer, M.A. Mateo, E.T. Apostolaki, G.A. Kendrick, D. Krause-Jensen, K.J. McGlathery, and O. Serrano. 2012. Seagrass ecosystems as a significant global carbon stock. Nature Geoscience, DOI: 10.1038/NGEO1477.
Jordà, G., N. Marbà and C. M. Duarte. 2012. Warming sets Mediterranean seagrass on a collision course. Nature Climate Change, DOI: 10.1038/NCLIMATE1533.
Kennedy, H., J. Beggins, C.M. Duarte, J.W. Fourqurean, M. Holmer, N. Marbà, and J.J. Middelburg. 2011. Seagrass sediments as a global carbon sink: isotopic constraints. Global Biogeochemical Cycles 24, doi:10.1029/2010GB003848.
McLeod, E., G. L. Chmura, S. Bouillon, R. Salm, M. Björk, C. M. Duarte, C. E. Lovelock, W. H. Schlesinger, B. Silliman. 2011. A Blueprint for Blue Carbon: Towards an improved understanding of the role of vegetated coastal habitats in sequestering CO2. Frontiers in Ecology and the Environment, doi:10.1890/110004
Nellemann, C., Corcoran, E., Duarte, C. M., Valdes, L., DeYoung, C., Fonseca, L., Grimsditch, G. (Eds). 2009. Blue Carbon. A Rapid Response Assessment. United Nations Environment Programme, GRID-Arendal. 80 p.
A number of comments on my previous post on the role of aquaculture as a milestone in the history of humanity have clearly identified one of the key problems for the growth of aquaculture: some marine aquaculture today can hardly be considered sustainable, particularly where production targets predatory fish high up in the food web.
Indeed, in an assessment of the bottlenecks to progress towards a sustainable aquaculture, my coworkers and I identified the high trophic level at which aquaculture is exploited as a major driver of the environmental impact of aquaculture (Duarte et al. 2009).
A comparison between the way we exploit marine and terrestrial food webs illustrates the problem. The efficiency of organic matter transfer up food webs is typically below 10% (i.e. an organism grows by less than 100 g for each kg of food ingested) (note 1). This means that production is dissipated as it moves up the food web: for every ton of plant production introduced into the food web, we can harvest up to 100 kg of herbivores, 10 kg of carnivores feeding on herbivores, and so on. On land, we eat largely plants and herbivores, with a few omnivores and very few carnivores (e.g. dogs in some Asian countries). Accordingly, food production on land has a weighted mean trophic level of 1.008, where 1 is a plant, 2 a herbivore, and so on.
In contrast, we eat many large predatory fish, such as tuna or sharks, that sit high up in the marine food web, which has many more steps than the terrestrial food web. For instance, tuna has a trophic level of about 5 (i.e. four steps between plankton production and tuna), which is unparalleled in terrestrial food webs, equivalent to imaginary monsters eating wolf-eaters (Duarte et al. 2009).
This means that the production of 1 kg of tuna (trophic level 5) requires about 100,000 kg of plankton production, equivalent to the annual primary production of 5 hectares of ocean surface. If we instead consume 1 kg of small pelagic fish, such as anchovies (trophic level 3), we are effectively harvesting the annual production of an ocean surface 100-fold smaller.
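The trophic arithmetic above can be sketched in a few lines. Note that the round 10% transfer efficiency used here is an illustrative assumption (the text says efficiencies are "typically below 10%"), so the absolute tonnages are rough:

```python
# Back-of-envelope trophic arithmetic: plant production needed to yield
# a harvest at a given trophic level, assuming a fixed transfer
# efficiency per step up the food web (10% is an assumed round value).

def plant_production_needed(kg_harvest: float, trophic_level: float,
                            efficiency: float = 0.10) -> float:
    """Plant (trophic level 1) production, in kg, needed to yield
    kg_harvest at the given trophic level."""
    return kg_harvest / efficiency ** (trophic_level - 1)

tuna = plant_production_needed(1, 5)     # ~10,000 kg of plankton production
anchovy = plant_production_needed(1, 3)  # ~100 kg of plankton production
print(f"tuna/anchovy footprint ratio: {tuna / anchovy:.0f}x")
# → tuna/anchovy footprint ratio: 100x
```

The 100-fold ratio between tuna and anchovy footprints matches the text regardless of the exact efficiency, because it depends only on the two-level difference. The absolute figure of 100,000 kg of plankton per kg of tuna quoted in the text corresponds to a transfer efficiency closer to 5-6% per step, consistent with efficiencies being "typically below 10%".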
Hence, the production of predatory fish high up in the food web requires the appropriation of massive amounts of ocean production, typically as wild fish converted into fishmeal and oil for feed production.
Yet aquaculture, as practiced today, produces food at a lower trophic level (1.84) than fisheries do (3.2; Duarte et al. 2009), indicating that, for a given unit of production, aquaculture co-opts about 50 times less ocean production than fisheries.
Understanding the efficiency of food webs, and introducing food web concepts into the planning of aquaculture production and human diets, is important. For instance, a food web can use any given primary production far more effectively than a single species can. Accordingly, polycultures, where different species are cultured jointly, composing a small food web rather than being grown in isolation, can increase the yield of aquaculture by 30% for a given use of feed, while reducing environmental impacts (Duarte et al. 2009). For instance, polycultures combine fish cages with a belt of filter-feeders, such as mussels or oysters, that filter excess particles out of the water, and an outer belt of algae that strip from the water the nutrients (nitrogen and phosphorus) released by excretion and decomposition. Cages with bottom detritivores can also remove excess feed and feces reaching the sea floor and turn them into valuable production.
However, the ultimate solution to the sustainability of aquaculture lies in the mass production of macroalgae and of filter-feeding and herbivorous organisms, bringing the trophic level of production down to levels comparable to those of food production on land. This shift will also allow aquaculture to close its production cycle, producing on the farm the fish feed required, thereby releasing aquaculture from its present dependence on wild fisheries catches.
In doing so, aquaculture can shift from being a source of (comparatively minor) problems to being a positive force in the marine environment. Large-scale production of macroalgae, such as that already practiced in China and Korea, can help rehabilitate degraded coastal waters by stripping excess nutrients from the water, injecting photosynthetic oxygen into hypoxic waters, and providing habitat that increases biodiversity. Much of that production can be used to produce biofuels, free of the problems (competition with crops for water and fertile land) that affect biofuel production on land, thereby helping mitigate climate change (Duarte et al. 2009).
The capacity to control the life cycles of marine organisms can also be instrumental as a tool in conservation biology, where populations of endangered marine organisms can be subsidized by the release – with proper consideration to avoid genetic dilution – of organisms grown in culture, catalyzing the recovery of endangered wild populations. This use would be comparable to successful breeding programs for endangered terrestrial birds and mammals, such as the Tasmanian devil. Indeed, the Pacific salmon fishery off Alaska is already subsidized by the release of fry from aquaculture.
As evidenced by some comments to my earlier post, the perception that aquaculture is detrimental to the environment is widespread. Whereas impacts do exist in many operations, these do not fully justify the negative perceptions and biases the public often has against aquaculture.
For instance, experiments in Scotland have shown that when wild and farmed salmon are offered to the public, each labelled as either “wild” or “farmed” independently of its true origin, a significant fraction of the test subjects consider the salmon labelled “wild” superior in taste – and this fraction is the same whether the fish carrying the “wild” label is actually wild or farmed (Holmer et al. 2008). This clearly illustrates societal biases against aquaculture that need to be addressed.
In fact, I first became involved with aquaculture through research on its environmental impacts in Europe, across the Mediterranean, and in SE Asia (the Philippines, Vietnam and Thailand). As I learned more about aquaculture and its impacts, I realized that the impacts were relatively small and easily addressed, and that the approach typically taken to assessing the impacts of aquaculture is intrinsically unfair.
Provided we agree (and I hope we do!) that we ought to produce food to feed humans, then the relevant question is not only what the environmental value of a pristine coastal area is versus one supporting aquaculture – the approach typically used in evaluating the impacts of aquaculture – but what the environmental cost of producing food is on land versus at sea. I submit that this comparison clearly shows aquaculture to be a comparatively benign form of food production, in terms of both its environmental impacts and its risks to human health, relative to food production on land.
Many past pests with catastrophic consequences for human populations, as well as contemporary risks to human health (mad cow disease, avian flu, swine flu, etc.), derive from the fact that the animals we grow on land are evolutionarily close to humans, so that pathogens and parasites may jump from them to us (Duarte et al. 2007). In contrast, marine organisms are, with the exception of mammals, too distant in evolutionary terms for their pathogens to easily jump across to humans. The conversion of wild ecosystems into cropland and pastures is still responsible for much of deforestation; the mass application of agricultural fertilizers has degraded aquatic ecosystems, both marine and inland, globally and contributed to climate change; and many hazardous persistent organic pollutants have been introduced to protect crops from insect pests and weeds. Yet we have come to accept that about two thirds of our landscape be used for food production on land, while many people are appalled if they see an aquaculture farm protruding on the horizon.
The solutions for sustainable aquaculture are relatively simple, but will sustainable aquaculture be economically feasible? Can the changes I recommend above be implemented while remaining profitable? Is this an industry only for nations with low labour costs, or can aquaculture be a successful source of food and jobs, and still deliver benefits to the environment, in developed countries with high labour costs such as Australia?
I invite you to offer your views on these questions, and – after listening – I will provide mine in my next post.
Note 1: Fortunately our own growth efficiency is well below this! Try calculating your own growth efficiency to figure out what would happen if your body weight were to increase by 10% of the weight of all the food you ingest in a year… Scary!
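A back-of-envelope version of the calculation Note 1 invites, using an assumed (purely illustrative) daily food intake:

```python
# Back-of-envelope version of the calculation suggested in Note 1.
# The daily food intake is an illustrative assumption, not a measurement.

daily_food_kg = 1.5                       # assumed food intake per day
annual_intake = daily_food_kg * 365       # ~550 kg of food per year

growth_efficiency = 0.10                  # the ~10% rule of thumb for fish
weight_gain = growth_efficiency * annual_intake

print(f"Annual intake: {annual_intake:.0f} kg")
print(f"Weight gain at 10% efficiency: {weight_gain:.0f} kg per year")
# roughly 55 kg of extra body weight per year -- scary indeed
```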
Duarte, C.M., N. Marbà, and M. Holmer. 2007. Rapid Domestication of Marine Species. Science 316: 382-383.
Duarte, C.M., M. Holmer, Y. Olsen, D. Soto, N. Marbà, J. Guiu, K. Black and I. Karakassis. 2009. Will the Oceans Help Feed Humanity? BioScience 59: 967–976.
Holmer, M., K. Black, C.M. Duarte, N. Marbà, and I. Karakassis (Eds.). 2008. Aquaculture in the Ecosystem. Springer Netherlands, 326 pp.
About 12,000 years ago, humans initiated the domestication of plants and animals in the Fertile Crescent, thereby developing the capacity to control the production of their food. The development of agriculture is a fundamental milestone in the history of humanity, conducive to a reorganization of societies, the development of cities, complex economies and taxes, and even the development of writing: the oldest known writing, the Umm el-Qaab clay tablets, dated 3400 BC to 3200 BC, is an account of agricultural yields and the corresponding taxes.
Meanwhile, we have continued to engage with the oceans in a primitive way, deriving food from the oceans through the harvesting of wild stocks of fish and other marine organisms, as hunter-gatherers; our interaction with the oceans thus remains Paleolithic in nature.
This is now changing. About 50% of all marine fish currently consumed derives from aquaculture, which now provides 30% of all marine food products and has allowed the yield of marine food to continue to increase despite a decline in wild captures of about 10% over the past 20 years (Duarte et al. 2009). Indeed, there is no longer a margin to increase marine fish yield through the harvesting of wild stocks, as most fisheries are overexploited, and reaching sustainable levels requires that catches be lowered by at least an additional 20%. Fisheries should be reduced even further as aquaculture develops the capacity to produce a greater share of our marine food requirements, eventually reducing fisheries to a residual activity comparable to that of hunting on land.
Although marine aquaculture originated in Egypt 4,000 years ago, the domestication of marine species is a recent phenomenon, progressing at an unprecedented rate, with over 300 marine species domesticated, and more than 10 additional species being brought into aquaculture every year (Duarte et al. 2007).
Meanwhile, a growing human population continues to drive rising demand for marine food products. Marine food does not just supplement food production on land: its consumption delivers demonstrated benefits to human health, through the supply of healthy fatty acids, such as omega-3s, and trace elements, such as iodine and selenium, and it is a comparatively safer source of protein than livestock products. Health organizations typically recommend two meals of seafood per week, effective in improving our coronary, mental and reproductive health.
Aquaculture is the only avenue to meet the growing demand for seafood. However, aquaculture as practiced today is often not sustainable. Fin-fish aquaculture depends on the supply of fishmeal and oils from wild catches, consuming 33 million tons of wild catch for oil and meal to deliver 35 million tons of fish per year. In addition, excess feed generates problems in the environment, and escapees from aquaculture may turn into invasive species or affect the genetic composition of wild stocks (Holmer et al. 2008). Aquaculture needs to progress toward sustainable practices, which is an achievable target. With simple and appropriate measures, which I will describe in a subsequent post, marine aquaculture has the potential to shift from being a problem to becoming a positive force in the marine environment (Duarte et al. 2009).
Whereas the production of meat products from livestock represents 2% of food production, it consumes 45% of all water used by agriculture. Shifting the production of the animal-protein component of our diet to the ocean can free most of this water to produce additional agricultural products, while delivering healthier diets. Marine aquaculture requires neither significant freshwater nor arable land, the two major bottlenecks for further growth in global food production.
Integrating food production on land and in the oceans thus has the potential to overcome current Malthusian ceilings and help provide an answer to the “9 billion people question” (i.e. how to feed them). In doing so, we must be intelligent and avoid the major errors made as agriculture and livestock production developed, which took a heavy toll on the environment and human health.
Ten thousand years later, we have progressed enough in our understanding of the biosphere and the oceans to recognize our capacity to impact the environment and the consequences of these impacts for our own health and well-being. We are now prepared to shift the way in which we interact with the oceans, and to do so to deliver wealth and well-being while improving the state of our oceans.
Duarte, C.M., N. Marbà, and M. Holmer. 2007. Rapid Domestication of Marine Species. Science 316: 382-383.
Duarte, C.M., M. Holmer, Y. Olsen, D. Soto, N. Marbà, J. Guiu, K. Black and I. Karakassis. 2009. Will the Oceans Help Feed Humanity? BioScience 59: 967–976.
Holmer, M., K. Black, C.M. Duarte, N. Marbà, and I. Karakassis (Eds.). 2008. Aquaculture in the Ecosystem. Springer Netherlands, 326 pp.
The Royal Society of the UK has released a new report, People and the Planet, addressing the problem of human overpopulation and the depletion of key resources.
As always, the report is well written although, in my opinion, it lacks novelty, in that the trajectory toward exhaustion of key resources, including water, arable land and essential elements such as phosphorus or iron, has been well understood and remarkably well forecast since the publication, forty years ago, of “The Limits to Growth” by Donella H. Meadows and coworkers (1972).
In brief, we are heading toward 9.3 billion people on Earth by 2050, reaching the median of 23 independent estimates of the maximum human population Earth can support (Cohen 1995), imposed largely by the freshwater and arable land available to produce sufficient food to feed them.
Key recommendations of the report include alleviating poverty and reducing inequality, reducing per capita consumption in the most developed nations, implementing voluntary family-planning programs, and integrating economic development with environmental conservation.
Really no surprises there: we have known what we should do for over 40 years, yet we seem unable to implement these recommendations and continue to march toward a grim future, amid increasing symptoms of overshoot.
Whereas the plan outlined in the “People and the Planet” report should continue to be our priority, as the most responsible plan, we must move on to considering a Plan B. This, in the opinion of some, such as Stephen Hawking and others (e.g. Bainbridge 2009), must include the search for an Earth-like exoplanet in our galaxy, an argument often used as one of the drivers for the search for exoplanets.
But rather than looking to space for a Plan(et) B, I suggest we look from space and consider our own planet, a unique blue marble whose colour derives from the abundance of water, covering 72% of its surface down to a mean depth of 3,800 m. If we were aliens searching for exoplanets to carry on with our lives, we would consider planet Earth a perfect candidate, unparalleled among the 500-plus known planets for its abundance of water. Yet we insist on using water in the dry parts of our planet, the continents, where water is increasingly scarce.
Indeed, humans are, without fully realizing the significance of these developments, taming the oceans and starting to deliver significant amounts of water, food, energy and other resources from the oceans.
Estimates of the potential of the oceans to deliver these critical resources indicate that it is well in excess of what is required to satisfy the livelihoods of 9.3 billion people (Duarte et al. 2009). The challenge, and not a difficult one, is to do so sustainably.
So we already have our Plan(et) B, and it is called the Ocean. The Ocean has the capacity to safely and sustainably deliver resources to face the grand challenges of humanity. Indeed, the motto of the UWA Oceans Institute, which I lead, is “Ocean Solutions for Humanity’s Grand Challenges”, as we are committed to delivering the knowledge to pursue this Plan(et) B.
I will devote the next series of posts to presenting you with our Plan(et) B: Oceans.
Bainbridge, W.S. 2009. Motivations for space exploration. Futures 41: 514–522
Cohen, J.E. 1995. How many people can the Earth support? W.W. Norton & Company.
Duarte, C.M., M. Holmer, Y. Olsen, D. Soto, N. Marbà, J. Guiu, K. Black and I. Karakassis. 2009. Will the Oceans Help Feed Humanity? BioScience 59: 967–976.
Meadows, D.H., D.L. Meadows, J. Randers and W.W. Behrens III. 1972. The limits to growth. Universe Books.
The fields of Majorca (Spain) are in full bloom at the height of spring, and bees are very busy pollinating this unplanned garden.
I am happy to welcome them to the citrus orchard in my backyard, as their busy work on its flowers offers a promise of oranges, tangerines and lemons to be enjoyed toward the end of the year. It is spring in Majorca, and it is not a silent one!
Bees and silent springs were discussed in my previous post. But perhaps we should this time reflect on the role of bees, and on the soft, mutually beneficial interaction between plants and their pollinators, as one of the cornerstones of the emerging paradigm of cooperation as a powerful force.
Pollination has long been acknowledged as a key engine of evolution and of the emergence of huge species diversity in terrestrial ecosystems. The co-evolution between plants and their insect pollinators is believed to explain the vast diversity of angiosperms (about 300,000 species) and insects (about 1,000,000 species), which together comprise much of the named species on the planet.
Yet mutualistic interactions between species, where both interacting species obtain benefits (food in the case of bees, and pollination in the case of plants, in our example), have been considered “soft” interactions, with a far smaller role in ecosystems and evolution than hard ones, such as competitive or predator-prey interactions.
Other examples of such mutualistic interactions involve those between animals feeding on fruits and their host plants, which benefit from the dispersal provided by the mobility of the animals that eat their fruits and seeds. These animals include humans, as our dependence on a few crop plants (< 1 in every 1,000 plant species) has led to their global propagation, with a few of them (wheat, rice, potatoes, etc.) having multiplied their biomass, and consequently their fitness, by many orders of magnitude relative to before these interactions emerged, about 10,000 years ago.
However, recent assessments, based on the application of complex network theory to mutualistic networks in ecosystems – particularly plant-pollinator interactions at the ecosystem level – have revealed that mutualistic interactions often involve hundreds of species forming complex networks of interdependencies, where most species have few interactions but a few species are much more connected than expected by chance, i.e. they either pollinate many plant species or are pollinated by many insect species. The structure, or more formally the topology, of these mutualistic networks has been found to have important implications for the stability of species and can be regarded as the foundation of the architecture of biodiversity (Bascompte and Jordano 2007). For instance, mutualistic networks are highly nested, with the more specialist species interacting only with proper subsets of the species that interact with the more generalist ones.
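The nestedness property can be made concrete with a toy example: in a perfectly nested network, each more specialized pollinator visits only a subset of the plants visited by the more generalist pollinators. A minimal sketch, with entirely hypothetical species sets:

```python
# Toy illustration of nestedness in a plant-pollinator network.
# Each set lists the (hypothetical) plants a pollinator visits.

generalist = {"A", "B", "C", "D"}   # visits four plant species
intermediate = {"A", "B", "C"}
specialist = {"A"}                  # visits only the most-visited plant

# Pollinators ordered from most generalist to most specialist:
partners = [generalist, intermediate, specialist]

# Perfect nestedness: each more specialized pollinator's partner set is a
# subset of the partner set of the next more generalist pollinator.
nested = all(partners[i + 1] <= partners[i] for i in range(len(partners) - 1))
print(nested)  # True
```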
The way subsets of interacting species are nested within mutualistic networks reduces competition and enhances the number of coexisting species, thereby increasing diversity (Bastolla et al. 2009). Because of the tight interactions and interdependence of species within these networks, species extinctions within mutualistic networks often lead to a cascade of extinctions, propagating across the network of species they interact with. This research has led to the understanding that species are not driven to extinction in isolation but as sets of closely connected species within ecological networks.
The topology of mutualistic networks determines, to a large extent, their robustness against species extinctions. Cascades of extinctions are more likely if the most-linked pollinators in mutualistic networks are lost (Memmott et al. 2004), so the very species that contribute most to the persistence of the network are also the most vulnerable to extinction (Saavedra et al. 2011). In pollination systems this role is typically played by bumblebees and some solitary bees.
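A toy simulation illustrates this asymmetry: removing the most-linked pollinator strands the plants that only it served, whereas removing a peripheral one may have no such effect. The network below is entirely hypothetical:

```python
# Toy extinction-cascade experiment on a hypothetical pollination network:
# remove one pollinator and count the plants left with no pollinator at all.

network = {                      # pollinator -> set of plants it visits
    "bumblebee": {"clover", "foxglove", "heather", "thistle"},
    "honeybee":  {"clover", "heather"},
    "hoverfly":  {"thistle"},
}

def orphaned_plants(net, lost_pollinator):
    """Plants losing all their pollinators when one pollinator goes extinct."""
    remaining = [plants for p, plants in net.items() if p != lost_pollinator]
    still_served = set().union(*remaining) if remaining else set()
    all_plants = set().union(*net.values())
    return all_plants - still_served

# Losing the most-linked pollinator strands the plant only it visited:
print(orphaned_plants(network, "bumblebee"))   # {'foxglove'}
# Losing a peripheral pollinator triggers no loss here:
print(orphaned_plants(network, "hoverfly"))    # set()
```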
Understanding the web of mutualistic interactions is therefore crucial to understanding evolution, the maintenance of biodiversity, and the consequences of species extinctions, and can thus be used as a tool in conservation biology. The possible impacts of synthetic chemicals and other stressors on bees are not, therefore, a problem for the conservation of bees alone, but a problem affecting the whole network of species they interact with. Mutualistic networks are powerful drivers of biodiversity but, at the same time, link species in a way that makes them prone to domino effects triggered by extinctions of the most-connected species.
Demonstration of the power of cooperative processes between species involved in mutualistic interactions has inspired the exploration of a similar role for soft interactions between firms and businesses in societies. This research has revealed that, as in ecological networks, networks of mutualistic interactions between companies play a central role in diversifying economic networks. In particular, it has revealed striking similarities between pollinator-plant networks and networks of manufacturer–contractor interactions (Saavedra et al. 2009).
As in biodiversity research, economic theory had focused on competitive interactions between firms in markets and had neglected, to a large extent, the key, and obvious, importance of interactions between firms and businesses that yield mutual benefits.
The accumulation of molecular and genomic data is also shifting our views on evolutionary processes. As we learn more about evolution we start to better understand the importance of cooperative over competitive processes, as mutualistic interactions, symbiotic interactions and lateral gene transfer have produced major evolutionary changes, compared to the small step-changes derived from competitive interactions.
Indeed, cooperative processes are emerging as a powerful mechanism to drive change and innovation and to maintain diversity and stability in a broad array of systems. In addition to ecology and evolution, the emergence of cooperation as a powerful paradigm has occurred in other fields of biology (cooperative molecular processes in gene expression and metabolic regulation), computation (crowd computing), economics (crowd-funding, cooperation between mutualistic companies), social sciences (opinion shifts and cooperation in societies), cognitive sciences (crowd intelligence), and learning (crowd learning).
The architecture of nature – as well as that of society – is not as depicted in the David Attenborough documentaries, which I have often thought of as ecological metaphors for social Darwinism, where the key drivers are the survival of the fittest and the predation of the weak by the powerful. The architecture and evolution of life – and society – are intimately linked to cooperative processes as a powerful creative force.
Social Darwinism and competitive interactions also prevail in many domains of Australian life, including universities, both among and within them.
We have much to learn from realizing that the competitive processes that take such a pre-eminent role in our lives lead to minor advantages and that the real power for innovation and major leaps forward rests with cooperative processes.
We have, in conclusion, much to learn from observing bees and flowers, an activity I quite happily embrace.
Bascompte, J., and P. Jordano. 2007. Plant-Animal Mutualistic Networks: The Architecture of Biodiversity. Annual Review of Ecology, Evolution, and Systematics 38: 567-593.
Bastolla, U., M. A. Fortuna, A. Pascual-García, A. Ferrera, B. Luque, and J. Bascompte. 2009. The architecture of mutualistic networks minimizes competition and increases biodiversity. Nature 458: 1018-1020
Memmott, J., N.M. Waser, and M.V. Price. 2004. Tolerance of pollination networks to species extinctions. Proc. R. Soc. Lond. B 271: 2605-2611.
Saavedra, S., F. Reed-Tsochas, and B. Uzzi. 2009. A simple model of bipartite cooperation for ecological and organizational networks. Nature 457: 463-466.
Saavedra, S., D.B. Stouffer, B. Uzzi and J. Bascompte. 2011. Strong contributors to network persistence are the most vulnerable to extinction. Nature 478: 233–235.
This Sunday we are celebrating Earth Day, and Earth reciprocated our recognition by slowly spinning once again around its own axis, thereby allowing us to enjoy yet one more beautiful sunrise and, later on, a beautiful sunset.
As our planet completes one more spin, my thoughts on the 2012 Earth Day are with Rachel Carson and her landmark book “Silent Spring” published in September 1962, 50 years ago.
Silent Spring alerted society, with very compelling, scientifically sound arguments and beautiful prose, to the risks of the massive use of synthetic pesticides, in particular DDT. Over a decade before its publication, Paul Müller had received the Nobel Prize in Physiology or Medicine for demonstrating the efficiency of DDT in controlling insect populations.
Whereas the massive application of DDT proved effective in reducing the prevalence of malaria, arguably saving many human lives, it came at a huge cost to all life, as DDT was effective in controlling not only the mosquitoes carrying human pathogens but all insects alike. DDT accumulates in living tissues, increasing in concentration upward through the food web. Decades after the ban on its use, all of us can still detect significant, though decreasing, concentrations of DDT in our own blood. Hence, the massive application of DDT affected insects both beneficial and noxious, wildlife, and humans alike.
Rachel Carson’s work responded to emerging evidence of the high environmental impacts of DDT, but was met with vigorous opposition from industry, which tried to prevent the publication of her book and made every possible attempt to discredit her.
Much too often we have seen this same pattern: the role of CFCs in destroying the ozone layer; the dirty campaign of industry to suppress the evidence for the health impacts of smoking; and the concerted campaign of some industrial sectors to discredit climate science and scientists.
Rachel Carson was, however, a brave woman, albeit in fragile health, and held her ground against these pressures, driving President J.F. Kennedy to seriously consider the risks of synthetic pesticides. Almost a decade later, following Rachel’s death from cancer, her efforts came to fruition in the form of US legislation regulating the application of synthetic pesticides.
Whereas Rachel Carson’s fight was hugely successful, it is not yet over. Our biosphere continues to receive emissions of synthetic chemicals, many of which have important negative consequences for biota, humans, and the Earth System (e.g. acting as powerful greenhouse gases, impacting the ozone layer, or interfering with the immune and reproductive systems of organisms, including our own).
The inventory of synthetic chemicals ever synthesized by humans is on the order of a million compounds, of which tens of thousands have been produced industrially and thus released into the environment. Many of these chemicals are biologically active, persistent in the environment, and subject to global transport. The resulting “anthropogenic chemosphere” (Dachs and Méjanelle 2010) is an important, if not sufficiently acknowledged, vector of global change.
Indeed, Rachel Carson’s fight is not yet over, and we need to remain alert to the pervasive effects of synthetic chemicals. A recent paper in Science (Whitehorn et al. 2012) provided compelling experimental evidence linking the widespread and hitherto mysterious decline of bumblebees to neonicotinoid insecticides. Bees, tiny as they are, play a fundamental role in maintaining biodiversity, and even in the production of fruits for human consumption, through their role as pollinators. The widespread decline of bees can thus have major impacts on biodiversity and food webs.
The Stockholm Convention on Persistent Organic Pollutants, effective from May 2004, aims at eliminating or restricting the production and use of persistent organic pollutants (POPs). The original list of twelve compounds was expanded by ten more in 2010, and an additional three have been proposed for regulation. Yet the rate at which synthetic compounds are added to the Stockholm Convention is far slower than the rate of release of new POPs.
One of the POPs included in 2010 is the group of polybrominated diphenyl ethers, or PBDEs. PBDEs are used as flame retardants in a diversity of materials, including building materials, electronics, furnishings, vehicles, airplanes, plastics, polyurethane foams and textiles. They have been shown to reduce human fertility at levels found in households – and we can be certain that these households include our own.
A mature industry should produce robust scientific evidence for the absence of significant impacts of new synthetic chemicals on the environment, assess their likely persistence and transport pathways in the environment, and provide the scientific community and governmental labs with analytical techniques to resolve the ambient levels of these chemicals. A mature society should demand that industry do just that.
This is, however, not the case, and much too often we discover the dangerous impacts of synthetic chemicals long after they have been introduced into markets and, therefore, into the environment.
As Earth Day comes to a quiet end, I watch the reflections of the purple tones of the clouds in the quiet waters of the Mediterranean, and remember that Rachel Carson should also be celebrated for other contributions, including her beautiful books about the marine environment. Try, for one, Rachel Carson’s “The Sea Around Us”.
Dachs, J., and L. Méjanelle. 2010. Organic Pollutants in Coastal Waters, Sediments, and Biota: A Relevant Driver for Ecosystems During the Anthropocene? Estuaries and Coasts 33: 1–14.
Whitehorn, P.R., S. O’Connor, F.L. Wackers, and D. Goulson. 2012. Neonicotinoid Pesticide Reduces Bumble Bee Colony Growth and Queen Production. Science DOI: 10.1126/science.1215025.
Last week I visited the University of Virginia, one of the oldest in the US, invited to deliver a Moore Lecture (title: “Warming, Hypoxia and Ocean Acidification: A deadly cocktail for marine biota”).
The University of Virginia was founded in 1819 by Thomas Jefferson, third president of the US, whom I deeply admire for many reasons, primarily for composing the original draft of the US Declaration of Independence, adopted by the Continental Congress on July 4, 1776. The Albert and Shirley Small Special Collections Library at the University of Virginia contains a number of documents, letters and drafts related to the making of the Declaration, which I was able to inspect last week, providing insight into the crafting of this document.
The US Declaration of Independence is a beautiful piece of literature, containing a potent statement on the meaning of life, particularly contained in its second sentence:
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness”.
The ending statement, that we are all endowed with the right of pursuing happiness provides, in my opinion, a thought worth considering when pausing and thinking about global change.
As stated in my previous post, global change derives, ultimately, from population growth and the growth in per capita use of resources, as reflected in our global ecological footprint. But does increased use of resources make us happier?
Happiness is a subjective trait, but it can be quantified, mostly through self-declared statements of personal satisfaction. Metrics of happiness are being devised to complement or replace other metrics of human development; one such metric is the Happy Planet Index, a measure of the ecological efficiency with which human well-being is delivered around the world. It is calculated as the ratio between the product of self-declared satisfaction, on a scale of 0 to 1, and life expectancy – yielding happy-life years – and the ecological footprint, i.e. the resources required to support those happy-life years.
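The calculation just described can be sketched as follows; note this is a simplified form (the published index applies additional statistical adjustments), and the country figures below are purely illustrative:

```python
# Simplified sketch of the Happy Planet Index as described above.
# Input figures are illustrative, not actual country statistics.

def happy_planet_index(satisfaction, life_expectancy, footprint_gha):
    """satisfaction on a 0-1 scale; footprint in global hectares per person."""
    happy_life_years = satisfaction * life_expectancy
    return happy_life_years / footprint_gha

# Two hypothetical countries with the same well-being, different footprints:
print(happy_planet_index(0.7, 75, 6.0))   # 8.75
print(happy_planet_index(0.7, 75, 2.5))   # 21.0 -- same happiness, less impact
```

The design of the index is the point: the same happy-life years score higher when delivered at a lower ecological cost.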
You can check world stats (see figures), but also take a survey to calculate your own happiness. I took mine but did not fare very well, because it turns out I travel too much by plane, going to conferences and workshops, and this consumes a lot of energy… I must reduce my own footprint!
Most importantly, research on happiness across countries shows that there is a relationship between monetary capacity, as GDP per capita, and happiness, but only at very low GDP. For the poorest countries (< 5,000 $ per capita per year), the degree of happiness rises quickly with even modest improvements in GDP per capita. At moderate levels of income, however, happiness ceases to improve with additional wealth, which yields only marginal or no additional self-satisfaction while increasing the use of resources.
Essentially, happiness follows, once a moderate income is reached, a law of diminishing returns.
These empirical results are anticipated in my favourite definition of happiness, by Channing Pollock as:
“a way station between too little and too much”
Pause for a second and ask yourself: are you happy?
Can you be just as happy or happier while imposing a smaller footprint on the planet resources? I can.
The running title of this column, The Blue Marble, reads “Traveling the world investigating what global change is doing to aquatic ecosystems”, but what is global change?
I suppose all of us have an intuitive understanding of what this term refers to, but perhaps I should try to articulate my own understanding, acknowledging that there may be different views.
By Global Change I refer to the impact of human activity on the key processes that govern the functioning of the biosphere. These include, but are not limited to, the climate system, the stability of the stratospheric ozone layer, the cycles of elements and materials essential for life (biogenic materials), such as nitrogen, carbon, phosphorus or water, and the balance and distribution of species and ecosystems.
Whereas the effects of human activity on these processes may appear to be independent, these changes are connected by a common driver: the combination of the growth of human population, now exceeding 7 billion people, and the increased per capita consumption of resources, including water, energy, biogenic and synthetic materials, land, and biodiversity.
The global use of resources can be represented by the ecological footprint (shown in a diagram below), typically computed as hectares per capita: the hectares of land required to produce the resources consumed annually by an average person. The product of the global average ecological footprint and human population size equals the total footprint of humanity. Estimates indicate that since 1986 the total demand for resources by humanity has exceeded the productive surface available to supply them, indicating that human consumption of resources draws on non-renewable stocks, such as fossil water in deep aquifers or fossil fuels, and is, therefore, not sustainable.
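The arithmetic behind this overshoot can be sketched in a few lines. The figures below are illustrative assumptions for the sake of the example, not measured data:

```python
# Back-of-the-envelope ecological overshoot calculation.
# All figures below are assumed, illustrative values, not measured data.
footprint_per_capita_gha = 2.7    # assumed global average footprint (global hectares/person)
population = 7_000_000_000        # roughly 7 billion people
biocapacity_gha = 12_000_000_000  # assumed productive hectares available on Earth

# Total footprint of humanity = average per-capita footprint x population
total_footprint_gha = footprint_per_capita_gha * population
# Overshoot ratio > 1 means demand exceeds the surface available to supply it
overshoot_ratio = total_footprint_gha / biocapacity_gha

print(f"Total footprint: {total_footprint_gha / 1e9:.1f} billion gha")
print(f"Overshoot ratio: {overshoot_ratio:.2f}")
```

With these assumed numbers, demand exceeds biocapacity by more than half again, which is the sense in which consumption is said to be unsustainable.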
The use of resources in excess of the capacity of the Earth System to replenish them leads to changes in the Earth System, including anthropogenic climate change, increased UVB radiation resulting from the decline in stratospheric ozone, changes in the water cycle and land use, eutrophication of coastal and continental aquatic ecosystems, loss of biodiversity, desertification, and an overall decline in the quality of air, water and soil. These changes, in turn, interact with one another, and affect the patterns of resource use by humans, creating feedbacks and blurring the path of cause and effect.
Collectively, these changes impact society, leading to economic losses, migrations, conflicts, risks to human health and lives, compromised water and food security, loss of ecosystem services, reduced resilience of human societies, and a decline in the environmental basis of our well-being, as described in the figure attached.
Simple, linear thinking can lead to major errors in addressing global change. For instance, the belief that climate change results from the use of fossil fuels led to the promotion of biofuels. Yet large-scale production of biofuels is currently a main driver of deforestation in tropical countries, competes with food crops for fertile land and water, and requires increased application of fertilizer and chemicals to protect the crops, thereby generating added impacts on the climate system, through emissions of greenhouse gases from deforestation and N2O emissions from fertilized soils, and fueling other components of global change.
The complexity of this web of interactions defies the capacity of science to predict the outcome of these simultaneous changes and their interactions and synergies, and requires an approach based on complex-systems analysis and the consideration of non-linear responses and threshold effects.
Ultimately, however, the root of these processes lies in our patterns of resource consumption, which we can manage, at least at the individual level. Our power to mitigate global change rests on the choices we make every day as consumers, perhaps to an even larger extent than on the votes we cast.
The world leaders who will ultimately manage this situation are not in political office. The effective leaders with the capacity to slow down and stop global change visit us every time we look at ourselves in the mirror.
The oceans have absorbed almost 50 % of the CO2 humans have released into the atmosphere, driving CO2 levels in the oceans up and causing – because dissolved CO2 forms carbonic acid – a decline in ocean pH, termed ocean acidification. Ocean acidification has been argued to threaten calcifying organisms, such as corals and planktonic calcifiers like coccolithophores and pteropods.
However, CO2 does not only affect pH; it also affects the efficiency of aquatic aerobic respiration, which depends on the relative partial pressures of oxygen and CO2 in the water with which organisms exchange their gases. In addition, reduced pH lowers the binding affinity of blood for oxygen. As a result, increased partial pressure of CO2 reduces the efficiency of aerobic respiration in aquatic organisms and, most importantly, raises the oxygen thresholds required to support respiration.
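One published metric that captures this compound effect is the respiration index of Brewer and Peltzer, RI = log10(pO2/pCO2), which falls as oxygen declines and CO2 rises. The sketch below uses assumed, illustrative partial pressures, not measurements from any particular water mass:

```python
import math

def respiration_index(p_o2, p_co2):
    """Respiration index RI = log10(pO2 / pCO2) (Brewer & Peltzer).
    RI falls as O2 drops and CO2 rises; low values indicate increasing
    stress on aerobic respiration."""
    return math.log10(p_o2 / p_co2)

# Assumed, illustrative partial pressures in microatmospheres:
surface = respiration_index(p_o2=210_000, p_co2=400)  # well-oxygenated surface water
omz = respiration_index(p_o2=2_000, p_co2=1_000)      # hypoxic, CO2-rich deep water

print(f"surface RI = {surface:.2f}, oxygen-minimum-zone RI = {omz:.2f}")
```

The point of the index is that the same oxygen concentration becomes less able to support respiration as CO2 rises, which is why acidification and deoxygenation cannot be treated as independent stresses.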
Although not as widely discussed as ocean acidification, the role of increased CO2 in raising the oxygen levels required to support aerobic respiration in the ocean is most important. Oxygen concentrations are declining in the ocean as CO2 levels increase, particularly in coastal waters but also in the open ocean. Ocean deoxygenation is an emerging problem that is already expressed in mass mortality events in hypoxic coastal waters, which are growing worldwide.
In a recent paper, my co-workers and I demonstrated how elevated CO2 acts as a hinge, connecting two otherwise independent threats to marine life, acidification and hypoxia. In particular, we demonstrated this coupling in Pacific waters of the upwelling region along the Chilean coast (Mayol et al. 2012).
Our results showed that a significant fraction of the water column along the Chilean sector of the Humboldt Current System suffers from CO2-driven compromises to biota, including waters corrosive to calcifying organisms, respiratory stress to organisms, or both. Ocean acidification affects most waters below 150 m depth, while respiratory compromises due to the combined effect of reduced O2 and increased CO2 are located within the 200 to 400 m layer. Only waters shallower than 100 m present conditions free of stress to aerobic organisms.
Increased CO2 in the future may increase the thickness of the water column where ocean acidification and the relative concentrations of CO2 and O2 compromise marine life, therefore, compressing the vertical extent of the stress-free habitat. Whereas these impacts will affect vulnerable habitats first, such as the Humboldt Current System off Perú and Chile, these impacts will eventually extend across the world’s ocean.
Warm tropical waters are also of concern: respiratory demands are enhanced at high temperature to the extent that oxygen concentrations are already near critical levels even at saturation, which is itself reduced by the lower solubility of oxygen in warm waters. Increasing CO2 in warm tropical waters may thus bring aerobic organisms close to their respiratory limits even when the waters are saturated with oxygen. As increased CO2 is associated with further warming, respiratory stresses are likely to compromise tropical marine life in the future.
Ocean warming, deoxygenation and acidification are, thus, connected pressures that increasingly compromise marine life.
The good news is that corals may be less vulnerable to ocean acidification than previously thought (e.g. Pandolfi et al. 2011). My colleagues at the UWA Oceans Institute published a paper today unveiling the mechanism that allows most corals to withstand lower pH values than hitherto believed. Malcolm McCulloch and co-workers (2012) showed that corals up-regulate pH at their site of calcification such that internal changes in pH are approximately one-half of those in ambient seawater, and calculated that warming may counteract the effects of lowered pH on coral calcification (cf. https://theconversation.com/some-corals-could-survive-a-more-acidic-ocean-6203). Hence, the future of coral reefs in a high-CO2 world may not be as grim as we thought.
Mayol, E., S. Ruiz-Halpern, C. M. Duarte, J. C. Castilla, and J. L. Pelegrí. 2012. Coupled CO2 and O2-driven compromises to marine life in summer along the Chilean sector of the Humboldt Current System. Biogeosciences 9: 1183-1194.
McCulloch, M., J. Falter, J. Trotter and Paolo Montagna. 2012. Coral resilience to ocean acidification and global warming through pH up-regulation. Nature Climate Change, doi:10.1038/nclimate1473
Pandolfi, J.M., S.R. Connolly, D.J. Marshall, and A.L. Cohen. 2011. Projecting coral reef futures under global warming and ocean acidification. Science 333: 418-422.
In a few hours, from 8:30 to 9:30 pm, WWF invites us to join the Earth Hour (www.earthhour.org) and switch off our lights in a gesture to remind us that our energy consumption patterns are taking a big toll on the biosphere, through the effects of greenhouse gases on the Earth’s climate and the impacts associated with the extraction and transport of fossil fuels.
The initiative was launched by WWF Australia in 2007, when Sydney citizens and companies were invited to turn their lights off for one hour to take a stand against climate change. The Earth Hour spread across Australia and then to the rest of the world, where it has become a major event.
But shrewd analysts have expressed doubts that the Earth Hour actually delivers energy savings; others have argued that the massive, global communication campaign to move citizens to join the Earth Hour itself consumes large amounts of energy, defeating the purpose; and yet others argue that the Earth Hour is not nearly enough, as one hour with the lights off is insignificant given the magnitude of the problem.
All of these arguments are correct. However, these analysts really miss the point. The point of the Earth Hour is not to curb energy consumption by virtue of switching lights off for just one hour. I wish that were all it took…
Since the Earth Hour was initiated in 2007, the CO2 concentration in our atmosphere has increased by over 8 ppm, largely due to fossil fuel combustion to produce energy, with greenhouse gas (GHG) emissions increasing at a rate of 3.1 % per year over the past decade. Australians, in particular, have one of the highest per capita GHG emission rates in the world, at about 4.5 ton C per capita per year, and the emissions of gases and particles render our urban atmospheres unhealthy.
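To put that growth rate in perspective, emissions compounding at 3.1 % per year double in under a quarter century, as a quick calculation shows:

```python
import math

growth_rate = 0.031  # 3.1 % per year, the rate quoted for the past decade

# Doubling time under compound growth: solve (1 + r)^t = 2 for t
doubling_time = math.log(2) / math.log(1 + growth_rate)

print(f"At 3.1 % per year, emissions double every {doubling_time:.1f} years")
```

At that pace, annual emissions at mid-century would be roughly four times today’s, which is why scientists describe the current trajectory as exceeding the moderate scenarios.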
The very likely consequence of the continued increase in GHG emissions is an ever warmer planet, with the present trajectory of emissions exceeding those considered in moderate scenarios and thus likely to drive the planet to a far warmer future than anticipated in most of them. The impacts and costs of a warmer planet will be unfathomable, as they cannot be measured in economic losses alone: the tally will include millions of lives lost as well, as warming enhances the frequency and strength of many natural disasters, such as floods, heat waves and droughts, which fall particularly on the poor, but also on the wealthy, as Australians experienced recently in the Queensland floods.
There is little doubt within the scientific community as to where these trends are taking us, and these trajectories have been communicated, clearly and loudly – above the noise denialists have introduced in mass and social media – to society. Yet we seem to behave like lemmings, marching towards a warmer future; driven, perhaps, by the same cause: overpopulation.
A Chinese proverb says that if we don’t change our direction, we’re likely to end up where we’re headed.
Tonight, at 8:30 pm, switch off the lights for an hour, pause on your life and reflect what direction your steps are taking you. This is what the Earth Hour is about.
(Disclosure: Carlos M. Duarte is a board member of WWF-Spain)
The Challenger Deep in the Mariana Trench, at 10,897 m the deepest point in the world’s ocean, is featured today in media around the world. The reason: the successful descent, and return to the sea surface, of the vehicle Deepsea Challenger, designed and manned by Hollywood director and ocean explorer James Cameron. Cameron is the third human being to reach this depth, following the pioneering descent of Jacques Piccard and Don Walsh in the bathyscaphe “Trieste” in 1960. This is quite a remarkable feat, mostly because of the revolutionary design of the submersible Cameron built for the purpose, and one that reminds us that the challenges of ocean exploration rival those of space exploration. For comparison, four times as many people have walked on the moon as have descended to the Challenger Deep, and 500 times more have climbed the highest peak on Earth, Mount Everest.
But what might Cameron have seen? Most likely nothing remarkable: a deep, thick darkness with, if lucky, scattered sparks of bioluminescence, likely triggered by the turbulence created by his vehicle, and not much more. Certainly none of the beasts and monsters he imagined in his science fiction movie The Abyss (1989). Most likely a really boring descent, if no doubt full of adrenaline.
This is because the ocean, particularly the deep ocean, is a microbial ecosystem. Indeed, the Challenger Deep has been sampled recurrently using unmanned vehicles, notably the Japanese deep-sea submersible Kaiko (Japanese for “ocean trench”). The samples collected by Kaiko have led to discoveries of extreme bacteria. In a series of papers, the microbial flora of the sediments of the trench was reported (e.g. Takami et al. 1997). In 1998, Kato and co-workers reported two extremely barophilic bacterial strains able, not surprisingly, to grow at extreme pressures (Kato et al. 1998). Shortly after, Takai and co-workers reported a new bacterial species, which they named Thermaerobacter marianensis, capable of growing very fast (90 min doubling time) at very high temperatures (optimum: 75 °C) (Takai et al. 1999). Unfortunately, Kaiko was lost during a typhoon and has now been replaced by the Japanese vehicle ABISMO (Automatic Bottom Inspection and Sampling Mobile), able to descend 11,000 m into the ocean.
Granted, this is not as exciting as imaginary bioluminescent monsters, but likely of far greater consequence for science. Indeed, extreme bacteria isolated from deep, warm ocean waters affected by volcanic activity have delivered a large number of genes and proteins of interest for industrial processes, from bioenergy to biotech, with a huge market value. Enzymes functional at high pressure and high temperature are likely able to catalyze processes at very high rates and yields (Arrieta et al. 2010).
Let’s welcome a long overdue new era in the exploration of the deep ocean! But expect discoveries to come from coccoid microorganisms, not large fluorescent monsters, whose habitat is to be found in fantasy books.
Arrieta, J.M., S. Arnaud-Haond, and C.M. Duarte. 2010. What lies underneath: Conserving the oceans’ genetic resources. Proceedings of the National Academy of Sciences 107: 18318-18324.
Kato, C., L. Yi, Y. Nogi, Y. Nakamura, J. Tamaoka, and K. Horikoshi. 1998. Extremely barophilic bacteria isolated from the Mariana Trench, Challenger Deep, at a depth of 11,000 meters. Applied and Environmental Microbiology 64: 1510-1513.
Takai, K., A. Inoue and K. Horikoshi. 1999. Thermaerobacter marianensis gen. nov., sp. nov., an aerobic extremely thermophilic marine bacterium from the 11000 m deep Mariana Trench. Int. J. Systematic and Evolutionary Microbiology 49: 619-628.
Takami, H., A. Inoue, F. Fuji, and K. Horikoshi. 1997. Microbial flora in the deepest sea mud of the Mariana Trench. FEMS Microbiology Letters 152: 279-285.