How many storms make a big storm?

Thomas Mortlock and Stuart Browning

The past few weeks have not been pleasant for beachfront property owners at Terrigal-Wamberal (see Figure 1), and worrisome for those with a sea view at other erosion “hot-spots” on the east coast, such as Collaroy-Narrabeen and Belongil. Beyond the difficult questions around coastal development and defence that this has raised (again), the passage of two East Coast Low (ECL) storms in quick succession, with a series of low pressure cells still lurking in the Tasman Sea, has highlighted another important issue for coastal hazard assessment: storm clustering, the cumulative risk it creates, and the need to do more to incorporate this additional dimension into the assessment of coastal risk.

Figure 1. Erosion at Wamberal, on the NSW Central Coast, July 2020. Source: Daily Telegraph, 28 July 2020.

What happened in July?

In a period of less than three weeks – from the week beginning 13 July to the week ending 31 July – two successive ECL storms impacted the southeast coast of Australia, bringing heavy rain, large waves and dangerous surf conditions to many areas, including much of the Illawarra, Sydney and Central Coast regions.

The first (week beginning 13 July) was a typical wintertime ECL, with an extra-tropical origin in the South Tasman Sea progressing northwards up the coast (Figure 2, left panel). The peak-storm hourly significant wave height (the average height of the highest third of all waves measured in an hour, and a common measure of storm intensity) was 6.9 m at the Sydney wave buoy (located approximately 10 km offshore of Narrabeen), while the maximum single wave recorded during the storm was 11.6 m (on Wednesday 15 July). The wave direction was from the south-south-east for much of the storm, until Friday 17 July when the direction swung round to the south-east. The storm wave height had a return period of about 4 years.

Figure 2. Left panel: synoptic chart for the first ECL on 17 July 2020 at 10 AM as the wave direction becomes southeast to easterly. ECL is moving in a northward direction. Right panel: synoptic setup for the second ECL exactly ten days later with wave directions from the northeast to east. ECL is moving in a southward direction. Source: Bureau of Meteorology Weather Map Archive (2020).

The genesis and track of the second ECL (week beginning 27 July) were less usual for winter, with a tropical origin in the Coral Sea progressing southwards down the coast (Figure 2, right panel). The peak-storm hourly significant wave height was 4.0 m at Sydney and the maximum single wave height recorded was 7.6 m – much smaller than the preceding ECL. This time, the wave direction was from the north-east for much of the storm, before eventually becoming bi-directional, with one mode from the north-east and a second from the south-south-east. As the storm decayed, the south-south-east mode became more prevalent. The storm wave height of the second event had a return period of less than 1 year, but the direction made it more significant (as was the case during the infamous June 2016 ECL; see Mortlock et al., 2017a).

Both storms led to significant erosion at some locations along the east coast, with perhaps the worst area affected being Wamberal, on the NSW Central Coast. The Terrigal-Wamberal embayment is oriented south-east (unlike most other coastal compartments in NSW which face east) making it more exposed to waves from the south-east and anticlockwise thereof. The south-easterly wave direction of the first ECL on Friday 17 July, combined with the morning high tide, is likely to have done most of the damage. The north-easterly direction of the second ECL, only ten days later, led to further erosion of the upper beach and foredune.

What drives ECL clustering?

An analysis of the drivers of Australian ECLs has shown that clustering has been a feature of all high impact ECL seasons since 1851 (Browning and Goodwin, 2016). Over this period, it was found that when the large-scale climate conditions were conducive to ECL formation it was likely that successive storms would occur. When this happened, they were often similar types of ECLs forming along similar storm tracks.

Climate conditions conducive to ECL formation may include a neutral to negative Indian Ocean Dipole (IOD) and neutral to La Niña-like ENSO conditions in the Pacific. Extratropical circulation, described by the Southern Annular Mode (SAM), influences the latitude of impacts: with central and northern NSW impacted under positive SAM and central to southern NSW impacted under negative SAM. All these climate states essentially promote convective behaviour in the vicinity of Southeast Australia.

Another observation is that ECL clustering occurs during a shift in the underlying Pacific climate, specifically the transition from Interdecadal Pacific Oscillation (IPO) El Niño to IPO La Niña (Hopkins and Holland, 1997). The IPO describes low frequency ENSO-like conditions in the Pacific that may persist for periods of years to decades and can either enhance or dampen the intensity of individual ENSO events.

Storm clustering and coastal risk

During an ECL, sediment is usually stripped from the upper beach and deposited seaward below the water line as a surf zone bar (Figure 3 top panel). If the water level is high enough (with a sufficient combination of waves, storm surge and high tides), the foredune may also be eroded, leading to dune instability.

After the storm, a process of beach recovery takes place on the order of weeks to months, whereby sediment is transported landward from the bar back to the beach. The wider the beach, the better the buffer for the dune (and anything built on top of it) when the next storm arrives.

Figure 3. A cross-sectional beach profile showing simplified erosion during a storm event (top panel), and the consequent depleted beach and higher water mark post-storm (bottom panel). Source: Yamamoto et al. (2012).

When a series of storm events occur in quick succession, there is no time for beach recovery. Each successive storm after the initial one thus erodes the beach from an already depleted state – similar in nature to a heavy rain event occurring on an already-saturated catchment. Because the beach is lower after the first storm, the high tide mark is further landward, making it easier for subsequent storm waves to erode the base of the dune (Figure 3 bottom panel).

It follows, therefore, that a series of low-magnitude storms in a cluster may have a cumulative erosion impact comparable to that of a single, higher-magnitude storm (assuming other characteristics, such as wave direction and storm duration, are the same).

It could be argued that a cluster of coastal storms should be regarded as a single event for erosion response, even if from an atmospheric perspective they are identifiably independent systems. If so, this should be reflected in the return period estimates of coastal storms when wave height exceedance is used as a metric to define erosion risk.

How many storms make a big storm?

To address this, we use a worked example:

If a pair of ECL events occurred less than one month apart (i.e. with insufficient time for beach recovery), both with a nominal return period of 2 years, what single-storm return period would deliver an equivalent amount of energy to the beach?

Using hourly wave height observations at the Sydney buoy from 1992 to 2019, the 2-year return period hourly significant wave height is approximately 6.2 m[1] (Figure 4, left panel).
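The empirical method in footnote [1] is simple enough to reproduce. The Python sketch below uses made-up annual-maximum wave heights rather than the actual buoy record, purely to illustrate the rank-based procedure:

```python
import numpy as np

def empirical_return_periods(annual_maxima):
    """Empirical return periods: T = (number of years in dataset) / rank,
    with the largest event assigned rank 1."""
    x = np.sort(np.asarray(annual_maxima, dtype=float))[::-1]  # descending
    T = len(x) / np.arange(1, len(x) + 1)
    return T, x

def height_for_return_period(T_target, T, x):
    """Linearly interpolate the wave height at a whole-number return period."""
    order = np.argsort(T)  # np.interp needs ascending x-coordinates
    return float(np.interp(T_target, T[order], x[order]))

# illustrative annual-maximum hourly Hs values (m); NOT the Sydney buoy record
hs_annual_max = [5.1, 6.4, 5.8, 7.2, 5.5, 6.0, 6.9, 5.3, 6.6, 5.9]
T, x = empirical_return_periods(hs_annual_max)
hs_2yr = height_for_return_period(2.0, T, x)  # 2-year return period estimate
```

Applied to the real 1992–2019 record (presumably after the hourly data are declustered into independent storm events), the same procedure yields the ~6.2 m value quoted above.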

Figure 4. Left panel: return periods associated with wave heights at Sydney. Right panel: synthetic storm curve for the 2-year return period storm for wave height (top) and wave period (bottom). Hs = significant wave height, Tp = peak energy wave period. The Sydney wave buoy is maintained and operated by Manly Hydraulics Laboratory (MHL). Wave data are available on request from MHL.

Using a method developed by Mortlock et al. (2017b)[2], we can take this peak-storm value to build a synthetic storm curve to estimate the total energy delivered to the beach during a storm of this magnitude (Figure 4, right panel, for a 2-year return period storm). Here we are assuming that the wave direction of both storms is the same.

From this, we can estimate the total wave energy flux of the storm. Wave energy flux is a measure of the total amount of power delivered by the storm along a metre length of beach[3], in Gigajoules per metre (GJ/m).
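The full formulation is given in Mortlock and Goodwin (2015); as a rough sketch of the idea, standard linear wave theory gives the flux as energy density times group velocity at the 20 m calculation depth. The Python below uses that textbook relation with an invented five-hour storm curve (the real calculation integrates the synthetic storm curves of Figure 4):

```python
import numpy as np

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3) and gravity (m/s^2)

def wavenumber(T, h):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h)
    by damped fixed-point iteration."""
    omega = 2.0 * np.pi / T
    k = omega**2 / G  # deep-water first guess
    for _ in range(200):
        k = 0.5 * (k + omega**2 / (G * np.tanh(k * h)))
    return k

def energy_flux(Hs, Tp, h=20.0):
    """Wave energy flux per metre of wave crest (W/m): E * Cg, with
    energy density E = rho*g*Hs^2/16 and group velocity Cg = n*c at depth h."""
    k = wavenumber(Tp, h)
    c = (2.0 * np.pi / Tp) / k                            # phase speed
    n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))  # Cg/c ratio
    E = RHO * G * Hs**2 / 16.0                            # J/m^2
    return E * n * c

# invented hourly storm curve (Hs in m, Tp in s), purely for illustration
hs = np.array([3.0, 4.5, 6.2, 5.0, 3.5])
tp = np.array([9.0, 10.5, 12.0, 11.0, 9.5])
total_GJ_per_m = float(np.sum(energy_flux(hs, tp)) * 3600.0 / 1e9)
```

Summing hourly fluxes in this way over a full ~76-hour storm curve is how totals of the order of tens of GJ/m arise.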

Using this approach, a 2-year return period storm contains approximately 41.2 GJ/m. This means that two of these storms occurring in quick succession have a combined energy of 82.4 GJ/m. Repeating this exercise for different return periods indicates that a pair of ECL events, each with a nominal return period of 2 years, delivers an equivalent amount of energy to the beach as a single 8 to 9-year return period event (Figure 5, left panel).
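The inversion step (sum the energies of the pair, then read off the single-storm return period with the same total from the energy-frequency curve) can be sketched as follows. The curve values here are invented, loosely anchored to the 2-year figure of about 41.2 GJ/m; the real curve comes from the buoy record:

```python
import numpy as np

# illustrative energy-vs-return-period curve (GJ/m); NOT the fitted Sydney curve
T_grid = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])      # return period, years
E_grid = np.array([25.0, 41.2, 68.0, 88.0, 110.0, 140.0])  # total storm energy

def equivalent_single_storm_T(T_storm):
    """Return period of one storm carrying the combined energy of two
    equal storms of return period T_storm (interpolating in log T)."""
    E_pair = 2.0 * np.interp(np.log(T_storm), np.log(T_grid), E_grid)
    return float(np.exp(np.interp(E_pair, E_grid, np.log(T_grid))))

T_equiv = equivalent_single_storm_T(2.0)  # two 2-yr storms ~ one ~8-yr storm
```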

Figure 5. Left panel: return periods of total storm wave energy flux for ECLs at Sydney, for when storms are treated as individual events (black line), and in the case where two storms of similar magnitude occur in quick succession (red line). Right panel: the difference between the red and black curves in left panel, with linear fit.

If we take the view that these two hypothetical storms should be considered a single event in terms of erosive potential (given the absence of beach recovery in between), then we are underestimating how frequently storm damage of this magnitude recurs.

Taking the difference between return periods of equivalent energy between the cluster-pair ECLs (red curve Figure 5, left panel) and single-storm ECLs (black curve), we can illustrate the extent to which we are underestimating erosion frequency (Figure 5, right panel). Using this approach, two 5-year ECLs occurring in quick succession may lead to erosion equivalent to a 20-year return period single ECL storm event.

Summary

In some years there is more potential for ECL occurrence and clustering than in others. The winter of 2019 was quiescent for coastal storms on the east coast of Australia because of a very strong positive Indian Ocean Dipole (IOD). In 2020, a neutral IOD means climate variability on the east coast is driven more by what is happening in the Pacific, which appears to be trending towards La Niña conditions that typically allow more convective low-pressure storms to develop. The point is that in some years it is more pertinent than in others to consider the effects of ECL clustering in coastal risk assessment.

Using the method described above, we estimate that the first July ECL had a return period of four years and the second a return period of less than one year. If treated as a single storm, however, their total energy was equivalent to the erosive potential expected of a single ECL with a return period of approximately seven years.

While this analysis is only illustrative, it demonstrates how coastal risk can be under-estimated by assuming all ECLs drive independent erosion responses. If the cumulative erosion potential of clustered ECL events is not incorporated into coastal hazard planning, then we may continue to under-appreciate the importance of event clustering.

References

Browning, S. and Goodwin, I.D. (2016). Large-scale drivers of Australian East Coast Cyclones since 1851. Journal of Southern Hemisphere Earth Systems Science, 66, 125–151.

Hopkins, L. C., and Holland, G. J. (1997). Australian heavy-rain days and associated east coast cyclones: 1958–92. Journal of Climate, 10, 621–635.

Goda, Y. (2010). Random seas and design of maritime structures. World Scientific, Singapore, pp 464.

Mortlock, T.R. and Goodwin, I.D. (2015). Directional Wave Climate and Power Variability along the Southeast Australian Shelf. Continental Shelf Research, 98, 36-53.

Mortlock, T.R. et al. (2017a). The June 2016 Australian East Coast Low: Importance of Wave Direction for Coastal Erosion Assessment. Water, 9(2), 121.

Mortlock, T.R. et al. (2017b). Open Beaches Project 1A – Quantification of Regional Rates of Sand Supply to the NSW Coast: Numerical Modelling Report. A report prepared by Department of Environmental Sciences and Risk Frontiers, Macquarie University, for the SIMS-OEH Coastal Processes and Responses Node of the NSW Adaptation Research Hub, May 2017, pp 155.

Shand, T. et al. (2011). NSW coastal inundation hazard study: coastal storms and extreme waves. Water Research Laboratory, University of New South Wales & Climate Futures, Macquarie University.


[1] An empirical estimation of the return periods was used here, where the return period = number of years in dataset / rank. Wave heights were linearly interpolated to obtain estimates for whole numbers of years.

[2] This is based on an analysis of observed storm events and accounts for the relationship between peak-storm wave height and wave period after Goda (2010), as modified by Shand et al. (2011). Storm duration is capped at 76 hours. All storms modelled here reached the duration cap.

[3] The formula for the calculation of total storm wave energy flux is given in full in Mortlock and Goodwin (2015). The water depth these values were calculated for was 20 m, which is prior to wave breaking.

Rapid detection of earthquakes and tsunamis using sea floor fibre-optic cables

Paul Somerville, Chief Geoscientist, Risk Frontiers

Standard methods of earthquake detection use seismic waves, which travel through the earth at speeds up to about 8 km/sec for compressional waves. The compressional waves have speeds about 75% higher than the following shear waves, which are the waves that do damage in earthquakes. This is the basis for early earthquake warning systems, which have been operational in some countries for the past several decades. For nearby earthquakes, the warning provided by compressional waves is only a few seconds, but there can be several tens of seconds warning for more distant earthquakes. The warning time is much greater for tsunamis, because their top speeds across the ocean are only about 0.2 km/sec. This explains why tsunami warning is mainly based on seismic waves. Some warning systems use Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys to detect the passage of tsunamis in the ocean to supplement seismic methods, but these buoys are expensive to build, deploy and maintain. Similarly, Ocean Bottom Seismometers (OBS) are notoriously expensive, unreliable and easy to lose.
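The relative speeds translate directly into warning time. A back-of-envelope comparison in Python, for an assumed 500 km source-to-sensor distance (the distance and the 4.6 km/sec shear-wave speed are illustrative values, not figures from the studies discussed below):

```python
# wave speeds (km/s): P-wave, S-wave, tsunami, light in optical fibre
v_p, v_s, v_tsunami, v_fibre = 8.0, 4.6, 0.2, 204190.0
D = 500.0  # km, assumed source-to-sensor distance

t_p, t_s, t_tsunami, t_light = (D / v for v in (v_p, v_s, v_tsunami, v_fibre))

warning_shaking_s = t_s - t_p                   # head start before damaging S-waves
warning_tsunami_min = (t_tsunami - t_p) / 60.0  # head start before the tsunami
```

At this distance a P-wave detection buys roughly 46 seconds before the shear waves arrive and about 40 minutes before the tsunami, while a signal sent over fibre would arrive in a few milliseconds, which is why cable-based detection can extend these margins.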

Seventy percent of the planet’s surface is covered by water, and seismometer coverage is limited to a handful of permanent OBS stations. Marra et al. (2018) showed that existing telecommunication optical fibre cables can detect seismic events when combined with frequency metrology techniques by using the fibre itself as the sensing element. They detected earthquakes over terrestrial and submarine links with lengths ranging from 75 to 535 kilometres and a geographical distance from the earthquake’s epicentre ranging from 25 to 18,500 kilometres. If information about the occurrence of earthquakes can be transmitted by lasers (light waves), it will arrive much sooner than seismic waves because light waves travel at 204,190 km/sec, providing much more warning time. Marra et al. (2018) proposed that implementing a global seismic network for real-time detection of underwater earthquakes could be accomplished by applying this technique to the existing extensive submarine optical fibre network.

Distributed Acoustic Sensing (DAS) is a new, relatively inexpensive technology that is rapidly demonstrating its promise for recording earthquake and tsunami waves in a wide range of research and public safety applications (Zhan, 2020). DAS systems have the advantage of being already deployed across the oceans where deployments of DART and OBS are difficult and limited. DAS systems are expected to significantly augment present seismic and tsunami detection networks and provide more rapid information for several important applications including early warning.

Fibre-optic cables are commonly used as the channels along which seismic and other kinds of data are transmitted. With DAS, the hair-thin glass fibres themselves are the sensors as well as the transmission channel. Each observation episode begins with a pulse of laser light sent down the fibre. Internal natural flaws within the fibre, such as fluctuations in the refractive index of the glass, scatter that pulse (Figure 1). DAS uses Rayleigh backscattering to infer the longitudinal strain, or strain change with time, every few metres along the fibre; this information is sent back to the source of the pulse. The strain in each fibre section changes when the cable is disturbed by seismic waves or other vibrations passing through the network. The return signals carry a signature of the disturbance. It takes only a slight extension or compression of a fibre to change the distances – as measured along the fibre – between many scattering points. Interferometric analysis extracts how the signals from scattering points vary in timing or phase, and further processing reconstructs the seismic waves that caused the disturbance. In addition to detecting seismic waves, the data can also be used to detect pressure changes in the ocean itself, which could be used to detect tsunamis.
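As a sketch of the final step, the textbook DAS relation converts a measured interferometric phase change into longitudinal strain over one gauge length. All parameter values below are typical illustrative numbers, not those of any particular interrogator:

```python
import math

WAVELENGTH = 1550e-9  # m, typical telecom laser wavelength
N_FIBRE = 1.468       # refractive index of silica fibre
XI = 0.78             # photoelastic scaling factor for silica
GAUGE = 10.0          # m, gauge length over which strain is averaged

def strain_from_phase(delta_phi):
    """Longitudinal strain implied by a phase change delta_phi (radians),
    via delta_phi = (4*pi*n*xi*L/lambda) * strain."""
    return delta_phi * WAVELENGTH / (4.0 * math.pi * N_FIBRE * XI * GAUGE)

eps = strain_from_phase(0.1)  # a 0.1 rad phase shift: strain of order 1e-9
```

Strains of this size, resolved every few metres along thousands of kilometres of cable, are what make the fibre behave as a dense seismic antenna.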

Figure 1. Backscattering from defects in the fibre that carries information about the strains in every few metres of the cable. Source: Zhan (2020).

Kamalov and Cantono (2020) point out that the links used by Marra et al. (2018) were short (under 535 km for terrestrial and 96 km for subsea) and in relatively shallow waters (~200 m deep), limiting practical application of the idea. To make the method more useful, they decided to test it using links that lie much deeper on the ocean floor and span much greater distances. Kamalov and Cantono (2020) explain how, in a pilot project, Google is using data obtained from its existing undersea fibre optic cables to detect earthquakes and tsunamis using the DAS method developed by Zhan (2020). Once the system is built, it is planned to provide information complementary to that from dedicated seismic sensors, enhancing early warnings of earthquakes and tsunamis.

How much benefit might be possible? The warning time for ground shaking from offshore earthquakes, which is presently a few seconds for nearby earthquakes and several tens of seconds for more distant earthquakes, could be doubled, providing significantly more warning time to take shelter using the “drop, cover and hold on” rule. For tsunamis, the rule is to get to higher ground, and although this evacuation process takes longer than drop, cover and hold on, there are usually at least several tens of minutes of warning. However, at very close distances from the tsunami source, there may only be about five minutes of warning time, and an additional half-minute of warning could potentially save lives. Although locally-based tsunami warning systems have been installed in some Southeast Asian countries to augment regional systems since the occurrence of the 2004 Sumatra earthquake and tsunami, these local systems have not all been well maintained, and the use of infrastructure that is already in place to provide more warning time could be beneficial.

References

Kamalov, Valey and Mattia Cantono (2020). What’s shaking? Earthquake detection with submarine cables. July 16, 2020. https://cloud.google.com/blog/products/infrastructure/using-subsea-cables-to-detect-earthquakes.

Marra, G., C. Clivati, R. Luckett, A. Tampellini, J. Kronjäger, L. Wright, A. Mura, F. Levi, S. Robinson, A. Xuereb, et al. (2018). Ultrastable laser interferometry for earthquake detection with terrestrial and submarine cables, Science 361, no. 6401, doi: 10.1126/science.aat4458.

Zhan, Z. (2020). Distributed Acoustic Sensing Turns FiberOptic Cables into Sensitive Seismic Antennas, Seismol. Res. Lett. 91, 1–15, doi: 10.1785/0220190112.

A Short Path to Coronavirus Herd Immunity?

Paul Somerville, Chief Geoscientist, Risk Frontiers

This week a number of remarkable articles on herd immunity to Coronavirus COVID-19 (SARS-CoV-2) have been posted without peer review (Britton et al., 2020; Lourenco et al., 2020), and these and other studies have been reviewed by Hamblin (2020). This briefing summarises the main conclusions of these articles.

Until a vaccine is developed, the management of the global pandemic in various regions ranges from elimination (e.g. New Zealand) through effective suppression (e.g. Australia) to reliance on large enough levels of infection to produce herd immunity (e.g. the United States), although, as noted at the end of this article, it is not clear that immunity is durable enough to allow herd immunity to develop.

Lourenco et al. (2020) assert that some of the population may already have a high level of immunity to COVID-19 without ever having caught it. They point to evidence suggesting that exposure to seasonal coronaviruses, such as the common cold, may have already provided some with a degree of immunity, and that others may be more naturally resistant to infection. Although it is widely believed that the herd immunity threshold (HIT) required to prevent a resurgence of COVID-19 is more than 50% for any epidemiological setting, their modelling explains how differing levels of pre-existing immunity between individuals could put HIT as low as 20%. These results may help explain the large degree of regional variation observed in infection prevalence and cumulative deaths, and suggest that sufficient herd immunity may already be in place to substantially mitigate a potential second wave.

The effects of the coronavirus are not linear; the virus affects individuals and populations in very different ways. The case-fatality rate varies drastically between adults under 40 and the elderly. This same characteristic variability of the virus – what makes it so dangerous in the early stages of outbreaks – also gives a clue as to why those outbreaks could burn out earlier than initially expected. In countries with uncontained spread of the virus, such as the U.S., exactly what the herd-immunity threshold turns out to be could make a dramatic difference in how many people fall ill and die. Without a better plan, this threshold seems to have become central to the fates of many people around the world.

Gabriela Gomes, professor at the University of Strathclyde in Glasgow, Scotland also believes that the HIT may be much lower than currently thought. She was drawn to the field by frailty variation – why the same diseases manifest so differently from one person to the next. She studies chaos, specifically, patterns in nonlinear dynamics, and uses mathematics to deconstruct the chains of events that can lead two people with the same disease to have wildly different outcomes. For the past few months, she has been collaborating with an international group of mathematicians to run models that incorporate the many variations in how this virus seems to be affecting people. Her goal has been to move as far away from simple averages as possible, and to incorporate as many of the disparate effects of the virus as possible when making new forecasts.

In normal times, herd immunity is calculated based on a standardized intervention with predictable results: vaccination. Everyone is exposed to the same (or very similar) immune-generating viral components, and it is possible to calculate what percentage of people need that exposure in order to develop meaningful immunity across the population.

This is not the case when a virus is spreading in the real world in the absence of a vaccine. Instead, the complexities of real life create heterogeneity: people are exposed to different amounts of the virus, in different contexts, via different routes. A virus that is new to the species creates more variety in immune responses. Some of us are more susceptible to being infected, and some are more likely to transmit the virus once infected. Even small differences in individual susceptibility and transmission can, as with any chaotic phenomenon, lead to very different outcomes as the effects compound over time on the scale of a pandemic.

In a pandemic, the heterogeneity of the infectious process also makes forecasting difficult. Differences in outcome can grow exponentially, reinforcing one another until the situation becomes, through a series of individually predictable moves, radically different from other possible scenarios. Gomes contrasts two models: one in which everyone is equally susceptible to coronavirus infection (a homogeneous model), and the other in which some people are more susceptible than others (a heterogeneous model). Even if the two populations start out with the same average susceptibility to infection, you do not get the same epidemics. The outbreaks look similar at the beginning, but in the heterogeneous population, individuals are not infected at random. The highly susceptible people are more likely to get infected first, causing selective depletion of their fraction of the population. As a result, the average susceptibility becomes lower and lower over time.

Effects like this selective depletion can quickly decelerate a virus’s spread. The compounding effects of heterogeneity seem to show that the onslaught of cases and deaths seen in initial spikes around the world are unlikely to happen a second time. Based on data from several countries in Europe, Gomes’s results show a herd-immunity threshold of less than 20%, consistent with Lourenco et al. (2020) but much lower than that of other models. If that proves to be correct, it would be life-altering news. It would not mean that the virus is gone, but if roughly one out of every five people in a given population is immune to the virus, that seems to be enough to slow its spread to a level where each infectious person is infecting an average of less than one other person. Under this condition, the effective reproduction number – the basic reproduction number R0 discounted by the fraction of the population that is immune, i.e. the average number of new infections caused by each infected individual – falls below 1, causing the number of infections to steadily decline and resulting in herd immunity. It would mean, for instance, that at 25% antibody prevalence, New York City could continue its careful reopening without fear of another major surge in cases.
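The sensitivity of the threshold to heterogeneity can be illustrated with two closed-form expressions: the classical well-mixed result, and the gamma-distributed-susceptibility form reported in the Gomes group’s (unreviewed) preprint. The Python below sketches that comparison; the coefficient of variation used is an assumed value:

```python
def hit_homogeneous(R0):
    """Classical herd-immunity threshold for a well-mixed population."""
    return 1.0 - 1.0 / R0

def hit_gamma_susceptibility(R0, cv):
    """Disease-induced threshold when individual susceptibility is
    gamma-distributed with coefficient of variation cv (closed form
    reported in the Gomes group's preprint)."""
    return 1.0 - (1.0 / R0) ** (1.0 / (1.0 + cv**2))

R0 = 3.0  # an often-quoted early estimate for SARS-CoV-2
hit_mixed = hit_homogeneous(R0)                 # ~0.67: the familiar >60% figure
hit_hetero = hit_gamma_susceptibility(R0, 2.0)  # ~0.20 with strong heterogeneity
```

The same R0 thus yields thresholds from roughly 67% down to 20% purely by varying the spread of individual susceptibility, which is the crux of the debate described here.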

Gomes admits that, although this does not make intuitive sense, homogeneous models do not generate curves that match the current data. Dynamic systems develop in complex and unpredictable ways, and the best we can do is continually update models based on what is happening in the real world. It is unclear why the threshold in her models is consistently at or below 20%, but if heterogeneity is not the cause, it is unclear what is.

Tom Britton at Stockholm University has also been building epidemiological models based on data from around the globe (Britton et al., 2020). He believes that variation in susceptibility and exposure to the virus clearly seems to be reducing estimates for herd immunity, and thinks that a 20% threshold, while unlikely, is not impossible.

By definition, dynamic systems do not deal in static numbers. Any such herd immunity threshold is context-dependent and constantly shifting. It will change over time and space, depending on R0. During the early stage of an outbreak of a new virus (to which no one has immunity), that number will be higher. The number is skewed by super-spreading events, and within certain populations that lack heterogeneity, such as a nursing home or school, where the herd immunity threshold may be above 70%.

Heterogeneity of behaviour may be the key determinant of our futures, since R0 clearly changes with behaviour. COVID-19 is the first disease in modern times where the whole world has changed its behaviour and disease spread has been reduced. Social distancing and other reactive measures have changed the R0 value, and they will continue to do so. The virus has certain immutable properties, but there is nothing immutable about how many infections it causes in the real world. The herd immunity threshold can change based on how a virus spreads. The spread keeps on changing based on how we react to it at every stage, and the effects compound. Small preventive measures have big downstream effects. The herd in question determines its immunity.

There is no mystery in how to drop the R0 to below 1 and reach an effective herd immunity: masks, social distancing, handwashing. It appears that places like New York City, having gone through an initial onslaught of cases and deaths, may be in a version of herd immunity, or at least safe equilibrium.* However, judging by the decisions some leaders have made so far, it seems that few places in the United States will choose to live this way. Many cities and states are pushing backwards into an old way of life, where the herd-immunity threshold is high. Dangerous decisions will be amplified by the dynamic systems of society. There will only be as much chaos as we allow.

All of these models assume that, after infection, people obtain immunity. However, COVID-19 is a new disease, so no one can be sure that infected people become immune reliably, or how long immunity lasts. Britton et al. (2020) note that there are no clear instances of double infections so far, which suggests that this virus creates immunity for at least some meaningful length of time, as most viruses do. However, earlier this week, an unreviewed pre-print (Seow et al., 2020) suggested that immunity to COVID-19 can vanish within months, which, if true, indicates that the virus could become endemic. They found that 60% of people mounted the potent level of antibodies required to resist future infections in the first two weeks of displaying symptoms. However, that proportion dropped to less than 17% after three months. This prompted Prof Jonathan Heeney, a virologist at the University of Cambridge, to state that the findings had put “another nail in the coffin of the dangerous concept of herd immunity,” demonstrating the remarkable state of uncertainty that currently exists among epidemiologists.

*Note that some chaotic systems can have stable equilibria (Wang et al., 2017).

References

Britton, Tom, Frank Ball and Pieter Trapman (2020). A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science  23 Jun 2020: eabc6810 DOI: 10.1126/science.abc6810

Hamblin, James (2020). A New Understanding of Herd Immunity – The portion of the population that needs to get sick is not fixed. We can change it. The Atlantic, July 13, 2020. https://www.theatlantic.com/health/archive/2020/07/herd-immunity-coronavirus/614035/

Lourenco, Jose, Francesco Pinotti, Craig Thompson, and Sunetra Gupta (2020). The impact of host resistance on cumulative mortality and the threshold of herd immunity for SARS-CoV-2. doi: https://doi.org/10.1101/2020.07.15.20154294

Seow, Jeffrey et al. (2020). Longitudinal evaluation and decline of antibody responses in SARS-CoV-2 infection. doi: https://doi.org/10.1101/2020.07.09.20148429. https://www.medrxiv.org/content/10.1101/2020.07.09.20148429v1

Wang, X. V. Pham, S. Jafari, C. Volos, J. M. Munoz-Pacheco and E. Tlelo-Cuautle (2017). A New Chaotic System with Stable Equilibrium: from Theoretical Model to Circuit Implementation, in IEEE Access, vol. 5, pp. 8851-8858, 2017, doi: 10.1109/ACCESS.2017.2693301.


Risk Frontiers Seminar Series 2020

Due to the COVID-19 pandemic, Risk Frontiers’ Annual Seminar Series for 2020 will be presented as a series of three one-hour webinars across three weeks.

Webinar 1. Thursday 17th September, 2:30-3:30pm
Webinar 2. Thursday 24th September, 2:30-3:30pm
Webinar 3. Thursday 1st October, 2:30-3:30pm

Risk Modelling and Management Reloaded

Natural hazards such as floods, bushfires, tropical cyclones, thunderstorms (including hail) and drought are often thought of, and treated as, independent events, despite evidence that this is not the case. Understanding the risk posed by these hazards and their relationship with atmospheric variability is of great importance in preparing for extreme events, both today and in the future under a changing climate. Risk Frontiers’ ongoing research and development is focussed on incorporating this understanding into risk modelling and management, as we view this as the way of the future. We look forward to sharing some of our work during our 2020 Seminar Series.

Presentation Day 1

  • Introduction to Risk Frontiers Seminar Series 2020
  • Historical analysis of Australian compound disasters – Andrew Gissing, Dr Stuart Browning

Presentation Day 2

  • Climate conditions preceding the 2019/20 compound event season – Dr Stuart Browning
  • Black Summer learnings and Risk Frontiers’ Submission to the Royal Commission into National Natural Disaster Arrangements – Dr James O’Brien, Lucinda Coates, Andrew Gissing, Dr Ryan Crompton

Presentation Day 3

  • Introduction to Risk Frontiers’ ‘ClimateGLOBE’ physical climate risk framework
  • Incorporating climate change scenarios into catastrophe loss models – Dr Mingzhu Wang, Dr Tom Mortlock, Dr Ryan Springall, Dr Ryan Crompton

The Difference Between Complicated and Complex Systems

Paul Somerville, Chief Geoscientist, Risk Frontiers

This article, published online under the title “What is the Difference Between Complicated and Complex Systems… and Why is it Important in Understanding the Systemic Nature of Risk?,” is the third in a series of eight articles co-authored by Marc Gordon (@Marc4D_risk), United Nations Office for Disaster Risk Reduction (UNDRR), and Scott Williams (@Scott42195), United Nations Development Programme (UNDP). It builds upon the chapter on ‘Systemic Risk, the Sendai Framework and the 2030 Agenda’ in the Global Assessment Report on Disaster Risk Reduction 2019. Paragraph 15 of the Sendai Framework states that “The present Framework will apply to the risk of small-scale and large-scale, frequent and infrequent, sudden and slow-onset disasters caused by natural or man-made hazards, as well as related environmental, technological and biological hazards and risks. It aims to guide the multihazard management of disaster risk in development at all levels as well as within and across all sectors.” These articles explore the systemic nature of risk made visible by the COVID-19 global pandemic, climate change and cyber hazards, and consider what needs to change and how we can make the paradigm shift from managing disasters to managing risks. The original article did not include figure captions; these have been added by the editor.


We need to clarify the distinction between a ‘complicated’ and a ‘complex’ system. A complicated system can be disassembled and understood as the sum of its parts, just as a car is assembled from thousands of well-understood components that, when combined, allow for simple and safe driving. In the same way, multi-hazard risk models allow risks to be aggregated into well-behaved, manageable or insurable risk products.

By contrast, a complex system exhibits emergent properties that arise from the interactions among its constituent parts, and relational information is of critical importance to understanding it. To understand a complex system, it is not enough to know the parts; it is necessary to understand the dynamic nature of the relationships between them. Indeed, in a complex system it is impossible to know all the parts at any point in time. The human body, a city traffic system and a national public health system are examples of complex systems.

Figure 1. Contrast between Complicated and Complex Systems

The priorities for action of the Sendai Framework spur a new understanding of risk. They reinforce the value of discerning the true nature and behaviour of systems, rather than thinking of systems as collections of discrete elements. Risk management models, as well as economic models and related policymaking, have tended to treat systems as complicated: simplified, stylised models are applied to single entities or particular channels of interaction to first define and then label the risk phenomena. Methods are then negotiated by stakeholders to quantify, or otherwise objectively reflect, the risk in question, and the results are generalised to make policy choices.

Most prevailing risk management tools assume that the underlying systems are ‘complicated’ rather than ‘complex’. In fact, these tools are often designed to suppress complexity and uncertainty. This approach is outdated and potentially very harmful, not least in the context of the developing COVID-19 pandemic, and is likely to produce results that fail to capture the rising complexity and the need to navigate the full topography of risks.

We must improve our understanding of the interdependencies between system components, including precursor signals and anomalies, systems reverberations, feedback loops and sensitivities to change. Ultimately, the choices made right now in respect of risk and resilience to favour sustaining human health in the face of the COVID-19 pandemic will determine progress towards the goals of the 2030 Agenda and beyond.

Figure 2. Limitations of the current Non-Systemic approach (red) and how they are addressed by the advocated Systemic approach (green).

Risk and uncertainty are measures of deviation from ‘normal’. Risk is the part of the unexpected that can be quantified by the calculation of probabilities. Uncertainty is the remainder: information that may exist but is unavailable, not recognised as relevant, or unknowable. In a complex system, which is inherently unpredictable, probabilities for uncertainties cannot be reliably measured in a manner currently acceptable to the global risk management community, including governments. Converting uncertainty into acceptable risk quantities that emanate from the dynamic, relational nature of complex system behaviour is currently very difficult, perhaps impossible; some uncertainties in any complex system will always remain unmeasurable.

Understanding sensitivities to change and system reverberations is far more important, and more challenging, in the context of complex systems, particularly when dealing with very large human, economic and ecological loss and damage across the planet, as is the case with the COVID-19 pandemic. Simulations of such systems show that very small changes can produce almost unnoticeable but still identifiable initial ripples. These are then amplified by non-linear effects and associated path dependencies, causing changes that lead to significant, and potentially irreversible, consequences. This is what the world is experiencing now with the highly infectious COVID-19 outbreak: country after country has imposed lockdowns and strict restrictions on human interaction, as individuals do not fully appreciate that a single infected (and possibly asymptomatic) person can provoke tens of thousands of infections within weeks.
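The amplification of “almost unnoticeable initial ripples” by non-linear effects can be illustrated with a classic toy system. The sketch below is our illustration, not drawn from the article: it iterates the logistic map in its chaotic regime and shows two states that differ by one part in a billion diverging to an order-one difference.

```python
# Toy illustration of sensitivity to initial conditions: two states of a
# simple non-linear system differing by one part in a billion diverge to an
# order-one difference within a few dozen iterations. The logistic map at
# r = 4 (its chaotic regime) is the standard minimal example.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9   # two nearly identical initial conditions
diffs = []
for step in range(60):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

# the gap roughly doubles each iteration until it saturates at order one
print(diffs[0], max(diffs))
```

The same qualitative behaviour, small perturbations amplified along path-dependent trajectories, is what makes ex-ante prediction in complex systems so difficult.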

Risk is everyone’s business. Almost everyone across the world is starting to understand this, with physical distancing fast becoming the global norm. We must now review how our relationship with behaviour and choice transfers to individual and collective accountability for risk creation and amplification, or for risk reduction. This understanding must translate into action.

Increasing complexity in a networked world of complex, tightly coupled human systems (economic-political-technical-infrastructure-health) within nature can create instability and move beyond control. It may not be possible to understand this ahead of time (that is, ex ante). This inability to understand and manage systemic risk is an important challenge for current risk assessments, including in the context of the response to the COVID-19 pandemic, the wider context of the Sendai Framework and the achievement of the 2030 Agenda on Sustainable Development.

To allow humankind to embark on a development trajectory which is, at the very least, manageable, and at best sustainable and regenerative, consistent with the 2030 Agenda on Sustainable Development, a fundamental rethink and redesign of how to deal with systemic risk is essential; starting with a shift in mindset from ‘complicated’ to ‘complex’.


http://www.acclimatise.uk.com/2020/05/12/what-is-the-difference-between-complicated-and-complex-systems-and-why-is-it-important-in-understanding-the-systemic-nature-of-risk/

https://www.preventionweb.net/files/43291_sendaiframeworkfordrren.pdf

 

Newsletter Volume 19, Issue 3

In this issue:

Weather-related flight disruptions in a warming world

by Stuart Browning, Thomas Mortlock and Ryan Crompton, Risk Frontiers

There have been numerous causes of air travel disruption since the start of the 21st century, including the 9/11 terrorist attacks, the eruption of Iceland’s Eyjafjallajökull volcano in 2010 and, most recently, the COVID-19 pandemic. While the airline industry has seemingly recovered from the shocks of 2001 (air passenger travel in the US reached its pre-9/11 peak in July 2004 (US Department of Transportation)) and 2010, and still has some way to go with COVID-19, the industry could be dealing with the impacts of climate change for many decades into the future. Our initial analysis, using only publicly available information, is that weather-related flight disruptions at Australia’s busiest airports are set to increase in a warming world, due primarily to an increase in heatwave conditions and thunderstorm activity.

Regular flyers know only too well the inconvenience and frustration of flight delays and cancellations; these can occur for a range of reasons but most commonly are due to extreme weather. Under otherwise normal conditions, Australia has some of the busiest flight routes in the world, including Sydney to Melbourne, where around a quarter of all flights are delayed, usually due to weather. At Canberra Airport, fog delays are so common that business travellers with morning meetings are advised to arrive the day prior.

A 2010 report commissioned by the US Federal Aviation Administration (FAA) estimated the cost of flight delays in the US at USD32.9B per year (USD4690 per hour, per flight): a cost that is distributed between airlines, passengers, and travel insurance providers (Nextor 2010). Flight delays can increase fuel consumption due to a range of factors including extended taxi times, holding patterns, path stretching, re-routing, increased flight speeds to meet schedules, and proactive measures such as padding schedules in anticipation of delays. This is not only a financial issue but a reputational problem for an industry facing scrutiny over its CO2 emissions. Airports also play a crucial role in the global flow of goods and services, and so are a key factor in supply chain risk. Due to the interconnected nature of air travel, delays at one airport can propagate across the network, making them difficult to anticipate and manage. Given the projected exponential rise in air transport, there is surprisingly little research into how weather-related disruptions will impact this sector in the future.

The key weather phenomena of interest for airport operations are strong winds (particularly crosswinds), fog, mist and low cloud ceilings, thunderstorms, heavy rainfall, and extreme heat. Also relevant are tropical cyclones, hail, snowfall, smoke (as the latest Black Summer bushfire season illustrated), dust, humidity and extreme cold. This study seeks to understand the weather events responsible for most delays at Australia’s three busiest airports (Sydney, Melbourne and Brisbane) and how these might change in the future under possible climate change scenarios.

Documented airport disruption data are not readily available; however, major airports announce disruptions on Twitter. For this study, airport Twitter feeds for the last five years were mined for information on the occurrence and causes of flight disruptions. This information was then matched against atmospheric data to determine the thresholds at which flight delays and cancellations occurred.

Historical weather and climate information was obtained from the ERA5 reanalysis (C3S 2017) at hourly resolution and validated against in-situ automatic weather station (AWS) observations. Five types of extreme weather events were evaluated: heat, wind speed, fog, rain and thunderstorms. Figure 1 shows Convective Available Potential Energy (CAPE, as an indicator of thunderstorm risk) thresholds for Sydney Airport determined from mining Twitter data. Using the ERA5 reanalysis and the thresholds from the five-year period analysed, we were able to develop a 40-year history (1979 to 2018) of the frequency of weather-related disruptions at each of these airports. Figure 2 shows that over this time period the most frequent weather-related disruptions at Brisbane and Sydney airports were due to fog (54%) and thunderstorm (25%) events, while for Melbourne they were fog (56%) and strong wind (31%) events.
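The thresholding step described above can be sketched as follows. This is an illustrative reconstruction, not Risk Frontiers’ actual code: the function name, the data layout and the CAPE threshold of 1500 J/kg are all assumptions made for the example.

```python
# Minimal sketch of the procedure described in the text: once a disruption
# threshold has been estimated from the Twitter-matched events, an hourly
# reanalysis series is thresholded and exceedance hours are summed per year.
# The 1500 J/kg threshold is an assumption for illustration only.
from collections import defaultdict

def annual_disruption_hours(hourly_records, cape_threshold=1500.0):
    """hourly_records: iterable of (year, cape_j_per_kg) tuples."""
    hours = defaultdict(int)
    for year, cape in hourly_records:
        if cape >= cape_threshold:
            hours[year] += 1   # one disruption-risk hour
    return dict(hours)

# toy data standing in for an ERA5 grid-point series near an airport
records = [(1979, 200.0), (1979, 1800.0), (1980, 1600.0), (1980, 900.0)]
print(annual_disruption_hours(records))  # {1979: 1, 1980: 1}
```

The same counting logic applies to each hazard type, with the CAPE threshold swapped for, say, a visibility threshold for fog or a gust threshold for strong winds.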

Figure 1. An example (thunderstorm activity at Sydney airport) of the hourly analysis of weather conditions associated with flight disruptions.
Figure 2. Breakdown of weather-related disruption hours for Brisbane, Melbourne and Sydney airports over the 1979-2018 time period.

Projections of the frequency of each type of weather event have been developed from a seven-member ensemble of global climate model (GCM) projections from the fifth Coupled Model Intercomparison Project (CMIP5). Model output for two of the four Representative Concentration Pathways (RCPs, the successors to the earlier ‘emission scenarios’) was analysed: RCP 4.5 and RCP 8.5. RCP 4.5 is one of two medium stabilisation scenarios, while RCP 8.5 represents a very high baseline emission scenario in which historical carbon emission trajectories are maintained (van Vuuren et al. 2011).

Figure 3 shows the multi-model ensemble projections for an increase in the frequency of thunderstorm hours at Sydney airport for both RCPs 4.5 and 8.5. Total weather-related disruption hours at Sydney airport, represented as the sum of all weather events (fog, heat, wind, rainfall, and thunderstorms), are set to increase by ~40% under RCP 4.5 (Figure 4), and ~75% under RCP 8.5 (not shown) to 2100. Increases in weather-related disruptions are also expected at Melbourne and Brisbane airports, due primarily to an increase in thunderstorm activity and heatwave conditions.

Figure 3. Multi-model ensemble projections for thunderstorm hours at Sydney Airport from 2006 to 2100. The black line represents historical thunderstorm activity determined from the ERA5 reanalysis. The blue and red lines represent the RCP 4.5 and RCP 8.5 projections respectively. Shading indicates two standard deviations from the 7-member ensemble mean. Much of the uncertainty around changes in the mean state is associated with natural, year-to-year internal climate system variability.
Figure 4. Projections of total weather-related disruption hours at Sydney airport from a 7-member ensemble of GCM simulations under a medium emission (RCP 4.5) scenario. For clarity only the ensemble mean is shown for each type of weather event.

Allen et al. (2014) used an environments-based approach to suggest an increase in thunderstorm-related delays of 14-30% for Sydney, Brisbane, and Melbourne. However, a wide range of uncertainty exists as very few studies have investigated potential changes. Increasing frequency and intensity of hot days are among the most robust projections from GCM simulations. Recent research indicates summertime daily maximum temperatures in Sydney will regularly exceed 50°C by the end of this century, even under low emission scenarios (Lewis et al. 2019). Extreme heat days historically have not been a major issue for Australian airports, with the hottest summer days typically in the low- to mid-40°C range. However, once maximum temperatures move into the high-40°C range, major issues arise for both ground crews and aeroplanes. Beyond ~48°C tarmac begins to soften, and beyond ~50°C many aircraft cannot take off due to the lower density of hot air, especially when coupled with low humidity.
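The take-off claim rests on the ideal gas law: at fixed pressure, air density falls in proportion to absolute temperature, so hot air gives wings and engines less to work with. A minimal sketch (an illustration of the physics, not an aviation performance calculation):

```python
# Dry-air density from the ideal gas law at standard sea-level pressure.
# Illustrates why extreme heat degrades take-off performance: density at
# 50 degrees C is about 11% lower than at the standard 15 degrees C.

def air_density(temp_c, pressure_pa=101325.0):
    """Dry-air density (kg/m^3): rho = p / (R_specific * T)."""
    R_SPECIFIC = 287.05          # J/(kg K), specific gas constant, dry air
    return pressure_pa / (R_SPECIFIC * (temp_c + 273.15))

rho_15 = air_density(15.0)       # ~1.225 kg/m^3, the ISA sea-level value
rho_50 = air_density(50.0)
print(round(100.0 * (1.0 - rho_50 / rho_15), 1))  # percent density loss
```

Since lift and engine thrust both scale with air density, that roughly 11% loss must be recovered through higher take-off speed, longer runway, or reduced payload, which is why some aircraft simply cannot depart in extreme heat.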

Fog has historically been one of the main causes of flight delays in our analysis. Projections for fog are more uncertain than those for temperature; however, research from the NSW and ACT Regional Climate Modelling (NARCliM) Project suggests an increase in wintertime temperature inversions (when cold air is trapped beneath warm air), which would likely increase fog days for Sydney (Ji et al. 2019). Projections for extreme wind and rainfall events are also more uncertain than temperature projections and show no significant changes from present. However, numerous studies have linked global warming to an increase in the intensity of rainfall events, especially those associated with thunderstorms (Allen et al. 2014), meaning these could also become more problematic in the future.

Flight delays due to bushfire smoke have not been directly considered in this research as historically this has not been a major issue for Australian airports. However, severe bushfire smoke haze during the 2019/20 Black Summer bushfire season caused extensive flight cancellations (see here and here). Given the robust projections for an increase in bushfire weather risk, this is likely to become a significant issue in the future and Risk Frontiers’ probabilistic bushfire and grassfire loss model ‘FireAUS’ could be used to assess changes in this risk.

The projections presented in this study are based on the CMIP5 multi-model ensemble, which was developed almost 10 years ago for the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5). A new generation of higher-resolution projections is currently being developed as part of CMIP6 and the Coordinated Regional Climate Downscaling Experiment (CORDEX), with both becoming available in 2020. These new datasets are expected to greatly improve our ability to produce location-specific climate risk projections due to better model physics and increased spatial resolution. While the high-level message of increasing weather extremes due to increasing temperature is not expected to change, ongoing research will allow us to answer specific questions with more confidence.

Our preliminary research on airport flight disruptions indicates a robust projection for increases in weather-related flight delays at Brisbane, Sydney and Melbourne airports under both medium and very high emission scenarios. Confidence is highest in the impacts of extreme heat; the impacts of other delay-causing events, such as thunderstorms, fog, wind and rainfall, are more uncertain. Existing research based on downscaled projections suggests an increase in wintertime fog frequency and in the intensity of extreme rainfall events.

Adaptation measures will be necessary to minimise impacts, in addition to mitigation (reducing CO2 emissions). (Note that prior to COVID-19, air traffic had increased substantially in recent decades and was projected to continue increasing at a near exponential rate, further contributing to emissions.) For example, Melbourne and Perth airports have Category IIIB-rated runways, which allow take-off and landing in fog conditions but are costly to install and maintain. Airlines, passengers and their travel insurance providers will continue to bear most of the costs associated with flight delays.

Australia’s vulnerability to weather and climate events was most recently tested during the catastrophic 2019/20 bushfires and drought. The impacts extended beyond the built environment to society more generally and to the economy. Organisations need to understand their historical sensitivities and how these may change under future climate scenarios. The recommendations of the Task Force on Climate-related Financial Disclosures (TCFD) encourage a science-based risk evaluation and adaptation process, especially for the financial sector. Through ClimateGLOBE, Risk Frontiers is collaborating with Australia’s leading climate researchers, and building on decades of catastrophe loss modelling experience, to develop robust assessments of the current and projected financial impacts of climate change that are applicable not just to airports but to all organisations.


Risk Frontiers’ submission to the Royal Commission into National Natural Disaster Arrangements

The following is the Executive Summary from Risk Frontiers’ submission to the Royal Commission into National Natural Disaster Arrangements. Ryan Crompton was a witness at the opening day of public hearings of the Royal Commission on May 25 and his statement can be found here. Risk Frontiers may be called again in later hearings.

The 2019/20 ‘Black Summer’ bushfire season in Australia was extremely damaging. A distinguishing feature of Black Summer was the extended time period over which fires raged throughout several states including New South Wales (NSW), Queensland and Victoria. Our submission focuses on:

  • the impacts of Black Summer expressed in terms of numbers of destroyed buildings, insured losses and fatalities;
  • how bushfire impacts compare with those arising from other natural perils such as tropical cyclones, floods, hailstorms and earthquakes;
  • recommendations for improved bushfire mitigation and resilience; and
  • the future of bushfire fighting in Australia.

In terms of bushfire building damage, Black Summer is expected to be comparable to the most damaging seasons (if not the most damaging) in Australia since 1925, after increases in dwelling numbers are taken into account. A point of difference with previous major fire events is that the Black Summer damage accumulated throughout the season (or at least the first half of the season) rather than on single days of fire. The Insurance Council of Australia’s current estimate of insured losses for Black Summer is $2.23 billion, slightly less than that incurred in the Ash Wednesday (February 1983) fires. Across all perils, bushfires comprise 12% of normalised Australian insured natural hazard losses over the period 1966-2017.
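The “normalised” losses referred to above can be sketched with the standard idea from the loss-normalisation literature: scale a historical loss by the growth in dwelling numbers and in insured value per dwelling since the event. The function below is an illustrative sketch; all figures in it are invented.

```python
# Sketch of loss normalisation: express a historical insured loss in terms
# of today's exposure by scaling for growth in dwelling numbers and in the
# insured value per dwelling. All numbers below are made up for illustration.

def normalise_loss(original_loss, dwellings_then, dwellings_now,
                   value_per_dwelling_then, value_per_dwelling_now):
    """Historical loss restated at current exposure levels."""
    dwelling_factor = dwellings_now / dwellings_then
    value_factor = value_per_dwelling_now / value_per_dwelling_then
    return original_loss * dwelling_factor * value_factor

# e.g. a $1.0B event from an era with half today's dwellings, each insured
# for a quarter of today's value, normalises to $8.0B in current terms
print(normalise_loss(1.0e9, 1.0e6, 2.0e6, 0.25, 1.0))
```

Normalisation is what makes the 1966-2017 comparison across perils meaningful: without it, recent events would dominate simply because there is more to destroy today.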

Post-event damage surveys undertaken by Risk Frontiers confirm the significant role that proximity to bushland can play. In the case of the NSW South Coast, approximately 38% of destroyed buildings were situated within 1 metre of surrounding bush. Risk Frontiers observed similar findings in the aftermath of the 2009 Black Saturday bushfires where an estimated 25% of destroyed buildings in Kinglake and Marysville were located physically within the bushland boundary. Land-use planning in bushfire-prone locations needs to acknowledge this risk.

Risk Frontiers is now in a position to characterise the national natural peril profile, by comparing risks between perils at a given postcode or comparing all-hazard risk across different postcodes. The postcodes that face the greatest risk of financial loss to insurable assets lie in Western Australia, Queensland or NSW, with flood and tropical cyclone the most significant perils. Bundaberg has the highest average annual loss relative to all other postcodes. Information like this could be employed to guide national mitigation investment priorities.
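The “average annual loss” (AAL) used to rank postcodes can be sketched from a catastrophe model’s event loss table: it is the occurrence-rate-weighted sum of event losses. The code below is an illustration with an invented toy event set, not Risk Frontiers’ model.

```python
# Minimal sketch of the AAL metric: given an event loss table for one
# postcode (annual occurrence rate and loss per simulated event), AAL is
# the rate-weighted sum of losses. The toy event set is invented.

def average_annual_loss(event_loss_table):
    """event_loss_table: iterable of (annual_rate, loss) pairs."""
    return sum(rate * loss for rate, loss in event_loss_table)

events = [
    (0.10, 5.0e6),   # frequent, moderate event: 1-in-10-year, $5M
    (0.01, 2.0e8),   # rare, severe event: 1-in-100-year, $200M
]
print(average_annual_loss(events))
```

Because AAL collapses the full frequency-severity curve into a single expected value per year, it gives a consistent basis for comparing perils at one postcode or one peril across many postcodes.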

We posit the need to adopt an all-hazards, whole-of-community and nationwide approach to managing large-scale disasters. Consideration should be given to multiple large-scale concurrent or sequential events in future disaster planning. There is a need to encourage community participation and greater private sector engagement. A national bushfire capability development plan should guide investment in the next generation of bushfire fighting capability, as we seek to move away from long-standing approaches that are resource intensive and struggle to control fires when conditions are truly catastrophic. This will be important given the trend toward more dangerous conditions in southern Australia and an earlier start to the fire season, due at least in part to anthropogenic climate change.



Appointment of New Managing Director

John McAneney and Ryan Crompton

It is with sadness that we announce the retirement of Prof John McAneney from the position of Managing Director of Risk Frontiers, and we thank him for his valuable contribution over the last 18 years. It is with pleasure that we announce that Dr Ryan Crompton has been promoted to Managing Director as of 1 July 2020.

John took over the leadership of Risk Frontiers from Prof Russell Blong in 2003. Under John’s stewardship, both inside Macquarie University and more recently as a private company, Risk Frontiers has grown into an internationally recognised risk management, modelling and resilience organisation. John has nurtured the professionals who have chosen to work at Risk Frontiers and fostered the culture of tremendous work ethic, intellectual curiosity and scientific rigour that underpins the company. Our new Managing Director, Dr Ryan Crompton, looks forward to doing the same.

Announcing his retirement to Risk Frontiers staff John commented:

Being in charge of Risk Frontiers has truly been the best part of my working career. It has been a privilege to have been able to work closely with so many brilliant people and help create a research capability that is second to none in our field. Few are given that privilege and I’m grateful to Russell for entrusting me with that responsibility so many moons ago. I’m also indebted to each of you for your support and I look forward to catching up again when next in Australia.

John has agreed to remain on the Board as a Non-Executive Director until at least 31 December 2020.

In rising to the role of Managing Director, Dr Ryan Crompton brings long-standing academic achievement and commercial experience gained with Risk Frontiers since 2003. Ryan joined Risk Frontiers while studying at Macquarie University, soon after John, and has since held numerous roles within the company, most recently Acting Managing Director. He has developed strong relationships with staff, clients and associates over many years, and we look forward to his leadership of the Risk Frontiers team.

We look forward to your continued support of Risk Frontiers.