Newsletter Volume 19, Issue 3

In this issue:

Weather-related flight disruptions in a warming world

There have been numerous causes of air travel disruption since the start of the 21st century, including the 9/11 terrorist attacks, the eruption of Iceland’s Eyjafjallajökull volcano in 2010 and, most recently, COVID-19. While the airline industry has seemingly recovered from the shocks of 2001 (air passenger travel in the US reached its pre-9/11 peak in July 2004, according to the US Department of Transportation) and 2010, and still has some way to go with COVID-19, it could be dealing with the impacts of climate change for many decades into the future. Our initial analysis, using only publicly available information, is that weather-related flight disruptions at Australia’s busiest airports are set to increase in a warming world, due primarily to an increase in heatwave conditions and thunderstorm activity.

Regular flyers know only too well the inconvenience and frustration of flight delays and cancellations; these can occur for a range of reasons, but the most common is extreme weather. Under otherwise normal conditions Australia has some of the busiest flight routes in the world, including Sydney to Melbourne where around a quarter of all flights are delayed, usually due to weather. For Canberra Airport, fog delays are so common that business travellers with morning meetings are advised to arrive the day prior.

A 2010 report commissioned by the US Federal Aviation Administration (FAA) estimated the cost of flight delays in the US at USD32.9B per year (USD4690 per hour, per flight): a cost distributed between airlines, passengers and travel insurance providers (Nextor 2010). Flight delays can increase fuel consumption due to a range of factors, including extended taxi times, holding patterns, path stretching, re-routing, increased flight speeds to meet schedules, and proactive measures such as padding schedules in anticipation of delays. This is not only a financial issue but a reputational problem for an industry facing scrutiny over its CO2 emissions. Airports also play a crucial role in the global flow of goods and services, so are a key factor in supply chain risk. Due to the interconnected nature of air travel, delays at one airport can propagate across the network, making them difficult to anticipate and manage. Given the projected exponential rise in air transport, there is surprisingly little research into how weather-related disruptions will impact this sector in the future.

The key weather phenomena of interest for airport operations are strong winds (particularly crosswinds), fog, mist and low cloud ceilings, thunderstorms, heavy rainfall, and extreme heat. Also relevant are tropical cyclones, hail, snowfall, smoke (our latest Black Summer bushfire season illustrated this), dust, humidity and extreme cold. This study seeks to understand the weather events responsible for most delays at Australia’s three busiest airports (Sydney, Melbourne and Brisbane) and how this might change in the future under possible climate change scenarios.
Documented airport disruption data are not readily available; however, major airports announce disruptions on Twitter. For this study, airport Twitter feeds for the past five years were mined for information on the occurrence and causes of flight disruptions. This information was then matched against atmospheric data to determine the thresholds at which flight delays and cancellations occurred.

Historical weather and climate data were obtained from the ERA5 reanalysis (C3S 2017) at hourly resolution and validated against in-situ automatic weather station (AWS) observations. Five types of extreme weather events were evaluated: heat, wind speed, fog, rain and thunderstorms. Figure 1 shows the Convective Available Potential Energy (CAPE, an indicator of thunderstorm risk) thresholds for Sydney Airport determined from mining Twitter data. Using the ERA5 reanalysis and the thresholds from the 5-year period analysed, we were able to develop a 39-year history (1979 to 2018) of the frequency of weather-related disruptions at each of these airports. Figure 2 shows that over this period the most frequent weather-related disruptions for Brisbane and Sydney airports were due to fog (54%) and thunderstorm (25%) events, while for Melbourne they were fog (56%) and strong wind (31%) events.
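To illustrate the threshold-matching step, a minimal sketch is given below: it counts the hours in each year in which an hourly CAPE series exceeds a disruption threshold of the kind inferred from the Twitter record. The file name, column names and the 1,500 J/kg threshold are hypothetical placeholders, not values from the study.

```python
# Minimal sketch (not the study's code): annual count of hours in which hourly
# CAPE exceeds a disruption threshold inferred from Twitter-reported delays.
# The file name, column names and threshold below are hypothetical.
import pandas as pd

CAPE_THRESHOLD = 1500.0  # J/kg, hypothetical thunderstorm-disruption threshold

# Hourly ERA5 series extracted at the airport grid point, with columns
# "time" (UTC timestamps) and "cape" (J/kg).
era5 = pd.read_csv("sydney_airport_era5_hourly.csv", parse_dates=["time"])

era5["disrupted"] = era5["cape"] > CAPE_THRESHOLD
annual_hours = era5.groupby(era5["time"].dt.year)["disrupted"].sum()

print(annual_hours.tail())
```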

Figure 1. An example (thunderstorm activity at Sydney airport) of the hourly analysis of weather conditions associated with flight disruptions.
Figure 2. Breakdown of weather-related disruption hours for Brisbane, Melbourne and Sydney airports over the 1979-2018 time period.

Projections of the frequency of each type of weather event have been developed from a seven-member ensemble of global climate model (GCM) projections from the fifth Coupled Model Intercomparison Project (CMIP5). Model output for two of the four Representative Concentration Pathways (RCPs, formerly ‘emission scenarios’) was analysed: RCP 4.5 and RCP 8.5. RCP 4.5 is one of two medium stabilisation scenarios, while RCP 8.5 represents a very high baseline emission scenario in which historical carbon emission trajectories are maintained (van Vuuren et al. 2011).

Figure 3 shows the multi-model ensemble projections for an increase in the frequency of thunderstorm hours at Sydney airport for both RCPs 4.5 and 8.5. Total weather-related disruption hours at Sydney airport, represented as the sum of all weather events (fog, heat, wind, rainfall, and thunderstorms), are set to increase by ~40% under RCP 4.5 (Figure 4) and ~75% under RCP 8.5 (not shown) by 2100. Increases in weather-related disruptions are also expected at Melbourne and Brisbane airports, due primarily to an increase in thunderstorm activity and heatwave conditions.

Figure 3. Multi-model ensemble projections for thunderstorm hours at Sydney Airport from 2006 to 2100. Black line represents historical thunderstorm activity determined from the ERA5 reanalysis. The blue and red lines represent RCP 4.5 and RCP 8.5 projections respectively. Shading indicates two standard deviations from the 7-member ensemble mean. Much of the uncertainty around changes in the mean state is associated with natural, year-to-year internal climate system variability.
Figure 4. Projections of total weather-related disruption hours at Sydney airport from a 7-member ensemble of GCM simulations under a medium emission (RCP 4.5) scenario. For clarity only the ensemble mean is shown for each type of weather event.
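The ensemble statistics behind Figures 3 and 4 reduce to a mean and a spread across the seven members. A minimal sketch follows; the synthetic ensemble below is a placeholder for the CMIP5-derived disruption-hour series, not actual model output.

```python
# Minimal sketch: ensemble mean and a two-standard-deviation band from a
# seven-member set of projected annual disruption hours (2006-2100).
# The synthetic ensemble is a placeholder, not CMIP5 output.
import numpy as np

years = np.arange(2006, 2101)
rng = np.random.default_rng(0)

trend = np.linspace(100.0, 140.0, years.size)            # ~40% rise by 2100
ensemble = trend + rng.normal(0.0, 15.0, (7, years.size))

mean = ensemble.mean(axis=0)             # line plotted for each scenario
half_band = 2.0 * ensemble.std(axis=0)   # half-width of the shaded band

print(f"2100: {mean[-1]:.0f} +/- {half_band[-1]:.0f} disruption hours")
```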

Allen et al. (2014) used an environments-based approach to suggest an increase in thunderstorm-related delays of 14-30% for Sydney, Brisbane, and Melbourne. However, a wide range of uncertainty exists as very few studies have investigated potential changes. Increasing frequency and intensity of hot days are among the most robust projections from GCM simulations. Recent research indicates summertime daily maximum temperatures in Sydney will regularly exceed 50°C by the end of this century, even under low emission scenarios (Lewis et al. 2019). Extreme heat days have historically not been a major issue for Australian airports, with the hottest summer days typically in the low- to mid-40°C range. However, once maximum temperatures move into the high-40°C range, major issues arise for both ground crews and aircraft. Beyond ~48°C, tarmac begins to soften, and beyond ~50°C many aircraft cannot take off due to the lower density of hot air, especially when coupled with low humidity.
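The density effect can be made concrete with the ideal gas law: at fixed pressure, air density falls roughly in proportion to the increase in absolute temperature, reducing lift and engine thrust at a given speed. The sketch below assumes dry air at standard sea-level pressure; actual take-off limits also depend on aircraft type, weight and runway length.

```python
# Dry-air density from the ideal gas law, rho = p / (R_d * T), assuming
# standard sea-level pressure. Illustrative only; real take-off performance
# also depends on aircraft type, weight, runway length and humidity.
R_D = 287.05     # specific gas constant for dry air, J/(kg K)
P = 101325.0     # standard sea-level pressure, Pa

def air_density(temp_c: float) -> float:
    """Dry-air density (kg/m^3) at temperature temp_c in degrees Celsius."""
    return P / (R_D * (temp_c + 273.15))

for t in (25.0, 40.0, 50.0):
    print(f"{t:4.0f} C: {air_density(t):.3f} kg/m^3")
# Density drops by roughly 8% between 25 C and 50 C.
```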

Fog, from our analysis, has historically been one of the main causes of flight delays. Projections for fog are more uncertain than those for temperature; however, research from the NSW and ACT Regional Climate Modelling (NARCliM) Project suggests an increase in temperature inversions (when cold air becomes trapped beneath warm air) during winter months, which would likely cause an increase in fog days for Sydney (Ji et al. 2019). Projections for extreme wind and rainfall events are also more uncertain than temperature projections and show no significant changes from present. However, numerous studies have linked global warming to an increase in the intensity of rainfall events, especially those associated with thunderstorms (Allen et al. 2014), meaning these could also become more problematic in the future.

Flight delays due to bushfire smoke have not been directly considered in this research as historically smoke has not been a major issue for Australian airports. However, severe bushfire smoke haze during the 2019/20 Black Summer bushfire season caused extensive flight cancellations. Given the robust projections for an increase in bushfire weather risk, this is likely to become a significant issue in the future, and Risk Frontiers’ probabilistic bushfire and grassfire loss model ‘FireAUS’ could be used to assess changes in this risk.

The projections presented in this study are based on the CMIP5 multi-model ensemble, which was developed almost 10 years ago for the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5). A new generation of higher-resolution projections is currently being developed as part of CMIP6 and the Coordinated Regional Climate Downscaling Experiment (CORDEX), and both are becoming available in 2020. These new datasets are expected to greatly improve our ability to produce location-specific climate risk projections due to better model physics and increased spatial resolution. While the high-level message of increasing weather extremes due to increasing temperature is not expected to change, ongoing research will allow us to answer specific questions with more confidence.

Our preliminary research on airport flight disruptions indicates a robust projection for increases in weather-related flight delays at Brisbane, Sydney and Melbourne airports under both medium and very high emission scenarios. Confidence is highest for the impacts of extreme heat; other delay-causing events such as thunderstorms, fog, wind and rainfall are more uncertain. Existing research based on downscaled projections suggests an increase in wintertime fog frequency and in the intensity of extreme rainfall events.

In addition to mitigation (reducing CO2 emissions), adaptation measures will be necessary to minimise impacts. (Note that prior to COVID-19 air traffic had increased substantially in recent decades and is projected to continue increasing at a near-exponential rate, further contributing to emissions.) For example, Melbourne and Perth airports have Category IIIB-rated runways, which allow take-off and landing in fog conditions but are costly to install and maintain. Airlines, passengers and their travel insurance providers will continue to bear most of the costs associated with flight delays.

Australia’s vulnerability to weather and climate events was most recently tested during the 2019/20 catastrophic bushfires and drought. The impacts extended beyond the built environment to society more generally and the economy. Organisations need to understand their sensitivities historically and how these may change under future climate scenarios. Recommendations of the Task Force on Climate-related Financial Disclosures (TCFD) are encouraging a science-based risk evaluation and adaptation process, especially for the financial sector. Through ClimateGLOBE, Risk Frontiers is collaborating with Australia’s leading climate researchers, and building on decades of catastrophe loss modelling experience, to develop robust assessments of current and projected financial impacts of climate change that are applicable not just to airports, but to all organisations.


Risk Frontiers’ submission to the Royal Commission into National Natural Disaster Arrangements

The following is the Executive Summary from Risk Frontiers’ submission to the Royal Commission into National Natural Disaster Arrangements. Ryan Crompton was a witness at the opening day of public hearings of the Royal Commission on May 25 and his statement can be found here. Risk Frontiers may be called again in later hearings.

The 2019/20 ‘Black Summer’ bushfire season in Australia was extremely damaging. A distinguishing feature of Black Summer was the extended period of time over which fires raged throughout several states including New South Wales (NSW), Queensland and Victoria. Our submission focuses on:

  • the impacts of Black Summer expressed in terms of numbers of destroyed buildings, insured losses and fatalities;
  • how bushfire impacts compare with those arising from other natural perils such as tropical cyclones, floods, hailstorms and earthquakes;
  • recommendations for improved bushfire mitigation and resilience; and
  • the future of bushfire fighting in Australia.

In terms of bushfire building damage, Black Summer is expected to be comparable to the most damaging seasons (if not the most damaging) in Australia since 1925, after increases in dwelling numbers are taken into account. A point of difference with previous major fire events is that the Black Summer damage accumulated throughout the season (or at least the first half of the season) rather than on single days of fire. The Insurance Council of Australia’s current estimate of insured losses for Black Summer is $2.23 billion, slightly less than the losses incurred in the Ash Wednesday (February 1983) fires. Across all perils, bushfires comprise 12% of normalised Australian insured natural hazard losses over the period 1966-2017.
Post-event damage surveys undertaken by Risk Frontiers confirm the significant role that proximity to bushland can play. In the case of the NSW South Coast, approximately 38% of destroyed buildings were situated within 1 metre of surrounding bush. Risk Frontiers observed similar findings in the aftermath of the 2009 Black Saturday bushfires where an estimated 25% of destroyed buildings in Kinglake and Marysville were located physically within the bushland boundary. Land-use planning in bushfire-prone locations needs to acknowledge this risk.
Risk Frontiers is now in a position to characterise the national natural peril profile by comparing risks between perils at a given postcode or comparing the all hazard risk across different postcodes. Postcodes that face the greatest risk of financial loss to insurable assets lie in Western Australia, Queensland or NSW, with flood and tropical cyclone being the most significant perils. Bundaberg has the highest average annual loss relative to all other postcodes. Information like this could be employed to guide national mitigation investment priorities.

We posit the need to adopt an all-hazards, whole-of-community and nationwide approach to managing large-scale disasters. Consideration should be given to multiple large-scale concurrent or sequential events in future disaster planning. There is a need to encourage community participation and greater private sector engagement. A national bushfire capability development plan should guide investment in the next generation of bushfire fighting capability as we seek to move away from long-standing approaches that are resource intensive and struggle to control fires when conditions are truly catastrophic. This will be important given the trend toward more dangerous conditions in southern Australia and an earlier start to the fire season, due at least in part to anthropogenic climate change.


Risk Frontiers’ Seminar Series 2020

 

Save the dates and Registration Link

Due to the COVID-19 pandemic Risk Frontiers’ Annual Seminar Series for 2020 will be presented as a series of three one-hour webinars across three weeks.

Webinar 1. Thursday 17th September, 2:30-3:30pm
Webinar 2. Thursday 24th September, 2:30-3:30pm
Webinar 3. Thursday 1st October, 2:30-3:30pm

Risk Modelling and Management Reloaded

Natural hazards such as floods, bushfires, tropical cyclones, thunderstorms (including hail) and drought are often thought of and treated as independent events despite knowledge that this is not the case. Understanding the risk posed by these hazards and their relationship with atmospheric variability is of great importance in preparing for extreme events today and in the future under a changing climate. Risk Frontiers’ ongoing research and development is focused on incorporating this understanding into risk modelling and management, which we view as the future of the field. We look forward to sharing some of our work during our 2020 Seminar Series.

Presentation Day 1

  • Introduction to Risk Frontiers Seminar Series 2020
  • Historical analysis of Australian compound disasters – Andrew Gissing

Presentation Day 2

  • Climate conditions preceding the 2019/20 compound event season – Dr Stuart Browning
  • Black Summer learnings and Risk Frontiers’ Submission to the Royal Commission into National Natural Disaster Arrangements – Dr James O’Brien, Lucinda Coates, Andrew Gissing, Dr Ryan Crompton

Presentation Day 3

  • Introduction to Risk Frontiers’ ‘ClimateGLOBE’ physical climate risk framework
  • Incorporating climate change scenarios into catastrophe loss models – Dr Mingzhu Wang, Dr Tom Mortlock, Dr Ryan Springall, Dr Ryan Crompton

Appointment of New Managing Director

John McAneney and Ryan Crompton

It is with sadness that we announce the retirement of Prof John McAneney from the position of Managing Director of Risk Frontiers and thank him for his valuable contribution over the last 18 years. It is with pleasure that we announce that Dr Ryan Crompton has been promoted to Managing Director from 1 July 2020.

John took over the leadership of Risk Frontiers from Prof Russell Blong in 2003. Under John’s stewardship, both inside Macquarie University and more recently outside as a private company, Risk Frontiers has grown to be an internationally recognised risk management, modelling and resilience organisation. John has nurtured the professionals who have chosen to work at Risk Frontiers and promoted the culture of tremendous work ethic, intellectual curiosity and scientific rigour underpinning the company. Our new Managing Director, Dr Ryan Crompton, looks forward to doing the same.

Announcing his retirement to Risk Frontiers staff John commented:

Being in charge of Risk Frontiers has truly been the best part of my working career. It has been a privilege to have been able to work closely with so many brilliant people and help create a research capability that is second to none in our field. Few are given that privilege and I’m grateful to Russell for entrusting me with that responsibility so many moons ago. I’m also indebted to each of you for your support and I look forward to catching up again when next in Australia.

John has agreed to remain on the Board as a Non-Executive Director until at least 31 December 2020.

In rising to the role of Managing Director, Dr Ryan Crompton brings academic achievement and long commercial experience with Risk Frontiers dating back to 2003. Ryan joined Risk Frontiers while studying at Macquarie University, soon after John joined, and has since held numerous roles within the company, most recently as Acting Managing Director. He has developed strong relationships with staff, clients and associates over many years and we look forward to his leadership of the Risk Frontiers team.

We look forward to your continued support of Risk Frontiers.

 

Low Damage Seismic Design

Paul Somerville, Chief Geoscientist, Risk Frontiers

It is commonly assumed that modern building codes assure resilience, guaranteeing that recently built structures can be quickly reoccupied, or at least readily repaired, after an earthquake. However, building codes were devised to protect lives, not property, so they do little to limit the kind of damage that might make a building uninhabitable for an extended period of time or even necessitate demolition. As demonstrated in Christchurch, New Zealand following the 22 February 2011 earthquake, code-compliant buildings may suffer several years of downtime after a significant earthquake.  To address this issue, the Structural Engineering Society of New Zealand (SESOC) is preparing Low Damage Design Guidance, and the Structural Engineers Association of California (SEAOC) is in the process of developing a Functional Recovery Standard for New Buildings.

An example of an earthquake resilient building is Casa Adelante. David Mar, a structural engineer in Berkeley, California, designed this nine-storey affordable housing building in San Francisco (Figure 1), with 25 percent of the units set aside for the formerly homeless. His objective was to demonstrate that it is possible to design resilient housing that keeps functioning in a large earthquake at a cost that is no larger than that of a conventional design.  David Mar was the keynote speaker at the New Zealand Society for Earthquake Engineering webinar series on 25 June 2020 (Mar, 2020).

Figure 1. Casa Adelante building in San Francisco (left) and workers installing an earthquake energy absorbing damper within the foundation of the building. Source: David Mar.

The United States Resiliency Council (USRC), which awarded Casa Adelante a Gold Rating, has the mission “to establish and implement meaningful rating systems that describe the performance of buildings during earthquakes and other natural hazard events, to educate the general public to understand these risks and to thereby improve societal resilience.” The Casa Adelante apartment building in San Francisco is just one of 34 buildings worldwide to have received a Gold Rating award, and is the first-ever multifamily, 100 percent affordable-housing development to have been recognised.

Base isolation of the building, which uses rubber bearings or friction pendulums to isolate the building from the horizontal motion of the ground (Figure 2), is the best way of reducing damage, but it is much more expensive than the conventional fixed-base approach and so was not feasible for this project. The solution was to design a very stiff building (to limit lateral displacement and therefore damage) with a specially designed foundation.

Figure 2. Base isolation using rubber bearings (left) and friction pendulum (right).

Most of Mar’s design process focused on fine-tuning a conventional reinforced concrete building with structural shear walls.  The first storey of a building is especially vulnerable in earthquakes, because it often has openings such as entrances that interrupt the continuity of the shear wall.  An example of a building in San Francisco that exhibited this “weak first story” behaviour in an earthquake is shown in Figure 3. There is a distinct lean in the ground floor, but the upper floors are almost vertical and relatively undamaged. The advantages of a concrete shear wall over other structural systems are illustrated in Figure 4. The concrete shear wall structure (pale blue) has much lower repair cost (left) and repair time (right) than the other structural systems.

Figure 3. Building damaged in San Francisco in the 1989 Loma Prieta earthquake. Source: Raymond B. Seed.

A good design approach for concrete shear wall buildings is to allow them to rock on their foundations in an earthquake and then come back to centre, minimising the damage. In order to allow for this rocking, Mar used a damper, developed by Professor Geoff Rodgers at the University of Canterbury, Christchurch, New Zealand, that was installed in the foundation (right panel of Figure 1).

The design used a mat foundation that is strong but a little thinner than a conventional foundation, so that when the walls flex, the wall will not break because the foundation is sufficiently flexible to undergo uplift. The wall is also able to re-centre (have no permanent lateral displacement or tilt) after the shaking ends. The damper couples the foundation of the building to a pier in the ground, so the building pulls up on the damper, dissipating energy during the rocking action. The rocking motion is enabled by making the wall stronger than the foundation.

Figure 4. Repair costs (left) and repair times (right) of various structural systems (colour coded) as a function of return period of the earthquake ground motion (MCE is 1:2,500 AEP). The Special Reinforced Concrete Shear Wall building (SRCSW, pale blue) used by Mar has much better performance than the other systems [from left to right: Reinforced Concrete Special Moment Resisting Frame (RCSMRF), Steel Special Moment Resisting Frame (StlSMRF), Steel Special Concentric Braced Frame (StlSCBF), and Steel Buckling Restrained Braced Frame (StlBRBF)]. Source: FEMA (2018).

One of the potential outcomes of physically rigorous performance-based seismic design is to develop a predictive capability that is sufficiently reliable to enable the ranking and quantitative estimation of losses of the kind shown in Figure 4. The development of low damage seismic design marks an important advance in the use of performance-based seismic design to accomplish performance objectives that extend beyond life safety to consider the economic costs of damage and downtime. This has the potential to reduce not only earthquake losses but also the uncertainty, and therefore the costs, of insurance as well as the life cycle costs of construction. It can enable informed decision-making by building owners, regulators and insurers about impacts beyond life safety, incentivising the development of resilience in individual buildings and communities.

References

FEMA (2018). Seismic Performance Assessment of Buildings Volume 5 – Expected Seismic Performance of Code-Conforming Buildings FEMA P-58-5 / December 2018.

Mar, David (2020). Low Damage Seismic Design.  New Zealand Society for Earthquake Engineering Webinar series 5, 25 June 2020.

Covid-19: The Imperial College modelling

John McAneney from Post-Lockdown NZ

The March 16 Imperial College report[1] generated a lot of controversy. The study explores the effectiveness of various non-pharmaceutical interventions (NPIs) in limiting the spread of COVID-19 and moderating its impact on the general population and the healthcare system. It’s claimed that the modelling was, in part, responsible for influencing the UK government and other jurisdictions to shift from policies originally aimed at achieving a level of herd immunity to a severe government-mandated lockdown of the economy and enforced social distancing.

Conceptually, the Ferguson et al. SIR-type model is simple: the population is apportioned among subgroups labelled Susceptible, Infected, Recovered or Deceased, with the latter two immune to reinfection. Transmission events occur through contacts made between susceptible and infectious individuals in the household, workplace, school or randomly in the community. The model attempts to estimate the number of deaths, the time course of these and the demand for hospital Intensive Care Units.
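For readers who want to see the mechanics, a minimal compartmental sketch is given below. This is not the Ferguson et al. code (their model simulates individuals in households, schools and workplaces); it simply integrates the classic SIR equations with an assumed constant R0 of 2.4 and an illustrative five-day infectious period.

```python
# Minimal SIR sketch (not the Imperial College model): constant R0 and no
# behavioural change. Parameter values are illustrative assumptions.
N = 66_000_000            # population, roughly Great Britain
R0 = 2.4                  # basic reproduction number, held constant
gamma = 1.0 / 5.0         # recovery rate (assumed 5-day infectious period)
beta = R0 * gamma         # transmission rate

S, I, R = N - 100.0, 100.0, 0.0   # seed the epidemic with 100 infections
dt = 0.1                          # time step in days
for _ in range(int(730 / dt)):    # run for two years
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"Final attack rate: {R / N:.0%}")
# A homogeneous SIR model gives ~88%; the more detailed Imperial College
# model reports ~81% for its unmitigated scenario.
```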

In the absence of any mitigation efforts, by individuals or as mandated by government, the model projects large numbers of deaths, 510,000 in Great Britain and 2.2 million in the US. It’s hard to imagine any government ignoring these, particularly as they come with the imprimatur of Imperial College, London.

As a result of this study most of us are now very aware of the importance of the basic reproductive number (R0).  R0 is a measure of the average number of people infected by each already infected person. R0 is thought to have a mean value of between 2 and 3, but, importantly, is not a constant and will vary over the course of the epidemic, a point to which we will return shortly.

In reviewing elements of the Imperial College modelling and its purported political influence, we must accept that for all its faults it was undertaken during a public health emergency, at a time when there was still a lot to be learned about a new disease. Given this climate and the urgent need for decisions, the report was not peer-reviewed. By necessity it makes a large number of assumptions, not just about basic epidemiological variables fitted to limited data, but also about the supposed degree of public compliance with the various NPIs aimed at reducing transmission of the virus, the main point of the study.

Some have questioned the model’s fitness for purpose. We cannot comment on the coherence of its coding and so, for our part, we assume a priori that the model faithfully reflects its conceptual framework and underpinning assumptions. Our concerns relate more to its usefulness as a guide to public decision-making.

For its worst-case scenario, Ferguson et al. hold R0 constant through time at 2.4, with each infection leading to another 2.4 infections, until the virus runs out of people to infect.

What this means is that, regardless of the carnage going on around them, no one makes any attempt at self-preservation: minimising contact with infected people, washing hands, avoiding large gatherings, or working from home. Infections and deaths accumulate to the point where some 81% of the population is infected. Apply an Infection Fatality Rate of 0.9% and you get the extraordinary numbers mentioned above.
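The headline figure follows from simple arithmetic once the attack rate and infection fatality rate are fixed; a rough reconstruction, assuming a Great Britain population of about 66 million, is shown below.

```python
# Rough reconstruction of the unmitigated-scenario death toll:
# deaths = population x attack rate x infection fatality rate.
population_gb = 66.0e6   # approximate Great Britain population (assumption)
attack_rate = 0.81       # proportion ultimately infected in the base case
ifr = 0.009              # infection fatality rate of 0.9%

print(f"{population_gb * attack_rate * ifr:,.0f}")
# ~481,000, the same order as the reported 510,000 (which uses age-specific rates).
```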

Ferguson and his colleagues acknowledge that this scenario is “unlikely” but nonetheless use it as their base case against which all other government-mandated NPIs are evaluated. Alan Reynolds at the Cato Institute nicely explains why R0 is not a constant.

“Suppose an infected man walks into a small elevator with three other people and begins coughing. The other three get infected from droplets in the air or from virus on objects (such as elevator buttons) they touch before touching their faces. In this case, we observe an R0 of 3.0. But if the coughing man is wearing a mask then perhaps one person does not become infected by inhaling the virus, so the R0 falls to 2.0. If the other two quickly use an alcohol-based hand sanitizer before touching their face, or wash their hands, then nobody becomes infected and the R0 falls to zero.”[2]

In the 1918 influenza pandemic, Sydney was the most heavily affected Australian city and the virus was estimated to have infected 36-37% of its population[3]. According to Reynolds, the same virus reached 28% of the entire US population. Thus, a figure of 81% for the coronavirus does seem a bit of a stretch. In short, the key assumption of a constant R0 is that people are stupid. People are certainly not always rational, but stupid? Everyone, and at the same time?

It would have been far better, in my view, if the base case had assumed a most likely scenario in which people were assumed to undertake plausible degrees of self-preservation, regardless of government controls. It’s always dangerous in decision-analysis to adopt pessimistic (or optimistic) choices at every step of the way. It can only lead to bias.

The model has other curious features. While it claims to be a stochastic model, it seems more deterministic, with only a few stochastic elements. Most key variables and assumptions are hard-wired, so it is difficult to understand which variables are driving the numbers and where the uncertainties lie. Sensitivity analyses, principally around R0, are basic.

The model also imagines a vaccine becoming available in 12 to 18 months but there is no exploration of the best policy option should no vaccine arrive. This issue is of particular importance for New Zealand as it joins a handful of other countries in having successfully eliminated the virus before any significant herd immunity was achieved. The NZ government’s decision-making was presumably informed by similar models to that employed by the Imperial College team.

The question now facing NZ, with tourism as its most significant export earner, is how to re-engage with the global economy. And how to avoid a second wave of the epidemic like that which accompanied the 1918-19 influenza pandemic.

In the words of its Prime Minister, NZ chose to go “hard and early” and with the latitude afforded by its isolation did so before any significant community transmission developed. It is difficult to fault the government’s comportment on this issue and the daily press conferences became compulsive viewing for much of the country. However, now having squashed the virus, what is the long-term plan? If faced with a second wave, one hopes that the answer will not be another total lockdown with its network of unintended consequences.

One positive thing is to accept that R0 is not a constant. The distribution of R0 is heavy-tailed with a few extreme cases (super spreaders) responsible for much of the viral seeding. In one of the country’s largest clusters, one person infected ~90 people at a wedding.
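The heavy tail can be made concrete with the standard gamma-Poisson (negative binomial) description of superspreading: each case’s individual infectiousness is drawn from a gamma distribution with mean R0 and dispersion k, and the number of secondary cases is Poisson around it. The R0 of 2.5 and k of 0.1 below are illustrative assumptions, not estimates for the New Zealand clusters.

```python
# Superspreading sketch: gamma-Poisson (negative binomial) offspring
# distribution with mean R0 and dispersion k. R0 and k are assumptions.
import numpy as np

rng = np.random.default_rng(1)
R0, k, n_cases = 2.5, 0.1, 100_000

nu = rng.gamma(shape=k, scale=R0 / k, size=n_cases)  # individual infectiousness
offspring = rng.poisson(nu)                          # secondary cases per case

offspring.sort()
top20_share = offspring[int(0.8 * n_cases):].sum() / offspring.sum()
print(f"Cases infecting no one: {(offspring == 0).mean():.0%}")
print(f"Transmission from the most infectious 20% of cases: {top20_share:.0%}")
```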

In response, the government may consider reintroducing measures limiting numbers at gatherings where people are in close contact for extended periods: weddings and funerals, churches and choir practice. In short, we need to lower R0 as much as possible via the least GDP-destructive choices available.

We know the consequences now, and we know how to minimise the chances of getting infected, so let’s be clever about how we manage it next time around. As pointed out by Atlas et al. (2020)[4], we have to be careful that we are not killing more people indirectly than we save by lockdown. These lives are lost because of reduced family incomes, delayed healthcare treatments or missed diagnoses. This will preferentially affect low-income families, who are more likely to lose jobs and have higher mortality rates.

To date our response to COVID-19 has been framed as a public health problem. The voices of economists, psychologists, geographers and historians have all been missing in action, at least publicly. For all its faults, the Imperial College report caught our attention whether we liked it or not; but relying on a constant R0, as if we were a bunch of rats rather than humans, and working with that as the base case was more than silly.


[1] Ferguson et al. 2020. Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Imperial College London. https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/report-9-impact-of-npis-on-covid-19/

[2] Alan Reynolds. 2020. How one model simulated 2.2 million US deaths from COVID-19. Cato at Liberty. https://www.cato.org/blog/how-one-model-simulated-22-million-us-deaths-covid-19

[3] Kevin McCracken and Peter Curson. 2019. A century after the Spanish flu, preparing for the next pandemic. Sydney Morning Herald. https://www.smh.com.au/national/a-century-after-the-spanish-flu-preparing-for-the-next-pandemic-20190130-p50uhm.html

[4] Scott W. Atlas, John R. Birge, Ralph L. Keeney and Alexander Lipton. 2020. The COVID-19 shutdown will cost Americans millions of years of life. The Hill. https://thehill.com/opinion/healthcare/499394-the-covid-19-shutdown-will-cost-americans-millions-of-years-of-life

 

Will COVID-19 affect ECL forecasts on the 46th anniversary of the Sygna storm?

By Stuart Browning

Australia’s Eastern Seaboard is set to be lashed by the first real East Coast Low (ECL) of the cold season over the next couple of days, beginning on 22 May 2020 (Figure 1). Unlike the 5-10 February 2020 ECL, which originated in the tropics and impacted Southeast Queensland (Mortlock and Somerville 2020), this one has its origin in the cold Southern Ocean and will mostly impact Sydney. The cold pool of upper-atmospheric air that is expected to drive intensification of this ECL has already dumped snow on the Alps as it passed over Southeast Australia.

The early stage development of this storm is remarkably similar to the Sygna storm: one of the most powerful East Coast Lows on record, and one of the worst storms to impact Sydney and Newcastle. Exactly 46 years ago, on the 21st of May 1974, a precursor to the Sygna storm was identified as a pool of very cold air over Adelaide (Bridgman 1985). After dropping heavy snow on the Alps it moved into the Tasman Sea and intensified into a powerful ECL, where it not only wrecked the Norwegian bulk carrier the Sygna but also caused extensive damage to coastal infrastructure, including the destruction of Manly’s famous harbour pool.

While this weekend’s storm is forecast to produce typical ECL conditions of strong winds, heavy rainfall and dangerous surf, it is not forecast to reach the magnitude of truly destructive storms such as the Sygna, or the more recent Pasha Bulker storm of 2007. However, ECLs have proven notoriously difficult to predict. One of the key drivers of ECL intensification is a cold pool of air in the upper atmosphere, hence the alpine snow that often precedes intense storms. The behaviour of these cold pools of air presents a challenge for numerical forecast models under usual circumstances, but the COVID-19 pandemic has made their job even more difficult.

COVID-19 Grounding of Flights Impacting Global Weather Data Collection

Weather forecast models rely on a vast network of observations to describe the current state of the atmosphere. According to the European Centre for Medium-Range Weather Forecasts (ECMWF), aircraft-based observations are second only to satellite data in their impact on forecasts. The number of aircraft observations has plummeted since the COVID-19 pandemic effectively grounded most of the world’s commercial airline fleet (Figure 2). Prior to COVID-19, Sydney to Melbourne was one of the world’s busiest flight routes, and weather observations from those flights provided valuable information for developing weather forecasts, especially for the simulation of complex weather systems like ECLs. An ECMWF study in 2019 showed that excluding half of the regular number of aircraft observations had a significant impact on forecasts of upper-atmospheric winds and temperature, especially in the 24 hours ahead.

Whether or not a lack of aircraft observations will affect forecasts for tomorrow’s ECL remains to be seen. While this event is unlikely to reach the magnitude of its historical counterpart, the May 1974 Sygna storm, it will provide a timely reminder that ECLs are a regular part of Tasman Sea weather and climate; and if you’re on Australia’s eastern seaboard, get ready for the first large maritime storm of the winter.

Figure 1. BOM numerical forecast for a Tasman Sea ECL on Friday the 22nd of May.
Figure 2. Number of aircraft reports over Europe received and used at ECMWF per day (https://www.ecmwf.int/en/about/media-centre/news/2020/drop-aircraft-observations-could-have-impact-weather-forecasts).

References

Bridgman, H. A.: The Sygna storm at Newcastle – 12 years later, Meteorology Australia, VBP 4574, 10–16, 1985.

ECMWF 2020 Drop in aircraft observations could have impact on weather forecasts. https://www.ecmwf.int/en/about/media-centre/news/2020/drop-aircraft-observations-could-have-impact-weather-forecasts

Mortlock and Somerville 2020 February 2020 East Coast Low: Sydney Impacts. https://riskfrontiers.com/february-2020-east-coast-low-sydney-impacts/

The 14 May 2020 Burra Earthquake Sequence and its Relation to Flinders Ranges Faults

Paul Somerville, Principal Geoscientist, Risk Frontiers

Three earthquakes occurred about 200km north of Adelaide between May 10 and May 14, 2020, as shown on the left side of Figure 1. The first event (yellow), local magnitude ML 2.6, occurred near Spalding on May 10 at 22:53 between the other two events. The second event (orange), ML 2.4, occurred to the northwest of the first event, northeast of Laura on 13 May at 19:18. The third event (red), ML 4.3, occurred to the southeast of the first event at Burra on 14 May at 15:23.

All three earthquakes are estimated by Geoscience Australia (GA) to have occurred at depths of 10 km, consistent with the depth of 7 km +/-3 km for the Burra event estimated by the United States Geological Survey (USGS). The USGS estimated a body wave magnitude mb of 4.3 for the Burra earthquake from worldwide recordings. Neither GA nor the USGS have estimated its moment magnitude Mw.

The Burra event is the largest earthquake to have occurred near Adelaide in the past decade. People felt shaking in Adelaide office and apartment buildings, as well as in the Adelaide Hills, the Yorke Peninsula and southern Barossa, but it is not known to have caused any damage.  Maps of estimated peak acceleration and Modified Mercalli Intensity are shown in Figures 3 and 4 respectively.

The three events span a distance of about 85 km, and they presumably occurred on a segment of the western range front of the Flinders Ranges. One segment of the range front, formed by the Wilkatana fault (Quigley et al., 2006), is shown on the right side of Figure 1. The occurrence of the three events close in time suggests that they are related to a large-scale disturbance in the stress field on the range front faults, because the individual dimensions of the fault ruptures (about 1 km for the Burra earthquake and 200 m for the two smaller events) are much less than their overall separation of 85 km, so they are unlikely to have influenced each other.

There is no indication that a larger earthquake is about to occur, but if a 100 km length of the western range front of the Flinders Ranges were to rupture, it would have a magnitude of about Mw 7.3. Repeated large earthquakes on both sides of the range fronts have raised the Flinders Ranges and Mt Lofty Ranges by several hundred metres over the past several million years (Sandiford, 2003; Figures 2 and 5).
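The quoted magnitude is consistent with standard empirical scaling between rupture length and moment magnitude; as a quick check, the widely used Wells and Coppersmith (1994) all-slip-type relation, Mw ≈ 5.08 + 1.16 log10(L), gives about 7.4 for a 100 km surface rupture. This relation is cited here for illustration and is not part of the article itself.

```python
# Quick check using the Wells and Coppersmith (1994) all-slip-type relation
# between surface rupture length (km) and moment magnitude (illustrative;
# not a relation cited in the article itself).
import math

def mw_from_rupture_length(length_km: float) -> float:
    return 5.08 + 1.16 * math.log10(length_km)

print(f"Mw for a 100 km rupture: {mw_from_rupture_length(100.0):.1f}")  # ~7.4
```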

Until the occurrence of the 1989 Newcastle earthquake, the 28 February 1954 Adelaide earthquake (left side of Figure 5) was Australia’s most damaging earthquake. Its estimated magnitude varied between 5.4 and 5.6 until the release of the 2018 National Seismic Hazard Assessment (NSHA18) by Geoscience Australia (Allen et al., 2019). As part of that assessment, the local magnitudes ML in the Australian earthquake catalogue were revised and converted to moment magnitude Mw (Allen et al., 2018). On average across Australia, this resulted in a reduction of 0.3 magnitude units, but the magnitude of the 1954 Adelaide earthquake was reduced much more, to a moment magnitude Mw of 4.79.

The 1954 Adelaide earthquake is thought to have occurred on the Eden-Burnside fault that lies just east of Adelaide. As shown on the right side of Figure 5, the Eden-Burnside fault is one of several faults on the western flank of the Mt Lofty Ranges that are uplifting the ranges. No lives were lost in the 1954 Adelaide earthquake and there were only three recorded injuries. Many houses were cracked, and heavy pieces of masonry fell from parapets and tall buildings in the city. One of Adelaide’s earliest buildings, the Victoria Hotel, partially collapsed. Other major buildings that were severely damaged included St Francis Xavier Cathedral, the Adelaide Post Office clock tower and a newly completed hospital in Blackwood, which sustained major damage to its wards and offices.

Risk Frontiers (2016) estimated the impact of a magnitude 5.6 scenario earthquake on the Eden-Burnside fault, based on the 1954 Adelaide earthquake, and found the scenario’s losses to be much larger than the adjusted historical losses for the 1954 earthquake. With the revision of the magnitude of the 1954 Adelaide earthquake from 5.6 to 4.79, we now understand the cause of the large discrepancy in losses.

Figure 1. Left: Locations of the earthquake sequence, from top: Laura (orange), Spalding (yellow) and Burra (red). Source: Geoscience Australia, 2020. Top Right: Segments of the Wilkatana fault (dashed yellow lines). Source: Quigley et al., 2006. The Laura event is near the southern end of the Wilkatana fault, and the Spalding and Burra events are off the southern end of the Wilkatana fault on an adjacent segment of the range front fault system (not mapped). Bottom Right: My relative Jonathan Teasdale looking down a fault that dips down to the east (right) at about 45 degrees (black line) in the Flinders Ranges, raising the mountains on the right (east) side. The two sides of the fault are converging towards each other due to east-west horizontal compression, with the west side moving east and down, and the east side moving west and up.
Figure 2. Left: Topographic relief map of the Flinders and Mount Lofty Ranges. Source: Sandiford, 2003. Right: Association of historical seismicity (dots) with topography and faults (black lines) of the Flinders and Mount Lofty Ranges. Source: Celerier et al., 2005.
Figure 3. Contours of estimated peak acceleration (in percent g) from the Burra earthquake; the yellow contour represents 10%g. Source: Geoscience Australia, 2020.

 

Figure 4. Estimated MMI intensity from the Burra earthquake; the epicentral intensity is MMI V. Source: Geoscience Australia, 2020.
Figure 5. Left: Historical seismicity of the Adelaide region showing the location of the 1954 Adelaide earthquake. Right: Active faults of the Mt Lofty Ranges including the Eden-Burnside fault to the east of Adelaide. Source: Sandiford (2003).

References

Allen, T. I., Leonard, M., Ghasemi, H, Gibson, G. 2018. The 2018 National Seismic Hazard Assessment for Australia – earthquake epicentre catalogue. Record 2018/30. Geoscience Australia, Canberra. http://dx.doi.org/10.11636/Record.2018.030.

Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi, 2019. The 2018 National Seismic Hazard Assessment: Model overview. Record 2018/27. Geoscience Australia, Canberra. http://dx.doi.org/10.11636/Record.2018.027

Celerier, Julien, Mike Sandiford, David Lundbek Hansen, and Mark Quigley (2005).  Modes of active intraplate deformation, Flinders Ranges, Australia. Tectonics, Vol. 24, TC6006, doi:10.1029/2004TC001679, 2005.

Geoscience Australia (2020). https://earthquakes.ga.gov.au/event/ga2020jgwjhk

Quigley M. C., Cupper M. L. & Sandiford M. 2006. Quaternary faults of southern Australia: palaeoseismicity, slip rates and origin. Australian Journal of Earth Sciences 53, 285-301.

Risk Frontiers (2016). What if a large earthquake hit Adelaide? https://www.bnhcrc.com.au/news/2016/what-if-large-earthquake-hit-adelaide

Sandiford M. 2003. Neotectonics of southeastern Australia: linking the Quaternary faulting record with seismicity and in situ stress. In: Hillis R. R. & Muller R. D. eds. Evolution and Dynamics of the Australian Plate, pp. 101 – 113. Geological Society of Australia, Special Publication 22 and Geological Society of America Special Paper 372.

 

 

Ranking of Potential Causes of Human Extinction

Paul Somerville, Risk Frontiers

We are good at learning from recent experience; the availability heuristic is the tendency to estimate the likelihood of an event based on our ability to recall examples. However, we are much less skilled at anticipating potential catastrophes that have no precedent in living memory. Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it. This was the problem with COVID-19: many informed scientists (e.g. Gates, 2015) predicted that a global pandemic was almost certain to break out at some point in the near future, but very few governments did anything about it.

We are all familiar with the annual Global Risks Reports published by the World Economic Forum. Looking at their ranking of the likelihood and severity of risks (see Figure I, page 1 of the 2020 report), we see that the rankings over the past three years have consistently attributed the highest likelihood to Extreme Weather events and the highest impact to Weapons of Mass Destruction. However, in 2020, Climate Action Failure displaced Weapons of Mass Destruction as the top impact risk. Further, the rankings have changed markedly over the past 22 years; while human activity may have had an inordinately large impact on objective risk levels (such as that due to Weapons of Mass Destruction) in the last three years, there is probably a large component of subjectivity and availability heuristic in the rankings, reflecting changing risk perceptions.

The work of Toby Ord and colleagues described below stands in stark contrast with these risk assessments.  First, it addresses much more dire events that could lead to human extinction.  Second, it attempts to use objective methods to assess the risks to avoid problems arising from risk perception. This work results in some surprising and thought-provoking conclusions, including that most human extinction risk comes from anthropogenic sources other than nuclear war or climate change.

Australian-born Toby Ord is a moral philosopher at the Future of Humanity Institute at Oxford University who has advised organisations such as the World Health Organisation, the World Bank and the World Economic Forum. In The Precipice, he addresses the fundamental threats to humanity. He begins by stating that we live at a critical time for humanity’s future and concludes that in the last century we faced a one-in-a-hundred risk of human extinction, but that we now face a one-in-six risk this century.

In previous work, Snyder-Beattie et al. (2019) estimated an upper bound for the background rate of human extinction due to natural causes. Beckstead et al. (2014) addressed unprecedented technological risks of extreme catastrophes, including synthetic biology, geoengineering (employed to avert climate change), distributed manufacturing (of weapons), and Artificial General Intelligence (AGI); see also Hawking (2010). In what follows, the conclusions of these studies are summarised and the various potential causes of human extinction ranked (Table 1).

Natural risks, including asteroids and comets, supervolcanic eruptions and stellar explosions are estimated to have relatively low risks, which, taken together, contribute a one-in-a-million chance of extinction per century.

Turning to anthropogenic risks, the most obvious risk to human survival would seem to be that of nuclear war, and we have come near it, mainly by accident, on several occasions. However, Ord doubts that even nuclear winter would lead to total human extinction or the global unrecoverable collapse of civilisation. Similarly, Ord considers that while climate change has the capacity to be a global calamity of unprecedented scale, it similarly would not necessarily lead to human extinction. He also considers that environmental damage does not show a direct mechanism for existential risk. Nevertheless, he concludes that each of these anthropogenic risks has a higher probability than that of all natural risks put together (one-in-a-million per century).

Future risks that Ord considers include pandemics, “un­aligned artificial intelligence” (superintelligent AI systems with goals that are not aligned with human ethics), ­“dystopian scenarios” (“a world with civilisation intact, but locked into a terrible form, with little or no value”), nanotechnology, and extraterrestrial life.

Ord considers the risk represented by pandemics to be mostly anthropogenic, not natural, and the risk from engineered pandemics is estimated to be one-in-30 per century, constituting the second highest ranked risk. He does not consider COVID-19 to be a plausible existential threat.

Ord considers that the highest risk comes from unaligned artificial intelligence. Substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. The human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. However, if AI surpasses humanity in general intelligence, it becomes “superintelligent” and could become powerful and difficult to control. The risk of this is estimated to be one-in-10 per century. These risks combine for a one-in-six chance of extinction per century.

The methodology behind Ord’s estimates is described in detail in the book and in the answers to questions he was asked in the 80,000 Hours podcast (2020). For example, for the case of AGI, Ord states that the typical AI expert’s view of the chance that we develop smarter than human AGI this century is about 50%.  Conditional on that, he states that experts working on trying to make sure that AGI would be aligned with our values estimate there is only an 80% chance of surviving this transition while still retaining control of our destiny. This yields a 10% chance of not surviving in the next hundred years.
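The one-in-ten figure is simply the product of the two expert judgements quoted above.

```python
# Ord's one-in-ten AGI risk as the product of the two quoted judgements.
p_agi_this_century = 0.5        # chance of smarter-than-human AGI this century
p_losing_control = 1.0 - 0.8    # chance of not surviving the transition, given AGI
print(p_agi_this_century * p_losing_control)  # 0.1, i.e. one-in-ten per century
```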

In the rankings in Table 1, all considered anthropogenic risks (shown in roman type) exceed all natural risks (shown in italics).

Table 1. Ranking of Risks of Human Extinction

References

80,000 Hours (2020). https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#robs-intro-000000

Beckstead, Nick, Nick Bostrom, Neil Bowerman, Owen Cotton-Barratt, William MacAskill, Seán Ó hÉigeartaigh, and Toby Ord (2014). Unprecedented Technological Risks. https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf.

Gates, Bill. (2015).  The next outbreak? We’re not ready. https://www.ted.com/talks/bill_gates_the_next_outbreak_we_re_not_ready/transcript?language=en

Hawking S. (2010), Abandon Earth or Face Extinction, Bigthink.com, 6 August 2010.

Snyder-Beattie, Andrew E., Toby Ord and Michael B. Bonsall (2019). An upper bound for the background rate of human extinction. Nature Reports, https://doi.org/10.1038/s41598-019-47540-7.

Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury.

Rougier, J., Sparks, R. S. J., Cashman, K. V. & Brown, S. K. The global magnitude–frequency relationship for large explosive volcanic eruptions. Earth Planet. Sci. Lett. 482, 621–629 (2018).

World Economic Forum (2020). The Global Risks Report 2020. http://www3.weforum.org/docs/WEF_Global_Risk_Report_2020.pdf

 

No, senator, science can’t do away with models

Foster Langbein, Chief Technology Officer, Risk Frontiers

The following article was written in response to COVID-19 pandemic modelling but has a particular resonance with why we build CAT models and how and why they change. CAT models explore some interesting territory, integrating as they do a myriad of sources: models of key ‘hard science’ physical processes, historical data, assumptions about geographic distribution, engineering assumptions and interpretations of building codes, through to models of financial conditions from policy documents. Integrating such disparate sources becomes severely intractable mathematically when more than a few different distributions and their associated uncertainties are involved. The solution, Monte Carlo simulation, harks back to the 1940s and was critical in the simulations required for the Manhattan Project, in which, incidentally, a young Richard Feynman (quoted in the article) was involved. This powerful technique of random sampling a great number of times only became practical with the advent of computers, so computer models of CAT events are here to stay. But the essential point remains: they are just tools to help us understand the consequences of all the assumptions we input. When better science emerges or new data are incorporated and these assumptions are updated, changes are expected! Navigating those assumptions and helping understand the consequences and inevitable changes are part and parcel of Risk Frontiers’ modelling work. In what follows, Scott K. Johnson explains why U.S. Senator John Cornyn’s critique of modelling is misguided.
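As a flavour of why Monte Carlo sampling is used, the sketch below combines two loss components whose joint distribution has no convenient closed form and estimates an aggregate 1-in-200 loss by sampling. The distributions and parameters are arbitrary illustrations, not any Risk Frontiers model.

```python
# Minimal Monte Carlo sketch: estimate an aggregate loss quantile by sampling
# two components whose combined distribution is awkward to treat analytically.
# All distributions and parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Component 1: Poisson event frequency with lognormal severities.
n_events = rng.poisson(0.8, size=n_sims)
event_losses = np.array([rng.lognormal(2.0, 1.0, k).sum() for k in n_events])

# Component 2: independent attritional losses, gamma distributed.
attritional = rng.gamma(shape=3.0, scale=5.0, size=n_sims)

aggregate = event_losses + attritional
print(f"1-in-200 aggregate loss: {np.quantile(aggregate, 1 - 1 / 200):.1f}")
```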


On Friday, Texas Senator John Cornyn took to Twitter with some advice for scientists: models aren’t part of the scientific method. Scientists have responded with a mix of bafflement and exasperation. And Cornyn’s misconception is common enough, and important enough, that it’s worth exploring.

@JohnCornyn:  After #COVIDー19 crisis passes, could we have a good faith discussion about the uses and abuses of “modeling” to predict the future?  Everything from public health, to economic to climate predictions.  It isn’t the scientific method, folks.

Cornyn’s beef with models echoes a talking point often brought up by people who want to reject inconvenient conclusions of systems sciences. In reality, “you can make a model say anything you want” is about as potent an argument as “all swans are white.” The latter is either a disingenuous argument, or you have an embarrassingly limited familiarity with swans.

Models aren’t perfect. They can generate inaccurate predictions. They can generate highly uncertain predictions when the science is uncertain. And some models can be genuinely bad, producing useless and poorly supported predictions. But the idea that models aren’t central to science is deeply and profoundly wrong. It’s true that the criticism is usually centered on mathematical simulations, but these are just one type of model on a spectrum—and there is no science without models.

What’s a model to do?

There’s something fundamental to scientific thinking – and indeed most of the things we navigate in daily life: the conceptual model. This is the image that exists in your head of how a thing works. Whether studying a bacterium or microwaving a burrito, you refer to your conceptual model to get what you’re looking for. Conceptual models can be extremely simplistic (turn key, engine starts) or extremely detailed (working knowledge of every component in your car’s ignition system), but they’re useful either way.

As science is a knowledge-seeking endeavor, it revolves around building ever-better conceptual models. While the interplay between model and data can take many forms, most of us learn a sort of laboratory-focused scientific method that consists of hypothesis, experiment, data, and revised hypothesis.

In a now-famous lecture, quantum physicist Richard Feynman similarly described to his students the process of discovering a new law of physics: “First, we guess it. Then we compute the consequences of the guess to see what… it would imply. And then we compare those computation results to nature… If it disagrees with experiment, it’s wrong. In that simple statement is the key to science.”

In order to “compute the consequences of the guess,” one needs a model. For some phenomena, a good conceptual model will suffice. For example, one of the bedrock principles taught to young geologists is T.C. Chamberlin’s “method of multiple working hypotheses.” He advised all geologists in the field to keep more than one hypothesis – built out into full conceptual models – in mind when walking around making observations.

That way, instead of simply tallying up all the observations that are consistent with your favored hypothesis, the data can more objectively highlight the one that is closer to reality. The more detailed your conceptual model, the easier it is for an observation to show that it is incorrect. If you know where you expect a certain rock layer to appear and it’s not there, there’s a problem with your hypothesis.

There is math involved

But at some point, the system being studied becomes too complex for a human to “compute the consequences” in their own head. Enter the mathematical model. This can be as simple as a single equation solved in a spreadsheet or as complex as a multi-layered global simulation requiring supercomputer time to run.

And this is where the modeler’s adage, coined by George E.P. Box, comes in: “All models are wrong, but some are useful.” Any mathematical model is necessarily a simplification of reality and is thus unlikely to be complete and perfect in every possible way. But perfection is not its job. Its job is to be more useful than no model.

Consider an example from a science that generates few partisan arguments: hydrogeology. Imagine that a leak has been discovered in a storage tank below a gas station. The water table is close enough to the surface here that gasoline has contaminated the groundwater. That contamination needs to be mapped out to see how far it has traveled and (ideally) to facilitate a cleanup.

If money and effort were no object, you could drill a thousand monitoring wells in a grid to find out where it went. Obviously, no one does this. Instead, you could drill three wells close to the tank, determining the characteristics of the soil or bedrock, the direction of groundwater flow, and the concentration of contaminants near the source. That information can be plugged into a groundwater model simple enough to run on your laptop, which simulates likely flow rates, chemical reactions, microbial breakdown of the contaminants and so on, and spits out the probable location and extent of contamination. That’s simply too much math to do in your head, but we can quantify the relevant physics and chemistry and let the computer do the heavy lifting.

A truly perfect model prediction would more or less require knowing the position of every sand grain and every rock fracture beneath the station. But a simplified model can generate a helpful hypothesis that can easily be tested with just a few more monitoring wells – certainly more effective than drilling on a hunch.
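
For readers curious what such a laptop-scale model can look like, here is a minimal sketch. It assumes the classical one-dimensional Ogata–Banks advection–dispersion solution for a continuous source (leading term only) and entirely hypothetical site parameters; a real assessment would use calibrated, site-specific values and usually a 2D or 3D code.

```python
import numpy as np
from scipy.special import erfc

def plume_concentration(x, t, v=0.5, D=0.05, c0=1.0):
    """Approximate relative concentration at distance x (metres) down-gradient
    after time t (days) for a continuous contaminant source, using the leading
    term of the 1D Ogata-Banks advection-dispersion solution.
    v: seepage velocity (m/day); D: longitudinal dispersion (m^2/day).
    All parameter values here are hypothetical."""
    return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

# Rough hypothesis to test with a few monitoring wells: how far has the
# 10%-of-source contour travelled after one year?
x = np.linspace(1.0, 400.0, 400)
c = plume_concentration(x, t=365.0)
print(f"10% contour roughly {x[c > 0.1].max():.0f} m from the tank after 1 year")
```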

Don’t shoot the modeler

Of course, Senator Cornyn probably didn’t have groundwater models in mind. The tweet was prompted by work with epidemiological models projecting the effects of COVID-19 in the United States. Recent modeling incorporating the social distancing, testing, and treatment measures so far employed is projecting fewer deaths than earlier projections did. Instead of welcoming this sign of progress, some have inexplicably attacked the models, claiming these downward revisions show earlier warnings exaggerated the threat and led to excessive economic impacts.

There is a blindingly obvious fact being ignored in that argument: earlier projections showed what would happen if we didn’t adopt a strong response (as well as other scenarios), while new projections show where our current path sends us. The downward revision doesn’t mean the models were bad; it means we did something.

Often, the societal value of scientific “what if?” models is that we might want to change the “if.” If you calculate how soon your bank account will hit zero if you buy a new pair of pants every day, it might lead to a change in your overly ambitious wardrobe procurement plan. That’s why you crunched the numbers in the first place.

Yet complaints about “exaggerating models” are sadly predictable. All that fuss about a hole in the ozone layer, and it turns out it stopped growing! (Because we banned production of the pollutants responsible.) Acid rain was supposed to be some catastrophe, but I haven’t heard about it in years! (Because we required pollution controls on sulfur-emitting smokestacks.) The worst-case climate change scenario used to be over 4°C warming by 2100, and now they’re projecting closer to 3°C! (Because we’ve taken halting steps to reduce emissions.)

These complaints seem to view models as crystal balls or psychic visions of a future event. But they’re not. Models just take a scenario or hypothesis you’re interested in and “compute the consequences of the guess.” The result can be used to further the scientific understanding of how things work or to inform important decisions.

What, after all, is the alternative? Could science spurn models in favor of some other method? Imagine what would happen if NASA eyeballed Mars in a telescope, pointed the rocket, pushed the launch button, and hoped for the best. Or perhaps humanity could base its response to climate change on someone who waves their hands at the atmosphere and says, “I don’t know, 600 parts per million of carbon dioxide doesn’t sound like much.”

Obviously these aren’t alternatives that any reasonable individual should be seriously considering.

The spread of COVID-19 is an incredibly complex process and difficult to predict. It depends on some things that are well studied (like how pathogens can spread between people), some that are partly understood (like the characteristics of the SARS-CoV-2 virus and its lethality), and some that are unknowable (like the precise movements and actions of every single American). And it has to be simulated at fairly fine scale around the country if we want to understand the ability of hospitals to meet the local demand for care.

Without computer models, we’d be reduced to back-of-the-envelope spit-balling – and even that would require conceptual and mathematical models for individual variables. The reality is that big science requires big models. Those who pretend otherwise aren’t defending some “pure” scientific method. They just don’t understand science.

We can’t strip science of models any more than we can strip it of knowledge.

https://arstechnica.com/science/2020/04/no-senator-science-cant-do-away-with-models/

The 25th Solar Cycle is about to begin, with new evidence for enormous solar storms

Foster Langbein and Paul Somerville, Risk Frontiers.

Solar Cycle 25 is the upcoming 25th solar cycle since 1755, when extensive recording of solar sunspot activity began. It is expected to begin around April this year and continue past 2030 (see Figure 1). Stronger solar activity tends to occur during odd-numbered solar cycles, with a number of events occurring during cycle 23 (see table at the end of this briefing); however, solar events are more frequent near any maximum, such as the Quebec geomagnetic storm of 1989, which coincided with cycle 22.

Figure 1: The Solar sunspot cycle showing the roughly eleven-year periodicity and a recent forecast from NOAA for cycle 25.

Even moderate space weather events pose significant risks to airline communications and to the power industry through service interruptions as well as potential damage. Severe incidents are capable of damaging or destroying the very large high-voltage transformers on which our power networks depend; replacements for these custom-built components can take years to procure and install. Internationally, such damage is often covered by traditional insurance policies through the prolonged effects of power outages. In Australia, insurers are likely to be less exposed, but the impact on business would be severe: businesses would need to have negotiated the inclusion of a public utilities extension in their policies to cover failure of electricity supply, and utility companies are not liable for failure to supply electricity in the event of a natural disaster.

The most severe event in recorded history was the Carrington event of 1859, in which auroral effects were clearly visible at mid-latitudes across the globe, for example in Sydney. It is estimated that this event was at least twice as severe as the 1989 Quebec event. Extreme events such as these are a concern for reinsurers because their global scale limits the effectiveness of regional diversification.

Of particular note is the risk to satellites: approximately two thirds of the 35 satellites launched annually are covered by damage and liability insurance, up to a value of $700 million (Lloyds 2010). Between 1996 and 2005, insurers paid nearly US$2 billion to cover satellite damage, of which a significant proportion was solar-related; solar disruptions to satellites are estimated to cost on the order of US$100 million a year (Odenwald and Green, 2008).

The flow-on impacts of power cuts to other industries can be significant, with studies suggesting that brownouts and blackouts in the USA cause around $80 billion of economic losses every year (Odenwald & Green 2008). Between June 2000 and December 2001, solar storms are estimated to have increased the total cost of electricity in the US by $500 million. The capacity of even moderate events to cause significant cost is exemplified by the solar incidents, listed in the table at the end of this briefing, that occurred during recent solar maxima.

With the onset of the next odd-numbered solar maximum this year, an increased frequency of solar events is expected as the cycle progresses, especially in the moderate to severe intensity range. Moderate events have historically been readily absorbed by insurers and are unlikely to affect large portions of the planet. Solar storm impacts are most significant close to the poles, so widespread power failure is less likely in Australia. The risk is, however, non-zero: the drive to interconnect our power networks through long transmission lines increases susceptibility to Geomagnetically Induced Currents (GICs). Although space-weather alerts issued by the Bureau of Meteorology are monitored so that the network can be compartmentalised while an event is underway, the residual risk has not been studied. The implications for emergency management of widespread power outages, and the cascading effects across our increasingly interconnected networks, are similarly unstudied in an Australian context, particularly if a system is already under stress, say during a heatwave.

In addition to purely local effects, the impacts of solar storms on communication and navigation systems worldwide and in space are likely to have flow-on productivity effects for Australian businesses.

Although the incidence of moderate solar effects is expected to increase, extreme events on the Carrington scale are not well correlated with the solar cycle. Recently discovered evidence in Greenland of a huge solar storm about 2,500 years ago points to an event of similar magnitude to that of AD 774–775, which produced an observed 1.2% increase in the concentration of the carbon-14 isotope in Japanese tree rings dated to those years (Miyake et al., 2012). A surge in the beryllium isotope 10Be detected in Antarctic ice cores has also been associated with this event, suggesting it was a solar flare with global impact. Although a solar flare will not have the geomagnetic effects on our power network that events such as the Carrington or 1989 Quebec storms would, the implications for our satellite systems – imaging, GPS, communications, and so on – would be catastrophic. The new discovery gives some frequency context, suggesting a return period on the order of 1,000 years, which, although long, should not be ignored given the likely severity of such an event. An event with an average recurrence interval (ARI) of 1,000 years has about a 3% chance of occurring at least once in any 30-year period.
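
The 3% figure follows from the standard relationship between an average recurrence interval and the probability of at least one occurrence within a given window, assuming occurrences are independent from year to year. A quick check:

```python
# Probability of at least one event in an n-year window for a given
# average recurrence interval (ARI), assuming independent annual occurrence.
def prob_at_least_one(ari_years: float, window_years: int) -> float:
    annual_prob = 1.0 / ari_years
    return 1.0 - (1.0 - annual_prob) ** window_years

print(f"{prob_at_least_one(1000, 30):.1%}")  # ~3.0% over a 30-year period
```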

The following article, by Ian Sample, Science Editor of The Guardian, appeared on 12 March 2019 under the title “Radioactive particles from huge solar storm found in Greenland.”


Traces of an enormous solar storm that battered the atmosphere and showered Earth in radioactive particles more than 2,500 years ago have been discovered under the Greenland ice sheet. Scientists studying ice nearly half a kilometre beneath the surface found a band of radioactive elements unleashed by a storm that struck the planet in 660BC. It was at least 10 times more powerful than any recorded by instruments set up to detect such events in the past 70 years, and as strong as the most intense known solar storm, which hit Earth in AD775.

Raimund Muscheler, a professor of quaternary sciences at Lund University in Sweden, said: “What our research shows is that the observational record over the past 70 years does not give us a complete picture of what the sun can do.” The discovery means that the worst-case scenarios used in risk planning for serious space weather events underestimate how powerful solar storms can be, he said.

Solar storms are whipped up by intense magnetic fields on the surface of the sun. When they are pointed toward Earth they can send highly energetic streams of protons crashing into the atmosphere. The sudden rush of particles can pose a radiation risk to astronauts and airline passengers, and can damage satellites, power grids and other electrical devices.

Scientists have come to realise over the past decade that intense solar storms can leave distinct traces when they crash into the planet. When high energy particles slam into the stratosphere, they collide with atomic nuclei to create radioactive isotopes of elements such as carbon, beryllium and chlorine. These can linger in the atmosphere for a year or two, but when they reach the ground they can show up in tree rings and ice cores used to study the ancient climate.

Muscheler’s team analysed two ice cores drilled from the Greenland ice sheet and found that both contained spikes in isotopes of beryllium and chlorine that date back to about 660BC. The material appears to be the radioactive remnants of a solar storm that battered the atmosphere.

The scientists calculate that the storm sent at least 10bn protons per square centimetre into the atmosphere. “A solar proton event of such magnitude occurring in modern times could result in severe disruption of satellite-based technologies, high frequency radio communication and space-based navigation systems,” they write in Proceedings of the National Academy of Sciences.

Britain’s emergency plans for severe space weather are based on a worst-case scenario that involves a repeat of the 1859 Carrington event. This was a powerful geomagnetic storm set off by a huge eruption on the sun known as a coronal mass ejection. A 2015 Cabinet Office report anticipated only 12 hours warning of a similar storm that could lead to power outages and other disruption. The discovery of more powerful solar storms in the past 3,000 years suggests that space weather can be worse than the UK plans for. “The Carrington event is often used as a worst-case scenario, but our research shows that this probably under-estimates the risks,” said Muscheler.


REFERENCES

Brooks, M. (2009) Space storm alert: 90 seconds from catastrophe. New Scientist 2700

Burns, A.G., Killeen, T.L., Deng, W., Cairgnan, G.R., and Roble, R.G. (1995)

Dayton (1989) Solar storms halt stock market as computers crash. New Scientist 1681

Lloyd’s (2010) Insurance on the Final Frontier. Available from http://www.lloyds.com/News-and-Insight/News-and-Features/Specialist/Specialist-2010/Insurance_on_the_final_frontier

Marshall et al. (2011). A preliminary risk assessment of the Australian region power network to space weather. Space Weather, 9, S10004, doi:10.1029/2011SW000685.

Miyake, F., K. Nagaya, K. Masuda, and T. Nakamura (2012). A signature of cosmic-ray increase in AD 774–775 from tree rings in Japan. Nature 486, 240–242.

NASA (2008) A Super Solar Flare. Available from http://science.nasa.gov/science-news/science-at-nasa/2008/06may_carringtonflare/

NOAA (2003) October-November 2003 Solar Storm. Available from http://www.magazine.noaa.gov/stories/mag131b.htm

Odenwald, S.F. and Green, J.L. (2008) Bracing the Satellite Infrastructure for a Solar Superstorm. Scientific American, available from http://www.scientificamerican.com/article.cfm?id=bracing-for-a-solar-superstorm

Sample, Ian (2019). Radioactive particles from huge solar storm found in Greenland. https://www.theguardian.com/science/2019/mar/11/radioactive-particles-from-huge-solar-storm-found-in-greenland

Solar Storms (unknown date) Available from http://www.solarstorms.org/SRefStorms.html

Devil’s Staircase of Earthquake Occurrence: Implications for Seismic Hazard in Australia and New Zealand

Paul Somerville, Principal Geoscientist, Risk Frontiers

The temporal clustering of large surface faulting earthquakes observed in the western part of Australia has been elegantly explained by the Devil’s Staircase fractal model of fault behaviour. Although the only available paleoseismic observations in eastern Australia are from the Lake Edgar fault in Tasmania, it seems likely that the Devil’s Staircase also describes the occurrence of large surface faulting earthquakes in eastern Australia and, more generally, worldwide.


Paleoseismic Observations of Surface Faulting Recurrence in Australia

Clark et al. (2012, 2014) showed that large surface faulting earthquakes in Australia are clustered within relatively short time periods that are separated by longer and variable intervals of quiescence. Figure 1 shows the time sequences of large earthquakes on a set of faults in Australia over the past million years, inferred from paleoseismic studies. Most of these faults are in Western Australia, and it is remarkable that earthquakes have occurred on three of them in historical time; prior to these events, the most recent period of activity was about 10,000 years ago. Few observations of this kind are available in other stable continental regions of the world analogous to Australia.

Figure 1. Occurrence of surface faulting earthquakes on individual faults in the past million years. Source: Clark et al. (2012).

The Devil’s Staircase in Global Earthquake Catalogues

Clark et al. (2012) proposed the earthquake recurrence model shown in Figure 2, in which clusters of several earthquakes are separated by long intervals of seismic quiescence. Chen et al. (2020) have shown that this irregular earthquake recurrence can be described mathematically by the “Devil’s Staircase” (Mandelbrot, 1982; Turcotte, 1997), a fractal property of complex dynamic systems. Fractal patterns are common in nature and, being scale invariant, are observed on all scales. Fractal systems are characterised by self-organised criticality, in which large interactive systems organise themselves into a critical state where small perturbations can trigger chain reactions affecting any number of elements within the system (Winslow, 1997).

Figure 2. Schematic model of earthquake recurrence on a fault in Australia. Source: Clark et al. (2012)

Chen et al. (2020) fit probability models to the interevent-time data from a set of earthquake catalogues using the maximum likelihood method; one such catalogue is shown on the left of Figure 3. They tested five probability models (Poisson, gamma, Weibull, lognormal, and Brownian passage time [BPT]). The Poisson model assumes that, although the mean interval between events is known for a sequence, the exact occurrence time of each event is random; the interevent-time distribution of such a sequence follows an exponential distribution. The Poisson model is a simple one-parameter model commonly used in seismic hazard analysis, and is a special case of the more general gamma and Weibull distributions. Both the gamma and Weibull models fit the data for earthquakes of magnitude 6 and larger better than the Poisson model, whereas the lognormal and BPT models fit worse, as shown on the right of Figure 3.

Figure 3. Left: Cumulative number of earthquakes in the world with magnitudes 8.5 or larger since 1900; declustering indicates the removal of dependent events (aftershocks). Right: Comparison of the relative frequency histograms (rectangular columns) of the distribution of interevent times with probabilities predicted by five probability models (curves) for all earthquakes in the world with magnitude 6 or larger. Source: Chen et al. (2020).
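
By way of illustration (this is a sketch, not a reproduction of Chen et al.’s analysis), the code below fits the exponential (Poisson), gamma and Weibull interevent-time distributions by maximum likelihood and ranks them by AIC. The data here are synthetic and stand in for interevent times computed from a real, declustered catalogue.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical "bursty" interevent times (days); a real analysis would use
# interevent times derived from a declustered earthquake catalogue.
interevent = rng.weibull(0.6, size=300) * 400.0

candidates = {
    "exponential (Poisson)": stats.expon,
    "gamma": stats.gamma,
    "Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(interevent, floc=0)    # maximum-likelihood fit, loc fixed at 0
    loglik = np.sum(dist.logpdf(interevent, *params))
    k = len(params) - 1                      # free parameters (loc was fixed)
    aic = 2 * k - 2 * loglik                 # lower AIC = better fit
    print(f"{name:22s} AIC = {aic:10.1f}")
```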

The variation of the interevent times can be measured by the coefficient of variation (COV), or aperiodicity, defined as the ratio of the standard deviation of the interevent times to their mean (Salditch et al., 2019). For a sequence generated by a Poisson process, the COV is 1. To measure deviation from the Poisson model, Chen et al. (2020) use a normalised COV, the burstiness parameter B (Goh and Barabási, 2008), whose value ranges from −1 to 1. A B of −1 corresponds to a perfectly periodic sequence with a COV of 0; a B of 1 corresponds to the most bursty sequence, with infinite COV; and a B of 0 corresponds to a sequence produced by an ideal Poisson process, with a COV of 1. Thus, a sequence is “bursty” when 0 < B < 1 (Fig. 4b) and quasiperiodic (the opposite of “bursty”) when −1 < B < 0 (Fig. 4c).
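
For readers who want to compute these quantities themselves, a minimal sketch follows. It uses the Goh and Barabási (2008) definition B = (COV − 1)/(COV + 1), which reproduces the properties described above, and synthetic interevent times generated purely for illustration.

```python
import numpy as np

def burstiness(interevent_times):
    """Return the coefficient of variation (COV) of interevent times and the
    Goh-Barabasi burstiness parameter B = (COV - 1) / (COV + 1)."""
    tau = np.asarray(interevent_times, dtype=float)
    cov = tau.std() / tau.mean()
    return cov, (cov - 1.0) / (cov + 1.0)

rng = np.random.default_rng(1)
print(burstiness(rng.exponential(100.0, 5000)))   # Poisson-like: COV ~ 1, B ~ 0
print(burstiness(rng.weibull(0.5, 5000)))         # bursty: COV > 1, B > 0
print(burstiness(rng.normal(100.0, 10.0, 5000)))  # quasiperiodic: COV < 1, B < 0
```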

Figure 4. (a) A sequence of events generated by a Poisson model. (b) A bursty sequence generated by the Weibull interevent-time distribution. (c) A quasiperiodic sequence generated by the Gaussian interevent-time distribution. Source: Chen et al. (2020).

Implications of the Devil’s Staircase for Seismic Hazard Analysis in Australia and New Zealand

The Devil’s Staircase pattern of large earthquakes has important implications for earthquake hazard assessment. The mean recurrence time, a key parameter in seismic hazard analysis, can vary significantly depending on which part of the sequence the catalogue represents. This can be important in hazard assessment, because catalogues for large earthquakes are often too short to reflect their complete temporal pattern, and it is difficult to know whether the few events in a catalogue occurred within an earthquake cluster or spanned both clusters and quiescent intervals. Consequently, an event may not be “overdue” just because the time since the previous event exceeds a “mean recurrence time” based on an incomplete catalogue.

The Poisson model is a time-independent model in which each event in the sequence is independent of other events. However, Devil’s Staircase behaviour indicates that most earthquake sequences, especially when dependent events are not excluded, are burstier than a Poisson sequence and may be better fit by the gamma or Weibull distributions. The conditional probability of another large earthquake for both the gamma and Weibull models is higher than that of the Poisson model soon after a large earthquake.
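
As a concrete, hypothetical illustration of that last point, the sketch below compares the conditional probability of another event within the next year, given the time elapsed since the last one, under a Weibull model with shape less than one (bursty) and a Poisson (exponential) model with the same mean interevent time. Shortly after an event the Weibull conditional probability exceeds the Poisson value, and it falls below it once a long quiescent interval has elapsed.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as gamma_fn

mean_interevent = 100.0            # hypothetical mean interevent time (years)
shape = 0.6                        # Weibull shape < 1 => bursty behaviour
scale = mean_interevent / gamma_fn(1.0 + 1.0 / shape)   # match the mean

weibull = stats.weibull_min(shape, scale=scale)
poisson_like = stats.expon(scale=mean_interevent)

def conditional_prob(dist, elapsed, window=1.0):
    """P(event within `window` years | `elapsed` years since the last event)."""
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

for elapsed in (1.0, 10.0, 200.0):
    print(f"t = {elapsed:5.0f} yr   Weibull: {conditional_prob(weibull, elapsed):.3f}"
          f"   Poisson: {conditional_prob(poisson_like, elapsed):.3f}")
```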

This concept underlies the earthquake forecast for Central New Zealand developed by an international review panel convened by GNS Science in 2018 and published by Geonet (2018). This forecast relies in part on transfer of stress from the northeast coast of the South Island to the southeast coast of the North Island following recent earthquake activity in the region, notably the Mw 7.8 Kaikoura earthquake of 2016, which occurred off the northeast coast of the South Island. Risk Frontiers has implemented this time-dependent earthquake hazard model in our recent update of QuakeNZ. Earthquake clusters involving stress transfer are ubiquitous and have occurred recently on the Sumatra subduction zone (2004–2008) and along the North Anatolian fault in Turkey (1939–1999).

Given the pervasive occurrence of fractal phenomena in geology (Turcotte, 1997) and the identification by Chen et al. (2020) of Devil’s Staircase recurrence behaviour in a wide variety of earthquake catalogues, it is likely that this is a general feature of earthquake occurrence.

Temporal Clustering of Very Large Subduction Earthquakes

The left side of Figure 3 reflects two clusters of very large subduction earthquakes. The first occurred in the middle of the last century and included the 1952 Mw 9.0 Kamchatka earthquake, the 1960 Mw 9.5 Chile earthquake and the 1964 Mw 9.2 Alaska earthquake. The second cluster began with the Mw 9.15 Sumatra earthquake of 26 December 2004 and continued with the Mw 8.8 Chile earthquake of 27 February 2010 and the Mw 9.0 Tohoku earthquake of 11 March 2011. The usual approach to assessing the significance of this apparent clustering is to test statistically the hypothesis that the global earthquake catalogue is well explained by a Poisson process. Risk Frontiers analysed the power of such tests to detect non-Poissonian features and showed that the low frequency of large events and the brevity of our earthquake catalogues reduce the power of the statistical tests, rendering them unable to provide an unequivocal answer to this question (Dimer de Oliveira, 2012). This conclusion is consistent with the Devil’s Staircase behaviour shown in Figure 3.
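
As a rough illustration of why such tests struggle (this is not Dimer de Oliveira’s analysis, and the event count and window below are hypothetical), the sketch simulates purely Poissonian catalogues of great earthquakes and counts how often chance alone produces an apparent cluster as tight as three events within a few years. When that fraction is substantial, observed clustering in a short catalogue cannot be distinguished from randomness.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events, catalogue_years, window = 7, 120, 7.0   # hypothetical, for illustration
n_sims = 50_000

hits = 0
for _ in range(n_sims):
    # Event times under a Poisson (uniform-in-time) null hypothesis.
    times = np.sort(rng.uniform(0.0, catalogue_years, n_events))
    # Tightest span covering any three consecutive events.
    tightest = (times[2:] - times[:-2]).min()
    hits += tightest <= window

print(f"Chance a purely random catalogue shows 3 events within {window:.0f} years: "
      f"{hits / n_sims:.0%}")
```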

References

Chen, Y., M. Liu, and G. Luo (2020). Complex Temporal Patterns of Large Earthquakes: Devil’s Staircases, Bull. Seismol. Soc. Am. XX, 1–13, doi: 10.1785/0120190148

Clark, D., A. McPherson, and T. Allen (2014). Intraplate earthquakes in Australia, in Intraplate Earthquakes, Cambridge University Press, New York, New York, 49 pp.

Clark, D., A. McPherson, and R. Van Dissen (2012). Long-term behaviour of Australian stable continental region (SCR) faults, Tectonophysics 566, 1–30.

Dimer de Oliveira, F. (2012). Can we trust earthquake cluster detection tests? Risk Frontiers Newsletter Vol. 11 Issue 3.

Dimer de Oliveira, F. (2012). Can we trust earthquake cluster detection tests? Geophysical Research Letters, Vol. 39, L17305, doi:10.1029/2012GL052130.

Geonet (2018). Updated earthquake forecast for Central New Zealand. https://www.geonet.org.nz/news/5JBSbLk9qw8OU4uWeI86KG

Goh, K.-I., and A.-L. Barabási (2008). Burstiness and memory in complex systems, Europhys. Lett. 81, 48002.

Mandelbrot, B. B. (1982). The Fractal Geometry of Nature, W. H. Freeman, New York, New York.

Salditch, L., S. Stein, J. Neely, B. D. Spencer, E. M. Brooks, A. Agnon, and M. Liu (2019). Earthquake supercycles and long-term fault memory, Tectonophysics, 228289, doi: 10.1016/j.tecto.2019.228289.

Somerville, Paul (2018). Updated GNS Central New Zealand Earthquake Forecast, Risk Frontiers Briefing Note 364.

Turcotte, D. L. (1997). Fractals and Chaos in Geology and Geophysics, Cambridge University Press, New York, New York.

Winslow, N.  (1997). Introduction to Self-Organized Criticality and Earthquakes http://www2.econ.iastate.edu/classes/econ308/tesfatsion/SandpileCA.Winslow97.htm

Future of bushfire fighting in Australia

Andrew Gissing, Risk Frontiers, Neil Bibby, People & Innovation

Australia needs to be ambitious in its thinking about how future bushfires are managed and fought. The recent bushfires caused significant damage and widespread disruption, destroying some 3,093 homes (AFAC) and claiming 35 lives, as well as causing major damage to community infrastructure. We must learn from this experience.

Today’s management of bushfire risk relies largely on long-standing approaches that are resource intensive and struggle to control fires when conditions are catastrophic. This issue is compounded under a warming climate, with fire seasons becoming longer and days of significant fire danger more frequent.

An inherent problem is that bushfire detection is complex, and by the time resources can be tasked and targeted, fires have often already spread to the point where suppression is difficult. The problem is exacerbated when ignition occurs in remote areas far from emergency management resources, and made worse still by an expanding bushland-urban interface where buildings and community infrastructure are highly vulnerable and exposure continues to grow.

Innovation to discover the next generation of firefighting capability should be a priority in any government response to the Black Summer bushfires. Our institutions must think big.

To explore blue-sky thinking on future firefighting capabilities and enhanced bushfire resilience, Risk Frontiers and People & Innovation hosted a forum with experts in construction, technology, aviation, insurance, risk management, firefighting and information technology. In what follows, insights and questions arising from this forum are outlined.

New thinking is required

There are two stages in considering future capabilities. The first is planning and investment to improve capabilities in the short term, particularly before the next bushfire season; the second is research and innovation to inspire the next generation of firefighting capability. What is needed is a blueprint of how bushfires will be fought in the future. This blueprint should be focused on a vision whereby bushfires can be rapidly managed and controlled in a coordinated manner, informed by advanced predictive intelligence, and where the built environment is resilient. Key research questions to be answered in the development of such a blueprint include:

Bushfire detection and suppression

  • How can bushfires be detected more quickly?
  • How can bushfires be extinguished before they are able to spread?
  • How can the safety of firefighters be improved?

Coordination

  • How can communications enable effective coordination?
  • How can resources be tasked and tracked in a more effective manner?
  • How can situational awareness be enhanced to inform decision-making?

Community resilience

  • How can new buildings be made more resilient?
  • How can existing building stock be retrofitted for resilience?
  • How can community infrastructure such as energy distribution systems, telecommunications, water supplies and sewerage systems be designed with greater resilience?

Short term

It is widely agreed that in the short term there are many technologies and systems already existing that could enhance firefighting and broader disaster management capabilities. Specific opportunities identified by industry experts include:

  • Satellite data, such as that sourced from the Himawari satellite, should be evaluated for its ability to enhance fire detection. High Altitude Platform Systems may be another option.
  • In the United States, Unmanned Aerial Vehicles (UAVs) have been employed to provide enhanced imagery over firegrounds and, if equipped with infrared sensors, can support monitoring of fire conditions at night. The Victorian Government has established a panel contract with UAV providers to assist with real-time fire detection and monitoring. Further policy on airspace management is required to support wider demand-based deployment of UAVs.
  • Existing agricultural monitoring technologies could be repurposed to monitor bushfire fuels and soil conditions.
  • Balloons equipped with radio communications could provide coverage when traditional communications technologies have been disrupted. Alternatively, small UAVs, or equipment fitted to aircraft, could form a mesh network providing wireless communications.
  • Advances in the use of robotics in the mining sector may provide applications to firefighting, for example autonomous trucks.
  • Resource tracking technologies could be implemented to improve coordination and firefighter safety.
  • Emerging fire extinguisher technologies could help to suppress bushfires.

Operational decisions could be improved by enhanced collation and fusion of data already available. There are many data sources that are managed by different organisations, not just government agencies. Collating these datasets to provide a common operating picture across all organisations would improve situational awareness and data analytics.

The widespread adoption of artificial intelligence and greater digital connectedness across the economy and the emergency management sector will open new ways to make sense of data and improve decisions. In the built environment, improved information to households about the resilience of their buildings, along with programs to implement simple retrofitting measures, should be considered. In the aftermath of bushfires, governments should consider land swaps and buy-outs to reduce exposure in high-risk areas. Similarly, governments should plan communities so that infrastructure is more resistant to failure when it is most needed in emergencies.

2030 and beyond

A key area for research and innovation investment over the coming decade should be how to rapidly suppress bushfires once detected. This could see swarms of large capacity UAVs supported by ground-based drones to target suppression and limit fire spread. Resources would be rapidly dispatched and coordinated autonomously once a bushfire was detected. Pre-staging of resources would be informed by advanced predictive analytics and enabled by unmanned traffic management systems. UAVs and drones would have applications beyond fire suppression including for rapid impact assessment, search and rescue, logistics and clearance of supply routes.

The way forward

A research and innovation blueprint is needed that outlines how technologies will be translated to enhance firefighting and resilience in the short term and, beyond this, how the next generation of capability will be designed and built. Its development should involve government, research and industry stakeholders in a collaborative manner. The final blueprint should be integrated with future workforce and asset planning to support broader change management.

Adopting new technologies will not be easy and existing cultural and investment barriers should be considered. In adopting new technologies, it is important to recognise that innovation is an iterative process of improvement and will rarely provide a perfect solution in the first instance.

Public-private partnerships will be key to realising opportunities, and government must seek to engage a broad range of stakeholders. In the aftermath of Hurricane Sandy in the United States in 2012, the US Government launched a competition called ‘Rebuild by Design’ focused on proactive solutions to minimise risk. In Australia, numerous innovation challenges involving businesses and universities are already being held to help inspire ideas. There is an opportunity to harness and coordinate such challenges on a grand scale to promote new thinking and collaboration, linking directly with responsible agencies.

We need to be bold in our thinking!

Acknowledgements

Forum participants included IAG, SwissRe, IBM, Defence Science and Technology, IAI, Cicada Innovations, Lend Lease and ARUP.