Accidental Ammonium Nitrate Explosions

Paul Somerville and Ryan Crompton, Risk Frontiers

Insurers are gearing up for what is likely to be one of the most expensive insured cargo and port infrastructure losses ever from the Beirut explosion, at least as large as the loss resulting from the 2015 explosions at the Chinese port of Tianjin (800 tonnes, 173 deaths). It is expected that Lebanon’s second port of Tripoli, believed to be operating at just 40% capacity on account of COVID-19, will become the country’s main gateway for both emergency supplies and normal trading.

The accidental ammonium nitrate explosion in Beirut serves as a reminder of how frequent and deadly these events are. A timeline and description of events since 2000 is shown in Figure 1. The Wyandra, Queensland event of 2014, shown in Figure 1, is one of three Australian events, described later, that appear in the Han (2016) catalogue.

Figure 1. Timeline of the largest accidental ammonium nitrate explosions in the world since 2000.  Source: VisualCapitalist.

We analysed Han’s (2016) catalogue, which lists 79 events since 1896 (42 of them in the United States), to assess their frequency of occurrence. Since 1900 they have occurred at a fairly uniform rate of about 0.75 per year, rising to about 1 per year since 2000. To the extent that Han’s list is incomplete, these rates are underestimates. Han (2016) notes that the ammonium nitrate that exploded in the 1947 Texas City (Galveston) event was coated with wax to prevent caking. Practices introduced in the 1950s eliminated wax coatings, yielding fertiliser-grade ammonium nitrate containing less than 0.2 percent combustible material, but this change does not appear to have reduced the frequency of events.

We also analysed the Wikipedia catalogue of 36 events, which lists both size (tonnes) and deaths, to assess the relation between the two, shown in Figure 2. To first order, log10(Deaths) = 0.85 log10(Tonnes). Four notable events on the left panel of Figure 2, clockwise from top left and labelled by numbers of deaths, are the 1921 Oppau, Germany event (450 tonnes, 561 deaths), the 1947 Texas City (Galveston, U.S.) event (2906 tonnes, 581 deaths), the 2020 Beirut event (2750 tonnes, 220 deaths), and the 1947 Brest, France event (3000 tonnes, 29 deaths).

Figure 2. Relation between size (tonnes) and deaths from accidental ammonium nitrate explosions on linear (left) and log (right) scales. Several points lying on the axes of the log plot represent zero values; for example, the 2004 North Korean event of 162 tonnes had no reported deaths.
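The first-order fit can be applied directly. A minimal sketch (the exponent is taken from the regression above; the function gives only a rough central estimate, and individual events scatter widely about the line):

```python
def predicted_deaths(tonnes: float) -> float:
    """First-order fit from Figure 2: log10(Deaths) = 0.85 * log10(Tonnes),
    equivalently Deaths = Tonnes ** 0.85. A rough central estimate only."""
    return tonnes ** 0.85
```

For example, for the 2020 Beirut event (2,750 tonnes) the fit gives roughly 840 deaths, well above the 220 actually recorded, illustrating how large the scatter about the regression line is.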

The Australia-based company Orica, the world’s largest provider of commercial explosives and blasting systems to the mining, quarrying, oil and gas and construction markets, has a stockpile of ammonium nitrate up to four times the size of the one in Beirut. There are many stockpiles in Australia, but Orica’s Kooragang Island plant has received the most media attention.

Between 6,000 and 12,000 tonnes are currently stored at Orica’s Kooragang Island plant in the Port of Newcastle, which produces approximately 400,000 tonnes each year. The plant is located 3 km from Newcastle’s CBD and 800 m from residents in Stockton. Up to 40,000 people live in what would be the ‘blast zone’ if there were to be an explosion. Orica states that it follows strict safety protocols and ensures that the ammonium nitrate storage areas are fire resistant and built exclusively from non-flammable materials, with no flammable sources within designated exclusion zones. Operations on the Kooragang Island site, which has been running for 51 years, are highly regulated to state and federal standards. The site’s safety management systems, security arrangements and emergency response procedures undergo a strict auditing and verification process by SafeWork NSW. The Kooragang Precinct Emergency Sub Plan can be found here.

The safety of ammonium nitrate was previously highlighted in South Australia in 2013, when concerns were raised about the location of the Incitec Pivot fertiliser plant at Port Adelaide following the explosion of 240 tonnes of the chemical at West, Texas that year, which killed 15 people and destroyed a 50-unit apartment block. The South Australian Government subsequently made an agreement with Incitec Pivot to move its plant away from the heart of Port Adelaide, because it posed an unacceptable risk to residents of a proposed major development there. The company moved its operations to Gillman, further from the centre of Port Adelaide, in 2018. According to SafeWork SA, all 170 ammonium nitrate storages in the state are heavily regulated, controlled and monitored.

Figure 3. Left: Incitec Pivot plant, Port of Adelaide; Right: Orica Kooragang Island plant.

In the remainder of this briefing we describe three Australian accidental explosions, all involving trucks.

Taroom, Queensland, 30 August 1972

A truck explosion occurred near Stonecroft Station on the Fitzroy Development Road in August 1972. The truck and trailer were carrying 12 tonnes of ammonium nitrate. The truck developed an electrical fault and caught fire north of Taroom. After the driver stopped and parked the burning truck, two brothers from a nearby cattle property who saw the fire rode up on motorbikes to assist. The three men were killed when the truck exploded at around 18:15. The explosion destroyed the prime mover and trailer, leaving a crater in the road 2 m deep, 5 m wide and 20 m long. Parts of the truck and trailer were scattered up to 2 km away, and more than 800 hectares (2,000 acres) of surrounding bushland were burnt out. The explosion was heard, and shook houses, in Moura 88 km away and Theodore 55 km away.

Wyandra, Queensland, 5 September 2014

On September 5, 2014, an ammonium nitrate truck explosion (Figure 4) occurred near Wyandra, about 75 km south of Charleville in south-west Queensland. The truck, carrying 56 tonnes of ammonium nitrate for making explosives, rolled over near a bridge and exploded, injuring eight people: the driver, a police officer and six firefighters. Rescue crews were trying to extract the driver from the truck when they learned it was loaded with ammonium nitrate, and were retreating from the vehicle when it exploded.

The prime mover caught fire at about 9.50 pm and the driver steered off the highway; the truck hit a guard rail near the Angellala Creek Bridge and rolled onto its side in the dry creek bed. The crash led to two explosions, at 10.11 pm and 10.12 pm. The blast was so powerful that the truck disintegrated, destroying two firefighting vehicles and causing catastrophic damage to the Mitchell Highway. Two road bridges were destroyed (Figure 5), one of the railway bridge spans was thrown 20 m through the air, and a major section of the highway was missing. Geoscience Australia recorded the explosion as a magnitude 2.0 event; coincidentally, 20 minutes after the explosion, a magnitude 2 earthquake was recorded 55 km south of Charleville.

Figure 4. Emergency vehicles damaged by the Wyandra truck explosion. Queensland Police Service.

The dangers posed by the remaining ammonium nitrate led to a 2 km exclusion zone around the site for a number of days. The large crater formed by the blast closed the highway, necessitating detours of up to 600 km, including a 100 km detour to Cunnamulla along the Charleville-Bollon Road. In April 2015, the $10 million tender to reconstruct the highway and bridges was awarded, and the construction work took place between June and November 2015.

Figure 5. Damage to bridges caused by the Wyandra explosion. Queensland Police Service

Queensland Transport Minister Scott Emerson noted that there are rules in place relating to signage and the particular routes that are allowed to carry dangerous goods, and that he would be talking to police about whether anything was done wrongly. However, Assistant Fire Commissioner Dawson dismissed concerns that such a volatile material was being carried in trucks. “Not so much a worry; this product – and trucks like this very same truck – travel these roads every day,” he said. “Every day they’re out there and they don’t go bang. Something’s happened to bring this truck in a situation, which has possibly mixed the product on the back of the truck – maybe with the diesel fuel, the impact of the initial [crash] when it goes off the road – so those circumstances have had more of a connection to the end result. You’d be surprised – there’s a lot of these trucks – they do it very safely and very effectively.”

On January 10, 2019, the Queensland State Government launched a lawsuit in the Brisbane Supreme Court claiming more than $7.8 million in damages: the estimated cost of building a temporary detour, inspecting the area to ensure it was safe, and replacing the road and railway bridges. It held the trucking company, Kalari Proprietary Limited, road train driver Anthony David Eden and insurer Dornoch Limited responsible for the repair bill.

Ti Tree, Northern Territory, 18 November 2014

A road train consisting of three flat-bed trailers carrying ammonium nitrate fertiliser exploded in Ti Tree, NT on November 18, 2014 (Figure 6). Witnesses at the Ti Tree roadhouse, 200 km north of Alice Springs, saw a fire ignite on the left-hand side of the rear axle of the rear trailer. The driver inhaled fumes as he desperately unhooked the burning trailer of explosive ammonium nitrate from his truck on the Stuart Highway, having first towed the two other trailers clear. Moments later the trailer exploded with a loud bang, startling residents several hundred metres away. No-one was injured.

Police went door-to-door to evacuate residents to the school and establish a 1 km exclusion zone. Sixty to eighty people were evacuated to the school at the northern end of town at 10:30 pm, and were allowed to go home at 1:30 am but there was still a 300 m exclusion zone. At 2:00 am the fire crew declared the fire ‘safe’ and Stuart Highway was reopened.

Figure 6. Ti Tree explosion (left, Nicolai Bangsagaard) near the Ti Tree Roadhouse (right, Olivia Ryder).


From the Vault: What we knew about a future pandemic in 2005

Paul Somerville, Briefings editor

The following briefing is a reproduction of the article entitled “A Future Pandemic” that was published in our Quarterly Newsletter Volume 5 Issue 2, December 2005. It was written by Risk Frontiers’ former employee Jeffrey Fisher and Peter Curson, Emeritus Professor at Macquarie University, and edited by John McAneney. In the light of the current coronavirus pandemic, the article was very insightful and needs no further introduction. Additional Briefing Notes on this theme are numbers 121 and 173 (available on request). From time to time we will return to our Insights “vault” to assess how well our understanding of natural hazards and other extremes stands up to the test of actual events.

It is difficult to pick up a paper or watch television today without seeing some reference to bird flu, H5N1, or a possible influenza pandemic and the world’s lack of preparedness for it. One thing seems clear: in an increasingly interconnected world, where 1.5 billion people cross international borders by air every year, a virus could circle the globe very rapidly, possibly even before it was detected. In contrast, the so-called 1918-19 ‘Spanish Influenza’ took some 18 months to circle the globe and about four to six months to do its damage in any one country. This article examines some implications of such an event for the life insurance business and the wider economy.

Some insurance and reinsurance companies have prepared for the eventuality of a pandemic, assessing their risk and taking steps to offset expected losses. Financial instruments such as mortality bonds, the life insurance equivalent of catastrophe bonds, have been used to transfer some of the risk to the capital markets. However, catastrophe modelling, now standard for non-life lines of business, seems far less sophisticated in the case of life insurance. Many companies still appear to be working out what their losses might be.

Some Basic Numbers for Australia

So how many people are likely to die if a flu pandemic reaches Australia? If there were a repeat of the ‘Spanish Influenza’ pandemic, the death toll in Australia could be somewhere between 60,000 and 80,000. To put this in some context, some 130,000 Australians die in any one year, a sum that includes about 2,000 from influenza. Thus, a repeat of the 1918-19 scenario would represent an increase in the annual death toll of over 50%.

Circumstances today, however, are very different from 1918. Medical and community health standards have improved dramatically. In 1918, intensive care wards had yet to be developed; there were no effective drug therapies for pneumonia; knowledge of viruses was rudimentary; and doctors had no antibiotics or antiviral drugs. Taken together, these factors could reduce the death rate considerably below that experienced in 1918-19.

On the other hand, the mobility of people today could allow the disease to spread very rapidly causing a dramatic increase in patient numbers in a short space of time. This has the potential to overwhelm the health system and reduce the benefits of modern medicine because of a shortage of drugs and hospital beds. Thus, while the death rate is unlikely to be as high as for the 1918-19 pandemic, it nonetheless remains a good benchmark as a plausible worst-case scenario.

An Optimisation Problem

The current strain of bird flu has killed over half the people known to have become infected with it. While this is cause for concern, a flu pandemic could not develop with a mortality rate this high. The mortality rate is defined as the proportion of the infected population that dies from the disease.

Influenza viruses have an initial period when an infected person exhibits no symptoms. This is the virus’s window of opportunity to spread; once symptoms appear, it is fairly easy to isolate cases and prevent further transmission. Furthermore, approximately half of all people who catch the virus get only a mild case with no obvious symptoms while still being infectious. These two attributes allow a flu virus to spread throughout a population. An influenza virus that kills its host too quickly will die out before it can cause a global epidemic.

Figure 1: Modelled deaths in a population of 10,000 as a function of mortality rate.

Risk Frontiers has a simple simulation model to examine this issue. A typical simulation deals with a population of 10,000 people. They are assumed to be a fairly homogeneous group in the same geographic area – imagine a small Australian town or suburb. One way of incorporating the viral attributes mentioned above is to assume some degree of negative correlation between the length of the infectious period and the mortality rate. Negatively correlating these variables means that, on average, as the mortality rate increases, the length of the infectious period decreases and so fewer people catch the disease. Figure 1 shows this trade-off. Initially, as the mortality rate increases, more people die. Beyond a rate of around 1.6%, however, the tide turns: the likelihood of someone dying after catching the disease increases, but the total number of people infected goes down.
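The trade-off in Figure 1 can be reproduced with a toy deterministic SIR model in which the infectious period shrinks linearly as the mortality rate rises. All parameter values below (the contact rate, the six-day base infectious period, and the strength of the assumed correlation) are illustrative assumptions, not those of the Risk Frontiers model:

```python
def epidemic_deaths(mortality, pop=10_000, contact_rate=0.45,
                    base_days=6.0, corr=25.0):
    """Deaths in a toy deterministic SIR run, as a function of mortality rate.

    The infectious period is assumed to shrink linearly as mortality rises
    (the negative correlation described in the text). All parameter values
    are illustrative guesses, not those of the Risk Frontiers model.
    """
    infectious_days = max(base_days * (1.0 - corr * mortality), 1.0)
    recovery_rate = 1.0 / infectious_days           # per day
    s, i, r = pop - 1.0, 1.0, 0.0                   # susceptible, infected, removed
    for _ in range(3650):                           # daily steps; ample run time
        new_infections = contact_rate * s * i / pop
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return mortality * (r + i)                      # deaths among all ever infected

# Scanning mortality rates reproduces the hump in Figure 1: deaths peak at an
# intermediate rate, because deadlier strains infect fewer people overall.
rates = [m / 1000.0 for m in range(1, 51)]          # 0.1% to 5.0%
worst_rate = max(rates, key=epidemic_deaths)
```

With these assumed parameters the scan peaks at a mortality rate of roughly 1.5–2%, echoing the optimum of around 1.6% described above; the exact peak location depends entirely on the assumed parameter values.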

The real concern is that the current strain of bird flu will combine with human flu viruses or develop the ability to jump directly to humans. If this were to happen then the global death toll could be very high. The 1918-19 flu virus killed, depending upon different reports, somewhere between 1.2% and 2.8% of people who contracted it, i.e. close to the optimum shown in Figure 1. A very well-designed bug!


Clearly, targeted vaccination at the source of an outbreak is likely to be the best means of avoiding its wider dissemination. In a recent article, Ferguson et al. (2005) explore the efficacy of such targeted preventative medicine. These authors argue that if good detection measures are in place, and if anti-viral drugs are stockpiled appropriately and deployed quickly, then the chances of containing an outbreak at source by treating everyone in the vicinity would be greater than 90%. While this conclusion is encouraging, neither sufficiently rapid detection nor efficient implementation of preventative measures can be taken for granted in the countries where outbreaks are most likely to occur.

Australia is an unlikely source of the disease and it is far more likely that the general population would have to be vaccinated. Let’s assume for the moment that this is possible. The proportion of the general population that must be vaccinated to stop an epidemic depends on the Basic Infection Rate (BIR), essentially a measure of how easily transmissible the virus is. The BIR is the average number of new cases caused by each virus-infected person in a population with no immunity to that virus.  For the current strain of bird flu, we might prudently assume that no one has immunity. According to Ferguson et al. (2005), a typical pandemic strain of influenza would likely have a BIR of around 1.8, a figure that implies the need to vaccinate roughly one half of the population in order to arrest the spread of the disease (see Inset). In other words, about nine million people in Australia.

All this presupposes the availability of a vaccine. In fact, it would take about six months to isolate a particular strain and produce a vaccine in sufficient numbers.  By this time the pandemic would be over.  So, will there be enough vaccine to go around? Given current global development and production capabilities, the answer is no.

Insurance Costs

What would a modern-day pandemic cost the Australian life insurance industry? If we assume a very rough estimate of $300,000 for a life insurance payout, then it is simply a matter of counting the dead or, at least, those with life insurance.  The current level of life insurance penetration is around 30% of the adult workforce, who in turn comprise about 65% of the entire population. This being the case, a pandemic comparable to the 1918-19 influenza outbreak would lead to a total insured loss of around $4.1 billion. This sum is of the same order as a repeat of Cyclone Tracy that destroyed Darwin in 1974 (see the next issue of Risk Frontiers’ Quarterly Newsletter.)
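The arithmetic behind the $4.1 billion estimate can be checked directly. A quick sketch, taking the midpoint of the 60,000–80,000 death range quoted earlier:

```python
# Back-of-envelope insured loss from a 1918-19-scale pandemic in Australia.
deaths = 70_000                  # midpoint of the 60,000-80,000 range
workforce_share = 0.65           # adult workforce as a share of the population
penetration = 0.30               # life insurance penetration of that workforce
avg_payout = 300_000             # very rough average life insurance payout, AUD

insured_deaths = deaths * workforce_share * penetration   # about 13,650 insured lives
total_loss = insured_deaths * avg_payout                  # about $4.1 billion
```

Every input here is a round-number assumption from the text, so the result should be read as an order-of-magnitude figure rather than a precise loss estimate.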

There are, however, other complications not considered in the above calculation. For reasons that are still not entirely clear, the 1918-19 epidemic preferentially killed people between the ages of 25 and 40, i.e. those normally at the lowest risk of dying from influenza.  Thus, usual actuarial assumptions about expected age at death may not apply in the case of a pandemic.  Moreover, people in this target age group are more likely to have life insurance and will tend to be insured for relatively higher amounts.

There may be other calls on insurance caused by the failure of some businesses to fulfil critical supply contracts because workers are afraid to turn up to work, or are prevented from doing so by Government decree. Private medical insurance could be another source of losses for the insurance industry.

Social and Economic Consequences

While our analyses suggest that the implications of a 1918-19-type pandemic could be significant for the insurance industry, insured losses will represent only a tiny fraction of the wider economic losses borne by society.

The recent SARS epidemic gives us some clues to the likely magnitude of these losses.  The province of Ontario, for example, suffered an estimated loss of more than C$2 billion due to reductions in tourism, including lost income and jobs. Hotels in Toronto remained two-thirds empty during the peak of the epidemic, costing the hotel industry more than C$125 million. More than 15,000 people were quarantined at home for at least 10 days. If nothing else, SARS demonstrated the impact that a short-lived epidemic can have on consumer confidence, investment and consumer spending. Some sources have estimated the total global economic cost of SARS at US$30–50 billion (Financial Times, 14/11/05).

A major flu pandemic would be much more significant than SARS. Businesses could be confronted by 25-30% absenteeism as home quarantine removed many from the workforce for up to two months; people would avoid shops, restaurants, hotels, places of recreation and public transport. There would be a run on basic foodstuffs, medications, masks and gloves. As there is little surge capacity in our hospitals, temporary hospitals would need to be established. Schools, childcare centres, theatres, not to mention pubs and race meetings – the fundamental heartbeat of our nation – would be closed or cancelled. Government imposed quarantine and absenteeism would severely disrupt interstate and international trade. All this would produce a decline in consumer confidence leading to significant reductions in consumption spending.

Some Other Issues

Let’s return now to the question of preventative medicine. As has already been explained, there are simply not going to be enough anti-viral drugs, vaccines and other preventative measures to go around. The current stockpile of anti-viral drugs could be insufficient even for all essential health care workers, emergency service workers – and politicians? And what about me?  Yes, moi!

Assuming Australia has the luxury of time to become better prepared, then difficult choices still remain.  For example, who will get the extra supply after the needs of essential workers are met? Would doses be handed out by lottery, should they go to the elderly and young, or would people be able to buy them?  Public outcry might prevent a scheme where they were sold to the highest bidder, but it is easy to imagine somebody risking the small chance of personal death and selling their vaccine shots on eBay for large sums of money.  The problem could be an administrative and ethical nightmare.

Final Thoughts

So, where does all this leave us? As far as preventing an outbreak goes, the only place this can be done is at the place of origin, most probably somewhere in Asia. If a pandemic does occur, it will inevitably affect Australia. Quarantine measures that the government will feel obliged to put in place might delay its development but are unlikely to prevent it from reaching us. Given a lead-time of six months to develop an effective vaccine, society and the government will be faced with difficult choices about who gets access to limited supplies of anti-viral drugs. And for the life insurance industry, our admittedly rough calculations suggest that it is not good news. However, only a minor proportion of the economic costs will be borne by the insurance sector.  And underlying all this is a fundamental truth – a healthy population represents the human capital necessary for productivity, innovation and economic growth.

Calculating the proportion of people to vaccinate

The relationship between the Basic Infection Rate (BIR) and the proportion of people who need to be vaccinated to contain or prevent an epidemic is a relatively simple one. In order for the virus to propagate through a population, an infected person must infect at least one other person.  Thus, for a vaccine program to be effective, it must lower the effective BIR of the virus to below 1.0. Assuming no immunity within the population, the proportion that needs vaccination is given by the formula:

Proportion = (BIR – 1.0)/(BIR)

Given a BIR > 1.0, vaccinating this proportion of the population will stop an epidemic from gaining hold, although small outbreaks are still possible. With a typical value for a pandemic-type strain of 1.8 (Ferguson et al., 2005), the formula suggests 44% of the population will need to be vaccinated.
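The inset formula is straightforward to verify numerically; a minimal sketch (the BIR value of 1.8 is taken from Ferguson et al., 2005):

```python
def vaccination_proportion(bir: float) -> float:
    """Proportion of the population to vaccinate so that the effective BIR
    falls below 1.0, assuming no pre-existing immunity."""
    if bir <= 1.0:
        return 0.0              # the virus cannot sustain an epidemic anyway
    return (bir - 1.0) / bir
```

For a pandemic-type strain with a BIR of 1.8, `vaccination_proportion(1.8)` gives about 0.444, i.e. the 44% of the population quoted above.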

If a virus is already in circulation, then people who have had it, or who have low-level infections, can be assumed to be immune and not to require vaccination. This will reduce the quantity of vaccine needed.

If the virus is sufficiently widespread, however, it will still take a long time to die out and so vaccinating as large a proportion of the population as is feasible is the best defence. Moreover, we will not know the actual BIR for some time and so once again assuming a 1918-19-like worst-case scenario may be the only prudent policy.


Ferguson, N.M., D.A.T. Cummings, S. Cauchemez, C. Fraser, S. Riley, A. Meeyai, S. Iamsirithaworn and D.S. Burke (2005). Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature, 437, 209–213.

Harris, Melling and Borsay (eds) (2003). The Spanish Influenza Pandemic of 1918–1919: New Perspectives. Routledge.

Crosby, A.W. (1989). America’s Forgotten Pandemic: The Influenza of 1918. Cambridge University Press.



Heatwave poses challenge to Japanese medical system already stressed by virus

Paul Somerville and Andrew Gissing, Risk Frontiers

In recent years, eastern Australia, like Japan, has experienced extremely high maximum temperatures that are consistent with patterns of global changes in climate. Fortunately, last summer’s heatwaves in Australia occurred before the prevalence of COVID-19, and if Australia is able to maintain its suppression of the virus, it may be able to avoid the compounding effects of those conditions. This briefing demonstrates that even with the low prevalence of the virus in Japan, these compounding effects can be significant.

The number of people showing signs of heatstroke or heat exhaustion has increased sharply in recent days. Temperatures soared to 41.1 degrees Celsius in Hamamatsu in central Japan on Monday (Mainichi Shimbun, 2020a), equalling the country’s highest recorded temperature, set in Kumagaya near Tokyo in 2018.

The 2018 Heatwave

During the 2018 heatwave, Mainichi Shimbun (2018) reported that the 94 people who died included 26 in Tokyo, where the heat reached 40.8 degrees in the suburban city of Ome. Saitama Prefecture reported nine deaths; in the western part of the country, Osaka Prefecture had six, Mie and Hyogo five each, and Hiroshima four. Aichi Prefecture in central Japan also announced four deaths. (According to Slate (2020), more than a thousand people died from heat-related illnesses over the course of those few weeks.)

Broken down by gender, the victims comprised 52 women and 42 men (Mainichi Shimbun, 2018). All were 40 years old or older. Those in their 80s constituted the largest group with 37 deaths, followed by 22 in their 70s, 15 in their 60s, 10 in their 90s, five in their 50s and four in their 40s.

Among the victims, 28 fell ill while outside, many of them farming in their fields. As many as 36 were found ill or unconscious indoors, in several cases because of broken air conditioners or electric fans. In some cities, such as Yamato, elderly residents who live alone are monitored day and night by an elaborate system of motion sensors and communication protocols involving city officials, residents and their relatives.

Older people tend to have difficulty recognizing when they are dehydrated. They face the risk of their conditions deteriorating before realizing it, even when they are not subject to searing heat. Lowering temperatures inside using air conditioning is important, but not all homes have air conditioners.

2020 Heatwave – Distinguishing heatwave symptoms from corona virus symptoms

On August 19, 2020, officials in Tokyo reported that 28 people died of heatstroke in the city during the eight-day period from August 12 to August 19, bringing the total number of fatalities in Tokyo in August to 131 (NHK, 2020). The Medical Examiner’s Office said that 11 of the 28 victims were in their 70s, ten were in their 80s, and about 80 percent of the victims were at least 70 years old. Eleven of the victims died at night and 27 died indoors, of whom 25 were not using air conditioners.

In the midst of this year’s heatwave, medical workers reportedly worry that the similarity of heat stress symptoms to those of COVID-19 may place extra pressure on a health care system already creaking under the strain of the coronavirus pandemic (Mainichi Shimbun, 2020a).  Medical personnel cannot always immediately distinguish patients suffering from heat-related illness from those with COVID-19 when a patient presents with high fever, a symptom the two conditions have in common. Japan has a relatively small number of COVID-19 cases (Figure 1), with only 1,169 deaths so far.  The Japanese Health Ministry reported no evidence of excess deaths during April and May (the latest months for which data are available), so it is unlikely that undetected COVID-19 cases are contributing significantly to the numbers of heatwave deaths being reported.

Figure 1. COVID-19 cases in Japan.  Cases: 62,507; Deaths: 1,181; Recovered: 49,340.  Source: Worldometers (2020), 25 August 2020.

The problem posed by the pandemic is that treatment has to allow for both COVID-19 and heat-related conditions whenever staff cannot rule out coronavirus infection. Amid reported public fears that mask-wearing to prevent the spread of the novel coronavirus could itself cause heatstroke or heat exhaustion, 12,804 people were taken to hospital across Japan between Aug. 10 and Aug. 16 for heat-related conditions, up from 6,664 people the previous week, according to the Fire and Disaster Management Agency.  There is a concern that this volume of patients may cause the hospital system to collapse if the heatwave continues.

Recent heatwave conditions in the United States have also seen authorities needing to adapt plans to account for the risks of COVID-19, amid fears that people may be reluctant to leave their homes to seek cooler shelter because of infection risks. Adaptations have included restricting the number of people accommodated within cooling centres to allow social distancing.

Some resources compiled by the Global Heat Health Information Network on COVID-19 and heatwaves are available on its website.

Public Information on Heat Stress

The Ministry of the Environment is providing English-language information about the heat stress index on its website in a bid to prevent illnesses caused by intense heat, which has become a major threat to health and even life in Japan in recent summers (Mainichi Shimbun, 2020c).

The website, designed for viewing by both smartphones and personal computers, indicates the intensity of the heat effect throughout the country in five colors, from blue (almost safe) to red (danger). It also provides two-day predictions for the heat stress index, as well as data for each observation point nationwide.

The heat stress index, also called the Wet Bulb Globe Temperature (WBGT), is one of the empirical indices showing the heat stress an individual is exposed to. It is calculated incorporating factors such as humidity, sunlight and reflection intensities and atmospheric temperature.

According to the ministry website, the number of people suffering from heatstroke rises sharply when the WBGT – which is expressed in degrees but is not the same as air temperature – exceeds 28, the upper threshold of the “Warning” level (25-28 degrees); this typically corresponds to air temperatures of between 28 and 31 degrees Celsius.

For the warning level indicated in yellow, people are advised to rest often. When the index is at the “Severe Warning” level of orange, people are advised to refrain from heavy exercise. At the “Danger” level shown in red, people should stop all exercise.
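For illustration, the heat stress index and the advisory bands above can be sketched in code. Two caveats: the 0.7/0.2/0.1 weighting below is the conventional ISO 7243 outdoor WBGT formula, not a formula quoted from the ministry site, and the 31-degree “Danger” threshold is the ministry’s published scale, which the article implies but does not state; the two coolest of the five colours are also collapsed into one band here:

```python
def wbgt_outdoor(t_natural_wet_bulb, t_globe, t_air):
    """Conventional outdoor WBGT (ISO 7243), all inputs in degrees Celsius:
    0.7 * Tnwb + 0.2 * Tg + 0.1 * Ta."""
    return 0.7 * t_natural_wet_bulb + 0.2 * t_globe + 0.1 * t_air

def advisory_level(wbgt):
    """Advisory bands matching the ministry guidance described in the text."""
    if wbgt >= 31:
        return "Danger"           # red: stop all exercise
    if wbgt >= 28:
        return "Severe Warning"   # orange: refrain from heavy exercise
    if wbgt >= 25:
        return "Warning"          # yellow: rest often
    return "Almost safe"          # blue end of the five-colour scale
```

For example, a wet-bulb temperature of 25, a globe temperature of 40 and an air temperature of 30 give a WBGT of 28.5, which falls in the “Severe Warning” band even though the air temperature alone does not look extreme.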

Figure 2. Screen capture showing the Ministry of Environment website providing heat stress index information. Mainichi Shimbun (2020d).



Mainichi Shimbun:

NHK (2020):

Slate (2020):

Worldometers (2020):


California Bushfires, August 2020

Paul Somerville, Chief Geoscientist, Risk Frontiers

Nearly 771,000 acres of largely unpopulated land have burned across California during the past week as dozens of lightning-sparked wildfires moved quickly through dry vegetation and threatened the edges of cities and towns. The fires have been most severe in the state’s northern and central regions, where about 600,000 acres have burned in the past week (Figure 1).

Evacuations surged on August 18 and 19 as authorities worried that high heat and gusty winds could cause the fires to spread rapidly. The resulting fires – and complexes of many small fires – have merged into major conflagrations in many parts of the state. By August 20, several of the major fires had more than doubled in size, in some cases jumping across major highways, as crews struggled to contain the blazes. By August 21, the two largest blazes, the SCU[1] and LNU[2] Lightning Complexes, had charred 340,000 and 325,000 acres respectively, becoming the second and third largest fires in California history (Table 1). The CZU[3] Lightning Fire forced the evacuation of more than 64,000 people, some of whom may not be able to return to their homes for weeks. Five people have died and about 1,000 structures have burned.

Figure 1. Left: Fire locations in California using Active Fire Data (hotspots) derived from the VIIRS for the last 7 days. Right: Satellite Image on August 19. Source: Washington Post.
Figure 2. Left: Fire in Napa, California. Right: Fire in Lassen County, California. Source: Washington Post.

The California wildfires, along with other blazes in the West, have sent a blanket of smoke across at least 10 states and southwestern Canada, with smoke extending over the Pacific Ocean as well (Figure 1, right panel). Air quality alerts are in effect for parts of California, where the tiny particles in the dense smoke are aggravating respiratory conditions and worsening preexisting health conditions that are already threatened by the coronavirus. The cloth masks that have now become a habit for many Californians when they venture outside are largely ineffective against the tiny smoke particles filling the air, and doctors recommend using N95 masks with vents. People are being asked to shelter in place, staying at home with their windows closed and ventilation systems set to recirculate air, which is difficult during a heatwave in areas such as San Francisco where many people do not have air conditioning.

A rare mix of ingredients came together in central and northern California to produce fast-moving, explosively growing wildfires that are powerful enough to create their own weather. Doppler radar revealed at least five tornado-strength rotational signatures inside the smoke plume in Lassen County, California. The record heat reached astonishing levels during the past two weeks as a massive “heat dome” parked itself over the West. On August 16, Death Valley, California, reached 130 degrees Fahrenheit (54 degrees Celsius). The combination of an intense, long-lasting heatwave, dry vegetation at the end of the summer, and a rare outbreak of August thunderstorms led to these blazes. Fueled by the heat, thunderstorms broke out on Sunday, August 16, as a surge of tropical moisture pushed inland. The storms’ 20,000 lightning strikes (Fig. 3), including dry lightning, sparked more than two dozen blazes over a period of three days.

An ancient stand of the world’s tallest trees has fallen victim to California’s raging wildfires. The CZU and SCU complex fires near Santa Cruz have ravaged Big Basin State Park, California’s oldest state park, some of whose giant redwoods are more than 50 feet around and 1,000 to 1,800 years old (Fig. 4).

Figure 3. Lightning storms in San Francisco and Healdsburg. Source: Washington Post.
Figure 4. Giant redwoods in Big Basin State Park. Source: Washington Post.

This is just the beginning of the state’s wildfire season, which has been a constant threat during the past four years, with blazes, some sparked by downed powerlines, setting records for size and lethality. Despite the familiarity, the current fires, with their speed and thick smoke, have presented a new terror amid a global pandemic – poor air quality, concerns about evacuating masses of people to crowded shelters, and fears that some might not heed the warnings. Tens of thousands of people have been asked to evacuate and make difficult decisions about where to go. In the past, they might have stayed with friends or family, but now they need to calculate the risk of exposure to the novel coronavirus. Wherever people go, they are likely to face other hardships. California has been enduring a record-breaking heatwave that has prompted rolling blackouts because of high electricity demands for air conditioning and other uses. Most of the area is also experiencing severe or moderate drought.

In Santa Cruz and San Mateo Counties, south of San Francisco, about 48,000 people were ordered to evacuate because of a fire, part of the CZU Lightning Complex, that is threatening communities there. The blaze has already burned 50 structures. On the evening of August 20, the University of California at Santa Cruz was under mandatory evacuation and had declared a state of emergency.

The largest of the lightning-related fires was north of San Francisco, covering Napa and Sonoma counties. On August 20, that mass of fires, the LNU Lightning Complex, had grown to 219,000 acres and was uncontained. Approximately 30,000 structures were at risk of burning and 480 had been destroyed.

The blaze near Vacaville, known as the Hennessey Fire and part of the LNU Lightning Complex, has been one of the most destructive, burning down homes and claiming the life of a PG&E worker who was assisting first responders. This blaze burned down the La Borgata Winery and Distillery in Vacaville. Mandatory evacuations remained in effect for the north part of the city on August 20, and CalFire reported three additional civilian fatalities associated with the LNU Lightning Complex.

CalFire is at normal staffing levels, with approximately 12,000 firefighters working on August 21. Additional firefighters are being sought from other states and from Australia. In Central California, a pilot on a firefighting flight near Fresno died when his helicopter crashed.

Overall losses include 5 deaths, 64,000 people evacuated, over one thousand structures burned, 31,000 structures threatened, and approximately one million acres burned as of August 21.

The 2019/20 bushfires in eastern Australia were fought under dire conditions, but the presence of the coronavirus in California has made fire-fighting conditions there even more dire, especially those relating to evacuation.  There are 665,000 coronavirus cases in the state, growing by 5,000 a day, and 12,000 deaths, growing by 150 a day.

Table 1. 20 largest wildfires in California since 1932. Only 3 occurred before 2000. (Source: Updated from Cal Fire)


[1] SCU Lightning Complex Fire: Contra Costa, Alameda, Santa Clara, Stanislaus and San Joaquin counties

[2] LNU Lightning Complex Fire: Napa, Sonoma, Solano, Yolo and Lake counties

[3] CZU Lightning Complex Fire: San Mateo and Santa Cruz counties

Risk based earthquake pricing using catastrophe model output

Paul Somerville and Valentina Koschatzky,  Risk Frontiers

As the insurance market trends toward more analytical and data-driven decisions, insurers are continually exploring ways to rate risk better and more precisely. For earthquake risk, this means an enhanced understanding of the relationship between event location, frequency, severity, how buildings respond to an event and the ensuing financial costs. The increased quantity, quality and granularity (resolution) of the available underwriting data and highly refined rating engines give insurers the opportunity to become extremely risk-specific in their pricing. Risk-based pricing – charging different rates depending on the risk characteristics of specific policies, in contrast to portfolio underwriting – leads to stability and confidence in pricing. Risk-based pricing aims to ensure that premium levels are commensurate with individual property risk profiles, with those in highly exposed areas paying a specific rate on the earthquake component of their coverage. This seems to be a fairer and more equitable way of pricing risk. The ability to differentiate between perceived risk and actual risk affords insurers a better way to achieve their financial goals, allocate capital and meet client needs for coverage.

Several features of earthquake hazards and risks render them readily amenable to risk-based pricing. First, the level of seismic hazard is not uniformly distributed across a country. New Zealand is an extreme example: Wellington is located directly on a tectonic plate boundary and has extremely high seismic hazard, whereas Auckland is remote from the plate boundary and has a seismic hazard level comparable to that of Australia (Figure 1). However, even in Australia, the seismic hazard level varies by an order of magnitude, between relatively high levels in northwestern Western Australia, the Yilgarn region east of Perth, Adelaide, and southeastern Australia on the one hand, and extremely low levels in Queensland on the other.

Figure 1. Peak acceleration maps for 1:500 AEP on Risk Frontiers’ Variable Resolution Grid for Australia and New Zealand.

Second, the factors that increase the level of the hazard are well understood and mapped.  These include the presence of soils that amplify the level of ground shaking compared with that on rock, and the presence of saturated sands that can be liquefied during earthquake shaking, as occurred in Christchurch during the 2010-2011 Canterbury earthquake sequence.

Third, we are able to quantify, on a very specific basis, the variations in building vulnerability to earthquake damage due to different building types, heights, ages of construction, and whether seismic building code provisions were used in design. G-NAF (Geocoded National Address File) is a geocoded address index listing all valid physical addresses in Australia. NEXIS (National Exposure Information System) is a database developed by Geoscience Australia containing building details for residential, commercial and industrial buildings in Australia at the Statistical Area 1 (SA1) level. There are 57,523 SA1 regions in Australia. These datasets allow damage ratios to be modelled for wood, mid-rise steel, concrete and reinforced masonry, and low-rise unreinforced masonry buildings, enabling customised underwriting in Australia at the location, SA1 or postcode level. For New Zealand, the use of a variable resolution grid created using the LINZ NZ street address database enables us to calculate the ground shaking hazard at a resolution as fine as 500 m, while the liquefaction hazard is calculated at the address level with a resolution of 16 m.

Finally, Risk Frontiers’ QuakeAUS and QuakeNZ models use a level of refinement in property damage estimation that is unique in the worldwide catastrophe loss modelling industry. Conventional earthquake loss estimation uses building fragility functions that are pre-computed using standard capacity curves for each building category of interest with a simplified representation of the building demand curve in response to ground-shaking. We instead account for the entire response spectral shape of the ground motion, which varies with many factors, including the earthquake magnitude, earthquake distance, and soil category at the risk location. Accordingly, our loss model dynamically calculates fragility curves for each building category at each site for each earthquake in the event set.  This produces building- and event-specific damages for each building category for each event, enhancing the accuracy and reliability of the loss calculation.

These four categories of information are combined to make detailed estimates of losses for each building that are then aggregated to obtain portfolio loss estimates.  However, it is an easy step to use this detailed information to quantify potential losses for any soil type or building category at any desired level of spatial resolution. For example, our model output can provide postcode level risk premiums (average annual loss AAL’s) for all of Australia and New Zealand for a nominal risk to estimate loss rate due to earthquakes for the following building modifiers, as shown in the example in Table 1:

  • Structure Type: Unknown, Wood/Light Frame low-rise, Steel Moment Frame mid-rise, Concrete Moment Frame mid-rise, Reinforced Masonry Bearing Walls mid-rise, Unreinforced Masonry low-rise. Mid-rise is defined as 4+ floors, low-rise as 1-2 floors
  • Year built: pre-code (before 1980) and post-code
  • Damage calculated separately for buildings and contents
  • Separate estimates of direct damage and demand surge
Risk Premium
Postcode  Structure Type                               Construction Date  Building  Contents
2294      Light Wood                                   Unknown            74        64
2294      Mid-rise Steel Moment Frame                  After 1981         26        3
2294      Mid-rise Concrete Moment Frame               Before 1981        44        7
2294      Mid-rise Reinforced Masonry Bearing Walls    Unknown            35        25
2294      Low-rise Unreinforced Masonry Bearing Walls  After 1981         226       58
2294      Unknown                                      Before 1981        111       71
2294      Low-rise Unreinforced Masonry Bearing Walls  After 1981         141       58
2294      Low-rise Unreinforced Masonry Bearing Walls  Unknown            142       59
2286      Low-rise Unreinforced Masonry Bearing Walls  Before 1981        51        10
2295      Low-rise Unreinforced Masonry Bearing Walls  Before 1981        168       42
2291      Low-rise Unreinforced Masonry Bearing Walls  Before 1981        90        21

Table 1. Newcastle region risk premiums for building and contents with a nominal sum-insured. Earthquake risk based on location (postcode), construction type and year of construction can inform better underwriting decisions.
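The risk premiums in Table 1 are average annual losses (AALs) derived from the model's event set. As a hedged sketch (the table format, function name and numbers below are illustrative, not Risk Frontiers' actual output), an AAL is the rate-weighted sum of modelled event losses:

```python
from collections import defaultdict

def risk_premiums(event_loss_table):
    """AAL per (postcode, structure type) from an event loss table.
    Each entry: (annual_rate, postcode, structure_type, modelled_loss)."""
    aal = defaultdict(float)
    for rate, postcode, structure, loss in event_loss_table:
        aal[(postcode, structure)] += rate * loss  # AAL = sum_i rate_i * loss_i
    return dict(aal)

# Hypothetical event loss table for one postcode (numbers are invented)
elt = [
    (0.010, "2294", "Light Wood", 4000.0),    # nominal 1-in-100-year event
    (0.002, "2294", "Light Wood", 10000.0),   # nominal 1-in-500-year event
    (0.010, "2294", "Low-rise URM", 18000.0),
]
print(risk_premiums(elt))
```

Aggregating the same event losses at SA1 or address level instead of postcode is just a change of the grouping key, which is what allows the pricing resolution described above.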

This bottom-up understanding of risk and pricing will also lead to better alignment of risk-premium and capital management.

Figure 2. By introducing earthquake risk pricing, insurers have an opportunity to align portfolio risk (left) and capital management with original premium rating and risk selection (right).


How many storms make a big storm?

Thomas Mortlock and Stuart Browning

The past few weeks have not been pleasant for beachfront property owners at Terrigal-Wamberal (see Figure 1), and worrisome for those with a sea view at other erosion “hot-spots” on the east coast, such as Collaroy-Narrabeen and Belongil. Beyond the difficult questions around coastal development and defence that this has raised (again), the passage of two East Coast Low (ECL) storms in quick succession, with a series of low pressure cells still lurking in the Tasman Sea, has highlighted another important issue for coastal hazard assessment. That is, of storm clustering, the resulting cumulative risk, and how we should be doing more to incorporate this additional dimension into the assessment of coastal risk.

Figure 1. Erosion at Wamberal, on the NSW Central Coast, July 2020. Source: Daily Telegraph, 28 July 2020.

What happened in July?

In a period of less than three weeks – from the week beginning 13 July to week ending 31 July – two successive ECL storms impacted the southeast coast of Australia bringing heavy rain, large waves and dangerous surf conditions to many areas including much of the Illawarra, Sydney and Central Coast regions.

The first (week beginning 13 July) was a typical wintertime ECL, with an extra-tropical origin in the South Tasman Sea progressing northwards up the coast (Figure 2, left panel). The peak-storm hourly significant wave height (the highest third of all waves measured in an hour, and a common measure of storm intensity) was 6.9 m at the Sydney wave buoy (located approximately 10 km offshore of Narrabeen), while the maximum single wave recorded during the storm was 11.6 m (on Wednesday 15 July). The wave direction was from the south-south-east for much of the storm, until Friday 17 July when the direction swung round to the south-east. The storm wave height had a return period of about 4 years.
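The significant wave height defined above (the mean of the highest third of waves in a record) can be computed directly from a record of individual wave heights; a minimal sketch:

```python
import numpy as np

def significant_wave_height(wave_heights):
    """Mean of the highest one-third of individual wave heights in a record."""
    h = np.sort(np.asarray(wave_heights, dtype=float))[::-1]  # largest first
    top_third = max(1, len(h) // 3)
    return float(h[:top_third].mean())

# Six waves: the highest third is the top two (6.0 m and 5.0 m)
print(significant_wave_height([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]))  # 5.5
```

Because it averages only the largest waves, the significant wave height sits well above the mean wave height but well below the maximum single wave, which is why the buoy reported 6.9 m hourly significant height alongside an 11.6 m maximum wave.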

Figure 2. Left panel: synoptic chart for the first ECL on 17 July 2020 at 10 AM as the wave direction becomes southeast to easterly. ECL is moving in a northward direction. Right panel: synoptic setup for the second ECL exactly ten days later with wave directions from the northeast to east. ECL is moving in a southward direction. Source: Bureau of Meteorology Weather Map Archive (2020).

The genesis and track of the second ECL (week beginning 27 July) were less usual for winter, with a tropical origin in the Coral Sea progressing southwards down the coast (Figure 2, right panel). The peak-storm hourly significant wave height was 4.0 m at Sydney and the maximum single wave height recorded was 7.6 m – much smaller than the preceding ECL. This time, the wave direction was from the north-east for much of the storm, before eventually becoming bi-directional, with one mode from the north-east and a second from the south-south-east. As the storm decayed, the south-south-east mode became more prevalent. The storm wave height of the second event had a return period of less than 1 year, but the direction made it more significant (as was the case during the infamous June 2016 ECL, see Mortlock et al., 2017a).

Both storms led to significant erosion at some locations along the east coast, with perhaps the worst area affected being Wamberal, on the NSW Central Coast. The Terrigal-Wamberal embayment is oriented south-east (unlike most other coastal compartments in NSW which face east) making it more exposed to waves from the south-east and anticlockwise thereof. The south-easterly wave direction of the first ECL on Friday 17 July, combined with the morning high tide, is likely to have done most of the damage. The north-easterly direction of the second ECL, only ten days later, led to further erosion of the upper beach and foredune.

What drives ECL clustering?

An analysis of the drivers of Australian ECLs has shown that clustering has been a feature of all high impact ECL seasons since 1851 (Browning and Goodwin, 2016). Over this period, it was found that when the large-scale climate conditions were conducive to ECL formation it was likely that successive storms would occur. When this happened, they were often similar types of ECLs forming along similar storm tracks.

Climate conditions conducive to ECL formation may include a neutral to negative Indian Ocean Dipole (IOD) and neutral to La Niña-like ENSO conditions in the Pacific. Extratropical circulation, described by the Southern Annular Mode (SAM), influences the latitude of impacts: with central and northern NSW impacted under positive SAM and central to southern NSW impacted under negative SAM. All these climate states essentially promote convective behaviour in the vicinity of Southeast Australia.

Another observation is that ECL clustering occurs during a shift in the underlying Pacific climate, specifically the transition from Interdecadal Pacific Oscillation (IPO) El Niño to IPO La Niña (Hopkins and Holland, 1997). The IPO describes low frequency ENSO-like conditions in the Pacific that may persist for periods of years to decades and can either enhance or dampen the intensity of individual ENSO events.

Storm clustering and coastal risk

During an ECL, sediment is usually stripped from the upper beach and deposited seaward below the water line as a surf zone bar (Figure 3 top panel). If the water level is high enough (with a sufficient combination of waves, storm surge and high tides), the foredune may also be eroded, leading to dune instability.

After the storm, a process of beach recovery takes place on the order of weeks to months, whereby sediment is transported landward from the bar back to the beach. The wider the beach, the better the buffer for the dune (and anything built on top of it) when the next storm arrives.

Figure 3. A cross-sectional beach profile showing simplified erosion during a storm event (top panel), and consequential depleted beach and higher water mark post-storm (bottom panel). Source: Yamamoto et al. (2012)

When a series of storm events occur in quick succession, there is no time for beach recovery. Each successive storm after the initial one thus erodes the beach from an already depleted state – similar in nature to a heavy rain event occurring on an already-saturated catchment. Because the beach is lower after the first storm, the high tide mark is further landward, making it easier for subsequent storm waves to erode the base of the dune (Figure 3 bottom panel).

It follows, therefore, that a series of low-magnitude storms in a cluster may have a comparable cumulative erosion impact as a single, higher-magnitude storm (assuming other characteristics, such as wave direction and storm duration, are the same).

It could be argued that a cluster of coastal storms should be regarded as a single event for erosion response, even if from an atmospheric perspective they are identifiably independent systems. In this case, it should be reflected in the return period estimate of coastal storms when wave height exceedance is being used as a metric to define erosion risk.

How many storms make a big storm?

To address this, we use a worked example:

If there were a pair of ECL events separated by less than one month (i.e. insufficient time for beach recovery), both with a nominal return period of 2 years, what would be the single-storm return period that delivers an equivalent amount of energy to the beach?

Using hourly wave height observations at the Sydney buoy from 1992 to 2019, the 2-year return period hourly significant wave height is approximately 6.2 m[1] (Figure 4, left panel).
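The empirical return-period estimation described in footnote [1] (return period = number of years in the dataset / rank, with linear interpolation between whole years) can be sketched as follows; the annual-maximum values below are invented for illustration, not the actual Sydney buoy record.

```python
import numpy as np

def empirical_return_level(annual_max_hs, target_rp):
    """Return level for target_rp years using RP = n_years / rank."""
    hs = np.sort(np.asarray(annual_max_hs, dtype=float))[::-1]  # largest first
    n = len(hs)
    rp = n / np.arange(1, n + 1)  # rank 1 -> RP = n years, rank n -> RP = 1 year
    # np.interp needs increasing x, so reverse both arrays
    return float(np.interp(target_rp, rp[::-1], hs[::-1]))

# Ten illustrative annual-maximum Hs values (m)
annual_max = [5.1, 6.9, 5.8, 4.9, 6.2, 5.5, 7.4, 5.0, 6.0, 5.3]
print(round(empirical_return_level(annual_max, 2.0), 2))  # 5.8
```

With ten years of data, the 2-year return level is simply the fifth-ranked annual maximum; for intermediate return periods the interpolation step supplies the estimate.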

Figure 4. Left panel: return periods associated with wave heights at Sydney. Right panel: synthetic storm curve for the 2-year return period storm for wave height (top) and wave period (bottom). Hs = significant wave height, Tp = peak energy wave period. The Sydney wave buoy is maintained and operated by Manly Hydraulics Laboratory (MHL). Wave data are available on request from MHL.

Using a method developed by Mortlock et al. (2017b)[2], we can take this peak-storm value to build a synthetic storm curve to estimate the total energy delivered to the beach during a storm of this magnitude (Figure 4, right panel, for a 2-year return period storm). Here we are assuming that the wave direction of both storms is the same.

From this, we can estimate the total wave energy flux of the storm. Wave energy flux is a measure of the total amount of power delivered by the storm along a metre length of beach[3], in Gigajoules per metre (GJ/m).
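A simplified version of that calculation uses the deep-water expression for wave energy flux per metre of wave crest, P = ρg²Hs²T/(64π), summed over the storm's hourly records. The article's values use the full method of Mortlock and Goodwin (2015) at 20 m depth, so the sketch below (with invented hourly values) is an approximation of the approach, not a reproduction of it.

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def energy_flux(hs, t):
    """Deep-water wave energy flux, in W per metre of wave crest."""
    return RHO_SEAWATER * G**2 * hs**2 * t / (64 * math.pi)

def storm_energy_gj_per_m(hourly_hs, hourly_t):
    """Total storm energy (GJ/m): hourly flux integrated over each hour."""
    joules = sum(energy_flux(h, t) * 3600.0 for h, t in zip(hourly_hs, hourly_t))
    return joules / 1e9

# Hypothetical 3-hour storm segment: Hs (m) and peak period Tp (s)
print(round(storm_energy_gj_per_m([4.0, 6.2, 5.0], [10.0, 12.0, 11.0]), 2))
```

A handy check on the flux term: in deep water it reduces to roughly 0.49 Hs²T kW/m, so a 4 m, 10 s sea carries about 78 kW per metre of crest.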

Using this approach, a 2-year return period storm contains approximately 41.2 GJ/m. This means that two of these storms occurring in quick succession have a combined energy of 82.4 GJ/m. Repeating this exercise for different return periods indicates that a pair of ECL events, each with a nominal return period of 2 years, delivers an equivalent amount of energy to the beach as a single 8 to 9-year return period event (Figure 5, left panel).
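The equivalence argument can be reproduced with a simple power-law model of the energy-return period curve. The exponent below is chosen for illustration so that the result lands near the article's 8 to 9-year figure; it is not fitted to the Sydney data.

```python
def storm_energy(rp, b=0.48, e2=41.2):
    """Energy flux (GJ/m) of a storm with return period rp, modelled as a
    power law E = a * rp**b, anchored so the 2-year storm carries e2 GJ/m."""
    a = e2 / 2.0 ** b
    return a * rp ** b

def equivalent_single_rp(rp1, rp2, b=0.48, e2=41.2):
    """Return period of a single storm matching the pair's combined energy."""
    a = e2 / 2.0 ** b
    combined = storm_energy(rp1, b, e2) + storm_energy(rp2, b, e2)
    return (combined / a) ** (1.0 / b)

# Two clustered 2-year storms are energetically like one 8-9 year storm
print(round(equivalent_single_rp(2.0, 2.0), 1))
```

Because the energy curve is concave in return period, doubling the energy more than quadruples the equivalent return period, which is why clustered moderate storms punch so far above their individual weight.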

Figure 5. Left panel: return periods of total storm wave energy flux for ECLs at Sydney, for when storms are treated as individual events (black line), and in the case where two storms of similar magnitude occur in quick succession (red line). Right panel: the difference between the red and black curves in left panel, with linear fit.

If we take the view that these two hypothetical storms should be considered a single event for erosive potential – and absent beach recovery in between – then it follows that we underestimate the recurrence of storm damage.

Taking the difference between return periods of equivalent energy between the cluster-pair ECLs (red curve Figure 5, left panel) and single-storm ECLs (black curve), we can illustrate the extent to which we are underestimating erosion frequency (Figure 5, right panel). Using this approach, two 5-year ECLs occurring in quick succession may lead to erosion equivalent to a 20-year return period single ECL storm event.


In some years there may be more potential for ECL occurrence and clustering than in others. The winter of 2019 was quiescent for coastal storms on the east coast of Australia because of a very strong positive Indian Ocean Dipole (IOD). In 2020, a neutral IOD means climate variability on the east coast is driven more by what is happening in the Pacific, which appears to be tending towards La Niña, which typically allows more convective low-pressure storms to develop. The point here is that in some years it may be more pertinent to consider the effects of ECL clustering in coastal risk assessment than in others.

Using the method described above, we can estimate that the first ECL in July had a return period of four years and the second a return period of less than one year; treated as a single storm, however, the total energy delivered was equivalent to the erosive potential expected of a single ECL with a return period of approximately seven years.

While this analysis is only for illustration, it demonstrates how there can be an under-estimation of coastal risk by assuming all ECLs drive independent erosion responses. If the cumulative erosion potential of clustered ECL events is not incorporated into coastal hazard planning, then we may continue to under-appreciate the importance of event clustering.


Browning, S. and Goodwin, I.D. (2016). Large-scale drivers of Australian East Coast Cyclones since 1851. Journal of Southern Hemisphere Earth Systems Science, 66, 125–151.

Hopkins, L. C., and Holland, G. J. (1997). Australian heavy-rain days and associated east coast cyclones: 1958–92. Journal of Climate, 10, 621–635.

Goda, Y. (2010). Random seas and design of maritime structures. World Scientific, Singapore, pp 464.

Mortlock, T.R. and Goodwin, I.D. (2015). Directional Wave Climate and Power Variability along the Southeast Australian Shelf. Continental Shelf Dynamics, 98, 36-53.

Mortlock, T.R. et al. (2017a). The June 2016 Australian East Coast Low: Importance of Wave Direction for Coastal Erosion Assessment. Water, 9(2), 121.

Mortlock, T.R. et al. (2017b). Open Beaches Project 1A – Quantification of Regional Rates of Sand Supply to the NSW Coast: Numerical Modelling Report. A report prepared by Department of Environmental Sciences and Risk Frontiers, Macquarie University, for the SIMS-OEH Coastal Processes and Responses Node of the NSW Adaptation Research Hub, May 2017, pp 155.

Shand, T. et al. (2011). NSW coastal inundation hazard study: coastal storms and extreme waves. Water Research Laboratory, University of New South Wales & Climate Futures, Macquarie University.

 [1] An empirical estimation of the return periods was used here, where the return period = number of years in dataset / rank. Wave heights were linearly interpolated to obtain estimates for whole number of years.

[2] This is based on an analysis of observed storm events and accounts for the relationship between peak-storm wave height and wave period after Goda (2010) and modified by Shand et al. (2011). Storm duration is capped at 76 hours. All storms modelled here reached the duration cap.

[3] The formula for the calculation of total storm wave energy flux is given in full in Mortlock and Goodwin (2015). The water depth these values were calculated for was 20 m, which is prior to wave breaking.

Rapid detection of earthquakes and tsunamis using sea floor fibre-optic cables

Paul Somerville, Chief Geoscientist, Risk Frontiers

Standard methods of earthquake detection use seismic waves, which travel through the earth at speeds up to about 8 km/sec for compressional waves. The compressional waves have speeds about 75% higher than the following shear waves, which are the waves that do damage in earthquakes. This is the basis for early earthquake warning systems, which have been operational in some countries for the past several decades. For nearby earthquakes, the warning provided by compressional waves is only a few seconds, but there can be several tens of seconds warning for more distant earthquakes. The warning time is much greater for tsunamis, because their top speeds across the ocean are only about 0.2 km/sec. This explains why tsunami warning is mainly based on seismic waves. Some warning systems use Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys to detect the passage of tsunamis in the ocean to supplement seismic methods, but these buoys are expensive to build, deploy and maintain. Similarly, Ocean Bottom Seismometers (OBS) are notoriously expensive, unreliable and easy to lose.

Seventy percent of the planet’s surface is covered by water, and seismometer coverage is limited to a handful of permanent OBS stations. Marra et al. (2018) showed that existing telecommunication optical fibre cables can detect seismic events when combined with frequency metrology techniques by using the fibre itself as the sensing element. They detected earthquakes over terrestrial and submarine links with lengths ranging from 75 to 535 kilometres and geographical distances from the earthquake’s epicentre ranging from 25 to 18,500 kilometres. If information about the occurrence of an earthquake can be transmitted by laser light, it will arrive much sooner than the seismic waves, because light in optical fibre travels at 204,190 km/sec, providing much more warning time. Marra et al. (2018) proposed that a global seismic network for real-time detection of underwater earthquakes could be implemented by applying this technique to the existing extensive submarine optical fibre network.
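A back-of-envelope comparison of arrival times shows the potential gain. The shear-wave speed below is an assumed crustal average (~4.5 km/s, consistent with the ~8 km/s compressional speed quoted above); the fibre signal speed is the 204,190 km/sec figure from the text.

```python
def warning_gain_seconds(distance_km, v_shear_km_s=4.5, v_fibre_km_s=204190.0):
    """Extra warning time if the detection message travels through fibre
    rather than arriving with the damaging shear waves."""
    return distance_km / v_shear_km_s - distance_km / v_fibre_km_s

# Gain grows linearly with epicentral distance; fibre transit is negligible
for d_km in (50, 200, 500):
    print(d_km, round(warning_gain_seconds(d_km), 1))
```

In practice the gain over existing early-warning systems is smaller, since those systems already exploit the faster compressional waves for detection; this sketch only bounds the physics.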

Distributed Acoustic Sensing (DAS) is a new, relatively inexpensive technology that is rapidly demonstrating its promise for recording earthquake and tsunami waves in a wide range of research and public safety applications (Zhan, 2020). DAS systems have the advantage of being already deployed across the oceans where deployments of DART and OBS are difficult and limited. DAS systems are expected to significantly augment present seismic and tsunami detection networks and provide more rapid information for several important applications including early warning.

Fibre-optic cables are commonly used as the channels along which seismic and other kinds of data are transmitted. With DAS, the hair-thin glass fibres themselves are the sensors as well as the transmission channel. Each observation episode begins with a pulse of laser light sent down the fibre. Internal natural flaws within the fibre, such as fluctuations in the refractive index of the glass, scatter this pulse (Figure 1). DAS uses this Rayleigh backscattering to infer the longitudinal strain, or strain change with time, every few metres along the fibre; this information is sent back to the source of the pulse. The strain in each fibre section changes when the cable is disturbed by seismic waves or other vibrations passing through the network, so the return signals carry a signature of the disturbance. It takes only a slight extension or compression of a fibre to change the distances – as measured along the fibre – between many scattering points. Interferometric analysis extracts how the signals from scattering points vary in timing or phase, and further processing reconstructs the seismic waves that caused the disturbance. In addition to detecting seismic waves, the data can also be used to detect pressure changes in the ocean itself, which could be used to detect tsunamis.
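The phase-to-strain step described above can be sketched as follows. The relation and the constants (gauge length, 1550 nm laser wavelength, refractive index 1.468, photoelastic scaling factor ~0.78) are typical textbook values for DAS interrogators, not those of any specific system.

```python
import math

def strain_from_phase(delta_phi_rad, gauge_length_m=10.0,
                      wavelength_m=1550e-9, n_refr=1.468, xi=0.78):
    """Longitudinal strain over one gauge length inferred from the
    interferometric phase change of the Rayleigh backscatter:
    strain = (lambda * dphi) / (4 * pi * n * xi * L_gauge)."""
    return (wavelength_m * delta_phi_rad) / (
        4.0 * math.pi * n_refr * xi * gauge_length_m)

# A 1-radian phase change over a 10 m gauge is a strain of order 1e-8,
# illustrating why DAS can resolve the tiny deformations seismic waves cause
print(strain_from_phase(1.0))
```

The inverse relationship with gauge length is the key trade-off in DAS design: longer gauges give better strain sensitivity but coarser spatial resolution along the cable.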

Figure 1. Backscattering from defects in the fibre carries information about the strain in every few metres of the cable. Source: Zhan (2020).
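The phase-to-strain conversion described above can be sketched numerically. The code below uses the standard interferometric DAS relation Δφ = (4π·n·ξ·G/λ)·ε; the gauge length, wavelength and fibre parameters are illustrative defaults, not values from any specific instrument.

```python
import math

def strain_from_phase(delta_phi_rad, gauge_length_m=10.0,
                      wavelength_m=1550e-9, n_fibre=1.468, xi=0.78):
    """Convert a measured optical phase change over one gauge length
    into longitudinal strain, using the interferometric relation
    delta_phi = (4 * pi * n * xi * G / lambda) * strain.
    xi ~ 0.78 accounts for the photoelastic effect in silica fibre."""
    return (delta_phi_rad * wavelength_m
            / (4 * math.pi * n_fibre * xi * gauge_length_m))

# A 1-radian phase change over a 10 m gauge corresponds to roughly
# 10 nanostrain, illustrating the sensitivity of the method:
eps = strain_from_phase(1.0)
```

Because the phase is sampled independently for each gauge length along the fibre, one cable yields thousands of such strain channels.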

Kamalov and Cantono (2020) point out that the links used by Marra et al. (2018) were short (under 535 km terrestrial and 96 km subsea) and lay in relatively shallow water (~200 m deep), limiting the practical application of the idea. To make the method more useful, they tested it on links that lie much deeper on the ocean floor and span much greater distances. Kamalov and Cantono (2020) describe a pilot project in which Google is using data from its existing undersea fibre-optic cables to detect earthquakes and tsunamis, drawing on the DAS approach described by Zhan (2020). Once operational, the system is planned to provide information complementary to that from dedicated seismic sensors, enhancing early warnings of earthquakes and tsunamis.

How much benefit might be possible? The warning time for ground shaking from offshore earthquakes, presently a few seconds for nearby earthquakes and several tens of seconds for more distant ones, could be doubled, providing significantly more time to take shelter using the “drop, cover and hold on” rule. For tsunamis, the rule is to get to higher ground, and although evacuation takes longer than drop, cover and hold on, there are usually at least several tens of minutes of warning. At very close distances from the tsunami source, however, there may be only about five minutes of warning time, and an additional half-minute could potentially save lives. Locally-based tsunami warning systems have been installed in some Southeast Asian countries since the 2004 Sumatra earthquake and tsunami to augment regional systems, but not all of them have been well maintained, so using infrastructure that is already in place to provide more warning time could be beneficial.
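The extra warning time comes from simple arithmetic: an alert carried by light in fibre travels at roughly 204,000 km/s, while crustal S waves travel at a few km/s and deep-ocean tsunamis at about 0.2 km/s. The hazard speeds below are illustrative round numbers.

```python
def warning_gain_s(distance_km, hazard_speed_kms, fibre_speed_kms=204_000):
    """Seconds gained by sending the alert through fibre instead of
    waiting for the hazard itself to arrive (detection and processing
    delays are ignored in this sketch)."""
    return distance_km / hazard_speed_kms - distance_km / fibre_speed_kms

# For a source 100 km offshore, fibre delivery is effectively
# instantaneous, so the gain is nearly the hazard's full travel time:
quake_gain = warning_gain_s(100, 3.5)    # S waves ~3.5 km/s: ~28.6 s
tsunami_gain = warning_gain_s(100, 0.2)  # tsunami ~0.2 km/s: ~500 s
```

In practice the usable gain is smaller, since detection, discrimination and alert dissemination each consume part of this window.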


Kamalov, Valey and Mattia Cantono (2020). What’s shaking? Earthquake detection with submarine cables. Google Cloud Blog, July 16, 2020.

Marra, G., C. Clivati, R. Luckett, A. Tampellini, J. Kronjäger, L. Wright, A. Mura, F. Levi, S. Robinson, A. Xuereb, et al. (2018). Ultrastable laser interferometry for earthquake detection with terrestrial and submarine cables, Science 361, no. 6401, doi: 10.1126/science.aat4458.

Zhan, Z. (2020). Distributed Acoustic Sensing Turns Fiber-Optic Cables into Sensitive Seismic Antennas, Seismol. Res. Lett. 91, 1–15, doi: 10.1785/0220190112.



A Short Path to Coronavirus Herd Immunity?

Paul Somerville, Chief Geoscientist, Risk Frontiers

This week a number of remarkable articles on herd immunity to COVID-19, the disease caused by the coronavirus SARS-CoV-2, have been posted without peer review (Britton et al., 2020; Lourenco et al., 2020); these and other studies have been reviewed by Hamblin (2020). This briefing summarises the main conclusions of these articles.

Until a vaccine is developed, management of the global pandemic ranges across regions from elimination (e.g. New Zealand), through effective suppression (e.g. Australia), to reliance on levels of infection large enough to produce herd immunity (e.g. the United States) – although, as noted at the end of this article, it is not clear that immunity is durable enough for herd immunity to develop.

Lourenco et al. (2020) assert that some of the population may already have a high level of immunity to COVID-19 without ever having caught it. They point to evidence suggesting that exposure to seasonal coronaviruses, such as the common cold, may have already provided some with a degree of immunity, and that others may be more naturally resistant to infection. Although it is widely believed that the herd immunity threshold (HIT) required to prevent a resurgence of COVID-19 is more than 50% for any epidemiological setting, their modelling explains how differing levels of pre-existing immunity between individuals could put HIT as low as 20%. These results may help explain the large degree of regional variation observed in infection prevalence and cumulative deaths, and suggest that sufficient herd immunity may already be in place to substantially mitigate a potential second wave.

The effects of the coronavirus are not linear; the virus affects individuals and populations in very different ways. The case-fatality rate varies drastically between adults under 40 and the elderly. This same characteristic variability of the virus – what makes it so dangerous in the early stages of outbreaks – also gives a clue as to why those outbreaks could burn out earlier than initially expected. In countries with uncontained spread of the virus, such as the U.S., exactly what the herd-immunity threshold turns out to be could make a dramatic difference in how many people fall ill and die. Without a better plan, this threshold seems to have become central to the fates of many people around the world.

Gabriela Gomes, a professor at the University of Strathclyde in Glasgow, Scotland, also believes that the HIT may be much lower than currently thought. She was drawn to the field by frailty variation – why the same diseases manifest so differently from one person to the next. She studies chaos – specifically, patterns in nonlinear dynamics – and uses mathematics to deconstruct the chains of events that can lead two people with the same disease to wildly different outcomes. For the past few months, she has been collaborating with an international group of mathematicians to run models that incorporate the many variations in how this virus seems to be affecting people. Her goal has been to move as far away from simple averages as possible, and to incorporate as many of the disparate effects of the virus as possible when making new forecasts.

In normal times, herd immunity is calculated based on a standardized intervention with predictable results: vaccination. Everyone is exposed to the same (or very similar) immune-generating viral components, and it is possible to calculate what percentage of people need that exposure in order to develop meaningful immunity across the population.

This is not the case when a virus is spreading in the real world in the absence of a vaccine. Instead, the complexities of real life create heterogeneity: people are exposed to different amounts of the virus, in different contexts, via different routes. A virus that is new to the species creates more variety in immune responses. Some of us are more susceptible to being infected, and some are more likely to transmit the virus once infected. Even small differences in individual susceptibility and transmission can, as with any chaotic phenomenon, lead to very different outcomes as the effects compound over time on the scale of a pandemic.

In a pandemic, the heterogeneity of the infectious process also makes forecasting difficult. Differences in outcome can grow exponentially, reinforcing one another until the situation becomes, through a series of individually predictable moves, radically different from other possible scenarios. Gomes contrasts two models: one in which everyone is equally susceptible to coronavirus infection (a homogeneous model), and the other in which some people are more susceptible than others (a heterogeneous model). Even if the two populations start out with the same average susceptibility to infection, you do not get the same epidemics. The outbreaks look similar at the beginning, but in the heterogeneous population, individuals are not infected at random. The highly susceptible people are more likely to get infected first, causing selective depletion of their fraction of the population. As a result, the average susceptibility becomes lower and lower over time.
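This selective-depletion effect can be reproduced with a toy two-group SIR model. The sketch below compares a homogeneous population with a heterogeneous one having the same average susceptibility; the group sizes, susceptibility ratio, R0 and recovery rate are all illustrative, not taken from any of the cited studies.

```python
import numpy as np

def final_attack_rate(susceptibilities, weights, r0=3.0, gamma=0.1,
                      dt=0.1, steps=20000):
    """Euler-integrate an SIR model split into susceptibility groups.
    Each group's infection rate is scaled by its relative susceptibility,
    while the force of infection is shared by all groups.
    Returns the fraction of the population ever infected."""
    sigma = np.asarray(susceptibilities, dtype=float)
    s = np.asarray(weights, dtype=float)   # susceptible fraction per group
    i = 1e-4 * s                           # small seed in each group
    s = s - i
    beta = r0 * gamma                      # transmission rate (mean susceptibility 1)
    for _ in range(steps):
        new_inf = beta * i.sum() * sigma * s * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
    return 1.0 - s.sum()

# Homogeneous benchmark, R0 = 3:
homog = final_attack_rate([1.0], [1.0])
# Heterogeneous: half the population is four times as susceptible as the
# other half (0.4 vs 1.6), with the same mean susceptibility of 1.0:
heterog = final_attack_rate([0.4, 1.6], [0.5, 0.5])
# The highly susceptible half is depleted first, so the epidemic burns out
# after infecting a smaller share of the population (heterog < homog).
```

With these illustrative numbers the homogeneous epidemic infects roughly 94% of the population before burning out, while the heterogeneous one stops near 80%, even though both start with the same average susceptibility.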

Effects like this selective depletion can quickly decelerate a virus’s spread. The compounding effects of heterogeneity seem to show that the onslaught of cases and deaths seen in initial spikes around the world are unlikely to happen a second time. Based on data from several countries in Europe, Gomes’s results show a herd-immunity threshold of less than 20%, consistent with Lourenco et al. (2020) but much lower than that of other models. If that proves to be correct, it would be life-altering news. It would not mean that the virus is gone, but if roughly one out of every five people in a given population is immune, that seems to be enough to slow its spread to the point where each infectious person infects an average of less than one other person. Under this condition the effective reproduction number – the basic reproduction number R0 (the average number of new infections caused by an infected individual in a fully susceptible population) multiplied by the fraction still susceptible – falls below 1, so the number of infections steadily declines: herd immunity. It would mean, for instance, that at 25% antibody prevalence, New York City could continue its careful reopening without fear of another major surge in cases.
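A closed form from the heterogeneity literature makes the same point without simulation: for gamma-distributed susceptibility with coefficient of variation CV, the herd-immunity threshold becomes 1 − (1/R0)^(1/(1+CV²)) instead of the classical 1 − 1/R0. The R0 and CV values below are illustrative, not estimates from the studies cited above.

```python
def hit_classical(r0):
    """Herd-immunity threshold in a homogeneous population."""
    return 1.0 - 1.0 / r0

def hit_heterogeneous(r0, cv):
    """Threshold when susceptibility is gamma-distributed with
    coefficient of variation cv (cv = 0 recovers the classical case)."""
    return 1.0 - (1.0 / r0) ** (1.0 / (1.0 + cv * cv))

# With R0 = 3, the classical threshold is ~67%, but strong individual
# variation (cv = 2) pulls it down to ~20%:
classical_hit = hit_classical(3.0)
hetero_hit = hit_heterogeneous(3.0, 2.0)
```

The formula shows why the estimated threshold is so sensitive to how much individual variation the model assumes, which is exactly where the cited studies disagree.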

Gomes admits that, although this does not make intuitive sense, homogeneous models do not generate curves that match the current data. Dynamic systems develop in complex and unpredictable ways, and the best we can do is continually update models based on what is happening in the real world. It is unclear why the threshold in her models is consistently at or below 20%, but if heterogeneity is not the cause, it is unclear what is.

Tom Britton at Stockholm University has also been building epidemiological models based on data from around the globe (Britton et al., 2020). He believes that variation in susceptibility and exposure to the virus clearly seems to be reducing estimates for herd immunity, and thinks that a 20% threshold, while unlikely, is not impossible.

By definition, dynamic systems do not deal in static numbers. Any herd-immunity threshold is context-dependent and constantly shifting: it changes over time and space, depending on R0. During the early stage of an outbreak of a new virus (to which no one has immunity), that number will be higher. It is skewed upward by super-spreading events, and within populations that lack heterogeneity, such as a nursing home or a school, the herd-immunity threshold may be above 70%.

Heterogeneity of behaviour may be the key determinant of our futures, since R0 clearly changes with behaviour. COVID-19 is the first disease in modern times where the whole world has changed its behavior and disease spread has been reduced. Social distancing and other reactive measures have changed the R0 value, and they will continue to do so. The virus has certain immutable properties, but there is nothing immutable about how many infections it causes in the real world. The herd immunity threshold can change based on how a virus spreads. The spread keeps on changing based on how we react to it at every stage, and the effects compound. Small preventive measures have big downstream effects. The herd in question determines its immunity.

There is no mystery in how to drop the R0 to below 1 and reach an effective herd immunity: masks, social distancing, handwashing. It appears that places like New York City, having gone through an initial onslaught of cases and deaths, may be in a version of herd immunity, or at least safe equilibrium.* However, judging by the decisions some leaders have made so far, it seems that few places in the United States will choose to live this way. Many cities and states are pushing backwards into an old way of life, where the herd-immunity threshold is high. Dangerous decisions will be amplified by the dynamic systems of society. There will only be as much chaos as we allow.

All of these models assume that, after infection, people obtain immunity. However, COVID-19 is a new disease, so no one can be sure that infected people become immune reliably, or how long immunity lasts. Britton et al. (2020) note that there are no clear instances of double infections so far, which suggests that this virus creates immunity for at least some meaningful length of time, as most viruses do. However, earlier this week, an unreviewed preprint (Seow et al., 2020) suggested that immunity to COVID-19 can vanish within months, which, if true, indicates that the virus could become endemic. Seow et al. found that 60% of people retained the potent level of antibodies required to resist future infection in the first two weeks of displaying symptoms, but that proportion dropped to less than 17% after three months. This prompted Prof Jonathan Heeney, a virologist at the University of Cambridge, to state that the findings had put “another nail in the coffin of the dangerous concept of herd immunity,” demonstrating the remarkable state of uncertainty that currently exists among epidemiologists.

*Note that some chaotic systems can have stable equilibria (Wang et al., 2017).


Britton, Tom, Frank Ball and Pieter Trapman (2020). A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science, 23 June 2020, eabc6810, doi: 10.1126/science.abc6810.

Hamblin, James (2020). A New Understanding of Herd Immunity – The portion of the population that needs to get sick is not fixed. We can change it. The Atlantic, July 13, 2020.

Lourenco, Jose, Francesco Pinotti, Craig Thompson, and Sunetra Gupta (2020). The impact of host resistance on cumulative mortality and the threshold of herd immunity for SARS-CoV-2. Preprint.

Seow, Jeffrey, et al. (2020). Longitudinal evaluation and decline of antibody responses in SARS-CoV-2 infection. Preprint.

Wang, X., V. Pham, S. Jafari, C. Volos, J. M. Munoz-Pacheco and E. Tlelo-Cuautle (2017). A New Chaotic System with Stable Equilibrium: From Theoretical Model to Circuit Implementation. IEEE Access 5, 8851–8858, doi: 10.1109/ACCESS.2017.2693301.


Risk Frontiers Seminar Series 2020

Due to the COVID-19 pandemic Risk Frontiers’ Annual Seminar Series for 2020 will be presented as a series of three one-hour webinars across three weeks.

Webinar 1. Thursday 17th September, 2:30-3:30pm
Webinar 2. Thursday 24th September, 2:30-3:30pm
Webinar 3. Thursday 1st October, 2:30-3:30pm

Risk Modelling and Management Reloaded

Natural hazards such as floods, bushfires, tropical cyclones, thunderstorms (including hail) and drought are often thought of, and treated as, independent events despite knowledge of this not being the case. Understanding the risk posed by these hazards and their relationship with atmospheric variability is of great importance in preparing for extreme events today and in the future under a changing climate. Risk Frontiers’ ongoing research and development is focussed on incorporating this understanding into risk modelling and management as we view this as the way of the future. We look forward to sharing some of our work during our 2020 Seminar Series.

Presentation Day 1

  • Introduction to Risk Frontiers Seminar Series 2020
  • Historical analysis of Australian compound disasters – Andrew Gissing, Dr Stuart Browning

Presentation Day 2

  • Climate conditions preceding the 2019/20 compound event season – Dr Stuart Browning
  • Black Summer learnings and Risk Frontiers’ Submission to the Royal Commission into National Natural Disaster Arrangements – Dr James O’Brien, Lucinda Coates, Andrew Gissing, Dr Ryan Crompton

Presentation Day 3

  • Introduction to Risk Frontiers’ ‘ClimateGLOBE’ physical climate risk framework
  • Incorporating climate change scenarios into catastrophe loss models – Dr Mingzhu Wang, Dr Tom Mortlock, Dr Ryan Springall, Dr Ryan Crompton

The Difference Between Complicated and Complex Systems

Paul Somerville, Chief Geoscientist, Risk Frontiers

This article, published online under the title “What is the Difference Between Complicated and Complex Systems… and Why is it Important in Understanding the Systemic Nature of Risk?,” is the third in a series of eight articles co-authored by Marc Gordon (@Marc4D_risk), United Nations Office for Disaster Risk Reduction (UNDRR) and Scott Williams (@Scott42195), United Nations Development Program (UNDP). This article builds upon the chapter on ‘Systemic Risk, the Sendai Framework and the 2030 Agenda’ included in the Global Assessment Report on Disaster Risk Reduction 2019. Paragraph 15 of the Sendai framework states that “The present Framework will apply to the risk of small-scale and large-scale, frequent and infrequent, sudden and slow-onset disasters caused by natural or man-made hazards, as well as related environmental, technological and biological hazards and risks. It aims to guide the multihazard management of disaster risk in development at all levels as well as within and across all sectors.” These articles explore the systemic nature of risk made visible by the COVID-19 global pandemic, climate change and cyber hazards, and what needs to change and how we can make the paradigm shift from managing disasters to managing risks. This article did not include figure captions but these have been added by the editor. 

We need to clarify the distinction between a ‘complicated’ and a ‘complex’ system. A complicated system can be (dis-)assembled and understood as the sum of its parts, just as a car is assembled from thousands of well-understood parts which, when combined, allow for simpler and safer driving. Similarly, multi-hazard risk models allow risks to be aggregated into well-behaved, manageable or insurable risk products.

By contrast, a complex system exhibits emergent properties that arise from interactions among its constituent parts, in which relational information is of critical importance. To understand a complex system, it is not enough to know the parts; it is necessary to understand the dynamic nature of the relationships between them. Indeed, in a complex system it is impossible to know all the parts at any point in time. The human body, a city traffic system and a national public health system are examples of complex systems.

Figure 1. Contrast between Complicated and Complex Systems

The priorities for action of the Sendai Framework spur a new understanding of risk. They reinforce the value of discerning the true nature and behaviour of systems, rather than thinking of systems as collections of discrete elements. Risk management models, as well as economic models and related policymaking, have tended to treat systems as complicated: simplified, stylised models are applied to single entities or particular channels of interaction to first define and then label the risk phenomena; methods are then negotiated by stakeholders to quantify, or otherwise objectively reflect, the risk in question, which is then generalised again to make policy choices.

Most prevailing risk management tools assume that the underlying systems are ‘complicated’ rather than ‘complex’; indeed, these tools are often designed to suppress complexity and uncertainty. This approach is outdated and potentially very harmful – not least in the context of the developing COVID-19 pandemic – and is likely to produce results that fail to capture the rising complexity and the need to navigate the full topography of risks.

We must improve our understanding of the interdependencies between system components, including precursor signals and anomalies, systems reverberations, feedback loops and sensitivities to change. Ultimately, the choices made right now in respect of risk and resilience to favour sustaining human health in the face of the COVID-19 pandemic will determine progress towards the goals of the 2030 Agenda and beyond.

Figure 2. Limitations of the current Non-Systemic approach (red) and how they are addressed by the advocated Systemic approach (green).

Risk and uncertainty are measures of deviation from ‘normal’. Risk is the part of the unexpected that can be quantified by the calculation of probabilities; uncertainty is the rest – information that may exist but is unavailable, not recognised as relevant, or unknowable. In a complex system, which is inherently unpredictable, probabilities for uncertainties cannot be reliably measured in a manner currently acceptable to the global risk management community, including governments. Converting uncertainty – which essentially emanates from the dynamic, relational nature of complex system behaviour – into acceptable risk quantities is currently very difficult, even impossible, and some uncertainties in any complex system will always remain unmeasurable.

Understanding sensitivities to change and system reverberations is far more important, and more challenging, in the context of complex systems – particularly when dealing with very large human, economic and ecological loss and damage across the planet, as is the case with the COVID-19 pandemic. Simulations of such systems show that very small changes can produce almost unnoticeable but still identifiable initial ripples. These are then amplified by non-linear effects and associated path dependencies, causing changes that lead to significant, and potentially irreversible, consequences. This is what the world is experiencing now with the highly infectious COVID-19 outbreak: country after country has imposed lockdowns and strict restrictions on human interactions, because individuals do not fully appreciate that a single infected (and possibly asymptomatic) person can provoke tens of thousands of cases of infection within weeks.
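The claim that one case can seed tens of thousands within weeks is simple compounding. The sketch below uses an illustrative reproduction number of 3 and a generation interval of roughly 5 days, so six weeks corresponds to about 8 generations of transmission.

```python
def cumulative_cases(r, generations):
    """Total infections after the given number of transmission
    generations, starting from one case: 1 + r + r^2 + ... + r^n."""
    return sum(r ** g for g in range(generations + 1))

# Unchecked spread: each case infects 3 others, 8 generations (~6 weeks),
# so a single case grows to roughly ten thousand cumulative infections:
total = cumulative_cases(3, 8)       # 9841
# Halving transmission (r = 1.5) over the same period yields under
# 75 cases - small preventive measures have big downstream effects:
mitigated = cumulative_cases(1.5, 8)
```

The contrast between the two totals is the non-linear amplification the paragraph above describes: a modest change in behaviour early on changes the outcome by two orders of magnitude.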

Risk is everyone’s business. Almost everyone across the world is starting to understand this, with physical distancing fast becoming the global norm. We must now review how our relationship with behaviour and choice transfers to individual and collective accountability for risk creation and amplification, or for risk reduction. This understanding must translate into action.

Increasing complexity in a networked world of complex, tightly coupled human systems (economic-political-technical-infrastructure-health) within nature can create instability and move beyond control. It may not be possible to understand this ahead of time (that is, ex ante). This inability to understand and manage systemic risk is an important challenge for current risk assessments, including in the context of the response to the COVID-19 pandemic, the wider context of the Sendai Framework and the achievement of the 2030 Agenda on Sustainable Development.

To allow humankind to embark on a development trajectory which is, at the very least, manageable, and at best sustainable and regenerative, consistent with the 2030 Agenda on Sustainable Development, a fundamental rethink and redesign of how to deal with systemic risk is essential; starting with a shift in mindset from ‘complicated’ to ‘complex’.
