Why isn’t the bond market more worried about climate change?

Paul Somerville and Thomas Mortlock

In December 2017, the credit rating agency Moody’s warned U.S. cities and states to prepare for the effects of climate change or risk being downgraded. It explained how it assesses the credit risks to a city or state that’s being impacted by climate change — whether that impact be a short-term “climate shock” like a wildfire, hurricane or drought, or a longer-term “incremental climate trend” like rising sea levels or increased temperatures. It also takes into consideration communities’ preparedness for such shocks and their efforts to adapt to longer-term climate trends.

A recent report by Charles Donovan and Christopher Corbishley of Imperial College predicts that countries disproportionately impacted by climate change could have to pay an extra $170 billion in interest payments over the next 10 years. The following article by Henry Grabar, which appeared on Slate on Oct. 28, 2017, explains why the bond market is not more worried about climate change.

The article draws examples from recent flooding in US cities and the infamous National Flood Insurance Program. Parts of the US like Miami, New Orleans and New York are feeling the effects of sea level rise now during extreme weather events, in part because of the low-lying topography and high population density of these coastal areas. In Eastern Australia, shorelines have – until now – broadly been able to keep pace with a rising tidal prism because of antecedent sediment conditions and a relatively steep coastal hinterland.

However, high and rising coastal populations and expanding infrastructure (~85% of Australia’s population currently lives near the coast) leave some big east coast cities like Newcastle, Brisbane and Cairns with significant exposure to higher sea levels in the coming decades.

We should perhaps look to examples in the US and elsewhere as a present-day ‘litmus test’ of financial markets’ response, and a window onto the near future, when sea level rise begins to have a more significant impact on some of the big east coast cities in Australia.

Early this month, when the annual king tide swept ocean water into the streets of Miami, the city’s Republican mayor, Tomás Regalado, used the occasion to stump for a vote. He’d like Miami residents to pass the “Miami Forever” bond issue, a $400-million property tax increase to fund seawalls and drainage pumps (they’ll vote on it on Election Day). “We cannot control nature,” Regalado says in a recent television ad, “but we can prepare the city.”

Miami is considered among the most exposed big cities in the U.S. to climate change. One study predicts the region could lose 2.5 million residents to climate migration by the end of the century. As on much of the Eastern Seaboard, the flooding is no longer hypothetical. Low-lying properties already get submerged during the year’s highest tides. So-called “nuisance flooding” has surged 400 percent since 2006.

Business leaders are excited about the timing of the vote in part because Miami currently has its best credit ratings in 30 years, meaning that the city can borrow money at low rates. Amid the dire predictions and the full moon floods, that rating is a bulwark. It signifies that the financial industry doesn’t think sea level rise and storm risk will prevent Miami from paying off its debts. In December, a report issued by President Obama’s budget office outlined a potential virtuous cycle: Borrow money to build seawalls and the like while your credit is good, and your credit will still be good when you need to borrow in the future.

Figure 1 (A) Even non-cyclonic heavy rain events can leave Miami Beach flooded, putting assets and people at risk. Source: National Weather Service (2015).
(B) Flooded homes in New Jersey after Superstorm Sandy made landfall in 2012. Source: AFP PHOTO/US Coast Guard.

The alternative: Flood-prone jurisdictions go into the financial tailspin we recognize from cities like Detroit, unable to borrow enough to protect the assets whose declining value makes it harder to borrow. The long ribbon of vulnerable coastal homes from Brownsville to Acadia has managed to stave off that cycle in part thanks to a familiar, federally backed consensus between homebuyers and politicians. Homebuyers continue to place high values on homes, even when they’ve suffered repeated flood damage. That’s because the federal government is generous with disaster aid and its subsidy of the National Flood Insurance Program, which helps coastal homeowners buy new washing machines when theirs get wrecked. Banks require coastal homeowners with FHA-backed mortgages to purchase flood insurance, and in turn, coastal homes are rebuilt again and again and again—even when it might no longer be prudent.

But there’s another element that helps cement the bargain: investors’ confidence that coastal towns will pay back the money they borrow. Homebuyers are irrational. Politicians are self-interested. But lenders—and the ratings agencies that help direct their investments—ought to have a more clinical view. Evaluating long-term risk is exactly their business model. If they thought environmental conditions threatened investments, they would sound the alarm—or just vote with their wallets. They’ve done it before—cities like New Orleans; Galveston, Texas; and Seaside Heights, New Jersey, were all downgraded by rating agencies after damage from Hurricanes Katrina, Ike, and Sandy. But all have since rebounded. There does not appear to be a single jurisdiction in the United States that has suffered a credit downgrade related to sea level rise or storm risk. Yet.

To understand why, it helps to look at communities like Seaside Heights, the boardwalk enclave along the Jersey Shore whose marooned roller coaster provided the definitive image of the 2012 storm. Seaside Heights was given an A3 rating from Moody’s in 2013, meaning “low credit risk.” Ocean County, New Jersey—the county in which Seaside Heights sits—has a AAA rating. In the summer of 2016, before Ocean County sold $31 million in 20-year bonds, neither Moody’s Investors Service nor S&P Global Ratings asked about how climate change might affect its finances, the county’s negotiator told Bloomberg this summer. “It didn’t come up, which says to me they’re not concerned about it.”

The credit rating agencies would deny that characterization—to a point. They do know about sea level rise. They just don’t think it matters yet. In 2015, analysts from Fitch concluded, “sea level rise has not played a material role” in assessing creditworthiness, despite “real threats.” Hurricane Sandy had no discernible effect on the median home prices in Monmouth, Ocean, and Atlantic Counties, which make up New Jersey’s Atlantic Coast. The effect on tourism spending was also negligible.

“We take a lot from history, and historically what’s happened is that these places are desirable to be in,” explains Amy Laskey, a managing director at Fitch Ratings. “People continue to want to be there and will rebuild properties, usually with significant help from federal and state governments, so we haven’t felt it affects the credit of the places we rate.”

There are three reasons for that. The first is that disasters tend to be good for credit, thanks to cash infusions from FEMA’s generous Disaster Relief Fund. “The tax base of New Orleans now is about twice what it was prior to Katrina,” Laskey says, despite a population that remains 60,000 persons shy of its 2005 peak. “Longer term what tends to happen is there’s rebuilding, a tremendous influx of funds from the federal and state governments and private insurers.” Local Home Depots are busy. Rental apartments fill up with construction workers. Contractors have to schedule work months in advance. Look at Homestead, Florida, Laskey advised, a sprawling city south of Miami that was nearly destroyed by Hurricane Andrew. Today it is bigger than ever. “If there was going to be a place that wasn’t going to come back, that would have been it.”

What emerges from the destruction, for the most part, are communities full of properties that are more valuable than they were before, because they’re both newer and better prepared for the next storm. Or as a Moody’s report on environmental risk puts it, “generally disasters have been positive for state finances.” But this is entirely dependent on federal largesse: After Massachusetts’ brutal winter of 2015, FEMA granted only a quarter of the state’s request for aid. Moody’s determined that could negatively impact the credit ratings of local governments that had to shoulder the cost of snow and ice removal.

Second is that people still want to live on the shore. “The amenity value of the beach is something you can enjoy every day of the summer,” says Robert Muir-Wood, the chief research officer at Risk Management Solutions. “People may say, ‘The benefits of living on the beach to my health and wellbeing outweigh the impact of the flood.’” That calculus is strongly influenced by affordable flood insurance policies, but it has not changed. In a way, despite the risks, the sea is a more dependable economic engine for a community than, say, a factory that could shut its doors and move away any minute. Most bonds get paid off from property taxes. If property values remain high, bondholders have little to worry about. If, on the other hand, property values fall, tax rates must rise. If buildings go into foreclosure, or neighborhoods undergo “buy-outs” to restore wetlands or dunes, more of the burden to pay off that new seawall falls on everyone else.

Third: Most jurisdictions are large. New Jersey’s coastal counties also contain thousands of inland homes whose risk exposure is much, much lower. Adam Stern, a co-head of research at Boston’s Breckinridge Capital Advisors, argues that the first credit problems will come for small communities devastated by major storms.

Still, Stern said, his firm looks at these issues. “One of the things we try to get at when we look at an issuer of bonds that’s on the coast: Do you take climate change seriously? Are you planning for that?” Even so, he said, bond buyers—like everyone else—discount the value of future money, and hence future risk. When could the breaking point for the muni market come? Stern predicts it will happen when property values start to discernibly change in reaction to climate risk. It’s a game of chicken between infrastructure investors and homeowners.

A global slowdown of tropical-cyclone translation speed and implications for flooding

Thomas Mortlock, Risk Frontiers.

As the Earth’s atmosphere warms, the atmospheric circulation changes. These changes vary by region and time of year, but there is evidence to suggest that anthropogenic warming causes a general weakening of summertime tropical circulation. Because tropical cyclones are carried along within the ambient environmental wind, there is an expectation that the translation speed of tropical cyclones has slowed, or will slow, with warming.

Severe Tropical Cyclone Debbie, which made landfall near Mackay in March 2017, was an unusually slow event, crossing the coast at only seven kilometres per hour. Likewise, the “stalling” of Hurricane Harvey over Texas in August 2017 is another example of a recent, slow-moving event. While two events by no means constitute a trend, slow-moving cyclones can be especially damaging in terms of the rainfall volumes that are precipitated out over a single catchment or town (Fig. 1). A slow translation speed means strong wind speeds are sustained for longer periods of time, and it can also increase the surge-producing potential of a tropical cyclone.

Figure 1. Flooding during TC Debbie; left – flood gauge in the Fitzroy River; centre – flooded runway at Rockhampton Airport; right – the flooded Logan River and Pacific Motorway. Source: Office of the Inspector-General Emergency Management (2017).

But have changes in the translation speeds of tropical cyclones been observed in the Australian region and can we draw any conclusions about any impact of these changes on related flooding?

A recent article published in the journal Nature by James Kossin of NOAA looks at tropical cyclone translation speeds from 1949 through to 2016, using data from the US National Hurricane Center (NHC) and Joint Typhoon Warning Center (JTWC), and finds a 10 percent global decrease. For western North Pacific and North Atlantic tropical cyclones, he reports a slowdown over land areas of 30 percent and 20 percent respectively, and a slowdown of 19 percent over land areas in Australia.

The following is an extract from Kossin’s article, followed by some comments on the significance of his work for the Australian region. The full article and associated references are available here.

Kossin’s article – in short

Anthropogenic warming, both past and projected, is expected to affect the strength and patterns of global atmospheric circulation. Tropical cyclones are generally carried along within these circulation patterns, so their past translation speeds may be indicative of past circulation changes. In particular, warming is linked to a weakening of tropical summertime circulation and there is a plausible a priori expectation that tropical-cyclone translation speed may be decreasing. In addition to changing circulation, anthropogenic warming is expected to increase lower-tropospheric water-vapour capacity by about 7 percent per degree (Celsius) of warming. Expectations of increased mean precipitation under global warming are well documented. Increases in global precipitation are constrained by the atmospheric energy budget but precipitation extremes can vary more broadly and are less constrained by energy considerations.
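The ~7 percent per degree figure is the Clausius-Clapeyron scaling of saturation vapour pressure with temperature. A quick numerical sketch (ours, not Kossin's; the 7 percent rate is approximate) of what it implies for the warming quoted later in the extract:

```python
# Illustrative Clausius-Clapeyron scaling: water-vapour capacity grows
# roughly 7% per degree Celsius of warming (approximate figure).

def vapour_capacity_factor(warming_deg_c, rate_per_deg=0.07):
    """Multiplicative increase in lower-tropospheric water-vapour
    capacity after the given amount of warming."""
    return (1 + rate_per_deg) ** warming_deg_c

# The ~0.5 degC of global-mean warming over 1949-2016 quoted in the
# extract implies roughly a 3-4% increase in moisture capacity:
print(round(vapour_capacity_factor(0.5), 3))  # 1.034
```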

Because the amount of local tropical-cyclone-related rainfall depends on both rain rate and translation speed (with a decrease in translation speed having about the same local effect, proportionally, as an increase in rain rate), each of these two independent effects of anthropogenic warming is expected to increase local rainfall.
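The proportionality above can be made concrete with a back-of-envelope sketch (an illustration of the stated scaling, not a calculation from the paper): the rainfall total at a fixed point scales with rain rate times the time the rain field takes to pass, which is inversely proportional to translation speed.

```python
# Back-of-envelope: local TC rainfall total ~ rain rate / translation speed,
# so a 10% slowdown has about the same local effect as a 10% higher rain rate.
# All input values below are illustrative, not observed storm parameters.

def local_rain_total(rain_rate_mm_hr, storm_extent_km, translation_speed_km_hr):
    """Total rainfall (mm) at a fixed point while a uniform rain field of
    the given along-track extent passes over it."""
    residence_time_hr = storm_extent_km / translation_speed_km_hr
    return rain_rate_mm_hr * residence_time_hr

baseline = local_rain_total(rain_rate_mm_hr=20, storm_extent_km=100,
                            translation_speed_km_hr=15)
slowed = local_rain_total(rain_rate_mm_hr=20, storm_extent_km=100,
                          translation_speed_km_hr=15 * 0.9)  # 10% slowdown

print(round(baseline))              # 133 (mm)
print(round(slowed / baseline, 3))  # 1.111, i.e. ~11% more rain
```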

Time series of annual-mean global and hemispheric translation speed are shown in Fig. 2, based on global tropical-cyclone ‘best-track’ data. A highly significant global slowdown of tropical-cyclone translation speed is evident, of −10 percent over the 68-yr period 1949–2016. During this period, global-mean surface temperature has increased by about 0.5 °C. The global distribution of translation speed exhibits a clear shift towards slower speeds in the second half of the 68-yr period, and the differences are highly significant throughout most of the distribution.

Figure 2. Global (a) and hemispheric (b) time series of annual-mean tropical-cyclone translation speed and their linear trends. Grey shading indicates 95 percent confidence bounds. Source: Kossin (2018)

This slowing is found in both the Northern and Southern Hemispheres (Fig. 2b) but is stronger and more significant in the Northern Hemisphere, where the annual number of tropical cyclones is generally greater. The time series for the Southern Hemisphere exhibits a change-point around 1980, but the reason for this is not clear.

The trends in tropical-cyclone translation speed and their signal-to-noise ratios vary considerably when the data are parsed by region but slowing over water is found in every basin except the northern Indian Ocean. Significant slowing of −20 percent in the western North Pacific Ocean and of −15 percent in the region around Australia (Southern Hemisphere, east of 100° E) are observed.

When the data are constrained within global latitude belts, significant slowing is observed at latitudes above 25° N and between 0° and 30° S. Slowing trends near the equator tend to be smaller and not significant, whereas there is a substantial (but insignificant) increasing trend in translation speed at higher latitudes in the Southern Hemisphere.

Figure 3. Time series of annual-mean tropical-cyclone translation speed and their linear trends over land and water for individual ocean basins. Source: Kossin (2018).

Changes in tropical-cyclone translation speed over land vary substantially by region (Fig. 3). There is a substantial and significant slowing trend over land areas affected by North Atlantic tropical cyclones (20 percent reduction over the 68-yr period), by western North Pacific tropical cyclones (30 percent reduction) and by tropical cyclones in the Australian region (19 percent reduction, but the significance is marginal).

By contrast, the tropical-cyclone translation speeds over land areas affected by eastern North Pacific and northern Indian tropical cyclones, and of tropical cyclones that have affected Madagascar and the east coast of Africa, all exhibit positive trends, although none are significant.

In addition to the global slowing of tropical-cyclone translation speed identified here, there is evidence that tropical cyclones have migrated poleward in several regions. The rate of migration in the western North Pacific was found to be large, which has had a substantial effect on regional tropical-cyclone-related hazard exposure.

These recently identified trends in tropical-cyclone track behaviour emphasize that tropical-cyclone frequency and intensity should not be the only metrics considered when establishing connections between climate variability and change and the risks associated with tropical cyclones, both past and future.

These trends further support the idea that the behaviours of tropical cyclones are being altered in societally relevant ways by anthropogenic factors. Continued research into the connections between tropical cyclones and climate is essential to understanding and predicting the changes in risk that are occurring on a global scale.

Significance for the Australian region

While this is an interesting piece of work, the results for the Southern Hemisphere and the Australian region are less clear than those for the North Atlantic and North Pacific basins.

The trend shown in Fig. 2b for the whole of the Southern Hemisphere is not significant and is clearly composed of two separate trends, each spanning around 30 years. Assuming a homogeneous dataset, the time series may be reflecting the strong influence of inter-decadal climate forcing.

In the Southern Hemisphere, multi-decadal climate-ocean variability, such as the Pacific Decadal Oscillation (PDO) or the Indian Ocean Dipole (IOD), has a large influence on decadal-scale climate variability (particularly in Australia) and can mask a linear, anthropogenically forced trend.

The paper also mentions that global slowdown rates are only significant over water (which makes up around 90 percent of the best-track data used), whereas the trend for the 10 percent of global data that corresponds to cyclones over land (where rainfall effects become most societally relevant) is not significant. Therefore, it is unclear, at a global scale, whether tropical cyclones have slowed down over land or not. The trend for the Australian region (Fig. 3f, Southern Hemisphere > 100° E), for both over-land and over-water slowdowns (approx. −19 percent), is only marginally significant. Further work could analyse translation speeds in the Australian region using the Bureau of Meteorology tropical cyclone database.
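The kind of analysis suggested here can be sketched very simply: fit a least-squares linear trend to annual-mean translation speeds and express it as a percent change over the record. The series below is a synthetic placeholder, not Bureau of Meteorology data.

```python
# Minimal trend-analysis sketch for annual-mean TC translation speeds.
# Data are synthetic (a clean 10% decline), purely to show the method.

def linear_trend(years, speeds):
    """Ordinary least-squares slope (km/h per year) and intercept."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(speeds) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, speeds))
    var = sum((x - mean_x) ** 2 for x in years)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Synthetic series: 20 km/h in 1949 declining linearly to 18 km/h in 2016
years = list(range(1949, 2017))
speeds = [20.0 - 2.0 * (y - 1949) / 67 for y in years]

slope, intercept = linear_trend(years, speeds)
start = intercept + slope * years[0]
end = intercept + slope * years[-1]
print(round(100 * (end - start) / start, 1))  # -10.0 percent over 1949-2016
```

A real analysis would also need a significance test (e.g. on the slope's standard error) and attention to best-track data homogeneity, which is the main caveat raised above.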

As with previous studies of changes to tropical cyclone behaviour in Australia, results are unclear. The relatively short time span of consistent records, combined with high year-to-year variability, makes it difficult to discern any clear trends in tropical cyclone frequency or intensity in this region (CSIRO, 2015).

For the period 1981 to 2007, no statistically significant trends in the total numbers of cyclones, or in the proportion of the most intense cyclones, have been found in the Australian region, South Indian Ocean or South Pacific Ocean (Kuleshov et al. 2010). However, observations of tropical cyclone numbers from 1981–82 to 2012–13 in the Australian region show a decreasing trend that is significant at the 93-98 percent confidence level when variability associated with ENSO is accounted for (Dowdy, 2014). Only limited conclusions can be drawn regarding tropical cyclone frequency and intensity in the Australian region prior to 1981, due to a lack of data. However, a long-term decline in numbers on the Queensland coast has been suggested (Callaghan and Power, 2010), and northeast Australia is also a region of projected decrease in tropical cyclone activity, including category 4-5 storms, according to Knutson et al. (2015).

In summary, based on global and regional studies, tropical cyclones are in general projected to become less frequent, with a greater proportion of high-intensity storms (stronger winds and greater rainfall). This may be accompanied by a general slowdown in translation speed, and a greater proportion of storms may reach further south (CSIRO, 2015).

The take home message? The known-unknowns are still quite a bit greater than the known-knowns.


CALLAGHAN, J. & POWER, S. 2010. A reduction in the frequency of severe land-falling tropical cyclones over eastern Australia in recent decades. Climate Dynamics.

CSIRO and BoM [CSIRO] 2015. Climate Change in Australia Information for Australia’s Natural Resource Management Regions: Technical Report, CSIRO and Bureau of Meteorology, Australia, pp 222.

DOWDY, A. J. 2014. Long-term changes in Australian tropical cyclone numbers. Atmospheric Science Letters.

KNUTSON, T.R., SIRUTIS, J.J., ZHAO, M., TULEYA, R.E., BENDER, M., VECCHI, G.A., VILLARINI, G. & CHAVAS, D. 2015. Global Projections of Intense Tropical Cyclone Activity for the Late Twenty-First Century from Dynamical Downscaling of CMIP5/RCP4.5 Scenarios. Journal of Climate, 28, 7203-7224.

KOSSIN, J.P. 2018. A global slowdown of tropical-cyclone translation speed. Nature 558, 104-107.

KULESHOV, Y., FAWCETT, R., QI, L., TREWIN, B., JONES, D., MCBRIDE, J. & RAMSAY, H. 2010. Trends in tropical cyclones in the South Indian Ocean and the South Pacific Ocean. Journal of Geophysical Research-Atmospheres, 115.

OFFICE OF THE INSPECTOR-GENERAL EMERGENCY MANAGEMENT 2017. The Cyclone Debbie Review: Lessons for delivering value and confidence through trust and empowerment. Report 1: 2017-18.

Cyclocopters: Drones of the future

Jacob Evans, Risk Frontiers (jacob.evans@riskfrontiers.com)

Cyclocopters are a new type of drone that has recently shown success in development, garnering significant interest from leading robotics institutions and the US Army. The commercially available drones most people are familiar with are referred to as polycopters. Polycopters typically have four or six equally spaced helicopter-style rotors. They have a wide range of uses, from recreational to military, and have recently been used by Risk Frontiers to survey areas affected by natural hazards such as volcanic lahars. Though these drones already offer a wide variety of applications and play a significant role in society, cyclocopters are viewed as the next stage in their evolution, with the potential to survey extensively during natural disasters and support risk assessment.

The cyclocopter concept was developed about 100 years ago; however, only recently have the materials and technology been available to turn this futuristic-looking machine into reality. A cyclocopter can be visualised as an aerial paddleboat, having two or four cycloidal rotors (cyclorotors) (Figure 1). The rotors stir the air into vortices, creating lift, thrust and control. Each rotor has multiple (conventionally four) aerofoils, whose pitch (angle) can be adjusted in synchronisation to move the cyclocopter in any direction perpendicular to the cyclorotor axis. There is also a tail propeller to keep the drone level. The aerodynamics can therefore be viewed as insect-like; imagine a dragonfly.

Figure 1: The world’s smallest functional cyclocopter. Image: Moble Benedict/Texas A&M University.

The cyclocopter design has several advantages. Unlike conventional drones which, like a helicopter, tilt in the direction of flight, cyclocopters remain parallel to the ground. Their engineering design also gives them better manoeuvrability, forward speed and altitude limit, and makes them less disturbed by wind gusts. They are also much quieter, having lower blade-tip speeds, which are responsible for the typical noise from bladed aircraft. The most significant advantage, however, is that these drones actually perform better when scaled down: the vortices created by the cyclorotor configuration get proportionally more powerful as the size shrinks. This makes cyclocopters the leading candidate for miniaturised drones, with the ability to withstand strong winds during natural disasters and survey inaccessible areas.

Research into cyclocopters in the USA is being carried out at the University of Maryland, Texas A&M University and the University of California, Berkeley, formerly as part of the Micro Autonomous Systems and Technology (MAST) programme funded by the US Army, and now under the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme. Over the last 10 years, these groups have developed fully functional cyclocopters whilst reducing the size and weight from 500 g to just 29 g. A video of the MAST research groups’ latest cyclocopter can be found here (https://youtu.be/WTUCCkTcIW0). The next step in their evolution involves further miniaturisation and optimisation, as well as getting drones to swarm and coordinate together.

Commercial cyclocopters are thought to be only a couple of years away. They could play a significant part in saving lives. A common concept is the formation of an advanced network of drones with different capabilities. In search and rescue operations during natural disasters, cyclocopters could quickly scour the disaster area, including inaccessible places, alerting authorities or communicating with larger ambulance drones, which could provide survivors with necessities or even airlift them to safety. During gusty bushfires, a network of stable cyclocopters could detect ignition points or homes at risk, communicating with larger fire-extinguishing drones.

For individual cyclocopters, the military application presented by the MAST research group also focuses on saving lives: drones could fly ahead of military troops, looking over ridges and embankments to ensure soldiers’ safety. For the insurance industry, they could be used for the rapid assessment of unsafe and contaminated premises. From a perils standpoint, tiny cyclocopters could be used to access obstructed areas, and their stability and coordination would allow for faster and more accurate mapping of disaster-affected areas, providing invaluable information for modelling.

Climate change may lead to bigger atmospheric rivers

The following briefing, by Esprit Smith of NASA’s Jet Propulsion Laboratory, was published on the NASA website on 24 May 2018.

The study described below considers projections based on two Representative Concentration Pathways (RCPs) – 4.5 and 8.5. There are four pathways in total (including RCP2.6 and RCP6), and the findings of the IPCC Fifth Assessment Report are based upon these. Most of the discussion of results presented below is based on the RCP8.5 analysis, which is the most extreme scenario, assuming minimal effort to reduce emissions. Toward the end of the briefing the results from the RCP4.5 analysis are noted as follows: ‘The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes.’

A new NASA-led study shows that climate change is likely to intensify extreme weather events known as atmospheric rivers across most of the globe by the end of this century, while slightly reducing their number. The new study projects atmospheric rivers will be significantly longer and wider than the ones we observe today, leading to more frequent atmospheric river conditions in affected areas.

“The results project that in a scenario where greenhouse gas emissions continue at the current rate, there will be about 10 percent fewer atmospheric rivers globally by the end of the 21st century,” said the study’s lead author, Duane Waliser, of NASA’s Jet Propulsion Laboratory in Pasadena, California. “However, because the findings project that the atmospheric rivers will be, on average, about 25 percent wider and longer, the global frequency of atmospheric river conditions — like heavy rain and strong winds — will actually increase by about 50 percent.” The results also show that the frequency of the most intense atmospheric river storms is projected to nearly double.
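The rough arithmetic behind those quoted numbers can be sketched as follows (our illustration, not the study's actual method): if each atmospheric river becomes ~25 percent longer and ~25 percent wider, its footprint grows by ~56 percent, so even with ~10 percent fewer ARs the frequency of AR conditions at a given location rises substantially.

```python
# Toy footprint arithmetic for the quoted AR projections (illustrative only).
count_factor = 0.90              # ~10% fewer atmospheric rivers globally
footprint_factor = 1.25 * 1.25   # ~25% longer and ~25% wider per AR

frequency_factor = count_factor * footprint_factor
print(round(frequency_factor, 2))  # 1.41
```

This toy calculation gives a ~40 percent increase; the study's ~50 percent figure comes from its full model analysis, but the sketch shows why a modest widening outweighs a small drop in storm counts.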

Atmospheric rivers are long, narrow jets of air that carry huge amounts of water vapor from the tropics to Earth’s continents and polar regions. These “rivers in the sky” typically range from 250 to 375 miles (400 to 600 kilometers) wide and carry as much water — in the form of water vapor — as about 25 Mississippi Rivers. When an atmospheric river makes landfall, particularly against mountainous terrain (such as the Sierra Nevada and the Andes), it releases much of that water vapor in the form of rain or snow.

These storm systems are common — on average, there are about 11 present on Earth at any time. In many areas of the globe, they bring much-needed precipitation and are an important contribution to annual freshwater supplies. However, stronger atmospheric rivers — especially those that stall at landfall or that produce rain on top of snowpack — can cause disastrous flooding. Atmospheric rivers show up on satellite imagery, including in data from a series of actual atmospheric river storms that drenched the U.S. West Coast and caused severe flooding in early 2017.

In early 2017, the Western United States experienced rain and flooding from a series of storms flowing to America on multiple streams of moist air, each individually known as an atmospheric river. Image credit: NASA/JPL-Caltech

The study

Climate change studies on atmospheric rivers to date have been mostly limited to two specific regions, the western United States and Europe. They have typically used different methodologies for identifying atmospheric rivers and different climate projection models — meaning results from one are not quantitatively comparable to another.

The team sought to provide a more streamlined and global approach to evaluating the effects of climate change on atmospheric river storms. The study relied on two resources — a set of commonly used global climate model projections for the 21st century developed for the Intergovernmental Panel on Climate Change’s latest assessment report, and a global atmospheric river detection algorithm that can be applied to climate model output. The algorithm, developed earlier by members of the study team, identifies atmospheric river events from every day of the model simulations, quantifying their length, width and how much water vapor they transport.

The team applied the atmospheric river detection algorithm to both actual observations and model simulations for the late 20th century. Comparing the data showed that the models produced a relatively realistic representation of atmospheric rivers for the late 20th century climate. They then applied the algorithm to model projections of climate in the late 21st century. In doing this, they were able to compare the frequency and characteristics of atmospheric rivers for the current climate with the projections for future climate.

The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes. Together, the consideration of the two climate scenarios indicates a direct link between the extent of warming and the frequency and severity of atmospheric river conditions.

What does this mean?

The significance of the study is two-fold. First, “knowing the nature of how these atmospheric river events might change with future climate conditions allows for scientists, water managers, stakeholders and citizens living in atmospheric river-prone regions [e.g. western N. America, western S. America, S. Africa, New Zealand, western Europe] to consider the potential implications that might come with a change to these extreme precipitation events,” said Vicky Espinoza, postdoctoral fellow at the University of California-Merced and first author of the study. Secondly, the study and its approach provide a much-needed, uniform way to research atmospheric rivers on a global level, laying a foundation for analyzing and comparing them that did not previously exist.


Data across the models are generally consistent — all support the projection that atmospheric river conditions are linked to warming and will increase in the future; however, co-author Marty Ralph of the University of California, San Diego, points out that there is still work to be done. “While all the models project increases in the frequency of atmospheric river conditions, the results also illustrate uncertainties in the details of the climate projections of this key phenomenon,” he said. “This highlights the need to better understand why the models’ representations of atmospheric rivers vary.”

The study, titled “Global Analysis of Climate Change Projection Effects on Atmospheric Rivers,” was recently published in the journal Geophysical Research Letters.

Drivers risk death when driving into flood water: new study

This article by Fran Molloy was published in yesterday’s issue of Macquarie University’s The Lighthouse.

New research shows that most Australian drivers think they can work out when it is safe to enter flood waters – as foolhardy Hobart drivers proved during last week’s natural disaster.

Read more: https://lighthouse.mq.edu.au/article/drivers-risk-death-when-driving-into-floodwater-new-study

Newsletter Volume 17, Issue 3

The new QuakeAUS: impact of revised GA earthquake magnitudes on hazards and losses

Paul Somerville and Valentina Koschatsky, Risk Frontiers

Geoscience Australia (GA) is updating the seismic hazard model for Australia through the National Seismic Hazard Assessment (NSHA18) project (Allen et al., 2017). The update includes corrections of local magnitude (ML) measurements and the conversion of ML values to moment magnitude (MW). Moment magnitude is the preferred magnitude type for probabilistic seismic hazard analyses, and all modern ground motion prediction equations use this magnitude type. This is because ML is a purely empirical estimate of earthquake size, whereas MW is a theoretically based measure of earthquake size, derived from the seismic moment, M0, of the earthquake, which is given by:

M0 = μ A D

where A is the rupture area of the fault, D is the average displacement on the fault and μ is the shear modulus of rock. The seismic moment quantifies the size of each of the pair of opposing force couples that constitute the force representation of the shear dislocation on the fault plane. For comparison with the more familiar magnitude scale, MW is calibrated to M0 using the following equation:

MW = 2/3 log10 M0 – 10.7, where M0 is expressed in dyne-cm.
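As a worked illustration, the two formulas above can be combined in a few lines. The fault dimensions, slip and shear modulus below are typical textbook values, not figures from the NSHA18 catalogue; note that the 2/3 log10 M0 – 10.7 calibration assumes M0 in dyne-cm, so SI quantities must be converted.

```python
import math

def moment_magnitude(area_km2, slip_m, shear_modulus_pa=3.3e10):
    """Seismic moment M0 = mu * A * D, converted to moment magnitude
    MW via the Hanks-Kanamori calibration used in the text. The
    calibration expects M0 in dyne-cm (1 N*m = 1e7 dyne-cm)."""
    m0_nm = shear_modulus_pa * (area_km2 * 1e6) * slip_m  # N*m
    m0_dyne_cm = m0_nm * 1e7
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# Illustrative rupture: a 50 km x 20 km fault with 1 m average slip
mw = moment_magnitude(area_km2=1000.0, slip_m=1.0)
print(round(mw, 2))  # ~6.98, i.e. about an MW 7 earthquake
```

The example shows why MW is attractive for hazard work: it maps directly onto the physical size of the rupture rather than onto the amplitude recorded by a particular instrument.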

Prior to the early 1990s, most Australian seismic observatories relied on the Richter (1935) local magnitude (ML) formula developed for southern California. At regional distances (where many earthquakes are recorded), the Richter scale tends to overestimate ML relative to modern Australian magnitude formulae. Because of this likely overestimation, pre-1990 magnitude estimates made with the inappropriate Californian formulae need to be corrected. A process was employed that systematically corrected local magnitudes using the difference between the original (inappropriate) magnitude formula (e.g., Richter, 1935) and Australian-specific correction curves (e.g., Michael-Leiba and Malafant, 1992) at a distance determined by the nearest recording station likely to have recorded a specific earthquake.

The relationship between ML and MW developed for the NSHA18 demonstrates that MW is approximately 0.3 magnitude units lower than ML for moderate-to-large earthquakes (4.0<MW<6.0). Together, the ML corrections and the subsequent conversions to MW more than halve the number (and consequently the annual rate) of earthquakes exceeding magnitude 4.5 and 5.0, as shown in Figure 1. This has downstream effects on hazard calculations when forecasting the rate of rare large earthquakes using Gutenberg-Richter magnitude-frequency distributions in PSHA. A secondary effect of the ML to MW magnitude conversion is that it tends to increase the number of small and moderate-sized earthquakes relative to large earthquakes. This increases the Gutenberg–Richter b-value, which in turn further decreases the relative annual rates of larger potentially damaging earthquakes (Allen et al., 2017).
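A back-of-the-envelope sketch shows why a uniform ~0.3-unit magnitude reduction roughly halves the count of events above a fixed threshold under a Gutenberg-Richter distribution. The a- and b-values below are hypothetical, not NSHA18 parameters.

```python
def gr_count(a, b, m):
    """Gutenberg-Richter relation: log10 N(>=m) = a - b*m."""
    return 10 ** (a - b * m)

# Hypothetical catalogue parameters, for illustration only
a, b = 5.0, 1.0
n_before = gr_count(a, b, 4.5)
# Shifting every magnitude down by ~0.3 units (ML -> MW) is
# equivalent to raising the counting threshold by 0.3:
n_after = gr_count(a, b, 4.5 + 0.3)
print(n_before / n_after)  # factor of 10**(b*0.3), about 2 for b = 1
```

This is only the first-order effect; as the article notes, the conversion also steepens the magnitude-frequency curve (raises b), which further reduces the inferred rates of the largest events.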

Figure 1. Cumulative number of earthquakes with magnitudes equal to or exceeding 4.5 (left) and 5.0 (right) for earthquakes in eastern Australia (east of 135°E longitude) from 1900 to 2010. The different curves show different stages of the NSHA18 catalogue preparation: original catalogue magnitudes, modified magnitudes (only local magnitude modified) and preferred MW (for all earthquakes). Source: Modified from Allen et al., (2017).

Preliminary seismic hazard calculations by Allen et al. (2017b) using the new earthquake source catalogue are compared with the existing PGA hazard map for Be site conditions for a return period of 500 years in Figure 2. We have updated the earthquake source model to incorporate the new GA catalogue into QuakeAUS, and obtained a new hazard map for Australia similar to that in Figure 2.

Figure 2. Existing (left) and draft (right) PGA maps for site class Be for a return period of 500 years. Source: Modified from Allen et al. (2017).

Preliminary loss estimates using the new version of QuakeAUS show large-scale reductions. Losses in a national residential portfolio for the 200-year average recurrence period (ARP) and for the average annual loss (AAL) are 30% and 35% of their former values respectively. The changes are not regionally uniform, with the largest reductions occurring in Perth and the smallest in Darwin. Among the five perils modelled in Risk Frontiers’ Multiperil Workbench (earthquake, fire, flood, hail and tropical cyclone), earthquake previously had the largest 200-year ARP loss but now lies below tropical cyclone in a near tie with flood and hail, and its AAL has dropped from second last to last, below hail.

We expect to release QuakeAUS 6.0, including these changes, early in the third quarter of 2018.


Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi (2017). An updated National Seismic Hazard Assessment for Australia: Are we designing for the right earthquakes? Proceedings of the Annual Conference of the Australian Earthquake Engineering Society in Canberra, November 24-26, 2017.
Michael-Leiba, M., and Malafant, K. (1992). A new local magnitude scale for southeastern Australia, BMR J. Aust. Geol. Geophys. Vol 13, No 3, pp 201-205.

Tathra 2018 Bushfires

James O’Brien, Mingzhu Wang, Jacob Evans

The 2017/18 bushfire season across southeastern Australia, during a hot summer, burned through 237,869 hectares in 11,182 fires, prompting seven Emergency Warnings, 25 Watch and Act alerts and 16 Total Fire Ban days. Despite the high number of fires, losses were limited until the Tathra fire, with just two homes lost earlier in the season at Comboyne. True to its mission of better understanding natural disasters, Risk Frontiers produced in-depth intelligence from aerial photography, field survey and GIS analytics. In what follows we report the results of these exercises.

Observations from the field

The early December 2017 heatwave (December was the 5th hottest on record) set the conditions for the bushfires in New South Wales on 18 March 2018. The high temperatures combined with high winds established the conditions under which an electrical fault apparently triggered the fire. The bushfires in Tathra destroyed around 65 homes, damaged 48 homes, destroyed 35 caravans and cabins and burned 1250 hectares of bushland, in addition to the emotional trauma experienced by survivors. Fortunately there were no casualties.

Risk Frontiers scientists (James, Mingzhu and Jacob) arrived in Tathra on April 10th, a little over three weeks after the peak of the bushfire damage; the delay was necessary because a high proportion (around 50%) of the affected properties contained asbestos. Our objective was to investigate the most affected areas in Tathra.

New above-ground electricity infrastructure in the region was a clear sign of the work undertaken to repair the obliterated power network and an indication of the extensive damage to infrastructure that left Tathra without power and water for a number of days following the fire.

We were able to quickly cover the whole town in less than a day on foot with the exception of some isolated areas in Reedy Swamp where the fire started and a small number of houses are located. This survey was useful to qualitatively gauge the assumptions used in our bushfire loss model, FireAUS. Our observations can be summarised as follows:

Zero-One (binary) damage ratios: We saw very few cases of partial damage to structures. It appears that once fire hits a structure during a bushfire it will almost certainly be completely destroyed. That’s not to say that the adjacent structures at the same address will always burn; we observed several cases of sheds that were burnt while the main house was unscathed and vice versa. The partial damage we did observe was charring to the sides of properties, where it appeared an active effort had been made to save the property.

Statistical dependence of bushfire risk on distance to bush: As described above, there is no clear pattern in the spatial distribution of damage when observed at close-range. However, the statistics of bushfire damage based on aggregated data from a broad area do show the importance of distance of a property to the nearby bush (see Figure 2). Whether a property is burnt in a bushfire seems determined by random chance and this chance is conditioned by the distance to the bushland. In FireAUS, we assume that any two addresses equidistant from the bush have equal probabilities of burning.
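That assumption can be sketched as a simple distance-conditioned Bernoulli model: every home at the same distance gets the same burn probability, and the outcome for each individual home is then random. The exponential decay curve, its parameters and the simulation below are invented for illustration; they are not the actual FireAUS vulnerability model.

```python
import math
import random

def burn_probability(distance_m, p0=0.4, scale_m=100.0):
    """Hypothetical vulnerability curve: the chance a home burns
    depends only on its distance to bushland, decaying exponentially.
    p0 and scale_m are illustrative, not FireAUS parameters."""
    return p0 * math.exp(-distance_m / scale_m)

random.seed(1)
# Two addresses equidistant from the bush get the same probability...
p = burn_probability(50.0)
# ...and whether each individual home burns is then down to chance.
burned = [random.random() < p for _ in range(10)]
print(sum(burned), "of 10 equidistant homes burned in this simulation")
```

Repeating the simulation reproduces the field observation qualitatively: at close range the pattern of destroyed and spared homes looks random, while the aggregate burn rate still falls off with distance to the bush.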

Independence of risk from building types: We observed damage to different construction types: unreinforced masonry, wood, fibro, mobile homes and even stone. There were destroyed brick houses away from the bush and spared wood and fibro houses close to the bush and vice-versa. The damage for this locality appears independent of building types even when globally influenced by proximity to bushland. If there are other risk factors that could explain the building damage, they are not visible in a short inspection and would require a full forensic investigation of each damaged building. The prevailing view was that newer homes generally seemed to perform better than older homes – and in one case a home built within the last 5 years sustained minimal bushfire damage (timber steps were destroyed) although that property was also actively defended by neighbours.

Mapping damage

Figure 1 – Vicinity of Tathra / Reedy Swamp bushfire with prevailing wind direction on the day indicated by arrow and X indicating approximate ignition point.

As the events in Tathra unfolded, Risk Frontiers started the data gathering process to provide a view of this event. Our damage analysis is based on post-fire ground surveys and RFS burned area data captured from live data feeds on Sunday. We also acquired 25 km2 of pre-fire satellite imagery (WorldView-2, 2m resolution) for vegetation analysis and utilized Pitney Bowes Geovision for building location and bushland / tree data.

Figure 2 provides a complete map of damaged properties (house icons) overlain with bushland boundaries (green shading) derived from GeoVision data. It is clear that a number of these properties are surrounded by bushland and are therefore deemed to be at a distance of zero metres from the urban and bushland interface. Properties not within the bushland areas are assigned the linear distance in metres to the nearest pre-fire bushland area greater than 0.5 sq km in area, not necessarily the bushland that burned. Further analysis could be undertaken to classify the burned vegetation – however, in the Tathra region, the majority of bushland burned around properties and it is difficult to recover the clear timeline of local ignition.

There are eyewitness reports of ember attack, and the pattern of damage shows houses destroyed at some distance from the bushland interface, with adjacent properties destroyed either by further ember attack or by contagion from a neighbouring property.

Figure 2 – Location of destroyed homes and adjacent bushland in Tathra classified from pre-fire imagery and GeoVision (Minimum area threshold for contiguous vegetation: 500 m2)

Individual data

While Figure 2 demonstrates the spatial distribution of destroyed homes graphically, it is useful to quantify the loss as a function of distance to adjacent bushland. The data presented are in cumulative form so as to be consistent with other Risk Frontiers reports and other research. Figure 3 shows the cumulative percentage of destroyed buildings in relation to nearby bushland from recent major bushfires in Australia:

  • January 2003 Canberra bushfires (damaged suburbs include Duffy)
  • February 2009 “Black Saturday” bushfires in Victoria (damaged suburbs include Marysville and Kinglake)
  • February 2011 Perth bushfires (damaged suburbs include Roleystone)
  • January 2013 Tasmania bushfires (damaged suburbs include Dunalley)
  • January 2016 Yarloop, WA bushfire

Some new statistics and evidence that emerged from the bushfire damage in Tathra are as follows:

  • 42% of destroyed homes were within 0m of classified bushland boundaries.
  • 50% of surveyed destroyed homes were within 30m of the bushland interface and 72.6% of surveyed homes destroyed were within 100m of the bushland interface. These results closely match the findings previously presented in the “Bushfire Penetration into Urban Areas in Australia” report prepared for the 2009 Victorian Bushfires Royal Commission by Risk Frontiers.
  • No homes were destroyed further than 630m from bushland.
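Cumulative statistics of this kind are straightforward to compute from survey data. The distances below are invented for illustration (they are not the Tathra survey records); the function simply evaluates the share of destroyed homes at or within a given distance of the bushland interface.

```python
# Invented distances (m) from destroyed homes to the bushland
# interface, loosely echoing the Tathra pattern. These are NOT the
# survey records, just an illustration of the calculation.
distances = [0, 0, 0, 0, 10, 25, 30, 60, 95, 140, 220, 630]

def cumulative_pct(dists, within_m):
    """Share (%) of destroyed homes at or within a given distance."""
    return 100.0 * sum(d <= within_m for d in dists) / len(dists)

print(round(cumulative_pct(distances, 0), 1))    # share at the interface
print(round(cumulative_pct(distances, 30), 1))   # share within 30 m
print(round(cumulative_pct(distances, 100), 1))  # share within 100 m
```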
Figure 4 – A view of a destroyed property from Riverview Crescent, Tathra looking west in the direction of the fire’s ignition point across the Bega River. Note the burned vegetation in the distance and the lower green belt on the river’s edge demonstrating ember attack across the river.
Figure 5 – Map and aerial imagery showing property losses in the vicinity of Oceanview Drive, Tathra (1) in top left corner. Note the proximity to bushland immediately behind those properties and the distance to those lost in the lower right corner at Francis Hollis (2) and Bay View Drive (3), suggesting ember attack. House icons again denote destroyed properties. Wind direction was from top left to bottom right of image, red line and shading showing burnt boundary.


FS-ISAC 2018 Cybersecurity Trends

By Tahiry Rabehaja.  Email: tahiry.rabehaja@riskfrontiers.com.

2017 was not a good year for cyber security. Victims ranged from small businesses to corporate giants such as Equifax, Deloitte and Kmart, with ‘improved’ ransomware such as WannaCry and NotPetya just two well-publicised examples. Such breaches emphasise that cybersecurity is not just a headache for IT departments but an issue warranting a top-down solution, starting with C-level executives. To this end, the Financial Services Information Sharing and Analysis Center (FS-ISAC) has recently published a report summarising the thoughts of over 100 financial-sector Chief Information Security Officers (CISOs) on key priorities for improving digital security postures in 2018 (FS-ISAC, 2018). The survey shows most executives focused on improving their defensive strategies against cyber attacks.

Figure 1: Snapshot from the FS-ISAC report ranking the key priorities to improve cyber security postures in 2018.

[FS-ISAC is a non-profit global organisation providing a platform for sharing and analysing cyber and physical security information and intelligence. It currently has approximately 7000 members from 39 different countries. It was an initiative established by the financial service sector in response to the 1998 US Presidential Directive 63.] 

For more than a third (35%) of the executives, improving employees’ awareness about digital threats ranks top of the list. This comes as no surprise given employees have always been on the front line of defence against cyber attacks while remaining the weakest link. Indeed, most attacks against financial services companies exploit human weaknesses using social engineering, spear phishing and account take-over due to weak and reused passwords, etc. In 2017, Verizon reported that 1 in 14 employees were opening attachments or links sent through phishing emails and 1 in 4 were giving out account credentials or personal information (Verizon, 2017).

Investment into modern cyber resilient infrastructures (25%) comes in as runner up. Such an investment includes a progressive upgrade of existing network defence hardware and software as well as the creation of specialised departments that ensure digital information security.

Another recent study shows that subscription to Threat Intelligence, the emergent use of defence systems based on Machine Learning as well as strategic use of Cyber Analytics rank amongst the more cost-effective security investments (Accenture, 2017). That same study shows many companies over-investing in technologies that fail to deliver the desired cost-benefit ratios. These include extensive applications of Advanced Perimeter Controls and incongruous use of data loss prevention such as full disk encryption. Thus, efficient security programs should be implemented by ensuring an optimal cost-benefit ratio. This can be achieved by prioritising the security of critical assets and related infrastructures.

Figure 2: Snapshot from the Accenture report showing spending in security technology and the associated business benefit value.

2018 will also mark the long-awaited arrival of various breach-notification regulations. These include the General Data Protection Regulation coming into force in Europe, the Notifiable Data Breaches scheme that has just come into effect in Australia, and upcoming changes to China’s Cybersecurity and Data Protection laws. These mean that compliance, explicitly cited by only 2% of the surveyed executives, will also play an important role in shaping digital security, especially for companies dealing with personally identifiable information.

The strong focus on defensive solutions in the FS-ISAC (2018) report is concerning. The report also investigates the impact of hierarchical organisation on reporting frequency, but says nothing about incident response. This may be because those interviewed were mainly from the financial industry. However, historical breaches show that response is as important as defence: it is very likely that a resourceful hacker interested in a particular asset of a company will be able to hack in and extract or destroy the targeted information.

Targeted attacks are amongst the most costly and usually affect critical assets such as Intellectual Property. A successful attack on these key assets can have destructive impacts on the victim’s business model itself. Expenses incurred during a cyber event will span from direct costs — forensic and remediation cost, customer protection, regulatory penalty, etc. — to collateral damages — loss of customers, damage to reputation and brand name, increased cost of capital, etc. These costs can be considerably reduced using efficient incident response and mitigation policies as well as cyber insurance.

The White House Council of Economic Advisers estimates the average cost of a breach to be as high as $330 million when an event negatively affects the market value of the victim (Advisers, 2018). For instance, Equifax’s stock price dropped by more than 35% within 7 days of last year’s massive data breach disclosure. The emergence of cyber insurance is anticipated to provide cover against some of these financial losses. Various vendors are already providing cyber insurance products, and this market is expected to grow to over $7 billion within the next three years (PwC, 2015).


Accenture. (2017). Cost of Cybercrime Study. Retrieved from Accenture: https://www.accenture.com/au-en/insight-cost-of-cybercrime-2017

Advisers, W. H. (2018, February 16). Cost of malicious cyber activity to the US economy. Retrieved from https://www.whitehouse.gov/articles/cea-report-cost-malicious-cyber-activity-u-s-economy/

FS-ISAC. (2018, February 12). FS-ISAC Unveils 2018 Cybersecurity Trends According to Top Financial CISOs. Retrieved from FS-ISAC: https://www.fsisac.com/article/fs-isac-unveils-2018-cybersecurity-trends-according-top-financial-cisos

PwC. (2015). Insurance 2020 and beyond: Reaping the dividends of cyber resilience. Retrieved from https://www.pwc.com/gx/en/industries/financial-services/publications/insurance-2020-cyber.html

Verizon. (2017). Verizon Data Breach Investigation Report. Retrieved from Verizon: http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/


Why is Roman concrete more durable than modern concrete?

Jacob Evans, Risk Frontiers (jacob.evans@riskfrontiers.com)

Modern concrete is porous and degrades in contact with seawater. Seawater can seep into its pores and, when the concrete dries out, the salts crystallize. The crystallization pressure of the salts produces stresses that can result in cracks and spalls. Other chemical processes, such as sulphate attack, lime leaching and alkali-aggregate expansion, also degrade modern concrete. Some submerged concrete objects may last only 10 years; meanwhile, 2000-year-old concrete constructed during the Roman Empire is still going strong (Figure 1). Why this is so is a question an international research team led by geologist Marie Jackson of the University of Utah sought to answer.

Figure 1: Erosion due to sea water on concrete pylons. Image: Brian Robinson.

The composition of Roman concrete has long been known: a mixture of volcanic ash, quicklime (calcium oxide) and volcanic rock. But the science behind its resilience to seawater remained unknown until recently. It is thought volcanic material was used after the Romans observed ash from volcanic eruptions crystallize to form durable rock.

The research team discovered that while modern concrete is made to be inert, the Roman version interacts with the environment. When seawater interacts with the mixture, it forms rare minerals aluminous tobermorite and phillipsite which are believed to strengthen the material. This discovery could lead to the development of more resilient concrete to be used in coastal environments.

Modern concrete is generally limestone mixed with other ingredients such as sandstone, ash, chalk, iron and clay. The mixture is designed to be inert and not interact with the environment. In coastal environments building regulations govern the type of concrete used and the water-cement ratio, but the concrete is still porous: seawater can pass through the material, leading to corrosion and loss of structural integrity.

As well as salt crystallization, the process whereby dried-out salts within the concrete lead to a buildup of pressure, other chemical reactions can affect the integrity of concrete. These include sulphate attack, lime leaching and alkali-aggregate expansion (Figure 2). Sulphate attack occurs when sulphates in the water react with the hydrated calcium aluminate within the concrete. This changes the microstructure and leads to an increase in volume within the concrete, resulting in physical stress and potential cracking. Lime leaching is the simple process of water passing through the concrete and dissolving calcium hydroxide from the concrete. (Calcium hydroxide is formed from the action of calcium oxide and water.) This is often seen as white patches or stalactites on the exterior of the concrete and reduces its strength. Alkali-aggregate expansion occurs when aggregates, such as silica, decrease the alkalinity of the cement paste, resulting in the expansion of minerals and cracking of the cement.

Figure 2: A 2000 year old Roman jetty. Image: Art853.

Roman concrete however does not appear susceptible to any of these processes. The research team found that seawater, the kryptonite to modern concrete, was the magic ingredient responsible for the structural stability of the Roman mixture. The Roman concrete samples were found to contain rare aluminous tobermorite and phillipsite crystals. It is believed that with long-term exposure to seawater, tobermorite crystalizes from the phillipsite as it becomes more alkaline. This crystallization is thought to strengthen the compound, as tobermorite has long plate-like crystals that allow the material to bend rather than crack under stress. Pliny the Elder in the first century CE exclaimed “that as soon as it [concrete] comes into contact with the waves of the sea and is submerged [it] becomes a single stone mass (fierem unum lapidem), impregnable to the waves and every day stronger.”

Figure 3: The research group led by Marie Jackson obtaining samples from the Portus Cosanus pier in Orbetello, Italy. Image: Marie Jackson.

To arrive at these conclusions, Jackson et al. (2017) performed scanning electron microscopy (SEM), micro X-ray diffraction (XRD), Raman spectroscopy and electron probe microanalysis at the Advanced Light Source at the Lawrence Berkeley National Laboratory. Samples were obtained by drilling Roman harbour structures and were compared with volcanic rock (Figure 3). The combination of these techniques, in conjunction with in situ analysis, provided evidence of crystallized aluminous tobermorite and phillipsite within Roman marine concrete (Figure 4). These crystals formed long after the original setting of the concrete. This finding was surprising, as tobermorite typically forms only at temperatures above 80 °C, though there is one known occurrence of it forming at ambient temperature, in the Surtsey volcano.

Figure 4: SEM image showing the presence of aluminous tobermorite and phillipsite within Roman marine concrete. Image from Jackson et al. (2017), Figure 6.

After this discovery, there is now a desire to develop a concrete mixture which replicates ancient Roman marine concrete. It could result in more environmentally friendly concrete construction, and would provide a mixture resilient to seawater and advantageous to coastal defence.


Jackson, M.D. et al. (2017). Phillipsite and Al-tobermorite mineral cements produced through low-temperature water-rock reactions in Roman marine concrete. American Mineralogist: Journal of Earth and Planetary Materials, 102(7), pp. 1435-1450.

Jackson, M.D. et al. (2013). Unlocking the secrets of Al-tobermorite in Roman seawater concrete. American Mineralogist, 98(10), pp. 1669-1687.

Suprenant, B.A. (1991). Designing concrete for exposure to seawater. Concrete Construction Magazine, pp.814-816.



Updated GNS Central New Zealand Earthquake Forecast

Paul Somerville, Risk Frontiers

Until now, GNS Science earthquake forecasts have been mainly focused on aftershocks occurring within the region affected by mainshock events.  This has been the case for the 2010 Mw 7.1 Darfield and 2011 Mw 6.2 Christchurch earthquakes as well as the 2016 Mw 7.8 Kaikoura earthquake. However, these events have the potential to trigger large earthquakes in adjacent regions (as described in Briefing Note 332). Now, GNS Science and an international group of earthquake scientists have developed a forecast, excerpts of which are reproduced below, that accounts in part for such large events. 

The probability of an earthquake with magnitude 7.8 and higher has doubled compared with that in the National Seismic Hazard Model, while that for an earthquake of magnitude 7.0 and higher has only increased by 20%.  Although the information released by GNS does not indicate which earthquake sources are contributing to these increases, we can deduce, with reference to Briefing Note 332, that the Hikurangi subduction zone is making the largest contribution, because the Wairarapa fault is the only crustal fault that is thought to be capable of producing an earthquake with Mw larger than 7.8.  The Hikurangi subduction zone may be capable of generating earthquakes as large as Mw 9.0.

GNS Science have taken considerable care to lucidly explain the forecast to the general public, but as in previous forecasts this new one is characterized by sanguine verbal descriptions of probabilities, such as “We estimate that there is a 2% to 14% chance – in verbal likelihood terms this is a very unlikely chance – of a magnitude 7 or above earthquake occurring within the next year in central New Zealand.”

GeoNet – Geological hazard information for New Zealand

Published: Tue Dec 19 2017 11:45 AM


Updated Forecasts

We estimate that there is a 2% to 14% chance – in verbal likelihood terms this is a very unlikely chance – of a magnitude 7 or above earthquake occurring within the next year in central New Zealand.

The area inside the yellow box in the map below indicates the area of the earthquake forecasts that we refer to in this story. Our best estimate is a 6% (very unlikely) chance, which is about a 1 in 16 chance. This has decreased over the last year (in December 2016 it was greater than 20% within the next year), but it is still a higher chance than before the 2016 Kaikōura earthquake.

The table below shows the estimated chance of large earthquakes within the next year, and within the next 10 years. For example, within the next 10 years, there is a 10% to 60% chance (best estimate is 30%, unlikely) of a magnitude 7 or higher earthquake occurring in the area shown on the map (the map below shows what we mean by central New Zealand).
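The arithmetic behind these statements is easy to check. The sketch below converts the 6% best estimate into odds form, and then shows what a naive constant-rate extrapolation to 10 years would give under an assumption of independent years; this independence assumption is mine, not GNS Science's, whose clustering-based probabilities decay with time, which is why the quoted 10-year best estimate (30%) sits below the naive figure.

```python
# Probabilities quoted in the GNS text; the extrapolation is ours.
p_m7_1yr = 0.06  # best estimate of an M7+ event in the next year

print(round(1 / p_m7_1yr, 1))  # ~16.7, i.e. about a 1 in 16 chance

# Naive constant-rate extrapolation: P(at least one event in 10
# years) = 1 - (1 - p)^10, assuming each year is independent.
p_m7_10yr = 1 - (1 - p_m7_1yr) ** 10
print(round(p_m7_10yr, 2))  # exceeds the quoted 30% best estimate
```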

Updated probabilities table for central New Zealand.

The magnitude ranges are for a magnitude 7.8 or greater and magnitude 7 or greater within the next year and within the next 10 years.

How did we come up with these numbers?

Scientists from Japan, Taiwan and the USA met with our scientists to estimate the chance of a large earthquake occurring in central New Zealand. Together they assessed all the earthquake models, plus newly developed models of how slow slip events affect the probability of future earthquakes.

The results of these models were then combined with other information, including observations of how the numbers of earthquakes change during slow slip events, and evidence of earthquake clustering over the past few thousand years to estimate revised probabilities for large events in central New Zealand.

How does this forecast compare to before the Kaikoura earthquake?

The best estimate over the next year for a magnitude 7.0 or higher earthquake is 6%. This is an increase of 20% over the long-term estimates from the National Seismic Hazard Model (i.e., it is 1.2 times higher).

The best estimate for a magnitude 7.8 and higher earthquake is 1% within the next year. This is double the long-term estimates (i.e. it is twice as likely to happen now as it was before November 2016). The upper bounds for both magnitude range estimates are much higher than the long-term estimates.

That being said, the chance of a very big earthquake has been going down over the past year, since we first estimated the numbers following the Kaikoura earthquake.

Back in December 2016, there was a 5% chance of a M7.8+ earthquake within the coming year (December 2016 to November 2017), now the best estimate is 1% within the next year (December 2017 to November 2018).

This exercise has focused on earthquake forecasts for larger-magnitude earthquakes over central New Zealand rather than on the Kaikōura aftershock sequence (aftershock forecasts will still be updated regularly here).

Science contact: Matt Gerstenberger m.gerstenberger@gns.cri.nz.

Science input was received from Matt Gerstenberger and Sally Potter (GNS Science), with valuable contributions from our colleagues at MCDEM and USGS. This research was funded by the Natural Hazards Research Platform Kaikōura Earthquake short-term research projects.


Newsletter Volume 17, Issue 2

Weather-related natural disasters 2017: was this a reversion to the mean?

Professor Roger Pielke Jr (University of Colorado, Boulder)

Last July, I observed here that the world had recently experienced an era of unusually few disasters and that this streak of good luck was going to end sometime. Little did I know that less than one month later the United States would be hit by Hurricane Harvey, soon followed by Hurricanes Irma and Maria. Not only did these three major hurricanes emphatically break the more-than-decade-long drought in major hurricane landfalls in the US but, according to Aon Benfield, together they resulted in more than $220 billion in total losses and more than $80 billion in insured losses.

In this column I take a look back at 2017 and put its catastrophes into longer-term historical perspective. Media reports have sent mixed messages about the catastrophes of 2017. On the one hand, there have been headlines about the record insured catastrophe losses of 2017. On the other hand, the impact of record losses on pricing in insurance and reinsurance has been less than many had expected or hoped for. How might we reconcile these two perspectives?

The short answer is that 2017 did indeed produce record weather-related catastrophe losses, but understanding their significance requires understanding the inexorable growth in global wealth as well as patterns in weather extremes. Total global losses in 2017 were $344 billion worldwide, according to Aon Benfield; in terms of total catastrophe losses, 2017 trails only 2011, which saw $486 billion in losses. Insured losses followed a similar pattern, totalling $134 billion (almost all of it weather-related), just below the 2011 figure and just above that of 2005.

The figure below places total weather-related catastrophe losses in the context of increasing global GDP. The graph presents loss data from Munich Re (1990–2017) and Aon Benfield (2000–2017) relative to global GDP (World Bank), all expressed in constant 2017 dollars (US Office of Management and Budget, OMB). The data show clearly that 2017 was indeed an extreme year, with losses exceeding 0.4% of global GDP.

Yet, at the same time, total global catastrophe losses as a proportion of global GDP are down by about one third since 1990, based on a simple linear trend. Over the past decade, reinsurance capacity has, according to Aon Benfield, increased by almost 80% (to $605 billion in 2017), whereas global GDP increased by about 24%. Simple math helps to explain why reinsurance market pricing did not respond as much as some expected, despite the record 2017 losses: (1) global GDP has increased, (2) reinsurance capacity has increased much faster than global GDP and (3) catastrophe losses have decreased as a proportion of global GDP. These dynamics explain why, even with losses at record levels in 2017, the market was largely unmoved.
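The "simple math" in that paragraph can be sketched in a few lines. The 2017 global GDP figure used below (~$80 trillion) is an assumed round number for illustration, roughly consistent with World Bank data; the loss and capacity figures are those quoted from Aon Benfield.

```python
# Losses as a share of global GDP (assumed GDP of ~$80 trillion).
losses_2017 = 344e9   # total weather-related catastrophe losses, USD
gdp_2017 = 80e12      # assumed approximate global GDP in 2017, USD
print(f"2017 losses as share of GDP: {losses_2017 / gdp_2017:.2%}")  # 0.43%

# Capacity vs exposure: reinsurance capacity grew ~80% over the decade
# (to $605bn in 2017) while global GDP grew ~24%.
capacity_growth, gdp_growth = 0.80, 0.24
print(f"capacity grew about {capacity_growth / gdp_growth:.1f} "
      f"times as fast as GDP")  # about 3.3 times
```

With capacity expanding several times faster than the underlying exposure, a single record loss year makes a smaller dent in available capital, which is consistent with the muted pricing response.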

The majority of 2017 catastrophe losses, and the vast majority of insured losses, resulted from the three major Atlantic hurricanes. How should we understand the 2017 Atlantic hurricane season?

According to Phil Klotzbach of Colorado State University, 2017 saw the most active North Atlantic hurricane season since 2005. (This assessment uses a metric called ACE, accumulated cyclone energy.) Three of the previous four years were well below average. These data reinforce what I wrote last July: “A simple regression to the mean would imply disasters of a scale not seen worldwide in more than a decade.” The active 2017 hurricane season reminds us that catastrophe luck cuts both ways.

Interestingly, in addition to the three major hurricanes that made landfall in the North Atlantic, there was only one other intense landfall worldwide: tropical cyclone Enawo, which struck Madagascar, killing 81 people and causing more than $20 million in damage. The figure below (based on updated data provided by Ryan Maue, @ryanmaue, following our 2012 study) shows global tropical cyclone landfalls since 1970.

In 2017 there were 18 landfalls at hurricane strength in total, above the long-term average of 15.3 (median = 15; record = 30, in 1971), but the four major landfalls were below the long-term average of 4.8 (median = 4; record = 9, reached five times). Overall, the years 2009 to 2016 were all below average for global landfalls, which helps to explain the good fortune experienced with respect to global weather catastrophe losses.

Despite the record catastrophe losses of 2017, according to Aon Benfield, the year continued a streak of well-below-average (and below-median) loss of life, according to longer-term data provided by Max Roser and Hannah Ritchie at Oxford University. However, the large loss of life in 2004, 2008 and 2010 (more than 200,000 deaths in each year) reminds us that protecting lives in the face of disasters remains a crucial priority.

2017 saw a range of other catastrophes, including notable severe weather and wildfire events, together totaling more than $50 billion in losses, whereas flood losses were well below a longer-term average. However, despite these various catastrophes and associated losses, 2017 was notable primarily due to the three major hurricanes in the North Atlantic.

What does 2017 portend for 2018?

My advice has not changed: even with the record losses of 2017, the world has had a run of good luck over more than a decade when it comes to weather disasters. The hurricanes of 2017 show how quickly good luck can come to an end.

Understanding loss potential in the context of inexorable global development and long-term climate patterns is hard enough. It is made even more difficult by the politicized overlay that often accompanies the climate issue. Fortunately, there is good science and solid data available to help cut through the noise. 2017 was far from the worst we will see: even bigger disasters are coming – will you be ready?

The Hawaii nuclear alert: how did people respond?

Andrew Gissing & Ashley Avci

Nuclear tensions between the United States and North Korea have been extensively reported as both sides continue to posture via threats and propaganda and North Korea continues its missile tests. North Korea’s leader Kim Jong-Un has promised to decimate the US and has referred to President Trump as mentally ‘deranged’. A story in the New York Times based upon consultations with leading security experts recently suggested that the chance of war breaking out was between 15 and 50 percent (Kristof, 29/11/2017). Given the threat of an attack, U.S. government officials have encouraged residents to be prepared and have commenced monthly drills to test warning systems.

Within this environment of heightened geopolitical tensions, a single text message was sent in error to people in Hawaii on the 13th of January at 8.07am, warning of an imminent ballistic missile strike. The message read:


Officials alerted the public to the error via social media 13 minutes later, but it took 38 minutes to send a follow-up text message. In the meantime, the community was left to react as if a real missile were to strike Hawaii within twelve to fifteen minutes. It has since been revealed that the delays were the result of local officials believing they required federal approval to cancel the alert.

The alert presents an opportunity to improve the understanding of how people react to warnings of extreme events. Risk Frontiers researchers conducted an analysis of media interviews with 207 individuals (respondents) who received the warnings to identify people’s attitudes and responses after the alert was received. The media interviews were sourced from a search of global online media outlets that had reported on the false alarm. Interview responses were coded, analysed and are reported in this article.
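The tallying step described above can be illustrated with a minimal sketch. This is not Risk Frontiers' actual analysis pipeline, and the category labels and sample data below are hypothetical; it simply shows how coded interview responses yield the "n=" counts reported in this article.

```python
from collections import Counter

# Each interview is coded with zero or more response categories
# (hypothetical labels and data, for illustration only).
coded_interviews = [
    ["sought_shelter", "texted_family"],
    ["sought_shelter"],
    ["no_action"],
    ["texted_family", "checked_social_media"],
]

# Flatten the per-interview code lists and count each category.
counts = Counter(code for interview in coded_interviews for code in interview)
for code, n in counts.most_common():
    print(f"{code}: n={n}")
```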


Respondents commonly spoke of where they were when they received the alert. Locations varied, highlighting the importance of considering the many likely locations of people when an alert is issued. Most frequently, respondents were in a hotel (n=39) or awake at home (n=38). Others were at home but in bed (n=11); at work (n=10); in a car (n=10); at the beach (n=7); or in the ocean (n=3).

Most respondents received the alert via the official text message issued by the State (n=89), but a minority were informed by someone else: for example, a family member (n=17). Some respondents, however, spoke of being spared the stress of the false alarm as they did not receive the initial warning (Hawaii News Now, 16/1/2018).

Respondents often spoke about how they had trusted the alert because they had interpreted it in the context of existing North Korea and United States tensions (n=36) and therefore believed the alert to be plausible.

Those who chose to validate the warning did so through a multitude of channels, including social media (n=26), contacting others (n=15), searching websites (n=16), listening for sirens (n=16), watching TV (n=11) or calling authorities (n=3). Based on interview statements describing how residents immediately responded to the warning, we estimate that a large number may not have attempted to validate it (n=64).

Respondents often spoke about how they felt when they received the alert. Most often people described their emotions as fearful (n=51), concerned (n=23), panicked (n=21), upset (n=13) or calm (n=13).

Most respondents undertook protective actions in response to the warning (n=136), most often stating that they attempted to seek shelter within the building they were located in (n=43); called or texted others to alert them (n=23) or called or texted others to express their emotions (n=22). Other actions included packing emergency items (n=17); gathering family members (n=16); attempting to leave a building to seek shelter elsewhere (n=15) and leaving an open space to seek shelter (n=12). Eighteen respondents stated that they did not know what to do when they received the alert.

Respondents also commented on what they observed other people doing. Most commonly others were observed attempting to seek shelter (n=50), crying (n=26), running (n=25) or calling or messaging others (n=13).

When seeking shelter, respondents most often stated that they had attempted to seek shelter within their home (n=34), frequently within the bathroom (n=18). In addition nineteen respondents spoke about sheltering within their hotel. Some commented that they did not know where to seek shelter (n=18).

A small number of respondents stated that they did not take any action (n=16). Reasons for not responding were that respondents thought there was nothing that could be done (n=7); that the warning was false because sirens did not sound (n=4); that the missile would be shot down or would miss (n=2); or that the warning was a joke or hoax (n=2).

Those that mentioned how they had discovered the alert was false found this information through social media (n=21) or via a text message from authorities (n=12). On discovering that the alert was a false alarm, respondents described their emotions as relieved (n=23), concerned (n=7) or upset (n=7).

Respondents commented on how the situation was handled or how warnings could be improved in the future. Most often, respondents were concerned about the lack of safeguards to avoid such a false alarm and that it took too long for authorities to notify the public that the alert was false. In some cases, respondents reflected on their own personal disaster preparedness, noting specific actions that they had not undertaken to be prepared.

Discussion and Conclusion

The Hawaii missile false alarm provides numerous insights into how people behave when warned of an extreme event. Practitioners should note the importance of social media as a communications mechanism, particularly for people to validate warnings and share with others.

The case study demonstrates the role of informal networks in both communicating and validating warnings. Hotels were clearly an important node of communication with their guests, and should always be considered an important network in communicating warnings in at-risk areas with large tourist populations.

Interestingly, it would appear that the population had been primed to respond to such an alert by their knowledge of, or concerns regarding, tensions between North Korea and the United States. This demonstrates the importance of communicating long-range forecasts to build community awareness of a risk, so that individuals will recognise and respond to a warning when it occurs.

Given that the official advice as to what to do in the event of a real alert is for “all residents and visitors to immediately seek shelter in a building or other substantial structure”, it appears that most respondents reacted appropriately. However, consistent with previous Risk Frontiers briefings on community responses to warnings, not everyone responded or knew how to respond. This is a further demonstration that even in extreme circumstances, emergency warnings cannot be relied on to achieve full compliance by communities. This finding should be considered when relying on warning systems to justify the permitting of development in high risk locations.

As for improving warning technologies, the Hawaii Emergency Management Agency has suspended all future drills until a review of the event has been completed, instituted a two-person activation/verification rule for all tests and actual alarms, and introduced a cancellation command that can be activated within seconds of a false alarm.


HAWAII NEWS NOW. 16/1/2018. If you didn’t get the false alert about an inbound missile, this might be why. Available: http://www.hawaiinewsnow.com/story/37269695/if-you-didnt-get-the-false-missile-alert-this-might-be-why [Accessed 27/1/2018].

KRISTOF, N. 29/11/2017. Are we headed toward a new Korean war? New York Times.

Risk Frontiers’ Multi-Peril Workbench 2.4 has now been released!

Workbench 2.4:

  • features a major update to our HailAUS hail loss model to national coverage
  • includes our Demand Surge model that can be applied to all Australian Perils
  • contains updates to FloodAUS, FireAUS, CyclAUS as well as many enhancements to the Workbench itself

Changes are coming to QuakeAUS … have you heard?

Prof. Paul Somerville of Risk Frontiers has been participating in the Geoscience Australia update of the seismic hazard model for Australia through the National Seismic Hazard Assessment (NSHA18) project. We have commenced preparations to update our Australian earthquake loss model, QuakeAUS, and expect to have preliminary results in the first quarter of this year!