Why isn’t the bond market more worried about climate change?

Paul Somerville and Thomas Mortlock


In December 2017, the credit rating agency Moody’s warned U.S. cities and states to prepare for the effects of climate change or risk being downgraded. It explained how it assesses the credit risks to a city or state that is being impacted by climate change — whether that impact is a short-term “climate shock” like a wildfire, hurricane or drought, or a longer-term “incremental climate trend” like rising sea levels or increasing temperatures. It also takes into consideration communities’ preparedness for such shocks and their efforts to adapt to climate trends.

A recent report by Charles Donovan and Christopher Corbishley of Imperial College predicts that countries disproportionately impacted by climate change could have to pay an extra $170 billion in interest payments over the next 10 years. The following article by Henry Grabar, which appeared on Slate on Oct. 28, 2017, explains why the bond market is not more worried about climate change.

The article draws examples from recent flooding in US cities and the infamous National Flood Insurance Program. Parts of the US like Miami, New Orleans and New York are already feeling the effects of sea level rise during extreme weather events, in part because of the low-lying topography and high population density of these coastal areas. In Eastern Australia, shorelines have – until now – broadly been able to keep pace with a rising tidal prism because of antecedent sediment conditions and a relatively steep coastal hinterland.

However, high and rising coastal populations and expanding infrastructure (~85% of Australia’s population currently lives near the coast) leave some big east coast cities like Newcastle, Brisbane and Cairns significantly exposed to higher sea levels in the coming decades.

We should perhaps look to examples in the US and elsewhere as a present-day ‘litmus test’ of financial markets’ response, and a window onto the near future, when sea level rise begins to have a more significant impact on some of the big east coast cities in Australia.


Early this month, when the annual king tide swept ocean water into the streets of Miami, the city’s Republican mayor, Tomás Regalado, used the occasion to stump for a vote. He’d like Miami residents to pass the “Miami Forever” bond issue, a $400-million property tax increase to fund seawalls and drainage pumps (they’ll vote on it on Election Day). “We cannot control nature,” Regalado says in a recent television ad, “but we can prepare the city.”

Miami is considered among the most exposed big cities in the U.S. to climate change. One study predicts the region could lose 2.5 million residents to climate migration by the end of the century. As on much of the Eastern Seaboard, the flooding is no longer hypothetical. Low-lying properties already get submerged during the year’s highest tides. So-called “nuisance flooding” has surged 400 percent since 2006.

Business leaders are excited about the timing of the vote in part because Miami currently has its best credit ratings in 30 years, meaning that the city can borrow money at low rates. Amid the dire predictions and the full moon floods, that rating is a bulwark. It signifies that the financial industry doesn’t think sea level rise and storm risk will prevent Miami from paying off its debts. In December, a report issued by President Obama’s budget office outlined a potential virtuous cycle: Borrow money to build seawalls and the like while your credit is good, and your credit will still be good when you need to borrow in the future.

Figure 1 (A) Even non-cyclonic heavy rain events can leave Miami Beach flooded, putting assets and people at risk. Source: National Weather Service (2015).
(B) Flooded homes in New Jersey after Superstorm Sandy made landfall in 2012. Source: AFP PHOTO/US Coast Guard.

The alternative: Flood-prone jurisdictions go into the financial tailspin we recognize from cities like Detroit, unable to borrow enough to protect the assets whose declining value makes it harder to borrow.   The long ribbon of vulnerable coastal homes from Brownsville to Acadia has managed to stave off that cycle in part thanks to a familiar, federally backed consensus between homebuyers and politicians. Homebuyers continue to place high values on homes, even when they’ve suffered repeated flood damage. That’s because the federal government is generous with disaster aid and its subsidy of the National Flood Insurance Program, which helps coastal homeowners buy new washing machines when theirs get wrecked. Banks require coastal homeowners with FHA-backed mortgages to purchase flood insurance, and in turn, coastal homes are rebuilt again and again and again—even when it might no longer be prudent.

But there’s another element that helps cement the bargain: investors’ confidence that coastal towns will pay back the money they borrow. Homebuyers are irrational. Politicians are self-interested. But lenders—and the ratings agencies that help direct their investments—ought to have a more clinical view. Evaluating long-term risk is exactly their business model. If they thought environmental conditions threatened investments, they would sound the alarm—or just vote with their wallets. They’ve done it before—cities like New Orleans; Galveston, Texas; and Seaside Heights, New Jersey, were all downgraded by rating agencies after damage from Hurricanes Katrina, Ike, and Sandy. But all have since rebounded. There does not appear to be a single jurisdiction in the United States that has suffered a credit downgrade related to sea level rise or storm risk. Yet.

To understand why, it helps to look at communities like Seaside Heights, the boardwalk enclave along the Jersey Shore whose marooned roller coaster provided the definitive image of the 2012 storm. Seaside Heights was given an A3 rating from Moody’s in 2013, meaning “low credit risk.” Ocean County, New Jersey—the county in which Seaside Heights sits—has a AAA rating. In the summer of 2016, before Ocean County sold $31 million in 20-year bonds, neither Moody’s Investors Service nor S&P Global Ratings asked about how climate change might affect its finances, the county’s negotiator told Bloomberg this summer. “It didn’t come up, which says to me they’re not concerned about it.”

The credit rating agencies would deny that characterization—to a point. They do know about sea level rise. They just don’t think it matters yet. In 2015, analysts from Fitch concluded, “sea level rise has not played a material role” in assessing creditworthiness, despite “real threats.” Hurricane Sandy had no discernible effect on the median home prices in Monmouth, Ocean, and Atlantic Counties, which make up New Jersey’s Atlantic Coast. The effect on tourism spending was also negligible.

“We take a lot from history, and historically what’s happened is that these places are desirable to be in,” explains Amy Laskey, a managing director at Fitch Ratings. “People continue to want to be there and will rebuild properties, usually with significant help from federal and state governments, so we haven’t felt it affects the credit of the places we rate.”

There are three reasons for that. The first is that disasters tend to be good for credit, thanks to cash infusions from FEMA’s generous Disaster Relief Fund. “The tax base of New Orleans now is about twice what it was prior to Katrina,” Laskey says, despite a population that remains 60,000 persons shy of its 2005 peak. “Longer term what tends to happen is there’s rebuilding, a tremendous influx of funds from the federal and state governments and private insurers.” Local Home Depots are busy. Rental apartments fill up with construction workers. Contractors have to schedule work months in advance. Look at Homestead, Florida, Laskey advised, a sprawling city south of Miami that was nearly destroyed by Hurricane Andrew. Today it is bigger than ever. “If there was going to be a place that wasn’t going to come back, that would have been it.”

What emerges from the destruction, for the most part, are communities full of properties that are more valuable than they were before, because they’re both newer and better prepared for the next storm. Or as a Moody’s report on environmental risk puts it, “generally disasters have been positive for state finances.” But this is entirely dependent on federal largesse: After Massachusetts’ brutal winter of 2015, FEMA granted only a quarter of the state’s request for aid. Moody’s determined that could negatively impact the credit ratings of local governments that had to shoulder the cost of snow and ice removal.

Second is that people still want to live on the shore. “The amenity value of the beach is something you can enjoy every day of the summer,” says Robert Muir-Wood, the chief research officer at Risk Management Solutions. “People may say, ‘The benefits of living on the beach to my health and wellbeing outweigh the impact of the flood.’” That calculus is strongly influenced by affordable flood insurance policies, but it has not changed. In a way, despite the risks, the sea is a more dependable economic engine for a community than, say, a factory that could shut its doors and move away any minute.  Most bonds get paid off from property taxes. If property values remain high, bondholders have little to worry about. If, on the other hand, property values fall, tax rates must rise. If buildings go into foreclosure, or neighborhoods undergo “buy-outs” to restore wetlands or dunes, more of the burden to pay off that new seawall falls on everyone else.
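The arithmetic behind that burden-shifting is worth making explicit. A toy calculation (all figures invented for illustration) shows how a shrinking assessed property base forces the tax rate up when the debt service on, say, a seawall bond is fixed:

```python
# Toy example (all figures invented): fixed annual debt service on a
# municipal bond, spread over a shrinking assessed property base.
annual_debt_service = 20_000_000              # fixed bond payment, $/year

for assessed_base in (10e9, 8e9, 5e9):        # total taxable property value, $
    # Required levy rate per $1,000 of assessed value
    rate = annual_debt_service / assessed_base * 1000
    print(f"${assessed_base / 1e9:.0f}bn base -> ${rate:.2f} per $1,000")
```

The same fixed obligation that costs $2.00 per $1,000 of value on a $10 billion base costs $4.00 on a $5 billion base, which is the squeeze described above.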

Third: Most jurisdictions are large. New Jersey’s coastal counties also contain thousands of inland homes whose risk exposure is much, much lower. Adam Stern, a co-head of research at Boston’s Breckinridge Capital Advisors, argues that the first credit problems will come for small communities devastated by major storms.

Still, Stern said, his firm looks at these issues. “One of the things we try to get at when we look at an issuer of bonds that’s on the coast: Do you take climate change seriously? Are you planning for that?” Even so, he said, bond buyers—like everyone else—discount the value of future money, and hence future risk. When could the breaking point for the muni market come? Stern predicts it will happen when property values start to discernibly change in reaction to climate risk. It’s a game of chicken between infrastructure investors and homeowners.

Risk Frontiers’ new earthquake model shows reduced losses for Australia

Risk Frontiers has today announced the imminent release of its new probabilistic earthquake loss model for Australia, QuakeAUS 6.0. The updated model, developed by Dr Valentina Koschatzky with input from Risk Frontiers’ Chief Geoscientist, Dr Paul Somerville, incorporates the latest data from Geoscience Australia’s recent revision of the Australian Earthquake Catalogue, which has more than halved the rate of earthquakes exceeding magnitude 4.5.

The updated Risk Frontiers model incorporates new earthquake source and active fault models. The active fault model is based on geologically identified rare and large prehistoric events that are not present in Australia’s short historical record of earthquakes. The model also includes important updates to the exposure data and to soil classification and amplification.

Dr Paul Somerville, Risk Frontiers’ Chief Geoscientist said:

“Compared with the previous version of Risk Frontiers’ QuakeAUS model, losses have generally decreased across the country (average annual loss is 80% and the 200-year return period loss is 63% of former values on an indicative national portfolio) due to the update in the historical catalogue. This effect is partly mitigated at longer return periods in the regions where active faults have now been modelled.

Changes in losses are not uniform spatially or temporally. Sydney, for example, shows a drastic reduction in losses at every return period, while the losses for Melbourne show a slight increase. In other areas, such as Adelaide, the losses are lower than in the previous model for short return periods, but that trend is reversed for return periods greater than 1,000 years”.

Risk Frontiers’ General Manager, Dr Ryan Crompton said:

“The release of this updated model demonstrates our commitment to deliver world class research on Australian natural hazard risks to the domestic and international insurance industry.

“We live and work here in Australia and actively participate in research and development in the fields of earthquake hazards and earthquake engineering through participation in the Australian Earthquake Engineering Society, of which Dr Paul Somerville is Past President, and in the Science Advisory Panel’s review of Geoscience Australia’s National Seismic Hazard Assessment. This local expertise means we are ideally placed to bring leading edge knowledge about local earthquake risk to the insurance and reinsurance industries”.

For media enquiries regarding the release of the new earthquake model please contact Andrew Gissing on andrew.gissing@riskfrontiers.com.

A global slowdown of tropical-cyclone translation speed and implications for flooding

Thomas Mortlock, Risk Frontiers.

As the Earth’s atmosphere warms, the atmospheric circulation changes. These changes vary by region and time of year, but there is evidence to suggest that anthropogenic warming causes a general weakening of summertime tropical circulation. Because tropical cyclones are carried along within the ambient environmental wind, there is an expectation that the translation speed of tropical cyclones has slowed, or will slow, with warming.

Severe Tropical Cyclone Debbie, which made landfall near Mackay in March 2017, was an unusually slow event, crossing the coast at only seven kilometres per hour. Likewise, the “stalling” of Hurricane Harvey over Texas in August 2017 is another example of a recent, slow-moving event. While two events by no means constitute a trend, slow-moving cyclones can be especially damaging in terms of the rainfall volumes that are precipitated out over a single catchment or town (Fig. 1). A slow translation speed means strong winds are sustained for longer periods of time, and it can also increase the surge-producing potential of a tropical cyclone.

Figure 1. Flooding during TC Debbie; left – flood gauge in the Fitzroy River; centre – flooded runway at Rockhampton Airport; right – the flooded Logan River and Pacific Motorway. Source: Office of the Inspector-General Emergency Management (2017).

But have changes in the translation speeds of tropical cyclones been observed in the Australian region and can we draw any conclusions about any impact of these changes on related flooding?

A recent article published in the journal Nature by James Kossin of NOAA looks at tropical cyclone translation speeds from 1949 through to 2016, using data from the US National Hurricane Center (NHC) and Joint Typhoon Warning Center (JTWC), and finds a 10 percent global decrease. For western North Pacific and North Atlantic tropical cyclones, he reports a slowdown over land areas of 30 percent and 20 percent respectively, and a slowdown of 19 percent over land areas in Australia.

The following is an extract from Kossin’s article, followed by some comments on the significance of his work for the Australian region. The full article and associated references are available here.

Kossin’s article – in short

Anthropogenic warming, both past and projected, is expected to affect the strength and patterns of global atmospheric circulation. Tropical cyclones are generally carried along within these circulation patterns, so their past translation speeds may be indicative of past circulation changes. In particular, warming is linked to a weakening of tropical summertime circulation and there is a plausible a priori expectation that tropical-cyclone translation speed may be decreasing. In addition to changing circulation, anthropogenic warming is expected to increase lower-tropospheric water-vapour capacity by about 7 percent per degree (Celsius) of warming. Expectations of increased mean precipitation under global warming are well documented. Increases in global precipitation are constrained by the atmospheric energy budget but precipitation extremes can vary more broadly and are less constrained by energy considerations.

Because the amount of local tropical-cyclone-related rainfall depends on both rain rate and translation speed (with a decrease in translation speed having about the same local effect, proportionally, as an increase in rain rate), each of these two independent effects of anthropogenic warming is expected to increase local rainfall.
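The proportionality Kossin describes can be made concrete with a back-of-envelope sketch (all numbers invented): the rainfall accumulated at a fixed point scales with the rain rate multiplied by the time the storm spends overhead, and that time is inversely proportional to translation speed.

```python
# Back-of-envelope illustration (invented numbers): local rainfall total
# scales with rain rate and inversely with translation speed.
def local_rainfall_mm(rain_rate_mm_per_hr, storm_extent_km, translation_speed_kmh):
    hours_overhead = storm_extent_km / translation_speed_kmh
    return rain_rate_mm_per_hr * hours_overhead

print(local_rainfall_mm(20, 100, 20))  # 100 mm
print(local_rainfall_mm(20, 100, 10))  # 200 mm: halving speed doubles the total,
print(local_rainfall_mm(40, 100, 20))  # 200 mm: the same effect as doubling rain rate
```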

Time series of annual-mean global and hemispheric translation speed are shown in Fig. 2, based on global tropical-cyclone ‘best-track’ data. A highly significant global slowdown of tropical-cyclone translation speed is evident, of −10 percent over the 68-yr period 1949–2016. During this period, global-mean surface temperature has increased by about 0.5 °C. The global distribution of translation speed exhibits a clear shift towards slower speeds in the second half of the 68-yr period, and the differences are highly significant throughout most of the distribution.

Figure 2. Global (a) and hemispheric (b) time series of annual-mean tropical-cyclone translation speed and their linear trends. Grey shading indicates 95 percent confidence bounds. Source: Kossin (2018)

This slowing is found in both the Northern and Southern Hemispheres (Fig. 2b) but is stronger and more significant in the Northern Hemisphere, where the annual number of tropical cyclones is generally greater. The time series for the Southern Hemisphere exhibits a change-point around 1980, but the reason for this is not clear.

The trends in tropical-cyclone translation speed and their signal-to-noise ratios vary considerably when the data are parsed by region but slowing over water is found in every basin except the northern Indian Ocean. Significant slowing of −20 percent in the western North Pacific Ocean and of −15 percent in the region around Australia (Southern Hemisphere, east of 100° E) are observed.

When the data are constrained within global latitude belts, significant slowing is observed at latitudes above 25° N and between 0° and 30° S. Slowing trends near the equator tend to be smaller and not significant, whereas there is a substantial (but insignificant) increasing trend in translation speed at higher latitudes in the Southern Hemisphere.

Figure 3. Time series of annual-mean tropical-cyclone translation speed and their linear trends over land and water for individual ocean basins. Source: Kossin (2018).

Changes in tropical-cyclone translation speed over land vary substantially by region (Fig. 3). There is a substantial and significant slowing trend over land areas affected by North Atlantic tropical cyclones (20 percent reduction over the 68-yr period), by western North Pacific tropical cyclones (30 percent reduction) and by tropical cyclones in the Australian region (19 percent reduction, but the significance is marginal).

In contrast, the tropical-cyclone translation speeds over land areas affected by eastern North Pacific and northern Indian tropical cyclones, and of tropical cyclones that have affected Madagascar and the east coast of Africa, all exhibit positive trends, although none are significant.

In addition to the global slowing of tropical-cyclone translation speed identified here, there is evidence that tropical cyclones have migrated poleward in several regions. The rate of migration in the western North Pacific was found to be large, which has had a substantial effect on regional tropical-cyclone-related hazard exposure.

These recently identified trends in tropical-cyclone track behaviour emphasize that tropical-cyclone frequency and intensity should not be the only metrics considered when establishing connections between climate variability and change and the risks associated with tropical cyclones, both past and future.

These trends further support the idea that the behaviours of tropical cyclones are being altered in societally relevant ways by anthropogenic factors. Continued research into the connections between tropical cyclones and climate is essential to understanding and predicting the changes in risk that are occurring on a global scale.

Significance for the Australian region

While this is an interesting piece of work, the results for the Southern Hemisphere and the Australian region are less clear than those for the North Atlantic and North Pacific basins.

The trend shown in Fig. 2b for the whole of the Southern Hemisphere is not significant and is clearly composed of two separate trends, each spanning around 30 years. Assuming a homogeneous dataset, the time series may be reflecting the strong influence of inter-decadal climate forcing.

In the Southern Hemisphere, multi-decadal climate-ocean variability, such as the Pacific Decadal Oscillation (PDO) and the Indian Ocean Dipole (IOD), has a large influence on decadal-scale climate variability (particularly in Australia) and can mask a linear, anthropogenically forced trend.

The paper also notes that global slowdown rates are only significant over water (which makes up around 90 percent of the best-track data used), whereas the trend for the 10 percent of global data corresponding to cyclones over land (where rainfall effects become most societally relevant) is not significant. It is therefore unclear, at a global scale, whether tropical cyclones have slowed down over land. The trend for the Australian region (Fig. 3f, Southern Hemisphere east of 100° E), for slowdowns both over land and over water (approx. −19 percent), is only marginally significant. Further work could analyse translation speeds in the Australian region using the Bureau of Meteorology’s tropical cyclone database.
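Such an analysis is conceptually simple: translation speed is the great-circle distance between successive best-track fixes divided by the time between them. The sketch below assumes hypothetical six-hourly latitude/longitude fixes; the Bureau’s database has its own format and would need its own parsing.

```python
# Sketch of estimating translation speed from successive best-track fixes.
# The fixes below are invented; real data would come from the Bureau of
# Meteorology's tropical cyclone database.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

fixes = [(-18.0, 148.5), (-18.4, 148.1), (-18.9, 147.8)]  # 6-hourly positions
speeds = [haversine_km(*a, *b) / 6.0 for a, b in zip(fixes, fixes[1:])]
print([f"{s:.1f} km/h" for s in speeds])
```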

As with previous studies of changes to tropical cyclone behaviour in Australia, results are unclear. The relatively short time span of consistent records, combined with high year-to-year variability, makes it difficult to discern any clear trends in tropical cyclone frequency or intensity in this region (CSIRO, 2015).

For the period 1981 to 2007, no statistically significant trends in the total numbers of cyclones, or in the proportion of the most intense cyclones, have been found in the Australian region, South Indian Ocean or South Pacific Ocean (Kuleshov et al. 2010). However, observations of tropical cyclone numbers from 1981–82 to 2012–13 in the Australian region show a decreasing trend that is significant at the 93–98 percent confidence level when variability associated with ENSO is accounted for (Dowdy, 2014). Only limited conclusions can be drawn regarding tropical cyclone frequency and intensity in the Australian region prior to 1981, due to a lack of data. However, a long-term decline in numbers on the Queensland coast has been suggested (Callaghan and Power, 2010), and northeast Australia is also a region of projected decrease in tropical cyclone activity, including category 4–5 storms, according to Knutson et al. (2015).

In summary, based on global and regional studies, tropical cyclones are in general projected to become less frequent, with a greater proportion of high-intensity storms (stronger winds and greater rainfall). This may be accompanied by a general slowdown in translation speed, and a greater proportion of storms may reach further south (CSIRO, 2015).

The take home message? The known-unknowns are still quite a bit greater than the known-knowns.

References

CALLAGHAN, J. & POWER, S. 2010. A reduction in the frequency of severe land-falling tropical cyclones over eastern Australia in recent decades. Climate Dynamics.

CSIRO and BoM [CSIRO] 2015. Climate Change in Australia Information for Australia’s Natural Resource Management Regions: Technical Report, CSIRO and Bureau of Meteorology, Australia, pp 222.

DOWDY, A. J. 2014. Long-term changes in Australian tropical cyclone numbers. Atmospheric Science Letters.

KNUTSON, T.R., SIRUTIS, J.J., ZHAO, M., TULEYA, R.E., BENDER, M., VECCHI, G.A., VILLARINI, G. & CHAVAS, D. 2015.  Global Projections of Intense Tropical Cyclone Activity for the Late Twenty-First Century from Dynamical Downscaling of CMIP5/RCP4.5 Scenarios. Journal of Climate, 28, 7203-7224.

KOSSIN, J.P. 2018. A global slowdown of tropical-cyclone translation speed. Nature 558, 104-107.

KULESHOV, Y., FAWCETT, R., QI, L., TREWIN, B., JONES, D., MCBRIDE, J. & RAMSAY, H. 2010. Trends in tropical cyclones in the South Indian Ocean and the South Pacific Ocean. Journal of Geophysical Research-Atmospheres, 115.

OFFICE OF THE INSPECTOR-GENERAL EMERGENCY MANAGEMENT 2017. The Cyclone Debbie Review: Lessons for delivering value and confidence through trust and empowerment. Report 1: 2017-18.

Cyclocopters: Drones of the future

Jacob Evans, Risk Frontiers (jacob.evans@riskfrontiers.com)

Cyclocopters are a new type of drone that has recently shown success in development, garnering significant interest from leading robotics institutions and the US Army. The commercially available drones most people are familiar with are referred to as polycopters; these typically have four or six equally spaced helicopter-style rotors. They have a wide range of uses, from recreational to military, and Risk Frontiers has recently used drones to analyse disaster areas after natural disasters such as volcanic lahars. Though these types of drones offer a wide variety of applications and already play a significant role in society, cyclocopters are viewed as the next stage in their evolution, with the potential to survey extensively during natural disasters and support risk assessment.

The cyclocopter concept was developed about 100 years ago; however, only recently have the materials and technology been available to turn this futuristic-looking machine into reality. A cyclocopter can be visualised as an aerial paddleboat, having two or four cycloidal rotors (cyclorotors) (Figure 1). The rotors stir the air into vortices, creating lift, thrust and control. Each rotor has multiple (conventionally four) aerofoils, whose pitch (angle) can be adjusted in synchronisation to move the cyclocopter in any direction perpendicular to the cyclorotor. There is also a tail propeller to keep the drone level. Hence the aerodynamics can be viewed as insect-like: imagine a dragonfly.

Figure 1: The world’s smallest functional cyclocopter. Image: Moble Benedict/Texas A&M University.

The cyclocopter design has several advantages. Unlike conventional drones, which, like a helicopter, tilt in the direction of flight, cyclocopters remain level. Their engineering design also provides them with better manoeuvrability, forward speed and altitude limits, as well as making them less susceptible to wind gusts. They are also much quieter because of their lower blade-tip speeds; high tip speeds are responsible for the typical noise from bladed aircraft. However, the most significant advantage is that these drones actually perform better when scaled down: the vortices created by the cyclorotor configuration become proportionally more powerful as the size shrinks. This makes cyclocopters the leading candidate for miniaturised drones, with the ability to withstand strong winds during natural disasters and survey inaccessible areas.

Research into cyclocopters in the USA is being carried out at the University of Maryland, Texas A&M University and the University of California, Berkeley, formerly as part of the Micro Autonomous Systems and Technology (MAST) programme funded by the US Army, and now under the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme. Over the last 10 years, these groups have developed fully functional cyclocopters whilst reducing the size and weight from 500 g to just 29 g. A video of the MAST research groups’ latest cyclocopter can be found here (https://youtu.be/WTUCCkTcIW0). The next step in their evolution involves further miniaturisation and optimisation, as well as getting drones to swarm and coordinate together.

Commercial cyclocopters are thought to be only a couple of years away. They could play a significant part in saving lives. A common concept is the formation of an advanced network of drones with different capabilities. In search and rescue operations during natural disasters, cyclocopters could quickly scour the disaster area, including inaccessible spots, alerting authorities or communicating with larger ambulance drones that could provide survivors with necessities or even airlift them to safety. During gusty bushfires, a network of stable cyclocopters could detect ignition points or homes at risk, communicating with larger extinguishing drones.

For cyclocopters individually, the military application presented by the MAST research group also focuses on saving lives, with drones flying ahead of troops to look over ridges and embankments, ensuring the soldiers’ safety. For the insurance industry, they could be used for rapid assessment of unsafe and contaminated premises. From a perils standpoint, tiny cyclocopters could be used to access obstructed areas, and their stability and coordination would allow faster and more accurate mapping of disaster relief areas, providing invaluable information for modelling.

Climate change may lead to bigger atmospheric rivers

The following briefing, by Esprit Smith of NASA’s Jet Propulsion Laboratory, was published on the NASA website on 24 May 2018.

The study described below considers projections based on two Representative Concentration Pathways (RCPs) – 4.5 and 8.5. There are four pathways in total (including RCP2.6 and RCP6) and the findings of the IPCC Fifth Assessment Report are based upon these. Most of the discussion of results presented below is based on the RCP8.5 analysis, the most extreme scenario, which assumes minimal effort to reduce emissions. Toward the end of the briefing the results from the RCP4.5 analysis are noted as follows: ‘The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes.’


A new NASA-led study shows that climate change is likely to intensify extreme weather events known as atmospheric rivers across most of the globe by the end of this century, while slightly reducing their number.  The new study projects atmospheric rivers will be significantly longer and wider than the ones we observe today, leading to more frequent atmospheric river conditions in affected areas.

“The results project that in a scenario where greenhouse gas emissions continue at the current rate, there will be about 10 percent fewer atmospheric rivers globally by the end of the 21st century,” said the study’s lead author, Duane Waliser, of NASA’s Jet Propulsion Laboratory in Pasadena, California. “However, because the findings project that the atmospheric rivers will be, on average, about 25 percent wider and longer, the global frequency of atmospheric river conditions — like heavy rain and strong winds — will actually increase by about 50 percent.” The results also show that the frequency of the most intense atmospheric river storms is projected to nearly double.
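A rough plausibility check on those numbers: if the frequency of atmospheric river conditions at a fixed location scales with the number of storms multiplied by their length and width, the quoted changes combine as follows (a simplification; the study’s detection algorithm captures more than this geometric scaling).

```python
# Back-of-envelope check: 10% fewer ARs, each ~25% longer and ~25% wider.
change = 0.90 * 1.25 * 1.25
print(f"{(change - 1) * 100:.0f}% more AR conditions")  # ~41%, the same order
# of magnitude as the ~50% increase quoted in the study.
```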

Atmospheric rivers are long, narrow jets of air that carry huge amounts of water vapor from the tropics to Earth’s continents and polar regions. These “rivers in the sky” typically range from 250 to 375 miles (400 to 600 kilometers) wide and carry as much water — in the form of water vapor — as about 25 Mississippi Rivers. When an atmospheric river makes landfall, particularly against mountainous terrain (such as the Sierra Nevada and the Andes), it releases much of that water vapor in the form of rain or snow.

These storm systems are common — on average, there are about 11 present on Earth at any time. In many areas of the globe, they bring much-needed precipitation and are an important contribution to annual freshwater supplies. However, stronger atmospheric rivers — especially those that stall at landfall or that produce rain on top of snowpack — can cause disastrous flooding. Atmospheric rivers show up on satellite imagery, including in data from a series of actual atmospheric river storms that drenched the U.S. West Coast and caused severe flooding in early 2017.

In early 2017, the Western United States experienced rain and flooding from a series of storms flowing to America on multiple streams of moist air, each individually known as an atmospheric river. Image credit: NASA/JPL-Caltech

The study

Climate change studies on atmospheric rivers to date have been mostly limited to two specific regions, the western United States and Europe. They have typically used different methodologies for identifying atmospheric rivers and different climate projection models — meaning results from one are not quantitatively comparable to another.

The team sought to provide a more streamlined and global approach to evaluating the effects of climate change on atmospheric river storms.   The study relied on two resources — a set of commonly used global climate model projections for the 21st century developed for the Intergovernmental Panel on Climate Change’s latest assessment report, and a global atmospheric river detection algorithm that can be applied to climate model output. The algorithm, developed earlier by members of the study team, identifies atmospheric river events from every day of the model simulations, quantifying their length, width and how much water vapor they transport.
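The paper itself defines the algorithm; as a hedged illustration of the general approach in this literature, atmospheric rivers are often detected by thresholding integrated vapour transport (IVT) and keeping only long, narrow features. The sketch below is generic, with invented thresholds and a fake test field, and is not necessarily the study’s exact criteria.

```python
# Generic sketch of an IVT-threshold atmospheric-river detector. All
# thresholds and the random test field are illustrative only.
import numpy as np
from scipy import ndimage

def detect_ar_candidates(ivt, ivt_threshold=250.0, km_per_cell=100.0,
                         min_length_km=2000.0, min_aspect=2.0):
    """Label contiguous high-IVT regions and keep elongated ones.

    ivt: 2-D array of IVT magnitude (kg m-1 s-1) for one model time step.
    """
    labels, n = ndimage.label(ivt >= ivt_threshold)
    candidates = []
    for region in ndimage.find_objects(labels):
        ys, xs = region
        span = sorted(((ys.stop - ys.start), (xs.stop - xs.start)))
        width_km, length_km = span[0] * km_per_cell, span[1] * km_per_cell
        # Keep only long, narrow filaments of strong vapour transport
        if length_km >= min_length_km and length_km >= min_aspect * width_km:
            candidates.append({"length_km": length_km, "width_km": width_km})
    return candidates

ivt = np.random.gamma(shape=2.0, scale=80.0, size=(90, 180))  # fake field
print(len(detect_ar_candidates(ivt)))
```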

The team applied the atmospheric river detection algorithm to both actual observations and model simulations for the late 20th century. Comparing the data showed that the models produced a relatively realistic representation of atmospheric rivers for the late 20th century climate.  They then applied the algorithm to model projections of climate in the late 21st century. In doing this, they were able to compare the frequency and characteristics of atmospheric rivers for the current climate with the projections for future climate.

The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes. Together, the consideration of the two climate scenarios indicates a direct link between the extent of warming and the frequency and severity of atmospheric river conditions.

What does this mean?

The significance of the study is two-fold. First, “knowing the nature of how these atmospheric river events might change with future climate conditions allows for scientists, water managers, stakeholders and citizens living in atmospheric river-prone regions [e.g. western N. America, western S. America, S. Africa, New Zealand, western Europe] to consider the potential implications that might come with a change to these extreme precipitation events,” said Vicky Espinoza, postdoctoral fellow at the University of California-Merced and first author of the study. Second, the study and its approach provide a much-needed, uniform way to research atmospheric rivers on a global level, providing a foundation for analyzing and comparing them that did not previously exist.

Limitations

Data across the models are generally consistent — all support the projection that atmospheric river conditions are linked to warming and will increase in the future; however, co-author Marty Ralph of the University of California, San Diego, points out that there is still work to be done. “While all the models project increases in the frequency of atmospheric river conditions, the results also illustrate uncertainties in the details of the climate projections of this key phenomenon,” he said. “This highlights the need to better understand why the models’ representations of atmospheric rivers vary.”

The study, titled “Global Analysis of Climate Change Projection Effects on Atmospheric Rivers,” was recently published in the journal Geophysical Research Letters.

Drivers risk death when driving into flood water: new study

This article by Fran Molloy was published in yesterday’s issue of  Macquarie University’s The Lighthouse.

New research shows that most Australian drivers think they can work out when it is safe to enter flood waters – as foolhardy Hobart drivers proved during last week’s natural disaster.

Read more: https://lighthouse.mq.edu.au/article/drivers-risk-death-when-driving-into-floodwater-new-study

Newsletter Volume 17, Issue 3

The new QuakeAUS: impact of revised GA earthquake magnitudes on hazards and losses

Paul Somerville and Valentina Koschatsky, Risk Frontiers

Geoscience Australia (GA) is updating the seismic hazard model for Australia through the National Seismic Hazard Assessment (NSHA18) project (Allen et al., 2017). The update includes corrections of local magnitude (ML) measurements and the conversion of ML values to moment magnitude (MW). Moment magnitude is the preferred magnitude type for probabilistic seismic hazard analyses, and all modern ground motion prediction equations use this magnitude type. This is because ML is a purely empirical estimate of earthquake size, whereas MW is a theoretically based measure of earthquake size, derived from the seismic moment, M0, of the earthquake, which is given by:

M0 = μ A D

where A is the rupture area of the fault, D is the average displacement on the fault and μ is the shear modulus of rock. The seismic moment quantifies the size of each of the pair of opposing force couples that constitute the force representation of the shear dislocation on the fault plane. For comparison with the more familiar magnitude scale, MW is calibrated to M0 (in dyne-cm) using the following equation:

MW = 2/3 log10 M0 – 10.7
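As a worked example of this relation (with M0 in dyne-cm, the convention under which the −10.7 constant applies), the conversion and its inverse can be written in a few lines. Note that a shift of 0.3 magnitude units, roughly the ML-to-MW difference discussed below, corresponds to nearly a factor of three in seismic moment.

```python
# Worked example of the moment-magnitude relation quoted above (M0 in dyne-cm).
import math

def moment_to_magnitude(m0):
    """MW = (2/3) * log10(M0) - 10.7"""
    return (2.0 / 3.0) * math.log10(m0) - 10.7

def magnitude_to_moment(mw):
    """Inverse: seismic moment in dyne-cm."""
    return 10 ** (1.5 * (mw + 10.7))

# A 0.3-unit magnitude difference corresponds to ~2.8x the seismic moment:
print(magnitude_to_moment(5.0) / magnitude_to_moment(4.7))  # ~2.82
```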

Prior to the early 1990s, most Australian seismic observatories relied on the Richter (1935) local magnitude (ML) formula developed for southern California. At regional distances (where many earthquakes are recorded), the Richter scale tends to overestimate ML relative to modern Australian magnitude formulae. Because of this likely overestimation, pre-1990 magnitude estimates based on the inappropriate Californian formulae need to be corrected. A process was employed that systematically corrected local magnitudes using the difference between the original (inappropriate) magnitude formula (e.g., Richter, 1935) and the Australian-specific correction curves (e.g., Michael-Leiba and Malafant, 1992) at a distance determined by the nearest recording station likely to have recorded a specific earthquake.

The relationship between ML and MW developed for the NSHA18 demonstrates that MW is approximately 0.3 magnitude units lower than ML for moderate-to-large earthquakes (4.0 < MW < 6.0). Together, the ML corrections and the subsequent conversions to MW more than halve the number (and consequently the annual rate) of earthquakes exceeding magnitude 4.5 and 5.0, as shown in Figure 1. This has downstream effects on hazard calculations when forecasting the rate of rare large earthquakes using Gutenberg-Richter magnitude-frequency distributions in PSHA. A secondary effect of the ML to MW magnitude conversion is that it tends to increase the number of small and moderate-sized earthquakes relative to large earthquakes. This increases the Gutenberg-Richter b-value, which in turn further decreases the relative annual rates of larger, potentially damaging earthquakes (Allen et al., 2017).
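To see why a higher b-value matters, consider the Gutenberg-Richter relation log10 N = a − b M, where N is the annual rate of events of magnitude M or greater. A minimal sketch with hypothetical catalogue parameters (not Risk Frontiers’ model values):

```python
# Illustration of how a higher Gutenberg-Richter b-value suppresses the
# rate of large earthquakes. The a and b values below are hypothetical.
def annual_rate(magnitude, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= `magnitude`: N = 10**(a - b*M)."""
    return 10 ** (a - b * magnitude)

for b in (0.8, 1.0, 1.2):
    print(f"b = {b}: rate of M>=6.0 events = {annual_rate(6.0, b=b):.5f} per year")
```

With a held fixed, raising b from 0.8 to 1.2 cuts the modelled rate of magnitude 6+ events by more than two orders of magnitude, which is the direction of the effect described above.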

Figure 1. Cumulative number of earthquakes with magnitudes equal to or exceeding 4.5 (left) and 5.0 (right) for earthquakes in eastern Australia (east of 135°E longitude) from 1900 to 2010. The different curves show different stages of the NSHA18 catalogue preparation: original catalogue magnitudes, modified magnitudes (only local magnitude modified) and preferred MW (for all earthquakes). Source: Modified from Allen et al., (2017).

Preliminary seismic hazard calculations by Allen et al. (2017b) using the new earthquake source catalogue are compared with the existing PGA hazard map for Be site conditions for a return period of 500 years in Figure 2. We have updated the earthquake source model to incorporate the new GA catalogue into QuakeAUS, and obtained a new hazard map for Australia similar to that in Figure 2.

Figure 2. Existing (left) and draft (right) PGA maps for site class Be for a return period of 500 years. Source: Modified from Allen et al. (2017).

Preliminary loss estimates using the new version of QuakeAUS show large-scale reductions. Losses in a national residential portfolio for the 200-year ARP and for AAL are 30% and 35% of their former values, respectively. The changes are not regionally uniform, with the largest reductions occurring in Perth and the smallest occurring in Darwin. Among the five perils modelled on Risk Frontiers’ Multiperil Workbench (earthquake, fire, flood, hail and tropical cyclone), earthquake previously had the largest 200-year ARP loss but now lies below tropical cyclone in a near tie with flood and hail, and its AAL has dropped from second last to last, below hail.

We expect to release QuakeAUS 6.0, including these changes, early in the third quarter of 2018.

References

Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi (2017). An updated National Seismic Hazard Assessment for Australia: Are we designing for the right earthquakes? Proceedings of the Annual Conference of the Australian Earthquake Engineering Society in Canberra, November 24-26, 2017.
Michael-Leiba, M., and Malafant, K. (1992). A new local magnitude scale for southeastern Australia, BMR J. Aust. Geol. Geophys. Vol 13, No 3, pp 201-205.

Tathra 2018 Bushfires

James O’Brien, Mingzhu Wang, Jacob Evans

The 2017/18 bushfire season across southeastern Australia burned through 237,869 hectares from 11,182 fires during a hot summer, prompting seven Emergency Warnings, 25 Watch and Act alerts and 16 Total Fire Ban days [1]. Despite the high number of fires, losses were limited (two homes were lost in Comboyne) until the Tathra fire. True to its mission of better understanding natural disasters, Risk Frontiers produced in-depth intelligence from aerial photography, field survey and GIS analytics. In what follows we report the results of these exercises.

Observations from the field

The early December 2017 heatwave (December was the 5th hottest on record) set the scene for the bushfires in New South Wales on 18 March 2018. High temperatures combined with high winds established the conditions under which an electrical fault apparently triggered the fire. The bushfires in Tathra destroyed around 65 homes, damaged 48 homes, destroyed 35 caravans and cabins and burned 1,250 hectares of bushland, in addition to the emotional trauma experienced by survivors. Fortunately there were no casualties.

Risk Frontiers scientists (James, Mingzhu and Jacob) arrived in Tathra on 10 April, a little over three weeks after the peak of the bushfire damage; access was delayed by the high proportion (around 50%) of properties containing asbestos. Our objective was to investigate the most affected areas in Tathra.

New above-ground electricity infrastructure in the region was a clear sign of the work undertaken to repair the obliterated power network and an indication of the extensive damage to infrastructure that left Tathra without power and water for a number of days following the fire.

We were able to cover the whole town on foot in less than a day, with the exception of some isolated areas in Reedy Swamp, where the fire started and a small number of houses are located. This survey was useful for qualitatively gauging the assumptions used in our bushfire loss model, FireAUS. Our observations can be summarised as follows:

Zero-One (binary) damage ratios: We saw very few cases of partial damage to structures. It appears that once fire hits a structure during a bushfire it will almost certainly be completely destroyed. That’s not to say that the adjacent structures at the same address will always burn; we observed several cases of sheds that were burnt while the main house was unscathed and vice versa. The partial damage we did observe was charring to the sides of properties, where it appeared an active effort had been made to save the property.

Statistical dependence of bushfire risk on distance to bush: As described above, there is no clear pattern in the spatial distribution of damage when observed at close range. However, the statistics of bushfire damage based on aggregated data from a broad area do show the importance of a property’s distance to the nearby bush (see Figure 2). Whether a property burns in a bushfire seems to be determined by random chance, and this chance is conditioned by the distance to the bushland. In FireAUS, we assume that any two addresses equidistant from the bush have equal probabilities of burning.

Independence of risk from building type: We observed damage to different construction types: unreinforced masonry, wood, fibro, mobile homes and even stone. There were destroyed brick houses away from the bush and spared wood and fibro houses close to the bush, and vice versa. Damage in this locality appears independent of building type, even though it is globally influenced by proximity to bushland. If there are other risk factors that could explain the building damage, they are not visible in a short inspection and would require a full forensic investigation of each damaged building. The prevailing view was that newer homes generally seemed to perform better than older homes; in one case a home built within the last 5 years sustained minimal bushfire damage (timber steps were destroyed), although that property was also actively defended by neighbours.

Mapping damage

Figure 1 – Vicinity of Tathra / Reedy Swamp bushfire with prevailing wind direction on the day indicated by arrow and X indicating approximate ignition point.

As the events in Tathra unfolded, Risk Frontiers began gathering data to provide a view of this event. Our damage analysis is based on post-fire ground surveys and RFS burned-area data captured from live data feeds on the Sunday. We also acquired 25 km2 of pre-fire satellite imagery (WorldView-2, 2 m resolution) for vegetation analysis and utilised Pitney Bowes GeoVision for building locations and bushland/tree data.

Figure 2 provides a complete map of damaged properties (house icons) overlain with bushland boundaries (green shading) derived from GeoVision data. It is clear that a number of these properties are surrounded by bushland and are therefore deemed to be at a distance of zero metres from the urban and bushland interface. Properties not within the bushland areas are assigned the linear distance in metres to the nearest pre-fire bushland area greater than 0.5 sq km, not necessarily the bushland that burned (a minimal sketch of this assignment follows). Further analysis could be undertaken to classify the burned vegetation – however, in the Tathra region, the majority of bushland burned around properties and it is difficult to recover the clear timeline of local ignition.
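The sketch below uses shapely with invented geometries in a local metre-based coordinate system; the actual analysis used GeoVision bushland polygons and surveyed building locations.

```python
# Hedged sketch of the distance-to-bushland assignment described above.
# Geometries are invented; coordinates are in metres.
from shapely.geometry import Point, Polygon

def distance_to_bushland(home, bush_polygons, min_area_m2=0.5e6):
    """Distance (m) to the nearest bushland polygon >= 0.5 sq km.

    Points inside a polygon return 0 m, matching the convention in the text.
    """
    large = [p for p in bush_polygons if p.area >= min_area_m2]
    return min(home.distance(p) for p in large)

bush = [Polygon([(0, 0), (0, 1000), (800, 1000), (800, 0)])]  # 0.8 sq km
print(distance_to_bushland(Point(900, 500), bush))  # 100.0 (outside)
print(distance_to_bushland(Point(400, 500), bush))  # 0.0 (inside)
```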

There are eyewitness reports of ember attack, and the pattern of damage shows houses destroyed at some distance from the bushland interface, with adjacent properties destroyed either by further ember attack or by contagion from a neighbouring property.

Figure 2 – Location of destroyed homes and adjacent bushland in Tathra classified from pre-fire imagery and GeoVision (Minimum area threshold for contiguous vegetation: 500 m2)

Individual data

While Figure 2 demonstrates the spatial distribution of destroyed homes graphically, it is useful to quantify the loss as a function of distance to adjacent bushland. The data are presented in cumulative form so as to be consistent with other Risk Frontiers reports and other research. Figure 3 shows the cumulative percentage of destroyed buildings in relation to nearby bushland for recent major bushfires in Australia:

  • January 2003 Canberra bushfires (damaged suburbs include Duffy)
  • February 2009 “Black Saturday” bushfires in Victoria (damaged suburbs include Marysville and Kinglake)
  • February 2011 Perth bushfires (damaged suburbs include Roleystone)
  • January 2013 Tasmania bushfires (damaged suburbs include Dunalley)
  • January 2016 Yarloop, WA bushfire

Some new statistics and evidence that emerged from the bushfire damage in Tathra are as follows (a short worked example of these cumulative statistics follows the list):

  • 42% of destroyed homes were within 0m of classified bushland boundaries.
  • 50% of surveyed destroyed homes were within 30m of the bushland interface and 72.6% of surveyed homes destroyed were within 100m of the bushland interface. These results closely match the findings previously presented in the “Bushfire Penetration into Urban Areas in Australia” report prepared for the 2009 Victorian Bushfires Royal Commission by Risk Frontiers.
  • No homes were destroyed further than 630m from bushland.
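Cumulative statistics of this kind are straightforward to compute; the sketch below uses an invented list of distances (the real figures above come from the Tathra field survey).

```python
# Cumulative percentage of destroyed homes within given distances of
# bushland. The distances below are invented for illustration only.
import numpy as np

distances_m = np.array([0, 0, 0, 10, 25, 30, 45, 80, 95, 120, 250, 630])

for d in (0, 30, 100, 630):
    pct = 100.0 * np.mean(distances_m <= d)
    print(f"{pct:5.1f}% of destroyed homes within {d} m of bushland")
```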
Figure 4 – A view of a destroyed property from Riverview Crescent, Tathra looking west in the direction of the fire’s ignition point across the Bega River. Note the burned vegetation in the distance and the lower green belt on the river’s edge demonstrating ember attack across the river.
Figure 5 – Map and aerial imagery showing property losses in the vicinity of Oceanview Drive, Tathra (1) in top left corner. Note the proximity to bushland immediately behind those properties and the distance to those lost in the lower right corner at Francis Hollis (2) and Bay View Drive (3), suggesting ember attack. House icons again denote destroyed properties. Wind direction was from top left to bottom right of image, red line and shading showing burnt boundary.

[1] https://www.rfs.nsw.gov.au/news-and-media/ministerial-media-releases/minister-urges-public-to-remain-prepared-with-ongoing-dry-conditions

Thwaites and Pine Island Glaciers of Antarctica and the Prospect of Rapid Sea Level Rise

Thomas Mortlock and Paul Somerville, Risk Frontiers.

The Thwaites and Pine Island glaciers in Antarctica are flowing toward the Amundsen Sea along a 250 km wide front.  Further inland, the glaciers widen into a 3 km thick mass of ice covering an area the size of Texas. Scientists are worried that the glaciers are going into irreversible retreat, meaning that no amount of climate change reversal could stop them from melting into the ocean.  If both of these glaciers were to melt completely, they would raise the sea level of the world’s oceans by 1 metre. What is worse, together these glaciers act as a plug holding back enough ice to raise the sea level of the world’s oceans by over 3 metres— an amount that would submerge large areas of the world’s coastal cities.

When in balance, the quantity of snow at the glacier’s head matches the ice lost to the ocean at its front through the calving of icebergs (top of Figure 2). But Thwaites is out of balance: it has sped up and is currently flowing at over 4 km per year. It is also thinning at a rate of almost 40 cm a year. According to Dr Anna Hogg of Leeds University, this thinning started after 2000, spreading inland at a rate of 10–12 km/year at its fastest. On Thwaites Glacier, she suggested, the increase in ice speed has coincided with a period of rapid ice thinning and grounding-line retreat, indicating that the observed changes may have been caused by warm ocean water reaching the glacier and accelerating ice melt. The grounding line refers to the zone where the glacier enters the sea and lifts up to form a buoyant platform of ice.

If warm ocean bottom-waters are able to get under this shelf (bottom of Figure 2), the grounding line can be eroded and the glacier forced backwards even if local air temperatures are sub-zero. In the case of Thwaites, a large portion of the ice stream sits below sea level, with the rock bed sloping back towards the continent.  This can produce marine ice sheet instability, in which the tall cliff that forms at the front of the glacier begins to calve in a runaway fashion.  This has not yet been seen in this part of Antarctica.

Figure 2.  Schematic diagram of stable and retreating glaciers.  Source: BBC

It is unclear how long it would take for the glaciers to melt completely – it may take decades or centuries.  Scientists have been looking back to the end of the last ice age, about 11,000 years ago, when global temperatures stood at roughly their current levels. There is growing evidence that the glaciers collapsed rapidly back then, flooding the world’s coastlines.  Unfortunately, as indicated above, the ocean floor on which the glaciers rest gets deeper toward the interior of Antarctica (Figure 2), so each new iceberg that breaks away exposes progressively taller and taller cliffs. When the cliffs become so tall that they cannot support their own weight, they may collapse catastrophically.

Scientists funded by the U.K. National Environment Research Council and the U.S. National Science Foundation are planning to go to the field to try to find out how quickly these glaciers might collapse.  They will monitor the way in which ocean water moves beneath the floating shelf, drill sediments from under and just in front of the glacier to find out what it did during past warming events on Earth, and use a submersible to explore the cavity under the buoyant sections of Thwaites.

Such massive ice sheet collapses have occurred in the past, but the climate effect of a huge freshwater input into the Southern Ocean, in the form of ice sheet melt, is far from certain. In the Northern Hemisphere, there is evidence to suggest that past periods of rapid ice sheet melt have actually led to periods of climate cooling, called Heinrich events, after the paleoclimatologist Hartmut Heinrich. Scientists have hypothesised that these freshwater dumps reduced ocean salinity enough to slow deepwater formation in the Arctic and the ocean circulation that relies on seawater density differences (in the form of salinity and temperature) to operate. Since the ‘thermohaline’ circulation plays an important role in transporting heat towards Europe, a slowdown would cause the North Atlantic to cool. Such deepwater formation also occurs around the rim of Antarctica.

The U.S. National Oceanic and Atmospheric Administration reports that, globally, sea level has risen about 6.6 cm above the 1993 average level, and it continues to rise by about 3 mm per year. Meltwater streaming into the Amundsen Sea from Antarctica’s Thwaites glacier accounted for about 4 percent of total global sea level rise in recent years — twice its contribution from the mid-1990s.

Glaciers like Thwaites matter a great deal to sea level because they are large masses of landlocked ice that hold back even larger masses of ice, keeping them from sliding into the sea. Landlocked ice changes sea level because when it melts, it introduces new water to the ocean. Sea ice, on the other hand, like the ice cap in the Arctic, can have major effects on climate when it melts, but it is basically water that is already in the ocean, and whether it is liquid or solid does not directly affect sea level around the world.

The current suite of projections of sea level rise are derived from a range of global climate models and a range of future carbon emission scenarios (Representative Concentration Pathways, RCPs) – thus inter-scenario and intra-model uncertainty is not insignificant. The range of uncertainty for global sea level rise to 2100 is largely shaped by the uncertain contributions of the Antarctic Ice Sheet and Greenland Ice Sheet, and thermal expansion of the oceans (Figure 3).

Figure 3.  Future probabilistic global sea-level projections for the 21st century under RCP2.6 (dark blue), RCP4.5 (light blue) and RCP8.5 (red) forcing scenarios. Source: IPCC (2014).

On the east coast of Australia, sea level rise to 2030 is expected to be on the order of 0.09–0.19 m, and between 0.22 and 0.88 m by 2090 (change relative to 1986–2005, taking the 95% confidence limits of RCPs 2.6, 4.5 and 8.5). A typical, convenient horizon for most coastal planning is to consider sea level rise of 0.9–1.0 m by 2100.

Over the past four years, Risk Frontiers has been developing a coastal risk visualisation tool, in association with the Office of Environment and Heritage, to help State Government entities and Local Governments visualise the impacts of sea level rise on assets and infrastructure over planning timeframes in NSW. Figure 4 shows an example of the expected seawater inundation around Newcastle with 0.5 to 1.5 m of sea level rise, using the tool. As can be seen, inundation even at this level impacts critical infrastructure and residential areas, with potentially significant costs to asset owners, insurers and the local community.

Figure 4. Potential seawater inundation as a result of sea level rise of between 0.5 and 1.5 m above the present-day high tide level in Newcastle, using the Risk Frontiers / OEH coastal risk visualisation tool.

However, there is great uncertainty in the standard IPCC projections related to the West Antarctic Ice Sheet (WAIS), of which Thwaites is a small part. Some studies (Bakker et al., 2017; Pollard et al., 2015) suggest that if the whole of the WAIS were to collapse, it could contribute a further 3 – 4 m to global sea levels (Figure 5).

Figure 5. Future sea-level projections including the very uncertain contribution of the WAIS. The red line shows the most extreme RCP scenario considered by the IPCC; the yellow and brown shaded areas demarcate different WAIS collapse scenarios, with deep uncertainty in between. Source: Bakker et al. (2017).

Obviously, these levels of sea level rise would be catastrophic for the ~80% of the Australian population that currently lives within the coastal zone, and would lie well beyond planning capabilities. While Government deliberates over how best to plan for sea level rise of 1 m by 2100, we should perhaps also be thinking about what provisions should be in place if sea level rise of 4 m or more were to occur. For the time being, all eyes are on Thwaites.

References

Bakker, A.M.R., et al. (2017). Sea-level projections representing the deeply uncertain contribution of the West Antarctic ice sheet. Scientific Reports, 7, 3880.

Pollard, D., et al. (2015). Potential Antarctic ice sheet retreat driven by hydrofracturing and ice cliff failure. Earth and Planetary Science Letters, 412, 112–121.

IPCC, 2014: Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, R.K. Pachauri and L.A. Meyer (eds.)]. IPCC, Geneva, Switzerland, 151 pp.

FS-ISAC 2018 Cybersecurity Trends

By Tahiry Rabehaja.  Email: tahiry.rabehaja@riskfrontiers.com.

2017 was not a good year for cyber security. Victims ranged from small businesses to corporate giants such as Equifax, Deloitte and Kmart, with the impacts of 'improved' ransomware such as WannaCry and NotPetya just two well-publicised examples. Such breaches emphasise that cybersecurity is not just a headache for IT departments but an issue warranting a top-down solution, starting with C-level executives. To this end, the Financial Services Information Sharing and Analysis Center (FS-ISAC) has recently published a report summarising the thoughts of more than 100 financial sector Chief Information Security Officers (CISOs) on the key priorities for improving digital security postures in 2018 (FS-ISAC, 2018). The survey shows most executives focused on improving their defensive strategies against cyber attacks.

Figure 1: Snapshot from the FS-ISAC report ranking the key priorities to improve cyber security postures in 2018.


[FS-ISAC is a non-profit global organisation providing a platform for sharing and analysing cyber and physical security information and intelligence. It currently has approximately 7,000 members from 39 countries. It was established by the financial services sector in response to the 1998 US Presidential Decision Directive 63.]

For more than a third (35%) of the executives, improving employees' awareness of digital threats tops the list. This comes as no surprise, given that employees have always been on the front line of defence against cyber attacks while remaining the weakest link. Indeed, most attacks against financial services companies exploit human weaknesses through social engineering, spear phishing, and account takeovers enabled by weak and reused passwords. In 2017, Verizon reported that 1 in 14 employees were opening attachments or links sent through phishing emails, and 1 in 4 were giving out account credentials or personal information (Verizon, 2017).
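
To put these rates in organisational terms, a quick expected-value calculation is illustrative. The staff count below is hypothetical, and the two Verizon rates are treated independently, which is a simplification:

# Expected phishing exposure for a hypothetical organisation,
# using the Verizon (2017) rates quoted above.
STAFF = 1400         # hypothetical headcount
P_OPEN = 1 / 14      # opened a phishing link or attachment
P_GAVE_INFO = 1 / 4  # gave out credentials or personal information

print(f"~{STAFF * P_OPEN:.0f} of {STAFF} staff expected to open a lure")
print(f"~{STAFF * P_GAVE_INFO:.0f} expected to give out credentials")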

Investment in modern, cyber-resilient infrastructure (25%) comes in as the runner-up. Such investment includes progressive upgrades of existing network defence hardware and software, as well as the creation of specialised departments to ensure digital information security.

Another recent study shows that subscription to threat intelligence, the emerging use of defence systems based on machine learning, and the strategic use of cyber analytics rank amongst the most cost-effective security investments (Accenture, 2017). The same study shows many companies over-investing in technologies that fail to deliver the desired cost-benefit ratios, including extensive application of advanced perimeter controls and incongruous use of data loss prevention measures such as full disk encryption. Efficient security programs should therefore be built around an optimal cost-benefit ratio, which can be achieved by prioritising the security of critical assets and related infrastructure; a toy illustration of this ranking logic follows Figure 2.

Figure 2: Snapshot from the Accenture report showing spending in security technology and the associated business benefit value.
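
The prioritisation logic above is simple to express: rank candidate investments by benefit per dollar rather than raw benefit. The sketch below uses invented figures purely to illustrate the ranking; none of the numbers are drawn from the Accenture data.

# Toy ranking of security investments by benefit per dollar.
# All figures are invented for illustration.
investments = {
    "threat intelligence":         {"cost": 0.5, "benefit": 2.0},
    "machine-learning defence":    {"cost": 1.0, "benefit": 3.0},
    "cyber analytics":             {"cost": 0.8, "benefit": 2.2},
    "advanced perimeter controls": {"cost": 2.5, "benefit": 1.5},
    "full disk encryption":        {"cost": 1.2, "benefit": 0.6},
}

ranked = sorted(investments.items(),
                key=lambda kv: kv[1]["benefit"] / kv[1]["cost"],
                reverse=True)
for name, v in ranked:
    print(f"{name:28s} benefit/cost = {v['benefit'] / v['cost']:.2f}")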

2018 will also see the long-awaited arrival of various breach notification laws. These include the General Data Protection Regulation coming into force in Europe, the Notifiable Data Breaches scheme that has just come into effect in Australia, and upcoming changes to China's Cybersecurity and Data Protection laws. This means that compliance, explicitly nominated by only 2% of the surveyed executives, will also play an important role in shaping digital security, especially for companies handling personally identifiable information.

The focus on defensive solutions (FS-ISAC, 2018) is disturbing. The report also investigates the impact of organisational hierarchy on reporting frequency, but says nothing about incident response. This may be because the executives interviewed were mainly from the financial industry. However, historical breaches show that response is just as important as defence: a resourceful hacker interested in a particular asset of a company is very likely to find a way in and extract or destroy the targeted information.

Targeted attacks are amongst the most costly and usually affect critical assets such as intellectual property. A successful attack on these key assets can have destructive impacts on the victim's business model itself. Expenses incurred during a cyber event span direct costs (forensic and remediation costs, customer protection, regulatory penalties) and collateral damage (loss of customers, damage to reputation and brand name, increased cost of capital). These costs can be considerably reduced through efficient incident response and mitigation policies, as well as cyber insurance.
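
A toy cost model makes the point about response and insurance; every number below is invented purely for illustration:

# Toy breach-cost model reflecting the categories above ($M).
direct = {"forensics and remediation": 4.0,
          "customer protection": 2.5,
          "regulatory penalties": 1.5}
collateral = {"customer churn": 6.0,
              "brand and reputation": 3.0,
              "higher cost of capital": 2.0}

gross = sum(direct.values()) + sum(collateral.values())

# Assumed mitigations: mature incident response trims collateral
# losses, while cyber insurance reimburses part of the direct costs.
RESPONSE_REDUCTION = 0.30  # assumed share of collateral losses avoided
INSURANCE_COVER = 0.50     # assumed share of direct costs reimbursed

net = (sum(direct.values()) * (1 - INSURANCE_COVER)
       + sum(collateral.values()) * (1 - RESPONSE_REDUCTION))
print(f"Gross: ${gross:.1f}M; net after response and insurance: ${net:.1f}M")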

The White House Council of Economic Advisers estimates the average cost of a breach to be as high as $330 million when an event negatively affects the market value of the victim (Council of Economic Advisers, 2018). For instance, Equifax's stock price dropped by more than 35% within 7 days of the disclosure of last year's massive data breach. The emergence of cyber insurance is anticipated to provide cover against some of these financial losses. Various vendors already provide cyber insurance products, and this market is expected to grow to over $7 billion within the next three years (PwC, 2015).

References

Accenture. (2017). Cost of Cybercrime Study. Retrieved from Accenture: https://www.accenture.com/au-en/insight-cost-of-cybercrime-2017

Council of Economic Advisers, The White House. (2018, February 16). The cost of malicious cyber activity to the US economy. Retrieved from https://www.whitehouse.gov/articles/cea-report-cost-malicious-cyber-activity-u-s-economy/

FS-ISAC. (2018, February 12). FS-ISAC Unveils 2018 Cybersecurity Trends According to Top Financial CISOs. Retrieved from FS-ISAC: https://www.fsisac.com/article/fs-isac-unveils-2018-cybersecurity-trends-according-top-financial-cisos

PwC. (2015). Insurance 2020 and beyond: Reaping the dividends of cyber resilience. Retrieved from https://www.pwc.com/gx/en/industries/financial-services/publications/insurance-2020-cyber.html

Verizon. (2017). Verizon Data Breach Investigation Report. Retrieved from Verizon: http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/


Why is Roman concrete more durable than modern concrete?

Jacob Evans, Risk Frontiers (jacob.evans@riskfrontiers.com)

Modern concrete is porous and degrades in contact with seawater. Seawater can seep into its pores and, when the concrete dries out, the salts crystallise. The crystallisation pressure of the salts produces stresses that can result in cracks and spalls. Other chemical processes, such as sulphate attack, lime leaching and alkali-aggregate expansion, also degrade modern concrete. Some submerged concrete structures may last only 10 years; meanwhile, 2000-year-old concrete constructed during the Roman Empire is still going strong (Figure 1). Why this is so is a question an international research team led by geologist Marie Jackson of the University of Utah sought to answer.

Figure 1: Erosion due to sea water on concrete pylons. Image: Brian Robinson.

The composition of Roman concrete has long been known: a mixture of volcanic ash, quicklime (calcium oxide) and volcanic rock. The science behind its resilience to seawater, however, remained unknown until recently. It is thought the Romans adopted volcanic material after observing ash from volcanic eruptions crystallise to form durable rock.

The research team discovered that while modern concrete is made to be inert, the Roman version interacts with the environment. When seawater interacts with the mixture, it forms the rare minerals aluminous tobermorite and phillipsite, which are believed to strengthen the material. This discovery could lead to the development of more resilient concrete for use in coastal environments.

Modern concrete is generally limestone mixed with other ingredients such as sandstone, ash, chalk, iron and clay. The mixture is designed to be inert and not interact with the environment. In coastal environments, building regulations govern the type of concrete used and the water-cement ratio, but the concrete is still porous: seawater can pass through the material, leading to corrosion and structural degradation.

As well as salt crystallisation, the process whereby salts drying out within the concrete build up pressure, other chemical reactions can affect the integrity of concrete. These include sulphate attack, lime leaching and alkali-aggregate expansion (Figure 2). Sulphate attack occurs when sulphates in the water react with the hydrated calcium aluminate within the concrete; this changes the microstructure and increases the volume within the concrete, resulting in physical stress and potential cracking. Lime leaching is the simple process of water passing through the concrete and dissolving out calcium hydroxide, which is formed by the action of water on calcium oxide. It is often seen as white patches or stalactites on the exterior of the concrete and reduces the concrete's strength. Alkali-aggregate expansion occurs when reactive aggregates, such as silica, react with alkalis in the cement paste, producing minerals that expand and crack the cement.
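
For reference, the hydration and leaching steps just described correspond to two standard textbook reactions (not drawn from the cited papers), written here in LaTeX notation:

\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} \quad \text{(quicklime hydrates to calcium hydroxide)}

\mathrm{Ca(OH)_2} \rightarrow \mathrm{Ca^{2+}} + 2\,\mathrm{OH^{-}} \quad \text{(lime leaching: calcium hydroxide dissolves in percolating water)}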

Figure 2: A 2000 year old Roman jetty. Image: Art853.

Roman concrete, however, does not appear susceptible to any of these processes. The research team found that seawater, the kryptonite of modern concrete, was the magic ingredient responsible for the structural stability of the Roman mixture. The Roman concrete samples were found to contain rare aluminous tobermorite and phillipsite crystals. It is believed that with long-term exposure to seawater, tobermorite crystallises from the phillipsite as the mixture becomes more alkaline. This crystallisation is thought to strengthen the compound, as tobermorite has long plate-like crystals that allow the material to bend rather than crack under stress. Pliny the Elder in the first century CE exclaimed "that as soon as it [concrete] comes into contact with the waves of the sea and is submerged [it] becomes a single stone mass (fierem unum lapidem), impregnable to the waves and every day stronger."

Figure 3: The research group led by Marie Jackson obtaining samples from the Portus Cosanus pier in Orbetello, Italy. Image: Marie Jackson.

To arrive at these conclusions, Jackson et al. (2017) performed scanning electron microscopy (SEM), micro X-ray diffraction (XRD), Raman spectroscopy and electron probe microanalysis at the Advanced Light Source at Lawrence Berkeley National Laboratory. Samples were obtained by drilling into Roman harbour structures and were compared with volcanic rock (Figure 3). The combination of these techniques, in conjunction with in situ analysis, provided evidence of crystallised aluminous tobermorite and phillipsite within Roman marine concrete (Figure 4). These crystals formed long after the original setting of the concrete. The finding was surprising, as tobermorite typically forms only at temperatures above 80 °C, with just one known occurrence of it forming at ambient temperature, in the Surtsey volcano.

Figure 4: SEM image showing the presence of aluminous tobermorite and phillipsite within Roman marine concrete. Image from Jackson et al. (2017), Figure 6.

Following this discovery, there is now a desire to develop a concrete mixture that replicates ancient Roman marine concrete. It could make concrete construction more environmentally friendly, and would provide a mixture resilient to seawater and well suited to coastal defence.

References

Jackson, M.D., et al. (2017). Phillipsite and Al-tobermorite mineral cements produced through low-temperature water-rock reactions in Roman marine concrete. American Mineralogist: Journal of Earth and Planetary Materials, 102(7), pp. 1435-1450.

Jackson, M.D., et al. (2013). Unlocking the secrets of Al-tobermorite in Roman seawater concrete. American Mineralogist, 98(10), pp. 1669-1687.

Suprenant, B.A. (1991). Designing concrete for exposure to seawater. Concrete Construction Magazine, pp. 814-816.