Building evidence for risk-based insurance

Professor John McAneney and Andrew Gissing were invited to contribute to the 2016 World Disaster Report by the International Federation of Red Cross and Red Crescent Societies. Their contribution is provided below.


Improving societal resilience in the face of the growing cost of disasters triggered by natural hazards, and doing so in a fair and affordable manner, is an increasing challenge. Many governments are looking to insurance as a partial solution to this problem.

Insurance is a contract between a policy-holder and a company that guarantees compensation for a specified loss in return for the payment of a premium. Conventional insurance works by pooling risks, an approach that suits independent losses such as car accidents and house fires but not the spatially correlated losses that arise from disasters caused by natural hazards. It is the global reinsurance market that ultimately accepts much of this catastrophe risk (Roche et al., 2010). Relatively new financial instruments such as catastrophe bonds and insurance-linked securities are also being employed to transfer some catastrophe risks to the capital markets.

Insurance is part of the essential infrastructure of a developed economy but it would be a mistake to see it as an instrument of social policy. It cannot in itself prevent flooding or earthquakes. On the other hand, insurance can promote socially desirable outcomes by helping policy-holders fund their post-disaster recovery more effectively. The greater the proportion of home-owners and businesses having insurance against naturally-triggered disasters, the more resilient the community will be.

Insurers can also help promote risk awareness among property owners and motivate them, as well as communities and governments, to take mitigation actions that reduce losses (McAneney et al., 2016). The mechanism for doing this is insurance premiums that properly reflect risk. Insurance is not the only means of providing transparency on the cost of risk, but private insurers are the only ones with a financial incentive to acknowledge such costs. Moreover, they are the only entities that can reward policy-holders when risks are reduced (Kunreuther, 2015; McAneney et al., 2016).

It is in the interest of communities to have a viable private sector insurance market and, arguably, governments should only become involved in the case of market failure (Roche et al., 2010). Of the government-authorized catastrophe insurance schemes examined by McAneney et al. (2016), many are actuarially unsound and end up creating a continuing liability for governments; by failing to price individual risks correctly, they also encourage property development in risky locations and provide no incentive to retrofit older properties at high risk. In less-developed insurance markets, some government involvement may encourage the uptake of insurance (e.g., Tinh and Hung, 2014).

How do we assemble the evidence to support risk-reflective insurance premiums? New technologies such as catastrophe loss modelling, satellite imagery and improved geospatial tools are proving helpful in allowing insurers to better understand their exposure to natural hazard risks. While these technologies are increasingly available, in some countries the normal outcomes of such data gathering and analysis – insurance premiums – are constrained politically. This is the case in the United States of America where there has been a tendency to keep premiums low across the board and to have policy-holders in low-risk areas cross-subsidizing those at higher risk (Czajkowski, 2012). Such practices do little to constrain poor land-use planning decisions that lie at the heart of many disasters triggered by natural hazards (e.g., Pielke Jr et al., 2008; Crompton and McAneney, 2008). McAneney et al. (2010) show that most of the homes destroyed in the 2009 Black Saturday fires in Australia were located very close to fire-prone bushland with some 25 per cent actually constructed within the bushland. Effectively these homes were part of the fuel load and their destruction was unsurprising.

One way to build a wider evidence base for collective action to support risk-based insurance policies is for governments to share information on the risks of disasters related to natural hazards with both insurers and the community. This information might include hazard footprints as well as the likely cost of the damage (The Wharton School, 2016). In Australia, governments have been reluctant to do this. In some developing insurance markets, home-owners or farmers may have a better understanding of the risks than do insurers, who will price this uncertainty into premiums. Unrestricted access to hazard data for all parties would encourage fairer insurance pricing.

Gathering hazard data to build evidence for risk-reflective premiums depends on the type of hazard. For example, the distance of buildings from fire-prone bushland and the local likelihood of flooding are key determinants of vulnerability to these location-specific hazards. For other hazards, the annual likelihood of exceeding damaging levels of seismic ground-shaking, wind speed or volcanic ash load are the important metrics, as are a property's distance from the sea and its elevation when it comes to coastal hazards such as tsunami and storm surge.
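
To make these exceedance metrics concrete, the short Python sketch below (our illustration, not from the report) converts a return period into annual and multi-year exceedance probabilities, assuming exceedances arrive as a Poisson process.

```python
import math

def annual_exceedance_probability(return_period_years: float) -> float:
    """Probability of at least one exceedance in any one year, assuming
    exceedances arrive as a Poisson process with rate 1/return period."""
    return 1.0 - math.exp(-1.0 / return_period_years)

def exceedance_probability(return_period_years: float, horizon_years: float) -> float:
    """Probability of at least one exceedance over a multi-year horizon."""
    return 1.0 - math.exp(-horizon_years / return_period_years)

# Example: a "1-in-500-year" level of ground shaking or wind speed
print(f"Annual probability:  {annual_exceedance_probability(500):.4f}")  # ~0.002
print(f"Over a 50-year life: {exceedance_probability(500, 50):.3f}")     # ~0.095
```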

When this risk evidence is established and becomes reflected in national construction standards, improvements in resilience follow. For example, improvements in construction standards introduced in Australia after the destruction of Darwin by Tropical Cyclone Tracy in 1974 have been credited with reducing subsequent losses from tropical cyclones by some 67 per cent (McAneney et al., 2007).

The availability of such data may result in reductions in some insurance premiums, an increase for others, or, in extreme cases, the withdrawal of insurers from areas where the risk is considered to be too high. The latter outcome will send a strong signal to communities and government for investments in mitigation; subsidized insurance is not the answer. Governments should also ensure that humanitarian aid provided after disasters is targeted effectively, in order to avoid creating disincentives for people to purchase insurance.

Lastly, and to return to the issue of poor land-use planning, it is worth remembering that the 1945 thesis of the famous American geographer Gilbert White, that “Floods are an act of God, but flood losses are largely an act of man”, still rings true and applies to a wider range of disasters triggered by natural hazards than just floods.

A full copy of the report can be found at http://www.ifrc.org/Global/Documents/Secretariat/201610/WDR%202016-FINAL_web.pdf.

 

The June 2016 Australian East Coast Low: Importance of Wave Direction for Coastal Erosion Assessment

by Thomas R. Mortlock, Ian D. Goodwin, John K. McAneney and Kevin Roche.

In June 2016, an unusual East Coast Low storm affected some 2000 km of the eastern seaboard of Australia bringing heavy rain, strong winds and powerful wave conditions. While wave heights offshore of Sydney were not exceptional, nearshore wave conditions were such that beaches experienced some of the worst erosion in 40 years. Hydrodynamic modelling of wave and current behaviour as well as contemporaneous sand transport shows the east to north-east storm wave direction to be the major determinant of erosion magnitude. This arises because of reduced energy attenuation across the continental shelf and the focussing of wave energy on coastal sections not equilibrated with such wave exposure under the prevailing south-easterly wave climate. Narrabeen–Collaroy, a well-known erosion hot spot on Sydney’s Northern Beaches, is shown to be particularly vulnerable to storms from this direction because the destructive erosion potential is amplified by the influence of the local embayment geometry. We demonstrate the magnified erosion response that occurs when there is bi-directionality between an extreme wave event and preceding modal conditions and the importance of considering wave direction in extreme value analyses.
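
The abstract's point about wave direction can be illustrated with a small, purely hypothetical sketch: partition an annual-maxima wave record by storm wave direction and fit a simple Gumbel distribution (method of moments) to each sector. The data and sector definitions below are invented for illustration and are not the authors' hydrodynamic model or results.

```python
import math, statistics

def gumbel_fit(maxima):
    """Method-of-moments Gumbel parameters (location, scale)."""
    scale = statistics.stdev(maxima) * math.sqrt(6.0) / math.pi
    loc = statistics.mean(maxima) - 0.5772 * scale
    return loc, scale

def return_level(maxima, return_period_years):
    """Gumbel return level for the given return period."""
    loc, scale = gumbel_fit(maxima)
    p = 1.0 - 1.0 / return_period_years
    return loc - scale * math.log(-math.log(p))

# Hypothetical annual maximum significant wave heights (m) by direction sector
southeast = [5.8, 6.1, 6.4, 5.9, 6.7, 6.2, 6.0, 6.5, 5.7, 6.3]
east_northeast = [4.9, 5.2, 6.9, 5.0, 5.4, 7.2, 5.1, 5.3, 6.8, 5.2]

for name, sector in [("SE", southeast), ("E-NE", east_northeast)]:
    print(f"{name}: 100-year Hs ≈ {return_level(sector, 100):.1f} m")
```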

Click on the link to read the entire article: http://www.mdpi.com/2073-4441/9/2/121

 

Crowds are wise enough to know when other people will get it wrong

Unexpected yet popular answers often turn out to be correct.

This article by Cathleen O’Grady was published by Ars Technica on 29 January 2017: https://arstechnica.com/science/2017/01/to-improve-the-wisdom-of-the-crowd-ask-people-to-predict-vote-outcome/. Cathleen O’Grady is Ars Technica’s contributing science reporter. She has a background in cognitive science and evolutionary linguistics.

Image: Flickr user Hsing Wei.

The “wisdom of the crowd” is a simple approach that can be surprisingly effective at finding the correct answer to certain problems. For instance, if a large group of people is asked to estimate the number of jelly beans in a jar, the average of all the answers gets closer to the truth than individual responses. The algorithm is applicable to limited types of questions, but there’s evidence of real-world usefulness, like improving medical diagnoses.

This process has some pretty obvious limits, but a team of researchers at MIT and Princeton published a paper in Nature [Nature, 2016. DOI: 10.1038/nature21054] this week suggesting a way to make it more reliable: look for an answer that comes up more often than people think it will, and it’s likely to be correct.

As part of their paper, Dražen Prelec and his colleagues used a survey on capital cities in the US. Each question was a simple True/False statement with the format “Philadelphia is the capital of Pennsylvania.” The city listed was always the most populous city in the state, but that’s not necessarily the capital. In the case of Pennsylvania, the capital is actually Harrisburg, but plenty of people don’t know that.

The wisdom of crowds approach fails this question. The problem is that questions sometimes rely on people having unusual or otherwise specialized knowledge that isn’t shared by a majority of people. Because most people don’t have that knowledge, the crowd’s answer will be resoundingly wrong.

Previous tweaks have tried to correct for this problem by taking confidence into account. People are asked how confident they are in their answers, and higher weight is given to more confident answers. However, this only works if people are aware that they don’t know something—and this is often strikingly not the case.

In the case of the Philadelphia question, people who incorrectly answered “True” were about as confident in their answers as people who correctly answered “False,” so confidence ratings didn’t improve the algorithm. But when people were asked to predict what they thought the overall answer would be, there was a difference between the two groups: people who answered “True” thought most people would agree with them, because they didn’t know they were wrong. The people who answered “False,” by contrast, knew they had unique knowledge and correctly assumed that most people would answer incorrectly, predicting that most people would answer “True.”

Because of this, the group at large predicted that “True” would be the overwhelmingly popular answer. And it was—but not to the extent that they predicted. More people knew it was a trick question than the crowd expected. That discrepancy is what allows the approach to be tweaked. The new version looks at how people predict the population will vote, looks for the answer that people gave more often than those predictions would suggest, and then picks that “surprisingly popular” answer as the correct one.

To go back to our example: most people will think others will pick Philadelphia, while very few will expect others to name Harrisburg. But, because Harrisburg is the right answer, it’ll come up much more often than the predictions would suggest.
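
As a rough illustration of this rule (a sketch of ours, not the authors' code), the following Python snippet applies the "surprisingly popular" selection to a binary question using hypothetical Philadelphia/Harrisburg-style responses.

```python
from typing import List

def surprisingly_popular(votes: List[str],
                         predicted_share_true: List[float]) -> str:
    """votes: each respondent's own answer, "True" or "False".
    predicted_share_true: each respondent's prediction of the fraction
    of respondents who will answer "True" (between 0 and 1)."""
    actual_true = votes.count("True") / len(votes)
    predicted_true = sum(predicted_share_true) / len(predicted_share_true)
    # "True" is surprisingly popular if its actual share beats its predicted
    # share; otherwise "False" is the surprisingly popular answer.
    return "True" if (actual_true - predicted_true) > 0 else "False"

# Hypothetical data: 70% answer "True" (wrongly), but the crowd predicts
# ~80% will answer "True", so "False" exceeds its predicted share.
votes = ["True"] * 70 + ["False"] * 30
predictions = [0.85] * 70 + [0.70] * 30   # mean prediction ≈ 0.805
print(surprisingly_popular(votes, predictions))   # -> "False"
```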

Prelec and his colleagues constructed a statistical theorem suggesting that this process would improve matters and then tested it on a number of real-world examples. In addition to the state capitals survey, they used a general knowledge survey, a questionnaire asking art professionals and laypeople to assess the prices of certain artworks, and a survey asking dermatologists to assess whether skin lesions were malignant or benign.

Across the aggregated results from all of these surveys, the “surprisingly popular” (SP) algorithm had 21.3 percent fewer errors than a standard “popular vote” approach. In 290 of the 490 questions across all the surveys, they also assessed people’s confidence in their answers. The SP algorithm did better here, too: it had 24.2 percent fewer errors than an algorithm that chose confidence-weighted answers.

It’s easy to misinterpret the “wisdom of crowds” approach as suggesting that any answer reached by a large group of people will be the correct one. That’s not the case; it can pretty easily be undermined by social influences, like being told how other people had answered. These failings are a problem, because it could be a really useful tool, as demonstrated by its hypothetical uses in medical settings.

Improvements like these, then, contribute to sharpening the tool to the point where it could have robust real-world applications. “It would be hard to trust a method if it fails with ideal respondents on simple problems like [the capital of Pennsylvania],” the authors write. Fixing it so that it gets simple questions like these right is a big step in the right direction.

 

Estimating building vulnerability to volcanic ash fall for insurance and other purposes

This paper by R. J. Blong, P. Grasso, S. F. Jenkins, C. R. Magill, T. M. Wilson, K. McMullan and J. Kandlbauer was published on 26th January 2017 in the Journal of Applied Volcanology.

Abstract:

Volcanic ash falls are one of the most widespread and frequent volcanic hazards, and are produced by all explosive volcanic eruptions. Ash falls are arguably the most disruptive volcanic hazard because of their ability to affect large areas and to impact a wide range of assets, even at relatively small thicknesses. From an insurance perspective, the most valuable insured assets are buildings. Ash fall vulnerability curves or functions, which relate the magnitude of ash fall to likely damage, are the most developed for buildings, although there have been important recent advances for agriculture and infrastructure.  Read more
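
As a purely illustrative sketch of what a building vulnerability function looks like (not taken from the paper), the snippet below maps ash-fall load to a mean damage ratio using a lognormal-shaped curve; the median load and dispersion are hypothetical placeholders rather than values from Blong et al.

```python
from math import log, sqrt, erf

def lognormal_cdf(x: float, median: float, beta: float) -> float:
    """CDF of a lognormal distribution with given median and log-space
    standard deviation (beta)."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + erf(log(x / median) / (beta * sqrt(2.0))))

def mean_damage_ratio(ash_load_kpa: float,
                      median_kpa: float = 3.0,   # hypothetical parameter
                      beta: float = 0.6) -> float:  # hypothetical parameter
    """Mean damage ratio (0 = no damage, 1 = total loss) for a given ash
    load in kPa, using a lognormal-shaped vulnerability curve."""
    return lognormal_cdf(ash_load_kpa, median_kpa, beta)

for load in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"{load:4.1f} kPa -> damage ratio {mean_damage_ratio(load):.2f}")
```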

Scientists expect sand flow on East Coast to slow

The following news pieces, from various sources, were prompted by a paper published late last year in the Journal of Geophysical Research: Oceans, titled “Tropical and extratropical-origin storm wave types and their influence on the East Australian longshore sand transport system under a changing climate”, by Ian Goodwin, Thomas Mortlock and Stuart Browning. Thomas Mortlock is a member of the Risk Frontiers team, and Ian Goodwin and Stuart Browning are members of the Marine Climate Risk Group at Macquarie University. Click here to read the entire article.

Aerial view of Byron Bay. Source: Swellnet Analysis

Scientists expect sand flow on East Coast to slow. Swellnet Analysis. https://www.swellnet.com/news/swellnet-analysis/2016/07/27/scientists-expect-sand-flow-east-coast-slow

Why Qld beaches will lose their sand to NSW. The Chronicle. http://www.thechronicle.com.au/news/why-qld-beaches-will-lose-their-sand-nsw/3084034/#/0

Gold Coast at threat of severe erosion and property damage, research shows. Gold Coast Bulletin. http://www.goldcoastbulletin.com.au/lifestyle/beaches-and-fishing/gold-coast-at-threat-of-severe-erosion-and-property-damage-research-shows/news-story/46326698783e430830f4e668b0086191

Solving the Puzzle of Hurricane History

This article was posted on the NOAA website on 11 Feb 2016.

“If you want to understand today, you have to search yesterday.” ~ Pearl S. Buck

One of the lesser-known but important functions of the NHC [National Hurricane Center, Miami, Florida] is to maintain a historical hurricane database that supports a wide variety of uses in the research community, private sector, and the general public.  This database, known as HURDAT (short for HURricane DATabase), documents the life cycle of each known tropical or subtropical cyclone.  In the Atlantic basin, this dataset extends back to 1851; in the eastern North Pacific, the records start in 1949.  HURDAT includes 6-hourly estimates of position, intensity and cyclone type (i.e., whether the system was tropical, subtropical, or extratropical), and in recent years also includes estimates of cyclone size.  Currently, after each hurricane season ends, a post-analysis of the season’s cyclones is conducted by NHC, and the results are added to the database.

The Atlantic dataset was created in the mid-1960s, originally in support of the space program, to study the climatological impacts of tropical cyclones at the Kennedy Space Center.  It became obvious a couple of decades ago, however, that HURDAT needed to be revised because it was incomplete, contained significant errors, or did not reflect the latest scientific understanding regarding the interpretation of past data.  Charlie Neumann, a former NHC employee, documented many of these problems and obtained a grant to address them under a program eventually called the Atlantic Hurricane Database Re-analysis Project.  Chris Landsea, then employed by the NOAA Hurricane Research Division (HRD) and now the Science and Operations Officer at the NHC, has served as the lead scientist and program manager of the Re-analysis Project since the late 1990s.
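
For readers who want to work with the database directly, the sketch below (our code, not NHC's) reads a HURDAT2-style best-track file, assuming the comma-delimited layout documented by NHC: a header line per cyclone followed by 6-hourly rows of date, time, record identifier, status, latitude, longitude, maximum wind and minimum pressure.

```python
def read_hurdat2(path):
    """Return a dict mapping 'cyclone id + name' to a list of 6-hourly fixes,
    assuming a HURDAT2-like comma-delimited layout."""
    storms = {}
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    i = 0
    while i < len(lines):
        # Header line: identifier, name, number of data rows
        storm_id, name, n_rows = [s.strip() for s in lines[i].split(",")[:3]]
        fixes = []
        for row in lines[i + 1 : i + 1 + int(n_rows)]:
            parts = [s.strip() for s in row.split(",")]
            lat = float(parts[4][:-1]) * (1 if parts[4].endswith("N") else -1)
            lon = float(parts[5][:-1]) * (-1 if parts[5].endswith("W") else 1)
            fixes.append({
                "date": parts[0], "time": parts[1], "status": parts[3],
                "lat": lat, "lon": lon,
                "max_wind_kt": int(parts[6]), "min_pressure_mb": int(parts[7]),
            })
        storms[f"{storm_id} {name}"] = fixes
        i += 1 + int(n_rows)
    return storms

# Hypothetical usage: keep only systems that reached hurricane status ('HU')
# storms = read_hurdat2("hurdat2.txt")
# hurricanes = {k: v for k, v in storms.items()
#               if any(fix["status"] == "HU" for fix in v)}
```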

In response to the re-analysis effort, NHC established the Best Track Change Committee (BTCC) in 1999 to review proposed changes to the HURDAT (whether originating from the Re-analysis Project or elsewhere) to ensure a scientifically sound tropical cyclone database.  The committee currently consists of six NOAA scientists, four of whom work for the NHC and two who do not (currently, one is from HRD and the other is from the Weather Prediction Center).

Over the past two decades, Landsea, researchers Andrew Hagen and Sandy Delgado, and some local meteorology students have systematically searched for and compiled any available data related to each known storm in past hurricane seasons.  This compilation also includes systems not in the HURDAT that could potentially be classified as tropical cyclones.  The data are carefully examined using standardized analysis techniques, and a best track is developed for each system, many of which would be different from the existing tracks in the original dataset.  Typically, a season’s worth of proposed revised or new tracks is submitted for review by the BTCC.  Fig. 1 provides an example set of data that helped the BTCC identify a previously unknown tropical storm in 1955.

 

Figure 1. Surface plot of data from 1200 UTC 26 Sep 1955, showing a previously unknown tropical storm.

The BTCC members review the suggested changes submitted by the Re-analysis Project, noting areas of agreement and proposed changes requiring additional data or clarification. The committee chairman, Dr. Jack Beven, then assembles the comments into a formal reply from the BTCC to the Re-analysis Project. Occasionally, the committee presents its own analysis, along with any relevant documentation, to help Landsea and his group of re-analyzers account for the differing interpretation.  The vast majority of the suggested changes to HURDAT are accepted by the BTCC.  In cases where the proposed changes are not accepted, the BTCC and members of the Re-analysis Project attempt to resolve any disagreements, with the BTCC having the final say.

In the early days of the Re-analysis Project, the amount of data available for any given tropical cyclone, or even a single season, was quite small, and so was the number of suggested changes.  This allowed the re-analysis of HURDAT to progress relatively quickly.  However, since the project reached the aircraft reconnaissance era (post-1944), the amount of data and the corresponding complexity of the analyses have increased rapidly, which has slowed the project’s progress during the last couple of years.

The BTCC’s approved changes have been significant. On average, the BTCC has approved the addition of one to two new storms per season.  One of the most highly visible changes was made 14 years ago, when the committee approved Hurricane Andrew’s upgrade from a category 4 to a category 5 hurricane.  This decision was made on the basis of (then) new research regarding the relationship between flight-level and surface winds from data gathered by reconnaissance aircraft using dropsondes.

Figure 2 shows the revisions made to the best tracks of the 1936 hurricane season, and gives a flavor of the type, significance, and number of changes being made as part of the re-analysis.  More recent results from the BTCC include the re-analysis of the 1938 New England hurricane, which reaffirmed its major hurricane status in New England from a careful analysis of surface observations.  Hurricane Diane in 1955, which brought tremendous destruction to parts of the Mid-Atlantic states due to its flooding rains, was judged after re-analysis to have been a tropical storm at landfall.   Also of note is the re-analysis of Hurricane Camille in 1969, one of three category 5 hurricanes to have struck the United States in the historical record.  The re-analysis confirmed that Camille was indeed a category 5 hurricane, but revealed fluctuations in its intensity prior to its landfall in Mississippi that were not previously documented.

The most recent activity of the BTCC was an examination of the landfall of the Great Manzanillo Hurricane of 1959.  It was originally designated as a category 5 hurricane landfall in HURDAT and was the strongest landfalling hurricane on record for the Pacific coast of Mexico. A re-analysis of ship and previously undiscovered land data, however, revealed that the landfall intensity was significantly lower (140 mph).  Thus, 2015’s Hurricane Patricia is now the strongest landfalling hurricane on record for the Pacific coast of Mexico, with an intensity of 150 mph.

Figure 2. Revisions made to the best tracks of the 1936 hurricane season

The BTCC is currently examining data from the late 1950s and hopes to have the 1956-1960 re-analysis released before the next hurricane season.  This analysis will include fresh looks at Hurricane Audrey in 1957 and Hurricane Donna in 1960, both of which were classified as category 4 hurricane landfalls in the United States.   As the re-analysis progresses into the 1960s, the committee will tackle the tricky issue of how to incorporate satellite imagery, given its irregular frequency and quality during that decade. The long-term plan is to extend the re-analysis to about the year 2000, when current operational practices for estimating tropical cyclone intensity became established using GPS dropsonde data and flight-level wind reduction techniques.

https://noaanhc.wordpress.com/2016/02/11/solving-the-jigsaw-puzzle-of-hurricane-history/

 

Stationarity of major flood frequencies and heights on the Ba River, Fiji, over a 122-year record

Paper by John McAneney, Robin van den Honert and Stephen Yeo, published in the International Journal of Climatology.

ABSTRACT: The economic impact of natural disasters on developing economies can be severe with the recovery diverting scarce funds that might otherwise be targeted at development projects and stimulating the need for international aid. In view of the likely sensitivity of low-lying Pacific Islands to anticipated changes in climate, a 122-year record of major flooding depths at the Rarawai Sugar Mill on the Ba River in the northwest of the Fijian Island of Viti Levu is analysed. Reconstructed largely from archived correspondence of the Colonial Sugar Refining Company, the time series comprises simple measurements of height above the Mill floor. It exhibits no statistically significant trends in either frequency or flood heights, once the latter have been adjusted for average relative sea-level rise. This is despite persistent warming of air temperatures as characterized in other studies. There is a strong dependence of frequency (but not magnitude) upon El Niño-Southern Oscillation (ENSO) phase, with many more floods in La Niña phases. The analysis of this long-term data series illustrates the difficulty of detecting a global climate change signal from hazard data, even given a consistent measurement methodology (cf HURDAT2 record of North Atlantic hurricanes) and warns of the strong dependence of any statistical significance upon choices of start and end dates of the analysis.
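
To illustrate the kind of trend testing involved (this is our sketch, not the authors' analysis), the snippet below applies a Mann-Kendall test, a standard non-parametric check for a monotonic trend, to a hypothetical 122-value flood-height series.

```python
import math, random

def mann_kendall(series):
    """Return (S statistic, two-sided p-value) under the no-ties
    normal approximation."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, p

random.seed(1)
heights = [random.gauss(3.0, 1.0) for _ in range(122)]  # hypothetical metres
s, p = mann_kendall(heights)
print(f"S = {s}, p = {p:.3f}")  # p >> 0.05 here, i.e. no significant trend
```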

Click here to read the entire paper.

Earthquake Scenario, Melbourne, Mw 5.5, 6.0 & 7.0

Report by Dr Valentina Koschatzky, Dr James O’Brien and Prof. Paul Somerville for the Bushfire and Natural Hazards CRC.

Despite its low seismic activity, Australia is more vulnerable to earthquakes than one would expect, owing to the concentration of its population and a large stock of buildings that are structurally unable to withstand even moderate seismic shaking. This was demonstrated by the 1989 M5.6 Newcastle earthquake, one of the costliest natural disasters in Australia despite its low magnitude. One question elicited by these circumstances is: what would happen if one of Australia’s main cities were hit by an earthquake similar to the Newcastle earthquake? An example of a near miss is the 1954 M5.6 Adelaide earthquake, whose epicentre, far from developed areas at the time, would lie in densely developed areas were it to occur today. Providing realistic estimates for natural disaster scenarios is essential for emergency managers, and a systematic approach to developing such scenarios can reveal blind spots and vulnerabilities in planning. Following the Adelaide scenario delivered in 2015, we now examine a series of realistic earthquake disaster scenarios for the city of Melbourne.

Click here to read the entire report.

Australia’s unique approach to understanding natural disaster risks

Article by Kevin Roche, published in Asia Pacific Fire, January 5, 2017.

Five of Australia’s six most costly natural hazard events have come from different perils: a tropical cyclone, an earthquake, a flood, a bushfire and a convective storm. Over the last 20 years, a unique approach to understanding these risks has developed in Australia through a close relationship between the insurance and academic sectors. In doing so, Australia has been at the cutting edge of applying advances in technology and science to the benefit of the broader community. Here we explore a little of this history and explain how it has helped communities and emergency services better manage the risks they face.

Click here to read entire article.