Building evidence for risk-based insurance

Professor John McAneney and Andrew Gissing were invited to contribute to the 2016 World Disaster Report by the International Federation of Red Cross and Red Crescent Societies. Their contribution is provided below.

Improving societal resilience in the face of the growing cost of disasters triggered by natural hazards, and doing so in a fair and affordable manner, is an increasing challenge. Many governments are looking to insurance as a partial solution to this problem.

Insurance is a contract between a policy-holder and a company that guarantees compensation for a specified loss in return for the payment of a premium. Conventional insurance works by pooling risks, an approach that works well for car accidents and house fires but not for the spatially correlated losses from disasters caused by natural hazards. It is the global reinsurance market that ultimately accepts much of this catastrophe risk (Roche et al., 2010). Relatively new financial instruments such as Catastrophe Bonds and Insurance-Linked Securities are also being employed to transfer some catastrophe risks to the capital markets.

Insurance is part of the essential infrastructure of a developed economy but it would be a mistake to see it as an instrument of social policy. It cannot in itself prevent flooding or earthquakes. On the other hand, insurance can promote socially desirable outcomes by helping policy-holders fund their post-disaster recovery more effectively. The greater the proportion of home-owners and businesses having insurance against naturally-triggered disasters, the more resilient the community will be.

Insurers can also help promote risk awareness by property owners and motivate them and communities, as well as governments, to take mitigation actions to reduce damaging losses (McAneney et al., 2016). The mechanism for doing this is by way of insurance premiums that properly reflect risk. Insurance is not the only means of providing transparency on the cost of risk, but private insurers are the only ones with a financial incentive to acknowledge such costs. Moreover, they are the only entities that can reward policy-holders when risks are reduced (Kunreuther, 2015; McAneney et al., 2016).

It is in the interest of communities to have a viable private sector insurance market and, arguably, governments should only become involved in the case of market failure (Roche et al., 2010). Of those government-authorized catastrophe insurance schemes examined by McAneney et al. (2016), many are actuarially unsound and end up creating a continuing liability for governments, and/or, in not pricing individual risks correctly, they encourage property development in risky locations while failing to provide incentives for retrofitting older properties at high risk. In less-developed insurance markets some government involvement may encourage the uptake of insurance (e.g., Tinh and Hung, 2014).

How do we assemble the evidence to support risk-reflective insurance premiums? New technologies such as catastrophe loss modelling, satellite imagery and improved geospatial tools are proving helpful in allowing insurers to better understand their exposure to natural hazard risks. While these technologies are increasingly available, in some countries the normal outcomes of such data gathering and analysis – insurance premiums – are constrained politically. This is the case in the United States of America where there has been a tendency to keep premiums low across the board and to have policy-holders in low-risk areas cross-subsidizing those at higher risk (Czajkowski, 2012). Such practices do little to constrain poor land-use planning decisions that lie at the heart of many disasters triggered by natural hazards (e.g., Pielke Jr et al., 2008; Crompton and McAneney, 2008). McAneney et al. (2010) show that most of the homes destroyed in the 2009 Black Saturday fires in Australia were located very close to fire-prone bushland with some 25 per cent actually constructed within the bushland. Effectively these homes were part of the fuel load and their destruction was unsurprising.

One way to build a wider evidence base for collective action to support risk-based insurance policies is for governments to share information on risks of disasters related to natural hazards, with both insurers and the community. This information might comprise hazard footprints as well as the likely cost of the damage (The Wharton School, 2016). In Australia, governments have been reluctant to do this. In some developing insurance markets, home-owners or farmers may have a better understanding of the risks than do insurers, who will price this uncertainty into premiums. Unrestricted access to hazard data for all parties would encourage fairer insurance pricing.

Gathering hazard data for building evidence for risk-reflective premiums depends on the type of hazard. For example, the distance of buildings from fire-prone bushland or the local likelihood of flooding are key determinants of vulnerability to these location-specific hazards. In other areas, or within the same areas in some cases, the annual likelihood of exceeding damaging levels of seismic ground-shaking, wind speed or volcanic ash are important metrics, as are distance from the sea and the elevation of a property when it comes to coastal hazards like tsunami and storm surge.

When this risk evidence is established and becomes reflected in national construction standards, improvements in resilience follow. For example, improvements in construction standards introduced in Australia after the destruction of Darwin by Tropical Cyclone Tracy in 1974 have been credited with reducing subsequent losses from tropical cyclones by some 67 per cent (McAneney et al., 2007).

The availability of such data may result in reductions in some insurance premiums, an increase for others, or, in extreme cases, the withdrawal of insurers from areas where the risk is considered to be too high. The latter outcome will send a strong signal to communities and government for investments in mitigation; subsidized insurance is not the answer. Governments should also ensure that humanitarian aid provided after disasters is targeted effectively, in order to avoid creating disincentives for people to purchase insurance.

Lastly, and to return to the issue of poor land-use planning, it is worth remembering that the 1945 thesis of the famous American geographer, Gilbert White, that “Floods are an act of God, but flood losses are largely an act of man”, still rings true, and it applies to a wider range of disasters triggered by natural hazards than just floods.

A full copy of the report can be found at


The June 2016 Australian East Coast Low: Importance of Wave Direction for Coastal Erosion Assessment

by Thomas R. Mortlock, Ian D. Goodwin, John K. McAneney and Kevin Roche.

In June 2016, an unusual East Coast Low storm affected some 2000 km of the eastern seaboard of Australia, bringing heavy rain, strong winds and powerful wave conditions. While wave heights offshore of Sydney were not exceptional, nearshore wave conditions were such that beaches experienced some of the worst erosion in 40 years. Hydrodynamic modelling of wave and current behaviour, as well as contemporaneous sand transport, shows the east to north-east storm wave direction to be the major determinant of erosion magnitude. This arises because of reduced energy attenuation across the continental shelf and the focussing of wave energy on coastal sections not equilibrated with such wave exposure under the prevailing south-easterly wave climate. Narrabeen–Collaroy, a well-known erosion hot spot on Sydney’s Northern Beaches, is shown to be particularly vulnerable to storms from this direction because the destructive erosion potential is amplified by the influence of the local embayment geometry. We demonstrate the magnified erosion response that occurs when there is bi-directionality between an extreme wave event and preceding modal conditions, and the importance of considering wave direction in extreme value analyses.

Click on the link to read the entire article:


Crowds are wise enough to know when other people will get it wrong

Unexpected yet popular answers often turn out to be correct.

This article by Cathleen O’Grady was published by Ars Technica on 29th January, 2017. O’Grady is Ars Technica’s contributing science reporter. She has a background in cognitive science and evolutionary linguistics.


The “wisdom of the crowd” is a simple approach that can be surprisingly effective at finding the correct answer to certain problems. For instance, if a large group of people is asked to estimate the number of jelly beans in a jar, the average of all the answers gets closer to the truth than most individual responses. The approach applies only to certain types of questions, but there is evidence of real-world usefulness, such as improving medical diagnoses.
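The averaging effect is easy to see in a small simulation. The numbers below are entirely made up for illustration: we assume a jar of 850 jelly beans and model individual guesses as noisy estimates around that true count, then compare the crowd's averaged answer with the typical individual error.

```python
import random

random.seed(0)

TRUE_COUNT = 850  # hypothetical number of jelly beans in the jar

# Simulate 1,000 noisy individual guesses around the true count.
guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)
mean_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(round(crowd_estimate))        # close to 850
print(round(crowd_error))           # much smaller than...
print(round(mean_individual_error))  # ...the typical individual error
```

Because individual errors are assumed to be independent, they largely cancel out in the average, which is why the crowd estimate beats most individuals.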

This process has some pretty obvious limits, but a team of researchers at MIT and Princeton published a paper in Nature (2016, DOI: 10.1038/nature21054) this week suggesting a way to make it more reliable: look for an answer that comes up more often than people think it will, and it’s likely to be correct.

As part of their paper, Dražen Prelec and his colleagues used a survey on capital cities in the US. Each question was a simple True/False statement with the format “Philadelphia is the capital of Pennsylvania.” The city listed was always the most populous city in the state, but that’s not necessarily the capital. In the case of Pennsylvania, the capital is actually Harrisburg, but plenty of people don’t know that.

The wisdom of crowds approach fails this question. The problem is that questions sometimes rely on people having unusual or otherwise specialized knowledge that isn’t shared by a majority of people. Because most people don’t have that knowledge, the crowd’s answer will be resoundingly wrong.

Previous tweaks have tried to correct for this problem by taking confidence into account. People are asked how confident they are in their answers, and higher weight is given to more confident answers. However, this only works if people are aware that they don’t know something—and this is often strikingly not the case.

In the case of the Philadelphia question, people who incorrectly answered “True” were about as confident in their answers as people who correctly answered “False,” so confidence ratings didn’t improve the algorithm. But when people were asked to predict what they thought the overall answer would be, there was a difference between the two groups: people who answered “True” thought most people would agree with them, because they didn’t know they were wrong. The people who answered “False,” by contrast, knew they had unique knowledge and correctly assumed that most people would answer incorrectly, predicting that most people would answer “True.”

Because of this, the group at large predicted that “True” would be the overwhelmingly popular answer. And it was—but not to the extent that they predicted. More people knew it was a trick question than the crowd expected. That discrepancy is what allows the approach to be tweaked. The new version looks at how people predict the population will vote, looks for the answer that people gave more often than those predictions would suggest, and then picks that “surprisingly popular” answer as the correct one.

To go back to our example: most people will think others will pick Philadelphia, while very few will expect others to name Harrisburg. But, because Harrisburg is the right answer, it’ll come up much more often than the predictions would suggest.
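For a binary question, the selection rule described above can be sketched in a few lines: compare the answer's actual vote share with the crowd's average prediction of that share, and pick whichever answer is more popular than predicted. The vote and prediction numbers below are invented for the Pennsylvania example, not taken from the paper.

```python
from statistics import mean

def surprisingly_popular(votes, predicted_true_share):
    """Return the 'surprisingly popular' answer to a True/False question.

    votes: list of bool answers from respondents
    predicted_true_share: each respondent's estimate (0..1) of the
        fraction of people who will answer True
    """
    actual_true = mean(1.0 if v else 0.0 for v in votes)
    predicted_true = mean(predicted_true_share)
    # True is surprisingly popular if it got more votes than the crowd
    # predicted; otherwise False exceeded expectations instead.
    return actual_true > predicted_true

# Hypothetical "Philadelphia is the capital of Pennsylvania" data:
# 70% wrongly vote True and expect agreement (predict 0.9), while the
# 30% who know the answer expect most others to get it wrong (0.8).
votes = [True] * 70 + [False] * 30
predictions = [0.9] * 70 + [0.8] * 30  # mean prediction = 0.87 > 0.70 actual
print(surprisingly_popular(votes, predictions))  # -> False (the correct answer)
```

Here "True" received fewer votes than the crowd predicted, so "False" is the surprisingly popular, and in this case correct, answer.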

Prelec and his colleagues constructed a statistical theorem suggesting that this process would improve matters and then tested it on a number of real-world examples. In addition to the state capitals survey, they used a general knowledge survey, a questionnaire asking art professionals and laypeople to assess the prices of certain artworks, and a survey asking dermatologists to assess whether skin lesions were malignant or benign.

Across the aggregated results from all of these surveys, the “surprisingly popular” (SP) algorithm had 21.3 percent fewer errors than a standard “popular vote” approach. In 290 of the 490 questions across all the surveys, they also assessed people’s confidence in their answers. The SP algorithm did better here, too: it had 24.2 percent fewer errors than an algorithm that chose confidence-weighted answers.

It’s easy to misinterpret the “wisdom of crowds” approach as suggesting that any answer reached by a large group of people will be the correct one. That’s not the case; it can pretty easily be undermined by social influences, like being told how other people had answered. These failings are a problem, because it could be a really useful tool, as suggested by its potential uses in medical settings.

Improvements like these, then, contribute to sharpening the tool to the point where it could have robust real-world applications. “It would be hard to trust a method if it fails with ideal respondents on simple problems like [the capital of Pennsylvania],” the authors write. Fixing it so that it gets simple questions like these right is a big step in the right direction.


Estimating building vulnerability to volcanic ash fall for insurance and other purposes

This paper by R. J. Blong, P. Grasso, S. F. Jenkins, C. R. Magill, T. M. Wilson, K. McMullan and J. Kandlbauer was published on 26th January 2017 in the Journal of Applied Volcanology.


Volcanic ash falls are one of the most widespread and frequent volcanic hazards, and are produced by all explosive volcanic eruptions. Ash falls are arguably the most disruptive volcanic hazard because of their ability to affect large areas and to impact a wide range of assets, even at relatively small thicknesses. From an insurance perspective, the most valuable insured assets are buildings. Ash fall vulnerability curves or functions, which relate the magnitude of ash fall to likely damage, are the most developed for buildings, although there have been important recent advances for agriculture and infrastructure.