History of early season bushfires in NSW and Queensland

By Lucinda Coates, Andrew Gissing & Paul Somerville

The ongoing fire emergencies in northeast New South Wales (NSW) and southeast Queensland (QLD) have attracted significant media attention and concern given the damage caused so early in the bushfire season. Nine homes in NSW and some 17 homes in QLD were destroyed last week. The NSW Rural Fire Service stated that fire dangers this high have not previously been recorded so early in the fire season (Hannam, 2019a). The fires may reflect not only high temperatures and strong winds but also a rainfall deficiency. According to the Bureau of Meteorology, rainfall over much of the fire-hit regions has been the lowest on record for the 20 months starting January 2018 and the 32 months starting January 2017 (Figure 1).

Figure 1. Rainfall deficiency across Australia over the past 17 months, April 1, 2018 – August 31, 2019. Source: BOM; Hannam (2019a).

The severity of the drought in the catchment of Burrendong Dam, 40 km west of Mudgee in central NSW, is illustrated in Figure 2. None of the four previous droughts dating back to 1906, including one as recent as 2012-2015, has been anywhere near as severe as the current one (Water NSW; Hannam, 2019b).

Figure 2. Cumulative inflows into Burrendong Dam, NSW during droughts. Source: Water NSW; Hannam (2019b).

A search of Risk Frontiers’ PerilAUS database provides further details about historical early season bushfires in NSW and QLD. Analysis of recorded events suggests that the September 2019 bushfires may be the most damaging on record among bushfires occurring in August or September. The fires, however, have caused significantly less damage than those that have occurred later in the fire season.

QLD did, nevertheless, experience a large number of separate fires in its southeast and southern areas in the last ten days of August 2000 and from August to October 2011, and also in the Gulf Savannah from July to September 2012. NSW experienced several damaging fires on 10 September 2013, just before the devastating fires of late September and October 2013. Details of previous bushfires that occurred during August and September are provided below.

August

QLD – Mountain Creek bushfire, 15 August 2013. A bushfire that started in bushland beside the east-west slip lane to the Sunshine Motorway and the Sunshine Coast TAFE caused the closure of the motorway. No buildings were damaged; however, the fire threatened to engulf the Maroochydore SES headquarters. Some 1,000 people were evacuated from the TAFE campus.

NSW – Ettalong bushfire, 4 August 1936. Burning for 8 hours and accompanied by strong winds, this bushfire destroyed houses, fencing and outhouses.

QLD – Toogoolawah bushfire, 25 August 1970. Horses and cattle were lost in the Toogoolawah bushfire, and one injury was sustained.

QLD – Mount Tamborine bushfire, 25 August 1991. One death occurred in the Mount Tamborine bushfire.

QLD – Palmerville bushfire, 25 August 1996. 110,000ha of pasture was destroyed in the Palmerville bushfire.

QLD – South East Qld bushfires, 29 August 2000. The Queensland Fire Service had received more than 1,000 reports of fires in southern Queensland in the previous 10 days. Fires scorched hundreds of hectares of scrub and bushland at Deception Bay, Caboolture, Elimbah, Morayfield, Tarragindi, Wynnum, Mt Crosby, Bribie Island, Woodridge and Redbank. A house in Deception Bay was destroyed. A total fire ban was extended in the council regions of Ipswich, Boonah, Gatton, Laidley, Esk and Beaudesert.

NSW – Lake Macquarie bushfire, 30 August 1995. A granny flat and a caravan were destroyed in the Lake Macquarie bushfire.

QLD – Southern Queensland bushfires, August 2011. From August to October 2011, approximately 345 fires occurred in Queensland across 42 local government areas. No homes were lost but there was significant loss of farm infrastructure such as fences, tanks and sheds.

September

QLD – Gulf Savannah bushfires, September 2012. Many fires throughout the Queensland Gulf Savannah had been burning over a period of three months, affecting more than 20 cattle stations to varying degrees. The historic Croydon-Esmeralda Homestead, dating back to the late 1800s, was lost, along with 90% of the Abingdon Downs Station. More than 500,000ha were burned, leaving no food for more than 60,000 cattle. At least 1500 cattle were lost.

NSW – Marsden Park grassfire, 10 September 2013. A 120 ha grassfire caused the loss of a home and damaged cars and sheds. Winds gusting to 70 km/h and record September temperatures of 32 degrees fanned the flames; arson is suspected as the cause.

NSW – Windsor grassfire, 10 September 2013. A severe bushfire sparked by powerlines falling on trees caused the evacuation of the University of Western Sydney campus. No property or homes were damaged.

NSW – Castlereagh grassfire, 10 September 2013. Bushfires burnt through more than 63 ha of bushland, with more than 1,000 firefighters needed across the region. Winds gusting to 70 km/h and record September temperatures fanned the flames. Only one shed was lost but homes and property were threatened, causing the evacuation of some areas.

NSW – Winmalee bushfire, 10 September 2013. 1,370 ha of bushland was burnt when hazard reduction burns jumped containment lines after a change in weather conditions. One house was destroyed, along with sheds, cars and caravans.

QLD – Pomona bushfire, 11 September 1970. Some pasture was destroyed (8ha of thickly timbered country was burnt) and one death occurred in the bushfire between Pomona and Cooran, Qld.

NSW – Morisset bushfire, 12 September 2013. A hazard reduction burn turned into a bushfire at Morisset, closing the F3 motorway (M1). A change in wind direction caused the controlled burn to jump containment lines and start a spot fire on the western side of the M1. The entire freeway was closed for an hour; a single lane was later opened. Motorists experienced significant delays.

QLD – Woodgate bushfire, 15 September 1969. Outhouses were destroyed and one death occurred in the Woodgate bushfire. More than 1,013 ha were burnt in total. A huge “wallum” bushfire threatened to engulf 200 houses. Fires had been in the area for one week and there had been no rain for a month: the bush “was like tinder”.

NSW – Sydney outskirts bushfires, 24 September 2006. High temperatures and strong winds of 110 km/h saw seven homes lost in south-western and north-western Sydney. Thirty-two fires were battled across NSW, with 2,000 ha burning across the state.

NSW – Taree Bushfire, 26 September 2013. A bushfire to the south of Taree burnt over 100ha, threatened 20 homes and caused the closure of the Pacific Highway in both directions. Taree South service centre was also evacuated as a result of the fire.

NSW – Shallow Bay bushfire, 26 September 2013. 50 homes were threatened by an out-of-control bushfire at Shallow Bay. The fire destroyed several sheds and burnt through 70 hectares of bushland. Residents in Shallow Bay were advised to stay in their homes as, for many, it was too late to evacuate.

NSW – Yarrowitch bushfire, 26 September 2013. A bushfire burnt through 300 ha and threatened six properties near Blomfields Rd and Kangaroo Flat Rd, Yarrowitch. One firefighter was injured after the NSW RFS truck he was driving crashed into a tree due to poor visibility.

NSW – Barrenjoey Headland bushfire, 28 September 2013. A blaze destroyed 60% of the headland surrounding the Barrenjoey lighthouse: the lighthouse was saved but the nearby lighthouse cottage sustained some roof damage. Eighty firefighters and three aircraft were needed to contain the fire.

About PerilAUS

Risk Frontiers has, since the early 1980s, built and maintained its PerilAUS database. PerilAUS holds records on natural hazard impacts in Australia from European settlement (1788), but with good confidence from 1900. It includes building damage and fatality information for bushfire, earthquake, flood, gust, hail, heatwave, landslide, lightning, rain, tornado, tropical cyclone and tsunami.

PerilAUS is unique in Australia and is distinguished from other hazard databases by the length of the period covered, the wealth of descriptive detail and the use of a “house equivalent” damage indicator (Blong, 2003; Blong, 2005). PerilAUS is comparable in some respects to well-known international disaster databases such as the Dartmouth Flood Observatory global flood database and the CRED/OFDA International Disaster Database, EM-DAT.

The database contains about 15,700 records from 1900 to 2015. Data has been sourced from news media, government reports, the Insurance Council of Australia (ICA) Disaster List and publicly available coronial records.

The database has been used as a key data source in numerous peer-reviewed research papers and major reports. Its completeness has in part been supported by the Bushfire and Natural Hazards Cooperative Research Centre.

For any further information about PerilAUS please contact Risk Frontiers at info@riskfrontiers.com.

Further information

For further information please contact Andrew Gissing at andrew.gissing@riskfrontiers.com.

References

Blong, R. J., 2003: A new damage index.  Natural Hazards 30: 1-23.

Blong, R. J., 2005: Natural hazards risk assessment: an Australian perspective. Issues in Risk Science 4. Benfield Hazard Research Centre, London. 29 pp.

Hannam, P. (2019a). An ill wind fans the flames. Sydney Morning Herald. [Available Online] https://www.smh.com.au/environment/climate-change/an-ill-wind-fans-the-flames-20190912-p52qir.html

Hannam, P. (2019b). ‘We’ll be bathing in salt water’: At the epicentre of Australia’s big drought. Sydney Morning Herald. [Available Online]
https://www.smh.com.au/environment/sustainability.we-ll-be-bathing-in-salt-water-at-the-epicentre-of-australia-s-big-drought-20190828-p52lsx.html

Death Benefits

In her latter years the author of To Kill a Mockingbird, Harper Lee, obsessively followed the case of a rural preacher, Reverend Willie Maxwell. The case gripped Alabama. Maxwell was accused of murdering five of his family members for insurance money in the 1970s. With the help of a savvy lawyer, Maxwell escaped justice for years until a relative shot him dead at the funeral of his last victim. Despite hundreds of witnesses, Maxwell’s murderer was acquitted – thanks to the same attorney who had previously defended the Reverend.

Lee had the idea of writing a true-crime classic like the one she had helped her friend Truman Capote research (In Cold Blood). Casey Cep’s book, Furious Hours, details this history and that of the book that was never written. This extract provides a short history of insurance.


“Before Lieutenant Henry Farley fired the first ten-inch mortar at Fort Sumter, there was not much of a life insurance industry in the United States. There was property insurance, of course, for ships and warehouses, and, appallingly, for slaves, but even the most entrepreneurial types in an entrepreneurial young nation had not figured out a way to make money from insuring lives. To know how much to charge people until they died, you had to know how  long they were likely to live, which was impossible because companies lacked actuarial data; to maintain consumer confidence, you had to have enough money on hand to cover all death benefits, no matter how early or unexpected someone’s demise, which was difficult because capital was hard to raise. The Civil War solved both of those problems, changing not only the way Americans died but how they prepared for death. By the time that Union soldiers had taken all the souvenirs they could from the house at Appomattox where General Lee surrendered, Americans were insuring their lives at record rates.

Although it took hold in the United States over the course of four short years, the life insurance industry was, by then, thousands of years old. Its earliest incarnation, however, looked less like companies selling policies than like clubs offering memberships. During the Roman Empire, individuals banded together in burial societies, which charged initiation and maintenance fees that they then used to cover funeral expenses when members died. Similarly, religious groups often took up collections for grieving parishioners to cover the costs of burial and to provide aid to widows and orphans. It was centuries before these fraternal organizations came to operate like financial markets, and it took one city burning and another one crumbling for them to do so.

The city that burned was London. One Sunday morning in 1666, at the end of a long, dry summer, a bakery on Pudding Lane went up in flames. The houses around it caught fire one after another, like a row of matches in a book, and strong winds carried the blaze toward the Thames River, where it met warehouses filled with coal, gunpowder, oil, sugar, tallow, turpentine, and other combustibles. By Monday, flames and embers were falling from the sky; by Tuesday, the blaze had melted the lead roof of St. Paul’s Cathedral and the iron locks of the city gates. On Wednesday, the winds shifted, and the breaks made by demolishing buildings at the edges of the disaster finally held. By then, though, the Great Fire of London had destroyed more than thirteen thousand structures and left one hundred thousand people homeless.

One of the men who made a fortune rebuilding the city after the blaze was a medical doctor turned developer with the appropriately fiery name of Nicholas If-Christ-Had-Not-Died-for-Thee-Thou-Hadst-Been-Damned Barebone. (The hortatory name had been given to him by his father, the millenarian preacher Praise God Barebone.) With his considerable profits, Dr. Barebone founded an “Insurance Office for Houses” that employed its own team of firefighters to protect the buildings on which it held insurance—five thousand of them, eventually. In an apt abridgment, the doctor became known around London as “Damned Barebone,” not only because of the ruthlessness with which he ignored housing regulations and local opposition to his construction projects, but also because of the soullessness with which his firefighters responded exclusively to fires in homes where a small tin plaque indicated that the owners were clients. Barebone’s “firemarks” soon proliferated in first-floor windows around the city, and the practice of paying a little money now to insure against larger risks later became more popular. Within a decade, Barebone had come up with another innovation in the field, one that paved the way from fire insurance to life insurance: he created a joint-stock company to finance his policies. The first of its kind, it allowed investors to buy and own stock in an insurance company, the way they already could in mills, mineral mines, and spice trades.

Newly able to attract investors, insurance companies could finally raise capital. But the value of any given life was uncertain—far more so, even, than the fluctuating prices of saffron or gold. Say a banker in Dover bought a policy and then lived another four decades; by the time he died, he would have paid premiums for forty years, and his policy would have matured enough for the insurer to provide the full benefit to his widow and still make a profit. But say the same banker went straight from buying his policy to visiting the White Cliffs and promptly drowned in the English Channel. In that case, the banker’s wife would get the full benefit at a fraction of the cost, while the insurer, far from making a profit, would take a substantial loss. The success of insurance companies depended on being able to guess which scenario was more likely, dying of old age or falling off a cliff—in the utter absence of any actual information about aging, falling, or all the other myriad ways that people die.

Part of the reason that information didn’t exist was theological. Devout Christians were not meant to concern themselves with the details of their deaths. Like the timing of the Second Coming, as Christ proclaimed in the Gospel of Matthew: “Of that day and hour knoweth no man, no, not the angels of heaven.” God, who kept watch even over the sparrow, would provide, and to doubt those provisions by making one’s own end-of-life preparations was thought to reveal a lack of faith. Thus was the life insurance industry caught between a math problem and God.

To make matters worse, the overall reputation of the insurance industry had been tarnished by the sale of speculative policies, a practice barely distinguishable from betting. You could buy speculative policies with payouts contingent on everything from whether a given couple got divorced to when a particular person lost his virginity—or, in one infamous case, if a well-known cross-dressing French diplomat was biologically a man or a woman. Such policies could be purchased in secret, and the purchaser did not need to have any connection to the “insured.” These seedy practices, along with the obvious incentive to murder someone whose life you had insurance on, had led France, Germany, and Spain to ban life insurance outright. England, meanwhile, created the insurable interest standard, which mandated that an insurance policy could be sold only to the person being insured or someone who had an “interest” in his life—that is, an interest in his remaining alive. But not even those advances cleaned up the industry. They only encouraged a new kind of speculation, in which elderly, indigent, or ill policyholders auctioned their insurance policies to investors who bid based on how long they thought the seller would live.

Of these various obstacles to establishing a life insurance industry—spiritual, mathematical, reputational—the mathematical one was solved first. Everyone knew that death, while uncertain, was also inevitable, yet before the seventeenth century no one had even tried tracking it, let alone measuring life spans in particular populations or for specific professions. The closest thing to an actuarial table at the time was a Bill of Mortality, a grim British innovation that listed plague victims in various parishes around the country. In 1629, a quarter century after he commissioned a new translation of the Bible, King James I instructed his clergy to start issuing those bills for all deaths, not just the ones caused by plague. Later, around the time of the Great Fire, John Graunt, a London haberdasher who dabbled in demography, organized those bills, arranging twenty years’ worth of death into eighty-one causes and making it possible to see when people were most likely to die and what was most likely to kill them.

Armed with population information for the first time, insurance companies began to get a handle on probability calculations, and soon enough a natural disaster helped ease their difficulties with religion. On the feast of All Saints in 1755, just before ten in the morning, one of the deadliest earthquakes ever recorded struck the city of Lisbon. When the shaking finally stopped—fully six minutes later, some records say—tens of thousands of people had died as homes and churches collapsed, and fissures up to sixteen feet wide gaped open in the earth. Not long after, the waters along the coast of Portugal drew back in a sharp gasp, exposing the bottom of the harbor. Throngs of amazed onlookers had flocked to see old shipwrecks newly revealed on the seabed when, nearly an hour later, the ocean exhaled and a tsunami washed over the city, killing thousands more. The scale of the tragedy was so vast that existing theodicies seemed inadequate, and all of Europe struggled to answer the existential questions raised by the Lisbon catastrophe.

In the course of that struggle, theologians found themselves competing with Enlightenment philosophers, who seized on the earthquake to offer a rival account of the workings of the natural world. If earthquakes were not divine punishments but geological inevitabilities, then perhaps insuring oneself against death was not contrary to God’s plan but a responsible and pious way to provide for one’s family. By the end of the eighteenth century, that idea had gained legitimacy throughout Europe. Once it took hold, religious groups, initially opposed to the entire notion of life insurance, became some of its strongest advocates, in some cases even starting denominational funds to sell policies to their members.

That practice eventually spread to the United States, where even today millions of Americans buy their life insurance through religiously affiliated companies like Catholic Financial Life and Thrivent Financial for Lutherans. But such developments were a long time coming. Unlike Europe, which had decades’ worth of mortality tables by the eighteenth century, colonial America had little reliable information on life expectancy, making it difficult for insurers to set prices and underwrite policies. When companies did try to offer life insurance, there were often too many beneficiaries attempting to make claims at once and rarely enough money to cover them.

In addition, although most states required insurable interest, the American life insurance industry remained exceptionally vulnerable to fraud. Some policyholders lied from the start, fibbing about their age or forging their medical history. Others lied as they went along, violating the terms of their policies by traveling to restricted places (the malarial South, for instance) or by restricted means (by railroad, without the appropriate rider). Still others lied at the end, faking their own deaths or disguising their suicides as accidents. But calling out such lies was tricky. Contesting any claim was expensive, and litigation rarely resulted in denial of coverage, since jury members were far more likely to want to see their own policies honored than care about the profit margins of insurance companies. Moreover, whenever a company preserved its profits by denying a fraudulent claim—say, a father who had failed to disclose an illness, or a husband who had purchased arsenic a few days before he died—it risked damaging its reputation in the eyes of a skeptical public, who worried that their own heirs might be cheated, too.

As companies attempted to grow, they exposed themselves to even more fraud through their own lapses in judgment. Some of their agents approved policies too freely in an effort to earn larger commissions, while some managers invested assets too dangerously in an effort to earn larger returns. Spreading into new territories meant recruiting new agents, not all of whom were scrupulous, and the more geographically diverse a company became, the less it knew about the background, life, and likely death of its would-be customers, making arbitrage of any kind difficult. The expansion of the postal service in the second half of the nineteenth century enabled mail-based sales but also mail-based fraud, on both ends: nonexistent companies could market nonexistent policies by mail, while unscrupulous clients could send away for policies they might never have qualified for in person.

Individual states tried to protect consumers by setting deposit requirements for companies and restricting their investments. But those same protections slowed sales, because they required more due diligence at every stage of the process, and decreased investment returns, because they left firms with less freedom to take the kinds of risks that could make their stocks rise. Unable to sell as many policies, companies had to pool risks across a smaller population, which left them struggling to remain profitable. Eventually, however, an industry shift from stock companies, which were owned by investors, to mutual companies, which were owned by policyholders themselves, allowed insurance companies to free themselves from the capital game; instead of attracting investors, they needed only to recruit customers. That became possible due to the carnage of the Civil War, which did for the United States what earthquakes and fires had done for Europe: spread a sense of both dread and obligation around the country, creating a massive demand for life insurance.

The total value of policies increased from $160 million in 1862 to an incredible $1.3 billion in 1870. Within fifty years there were almost as many life insurance policies as there were Americans.”

 

Using catastrophe loss models to improve decision making in disaster management

By Andrew Gissing and Ryan Crompton

Catastrophe loss models

Catastrophe loss models are decision support systems used extensively in the (re)insurance industry to assist in pricing risk and aggregate exposure management. They also offer significant benefits in improving disaster risk reduction decision making.

Figure (1): Catastrophe model framework

Over the last 25 years Risk Frontiers has developed a sophisticated suite of Australian probabilistic catastrophe loss models to quantify the impacts of flood, bushfire, hail, tropical cyclone and earthquake. These risk models have nationwide coverage and comprise the following modules (see Figure 1):

  • Hazard – estimates the hazard intensity footprint for a specific event, for example flood extent or ground shaking intensity.
  • Exposure – provides location-based information about relevant assets.
  • Vulnerability – estimates the level of financial loss to different types of property as a function of hazard intensity.

Risk Frontiers’ catastrophe loss models provide scientifically based damage estimates for insurable assets such as residential, commercial and industrial properties, giving users information about possible financial losses and the associated average recurrence intervals (ARIs). Standard outputs from the financial module include exceedance probability (EP) curves (return periods) and average annual losses (AALs).
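
As a minimal sketch, the example below shows how an AAL and the losses at selected ARIs can be derived from an event loss table. The event set, occurrence rates and loss amounts are synthetic values invented for illustration; they are not outputs of Risk Frontiers’ models.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical event loss table: each row is one synthetic event with an
    # annual occurrence rate and a portfolio loss (in $m).
    n_events = 10_000
    rates = rng.uniform(1e-4, 1e-2, n_events)                   # events per year
    losses = rng.lognormal(mean=2.0, sigma=1.5, size=n_events)  # loss per event, $m

    # Average annual loss (AAL) is the rate-weighted sum of event losses.
    aal = np.sum(rates * losses)

    # EP curve: sort events by loss (largest first) and accumulate rates to get
    # the annual rate of exceeding each loss level.
    order = np.argsort(losses)[::-1]
    sorted_losses = losses[order]
    exceed_rate = np.cumsum(rates[order])
    annual_prob = 1.0 - np.exp(-exceed_rate)   # Poisson occurrence -> annual EP

    # Loss at selected average recurrence intervals (ARI = 1 / exceedance rate).
    for ari in (100, 250, 500):
        idx = np.searchsorted(exceed_rate, 1.0 / ari)
        print(f"ARI {ari:>3} yr loss: ${sorted_losses[idx]:8.1f}m "
              f"(annual EP {annual_prob[idx]:.4f})")
    print(f"AAL: ${aal:.1f}m")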

Using a suite of models enables the comparison of possible losses between hazards at various ARIs for a given geographic area (Figure 2). Loss estimates can also be used to inform benefit-cost estimations of different disaster risk reduction investments by varying the vulnerability module¹.

Figure (2): Risk exposure comparison
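
In the same hedged spirit, the sketch below illustrates how varying the vulnerability module supports benefit-cost analysis: the same event set is run through a modified vulnerability function, and the reduction in AAL is compared with an annualised mitigation cost. The vulnerability function, the 30% retrofit effect, the exposure value and the cost figure are all illustrative assumptions, not model outputs.

    import numpy as np

    rng = np.random.default_rng(1)
    rates = rng.uniform(1e-4, 1e-2, 5_000)     # event annual occurrence rates
    intensity = rng.uniform(0.0, 1.0, 5_000)   # normalised hazard intensity per event

    def damage_ratio(intensity, retrofit=False):
        """Toy vulnerability function: damage ratio as a function of intensity.
        A retrofit (e.g. stronger roofing) is assumed to cut damage by 30%."""
        base = np.clip(intensity ** 2, 0.0, 1.0)
        return base * (0.7 if retrofit else 1.0)

    exposure = 500.0                           # portfolio value, $m
    aal_now = np.sum(rates * damage_ratio(intensity) * exposure)
    aal_retrofit = np.sum(rates * damage_ratio(intensity, retrofit=True) * exposure)

    annual_benefit = aal_now - aal_retrofit    # avoided losses per year, $m
    annual_cost = 0.5                          # assumed annualised retrofit cost, $m
    print(f"AAL now: ${aal_now:.2f}m, after retrofit: ${aal_retrofit:.2f}m")
    print(f"Benefit-cost ratio: {annual_benefit / annual_cost:.1f}")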

In addition to estimating financial losses, model outputs can be combined with vulnerability functions that enable the estimation of loss of life and infrastructure disruption.

Catastrophe loss models can be used to inform disaster planning and capability analysis by enabling the development of ‘what if’ scenarios, for example estimating the impact of a magnitude 7 earthquake occurring beneath Melbourne (Figure 3). The models can also be used before or during actual events to forecast possible impacts.

Risk Frontiers combines output from its hazard modules with other data sources to maintain a multi-hazard database for Australia (Figure 4). This database provides national address-based risk ratings for flood, bushfire, earthquake, severe storms, storm tide, tropical cyclones and other hazards. It can be used to assess risk across national asset portfolios, identify community risk profiles and inform property owners of their natural hazard risk profile.

Figure (3): Damage estimate for magnitude 7 earthquake in Melbourne
¹ Walker, G. R., M. S. Mason, R. P. Crompton, and R. T. Musulin, 2016. Application of insurance modelling tools to climate change adaptation decision-making relating to the built environment. Struct Infrastruct E., 12, 450-462.

Understanding future risk

The catastrophe loss modelling framework is ideally suited to considering influences on future risk such as climate change, mitigation investment, increased development and changes to building codes. The Geneva Association, a peak insurance industry think tank, concluded that by combining catastrophe models with the latest climate science, an enhanced understanding of future weather-related risk impacts could be developed. Such use provides greater insight into the impacts of climate change on natural hazards than is currently possible using Global Climate Models alone.

Figure (4): Multi-hazard address-based risk rating

Risk Frontiers’ Australian catastrophe loss models

FloodAUS. FloodAUS is based on the National Flood Information Database. The scope of the model is further extended using Risk Frontiers’ Flood Exclusion Methodology. Correlations between catchments are modelled to provide estimates of total event losses.

FireAUS. The upcoming release uses MODIS Burnt Area Products along with other data sources, machine learning models and fire-tracking algorithms to derive a national synthetic event set from which losses are calculated.

QuakeAUS. QuakeAUS is a national earthquake model for Australia. Starting from a record of historical seismicity, it uses a ground motion prediction model developed specifically for Australia. A major update of the model has been completed to incorporate Geoscience Australia’s recent revision of the National Seismic Hazard Assessment, including a revision of the Australian Earthquake Catalogue. It also includes for the first time an active fault model.

HailAUS. HailAUS is a loss model for hail with nationwide coverage. It includes a catalogue of hailstorms reflecting the frequency and severity of ‘high storm potential days’ derived from reanalysis data and the observed historical record. In addition to calculating damage to property the model includes a motor vehicle damage estimation module.

CyclAUS. CyclAUS is a tropical cyclone wind loss model for Australia covering the entire region at risk of tropical cyclones. Detailed vulnerability functions enable losses to be estimated.

 

For further information contact Andrew Gissing at andrew.gissing@riskfrontiers.com

 

The 14 July 2019 Mw 6.6 Offshore Broome and Mw 7.3 Halmahera Earthquakes

by Paul Somerville

A magnitude Mw 6.6 earthquake occurred about 200 km west of Broome on 14 July 2019 (Figure 1). It is the second largest earthquake to have occurred in or near Western Australia in historical time. This earthquake was followed about 3.5 hours later by a magnitude Mw 7.3 earthquake in Halmahera, Indonesia (Figure 2). Both earthquakes occurred at shallow depths of about 10 km, and both had strike-slip focal mechanisms, which involve the horizontal movement of one side of the fault past the other. This is presumably why no tsunami warning was issued (and none was observed) for either event: tsunami generation requires uplift or subsidence of the sea floor, or some other form of volume change such as that caused by submarine landsliding or volcanic eruption. As described below, the focal mechanisms indicate that the earthquakes were caused by similarly oriented stress fields, but the large distance separating them suggests that their close timing was coincidental.

The Offshore Broome earthquake occurred at about 1:30 pm local time and was felt widely in Western Australia, from Esperance to Darwin. The duration of the shaking was consistently reported to be between 45 seconds and one minute. There are no known reports of structural damage, but there was some non-structural damage to ceilings, and objects fell from supermarket shelves in Broome and other towns. Based on these reports, it is likely that the peak ground accelerations were in the range of 2–10% g. Several people reported that the initial shaking was accompanied by a roaring noise. This may have been caused by the acoustic coupling into the air of the compressional (P) waves, which are sound waves in rock. Although no tsunami warning was issued, the Shire of Broome announced just after 4 pm that it would be closing Cable Beach, Town Beach, Entrance Point and Reddell Beach as a temporary precaution against any potential tidal surge.

The Halmahera earthquake occurred at about 4pm local time, and Indonesia’s earthquake monitoring agency, the BMKG, estimated that 4000 people were exposed to “very strong” effects from the earthquake.  It is reported that people were seen fleeing a building on the neighbouring island of Ternate, about 168 kilometres north-west of the epicentre.  Ternate is the largest city in the province of North Maluku, and home to about 200,000 people.  One person is reported to have been killed in southern Halmahera.

Figure 1. Left: The small white star shows the location of the Offshore Broome earthquake, and the beach ball shows a map view of the two possible fault planes, one oriented northeast and the other oriented northwest. Right: Historical earthquakes in northwestern Western Australia, showing the Offshore Broome earthquake as a red star. Source: European-Mediterranean Seismological Centre.
Figure 2. Left: Location map of the Halmahera earthquake. Centre: Detailed location map. Right: Focal mechanism showing possible northeast and northwest oriented fault planes. Source: USGS.

Causes of the earthquakes

As shown in Figure 3, the Indo-Australian Plate is subducting beneath (diving down under) the Sunda Plate along the Java Trench (western part of the map), because oceanic crust is thin and dense and easily subducts. However, the Indo-Australian Plate is colliding with the Sunda Plate in the Timor region (eastern part of the map), because in this region the Indo-Australian Plate consists of thick, buoyant continental crust that cannot be subducted. Consequently, the part of the Indo-Australian Plate that is subducting beneath the Sunda Plate is moving to the northeast at a higher rate than the part that is colliding with Timor. This causes strike-slip earthquakes, like the Offshore Broome earthquake, to occur along the Western Australia Shear Zone, as shown by the numerous offshore earthquakes with a northeast-southwest alignment on the right side of Figure 1. The focal mechanism of the earthquake, shown on the left side of Figure 1, indicates faulting on either a northeast or northwest oriented fault plane; the northeast plane is consistent with the northeast orientation of the Western Australia Shear Zone in Figure 3. In either case, the earthquake was caused by local crustal shortening in a north-south direction and extension in an east-west direction.

Unlike the Offshore Broome earthquake, which occurred within the Indo-Australian Plate, the Halmahera earthquake occurred within the Sunda Plate, and was caused by the collision of those two plates in Timor.  As for the Offshore Broome earthquake, the focal mechanism of the Halmahera earthquake, shown on the right side of Figure 2, indicates faulting on either a northeast or northwest oriented fault plane. In either case, the earthquake was caused by local crustal shortening in a north-south direction and extension in an east-west direction, due to the plate collision shown in Figure 3.

As shown in Figure 3, the Western Australia Shear Zone extends onshore in the vicinity of Dampier. The largest earthquake known to have occurred in historical time in Western Australia is the Mw 7.25 Offshore Geraldton earthquake of 11 November 1909; the location of that event is shown in the bottom left corner of Figure 3, where the Ms of 7.8 is the surface wave magnitude. The Mw 6.58 1968 Meckering earthquake had a marginally lower magnitude than the Offshore Broome event, and practically destroyed the town of Meckering. The Offshore Broome earthquake is also larger than the 1941 Mw 6.52 Meeberrie earthquake.

Figure 3. Tectonic map of northwestern Western Australia showing the locations of the Western Australia Shear Zone (the zone outlined by white lines; mapped faults shown by red lines). The Offshore Broome earthquake occurred near the eastern edge of the zone, southeast of the letters “RS” (Rowley Shoals). The purple line, which lies just north of Timor, shows the edge of Australian continental basement. The Indonesian islands are located on the Sunda Plate to the north, and Australia is located on the Indo-Australian plate to the south. Source: Hengesh & Whitney, 2016.

Reference

Hengesh, J. V., and B. B. Whitney (2016). Transcurrent reactivation of Australia’s western passive margin: An example of intraplate deformation from the central Indo-Australian plate, Tectonics, 35, 1066–1089, doi:10.1002/2015TC004103.

 

The 4-5 July 2019 M 6.4 and 7.1 Ridgecrest, California Earthquakes

Paul Somerville, Chief Geoscientist, Risk Frontiers

An M 6.4 earthquake occurred near Ridgecrest, Southern California, about 180 km north of Los Angeles, on July 4th, 2019. It was preceded by a short series of small foreshocks (including an M 4.0 earthquake 30 minutes prior) and followed by a strong sequence of aftershocks, whose epicentres aligned with both possible fault planes (NE-SW and NW-SE) of the focal mechanism solution of the M 6.4 event, as shown in Figure 1. On July 6th UTC (July 5th, 20:19 local time), an Mw 7.1 earthquake occurred at the northwest extension of the M 6.4 event, preceded 20 seconds earlier by a magnitude 5.5 earthquake.

Figure 1. Location of the M 6.4 July 4 earthquake and aftershocks (left) and the M 7.1 July 5 earthquake and aftershocks (right), also showing in green the M 6.4 event and its late aftershocks. Source: Temblor.

M 6.4 Earthquake Ruptured Two Orthogonal Faults

The epicenter of the M 6.4 earthquake is located near the intersection of its two possible fault planes, and the distribution of aftershocks on two orthogonal planes, shown on the left of Figure 1, suggests that it ruptured both of them. As shown on the left side of Figure 2, an earthquake is represented by a shear dislocation on a fault, shown by the opposing thick arrows, which has a force representation consisting of two equal but opposing couples, represented by the two pairs of thin arrows shown near the outer edges of the cloverleaf pattern. The opposing couples maintain dynamic equilibrium by yielding no net force and no net torque. This means that, observed from a distance and only using information on the direction of first motion of the seismic waves (up or down), we cannot identify on which of the two possible fault planes the earthquake occurred.

Figure 2. Map views of the double couple representation of a vertical strike-slip earthquake mechanism.

As shown in the centre panel of Figure 2, the same force system – north-south compression and east-west extension, shown by the wide double arrows – can produce strike-slip movement on either the northwest (left) or northeast (right) striking fault plane. This explains why the M 6.4 event could rupture both of the two possible fault planes in the same earthquake. Seen from afar using first motions of P waves, the two potential fault planes are demarcated by up (compressional) and down (rarefactional) quadrants that cover the earth’s surface, as shown on the right of Figure 2.

To resolve which plane or planes hosted the earthquake, we need to look at aftershock locations (Figure 1), look for surface faulting (Figure 3), or analyse the mainshock waveforms to identify where the seismic waves actually came from (Figure 4). The latter has been done for the M 7.1 event (Figure 4), which shows horizontal rupture of up to 2.5 metres over a 60 km length of the fault extending from the ground surface to a depth of about 10 km. The road that was ruptured by both earthquakes is State Highway 178, whose location is partially shown in Figure 5; it extends eastward from Ridgecrest.

Figure 3. A few cm of left-lateral surface rupture of the M 6.4 earthquake (left) and about one metre total of right-lateral surface rupture of the M 7.1 earthquake (right). Here and in Figure 2, left lateral means the other side of the fault from the one on which you are standing has moved to the left; and right-lateral means it has moved to the right.
Figure 4. Slip on the vertical fault plane of the M 7.1 earthquake, right end is southeast. Source: USGS.
Figure 5. Map of earthquake epicentres and roads. Source: USGS.

Relationship to the San Andreas Fault

The San Andreas fault, which runs diagonally from northwest to southeast on the left side of Figure 6, forms the main and long-established boundary between the North American plate to the east and the Pacific plate to the west. It runs through San Francisco and lies close to Los Angeles and other Southern California cities, so news of a large earthquake in California raises alarm among the general public.

Figure 6. Fault map of central and southern California (CGS, left), with Ridgecrest located above the first “M” in “Mojave Desert” (see also Figure 7 for location), and map of the 1872 Owens Valley earthquake showing the location of the Ridgecrest earthquakes in the lower right (Temblor, right).

However, another incipient part of the boundary is forming to the east of it, running in a north-northwesterly direction, and it hosted the M 7.6 Owens Valley earthquake of 1872 and the July 2019 Ridgecrest earthquakes, shown on the right side of Figure 6. The orange lines in Figure 6 indicate faults that have not ruptured in historical time, and the red lines are ones that have. The three big historical earthquakes in California are the M 7.9 1857 Fort Tejon earthquake on the south-central San Andreas fault (roughly from the latitudes of San Luis Obispo to Riverside; Figure 6, left), the 1872 M 7.6 Owens Valley earthquake (Figure 6, right), and the 1906 M 7.8 San Francisco earthquake on the northern San Andreas fault (roughly from the latitude of San Luis Obispo past San Francisco, off the map in Figure 6, left). Since the 1872 Owens Valley earthquake, other earthquakes that have occurred on this incipient eastern plate boundary are the 1992 M 6.3 Joshua Tree, 1992 M 7.2 Landers, 1992 M 6.5 Big Bear, 1995 M 5.8 Ridgecrest, and 1999 M 7.1 Hector Mine earthquakes. The rupture zones of the Landers and Hector Mine earthquakes are shown by red lines east of Victorville on the left side of Figure 6.

Reported Damage and Future Warnings

The MMI intensity shakemap of the earthquake is shown on the left side of Figure 7. The main damage in the towns of Ridgecrest (population 29,000) and Trona (population 1,900) appears to have been incurred by older houses and trailer homes (which readily topple from their foundations; right side of Figure 7); some house fires also started but were soon extinguished. There were no reported deaths or serious injuries.

Figure 7. Shakemap (USGS, left), and damage to a trailer home and a fire (right).

The Governor of California announced that the estimated losses are about US$100 million, but the USGS made a preliminary estimate of at least $1 billion. It is possible that this larger estimate includes damage to the Naval Air Weapons Station China Lake, a 1.1 million acre weapons testing facility, the Navy’s largest, whose location is shown in Figure 1 and within which the epicentres of both the M 6.4 and 7.1 earthquakes were located. The facility’s Facebook page announced that it is “not mission capable until further notice,” but officials said that security protocols “remain in effect.”

According to the current USGS forecast, over the week beginning on July 6, 2019 at 2:20 p.m. Pacific Time (5:20 p.m. ET), there is a 2% chance of one or more aftershocks larger than magnitude 7.1. The number of aftershocks will drop off over time, with the largest expected to have a magnitude of about 6, but a large aftershock can increase the numbers again temporarily.
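
The drop-off in aftershock numbers described above is conventionally modelled with the modified Omori law, under which the aftershock rate decays roughly as the inverse of time since the mainshock. The sketch below integrates that rate over successive weeks; the parameter values (K, c, p) are generic illustrations, not the parameters of the USGS Ridgecrest forecast.

    import numpy as np

    def omori_rate(t_days, K=200.0, c=0.05, p=1.1):
        """Modified Omori law: aftershock rate (events/day) t days after the mainshock."""
        return K / (c + t_days) ** p

    # Expected aftershock counts in successive weeks (simple numerical integration).
    for week in range(1, 5):
        t = np.linspace((week - 1) * 7 + 0.01, week * 7, 10_000)
        count = np.sum(omori_rate(t)) * (t[1] - t[0])
        print(f"Week {week}: ~{count:.0f} aftershocks expected")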

The recently installed early earthquake warning system ShakeAlert, which detects earthquakes and announces that they have occurred in near real time, worked as planned but not as some people would have liked.  The designers of the ShakeAlert LA app decided that most people would not want to be notified of large earthquakes that are too distant to cause damaging shaking near them.  Consequently, the ShakeAlert LA app was designed to only send an alert if the magnitude is above 5 and the MMI intensity is 4 or higher somewhere in Los Angeles County.  The Ridgecrest earthquakes met the magnitude criterion but not the intensity criterion in Los Angeles, and so did not result in alarms.  However, many people were alarmed because they clearly felt the long “rolling” motions (surface waves) of the distant large earthquakes and were concerned that the alarm was not working properly.  The designers of the app may now consider providing alarms, perhaps nuanced, of large distant earthquakes that people may feel but that do not present local damage potential.

Darwin shaken by a deep Mw 7.3 earthquake in the Banda Sea

By Paul Somerville, Chief Geoscientist, Risk Frontiers

Darwin was shaken at around noon today by a deep Mw 7.3 earthquake that occurred in the Banda Sea. Both Geoscience Australia and the United States Geological Survey reported that the earthquake occurred at a depth of about 200 km.  Such deep earthquakes do not generate tsunamis, and no tsunami warning has been issued.

Figure 1. Location of the Mw 7.3 Banda Sea earthquake of 24 June 2019. Source: Geoscience Australia.

There was an alarming level of shaking in Darwin, but the earthquake was far enough away that damage would not be expected, and none has been reported to date. Parts of the Darwin CBD were evacuated, but we are not aware of any evacuation order and surmise that the long duration of the earthquake shaking caused sufficient alarm to prompt voluntary evacuation.

Had proper procedures been followed, the people who evacuated would instead have used the “drop, cover and hold on” procedure described at https://www.shakeout.org/dropcoverholdon/ and would not have left the building until the shaking was over.

Unlike the sharp (high frequency or short period), short duration ground motion that is experienced near small earthquakes in Australia, the shaking from large distant earthquakes is often described as “rolling” (low frequency or long period) ground motion that can last for a long time.  Some people reported shaking lasting 5 minutes. This long duration may have added to the sense of alarm and prompted people to evacuate.  This shaking is expected to have been most pronounced in the upper floors of the taller buildings in the CBD, because their height causes them to have relatively long natural periods of vibration and they are thus “tuned” to the long period of the incoming seismic waves.  In contrast, low-rise buildings are most vulnerable to the short period ground motions from nearby small earthquakes, and are less vulnerable to long period ground motions.
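
A back-of-envelope illustration of this “tuning”: a common rule of thumb puts a building’s fundamental period of vibration at roughly 0.1 seconds per storey, so only taller buildings fall within the period band of surface waves from large distant earthquakes. The storey counts and period bands in the sketch below are illustrative assumptions, not measurements of Darwin buildings.

    # Rule-of-thumb fundamental period: T ~ 0.1 s per storey (illustrative only).
    for storeys in (2, 10, 30):
        period = 0.1 * storeys  # seconds
        # Surface waves from large distant earthquakes are typically ~1-10 s;
        # shaking from nearby small events is mostly below ~0.5 s.
        if period >= 1.0:
            response = "tuned to long-period waves from distant large earthquakes"
        else:
            response = "responds mainly to short-period shaking from nearby events"
        print(f"{storeys:2d} storeys: T ~ {period:.1f} s -> {response}")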

Unfortunately, it appears that we have missed an opportunity to record the ground motions in these Darwin CBD buildings. Without such recordings, we are left with uncertainty in the level of ground motion that they experienced. We need recordings so that we are better able to estimate the ground shaking levels that should be used in northern Australia to design buildings and infrastructure to withstand large earthquakes to our north. The frequent large earthquakes that occur to the north and east of Australia are illustrated by the earthquake epicenter map for 2019 shown in Figure 2.

Figure 2. Locations of earthquakes that have occurred in 2019. The Mw 7.3 Banda Sea earthquake of 24 June 2019 is shown by the large red circle north of Darwin. The radius of the circle increases with increasing magnitude. Source: Geoscience Australia.

Risk Frontiers Seminar 2019

Wednesday, 11th September 2019

at the Museum of Sydney
cnr Bridge and Phillip Streets, Sydney
2pm until 4.30pm followed by light refreshments in the foyer.

Provisional Programme:

  • Prof Andy Pitman AO – Do climate models tell us about future extremes?
  • A/Prof Lisa Alexander – Using climate observations in actuarial assessments of risk
  • Dr Greg Holland – Projecting changes in tropical cyclone activity in a warmer world
  • Prof Seth Westra – Quantifying the impacts of climate change and variability on floods and drought
  • Dr Ryan Crompton – Assessing future natural catastrophe risk using NAT CAT models

At the conclusion of the presentations, there will be an interactive panel session including all of the above speakers.

To register please email info@riskfrontiers.com

Risk Frontiers Newsletter V18 I3 June 2019

In this issue:

The need for transparency in climate services

by Thomas Mortlock, Stuart Browning, Andrew Gissing, Ryan Crompton & John McAneney

Figure 1. Bushfires in Tasmania during the January 2019 heatwave. Source: Sky News

The rapidly expanding market of climate change service providers stems from developments, both internationally and in Australia, focused on the disclosure of climate change-related financial risks and on regulatory changes (more detail in our previous Briefing Note 386).

Private sector companies are increasingly aware of the need to understand their exposure to extreme weather in a climate-changed future, and in doing so require granular, short-term and accurate climate data to incorporate into business risk models. They also require knowledge brokers to translate this information and to communicate its inherent uncertainty. A growing number of products now offer this service. However, using global climate model output to project climate change impacts from extreme weather at the business level is not a simple task.

Recent research highlights both the appetite for consuming climate model data (Goldstein et al., 2019; Meah, 2019) and, in some cases, the misapplication of what is available (Nissan et al., 2019). This briefing note attempts to explain – in simple terms – what climate models do and do not tell us.

Climate change is happening, now

Recent advances in model-based climate attribution studies, together with an a priori conceptual understanding of the climate system, indicate that the rise in mean global temperature over the past several decades (IPCC, 2013), and some extreme weather events (e.g. Patricola and Wehner, 2018), are driven, at least in part, by human-induced greenhouse gas emissions.

Climate attribution studies use high-resolution atmospheric models to replicate historical temperatures and, in some cases, extreme events, both with and without anthropogenic greenhouse gas (GHG) emissions (see Figure 2). If the model result without GHG input shows a significant difference from the observed climate state, then the difference can be attributed to the effects of GHGs. Since these studies focus on the past, the models can be calibrated to available observations, giving much higher confidence in their results than in projections of future changes.
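
As a toy illustration of this logic (all numbers below are synthetic stand-ins, not GCM output): if an observed temperature anomaly lies far outside an ensemble run without GHG forcing but inside an ensemble run with it, the difference is attributed to GHGs.

    import numpy as np

    rng = np.random.default_rng(0)
    natural_only = rng.normal(0.0, 0.15, 100)  # ensemble anomalies, no human emissions (deg C)
    all_forcings = rng.normal(1.0, 0.15, 100)  # ensemble anomalies with GHGs included (deg C)
    observed = 0.95                            # hypothetical observed anomaly (deg C)

    def percentile_of(value, ensemble):
        """Percentage of ensemble members at or below the observed value."""
        return 100.0 * np.mean(ensemble <= value)

    print(f"Observed anomaly sits at the {percentile_of(observed, natural_only):.0f}th "
          f"percentile of the natural-only ensemble")
    print(f"and the {percentile_of(observed, all_forcings):.0f}th percentile "
          f"of the all-forcings ensemble.")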

Figure 2. Australia’s average annual temperature relative to the 1861–1900 period. The grey line represents Australian temperature observations since 1910, with the black line the ten‑year running mean. The shaded bands are the 10–90% range of the 20-year running mean temperatures simulated from the latest generation of Global Climate Models. The grey band shows simulations that include observed conditions of greenhouse gases, aerosols, solar input and volcanoes; the blue band shows simulations of observed conditions but not including human emissions of greenhouse gases or aerosols; the red band shows simulations projecting forward into the future (all emissions scenarios are included). Warming over Australia is expected to be slightly higher than the global average. The dotted lines represent the Australian equivalent of the global warming thresholds of 1.5 °C and 2 °C above preindustrial levels, which are used to inform possible risks and responses for coming decades. Source: BoM (2019).

Climate change impacts are inevitable for decades to come

The anthropogenic component of the climate change we are experiencing today is the result of accumulated carbon emissions over past decades.

Given the thermal retention of the global oceans, we are inexorably bound to undergo anthropogenic climate change impacts for decades to come, even if we transition to a carbon-neutral economy tomorrow. Consequently, we are going to be living with, and adapting to, changes in the distribution of extreme weather events for decades to come.

The problems of temporal and spatial scales

A relevant risk time horizon for most business applications lies between one year and several decades. At these timescales, internal climate variability (such as ENSO) remains a strong influence on extreme weather. This is especially so for the Australian region.

Internal variability is difficult to forecast, with or without a climate change influence. In addition, the spatial scale and some physical restrictions of GCMs mean there is a general underrepresentation of the frequency of extreme weather events in the projections. For these reasons, projections of near-future changes in extreme weather are uncertain.

Assigning probability

Climate change projections are expressed via the IPCC’s four Representative Concentration Pathways (RCPs). RCPs represent plausible scenarios of how carbon emissions will be mitigated in the future. Although intended as scenarios of the future, RCPs are often interpreted as quantitatively meaningful forecasts. However, probabilities assigned to RCPs represent the relative frequencies with which different outcomes occur within an ensemble of several models and simulations – not the probability of future occurrence, which cannot be known with any certainty.

It is also difficult to assign probabilities to scenarios that occur outside the range of modelled futures – for example, extreme sea level rise resulting from non-linear ice sheet dynamics. This problem is known as “deep uncertainty” and is a relatively young area of climate research (e.g. Bakker et al., 2017; Bamber et al., 2019).

The solution?

Despite their limitations, GCM simulations for multiple scenarios are the best we have. When interpreted together with a sound understanding of atmospheric dynamics and a clear appreciation of model limitations, GCM projections can provide valuable information. The upcoming CMIP6 suite of experiments promises to address some of the previous limitations.

However, there needs to be much more transparency over how climate data are being applied in the ever-expanding market of climate service tools. A suitable approach for assessing business-scale exposure to extreme weather events in a climate-changed future is a key challenge for climate service providers in Australia and worldwide. The UN Environment Finance Initiative, for example, is currently looking into new methodologies that address this issue. Increased transparency in the market for climate services will limit maladaptation, the future cost of which is unknown.

Risk Frontiers, in consultation with business and climate experts at the ARC Centre of Excellence for Climate Extremes, is applying its suite of catastrophe loss models and associated 25 years of research in this field to develop a robust way of assessing financial risks associated with climate-changed weather extremes and exploring adaptation pathways. For more information on how we are approaching this, get in touch.

References

Bakker, A.M.R., Wong, T.E. et al. (2017). Sea-level projections representing the deeply uncertain contribution of the West Antarctic ice sheet. Scientific Reports, 3880.
Bamber, J.L., Oppenheimer, M. et al. (2019). Ice sheet contribution to future sea-level rise from structured expert judgement. PNAS.
Bureau of Meteorology [BoM] (2019). State of the Climate Report 2018. Bureau of Meteorology, Australia.
Goldstein, A., Turner, W.R., et al. (2019). The private sector’s climate change risk and adaptation blind spots. Nature Climate Change 9, 18-25.
IPCC AR5 WG1 (2013). Stocker et al. (eds). Climate Change 2013. The Physical Science Basis. Cambridge University Press.
Meah, N. (2019). Climate uncertainty and policy making – what do policy makers want to know? Regional Environmental Change.
Nissan, H., Goddard, L., et al. (2019). On the use and misuse of climate change projections in international development. WIRES Climate Change.
Patricola, C.M., Wehner, M.F. (2018). Anthropogenic influences on major tropical cyclone events. Nature 563, 339–346.


RISK FRONTIERS SEMINAR SERIES 2019

Wednesday, 11th September 2019

at the Museum of Sydney
cnr Bridge and Phillip Streets, Sydney
2pm until 4.30pm followed by light refreshments in the foyer.

Provisional Programme:

  • Prof Andy Pitman – GCMs and their limitations (including in modelling climate extremes)
  • A/Prof Lisa Alexander – how climate observations are now being used in actuarial assessments of risk
  • Dr Greg Holland – Projecting changes in tropical cyclone activity in a warmer world
  • Prof Seth Westra – Quantifying the impacts of climate change and variability on Australian and international water resources, including on floods and drought
  • Dr Ryan Crompton – Assessing future risk assuming projected changes in hazard activity, exposure and vulnerability using NAT CAT models

At the conclusion of the presentations, we plan to run an unscripted panel session (30-40 mins) including all of the above speakers.


Ryan is the latest addition to the Risk Frontiers team and will take on the role of General Manager. He brings to this role extensive experience at senior levels of Aon, both within Australia and abroad, and, earlier, as a senior analyst within the Australian Defence Intelligence Organisation. A physicist by background, Ryan blends analytical disciplines and a scientific mindset with a passion for engagement and for developing relationships across the insurance and reinsurance sectors.

Ryan’s current professional focus and interests include:

  • Identifying strategies for quantifying risk and reward, and for capital allocation and optimisation
  • Building company and public-sector financial resilience to large-scale natural and man-made shocks, and long-term risk planning
  • Business partnerships and innovation capital

Career overview

Prior to joining Risk Frontiers, Ryan was a Senior Broker and manager at Aon Japan in Tokyo. As a Senior Client Executive for global Japanese non-life insurance firms, his primary focus was directing risk capital strategies, transacting reinsurance and client advocacy. This included managing a team of brokers responsible for delivering the full capability of the firm to (re)insurance clients; developing insurance products across property, casualty and cyber lines; identifying risk capital mitigation and transfer solutions; and directing risk and concentration analysis, portfolio optimisation, catastrophe modelling, pricing and cost recovery, and economic capital modelling. In Australia, Ryan was responsible for the firm’s reinsurance treaty operations in Melbourne.

Ryan has extensive experience in strategic and quantitative disciplines in the government, industry and academic sectors, where he has led the production and coordination of complex analysis for government and industry stakeholders. He previously worked for the Australian Department of Defence on counter-proliferation and global security issues, where he held a Top Secret security clearance and represented the Australian Government at international conferences and classified information exchanges.

Prior to joining the government sector, Ryan was a Senior Research Fellow in Geophysics at RMIT University in Melbourne, where he was the lead theorist on an airborne electromagnetic (EM) research project developing greenfield exploration techniques for mining and groundwater applications. He holds a PhD in Physics; his research focused on the mathematical understanding of quantum systems, for which he developed a mathematical theory that has been published in international journals. As part of this research, Ryan developed a path integral Monte Carlo simulation code and was a visiting student at the University of Cambridge, UK.

In his spare time, Ryan enjoys hiking, sailing and playing guitar. He looks forward to returning to Sydney to enjoy the beach and outdoor culture. While in Japan he travelled extensively and conquered many of the sacred mountains – including Fuji-san. He has also completed an ultra-marathon across the Simpson Desert in Australia, raising money to support research into type 1 diabetes. He is expected to raise the average fitness level at Risk Frontiers.


Cracks in Strata Building Integrity

In an article in the Sydney Morning Herald on 18 June 2019, Stephen Goddard, chair of the Owners Corporation Network, describes how, for 20 years, new residential strata schemes have been plagued with building defects. According to one estimate, 80% of all new residential strata schemes are constructed with defects. The most common defects are those that allow water penetration, followed by breaches of fire safety requirements; facades falling into the street come in at third place.

Typically, building defects take years to be identified, by which time the statutory warranty period of six years has expired. Even if a fault is discovered within the warranty period, the builder or developer can be hard to find. Consequently, most owners have kept their buildings’ problems hidden to protect their capital worth. Special levies are raised for the millions of dollars required to remediate the common property, and people are forced to live within the building while the remediation work happens around them.

Keeping silent and fixing the defects with special levies has enabled owners to resell without a capital loss, and ever-increasing property prices have masked the strata building defects problem. But now prudent purchasers would not buy strata “off the plan” or in buildings newer than 10 years of age. Goddard asserts that “No longer can bad building practices be hidden by increased capital values. Preserving whatever is left of public confidence in strata living requires action now in this and all our states and territories to adopt the measures NSW took to COAG earlier this year.”


Australia’s largest hailstorm disaster

By Andrew Gissing, Chas Keys, Ryan Crompton

The 14th of April marked the 20th anniversary of the 1999 Sydney hailstorm. The storm is considered Australia’s most expensive insured natural disaster, with insurers paying out claims of some $5.5 billion in today’s terms. But such a hailstorm cannot be regarded as a once-in-a-lifetime event.

The storm occurred outside the typical “storm season”, usually taken to run from September to March, and forced a rethink of how the season should be defined.

The storm pelted thousands of homes and vehicles in Sydney’s eastern suburbs with cricket-ball- and grapefruit-sized hailstones, along with heavy rain and strong winds. The hail was estimated to have weighed some 500,000 tonnes. Over 100,000 people were affected: one person died and several were injured and attended hospital. But the real story that emerged was one of the damage caused and the disruption to lives that resulted.

Some 24,000 homes, 70,000 vehicles, 60 schools and 23 aircraft were damaged. The financial scale of the disaster surpassed more recent extreme events such as Queensland’s 2011 floods and Victoria’s 2009 Black Saturday bushfires. Severe storms, especially those that bring large hail, are among Australia’s most costly natural perils.


Where large hailstones fell there was substantial damage to roofing tiles and building windows. In the worst-hit areas of Rosebery and Kensington, almost every dwelling in entire street blocks was damaged. Some hailstones were confirmed to have had diameters of more than nine centimetres.

Damaged roofs and windows let heavy rain into homes, causing extensive water damage to contents. In the most extreme cases, ceilings collapsed under the weight of saturated insulation batts. Hailstones punched holes through pergolas and outdoor furniture and shredded gardens. Vehicles, too, suffered dented bodywork and broken windows; about one-third of the insurance payout went to cover this kind of damage.

Some motorists became trapped in floodwaters. Days after the storm, some elderly people were found in a state of shock, still living in their homes.

Many homes were rendered uninhabitable, and some remained so for months because the scale of the event delayed repairs; many people had to be given emergency shelter for long periods at public expense. The construction industry was placed under stress, and interstate resources were needed to meet the demand. An unusually wet and windy autumn and winter slowed the emergency response and the completion of repairs.

The State Emergency Service (SES), as the lead agency for storm response in NSW, formed the front line in working with households to effect temporary building repairs. SES volunteers from across the nation travelled to help and were supported by crews from the then NSW Fire Brigades, the Rural Fire Service, the Volunteer Rescue Association and many other organisations. At the peak of the response, some 3,000 personnel were deployed in the field. In all, some 44,000 calls for help involving around 20,000 properties were attended to, requiring a total of 12,450 personnel.

Severe hailstorms have always been a feature of Sydney’s climate, and the costs associated with the worst of them have been huge. The hailstorm that hit Sydney in December 2018 led to losses that have now surpassed one billion dollars, and the storms that struck the Blacktown-Baulkham Hills area over the summer of 2007-08 had a similar insured cost. Severely damaging hailstorms were also experienced in and around Auburn in 1990 and on the upper north shore in 1991. One of the most damaging occurred in January 1947.

Though the influence of climate change on hailstorms in Australia is uncertain, with only a few projection studies undertaken, increases in wealth and in the size of Sydney and other capital cities mean that metropolitan areas are ever more exposed to severe storm events. Since 1999, Greater Sydney has grown by over 1.3 million people, with the number of dwellings increasing by around 30%.
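
To illustrate what restating a loss “in today’s terms” involves, the sketch below shows a bare-bones loss normalisation: scaling a historical insured loss for subsequent growth in exposure. The figures are illustrative placeholders, not Risk Frontiers’ published normalisation factors (which also account for changes in dwelling size and building standards):

```python
# Minimal loss-normalisation sketch: restate a historical insured loss in
# today's terms by scaling for growth in exposure since the event.
# All figures below are illustrative assumptions.

original_loss = 1.7e9     # hypothetical insured loss at 1999 values (AUD)
dwelling_growth = 1.30    # ~30% more dwellings in Greater Sydney since 1999
value_growth = 2.5        # assumed growth in average insured value per dwelling

normalised_loss = original_loss * dwelling_growth * value_growth
print(f"Normalised loss: ${normalised_loss / 1e9:.1f} billion")  # ~$5.5 billion
```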

With increasing exposure will come increased losses, but models exist to assess hailstorm risk on a national scale. For those with access, the likelihood of the April 1999 Sydney insured loss, or similar, is readily quantifiable.
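
As a schematic of how such a model quantifies that likelihood, the sketch below estimates an annual exceedance probability (AEP) by counting simulated years whose loss reaches a target. The loss distribution and its parameters are assumptions standing in for a real catastrophe model’s event set:

```python
import numpy as np

# Synthetic annual insured losses standing in for catastrophe-model output.
rng = np.random.default_rng(seed=7)
n_years = 100_000
annual_loss = rng.lognormal(mean=18.0, sigma=1.8, size=n_years)  # AUD

# AEP of a $5.5bn loss year: the fraction of simulated years at or above it.
target_loss = 5.5e9
aep = float(np.mean(annual_loss >= target_loss))
if aep > 0:
    print(f"AEP: {aep:.4f} (roughly a 1-in-{1 / aep:.0f} year loss)")
else:
    print("No simulated year reached the target loss.")
```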

Risk Frontiers offers a national hail loss model for Australia. Contact us at info@riskfrontiers.com