Working with the Bushfire and Natural Hazards Cooperative Research Centre and the Bureau of Meteorology, my colleagues and I surveyed 250 residents and 60 business managers in Western Sydney and the NSW North Coast.
We found that 45% of those at risk – including the elderly, the ill and the very young – did not respond proactively to heatwave warnings because they did not think it necessary or did not know what to do.
Sydney was the hottest city on Earth on Sunday 7 January 2018 (and no, I’m not talking about its nightlife) when Penrith, in the outer west, reached 47.3°C, pipping its previous record set on 11 February 2017 (News Limited, 2018).
But if you want really hot, then travel back in time to 1939, when the Old Richmond Station set Sydney’s official heat record at 47.8°C. Yes, we’ve had heatwaves before. Way before.
The table below shows numbers of deaths and death rates per 100,000 population from episodes of extreme heat in Australia by decade between 1844 and 2010, as recorded in Risk Frontiers’ PerilAUS database (after Coates et al. (2013)). PerilAUS is a resource of natural hazard event impacts reaching back to the early days of Australia’s European settlement. The death rate is the number of deaths per head of population in the country at that time, and was consistently significantly higher between 1890 and 1939 than for any period before or since.
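To make the table’s death-rate column concrete, here is a minimal sketch of the calculation. The figures below are invented for illustration, not taken from PerilAUS:

```python
def death_rate_per_100k(deaths, population):
    """Deaths per 100,000 population, the rate used in the table."""
    return deaths * 100_000 / population

# Invented example: 420 heat deaths in a population of 6.9 million
# (roughly Australia's population around 1939).
print(round(death_rate_per_100k(420, 6_900_000), 2))  # → 6.09
```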
Of all of the entries in the table, the January 1939 event was notable for its longevity and record daily temperature maxima. Victoria and South Australia, as well as country NSW, were affected, with Melbourne reaching a high of 45.6°C and Adelaide 46.1°C. In NSW, Bourke suffered through 37 consecutive days over 38°C.
PerilAUS records show that at least 420 people died in the 1939 event across Australia, most (77%) in NSW. The series of heatwaves was accompanied by strong northerly winds and followed a very dry six months. These conditions led to the disastrous Black Friday bushfires in Victoria, which killed 71 people.
Most will remember the catastrophic bushfires that destroyed several towns in Victoria in 2009 but not many will remember that these fires also followed two heatwave events across Victoria and SA, where at least 432 people died.
This figure comprises mainly a measure of excess deaths rather than recorded individual deaths. An excess death is a premature death and, in this context, a measure of the number of deaths occurring over and above that expected for that location and time of year.
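The excess-deaths idea reduces to a one-line calculation. The numbers below are hypothetical, not drawn from the 2009 event:

```python
def excess_deaths(observed, expected):
    """Excess deaths: observed deaths minus the baseline expected for
    that location and time of year."""
    return observed - expected

# Hypothetical heatwave week: 530 deaths observed against a seasonal
# baseline of 400 implies 130 excess (premature) deaths.
print(excess_deaths(530, 400))  # → 130
```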
In 2009, new records of three consecutive days over 43°C in Melbourne and eight over 40°C in Adelaide were set. A feature of these heatwaves was the very high minimum temperatures, with Melbourne falling only to 20-25°C overnight and Adelaide only to 30°C.
A similar death toll resulted from the heatwave that ran from October 1895 to January 1896 and affected nearly the entire continent, especially the interior. PerilAUS records 435 deaths, 89% of them within NSW. Deaths also occurred in SA, WA, Victoria and Queensland. Bourke, in NSW, lost 1.6% of its population to the heat: temperatures of 40°C in the shade were already being recorded in October, mid-spring.
Heatwaves in Australia, including catastrophic ones, are not new. Risk Frontiers first noted the fact that they are Australia’s number one natural hazard killer more than two decades ago (Coates, 1996). For further reading on this important natural hazard, the reader is referred to Coates et al. (2013).
The following article, written by Tom Hubble and Samantha Clarke (U. Sydney) and Hannah Power and Kaya Wilson (U. Newcastle), appeared on The Conversation on December 10, 2017. The authors have modelled tsunamis that would be generated by these slides and conclude that “we suspect that such tsunamis pose little to no immediate threat to the coastal communities of eastern Australia”, although it seems that very localised effects could be significant. Notably, the article does not mention onshore geological evidence for the occurrence of large tsunamis in Australia (e.g. Bryant and Nott, 2001, attributed to cosmogenic sources by Bryant et al., 2007), perhaps because this evidence is highly controversial. There are few data on the speed with which these submarine landslides move; if they move slowly they may not be tsunamigenic.
One recent example of a destructive tsunami that may have been caused by an undersea landslide triggered by an earthquake is the 1998 Sissano Lagoon, New Guinea tsunami associated with an Mw 7.0 earthquake (Tappin et al., 2008; 2014). There is evidence that a delayed, earthquake-triggered, submarine slump caused 2200 deaths from a tsunami with maximum coastal flow depths of 16 m, and a focused runup along a limited length of coast. (I was a high school teacher for two years in Wewak, just down the coast from this event, and was motivated to study seismology after experiencing, some years earlier, a neighbouring earthquake that did not generate a tsunami).
Until recently, our seafloor maps depicted most of the ocean as blank and featureless (and the majority still do!). These maps are derived from wide-scale satellite data, which produce images showing only very large features such as sub-oceanic mountain ranges (like those seen on Google Earth). Compare that with the resolution of land-based imagery, which allows you to zoom in on individual trees in your own neighbourhood if you want to. But using a state-of-the-art sonar system attached to the Southern Surveyor, we have now studied sections of the seafloor in more detail. In the process, we found evidence of huge underwater landslides close to shore over the past 25,000 years. Generally triggered by earthquakes, landslides like these can cause tsunamis.
Into the void
For 90% of the ocean, we still struggle to identify any feature the size of, say, Canberra. For this reason, we know more about the surface of Venus than we do about our own ocean’s depths. As we sailed the Southern Surveyor in 2013, a multibeam sonar system attached to the vessel revealed images of the ocean floor in unprecedented detail. Only 40-60km offshore from major cities including Sydney, Wollongong, Byron Bay and Brisbane, we found huge scars where sediment had collapsed, forming submarine landslides up to several tens of kilometres across.
What are submarine landslides?
Submarine landslides, as the name suggests, are underwater landslides where seafloor sediments or rocks move down a slope towards the deep seafloor. They are caused by a variety of different triggers, including earthquakes and volcanic activity.
As we processed the data arriving on board our vessel, images of the seafloor started to become clear. What we discovered was that an extensive region of the seafloor offshore New South Wales and southern Queensland had experienced intense submarine landsliding over the past 15 million years. From these new, high-resolution images, we were able to identify over 250 individual historic submarine landslide scars, a number of which had the potential to generate a tsunami. The Byron Slide in the image below is a good example of one of the “smaller” submarine landslides we found – at 5.6km long, 3.5km wide, 220m thick and 1.5 cubic km in volume. This is equivalent to almost 1,000 Melbourne Cricket Grounds.
The historic slides we found range in size from less than 0.5 cubic km to more than 20 cubic km – the same as roughly 300 to 12,000 Melbourne Cricket Grounds. The slides travelled down slopes that were less than 6° on average (a 10% gradient), which is low in comparison to slides on land, which usually fail on slopes steeper than 11°.
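The size comparisons above can be checked with simple arithmetic. The sketch below assumes an MCG volume of roughly 1.6 million cubic metres – a commonly quoted figure, not one given in the article:

```python
import math

MCG_VOLUME_M3 = 1.6e6  # assumed MCG volume, ~1.6 million cubic metres

# The Byron Slide's 1.5 cubic km expressed in MCGs:
byron_slide_m3 = 1.5e9
print(round(byron_slide_m3 / MCG_VOLUME_M3))  # → 938, i.e. "almost 1,000"

# A 6-degree slope as a percentage gradient (rise over run):
print(round(math.tan(math.radians(6)) * 100, 1))  # → 10.5
```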
We found several sites with cracks in the seafloor slope, suggesting that these regions may be unstable and ready to slide in the future. However, it is likely that these submarine landslides occur sporadically over geological timescales, which are much longer than a human lifetime. At a given site, landslides might happen once every 10,000 years, or even less frequently than this.
Since returning home, our investigations have focused on how, when, and why these submarine landslides occur. We found that east Australia’s submarine landslides are unexpectedly recent, at less than 25,000 years old, and relatively frequent in geological terms. We also found that for a submarine landslide to occur along the east Australian margin today, an external trigger is very likely needed, such as an earthquake of magnitude 7 or greater. This is consistent with the association between earthquakes and submarine landslides observed elsewhere in the world.
We are concerned about the hazard we would face if a submarine landslide were to occur in the future, so we model what would happen in likely locations. Modelling is our best prediction method and requires combining seafloor maps and sediment data in computer models to work out how likely and dangerous a landslide threat is.
Our current models of tsunamis generated by submarine landslides suggest that some sites could represent a future tsunami risk for Australia’s east coast. We are currently investigating exactly what this threat might be, but we suspect that such tsunamis pose little to no immediate threat to the coastal communities of eastern Australia. That said, submarine landslides are an ongoing, widespread process on the east Australian continental slope, so the risk cannot be ignored (by scientists, at least). Of course it is hard to predict exactly when, where and how these submarine landslides will happen in future. Understanding past and potential slides, as well as improving the hazard and risk evaluation posed by any resulting tsunamis, is an important and ongoing task. In Australia, more than 85% of us live within 50km of the coast. Knowing what is happening far beneath the waves is a logical next step in the journey of scientific discovery.
Bryant, E.A. and J. Nott (2001). Geological indicators of large tsunami in Australia. Natural Hazards, 24, 231–249.
Bryant, E.A., G. Walsh and D. Abbott (2007). Cosmogenic mega-tsunami in the Australia region: are they supported by Aboriginal and Maori legends? University of Wollongong, Research Online.
Tappin, D., P. Watts and S.T. Grilli (2008). The Papua New Guinea tsunami of 1998: anatomy of a catastrophic event. Natural Hazards and Earth System Sciences, 8, 243–266.
Tappin, D. et al. (2014). Did a submarine landslide contribute to the 2011 Tohoku tsunami? Marine Geology, 357, 344–361.
The floods that deluged parts of Victoria over the weekend are the latest in the state’s long history of flooding, following on from major floods in 2010, 2011, 2012 and 2016. In all such events, emergency services are on standby to rescue motorists who drive into floodwaters and get stuck or washed away – with potentially fatal consequences.
Most of the 178 flood-related deaths since 2000 have been a consequence of motorists driving into floodwaters.
Although there is a growing body of research on the decision-making of people who choose to enter floodwater, little research has been done before now on the factors that make some stretches of road more dangerous than others.
In the past few weeks there have been sensational reports about a forecast accelerated rate of occurrence of large earthquakes in 2018. Fortunately, one of the authors of the work that lies behind these reports has explained her calm view of the situation. The following article, written by Sarah Kaplan, appeared in the Washington Post, last updated 22 November 2017.
Rebecca Bendick would like you to not panic. The University of Montana geophysicist knows you may have read the articles warning about “swarms of devastating earthquakes” that will allegedly rock the planet next year thanks to a slowdown of the Earth’s rotation. And she feels “very awful” if you’ve been alarmed. Those dire threats are based on Bendick’s research into patterns that might predict earthquakes – but claims of an impending “earthquake boom” are mostly sensationalism.
There is no way to predict an individual earthquake. Earthquakes occur when potential energy stored along cracks in the planet’s crust gets released, sending seismic waves through the Earth. Since scientists know where those cracks exist, and how they are likely to convulse, they can develop forecasts of the general threat for an area. But the forces that contribute to this energy buildup and trigger its release are global and complex, and we still cannot sort out exactly how it might unfold.
In a paper published in August in the journal Geophysical Research Letters, Bendick and colleague Roger Bilham, a geophysicist at the University of Colorado, Boulder, did find a curious correlation between clusters of certain earthquakes and periodic fluctuations in the Earth’s rotation. By examining the historic earthquake record and monitoring those fluctuations, scientists might be able to forecast years when earthquakes are more likely to occur, they suggest.
“Something that people have always hoped to find . . . is some kind of a leading indicator for seismicity, because that gives us a warning about these events,” Bendick said. But that conclusion is by no means set in stone. It hasn’t been demonstrated in the lab or confirmed by follow-up studies. Several scientists have said they’re not yet convinced by Bendick’s and Bilham’s research. “The main thing I came away thinking was real old-fashioned scientific ‘let’s check this’ kind of thoughts,” research geophysicist Ken Hudnut told Popular Science. Hudnut, who works on earthquake-risk programs at the US Geological Survey, was not involved in the paper. And that reaction is okay with Bendick. That’s how these things are supposed to go: “Someone says something kind of marginally outlandish, and everyone checks their work and that’s how science progresses,” she said.
Historically, the field of earthquake forecasting has seen some particularly outlandish claims. People have tried to predict temblors based on the behaviour of animals, gas emissions from rocks, low-frequency electric signals rippling through the Earth – all without much success. For that reason, Bendick said, “it’s a little bit scary to get into the game.” But getting a prediction right can mean the difference between life and death for countless people. The stakes are too high not to try.
For their recent paper, she and Bilham looked through the century-long global earthquake record to see if they could spot any signs that temblors around the world are linked. Initially, the data appeared completely random. But then Bendick and Bilham added a new number to their analysis: the “renewal interval,” or the amount of time a given earthquake zone requires to build up potential energy for a really big quake. “Basically you can think of earthquakes as something like a battery or a neuron; they have a certain amount of time they need to be charged up,” Bendick said.
A certain class of earthquakes – those with a magnitude of 7.0 or more, and a short renewal interval between 20 and 70 years – seemed to cluster in the historic record. Every three decades or so, the planet seemed to experience a bunch of them – as many as 20 per year, instead of the typical 8 to 10. It was as if something was causing the earthquakes to synchronise, even though they were happening in spots scattered around the globe. Contrary to some reports on the study, “it’s not exactly the case that every 32 years we have a bad patch,” Bendick said. “If it were that, people would have found [the pattern] ages ago. That would be super obvious in the record.” Instead, she explained, “events with that renewal interval happen together more often than they happen at random, and that pattern is statistically significant.” Sure, it’s a less flashy finding than, “we know when earthquakes will happen,” she acknowledged. But that’s geophysics for you. “We’re scientists, not magicians,” she said.
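The kind of question being asked here – is a year with ~20 such events plausible under pure chance, given a typical 8 to 10 per year? – can be illustrated with a Poisson tail probability. This is a toy check of the reasoning, not the authors’ actual statistical method:

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for a Poisson count with mean lam."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# If M7+ short-renewal quakes averaged ~9 per year at random, a year
# with 20 or more would be a very unlikely fluke under that model:
print(f"P(20 or more) = {poisson_tail(20, 9.0):.5f}")
```

A tail probability of roughly one in a thousand is the sort of result that makes clustering look non-random, which is the shape of the authors’ claim.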
Next, Bendick and Bilham tried to figure out what mechanism might explain these earthquake clusters. They studied a wide range of global phenomena that unfold over the same time scales: sloshing of the molten rock in the mantle, ocean circulation changes, momentum transfer between the Earth’s core and the lithosphere (the planet’s solid, outermost shell).
The best fit came from tiny, cyclical changes in the speed of the Earth’s rotation. The planet slows down infinitesimally every 30 years or so, and roughly five years later, a cluster of these severe, short-interval earthquakes appears. Russian geophysicists Boris Levin and Elena Sasorova have pointed out this correlation before, Bendick noted. So she and Bilham tried to take it a step further: they proposed a mechanism that might link the Earth’s rotation and clusters of quakes.
See, when the Earth’s rotation rate changes, its shape shifts. As the planet speeds up, mass moves toward the equator, much the way a dancer’s skirt flares out when she spins. When it slows, that mass shifts back toward the poles. The cumulative effect is tiny – a millimetre difference in the width of the globe. But if potential energy has already built up at a number of faults – “if they’re locked and loaded, as we’d say in Montana,” Bendick noted – “that tiny change is enough to kick some proportion of the faults over into their failure mode, which is earthquakes.”
Earth is currently at the end of a slowing period, Bendick pointed out, and the historic record would indicate another “cluster” may be on its way. She and Bilham hope the pattern might help scientists and public officials make some sense of the Earth’s unpredictable shaking. If disaster planners can say with some assurance that the planet is entering a period in which quakes are more likely, they might have an easier time making the case for preparedness measures.
But that doesn’t necessarily mean 2018 will be a particularly devastating year. For one thing, the kinds of temblors Bendick and Bilham analysed happen in areas that are already earthquake-prone – Japan, New Zealand, the west coast of the United States. For people who live in those regions, there is always a risk of a quake, and it is always good to be prepared.
Their study is about probabilities, not predictions, Bendick cautioned. Earth’s slowing does not mean that a quake will happen in the next year or so, just that the likelihood may have gone up. Moreover, this pattern of earthquake occurrence is definitely not the only factor influencing the Earth’s behaviour – if it were, scientists would have noticed the pattern a long time ago. There are doubtless other earthquake cycles on the planet, driven by phenomena not considered in the paper.
The research got a lot of attention after Bilham presented it at the October meeting of the Geological Society of America. Several critics noted that correlation is not causation – earthquake clusters and fluctuations of Earth’s rotation might happen on the same time scales, but that does not mean they are linked. Bendick acknowledged that there is less evidence for the proposed mechanism than for the pattern itself. But she’s confident the pattern is there. “I think this is likely to inspire many people to look at this pattern, and it’s possibly someone will come up with an even better explanation,” she said.
Notes by Paul Somerville
The following is excerpted from the abstract of Bilham and Bendick (2017).
On five occasions in the past century a 25-30% increase in annual numbers of Mw≥7 earthquakes has coincided with a slowing in the mean rotation velocity of the Earth, with a corresponding decrease at times when the length-of-day (LoD) is short. The correlation between Earth’s angular deceleration (d[LoD]/dt) and global seismic productivity is yet more striking, and can be shown to precede seismicity by 5-6 years, permitting societies at risk from earthquakes an unexpected glimpse of future seismic hazard.
The cause of Earth’s variable rotation is the exchange of angular momentum between the solid and fluid Earth (atmospheres, oceans and outer core). Maximum LoD is preceded by an angular deceleration of the Earth by 6-8 years. We show delayed (increase in) global seismic productivity is most pronounced at equatorial latitudes 10°N-30°S.
The observed relationship is unable to indicate precisely when and where these future earthquakes will occur, although we note that most of the additional Mw>7 earthquakes have historically occurred near the equator in the West and East Indies. A striking example is that since 1900 more than 80% of all M≥7 earthquakes on the eastern Caribbean plate boundary have occurred 5 years following a maximum deceleration (including the 2010 Haiti earthquake).
The 5-6 year advanced warning of increased seismic hazards afforded by the first derivative of the LoD is fortuitous, and has utility in disaster planning. The year 2017 marks six years following a deceleration episode that commenced in 2011, suggesting that the world has now entered a period of enhanced global seismic productivity with a duration of at least five years.
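The core of the analysis – finding the lag at which one time series best predicts another – can be sketched with a toy lagged-correlation search. The data below are synthetic, constructed so the “quake count” echoes the “deceleration” series five years later; this is not the Bilham and Bendick dataset or their method:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def best_lag(driver, response, max_lag=10):
    """Lag (in years) at which 'driver' best correlates with the later
    'response' series."""
    return max(range(max_lag + 1),
               key=lambda L: pearson_r(driver[:len(driver) - L], response[L:]))

# Synthetic series: quake counts echo the deceleration pulses 5 years
# later, on top of a background of ~9 events per year.
decel = [0, 0, 1, 2, 1, 0, 0, 0, 0, 1, 2, 1, 0, 0, 0, 0, 1, 2, 1, 0]
quakes = [9] * 5 + [9 + 5 * d for d in decel[:-5]]
print(best_lag(decel, quakes))  # → 5
```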
The correlation between the change in Earth’s rotation rate and the frequency of Mw>7 earthquakes from Bendick and Bilham (2017) is shown in Figure 1. I have not seen the Bilham and Bendick (2017) presentation.
Figure 1. Changes in the length of the day correlate with decadal fluctuations in annual M ≥ 7 earthquakes, smoothed with 10 year running mean. Peak seismic activity and rotational acceleration occur at 15, 33, 60, and 88 year intervals. Source: Bendick and Bilham, 2017.
Bendick, R. and R. Bilham (2017). Do weak global stresses synchronize earthquakes? Geophys. Res. Lett., 44, 8320–8327, doi:10.1002/2017GL074934.
Bilham, R. and R. Bendick (2017). A five year forecast for increased global seismic hazard. Invited presentation, Geological Society of America Meeting, Seattle, Washington.
Geoscience Australia (GA) has embarked on a project to update the seismic hazard model for Australia through the National Seismic Hazard Assessment (NSHA18) project. The following information is excerpted from Allen et al. (2017) and from discussions that took place at the Annual Conference of the Australian Earthquake Engineering Society (AEES) in Canberra, November 24-26, 2017 and a pre-conference workshop organised by GA on the NSHA18 project held on November 23.
The draft NSHA18 update yields many important advances on its predecessors, including:
calculation in a full probabilistic framework using the Global Earthquake Model’s OpenQuake-engine;
consistent expression of earthquake magnitudes in terms of moment magnitude, Mw;
inclusion of epistemic uncertainty through the use of alternative source models;
inclusion of a national fault-source model based on the Australian Neotectonic Features database;
the use of modern ground-motion models; and
inclusion of epistemic uncertainty on seismic source models, ground-motion models and fault occurrence and earthquake clustering models.
The draft NSHA18 seismic design ground motions are significantly lower than those in the current (1991-era) Standards Australia AS1170.4:2007 hazard map at the 1/500-year annual ground-motion exceedance probability (AEP) level. The large reduction in seismic hazard at the 1/500-year AEP level has led engineering design professionals to question whether the new draft design values will provide enough structural resilience to potential seismic loads from rare large earthquakes. These professionals are planning to use a seismic design factor of 0.08g as a minimum design level for the revised AS1170.4 standard, due to be released in 2018, and are discussing the idea of transitioning to a 1/2475-year AEP in the longer term, consistent with the trend in other countries including Canada and the United States.
The primary reason for the significant drop in seismic hazard is adjustments to earthquake catalogue magnitudes. Firstly, prior to the early 1990s, most Australian seismic observatories relied on the Richter (1935) local magnitude (ML) formula developed for southern California. At regional distances (where many earthquakes are recorded), the Richter scale tends to overestimate ML relative to modern Australian magnitude formulae, so pre-1990 magnitude estimates need to be corrected for the use of these inappropriate Californian formulae. A process was employed that systematically corrected local magnitudes using the difference between the original (inappropriate) magnitude formula (e.g., Richter, 1935) and Australian-specific correction curves (e.g., Michael-Leiba and Malafant, 1992), evaluated at a distance determined by the nearest recording station likely to have recorded a specific earthquake (Allen, 2010).
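The correction logic described above can be sketched as follows. The distance-correction values are hypothetical placeholders, not the actual Richter (1935) or Michael-Leiba and Malafant (1992) curves:

```python
def corrected_ml(ml_original, richter_term, australian_term):
    """Correct a pre-1990s local magnitude by replacing the Californian
    (Richter, 1935) distance term with an Australian-specific one,
    evaluated at the nearest likely recording distance. The term values
    passed in are hypothetical inputs, not the real curves."""
    return ml_original - (richter_term - australian_term)

# If, at ~300 km, the Richter distance correction exceeded the
# Australian one by 0.25 magnitude units, a catalogued ML 5.0 becomes:
print(corrected_ml(5.0, richter_term=3.0, australian_term=2.75))  # → 4.75
```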
Another important factor determining the reduction in hazard is the conversion of catalogue magnitudes such that magnitudes are consistently expressed in terms of moment magnitude, MW. Moment magnitude is the preferred magnitude type for probabilistic seismic hazard analyses (PSHAs), and all modern ground-motion models (GMMs) are calibrated to this magnitude type. Relationships between MW and other magnitude types were developed for the NSHA18. The most important of these is the relationship between ML and MW because of the abundance of local magnitudes in the Australian earthquake catalogue. The preferred bi-linear relationship demonstrates that MW is approximately 0.3 magnitude units lower than ML for moderate-to-large earthquakes (4.0 < MW < 6.0). Together, the ML corrections and the subsequent conversions to MW effectively halve the number (and subsequently the annual rate) of earthquakes exceeding magnitude 4.0 and 5.0, respectively. This has downstream effects on hazard calculations when forecasting the rate of rare large earthquakes using Gutenberg-Richter magnitude-frequency distributions in PSHA.
The secondary effect of the ML and MW magnitude conversion is that it tends to increase the number of small and moderate-sized earthquakes relative to large earthquakes. This increases the Gutenberg–Richter b-value, which in turn further decreases the relative annual rates of larger potentially damaging earthquakes (Allen et al., 2017).
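The b-value effect follows directly from the Gutenberg-Richter relation, N(≥M) = 10^(a − bM). The sketch below uses illustrative a- and b-values, not NSHA18 parameters:

```python
def gr_rate(mag, a, b):
    """Gutenberg-Richter annual rate: N(>= mag) = 10 ** (a - b * mag)."""
    return 10 ** (a - b * mag)

# Hold the rate of M>=4 events fixed at 1 per year and vary b: a higher
# b-value means fewer rare, large (M>=6) earthquakes in the forecast.
for b in (0.9, 1.1):
    a = 4.0 * b  # chosen so that N(M>=4) = 1 per year
    print(f"b={b}: N(M>=6) = {gr_rate(6.0, a, b):.4f} per year")
```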
The final main factor driving the reduction of calculated seismic hazard in Australia is the use of modern ground motion models (GMMs). While seismologists in stable continental regions (SCRs) worldwide recognise the complexity in characterising the likely ground motions from rare large earthquakes, more abundant ground-motion datasets of moderate-magnitude earthquakes are emerging. The NSHA18 hazard values are based on modern GMMs with improved understanding of instrumental ground-motion source amplitudes and attenuation in Australia and analogue regions. The peak ground accelerations (PGAs) predicted by these modern models in general are up to a factor of two lower than the Gaull et al. (1990) peak ground velocity (PGV)-based relationships at distances of engineering significance (generally less than 100 km). At larger distances, the lower rates of attenuation of the Gaull et al. (1990) relationships yield ground-motion values up to factors of 10 higher than modern GMMs (Allen et al., 2017).
It is anticipated that the National Seismic Hazard Assessment (NSHA18) project will be completed in mid-2018, at which time Geoscience Australia has agreed in principle to provide a briefing on it in Sydney for the insurance industry. The updated version of AS1170.4 will be released in 2018.
Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi (2017). An updated National Seismic Hazard Assessment for Australia: Are we designing for the right earthquakes? Proceedings of the Annual Conference of the Australian Earthquake Engineering Society in Canberra, November 24-26, 2017.
Standards Australia (2007). Structural Design Actions, Part 4 Earthquake Actions in Australia. AS1170.4:2007.
As reported by the USGS, the September 19, 2017, Mw 7.1 Puebla earthquake in Central Mexico occurred as the result of faulting within the subducted Cocos plate at a depth of approximately 50 km, about 120 km southeast of Mexico City. At least 220 people were killed in Mexico City, 74 in Morelos, 45 in Puebla, 13 in Estado de Mexico, 6 in Guerrero and 4 in Oaxaca. At least 6,000 people were injured. At least 44 buildings collapsed in Mexico City and many others were damaged. Many more buildings were damaged or destroyed in the surrounding area. Significant damage occurred to the electrical grid in Estado de Mexico, Guerrero, Mexico City, Morelos, Oaxaca, Puebla and Tlaxcala.
This earthquake occurred on the anniversary of the devastating Mw 8.0 Michoacan earthquake of 19 September 1985, which caused extensive damage to Mexico City and the surrounding region. That event occurred as the result of thrust faulting on the plate interface between the Cocos and North America plates, about 450 km to the west of the September 19, 2017 earthquake.
Most of Mexico City is founded on a clay-filled lake. The clay has a resonant period of 1 to 2 seconds and has very unusual properties – it is very elastic (has low damping), which allows a very large resonance to build up due to the trapping of energy within this shallow sedimentary basin (Figures 1 and 2). This resonance caused the collapse of buildings, especially ones having natural periods of 1 to 2 seconds, and generated a seiche in Lake Chapultepec (part of the original lake that has not been filled in) seen in a widely viewed video, in which the waves have a period of about 2 seconds. https://www.youtube.com/watch?v=vfTaHOoC2rs
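The 1 to 2 second band is consistent with the standard quarter-wavelength estimate of a soft layer’s resonant period, T = 4H/Vs. The thickness and shear-wave velocity below are illustrative values typical of very soft clay, not site-specific measurements for Mexico City:

```python
def fundamental_period(thickness_m, vs_m_per_s):
    """Quarter-wavelength estimate of a soft layer's resonant period:
    T = 4 * H / Vs (H = layer thickness, Vs = shear-wave velocity)."""
    return 4.0 * thickness_m / vs_m_per_s

# Illustrative soft-clay values (not site measurements): Vs ~ 80 m/s
# and thicknesses of 20-40 m span the 1-2 second band.
for h_m in (20, 30, 40):
    print(f"H = {h_m} m: T = {fundamental_period(h_m, 80.0):.1f} s")
```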
The 1 to 2 second resonance of the lakebed can also be set up by marching soldiers. This occurred exactly 33 years earlier to the day, when I was on holiday in Mexico City. It was September 19, Independence Day, and the soldiers were marching down Reforma Avenue. I was standing on the roof of my ten-storey hotel, which was swaying noticeably. One year to the day later, at 07:17 am on 19 September 1985, the Mw 8.0 Michoacan earthquake occurred. I doubt that my hotel survived the earthquake.
After the 1985 earthquake I spoke with my colleague, Lloyd Cluff, who had been at a meeting with Mexican government officials on the day of the earthquake to discuss seismic issues for nuclear power plants. The meeting was held on the edge of Mexico City outside the lakebed area (blue area of Figure 1). After he returned to his hotel that evening he turned on the TV and saw photos of a disastrous earthquake. It took him some time to recognise the scene of the disaster as Mexico City. No one at the meeting had known that it had occurred early that morning in Mexico City, because the shaking outside the lakebed area had been so weak.
Severe Tropical Cyclone Debbie made landfall at Airlie Beach on the Whitsunday Coast earlier in the year, with an estimated property insurance market loss of over AUD 1.6 billion (PERILS, 2017). Debbie had all the ingredients for a large storm surge – low and falling pressure before landfall (down to 943 hPa), high and sustained onshore wind speeds (landfalling as a Cat 4 system), a track perpendicular to the coast, and a very slow forward speed (7 km/hr at landfall).
Debbie also coincided with a relatively high state of tide (landfall occurring 2 hours after high water) and large waves (> 9 m), to produce a storm tide inundation, according to Risk Frontiers’ own survey estimates, of around 5 m above mean sea level. This was roughly equivalent to the height of most coastal foredunes, meaning direct coastal inundation damage to property was limited.
While the storm tide inundation could have been much higher had Debbie made landfall two hours earlier, the storm surge itself (the elevation of coastal water levels due to high wind speeds and low atmospheric pressure, excluding tides and waves) should, on these ingredients, have been bigger than it was. The open question posed since March has been: why was it not?
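For readers unfamiliar with the decomposition, the surge component is simply the observed water level minus the predicted astronomical tide (wave setup needs separating as well). The numbers below are hypothetical, not Risk Frontiers survey values:

```python
def storm_surge(observed_level_m, predicted_tide_m):
    """Surge residual: observed coastal water level minus the predicted
    astronomical tide, both relative to the same datum."""
    return observed_level_m - predicted_tide_m

# Hypothetical Debbie-like numbers: a 5.0 m storm tide arriving on a
# 3.0 m predicted tide implies a 2.0 m surge component.
print(storm_surge(5.0, 3.0))  # → 2.0
```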
A similar question has been asked of storms and surges in the Wadden Sea, a fringe basin in the North Sea between the Netherlands and Denmark. The article below, written by Giordano Lipari in the Netherlands, makes the point that superstorms don’t always lead to supersurges, especially in coastal areas fronted by islands. In the case of the Wadden Sea, and the article below, these are “barrier [sand] islands”, but in the case of Debbie and the Whitsunday region, the same effect may also be caused by the numerous rock and coral islands and reefs that fringe the mainland coast.
Below is an edited version of Lipari’s article. The original version can be found at bit.ly/supersurge.
1. Big can fail, little can hit
The Wadden Sea (Figure 1) is a fringe basin of the North Sea delimited by a strip of barrier islands. When it comes to storms and surges, the Wadden Sea stages an intriguing three-way interaction between physiographic features (in plain language: the water container), atmospheric systems (the weather) and flow patterns (the water). In the Dutch part of the basin, in particular, this interplay defeats the intuition that the most severe surges are caused by the most severe storms when ranked by wind speed alone.
2. Back by the beach
When raging winds raise the water against the coast, it is generally taken for granted that the higher the peak wind speed, the higher the peak water level. Some tide gauges in the Dutch Wadden Sea, however, showed that record-breaking surges were not produced by the most severe winds recorded over the same period.
On a shore squarely facing the ocean’s expanse, the supply of water is effectively unlimited. In contrast, the volume of water contained in the Wadden Sea depends on the flows in and out across its several tidal inlets. However hard the wind pushes water in across one tidal inlet, some may still escape through another, leading to no noteworthy accumulation inside the basin. In the extreme, there is no surge unless extra water stays in, and for long enough. Hence, there can be much barking in the wind but little biting in the water.
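The inflow–outflow argument can be made concrete with a toy water balance. The sketch below is purely illustrative (the basin area, flow rates and function names are assumptions, not Wadden Sea data): it integrates net inflow into a water-level anomaly and shows that a leaky basin accumulates far less water than a closed one under the same wind-driven inflow.

```python
# Toy water balance for a multi-inlet basin (an illustrative sketch, not a
# hydrodynamic model): the level anomaly grows only while the net inflow
# across all inlets stays positive for long enough.

def basin_level_m(net_inflows_m3s, basin_area_m2, dt_s):
    """Integrate a series of net inflow rates (m^3/s) into a level anomaly (m)."""
    level = 0.0
    for q_net in net_inflows_m3s:
        level += q_net * dt_s / basin_area_m2
    return level


AREA_M2 = 1.0e9  # hypothetical basin area
HOUR_S = 3600.0

# A storm pushes 5000 m^3/s in through one inlet for 12 hours; if 4800 m^3/s
# escapes through another inlet, little water accumulates:
leaky = basin_level_m([5000.0 - 4800.0] * 12, AREA_M2, HOUR_S)
closed = basin_level_m([5000.0] * 12, AREA_M2, HOUR_S)
print(round(leaky, 3), round(closed, 3))  # → 0.009 0.216
```

The same wind produces a surge more than twenty times higher when the water has nowhere to escape – which is the sense in which the container, not the wind alone, sets the surge.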
In a basin delimited by barrier islands [or rock islands and reefs, in the case of the Whitsunday coastline], the surges are significantly modulated by the physical geography. Only those storms that drive a substantial piling-up of water behind the islands can cause severe surges – they must first manage to bring in the excess water to be raised.
The arrows in the picture above, based on computer simulations, indicate qualitatively where water is flowing in and out at a given moment of the storm: clearly, it is not the same everywhere, nor does it stay unchanged as the storm unfolds.
In sum, the Wadden Sea evidence is that high wind speeds alone are neither necessary nor sufficient to cause, or predict, record-breaking surges. Since the container drives both water storage and motion, the Wadden Sea itself effectively determines which storms result in a surge with a certain level of flood hazard, with possibly counter-intuitive outcomes. The degree to which this cautionary tale generalises to other situations is a matter for orderly scientific discourse. Certainly, the severity of storm surges is very much a situation-specific matter, and it cannot be reduced to a single number defining the storm alone, such as a Beaufort or Saffir-Simpson category, except in the simplest configurations.
3. Thinking onwards and upwards
There are many ways in which the Wadden Sea insights might be helpful beyond the specifics. Societal concerns for coastal areas are justified, given the growing concentration of the global population there, the weather anomalies and extremes expected to increase under climate change, and the land subsidence aggravating flood-proneness.
The Wadden Sea investigations made at least one thing clear: oversimplifying the superstorm-supersurge expectation can misrepresent the exposure and vulnerability of certain coastal areas.
As is often said – and hence attributed to Albert Einstein for good measure – every problem statement should be as simple as possible, but not simpler. In the case of storm surge prediction, “as simple as possible” may not be very simple at all.
Risk Frontiers staff and associates have significant experience in coastal process and hydrodynamic modelling, in particular in understanding the dynamics and impacts of extreme waves and water levels in Australia. In association with a consortium of eight other research institutes and government agencies, we are coordinating the analysis of an unprecedented number of post-Debbie coastal impact observations, to be published soon.