Australian Journal of Emergency Management. July 2018 edition.
Flood levees are a commonly used method of flood protection. Previous research has proposed the concept of the ‘levee paradox’ to describe the situation whereby the construction of levees leads to lowered community awareness of the risks of flooding and increased development in the ‘protected’ area. The consequence is a risk of larger losses in less frequent but deeper floods when levees overtop or fail. This paper uses the recent history of flooding and levee construction to investigate the ‘levee paradox’ through a study of flood preparedness and floodplain development in Lismore, NSW.
As reported in the San Francisco Chronicle on 21 June 2018, Pacific Gas and Electric Co. and its parent company, PG&E Corp., reported Thursday that they will take a $2.5 billion charge to cover expected losses from October’s deadly Wine Country wildfires. PG&E, which is blamed for sparking some of the most destructive blazes in California history, warned investors that the financial pain may just be beginning. The damage charge, which will be recorded in the current quarter, is larger than PG&E Corp.’s 2017 profit of $1.66 billion. But PG&E executives said that it represents just the low end of the utility’s potential losses from the fires; the final amount could be much higher. In Australia, there were numerous large court cases against power companies in the aftermath of the Black Saturday fires.
The following article, written by David R. Baker, appeared in the San Francisco Chronicle on 9 June 2018.
Firefighters were still struggling to contain the flames scorching the North Bay last October when residents first started lining up to sue Pacific Gas and Electric Co. State fire officials had already named PG&E’s power lines as possible ignition sources for the dozens of fires that erupted during a windstorm on Oct. 8, destroying more than 8,800 buildings across Northern California and killing 45 people. But they cautioned that their investigation was just getting under way.
Many survivors, however, were convinced that the culprit was PG&E. Eight days after the fires began, at least 175 people gathered at the Santa Rosa Hyatt to hear from three law firms preparing to take on the utility. In the months that followed, more than 150 individual suits would be filed against PG&E. Investigators with the California Department of Forestry and Fire Protection, or Cal Fire, are now finally releasing their reports on the causes of the fires. In every case so far, Cal Fire has traced the flames back to PG&E’s equipment.
Even more damning, in 11 of the 16 fires for which Cal Fire has issued reports, investigators found reason to believe that PG&E had broken state safety rules. They sent their findings to the district attorneys in the counties involved to explore possible prosecution. Cal Fire still has not named a cause for the biggest blaze that night — the Tubbs Fire, which raced from Calistoga to Santa Rosa, leveled whole neighborhoods and killed 24 people. The Cal Fire reports issued to date, however, could lead to criminal charges against PG&E, which was convicted on six felony charges following the fatal 2010 San Bruno gas pipeline explosion.
Even some of the lawyers now suing the company, however, consider criminal charges unlikely. Instead, the agency’s findings could give the survivors suing PG&E a way to hold the utility liable for economic damages caused by the fires, even in the instances in which Cal Fire did not accuse the utility of doing anything wrong. Under a legal concept called inverse condemnation, California utilities can be made to pay economic damages for fires tied to their equipment, regardless of whether they followed the state’s safety regulations.
Furthermore, by raising the possibility of wrongdoing, the reports could end up blocking PG&E’s ability to pass along any of those costs to its more than 5 million customers. California regulators have refused to let utilities incorporate wildfire lawsuit costs into their rates when negligence is involved. PG&E and the state’s other big utilities have been waging a lobbying campaign in Sacramento to change the state’s liability laws and shield them from wildfire lawsuits, or at least let them make their customers pay the costs. That effort may now be moot.
“I think that is now off the table,” said Patrick McCallum, a Sacramento lobbyist who lost his own home in the fires and has been trying to thwart PG&E’s push on liability laws. He leads a campaign called Up from the Ashes, funded by some of the lawyers suing PG&E. “In my opinion, there are not the votes in the Legislature today to change inverse condemnation or strict liability,” McCallum said. “These reports show the Legislature and their staff what others have known, that there’s a history of mismanagement at PG&E.”
PG&E said it will continue pushing for liability changes, as well as work with state officials on fire prevention measures. “Liability regardless of negligence undermines the financial health of the state’s utilities, discourages investment in California and has the potential to materially impact the ability of utilities to access the capital markets to fund utility operations and California’s bold clean energy vision,” the company said.
The stakes for PG&E are high. Damage estimates from all of the Northern California wildfires, viewed together, stand at nearly $10 billion, according to the California Department of Insurance. Wall Street analysts don’t believe liability for the fires would bankrupt PG&E, but it would at the very least raise insurance prices for the company, and its customers would bear that extra cost. PG&E Corp., the utility’s parent company, made a $1.7 billion profit last year, on $17.1 billion in revenue. PG&E in December suspended its dividend to stockpile money, should it be held responsible for the fires.
Much still hinges on whether Cal Fire blames the Tubbs Fire on PG&E’s equipment. The company has claimed that a power line installed and owned by a private property owner started the blaze. Gerald Singleton, one of the attorneys suing PG&E, estimates that the Tubbs Fire alone accounts for close to half of the liability PG&E could face. “If the Tubbs report comes back, and they say, ‘No, remarkably, PG&E’s equipment wasn’t involved,’ then PG&E no longer has an immediate financial problem,” said Singleton, with the Singleton law firm.
PG&E said in its first-quarter financial report that it could need to raise money to deal with the fallout. Already, it has spent $259 million on repairs and service restoration. It has approximately $840 million in liability insurance — far short of what it might be required to pay.
The findings issued to date should make it easier for insurance companies, fire departments, cities and others to sue PG&E for losses caused by the fires, a process known as subrogation. Insurance companies will seek to recoup at least some of the claims they paid out to policyholders, the same way auto insurers after an accident will pay their customers first, then seek reimbursement from the at-fault party or his insurer. “It’s part of the normal process on how issues like this are resolved, making sure that the responsible party pays for the damage they cause,” said Mark Sektan, vice president with the Property Casualty Insurers Association of America. Sektan said any result from industry lawsuits against PG&E “will be a couple years away. Where it will help homeowners who have insurance is that when the insurers receive a settlement, they will refund whatever deductible the homeowner has paid.”
Clifford Rainey, a glass sculptor who lost his life’s work in Napa’s Atlas Fire, is among many victims who are suing PG&E. Rainey also lost the Napa home he shared with his partner, Rachel Raiser, a floral designer, who also lost her studio. “We’re in a pickle financially,” Rainey said, noting that he had no insurance on his art studio. “The only way I can see to get any compensation is through one of these lawsuits.” So Friday’s news that Cal Fire investigators have determined that the Atlas Fire started with a PG&E power line gives him hope that he and Raiser will one day be able to rebuild. But he fears that PG&E power lines will remain unsafe, despite the finding. “It’s amazing that in California we still have these power cables above ground,” he said. “I’m from the U.K., and across most of Europe, electrical wires are always underground. In Napa, where I live, the power cables actually weave through trees. I cringe.”
Chronicle staff writers Kathleen Pender and Nanette Asimov contributed to this report.
In December 2017, the credit rating agency Moody’s warned U.S. cities and states to prepare for the effects of climate change or risk being downgraded. It explained how it assesses the credit risks to a city or state that’s being impacted by climate change — whether that impact be a short-term “climate shock” like a wildfire, hurricane or drought, or a longer-term “incremental climate trend” like rising sea levels or increased temperatures. It also takes into consideration communities’ preparedness for such shocks and their activities adapting to climate trends.
A recent report by Charles Donovan and Christopher Corbishley of Imperial College predicts that countries disproportionately impacted by climate change could have to pay an extra $170 billion in interest payments over the next 10 years. The following article by Henry Grabar, which appeared on Slate on 28 October 2017, explains why the bond market is not more worried by climate change.
The article draws examples from recent flooding in US cities and the infamous National Flood Insurance Program. Parts of the US like Miami, New Orleans and New York are feeling the effects of sea level rise now during extreme weather events, in part because of the low-lying topography and high population density of these coastal areas. In Eastern Australia, shorelines have – until now – broadly been able to keep pace with a rising tidal prism because of antecedent sediment conditions and a relatively steep coastal hinterland.
However, high and rising coastal populations and expanding infrastructure (~85% of Australia’s population currently lives near the coast) leave some big east coast cities like Newcastle, Brisbane and Cairns with significant exposure to higher sea levels in the coming decades.
We should perhaps be looking to examples in the US and elsewhere as a present-day ‘litmus test’ of financial markets’ response, and a window onto the near-future time when sea level rise begins to have a more significant impact on some of the big east coast cities in Australia.
Early this month, when the annual king tide swept ocean water into the streets of Miami, the city’s Republican mayor, Tomás Regalado, used the occasion to stump for a vote. He’d like Miami residents to pass the “Miami Forever” bond issue, a $400-million property tax increase to fund seawalls and drainage pumps (they’ll vote on it on Election Day). “We cannot control nature,” Regalado says in a recent television ad, “but we can prepare the city.”
Miami is considered among the most exposed big cities in the U.S. to climate change. One study predicts the region could lose 2.5 million residents to climate migration by the end of the century. As on much of the Eastern Seaboard, the flooding is no longer hypothetical. Low-lying properties already get submerged during the year’s highest tides. So-called “nuisance flooding” has surged 400 percent since 2006.
Business leaders are excited about the timing of the vote in part because Miami currently has its best credit ratings in 30 years, meaning that the city can borrow money at low rates. Amid the dire predictions and the full moon floods, that rating is a bulwark. It signifies that the financial industry doesn’t think sea level rise and storm risk will prevent Miami from paying off its debts. In December, a report issued by President Obama’s budget office outlined a potential virtuous cycle: Borrow money to build seawalls and the like while your credit is good, and your credit will still be good when you need to borrow in the future.
The alternative: Flood-prone jurisdictions go into the financial tailspin we recognize from cities like Detroit, unable to borrow enough to protect the assets whose declining value makes it harder to borrow. The long ribbon of vulnerable coastal homes from Brownsville to Acadia has managed to stave off that cycle in part thanks to a familiar, federally backed consensus between homebuyers and politicians. Homebuyers continue to place high values on homes, even when they’ve suffered repeated flood damage. That’s because the federal government is generous with disaster aid and its subsidy of the National Flood Insurance Program, which helps coastal homeowners buy new washing machines when theirs get wrecked. Banks require coastal homeowners with FHA-backed mortgages to purchase flood insurance, and in turn, coastal homes are rebuilt again and again and again—even when it might no longer be prudent.
But there’s another element that helps cement the bargain: investors’ confidence that coastal towns will pay back the money they borrow. Homebuyers are irrational. Politicians are self-interested. But lenders—and the ratings agencies that help direct their investments—ought to have a more clinical view. Evaluating long-term risk is exactly their business model. If they thought environmental conditions threatened investments, they would sound the alarm—or just vote with their wallets. They’ve done it before—cities like New Orleans; Galveston, Texas; and Seaside Heights, New Jersey, were all downgraded by rating agencies after damage from Hurricanes Katrina, Ike, and Sandy. But all have since rebounded. There does not appear to be a single jurisdiction in the United States that has suffered a credit downgrade related to sea level rise or storm risk. Yet.
To understand why, it helps to look at communities like Seaside Heights, the boardwalk enclave along the Jersey Shore whose marooned roller coaster provided the definitive image of the 2012 storm. Seaside Heights was given an A3 rating from Moody’s in 2013, meaning “low credit risk.” Ocean County, New Jersey—the county in which Seaside Heights sits—has a AAA rating. In the summer of 2016, before Ocean County sold $31 million in 20-year bonds, neither Moody’s Investors Service nor S&P Global Ratings asked about how climate change might affect its finances, the county’s negotiator told Bloomberg this summer. “It didn’t come up, which says to me they’re not concerned about it.”
The credit rating agencies would deny that characterization—to a point. They do know about sea level rise. They just don’t think it matters yet. In 2015, analysts from Fitch concluded, “sea level rise has not played a material role” in assessing creditworthiness, despite “real threats.” Hurricane Sandy had no discernible effect on the median home prices in Monmouth, Ocean, and Atlantic Counties, which make up New Jersey’s Atlantic Coast. The effect on tourism spending was also negligible.
“We take a lot from history, and historically what’s happened is that these places are desirable to be in,” explains Amy Laskey, a managing director at Fitch Ratings. “People continue to want to be there and will rebuild properties, usually with significant help from federal and state governments, so we haven’t felt it affects the credit of the places we rate.”
There are three reasons for that. The first is that disasters tend to be good for credit, thanks to cash infusions from FEMA’s generous Disaster Relief Fund. “The tax base of New Orleans now is about twice what it was prior to Katrina,” Laskey says, despite a population that remains 60,000 persons shy of its 2005 peak. “Longer term what tends to happen is there’s rebuilding, a tremendous influx of funds from the federal and state governments and private insurers.” Local Home Depots are busy. Rental apartments fill up with construction workers. Contractors have to schedule work months in advance. Look at Homestead, Florida, Laskey advised, a sprawling city south of Miami that was nearly destroyed by Hurricane Andrew. Today it is bigger than ever. “If there was going to be a place that wasn’t going to come back, that would have been it.”
What emerges from the destruction, for the most part, are communities full of properties that are more valuable than they were before, because they’re both newer and better prepared for the next storm. Or as a Moody’s report on environmental risk puts it, “generally disasters have been positive for state finances.” But this is entirely dependent on federal largesse: After Massachusetts’ brutal winter of 2015, FEMA granted only a quarter of the state’s request for aid. Moody’s determined that could negatively impact the credit ratings of local governments that had to shoulder the cost of snow and ice removal.
Second is that people still want to live on the shore. “The amenity value of the beach is something you can enjoy every day of the summer,” says Robert Muir-Wood, the chief research officer at Risk Management Solutions. “People may say, ‘The benefits of living on the beach to my health and wellbeing outweigh the impact of the flood.’” That calculus is strongly influenced by affordable flood insurance policies, but it has not changed. In a way, despite the risks, the sea is a more dependable economic engine for a community than, say, a factory that could shut its doors and move away any minute. Most bonds get paid off from property taxes. If property values remain high, bondholders have little to worry about. If, on the other hand, property values fall, tax rates must rise. If buildings go into foreclosure, or neighborhoods undergo “buy-outs” to restore wetlands or dunes, more of the burden to pay off that new seawall falls on everyone else.
Third: Most jurisdictions are large. New Jersey’s coastal counties also contain thousands of inland homes whose risk exposure is much, much lower. Adam Stern, a co-head of research at Boston’s Breckinridge Capital Advisors, argues that the first credit problems will come for small communities devastated by major storms.
Still, Stern said, his firm looks at these issues. “One of the things we try to get at when we look at an issuer of bonds that’s on the coast: Do you take climate change seriously? Are you planning for that?” Yet, he said, bond buyers—like everyone else—discount the value of future money, and hence future risk. When could the breaking point for the muni market come? Stern predicts that will happen when property values start to discernibly change in reaction to climate risk. It’s a game of chicken between infrastructure investors and homeowners.
As the Earth’s atmosphere warms, the atmospheric circulation changes. These changes vary by region and time of year, but there is evidence to suggest that anthropogenic warming causes a general weakening of summertime tropical circulation. Because tropical cyclones are carried along within the ambient environmental wind, there is an expectation that the translation speed of tropical cyclones has or will slow with warming.
Severe Tropical Cyclone Debbie, which made landfall near Mackay in March 2017, was an unusually slow event, crossing the coast at only seven kilometres per hour. Likewise, the “stalling” of Hurricane Harvey over Texas in August 2017 is another example of a recent, slow-moving event. While two events by no means constitute a trend, slow-moving cyclones can be especially damaging in terms of the rainfall volumes that are precipitated out over a single catchment or town (Fig. 1). A slow translation speed means strong wind speeds are sustained for longer periods of time, and it can also increase the surge-producing potential of a tropical cyclone.
But have changes in the translation speeds of tropical cyclones been observed in the Australian region and can we draw any conclusions about any impact of these changes on related flooding?
A recent article published in the journal Nature by James Kossin of NOAA looks at tropical cyclone translation speeds from 1949 through to 2016, using data from the US National Hurricane Center (NHC) and Joint Typhoon Warning Center (JTWC), and finds a 10 percent global decrease. For western North Pacific and North Atlantic tropical cyclones, he reports a slowdown over land areas of 30 percent and 20 percent respectively, and a slowdown of 19 percent over land areas in Australia.
The following is an extract from Kossin’s article, followed by some comments on the significance of his work for the Australian region. The full article and associated references are available here.
Kossin’s article – in short
Anthropogenic warming, both past and projected, is expected to affect the strength and patterns of global atmospheric circulation. Tropical cyclones are generally carried along within these circulation patterns, so their past translation speeds may be indicative of past circulation changes. In particular, warming is linked to a weakening of tropical summertime circulation and there is a plausible a priori expectation that tropical-cyclone translation speed may be decreasing. In addition to changing circulation, anthropogenic warming is expected to increase lower-tropospheric water-vapour capacity by about 7 percent per degree (Celsius) of warming. Expectations of increased mean precipitation under global warming are well documented. Increases in global precipitation are constrained by the atmospheric energy budget but precipitation extremes can vary more broadly and are less constrained by energy considerations.
Because the amount of local tropical-cyclone-related rainfall depends on both rain rate and translation speed (with a decrease in translation speed having about the same local effect, proportionally, as an increase in rain rate), each of these two independent effects of anthropogenic warming is expected to increase local rainfall.
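The proportionality Kossin describes can be sketched with some simple arithmetic. The numbers below (rain rate, rain-field extent, translation speed) are illustrative assumptions, not values from the paper; the point is only that local storm-total rainfall scales with rain rate divided by translation speed, so a 10 percent slowdown raises the local total by about 11 percent, much as a comparable increase in rain rate would:

```python
# Illustrative sketch (assumed values, not from Kossin's paper): storm-total
# rainfall at a fixed point is roughly rain_rate * time_over_point, where
# time_over_point = rain-field extent / translation speed.

def local_rainfall_mm(rain_rate_mm_per_hr, rain_field_km, translation_speed_km_per_hr):
    """Approximate storm-total rainfall at a point under the storm's track."""
    hours_over_point = rain_field_km / translation_speed_km_per_hr
    return rain_rate_mm_per_hr * hours_over_point

# Hypothetical storm: 20 mm/h rain rate, 200 km rain field, 20 km/h motion.
baseline = local_rainfall_mm(20.0, 200.0, 20.0)
slowed = local_rainfall_mm(20.0, 200.0, 18.0)  # the same storm, 10% slower

print(baseline)                 # 200.0 (mm)
print(slowed / baseline - 1.0)  # ~0.111, i.e. an ~11% larger local total
```

The same fractional change would result from holding speed fixed and raising the rain rate by 11 percent, which is why the two warming effects (slower motion, higher rain rates) compound.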
Time series of annual-mean global and hemispheric translation speed are shown in Fig. 2, based on global tropical-cyclone ‘best-track’ data. A highly significant global slowdown of tropical-cyclone translation speed is evident, of −10 percent over the 68-yr period 1949–2016. During this period, global-mean surface temperature has increased by about 0.5 °C. The global distribution of translation speed exhibits a clear shift towards slower speeds in the second half of the 68-yr period, and the differences are highly significant throughout most of the distribution.
This slowing is found in both the Northern and Southern Hemispheres (Fig. 2b) but is stronger and more significant in the Northern Hemisphere, where the annual number of tropical cyclones is generally greater. The time series for the Southern Hemisphere exhibits a change-point around 1980, but the reason for this is not clear.
The trends in tropical-cyclone translation speed and their signal-to-noise ratios vary considerably when the data are parsed by region but slowing over water is found in every basin except the northern Indian Ocean. Significant slowing of −20 percent in the western North Pacific Ocean and of −15 percent in the region around Australia (Southern Hemisphere, east of 100° E) are observed.
When the data are constrained within global latitude belts, significant slowing is observed at latitudes above 25° N and between 0° and 30° S. Slowing trends near the equator tend to be smaller and not significant, whereas there is a substantial (but insignificant) increasing trend in translation speed at higher latitudes in the Southern Hemisphere.
Changes in tropical-cyclone translation speed over land vary substantially by region (Fig. 3). There is a substantial and significant slowing trend over land areas affected by North Atlantic tropical cyclones (20 percent reduction over the 68-yr period), by western North Pacific tropical cyclones (30 percent reduction) and by tropical cyclones in the Australian region (19 percent reduction, but the significance is marginal).
Contrarily, the tropical-cyclone translation speeds over land areas affected by eastern North Pacific and northern Indian tropical cyclones, and of tropical cyclones that have affected Madagascar and the east coast of Africa, all exhibit positive trends, although none are significant.
In addition to the global slowing of tropical-cyclone translation speed identified here, there is evidence that tropical cyclones have migrated poleward in several regions. The rate of migration in the western North Pacific was found to be large, which has had a substantial effect on regional tropical-cyclone-related hazard exposure.
These recently identified trends in tropical-cyclone track behaviour emphasize that tropical-cyclone frequency and intensity should not be the only metrics considered when establishing connections between climate variability and change and the risks associated with tropical cyclones, both past and future.
These trends further support the idea that the behaviours of tropical cyclones are being altered in societally relevant ways by anthropogenic factors. Continued research into the connections between tropical cyclones and climate is essential to understanding and predicting the changes in risk that are occurring on a global scale.
Significance for the Australian region
While this is an interesting piece of work, the results for the Southern Hemisphere and the Australian region are less clear than those for the North Atlantic and North Pacific basins.
The trend shown in Fig. 2b for the whole of the Southern Hemisphere is not significant and is clearly composed of two separate trends, each spanning around 30 years. Assuming a homogeneous dataset, the time series may be reflecting the strong influence of inter-decadal climate forcing.
In the Southern Hemisphere, the role of multi-decadal climate-ocean variability, like the Pacific Decadal Oscillation (PDO) or the Indian Ocean Dipole (IOD) has a large influence on decadal-scale climate variability (particularly in Australia) and can mask a linear, anthropogenically-forced trend.
The paper also mentions that global slowdown rates are only significant over water (which makes up around 90 percent of the best-track data used), whereas the trend for the 10 percent of global data that corresponds to cyclones over land (where rainfall effects become most societally relevant) is not significant. Therefore, it is unclear, at a global scale, whether tropical cyclones have slowed down over land or not. The trend for the Australian region (Fig. 3f; Southern Hemisphere, east of 100° E), for both over-land and over-water slowdowns (approximately 19 percent), is only marginally significant. Further work could analyse translation speeds in the Australian region using our Bureau of Meteorology tropical cyclone database.
As with previous studies of changes to tropical cyclone behaviour in Australia, results are unclear. The relatively short time span of consistent records, combined with high year-to-year variability, makes it difficult to discern any clear trends in tropical cyclone frequency or intensity in this region (CSIRO, 2015).
For the period 1981 to 2007, no statistically significant trends in the total numbers of cyclones, or in the proportion of the most intense cyclones, have been found in the Australian region, South Indian Ocean or South Pacific Ocean (Kuleshov et al. 2010). However, observations of tropical cyclone numbers from 1981–82 to 2012–13 in the Australian region show a decreasing trend that is significant at the 93–98 percent confidence level when variability associated with ENSO is accounted for (Dowdy, 2014). Only limited conclusions can be drawn regarding tropical cyclone frequency and intensity in the Australian region prior to 1981, due to a lack of data. However, a long-term decline in numbers on the Queensland coast has been suggested (Callaghan and Power, 2010), and northeast Australia is also a region of projected decrease in tropical cyclone activity, including category 4–5 storms, according to Knutson et al. (2015).
In summary, based on global and regional studies, tropical cyclones are in general projected to become less frequent, with a greater proportion of high-intensity storms (stronger winds and greater rainfall). This may be accompanied by a general slowdown in translation speed. A greater proportion of storms may reach further south (CSIRO, 2015).
The take home message? The known-unknowns are still quite a bit greater than the known-knowns.
CALLAGHAN, J. & POWER, S. 2010. A reduction in the frequency of severe land-falling tropical cyclones over eastern Australia in recent decades. Climate Dynamics.
CSIRO and BoM [CSIRO] 2015. Climate Change in Australia Information for Australia’s Natural Resource Management Regions: Technical Report, CSIRO and Bureau of Meteorology, Australia, 222 pp.
DOWDY, A. J. 2014. Long-term changes in Australian tropical cyclone numbers. Atmospheric Science Letters.
KNUTSON, T.R., SIRUTIS, J.J., ZHAO, M., TULEYA, R.E., BENDER, M., VECCHI, G.A., VILLARINI, G. & CHAVAS, D. 2015. Global Projections of Intense Tropical Cyclone Activity for the Late Twenty-First Century from Dynamical Downscaling of CMIP5/RCP4.5 Scenarios. Journal of Climate, 28, 7203-7224.
KOSSIN, J.P. 2018. A global slowdown of tropical-cyclone translation speed. Nature 558, 104-107.
KULESHOV, Y., FAWCETT, R., QI, L., TREWIN, B., JONES, D., MCBRIDE, J. & RAMSAY, H. 2010. Trends in tropical cyclones in the South Indian Ocean and the South Pacific Ocean. Journal of Geophysical Research-Atmospheres, 115.
OFFICE OF THE INSPECTOR-GENERAL EMERGENCY MANAGEMENT 2017. The Cyclone Debbie Review: Lessons for delivering value and confidence through trust and empowerment. Report 1: 2017-18.
Jacob Evans, Risk Frontiers (firstname.lastname@example.org)
Cyclocopters are a new concept of drone that has recently shown success in development, garnering significant interest from leading robotics institutions and the US Army. The commercially available drones most people are familiar with are referred to as polycopters. Polycopters typically have four or six equally spaced helicopter-style rotors. They have a wide range of uses, from recreational to military, with drones recently being used by Risk Frontiers to survey areas affected by natural disasters such as volcanic lahars. Though these types of drones offer a wide variety of applications and already play a significant role in society, cyclocopters are viewed as the next stage in their evolution, with the potential ability to survey extensively during natural disasters and perform risk assessment.
The cyclocopter concept was developed about 100 years ago; however, only recently have the materials and technology been available to turn this futuristic-looking machine into reality. Cyclocopters can be visualised as an aerial paddleboat, having two or four cycloidal rotors (cyclorotors) (Figure 1). The rotors stir the air into vortices, creating lift, thrust and control. Each rotor has multiple (conventionally four) aerofoils, whose pitch (angle) can be adjusted in synchronisation to move the cyclocopter in any direction perpendicular to the cyclorotor. There is also a tail propeller to keep the drone level. Hence the aerodynamics resemble those of an insect; imagine a dragonfly.
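The synchronised pitch adjustment described above can be sketched kinematically. In the cyclorotor literature, blade pitch is commonly varied roughly sinusoidally once per revolution, with the phase of the sinusoid setting the direction of net thrust in the plane perpendicular to the rotor axis. The function below is an illustrative simplification (the amplitude and phasing convention are assumptions, not a real flight controller):

```python
# Simplified kinematic sketch of cycloidal pitch control (illustrative
# assumptions, not an actual cyclocopter controller): each blade's pitch
# varies sinusoidally with rotor azimuth, and shifting the phase of that
# sinusoid rotates the net thrust vector within the rotor plane.

import math

def blade_pitch_deg(azimuth_deg, amplitude_deg=25.0, thrust_phase_deg=0.0):
    """Pitch command for one blade at a given rotor azimuth (degrees)."""
    return amplitude_deg * math.sin(math.radians(azimuth_deg - thrust_phase_deg))

# Four blades spaced 90 degrees apart, thrust phased in the default direction:
pitches = [round(blade_pitch_deg(a), 1) for a in (0.0, 90.0, 180.0, 270.0)]
print(pitches)  # [0.0, 25.0, 0.0, -25.0]
```

Changing `thrust_phase_deg` shifts where in the revolution each blade reaches maximum pitch, which is how the vehicle redirects thrust without tilting its body.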
The cyclocopter design has several advantages. Unlike conventional drones which, like a helicopter, tilt in the direction of flight, cyclocopters remain parallel to the ground. Their engineering design also provides them with better manoeuvrability, forward speed and altitude limit, as well as making them less disturbed by wind gusts. They are also much quieter, having lower blade-tip speeds, which are responsible for the typical noise from bladed aircraft. However, the most significant advantage is that these drones actually perform better when scaled down. The vortices created by the cyclorotor configuration become proportionally more powerful as the size shrinks. This makes cyclocopters the leading candidate for miniaturised drones, with the ability to withstand strong winds during natural disasters and survey inaccessible areas.
Research into cyclocopters in the USA is being carried out at the University of Maryland, Texas A&M University and the University of California, Berkeley, formerly as part of the Micro Autonomous Systems and Technology (MAST) programme funded by the US Army, and now under the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme. Over the last 10 years, these groups have developed fully functional cyclocopters whilst reducing the size and weight from 500 g to just 29 g. A video of the MAST research groups' latest cyclocopter can be found here (https://youtu.be/WTUCCkTcIW0). The next steps in their evolution involve further miniaturisation and optimisation, as well as getting the drones to swarm and coordinate.
Commercial cyclocopters are thought to be only a couple of years away, and they could play a significant part in saving lives. A common concept is an advanced network of drones with different capabilities. In search and rescue operations during natural disasters, cyclocopters could quickly scour the disaster area, including inaccessible locations, alerting authorities or communicating with larger ambulance drones that could provide survivors with necessities or even airlift them to safety. During gusty bushfires, a network of stable cyclocopters could detect ignition points or homes at risk, communicating with larger extinguishing drones.
For cyclocopters individually, the military application presented by the MAST research group also focuses on saving lives, with drones able to fly ahead of troops, looking over ridges and embankments to ensure the soldiers' safety. For the insurance industry, they could be used for the rapid assessment of unsafe and contaminated premises. From a perils standpoint, tiny cyclocopters could access obstructed areas, and their stability and coordination would allow faster and more accurate mapping of disaster relief areas, providing invaluable information for modelling.
The following briefing, by Esprit Smith of NASA’s Jet Propulsion Laboratory, was published on the NASA website on 24 May 2018.
The study described below considers projections based on two Representative Concentration Pathways (RCPs) – 4.5 and 8.5. There are four pathways in total (including RCP2.6 and RCP6) and the findings of the IPCC Fifth Assessment Report are based upon these. Most of the discussion of results presented below is based on the RCP8.5 analysis which is the most extreme scenario based on minimal effort to reduce emissions. Toward the end of the briefing the results from the RCP4.5 analysis are noted as follows: ‘The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes.’
A new NASA-led study shows that climate change is likely to intensify extreme weather events known as atmospheric rivers across most of the globe by the end of this century, while slightly reducing their number. The new study projects atmospheric rivers will be significantly longer and wider than the ones we observe today, leading to more frequent atmospheric river conditions in affected areas.
“The results project that in a scenario where greenhouse gas emissions continue at the current rate, there will be about 10 percent fewer atmospheric rivers globally by the end of the 21st century,” said the study’s lead author, Duane Waliser, of NASA’s Jet Propulsion Laboratory in Pasadena, California. “However, because the findings project that the atmospheric rivers will be, on average, about 25 percent wider and longer, the global frequency of atmospheric river conditions — like heavy rain and strong winds — will actually increase by about 50 percent.” The results also show that the frequency of the most intense atmospheric river storms is projected to nearly double.
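The headline figures quoted above can be sanity-checked with a back-of-envelope calculation: the frequency of atmospheric river (AR) conditions at a given location scales roughly with the number of ARs times the area each one covers (length times width). Treating the quoted percentage changes as independent multipliers is our simplifying assumption, not the study's method.

```python
# Back-of-envelope check of the projected change in atmospheric river
# conditions, using the multipliers quoted in the study.
count_factor = 0.90   # ~10% fewer ARs globally
length_factor = 1.25  # ~25% longer
width_factor = 1.25   # ~25% wider

# Frequency of AR conditions at a point scales roughly with
# (number of ARs) x (area swept per AR) ~ count x length x width.
conditions_factor = count_factor * length_factor * width_factor
print(f"Projected change in AR conditions: {conditions_factor - 1:+.0%}")
```

This simple product gives an increase of roughly 40%, broadly consistent with the quoted "about 50 percent" once changes in storm intensity and duration, which the crude area scaling ignores, are also counted.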
Atmospheric rivers are long, narrow jets of air that carry huge amounts of water vapor from the tropics to Earth’s continents and polar regions. These “rivers in the sky” typically range from 250 to 375 miles (400 to 600 kilometers) wide and carry as much water — in the form of water vapor — as about 25 Mississippi Rivers. When an atmospheric river makes landfall, particularly against mountainous terrain (such as the Sierra Nevada and the Andes), it releases much of that water vapor in the form of rain or snow.
These storm systems are common — on average, there are about 11 present on Earth at any time. In many areas of the globe, they bring much-needed precipitation and are an important contribution to annual freshwater supplies. However, stronger atmospheric rivers — especially those that stall at landfall or that produce rain on top of snowpack — can cause disastrous flooding. Atmospheric rivers show up on satellite imagery, including in data from a series of actual atmospheric river storms that drenched the U.S. West Coast and caused severe flooding in early 2017.
Climate change studies on atmospheric rivers to date have been mostly limited to two specific regions, the western United States and Europe. They have typically used different methodologies for identifying atmospheric rivers and different climate projection models — meaning results from one are not quantitatively comparable to another.
The team sought to provide a more streamlined and global approach to evaluating the effects of climate change on atmospheric river storms. The study relied on two resources — a set of commonly used global climate model projections for the 21st century developed for the Intergovernmental Panel on Climate Change’s latest assessment report, and a global atmospheric river detection algorithm that can be applied to climate model output. The algorithm, developed earlier by members of the study team, identifies atmospheric river events from every day of the model simulations, quantifying their length, width and how much water vapor they transport.
The team applied the atmospheric river detection algorithm to both actual observations and model simulations for the late 20th century. Comparing the data showed that the models produced a relatively realistic representation of atmospheric rivers for the late 20th century climate. They then applied the algorithm to model projections of climate in the late 21st century. In doing this, they were able to compare the frequency and characteristics of atmospheric rivers for the current climate with the projections for future climate.
The team also tested the algorithm with a different climate model scenario that assumed more conservative increases in the rate of greenhouse gas emissions. They found similar, though less drastic changes. Together, the consideration of the two climate scenarios indicates a direct link between the extent of warming and the frequency and severity of atmospheric river conditions.
What does this mean?
The significance of the study is two-fold. First, “knowing the nature of how these atmospheric river events might change with future climate conditions allows for scientists, water managers, stakeholders and citizens living in atmospheric river-prone regions [e.g. western N. America, western S. America, S. Africa, New Zealand, western Europe] to consider the potential implications that might come with a change to these extreme precipitation events,” said Vicky Espinoza, postdoctoral fellow at the University of California-Merced and first author of the study. And secondly, the study and its approach provide a much-needed, uniform way to research atmospheric rivers on a global level — illustrating a foundation to analyze and compare them that did not previously exist.
Data across the models are generally consistent — all support the projection that atmospheric river conditions are linked to warming and will increase in the future; however, co-author Marty Ralph of the University of California, San Diego, points out that there is still work to be done. “While all the models project increases in the frequency of atmospheric river conditions, the results also illustrate uncertainties in the details of the climate projections of this key phenomenon,” he said. “This highlights the need to better understand why the models’ representations of atmospheric rivers vary.”
The new QuakeAUS: impact of revised GA earthquake magnitudes on hazards and losses
Paul Somerville and Valentina Koschatsky, Risk Frontiers
Geoscience Australia (GA) is updating the seismic hazard model for Australia through the National Seismic Hazard Assessment (NSHA18) project (Allen et al., 2017). The update includes corrections to measurements of local magnitude, ML, and the conversion of ML values to moment magnitude, MW. Moment magnitude is the preferred magnitude type for probabilistic seismic hazard analyses, and all modern ground motion prediction equations use this magnitude type. This is because ML is a purely empirical estimate of earthquake size, whereas MW is a theoretically based measure of earthquake size, derived from the seismic moment, M0, of the earthquake, which is given by:
M0 = μ A D
where A is the rupture area of the fault, D is the average displacement on the fault and μ is the shear modulus of rock. The seismic moment quantifies the size of each of the pair of opposing force couples that constitute the force representation of the shear dislocation on the fault plane. For comparison with the more familiar magnitude scale, MW is calibrated to M0 using the following equation:
MW = 2/3 log10 M0 – 10.7
where M0 is expressed in dyne·cm (CGS units).
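A worked example may help: the two formulas above can be chained to obtain MW directly from fault dimensions. The fault parameters below are illustrative values chosen for the sketch, not taken from the NSHA18 catalogue.

```python
import math

def moment_magnitude(area_cm2: float, slip_cm: float,
                     shear_modulus: float = 3.3e11) -> float:
    """MW from fault rupture area, average slip and shear modulus (CGS units).

    M0 = mu * A * D (dyne.cm), then MW = (2/3) * log10(M0) - 10.7.
    The default shear modulus of 3.3e11 dyne/cm^2 is a typical crustal value.
    """
    m0 = shear_modulus * area_cm2 * slip_cm
    return (2.0 / 3.0) * math.log10(m0) - 10.7

# Illustrative rupture: a 10 km x 5 km fault plane with 50 cm average slip.
area = 10e5 * 5e5        # 10 km x 5 km expressed in cm^2
mw = moment_magnitude(area, slip_cm=50.0)
print(f"MW = {mw:.1f}")  # -> MW = 5.9, a moderate earthquake
```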
Prior to the early 1990s, most Australian seismic observatories relied on the Richter (1935) local magnitude (ML) formula developed for southern California. At regional distances (where many earthquakes are recorded), the Richter scale tends to overestimate ML relative to modern Australian magnitude formulae. Because of the likely overestimation of local magnitudes for Australian earthquakes recorded at regional distances, there is a need to account for pre-1990 magnitude estimates due to the use of inappropriate Californian magnitude formulae. A process was employed that systematically corrected local magnitudes using the difference between the original (inappropriate) magnitude formula (e.g., Richter, 1935) and the Australian-specific correction curves (e.g., Michael-Leiba and Malafant, 1992) at a distance determined by the nearest recording station likely to have recorded a specific earthquake.
The relationship between ML and MW developed for the NSHA18 demonstrates that MW is approximately 0.3 magnitude units lower than ML for moderate-to-large earthquakes (4.0<MW<6.0). Together, the ML corrections and the subsequent conversions to MW more than halve the number (and consequently the annual rate) of earthquakes exceeding magnitude 4.5 and 5.0, as shown in Figure 1. This has downstream effects on hazard calculations when forecasting the rate of rare large earthquakes using Gutenberg-Richter magnitude-frequency distributions in PSHA. A secondary effect of the ML to MW magnitude conversion is that it tends to increase the number of small and moderate-sized earthquakes relative to large earthquakes. This increases the Gutenberg–Richter b-value, which in turn further decreases the relative annual rates of larger potentially damaging earthquakes (Allen et al., 2017).
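The halving effect described above follows directly from the Gutenberg-Richter relation, log10 N(≥M) = a − bM: shifting every catalogue magnitude down by ΔM multiplies the count above a fixed threshold by 10^(−b·ΔM). The a and b values in the sketch below are illustrative, not the NSHA18 catalogue parameters.

```python
# Effect of a uniform magnitude shift on Gutenberg-Richter exceedance rates.
# log10 N(>=M) = a - b*M, so lowering all magnitudes by delta_m multiplies
# the count above a fixed threshold by 10**(-b * delta_m).
a, b = 4.0, 1.0      # illustrative Gutenberg-Richter parameters
delta_m = 0.3        # approximate ML -> MW reduction quoted above

def annual_rate(threshold: float, a: float, b: float) -> float:
    """Annual number of earthquakes at or above `threshold` magnitude."""
    return 10.0 ** (a - b * threshold)

before = annual_rate(5.0, a, b)
after = annual_rate(5.0 + delta_m, a, b)   # same events, magnitudes 0.3 lower
print(f"rate ratio after revision: {after / before:.2f}")  # 10**-0.3 ~ 0.50
```

With a typical b-value near 1, a 0.3-unit magnitude reduction alone roughly halves the exceedance rate; the additional b-value increase noted above pushes the reduction further, consistent with the "more than halve" statement.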
Preliminary seismic hazard calculations by Allen et al. (2017) using the new earthquake source catalogue are compared with the existing PGA hazard map for Be site conditions for a return period of 500 years in Figure 2. We have updated the earthquake source model to incorporate the new GA catalogue into QuakeAUS, and obtained a new hazard map for Australia similar to that in Figure 2.
Preliminary loss estimates using the new version of QuakeAUS show large scale reductions. Losses in a national residential portfolio for 200 year ARP and for AAL are 30% and 35% of their former values respectively. The changes are not regionally uniform, with the largest reductions occurring in Perth and the lowest reductions occurring in Darwin. Among the five perils that are modelled on Risk Frontiers’ Multiperil Workbench (earthquake, fire, flood, hail and tropical cyclone), earthquake previously had the largest 200 year ARP loss but now lies below tropical cyclone in a near tie with flood and hail, and its AAL has dropped from second last to last, below hail.
We expect to release QuakeAUS 6.0, including these changes, early in the third quarter of 2018.
Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi (2017). An updated National Seismic Hazard Assessment for Australia: Are we designing for the right earthquakes? Proceedings of the Annual Conference of the Australian Earthquake Engineering Society in Canberra, November 24-26, 2017.
Michael-Leiba, M., and Malafant, K. (1992). A new local magnitude scale for southeastern Australia, BMR J. Aust. Geol. Geophys. Vol 13, No 3, pp 201-205.
Tathra 2018 Bushfires
James O’Brien, Mingzhu Wang, Jacob Evans
The 2017/18 bushfire season across southeastern Australia burned through 237,869 hectares from 11,182 fires, prompting seven Emergency Warnings, 25 Watch and Act alerts and 16 Total Fire Ban days. Despite the high number of fires, losses were limited (two homes lost in Comboyne) until the Tathra fire. True to its mission of better understanding natural disasters, Risk Frontiers produced in-depth intelligence from aerial photography, field survey and GIS analytics. In what follows we report the results of these exercises.
Observations from the field
The early December 2017 heatwave (December was the 5th hottest on record) set the conditions for the bushfires in New South Wales on 18 March 2018. The high temperatures combined with high winds established the conditions under which an electrical fault apparently triggered the fire. The bushfires in Tathra destroyed around 65 homes, damaged 48 homes, destroyed 35 caravans and cabins and burned 1,250 hectares of bushland, in addition to the emotional trauma experienced by survivors. Fortunately there were no casualties.
Risk Frontiers scientists (James, Mingzhu and Jacob) arrived in Tathra on 10 April, a little over three weeks after the peak of the bushfire damage; the delay was due to the high proportion (around 50%) of affected properties containing asbestos. Our objective was to investigate the most affected areas in Tathra.
New above-ground electricity infrastructure in the region was a clear sign of the work undertaken to repair the obliterated power network and an indication of the extensive damage to infrastructure that left Tathra without power and water for a number of days following the fire.
We were able to quickly cover the whole town in less than a day on foot with the exception of some isolated areas in Reedy Swamp where the fire started and a small number of houses are located. This survey was useful to qualitatively gauge the assumptions used in our bushfire loss model, FireAUS. Our observations can be summarised as follows:
Zero-One (binary) damage ratios: We saw very few cases of partial damage to structures. It appears that once fire hits a structure during a bushfire it will almost certainly be completely destroyed. That’s not to say that the adjacent structures at the same address will always burn; we observed several cases of sheds that were burnt while the main house was unscathed and vice versa. The partial damage we did observe was charring to the sides of properties, where it appeared an active effort had been made to save the property.
Statistical dependence of bushfire risk on distance to bush: As described above, there is no clear pattern in the spatial distribution of damage when observed at close-range. However, the statistics of bushfire damage based on aggregated data from a broad area do show the importance of distance of a property to the nearby bush (see Figure 2). Whether a property is burnt in a bushfire seems determined by random chance and this chance is conditioned by the distance to the bushland. In FireAUS, we assume that any two addresses equidistant from the bush have equal probabilities of burning.
Independence of risk from building types: We observed damage to different construction types: unreinforced masonry, wood, fibro, mobile homes and even stone. There were destroyed brick houses away from the bush and spared wood and fibro houses close to the bush and vice-versa. The damage for this locality appears independent of building types even when globally influenced by proximity to bushland. If there are other risk factors that could explain the building damage, they are not visible in a short inspection and would require a full forensic investigation of each damaged building. The prevailing view was that newer homes generally seemed to perform better than older homes – and in one case a home built within the last 5 years sustained minimal bushfire damage (timber steps were destroyed) although that property was also actively defended by neighbours.
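The observations above, that burning appears random at close range but conditioned on distance to bushland, can be sketched as a simple Bernoulli model in which two equidistant addresses share the same burn probability. The functional form, p0 and decay constant below are hypothetical illustrations for this sketch, not FireAUS parameters.

```python
import math
import random

def burn_probability(distance_m: float, p0: float = 0.6,
                     decay_m: float = 100.0) -> float:
    """Hypothetical probability that a home burns, decaying exponentially
    with distance to the bushland interface (p0 and decay_m are
    illustrative values, not FireAUS parameters)."""
    return p0 * math.exp(-distance_m / decay_m)

def simulate_burn(distance_m: float, rng: random.Random) -> bool:
    """One Bernoulli draw: equidistant addresses share the same probability."""
    return rng.random() < burn_probability(distance_m)

rng = random.Random(42)
for d in (0, 30, 100, 630):
    p = burn_probability(d)
    print(f"{d:>4} m from bush: burn probability {p:.3f}, "
          f"burnt in this draw: {simulate_burn(d, rng)}")
```

Under a model like this, whether a given house at a given distance survives is chance; only the aggregate statistics over many houses recover the distance dependence seen in Figure 2.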
As the events in Tathra unfolded, Risk Frontiers started the data gathering process to provide a view of this event. Our damage analysis is based on post-fire ground surveys and RFS burned area data captured from live data feeds on Sunday. We also acquired 25 km2 of pre-fire satellite imagery (WorldView-2, 2m resolution) for vegetation analysis and utilized Pitney Bowes Geovision for building location and bushland / tree data.
Figure 2 provides a complete map of damaged properties (house icons) overlain with bushland boundaries (green shading) derived from GeoVision data. It is clear that a number of these properties are surrounded by bushland and are therefore deemed to be at a distance of zero metres from the urban and bushland interface. Properties not within the bushland areas are assigned the linear distance in metres to the nearest pre-fire bushland area greater than 0.5 sq km in area, not necessarily the bushland that burned. Further analysis could be undertaken to classify the burned vegetation – however, in the Tathra region, the majority of bushland burned around properties and it is difficult to recover the clear timeline of local ignition.
There are eyewitness reports of ember attack, and the pattern of damage shows houses destroyed at some distance from the bushland interface, with adjacent properties destroyed either by further ember attack or by contagion from a neighbouring burning property.
While Figure 2 demonstrates the spatial distribution of destroyed homes graphically, it is useful to quantify the loss as a function of distance to adjacent bushland. The data are presented in cumulative form to be consistent with other Risk Frontiers reports and other research. Figure 3 shows the cumulative percentage of destroyed buildings as a function of distance to nearby bushland for recent major bushfires in Australia:
January 2003 Canberra bushfires (damaged suburbs include Duffy)
February 2009 “Black Saturday” bushfires in Victoria (damaged suburbs include Marysville and Kinglake)
February 2011 Perth bushfires (damaged suburbs include Roleystone)
January 2013 Tasmania bushfires (damaged suburbs include Dunalley)
January 2016 Yarloop, WA bushfire
Some new statistics and evidence that emerged from the bushfire damage in Tathra are as follows:
42% of destroyed homes were within 0m of classified bushland boundaries.
50% of surveyed destroyed homes were within 30m of the bushland interface and 72.6% of surveyed homes destroyed were within 100m of the bushland interface. These results closely match the findings previously presented in the “Bushfire Penetration into Urban Areas in Australia” report prepared for the 2009 Victorian Bushfires Royal Commission by Risk Frontiers.
No homes were destroyed further than 630m from bushland.
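Cumulative statistics like those listed above can be reproduced from a list of destroyed-home distances with a short counting routine. The distances below are hypothetical illustrations, not the Tathra survey data.

```python
def cumulative_within(distances_m: list[float],
                      thresholds: list[float]) -> dict[float, float]:
    """Percentage of destroyed homes at or within each distance threshold."""
    n = len(distances_m)
    return {t: 100.0 * sum(d <= t for d in distances_m) / n
            for t in thresholds}

# Hypothetical survey: distance (m) from each destroyed home to bushland,
# where 0 m means the property sits within the classified bushland boundary.
distances = [0, 0, 0, 0, 10, 25, 30, 45, 60, 90, 150, 220, 400, 610]
for threshold, pct in cumulative_within(distances, [0, 30, 100]).items():
    print(f"within {threshold:>3} m of bushland: {pct:.1f}%")
```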
By Tahiry Rabehaja. Email: email@example.com.
2017 was not a good year for cyber security. Victims ranged from small businesses to corporate giants such as Equifax, Deloitte and Kmart, with 'improved' ransomware such as WannaCry and NotPetya just two well-publicised examples. Such breaches emphasise that cybersecurity is not just a headache for IT departments but an issue warranting a top-down solution, starting with C-level executives. To this end, the Financial Services Information Sharing and Analysis Center (FS-ISAC) has recently published a report summarising the thoughts of over 100 financial sector Chief Information Security Officers (CISOs) regarding key priorities for improving digital security postures in 2018 (FS-ISAC, 2018). The survey shows most executives are focused on improving their defensive strategies against cyber attacks.
[FS-ISAC is a non-profit global organisation providing a platform for sharing and analysing cyber and physical security information and intelligence. It currently has approximately 7000 members from 39 different countries. It was an initiative established by the financial service sector in response to the 1998 US Presidential Directive 63.]
For more than a third (35%) of the executives, improving employees’ awareness about digital threats ranks top of the list. This comes as no surprise given employees have always been on the front line of defence against cyber attacks while remaining the weakest link. Indeed, most attacks against financial services companies exploit human weaknesses using social engineering, spear phishing and account take-over due to weak and reused passwords, etc. In 2017, Verizon reported that 1 in 14 employees were opening attachments or links sent through phishing emails and 1 in 4 were giving out account credentials or personal information (Verizon, 2017).
Investment into modern cyber resilient infrastructures (25%) comes in as runner up. Such an investment includes a progressive upgrade of existing network defence hardware and software as well as the creation of specialised departments that ensure digital information security.
Another recent study shows that subscription to Threat Intelligence, the emergent use of defence systems based on Machine Learning as well as strategic use of Cyber Analytics rank amongst the more cost-effective security investments (Accenture, 2017). That same study shows many companies over-investing in technologies that fail to deliver the desired cost-benefit ratios. These include extensive applications of Advanced Perimeter Controls and incongruous use of data loss prevention such as full disk encryption. Thus, efficient security programs should be implemented by ensuring an optimal cost-benefit ratio. This can be achieved by prioritising the security of critical assets and related infrastructures.
2018 will also mark the long-awaited ratification of various breach notification laws. These include the General Data Protection Regulation coming into force in Europe, the Notifiable Data Breaches scheme that has just come into effect in Australia, and upcoming changes to China's Cybersecurity and Data Protection laws. These mean that compliance, explicitly cited by only 2% of the surveyed executives, will also play an important role in shaping digital security, especially for companies dealing with personally identifiable information.
The focus on defensive solutions (FS-ISAC, 2018) is disturbing. The report also investigates the impact of hierarchical organisation on reporting frequency, but says nothing about incident response. This may be because the executives interviewed were mainly from the financial industry. However, historical breaches show that response is just as important as defence. In fact, it is very likely that a resourceful hacker interested in a particular asset of a company will be able to hack in and extract or destroy the targeted information.
Targeted attacks are amongst the most costly and usually affect critical assets such as Intellectual Property. A successful attack on these key assets can have destructive impacts on the victim’s business model itself. Expenses incurred during a cyber event will span from direct costs — forensic and remediation cost, customer protection, regulatory penalty, etc. — to collateral damages — loss of customers, damage to reputation and brand name, increased cost of capital, etc. These costs can be considerably reduced using efficient incident response and mitigation policies as well as cyber insurance.
The White House Council of Economic Advisers estimates the average cost of a breach to be as high as $330 million when an event negatively affects the market value of the victim (Advisers, 2018). For instance, Equifax's stock price dropped by more than 35% within 7 days of last year's massive data breach disclosure. The emergence of cyber insurance is anticipated to cover some of these financial losses. Various vendors already provide cyber insurance products and the market is expected to grow to over $7 billion within the next three years (PwC, 2015).
Accenture. (2017). Cost of Cybercrime Study. Retrieved from Accenture: https://www.accenture.com/au-en/insight-cost-of-cybercrime-2017
Advisers, W. H. (2018, February 16). Cost of malicious cyber activity to the US economy. Retrieved from https://www.whitehouse.gov/articles/cea-report-cost-malicious-cyber-activity-u-s-economy/
FS-ISAC. (2018, February 12). FS-ISAC Unveils 2018 Cybersecurity Trends According to Top Financial CISOs. Retrieved from FS-ISAC: https://www.fsisac.com/article/fs-isac-unveils-2018-cybersecurity-trends-according-top-financial-cisos
PwC. (2015). Insurance 2020 and beyond: Reaping the dividends of cyber resilience. Retrieved from https://www.pwc.com/gx/en/industries/financial-services/publications/insurance-2020-cyber.html
Verizon. (2017). Verizon Data Breach Investigation Report. Retrieved from Verizon: http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/
Jacob Evans, Risk Frontiers (firstname.lastname@example.org)
Modern concrete is porous and degrades in contact with seawater. Seawater can seep into its pores, and when it dries out the salts crystallize. The crystallization pressure of the salts produces stresses that can result in cracks and spalls. There are also other chemical processes, such as sulphate attack, lime leaching and alkali-aggregate expansion, all of which degrade modern concrete. Some submerged concrete structures may last only 10 years; meanwhile, 2,000-year-old concrete constructed during the Roman Empire is still going strong (Figure 1). Why this is so is a question an international research team led by geologist Marie Jackson of the University of Utah sought to answer.
The composition of Roman concrete has long been known: a mixture of volcanic ash, quicklime (calcium oxide) and volcanic rock. But the science behind its resilience to seawater remained unknown until recently. It is thought volcanic material was used after the Romans observed ash from volcanic eruptions crystallize into durable rock.
The research team discovered that while modern concrete is made to be inert, the Roman version interacts with the environment. When seawater interacts with the mixture, it forms rare minerals aluminous tobermorite and phillipsite which are believed to strengthen the material. This discovery could lead to the development of more resilient concrete to be used in coastal environments.
Modern concrete is generally limestone mixed with other ingredients such as sandstone, ash, chalk, iron and clay. The mixture is designed to be inert and not interact with the environment. In coastal environments building regulations govern the type of concrete used and water-cement ratio, but the concrete is still porous: seawater can pass through the material, leading to corrosion and destructuralisation.
As well as salt crystallization, the process whereby dried-out salts within the concrete lead to a buildup of pressure, other chemical reactions can affect the integrity of concrete. These include sulphate attack, lime leaching and alkali-aggregate expansion (Figure 2). Sulphate attack occurs when sulphates in the water react with the hydrated calcium aluminate within the concrete. This changes the microstructure and leads to an increase in volume within the concrete, resulting in physical stress and potential cracking. Lime leaching is the simple process of water passing through the concrete and dissolving calcium hydroxide from it. (Calcium hydroxide is formed from the action of calcium oxide and water.) This is often seen as white patches or stalactites on the exterior of the concrete and reduces its strength. Alkali-aggregate expansion occurs when alkalis in the cement paste react with reactive aggregates, such as silica, forming an expansive gel that cracks the cement.
Roman concrete, however, does not appear susceptible to any of these processes. The research team found that seawater, the kryptonite of modern concrete, was the magic ingredient responsible for the structural stability of the Roman mixture. The Roman concrete samples were found to contain rare aluminous tobermorite and phillipsite crystals. It is believed that with long-term exposure to seawater, tobermorite crystallizes from the phillipsite as the mixture becomes more alkaline. This crystallization is thought to strengthen the compound, as tobermorite has long plate-like crystals that allow the material to bend rather than crack under stress. Pliny the Elder in the first century CE exclaimed “that as soon as it [concrete] comes into contact with the waves of the sea and is submerged [it] becomes a single stone mass (fierem unum lapidem), impregnable to the waves and every day stronger.”
To arrive at these conclusions, Jackson et al. (2017) performed scanning electron microscopy (SEM), micro X-ray diffraction (XRD), Raman spectroscopy and electron probe microanalysis at the Advanced Light Source at Lawrence Berkeley National Laboratory. Samples were obtained by drilling Roman harbour structures and were compared with volcanic rock (Figure 3). The combination of these techniques, in conjunction with in situ analysis, provided evidence of crystallized aluminous tobermorite and phillipsite within Roman marine concrete (Figure 4). These crystals formed long after the original setting of the concrete. This finding was surprising, as tobermorite typically forms only at temperatures above 80 °C, though there is one known occurrence of it forming at ambient temperature, in the Surtsey volcano.
Following this discovery, there is now a desire to develop a concrete mixture that replicates ancient Roman marine concrete. It could result in more environmentally friendly concrete construction, and would provide a mixture resilient to seawater and advantageous for coastal defence.
Jackson, M.D. et al. (2017). Phillipsite and Al-tobermorite mineral cements produced through low-temperature water-rock reactions in Roman marine concrete. American Mineralogist: Journal of Earth and Planetary Materials, 102(7), pp.1435-1450.
Jackson, M.D. et al. (2013). Unlocking the secrets of Al-tobermorite in Roman seawater concrete. American Mineralogist, 98(10), pp.1669-1687.
Suprenant, B.A. (1991). Designing concrete for exposure to seawater. Concrete Construction Magazine, pp.814-816.