Will COVID-19 affect ECL forecasts on the 46th anniversary of the Sygna storm?

By Stuart Browning

Australia’s Eastern Seaboard is set to be lashed by the first real East Coast Low (ECL) of the cold season over the next couple of days, beginning on 22 May 2020 (Figure 1). Unlike the 5-10 February 2020 ECL, which originated in the tropics and impacted Southeast Queensland (Mortlock and Somerville 2020), this one has its origin in the cold Southern Ocean and will mostly impact Sydney. The cold pool of upper atmospheric air that is expected to drive intensification of this ECL has already dumped snow on the Alps as it passed over Southeast Australia.

The early stage development of this storm is remarkably similar to that of the Sygna storm: one of the most powerful East Coast Lows on record, and one of the worst storms to impact Sydney and Newcastle. Exactly 46 years ago, on the 21st of May 1974, a precursor to the Sygna storm was identified as a pool of very cold air over Adelaide (Bridgman 1985). After dropping heavy snow on the Alps it moved into the Tasman Sea and intensified into a powerful ECL, which not only wrecked the Norwegian bulk carrier the Sygna but also caused extensive damage to coastal infrastructure, including the destruction of Manly’s famous harbour pool.

While this weekend’s storm is forecast to produce typical ECL conditions of strong winds, heavy rainfall and dangerous surf, it is not forecast to reach the magnitude of truly destructive storms such as the Sygna, or the more recent Pasha Bulker storm of 2007. However, ECLs have proven notoriously difficult to predict. One of the key drivers of ECL intensification is a cold pool of air in the upper atmosphere, hence the alpine snow which often precedes intense storms. The behaviour of these cold pools presents a challenge for numerical forecast models under normal circumstances, but the COVID-19 pandemic has made their job even more difficult.

COVID-19 Grounding of Flights Impacting Global Weather Data Collection

Weather forecast models rely on a vast network of observations to describe the current state of the atmosphere. According to the European Centre for Medium-Range Weather Forecasts (ECMWF), aircraft-based observations are second only to satellite data in their impact on forecasts. The number of aircraft observations has plummeted since the COVID-19 pandemic effectively grounded most of the world’s commercial airline fleet (Figure 2). Prior to COVID-19, Sydney to Melbourne was one of the world’s busiest flight routes, and weather observations from those flights provided valuable information for developing weather forecasts, especially for the simulation of complex weather systems like ECLs. An ECMWF study in 2019 showed that excluding half of the regular number of aircraft observations had a significant impact on forecasts of upper atmospheric winds and temperature, especially for short-range forecasts up to 24 hours ahead.

Whether or not a lack of aircraft observations will affect forecasts for tomorrow’s ECL remains to be seen. While this event is unlikely to reach the magnitude of its historical counterpart, the May 1974 Sygna storm, it will provide a timely reminder that ECLs are a regular part of Tasman Sea weather and climate; and if you’re on Australia’s eastern seaboard then get ready for the first large maritime storm of the winter.

Figure 1. BOM numerical forecast for a Tasman Sea ECL on Friday the 22nd of May 2020.
Figure 2. Number of aircraft reports over Europe received and used at ECMWF per day. Source: ECMWF (2020).

References

Bridgman, H. A. (1985). The Sygna storm at Newcastle – 12 years later. Meteorology Australia, VBP 4574, 10–16.

ECMWF (2020). Drop in aircraft observations could have impact on weather forecasts. https://www.ecmwf.int/en/about/media-centre/news/2020/drop-aircraft-observations-could-have-impact-weather-forecasts

Mortlock and Somerville (2020). February 2020 East Coast Low: Sydney Impacts. https://riskfrontiers.com/february-2020-east-coast-low-sydney-impacts/

The 14 May 2020 Burra Earthquake Sequence and its Relation to Flinders Ranges Faults

Paul Somerville, Principal Geoscientist, Risk Frontiers

Three earthquakes occurred about 200 km north of Adelaide between 10 and 14 May 2020, as shown on the left side of Figure 1. The first event (yellow), local magnitude ML 2.6, occurred near Spalding on 10 May at 22:53, between the locations of the other two events. The second event (orange), ML 2.4, occurred to the northwest of the first event, northeast of Laura, on 13 May at 19:18. The third event (red), ML 4.3, occurred to the southeast of the first event, at Burra, on 14 May at 15:23.

All three earthquakes are estimated by Geoscience Australia (GA) to have occurred at depths of 10 km, consistent with the depth of 7 ± 3 km estimated for the Burra event by the United States Geological Survey (USGS). The USGS estimated a body wave magnitude mb of 4.3 for the Burra earthquake from worldwide recordings. Neither GA nor the USGS has estimated its moment magnitude Mw.

The Burra event is the largest earthquake to have occurred near Adelaide in the past decade. People felt shaking in Adelaide office and apartment buildings, as well as in the Adelaide Hills, the Yorke Peninsula and southern Barossa, but it is not known to have caused any damage.  Maps of estimated peak acceleration and Modified Mercalli Intensity are shown in Figures 3 and 4 respectively.

The three events span a distance of about 85 km, and they presumably occurred on a segment of the western range front of the Flinders Ranges. One segment of the range front, formed by the Wilkatana fault (Quigley et al., 2006), is shown on the right side of Figure 1. The occurrence of the three events close together in time suggests that they are related to a large-scale disturbance in the stress field on the range front faults: the dimensions of the individual fault ruptures (about 1 km for the Burra earthquake and 200 m for the two smaller events) are much smaller than their overall separation of 85 km, so they are unlikely to have influenced each other directly.

There is no indication that a larger earthquake is about to occur, but if a 100 km length of the western range front of the Flinders Ranges were to rupture, it would have a magnitude of about Mw 7.3. Repeated large earthquakes on both sides of the range fronts have raised the Flinders Ranges and Mt Lofty Ranges by several hundred metres over the past several million years (Sandiford, 2003; Figures 2 and 5).
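To illustrate where a figure like Mw 7.3 for a 100 km rupture comes from, the sketch below applies a generic empirical magnitude–rupture-length scaling relation. The coefficients used (the Wells and Coppersmith (1994) ‘all slip types’ regression) are an assumption made here for illustration; the article does not state which relation underlies its estimate.

```python
import math

# Illustrative only: an empirical scaling relation of the form
# Mw = a + b * log10(surface rupture length in km).
# The coefficients below follow the Wells & Coppersmith (1994)
# "all slip types" regression -- an assumption for this sketch; the
# article does not say which relation its ~Mw 7.3 figure is based on.
A, B = 5.08, 1.16

def magnitude_from_rupture_length(length_km: float) -> float:
    """Rough moment magnitude for a surface rupture of the given length."""
    return A + B * math.log10(length_km)

print(round(magnitude_from_rupture_length(100.0), 1))  # -> 7.4, close to the ~7.3 quoted above
```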

Until the occurrence of the 1989 Newcastle earthquake, the 28 February 1954 Adelaide earthquake (left side of Figure 5) was Australia’s most damaging earthquake. Its estimated magnitude varied between 5.4 and 5.6 until the release of the 2018 National Seismic Hazard Assessment (NSHA18) by Geoscience Australia (Allen et al., 2019). As part of that assessment, the local magnitudes ML in the Australian earthquake catalogue were revised and converted to moment magnitudes Mw (Allen et al., 2018). On average across Australia, this resulted in a reduction of 0.3 magnitude units, but the magnitude of the 1954 Adelaide earthquake was reduced much more, to a moment magnitude Mw of 4.79.

The 1954 Adelaide earthquake is thought to have occurred on the Eden-Burnside fault that lies just east of Adelaide. As shown on the right side of Figure 5, the Eden-Burnside fault is one of several faults on the western flank of the Mt Lofty Ranges that are uplifting the ranges. No lives were lost in the 1954 Adelaide earthquake and there were only three recorded injuries. Many houses were cracked, and heavy pieces of masonry fell from parapets and tall buildings in the city. One of Adelaide’s earliest buildings, the Victoria Hotel, partially collapsed. Other major buildings that were severely damaged included St Francis Xavier Cathedral, the Adelaide Post Office clock tower and a newly completed hospital in Blackwood, which sustained major damage to its wards and offices.

Risk Frontiers (2016) estimated the impact of a magnitude 5.6 scenario earthquake on the Eden-Burnside fault, based on the 1954 Adelaide earthquake, and found the scenario’s losses to be much larger than the adjusted historical losses for the 1954 earthquake. With the revision of the magnitude of the 1954 Adelaide earthquake from 5.6 to 4.79, we now understand the cause of the large discrepancy in losses.

Figure 1. Left: Locations of the earthquake sequence, from top: Laura (orange), Spalding (yellow) and Burra (red). Source: Geoscience Australia, 2020. Top right: Segments of the Wilkatana fault (dashed yellow lines). Source: Quigley et al., 2006. The Laura event is near the southern end of the Wilkatana fault, and the Spalding and Burra events are off the southern end of the Wilkatana fault on an adjacent segment of the range front fault system (not mapped). Bottom right: My relative Jonathan Teasdale looking down a fault in the Flinders Ranges that dips to the east (right) at about 45 degrees (black line), raising the mountains on the right (east) side. The two sides of the fault are converging due to east–west horizontal compression, with the west side moving east and down, and the east side moving west and up.
Figure 2. Left: Topographic relief map of the Flinders and Mount Lofty Ranges. Source: Sandiford, 2003. Right: Association of historical seismicity (dots) with topography and faults (black lines) of the Flinders and Mount Lofty Ranges. Source: Celerier et al., 2005.
Figure 3. Contours of estimated peak acceleration (in percent g) from the Burra earthquake; the yellow contour represents 10%g. Source: Geoscience Australia, 2020.

Figure 4. Estimated MMI intensity from the Burra earthquake; the epicentral intensity is MMI V. Source: Geoscience Australia, 2020.
Figure 5. Left: Historical seismicity of the Adelaide region showing the location of the 1954 Adelaide earthquake. Right: Active faults of the Mt Lofty Ranges including the Eden-Burnside fault to the east of Adelaide. Source: Sandiford (2003).

References

Allen, T. I., Leonard, M., Ghasemi, H, Gibson, G. 2018. The 2018 National Seismic Hazard Assessment for Australia – earthquake epicentre catalogue. Record 2018/30. Geoscience Australia, Canberra. http://dx.doi.org/10.11636/Record.2018.030.

Allen, T., J. Griffin, M. Leonard, D. Clark and H. Ghasemi, 2019. The 2018 National Seismic Hazard Assessment: Model overview. Record 2018/27. Geoscience Australia, Canberra. http://dx.doi.org/10.11636/Record.2018.027

Celerier, Julien, Mike Sandiford, David Lundbek Hansen, and Mark Quigley (2005).  Modes of active intraplate deformation, Flinders Ranges, Australia. Tectonics, Vol. 24, TC6006, doi:10.1029/2004TC001679, 2005.

Geoscience Australia (2020). https://earthquakes.ga.gov.au/event/ga2020jgwjhk

Quigley M. C., Cupper M. L. & Sandiford M. 2006. Quaternary faults of southern Australia: palaeoseismicity, slip rates and origin. Australian Journal of Earth Sciences 53, 285-301.

Risk Frontiers (2016). What if a large earthquake hit Adelaide? https://www.bnhcrc.com.au/news/2016/what-if-large-earthquake-hit-adelaide

Sandiford M. 2003. Neotectonics of southeastern Australia: linking the Quaternary faulting record with seismicity and in situ stress. In: Hillis R. R. & Muller R. D. eds. Evolution and Dynamics of the Australian Plate, pp. 101 – 113. Geological Society of Australia, Special Publication 22 and Geological Society of America Special Paper 372.

Ranking of Potential Causes of Human Extinction

Paul Somerville, Risk Frontiers

We are good at learning from recent experience: the availability heuristic is the tendency to estimate the likelihood of an event based on our ability to recall examples. However, we are much less skilled at anticipating potential catastrophes that have no precedent in living memory. Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it. This was the problem with COVID-19: many informed scientists (e.g. Gates, 2015) predicted that a global pandemic was almost certain to break out at some point in the near future, but very few governments did anything about it.

We are all familiar with the annual Global Risks Reports published by the World Economic Forum. Looking at their ranking of the likelihood and severity of risks (see Figure I, page 1 of the 2020 report), we see that the rankings over the past three years have consistently attributed the highest likelihood to Extreme Weather events and the highest impact to Weapons of Mass Destruction, although in 2020 Climate Action Failure displaced Weapons of Mass Destruction as the top impact risk. Further, the rankings have changed markedly over the years the report has been published. While human activity may have had a genuinely large effect on objective risk levels, such as that posed by Weapons of Mass Destruction in the last three years, there is probably also a large component of subjectivity and of the availability heuristic in the rankings, reflecting changing risk perceptions.

The work of Toby Ord and colleagues described below stands in stark contrast with these risk assessments. First, it addresses much more dire events: those that could lead to human extinction. Second, it attempts to use objective methods to assess the risks, avoiding the problems that arise from risk perception. This work results in some surprising and thought-provoking conclusions, including that most human extinction risk comes from anthropogenic sources other than nuclear war or climate change.

Australian-born Toby Ord is a moral philosopher at the Future of Humanity Institute at Oxford University who has advised organisations such as the World Health Organisation, the World Bank and the World Economic Forum. In The Precipice, he addresses the fundamental threats to humanity. He begins by stating that we live at a critical time for humanity’s future and concludes that in the last century we faced a one-in-a-hundred risk of human extinction, but that we now face a one-in-six risk this century.

In previous work, Snyder-Beattie et al. (2019) estimated an upper bound for the background rate of human extinction due to natural causes. Beckstead et al. (2014) addressed unprecedented technological risks of extreme catastrophes, including synthetic biology, geoengineering (employed to avert climate change), distributed manufacturing (of weapons), and Artificial General Intelligence (AGI); see also Hawking (2010). In what follows, the conclusions of these studies are summarised and the various potential causes of human extinction ranked (Table 1).

Natural risks, including asteroid and comet impacts, supervolcanic eruptions and stellar explosions, are estimated to be relatively low and, taken together, contribute a one-in-a-million chance of extinction per century.

Turning to anthropogenic risks, the most obvious risk to human survival would seem to be nuclear war, and we have come near it, mainly by accident, on several occasions. However, Ord doubts that even nuclear winter would lead to total human extinction or the unrecoverable collapse of global civilisation. Similarly, he considers that while climate change has the capacity to be a global calamity of unprecedented scale, it would not necessarily lead to human extinction, and that environmental damage does not present a direct mechanism for existential risk. Nevertheless, he concludes that each of these anthropogenic risks has a higher probability than all natural risks put together (one-in-a-million per century).

Future risks that Ord considers include pandemics, “unaligned artificial intelligence” (superintelligent AI systems with goals that are not aligned with human ethics), “dystopian scenarios” (“a world with civilisation intact, but locked into a terrible form, with little or no value”), nanotechnology, and extraterrestrial life.

Ord considers the risk represented by pandemics to be mostly anthropogenic rather than natural, and he estimates the risk from engineered pandemics at one-in-30 per century, making it the second-highest-ranked risk. He does not consider COVID-19 to be a plausible existential threat.

Ord considers that the highest risk comes from unaligned artificial intelligence. Substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. The human species currently dominates other species because the human brain has distinctive capabilities that other animals lack. However, if AI were to surpass humanity in general intelligence, it would become “superintelligent” and could prove powerful and difficult to control. The risk of this is estimated to be one-in-10 per century. Taken together, the risks that Ord considers amount to a one-in-six chance of extinction per century.

The methodology behind Ord’s estimates is described in detail in the book and in his answers to questions in the 80,000 Hours podcast (2020). For example, in the case of AGI, Ord states that the typical AI expert’s view of the chance that we develop smarter-than-human AGI this century is about 50%. Conditional on that, he states that experts working on trying to ensure that AGI would be aligned with our values estimate there is only an 80% chance of surviving this transition while still retaining control of our destiny. This yields a 10% chance of not surviving in the next hundred years.
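Written out, the arithmetic behind that figure is simply the product of the two expert estimates quoted above: 0.5 × (1 − 0.8) = 0.10, i.e. a 50% chance that smarter-than-human AGI arrives this century multiplied by a 20% chance of losing control of our destiny if it does, giving roughly one-in-10 per century.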

In the rankings in Table 1, all of the anthropogenic risks considered (shown in roman type) exceed all of the natural risks (shown in italics).

Table 1. Ranking of Risks of Human Extinction

References

80,000 Hours (2020). https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#robs-intro-000000

Beckstead, Nick, Nick Bostrom, Neil Bowerman, Owen Cotton-Barratt, William MacAskill, Seán Ó hÉigeartaigh, and Toby Ord (2014). Unprecedented Technological Risks. https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf.

Gates, Bill. (2015).  The next outbreak? We’re not ready. https://www.ted.com/talks/bill_gates_the_next_outbreak_we_re_not_ready/transcript?language=en

Hawking S. (2010), Abandon Earth or Face Extinction, Bigthink.com, 6 August 2010.

Snyder-Beattie, Andrew E., Toby Ord and Michael B. Bonsall (2019). An upper bound for the background rate of human extinction. Scientific Reports, https://doi.org/10.1038/s41598-019-47540-7.

Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury.

Rougier, J., Sparks, R. S. J., Cashman, K. V. & Brown, S. K. The global magnitude–frequency relationship for large explosive volcanic eruptions. Earth Planet. Sci. Lett. 482, 621–629 (2018).

World Economic Forum (2020). The Global Risks Report 2020. http://www3.weforum.org/docs/WEF_Global_Risk_Report_2020.pdf

No, senator, science can’t do away with models

Foster Langbein, Chief Technology Officer, Risk Frontiers

The following article was written in response to COVID-19 pandemic modelling, but it has a particular resonance with why we build CAT models and how and why they change. CAT models explore some interesting territory, integrating as they do a myriad of sources: models of key ‘hard science’ physical processes, historical data, assumptions about geographic distribution, engineering assumptions and interpretations of building codes, through to models of financial conditions from policy documents. Integrating such disparate sources becomes mathematically intractable once more than a few different distributions and their associated uncertainties are involved. The solution, Monte Carlo simulation, harks back to the 1940s and was critical to the simulations required in the Manhattan Project, in which, incidentally, a young Richard Feynman (quoted in the article) was involved. This powerful technique of sampling at random a great many times only became practical with the advent of computers, so computer models of CAT events are here to stay. But the essential point remains: they are just tools to help us understand the consequences of all the assumptions we input. When better science emerges or new data are incorporated and those assumptions are updated, changes are expected! Navigating those assumptions and helping to understand their consequences and the inevitable changes are part and parcel of Risk Frontiers’ modelling work. In what follows, Scott K. Johnson explains why U.S. Senator John Cornyn’s critique of modelling is misguided.
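As a minimal sketch of the Monte Carlo idea behind CAT modelling (a toy example, not Risk Frontiers’ actual model: the event frequency, loss distribution and policy terms below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy catastrophe-loss simulation: event counts per year are Poisson,
# ground-up losses per event are lognormal, and a simple per-event
# deductible and limit are applied. All parameter values are invented.
N_YEARS = 100_000           # number of simulated years
EVENT_RATE = 0.8            # mean number of events per year
MU, SIGMA = 15.0, 1.2       # lognormal parameters of ground-up event loss
DEDUCTIBLE, LIMIT = 1e6, 50e6

annual_loss = np.zeros(N_YEARS)
n_events = rng.poisson(EVENT_RATE, size=N_YEARS)
for year, n in enumerate(n_events):
    if n == 0:
        continue
    ground_up = rng.lognormal(MU, SIGMA, size=n)
    gross = np.clip(ground_up - DEDUCTIBLE, 0.0, LIMIT)  # apply policy terms
    annual_loss[year] = gross.sum()

# Two headline outputs of any CAT model: the average annual loss and a
# return-period loss (here the 1-in-200-year loss) from the simulated sample.
print(f"Average annual loss: {annual_loss.mean():,.0f}")
print(f"1-in-200-year loss:  {np.quantile(annual_loss, 1 - 1/200):,.0f}")
```

The point is simply that repeated random sampling lets many interacting distributions be combined numerically where closed-form mathematics would be intractable.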

On Friday, Texas Senator John Cornyn took to Twitter with some advice for scientists: models aren’t part of the scientific method. Scientists have responded with a mix of bafflement and exasperation. And Cornyn’s misconception is common enough, and important enough, that it’s worth exploring.

@JohnCornyn:  After #COVIDー19 crisis passes, could we have a good faith discussion about the uses and abuses of “modeling” to predict the future?  Everything from public health, to economic to climate predictions.  It isn’t the scientific method, folks.

Cornyn’s beef with models echoes a talking point often brought up by people who want to reject inconvenient conclusions of systems sciences. In reality, “you can make a model say anything you want” is about as potent an argument as “all swans are white”: either it is a disingenuous argument, or it betrays an embarrassingly limited familiarity with swans.

Models aren’t perfect. They can generate inaccurate predictions. They can generate highly uncertain predictions when the science is uncertain. And some models can be genuinely bad, producing useless and poorly supported predictions. But the idea that models aren’t central to science is deeply and profoundly wrong. It’s true that the criticism is usually centered on mathematical simulations, but these are just one type of model on a spectrum—and there is no science without models.

What’s a model to do?

There’s something fundamental to scientific thinking – and indeed most of the things we navigate in daily life: the conceptual model. This is the image that exists in your head of how a thing works. Whether studying a bacterium or microwaving a burrito, you refer to your conceptual model to get what you’re looking for. Conceptual models can be extremely simplistic (turn key, engine starts) or extremely detailed (working knowledge of every component in your car’s ignition system), but they’re useful either way.

As science is a knowledge-seeking endeavor, it revolves around building ever-better conceptual models. While the interplay between model and data can take many forms, most of us learn a sort of laboratory-focused scientific method that consists of hypothesis, experiment, data, and revised hypothesis.

In a now-famous lecture, quantum physicist Richard Feynman similarly described to his students the process of discovering a new law of physics: “First, we guess it. Then we compute the consequences of the guess to see what… it would imply. And then we compare those computation results to nature… If it disagrees with experiment, it’s wrong. In that simple statement is the key to science.”

In order to “compute the consequences of the guess,” one needs a model. For some phenomena, a good conceptual model will suffice. For example, one of the bedrock principles taught to young geologists is T.C. Chamberlin’s “method of multiple working hypotheses.” He advised all geologists in the field to keep more than one hypothesis – built out into full conceptual models – in mind when walking around making observations.

That way, instead of simply tallying up all the observations that are consistent with your favored hypothesis, the data can more objectively highlight the one that is closer to reality. The more detailed your conceptual model, the easier it is for an observation to show that it is incorrect. If you know where you expect a certain rock layer to appear and it’s not there, there’s a problem with your hypothesis.

There is math involved

But at some point, the system being studied becomes too complex for a human to “compute the consequences” in their own head. Enter the mathematical model. This can be as simple as a single equation solved in a spreadsheet or as complex as a multi-layered global simulation requiring supercomputer time to run.

And this is where the modeler’s adage, coined by George E.P. Box, comes in: “All models are wrong, but some are useful.” Any mathematical model is necessarily a simplification of reality and is thus unlikely to be complete and perfect in every possible way. But perfection is not its job. Its job is to be more useful than no model.

Consider an example from a science that generates few partisan arguments: hydrogeology. Imagine that a leak has been discovered in a storage tank below a gas station. The water table is close enough to the surface here that gasoline has contaminated the groundwater. That contamination needs to be mapped out to see how far it has traveled and (ideally) to facilitate a cleanup.

If money and effort were no object, you could drill a thousand monitoring wells in a grid to find out where it went. Obviously, no one does this. Instead, you could drill three wells close to the tank, determining the characteristics of the soil or bedrock, the direction of groundwater flow, and the concentration of contaminants near the source. That information can be plugged into a groundwater model simple enough to run on your laptop, simulating likely flow rates, chemical reactions, microbial activity breaking down the contaminants and so on, and spitting out the probable location and extent of contamination. That’s simply too much math to do in your head, but we can quantify the relevant physics and chemistry and let the computer do the heavy lifting.
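As a flavour of the kind of calculation such a model automates, here is a minimal back-of-the-envelope sketch based on Darcy’s law. Every parameter value (conductivity, gradient, porosity, retardation, elapsed time) is hypothetical, and a real groundwater model would solve the full advection-dispersion-reaction problem rather than this single estimate.

```python
# Back-of-the-envelope plume estimate (hypothetical numbers throughout).
K = 1e-4                       # hydraulic conductivity (m/s)
i = 0.005                      # hydraulic gradient inferred from the three wells
n_e = 0.25                     # effective porosity
R = 2.0                        # retardation factor (sorption slows the plume)
t = 5 * 365.25 * 24 * 3600     # elapsed time since the leak: 5 years, in seconds

seepage_velocity = K * i / n_e          # average linear groundwater velocity (m/s)
plume_velocity = seepage_velocity / R   # contaminant front velocity (m/s)
plume_extent = plume_velocity * t       # rough down-gradient travel distance (m)

print(f"Groundwater velocity: {seepage_velocity * 86400:.2f} m/day")
print(f"Estimated plume extent after 5 years: {plume_extent:.0f} m")
```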

A truly perfect model prediction would more or less require knowing the position of every sand grain and every rock fracture beneath the station. But a simplified model can generate a helpful hypothesis that can easily be tested with just a few more monitoring wells – certainly more effective than drilling on a hunch.

Don’t shoot the modeler

Of course, Senator Cornyn probably didn’t have groundwater models in mind. The tweet was prompted by work with epidemiological models projecting the effects of COVID-19 in the United States. Recent modeling incorporating the social distancing, testing, and treatment measures so far employed is projecting fewer deaths than earlier projections did. Instead of welcoming this sign of progress, some have inexplicably attacked the models, claiming these downward revisions show earlier warnings exaggerated the threat and led to excessive economic impacts.

There is a blindingly obvious fact being ignored in that argument: earlier projections showed what would happen if we didn’t adopt a strong response (as well as other scenarios), while new projections show where our current path sends us. The downward revision doesn’t mean the models were bad; it means we did something.

Often, the societal value of scientific “what if?” models is that we might want to change the “if.” If you calculate how soon your bank account will hit zero if you buy a new pair of pants every day, it might lead to a change in your overly ambitious wardrobe procurement plan. That’s why you crunched the numbers in the first place.

Yet complaints about “exaggerating models” are sadly predictable. All that fuss about a hole in the ozone layer, and it turns out it stopped growing! (Because we banned production of the pollutants responsible.) Acid rain was supposed to be some catastrophe, but I haven’t heard about it in years! (Because we required pollution controls on sulfur-emitting smokestacks.) The worst-case climate change scenario used to be over 4°C warming by 2100, and now they’re projecting closer to 3°C! (Because we’ve taken halting steps to reduce emissions.)

These complaints seem to view models as crystal balls or psychic visions of a future event. But they’re not. Models just take a scenario or hypothesis you’re interested in and “compute the consequences of the guess.” The result can be used to further the scientific understanding of how things work or to inform important decisions.

What, after all, is the alternative? Could science spurn models in favor of some other method? Imagine what would happen if NASA eyeballed Mars in a telescope, pointed the rocket, pushed the launch button, and hoped for the best. Or perhaps humanity could base its response to climate change on someone who waves their hands at the atmosphere and says, “I don’t know, 600 parts per million of carbon dioxide doesn’t sound like much.”

Obviously these aren’t alternatives that any reasonable individual should be seriously considering.

The spread of COVID-19 is an incredibly complex process and difficult to predict. It depends on some things that are well studied (like how pathogens can spread between people), some that are partly understood (like the characteristics of the SARS-CoV-2 virus and its lethality), and some that are unknowable (like the precise movements and actions of every single American). And it has to be simulated at fairly fine scale around the country if we want to understand the ability of hospitals to meet the local demand for care.

Without computer models, we’d be reduced to back-of-the-envelope spit-balling – and even that would require conceptual and mathematical models for individual variables. The reality is that big science requires big models. Those who pretend otherwise aren’t defending some “pure” scientific method. They just don’t understand science.

We can’t strip science of models any more than we can strip it of knowledge.

Source: https://arstechnica.com/science/2020/04/no-senator-science-cant-do-away-with-models/