In this issue:
Director: Professor Russell Blong
NHRC is kindly sponsored by:
The Natural Hazards Research Centre has been responsible for the development of a large array of databases on a variety of natural hazards. Over the years, our databases have progressed from simple 'flat' databases (i.e. a spreadsheet) to more complicated 'relational' databases. Relational databases allow data to be broken down into several tables, which then have either a one-to-one or one-to-many relationship with each other. This means that more complex data can be entered, less repetition of data is necessary and it is easier to ask questions of the database.
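The one-to-many idea can be sketched with a small example. The table and column names below are hypothetical, and Python's built-in sqlite3 module is used purely for illustration; the actual database is housed in Microsoft Access:

```python
import sqlite3

# One hazard event can affect many locations: the event details are stored
# once, and each location row points back to its event. Table and column
# names here are illustrative, not the NHRC schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE event (event_id INTEGER PRIMARY KEY, name TEXT, hazard_type TEXT)")
cur.execute("""CREATE TABLE location (
                   location_id INTEGER PRIMARY KEY,
                   event_id INTEGER REFERENCES event(event_id),
                   place TEXT,
                   deaths INTEGER)""")
cur.execute("INSERT INTO event VALUES (1, 'Cyclone Rona', 'tropical cyclone')")
cur.executemany("INSERT INTO location VALUES (?, ?, ?, ?)",
                [(1, 1, 'Daintree', 0), (2, 1, 'Mossman', 0)])

# 'Asking a question of the database': which places did each event affect?
# The event details are not repeated in the location table.
rows = cur.execute("""SELECT e.name, l.place
                      FROM event e
                      JOIN location l ON l.event_id = e.event_id
                      ORDER BY l.location_id""").fetchall()
print(rows)  # [('Cyclone Rona', 'Daintree'), ('Cyclone Rona', 'Mossman')]
```

Because the cyclone's name and type live in a single event row, correcting them later means editing one record rather than every affected location - the reduced repetition described above.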
The NHRC is now moving towards integrating all our separate databases into one large Natural Hazards Database, which will be housed in Microsoft Access. We have thought a lot about the type of information which is relevant in a natural hazard database, in terms of present uses and possible future uses, and about the best way to structure this information in order to maximise data consistency while retaining sufficient detail. Integrating so many separate databases, all of different vintages and developed for different purposes in a variety of software packages, is a challenge, but it will be worth it in the end.
The new Natural Hazards Database will provide information about an event - say, Cyclone Rona - including the locations affected by the event, the damage at those locations, the number of people who died at each location, the physical characteristics of the event, its causes, associated hazards and a list of references used in the compilation of the record. This will facilitate comparison of data between different natural hazards, as well as ensuring the (hopefully) comprehensive entry of information about future natural hazard events.
Our experience in natural hazard database development has highlighted a couple of issues. Firstly, it is essential to think about the purpose of the database. While this seems obvious, it is remarkable how often it is overlooked in this information age, when everyone wants to collect data. Secondly, the structure of the database is of primary importance - get this right from the beginning and life is much easier! Finally, it is vital to include metadata: data about the data, ranging from confidence levels to the source of the data. The inclusion of metadata means that not only are we able to estimate the one-in-one-hundred-year event, but we can also be reasonably sure of how accurate that estimate is!
The Natural Hazards Database will contain information about bushfires, landslides, earthquakes, floods, tropical cyclones, hail storms, lightning, windstorms and tsunamis. Most of the data will be incorporated into a GIS-based CD-ROM which will be available for distribution in mid-1999.
For further information please contact:
We are used to seeing media reports of the latest world-scale disaster. In the first week or so after the event, casualty and damage figures change daily, depending on whom individual reporters have interviewed. Then, just as the picture is getting clearer, the story reaches its use-by date and there are no more reports unless we take the time to search the Internet for the dedicated agency information. In many cases our search will be for particular aspects of the disaster that interest us - very often the hazard - but a clear overall picture is rarely found.
Yet the impact of each disaster must be more comprehensively assessed if response, recovery and reconstruction activities are to be prioritised effectively. Unfortunately assessment means different things to different people. This article merely deals with the initial or "Needs" assessment.
Disaster management authorities rely on reports from the affected area when making the first response decisions. They must then quickly identify the area affected, the casualties and immediate damage, and any factors that could cause further casualties and damage or secondary disasters, before committing too many resources. A preliminary vehicle reconnaissance can identify the worst affected areas, but this must rapidly be followed by an early ground assessment seeking additional information. The vehicle assessment will be fragmentary and possibly contradictory, but it provides a basis for immediate action and can be upgraded as new information is received.
The first rapid assessment is usually followed by a more formal Needs Assessment that attempts to discover the disaster response needs of as much of the affected area as possible. It may be carried out as a separate exercise or as an extension of the initial rapid assessment and will utilise available national expertise. It must still be a fairly fast process and quality is likely to suffer because of the urgent need for information but it will be better structured and more comprehensive than early rapid assessments.
The main subjects addressed in a Needs Assessment are:
If international assistance is required to deal with the consequences of a disaster, countries and agencies intending to provide assistance (usually referred to collectively as "donors") are likely to want an independent Needs Assessment to confirm that the affected country has identified the needs correctly. Donors are also likely to want information on the response activities of the affected country and its ability to receive, store and distribute outside relief assistance effectively.
After many disasters, the need for independent assessment leads to a procession of "assessment teams" visiting a disaster scene and causing increasing distress to victims, as they ask questions already answered for earlier teams, and make support demands on the already stretched infrastructure. Some teams include members with no previous disaster experience or knowledge of the culture and disaster management structure of the country. They produce reports that may not be checked with the affected country or compared with those of other assessment teams. The resulting confusion can further delay response and increases the victims' distress.
In recent years the United Nations Disaster Assessment and Coordination Team has been formed. Experts from various disciplines around the world may be called upon to form an assessment team for a particular disaster. The team cooperates with national disaster managers and with non-government organisations already on the scene to produce a comprehensive assessment. Donors increasingly accept these assessments as a sound basis for action.
Nevertheless, more work is still needed to define common Needs Assessment aims and practices. It is expected that the increased attention of media and public to disaster impacts will lead countries and agencies to pay greater attention to this activity in the next few years.
Joe Barr has been involved in disaster needs assessment in at least seven different countries and is preparing a thesis on this topic. He can be contacted by telephone and fax on 02-6247 3973 or by Email as email@example.com.
The occurrence and consequences of a natural peril are determined by a plethora of environmental and socioeconomic factors - climatic, geologic or human-made. Effective coupling of these factors in a specific spatial/temporal context is the key to a generic natural hazard risk assessment, which generally includes risk perception, likelihood, identification, process and consequences.
The past decade has witnessed the progress of GIS-related technologies and multi-criteria evaluation (MCE) theory in many hazard applications. The GIS-MCE approach enables the integration of themes and subjective knowledge in an interpretable, open way, and can therefore facilitate site-specific risk analysis and decision making. This article introduces a new typology of GIS-MCE integration for hazards risk assessment from three perspectives: data sources, data structures and methods development.
The first dimension concerns the nature of the data employed. "Hard" data sources use different scales of measurement, such as ordinal, interval and ratio. Presently, most GIS applications use "hard" data sources in pursuit of accuracy and certainty. However, many risk assessment criteria and risk ratings cannot be defined precisely. The difficulties often stem from unquantifiable and incomplete information during the risk evaluation process. On the other hand, "soft" data derived from fuzzy membership functions and operators tend to represent realistic situations better. For example, linguistic terms for different risk scales, from "low risk" to "high risk", can be converted to fuzzy numbers and expressed as continuous values in the interval [0, 1].
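As a minimal sketch of this "softening" of a linguistic boundary, the function below maps a crisp risk rating onto a degree of membership in [0, 1] using a simple linear membership function; the rating scale and thresholds are illustrative assumptions, not values from any particular study:

```python
def fuzzify(rating, low, high):
    """Degree of membership in the 'high risk' fuzzy set for a crisp rating.

    Ratings at or below `low` get membership 0.0, ratings at or above `high`
    get 1.0, and ratings in between rise linearly - a continuous value in
    [0, 1] rather than a hard yes/no classification.
    """
    return min(1.0, max(0.0, (rating - low) / (high - low)))

# Illustrative 0-10 risk rating axis with an assumed fuzzy boundary from 2 to 8.
for rating in (1.0, 5.0, 9.0):
    print(rating, fuzzify(rating, low=2.0, high=8.0))
# 1.0 -> 0.0 (clearly low risk), 5.0 -> 0.5 (partial membership), 9.0 -> 1.0
```

In practice several overlapping sets ("low risk", "medium risk", "high risk") would each have their own membership function, so one rating can belong partially to two sets at once.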
The second dimension concerns the data structure on which many MCE models rely - raster or vector. Since natural hazards have explicit spatial distributions and temporal dynamics, raster data are preferred, from the point of view of both theoretical research and practice, as they offer an effective spatial structure for the various hazard phenomena. For census-based risk data using a vector data structure, it is difficult to ensure effective spatial representation of the data during analysis. We suggest that census data be disaggregated at the following three levels, using remotely sensed images to highlight spatial characteristics: (1) residential and non-residential classification; (2) land covers or dwelling densities within the residential area; and (3) street blocks or individual dwellings identified using very high resolution air-borne and space-borne images.
The third dimension concerns the development of MCE methods which are applicable to the above two dimensions. Many traditional MCE methods, evolved from multi-attribute/multi-objective/group decision making theory, are capable of integrating various hazards factors in a GIS overlay operation. Recently, artificial intelligence (AI) has offered new opportunities to combine various factors and to explore patterns among variables. For example, neural networks fed by "hard" and "soft" data sources could effectively describe spatial patterns of the hazards risk.
The above three perspectives represent opportunities for effective hazards risk assessment today, both theoretically and practically. To complement and operationalise these ideas, an ongoing project of developing a "MCE-RISK" tool kit is under way. Main modules of the tool kit include: (1) an independent and effective data visualisation platform which can import popular GIS data sources; (2) data standardisation and conventional MCE methods (such as TOPSIS, compromise programming); (3) a dozen fuzzy membership functions and operators in support of "soft" risk evaluation; and (4) AI-based risk pattern analysis.
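As an illustration of one of the conventional MCE methods named above, the sketch below applies TOPSIS (ranking alternatives by closeness to an ideal solution) to a hypothetical decision problem; the decision matrix, weights and the assumption that all criteria are benefit criteria are invented for the example:

```python
import numpy as np

def topsis(matrix, weights):
    """Score alternatives (rows) against criteria (columns) by TOPSIS.

    All criteria are assumed to be benefit criteria (higher is better).
    Returns the closeness coefficient of each alternative in [0, 1];
    higher means closer to the ideal solution.
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise each column
    v = norm * weights                               # apply criterion weights
    ideal, anti = v.max(axis=0), v.min(axis=0)       # ideal and anti-ideal points
    d_best = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_worst = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
    return d_worst / (d_best + d_worst)

# Hypothetical decision matrix: three candidate sites rated on three criteria.
m = np.array([[7.0, 9.0, 8.0],
              [8.0, 7.0, 6.0],
              [9.0, 6.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])                        # assumed criterion weights

scores = topsis(m, w)
print(scores.argmax())                               # index of the best-ranked site
```

Compromise programming, the other method named, follows a similar pattern but measures distance to the ideal point with a tunable metric rather than the two Euclidean distances used here.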
For more information please contact:
Floods in Narrabri, Gunnedah, Bathurst, Gippsland, Townsville, Katherine, Gympie and other parts of southeast Queensland over the last 18 months have focussed the attention of both homeowners and insurers on flood risk in Australia. Since the August 1998 flood in Wollongong, a number of direct insurers have indicated a willingness to extend flood cover to domestic property.
In early 1998 Macquarie Research Ltd, on behalf of the Natural Hazards Research Centre, purchased the Intellectual Property to the well-known flood loss model, ANUFLOOD. In the last few months ANUFLOOD has formed the core of a new Floodplain Inundation Risk Model - FIRM, designed with risk rating for insurance purposes in mind.
Insurance portfolio data, such as street addresses, are reduced to latitudes and longitudes or Australian Map Grid locations. Ideally, floor elevation data and the number of building levels or storeys are included in the portfolio data, but floor elevations can be user-selected to provide a range of values reflecting the likely distribution.
The stage-damage curves available at present are modifications of those used in ANUFLOOD. Losses to Buildings, Contents, or Buildings + Contents can be estimated as dollar amounts. The scatter in the available loss data is considerable; detailed studies of future flood losses should improve the insured loss estimates in FIRM.
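The way a stage-damage curve converts a modelled flood depth into an estimated loss can be sketched as follows; the depths and dollar figures below are purely illustrative and are not the ANUFLOOD or FIRM curves:

```python
import numpy as np

# Illustrative stage-damage curve for contents: water depth above floor
# level (metres) against estimated loss (dollars). Real curves are derived
# from surveyed flood damage data.
depths = np.array([0.0, 0.3, 1.0, 2.0])          # stage (m above floor level)
losses = np.array([0.0, 5000.0, 15000.0, 25000.0])

# Loss at a modelled depth is read off the curve by linear interpolation.
flood_depth = 0.65                                # modelled depth at one property
loss = np.interp(flood_depth, depths, losses)     # approximately $10,000 here
print(loss)
```

Summing such per-property losses across a geocoded portfolio, for floods of a range of return periods, gives the kind of insured-loss estimate FIRM is designed to produce.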
Outputs from FIRM include:
Two versions of FIRM will be used. The simpler version is a new model, focussed on providing an assessment of flood risk at ground elevation for each 25-metre grid cell. This model uses no portfolio information or stage-damage data. The more complex model includes the potential to select all of the outputs listed above.
ANUFLOOD aficionados will recognise that FIRM is headed in a new direction, with the focus firmly on estimating aspects of flood damage of vital interest to the insurance industry. For the next few months our efforts will be devoted to potential damage to domestic policies in New South Wales; later, with the aid of insurers and other parties, we will shift attention to other states and territories and to potential commercial losses.
For further information please contact: