Earthquake Data Base
The current data base was assembled over a period of several decades. Constituent catalogs were in some cases entered manually into the data base from published papers and in other cases entered from computer tapes supplied to the NEIC by representatives of the institutions that compiled the catalogs. Some of the catalogs were provided to the U.S. Geological Survey in computer-readable form by the National Geophysical Data Center of the National Oceanic and Atmospheric Administration (NOAA) (Rinehart and others, 1985).
If an entry attributed to a source in the data base is not consistent with the entry for the same earthquake given in the source publication or preferred by the source institution, the latter should be taken as authoritative. Some catalogs in the data base are updated versions of those described in the cited reference, provided by the authors and prepared using the conventions described in the reference.
The USGS has not been able to determine the sources for some catalogs in the data base. These catalogs are retained in the data base because they clearly represent major cataloging efforts, and they may include significant earthquakes that are not included in other catalogs.
The temporal extent of the data base extends from 2000 B.C. through the current week of the Preliminary Determination of Epicenters (PDE) program. Each earthquake in the data base is described by source, date, time, latitude, longitude, magnitude, intensity, and related seismic information. An earthquake is extracted from the data base when it meets all of the user-specified input conditions.
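The extraction described above can be sketched as a filter that keeps only those catalog records satisfying every condition the user supplied. The record fields, condition names, and sample values below are hypothetical illustrations, not the actual query interface of the data base:

```python
# Hypothetical sketch of extracting earthquakes that meet all user conditions.
# Field names and thresholds are illustrative only.

def extract(records, min_mag=None, max_mag=None, lat_range=None, lon_range=None):
    """Return the records that satisfy every condition the user supplied.

    A condition left as None is treated as "no constraint".
    """
    selected = []
    for rec in records:
        if min_mag is not None and rec["magnitude"] < min_mag:
            continue
        if max_mag is not None and rec["magnitude"] > max_mag:
            continue
        if lat_range is not None and not (lat_range[0] <= rec["latitude"] <= lat_range[1]):
            continue
        if lon_range is not None and not (lon_range[0] <= rec["longitude"] <= lon_range[1]):
            continue
        selected.append(rec)
    return selected

# Illustrative mini-catalog (invented values, not real data-base entries).
catalog = [
    {"source": "PDE", "magnitude": 6.1, "latitude": 35.2, "longitude": -118.4},
    {"source": "PDE", "magnitude": 3.4, "latitude": 61.0, "longitude": -150.1},
]
print(extract(catalog, min_mag=5.0))  # keeps only the magnitude-6.1 event
```

An event is retained only when it passes every active condition, which mirrors the "meets all conditions" behavior described above.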
The data base is not static; as additions or modifications are made available, new data are added and old data are deleted. The catalogs are not systematically critiqued for erroneous or missing data. The user should be aware that some of the source-institutions for catalogs in the data base may have produced more recent versions of the catalogs than those in the present data base.
Micro-earthquakes having magnitudes below 1.0 are not retained in the data base. Earthquakes with magnitudes less than 2.0 are found in the data base, but in general, the magnitude level of earthquakes in the data base ranges from 2.5 to 9.5. Users of micro-earthquake data should contact institutions that operate seismograph networks in their area of interest.
Users who examine the output for several different catalogs will commonly find that the catalog entries are inconsistent for some seismic events. The same earthquake may have slightly different hypocenters or magnitudes in two different catalogs, or an earthquake may be listed in one catalog and not in another. The following are some reasons for these discrepancies:
1. Modern hypocenters and magnitudes are computed from seismographic data, using computer programs that make simplifying assumptions about the earth; differences between data base entries may arise from differences in the data or the computer programs. The first modern seismographs were invented between about 1880 and 1900.
2. The compilers of the different data bases sometimes had different conventions about the size or locations of the earthquakes listed.
3. There are a number of different types of earthquake magnitude, corresponding to seismic energy in different frequency bands. The different types of magnitude calculated for a single earthquake commonly differ slightly and, under some circumstances, may differ greatly.
4. Origin-times of earthquakes are listed to different levels of precision in the different data bases, and some data bases list origin-times for historical earthquakes in local time, rather than the more customary Coordinated Universal Time (UTC).
5. Hypocenters of earthquakes are listed to different levels of precision in the different data bases.
6. Estimates of the hypocenters and magnitudes of earthquakes before about 1900 must commonly be based on scant evidence, and the compilers of different catalogs sometimes interpreted the scant evidence differently.
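As point 4 above notes, comparing origin-times across catalogs may first require converting a local time to UTC. A minimal sketch using Python's standard `zoneinfo` time-zone database follows; the time zone and timestamp are arbitrary examples, not entries from the data base:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

def local_to_utc(local_dt, zone_name):
    """Attach the given time zone to a naive local origin-time and convert to UTC."""
    return local_dt.replace(tzinfo=ZoneInfo(zone_name)).astimezone(ZoneInfo("UTC"))

# Illustrative origin-time recorded in Japan Standard Time (UTC+9).
origin_local = datetime(1923, 9, 1, 11, 58, 44)
origin_utc = local_to_utc(origin_local, "Asia/Tokyo")
print(origin_utc)  # 1923-09-01 02:58:44+00:00
```

Note that the correct local-to-UTC offset for a historical earthquake depends on the time-zone rules in force at that date, which `zoneinfo` looks up from the tz database.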
It is ultimately the user's responsibility to assess the accuracy and completeness of a data-set extracted from the data base and to determine whether these are sufficient for the purposes to which the data-set would be applied.