School of GeoSciences

Conor McKernon


What I was going to do...

My project was originally going to look at seismic efficiency - the proportion of energy released by earthquakes in an area (obtained by summing seismic moments) compared to the energy put in by ongoing tectonic forcing (which is essentially constant over time). Such an analysis would allow the repeat times of the largest earthquakes to be estimated.
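As a rough illustration of the bookkeeping involved (not the analysis itself), the sketch below compares a summed catalogue moment with an assumed tectonic moment input. The magnitude-to-moment conversion is the standard one; the function name, the example catalogue, its duration and the tectonic moment rate are all hypothetical placeholders.

```python
import numpy as np

def seismic_efficiency(magnitudes, catalogue_years, tectonic_moment_rate):
    """Ratio of seismically released moment to tectonically accumulated moment.

    magnitudes           : moment magnitudes (Mw) of events in the region
    catalogue_years      : duration of the complete catalogue, in years
    tectonic_moment_rate : assumed tectonic moment input, in N m per year
    """
    # Standard conversion from Mw to scalar seismic moment (N m):
    # M0 = 10^(1.5*Mw + 9.05)
    m0 = 10.0 ** (1.5 * np.asarray(magnitudes) + 9.05)
    return m0.sum() / (tectonic_moment_rate * catalogue_years)

# Hypothetical 40-year catalogue for a region loaded at 1e18 N m per year
print(seismic_efficiency([6.1, 5.8, 7.2, 6.5], 40.0, 1.0e18))
```

An efficiency well below one would suggest that the largest expected event has not yet appeared in the catalogue (or that much of the deformation is released aseismically), which is exactly the ambiguity described below.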

However, one drawback of such a method is the relatively short duration of complete seismic records - maybe only 30 or 40 years - compared to the typical time between characteristic earthquakes, which can be several hundred years. This can imply seismic efficiencies that are either very small (for regions where no large earthquakes have occurred over the duration of seismic records) or very large (>> 1, for regions where large events have occurred). Another complicating factor is that some plate boundaries tend to be more 'lubricated', releasing more of their energy aseismically. Until we have a longer database of complete seismic records, the success of such an approach will be limited.

Earthquake Triggering (what I've actually done)

So, instead, my project has been about trying to place objective and quantitative limits on the extent of earthquake triggering. The classical view of earthquake triggering is based around a well-defined model of a large mainshock followed by smaller aftershocks. Such sequences are sometimes also preceded by foreshocks, which has led researchers to try to find some inherent difference between foreshocks and their subsequent mainshocks. If such a distinction could be made in real time, we would have some warning about impending large earthquakes.

Unfortunately, such distinctions have not been observed, or at least none that everyone can agree on. Current thinking favours the view that there is no inherent difference between foreshocks, mainshocks and aftershocks apart from their magnitudes. Such labels can only be applied retrospectively - it is not possible to (consistently and accurately) predict what size an event will be before it occurs. As such, we need to learn more about earthquake populations as a whole, building on our empirical understanding of seismicity, before moving on to the controlling physical framework.

What is a triggered earthquake?

A triggered earthquake is basically an earthquake that in all likelihood was going to happen sooner or later anyway, but is forced to rupture sooner (this is known as a 'clock advance') due to the stress changes brought about by another earthquake. These stress changes can be either dynamic or static. Dynamic stress changes are caused by the passage of seismic waves, and so can only directly trigger other events immediately after a triggering event. Static stress changes are changes in the Earth's stress field after an earthquake, with the extent of the changes related to the size of the earthquake. So, we would expect a larger earthquake to trigger aftershocks over a larger range.
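To make the last point concrete, here is a back-of-the-envelope sketch assuming a far-field point-source approximation, in which the static stress change falls off roughly as M0/r^3. The helper name and the 10 kPa triggering threshold are purely illustrative assumptions, not values from this work.

```python
# Back-of-the-envelope sketch: if the far-field static stress change scales
# roughly as M0 / r^3, the distance at which it drops to a given triggering
# threshold grows as M0^(1/3). All numbers here are illustrative assumptions.

def triggering_radius_km(mw, stress_threshold_pa=1.0e4):
    """Nominal distance (km) at which the static stress change reaches the threshold."""
    m0 = 10.0 ** (1.5 * mw + 9.05)            # scalar moment in N m
    r_m = (m0 / stress_threshold_pa) ** (1.0 / 3.0)
    return r_m / 1.0e3

for mw in (5.0, 6.0, 7.0, 8.0):
    print(f"Mw {mw}: ~{triggering_radius_km(mw):.0f} km")
```

Because the moment grows by a factor of 10^1.5 per unit of magnitude and the range scales as the cube root of the moment, each extra unit of magnitude stretches this nominal triggering distance by roughly a factor of three.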

Earthquake predictability

The largest, most damaging events are effectively random and unpredictable, with no universal precursors yet observed. However, aftershocks and triggered earthquakes appear to obey some well-known empirical rules, such as Omori's law (which describes the rate at which aftershocks decline). So, to learn more about triggering, and the scales at which it operates, we need to try to separate the triggering events from the triggered ones.
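For concreteness, the modified Omori law gives the aftershock rate as n(t) = K/(c + t)^p. The sketch below simply evaluates it; K, c and p are set to illustrative values rather than anything fitted here.

```python
def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)^p.

    t_days  : time since the mainshock, in days
    K, c, p : empirical constants; the defaults here are purely illustrative.
    """
    return K / (c + t_days) ** p

# The rate falls by roughly an order of magnitude per decade in time
# (slightly more when p > 1).
for t in (0.1, 1.0, 10.0, 100.0):
    print(f"{t:6.1f} days after the mainshock: ~{omori_rate(t):.1f} events per day")
```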

I look at the distribution of time and distance separations between causally related pairs of events - both for real data and for earthquake catalogues with randomised times. The differences can tell us about the temporally non-random component of seismicity - in other words aftershocks - and how they interact with each other. This in turn allows inferences to be made about diffusive processes in the lithosphere, such as how stresses are transmitted over time.
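A minimal sketch of that comparison might look like the following. The catalogue columns, the 30-day pairing window, the synthetic stand-in catalogue and the use of simple time shuffling as the randomisation are all assumptions for illustration, not the actual procedure used.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def pair_separations(times, lats, lons, max_dt_days=30.0):
    """Time and distance separations between each event and every later event
    occurring within max_dt_days of it."""
    order = np.argsort(times)
    t, la, lo = times[order], lats[order], lons[order]
    dts, drs = [], []
    for i in range(len(t) - 1):
        j = i + 1
        while j < len(t) and t[j] - t[i] <= max_dt_days:
            dts.append(t[j] - t[i])
            drs.append(haversine_km(la[i], lo[i], la[j], lo[j]))
            j += 1
    return np.array(dts), np.array(drs)

# Synthetic stand-in catalogue (times in days); in practice this would be a
# real catalogue. Comparing the real data against a time-shuffled version
# isolates the temporally non-random (triggered) component.
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(1.0, 500))
lats, lons = rng.uniform(-10, 10, 500), rng.uniform(-10, 10, 500)

dt_real, dr_real = pair_separations(times, lats, lons)
dt_rand, dr_rand = pair_separations(rng.permutation(times), lats, lons)
print(np.median(dr_real), np.median(dr_rand))
```

An excess of short time and distance separations in the real data relative to the randomised catalogue is the signature of triggering, and how that excess evolves with time is what the figure below tracks.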

This sort of analysis also allows time-dependent seismic hazard to be estimated following an earthquake of a given magnitude. It may also be possible to look at how some of the underlying physical mechanisms of plate tectonics influence the nature and extent of earthquake triggering.


Figure showing the evolution of the mean distance between triggered (temporally correlated) earthquakes, for global data with a range of magnitude thresholds. This shows how the characteristic inter-event length grows over time as the lower magnitude threshold is changed. At higher magnitude thresholds (meaning fewer total events - there are far fewer large earthquakes than small ones), the signal becomes noisier. The same occurs as the time window between triggering and triggered earthquake becomes smaller. This noise is due to the decreased number of pairs available for analysis. The main thing to note from this diagram is that the inter-event length grows over time, and accelerates at larger times.