Modelling realistic ruptures on the Wellington fault
Authors: R Benites, R Robinson, T Webb, P McGinty, Institute of Geological & Nuclear Sciences
Paper number: 3707 (EQC 99/373)
Abstract
The aim of this project was to realistically simulate a rupture of the Wellington fault. This is a difficult problem because, while we have a reasonable understanding of the low frequency behaviour (say 2-100 seconds period) of fault ruptures, we are not able to accurately model the high frequency behaviour (above about 1 Hz). Our inability to model high frequencies stems mainly from a lack of knowledge of the fine detail of earthquake ruptures and how seismic waves propagate through the earth. Typical approaches in the past have been to model these two processes separately, to carefully match the seismograms where they overlap in frequency, and then to combine them.
Our approach differs from the more conventional approach in that we have used as a starting point some computer-generated Wellington fault ruptures that have been produced by a complex model of interacting fault patches (1,500 patches distributed uniformly over the fault). We have shown that these synthetic rupture models are consistent with real observations of the faulting in large shallow earthquakes world-wide. Thus, for the first time, we have available a sufficiently complex model to generate high-frequency strong ground motion seismograms.
Our aim is to produce realistic seismograms that can be used by the science and engineering community to improve building design or to estimate likely earthquake damage. The key to doing this is to add up the contribution of each of the 1,500 subfaults with the correct timing and faulting behaviour. This was the most difficult part of this project.
Our initial approach was to treat each of the 1,500 subfaults as a small fault in its own right with its own series of small earthquake ruptures that represent that fault’s behaviour during the whole rupture. We have successfully implemented this, but it is very time consuming to compute the results. We then spent a lot of effort on improving the efficiency of the calculations and transferring them to run on a cluster of 21 fast PCs. Even so, it still takes at least one week to do the full rupture simulation for a single site. This is too slow if we are to undertake detailed investigations of the spatial pattern of shaking or to gain insight into what contributes to the different parts of the strong shaking that we observe.
An alternative approach has been to treat each of the 1,500 subfaults as a simple point radiating seismic energy and then to apply a correction to make it look like a small fault that is rupturing. This is much faster than our first approach: as currently implemented, we can complete a calculation for a single site in 5 hours.
In spite of the computer time limitations, we have been able to obtain some results using the first approach mentioned above. The results are very encouraging in that we have generated seismograms (representing earth motion in terms of displacement, velocity, and acceleration) that have realistic peak values compared to data from large shallow world-wide earthquakes. The method does show some deficiencies in addition to being slow. Firstly, there is an enhancement of shaking at frequencies related to the size of the subfaults. Secondly, our assumed model of the earth has very hard rock right to the surface and the way in which seismic waves die out with distance has not been included. This results in accelerations that are too high. Some of these deficiencies are relatively easy to correct.
The results from our second approach are also encouraging in that they produce similar predicted levels of ground shaking far more quickly than in the previous approach. This approach also seems to largely overcome problems related to the size of subfaults. The second approach, as currently implemented, also has a slight deficiency in that the seismograms do not closely match those produced by the first method. This is because the way the two methods smooth the high frequency seismic waves is slightly different. If this smoothing effect is more carefully matched we expect the results to be in much closer agreement.
In summary, we have made a lot of progress towards generating realistic estimates of the shaking that would be produced by a complex Wellington fault rupture. The methods we have developed still have some shortcomings that need to be addressed. It must also be remembered that the approach we have adopted cannot account for localised amplification effects produced by either topography or deep, soft soil layers. These problems are more difficult to solve and need alternative approaches.
Technical Abstract
We have defined a 3-segment fault model for the Wellington fault based on geological evidence and the fault geometry. The model has a 75 km length, a 20 km width, and dips at 80° to the northwest. Purely horizontal dextral faulting has been assumed. Scaling relations suggest a likely magnitude of Mw 7.4-7.6. We have modelled a smooth rupture across this fault and generated synthetic seismograms for two near-fault sites to test the effect of fault geometry. These tests show that a 3-segment model gives noticeably different low frequency seismograms for near-fault stations. We interpret this as being due to changes in station location with respect to the radiation pattern from various parts of the fault as the rupture propagates past, highlighting the need for accurate fault geometry and site locations if reliable seismograms are to be generated.
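As an illustration of how scaling relations of this kind constrain the magnitude, the short Python sketch below estimates Mw from the stated fault dimensions using the standard moment-magnitude relation. The rigidity and average-slip values are illustrative assumptions only, not parameters taken from our fault model.

```python
# Rough moment-magnitude check for a 75 km x 20 km fault plane.
# The rigidity and average-slip values are illustrative assumptions only.
import math

length_m = 75e3      # fault length (m), from the 3-segment model
width_m = 20e3       # down-dip width (m)
rigidity = 3.3e10    # crustal rigidity (Pa), assumed typical value

for avg_slip_m in (3.5, 4.5, 5.5):                        # assumed average slips (m)
    moment = rigidity * length_m * width_m * avg_slip_m   # seismic moment M0 (N m)
    mw = (2.0 / 3.0) * (math.log10(moment) - 9.05)        # Hanks & Kanamori (1979)
    print(f"average slip {avg_slip_m} m -> Mw {mw:.2f}")  # falls within the quoted Mw 7.4-7.6 range
```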
We have used synthetic seismicity models, based on the interaction between many fault patches, or subfaults, to generate detailed slip distributions for ruptures over the model fault plane. We have shown that these slip distributions are consistent with relations derived from large global earthquakes. We then treated each subfault in the synthetic seismicity model as a small finite fault, computed the radiation from that fault, and then summed over all subfaults to produce synthetic seismograms. We have found that this approach has two shortcomings. First, the calculations are very computer intensive. After a significant amount of effort improving efficiency, it still requires at least a week of computation to calculate the seismograms for a single site on a cluster of 21 fast PCs. More rapid turn-around is required for detailed studies of the predicted ground shaking and what factors affect it. The second shortcoming was that displacement spectra are enhanced over the 2-3.5 Hz range and then show a sudden decrease at frequencies related to the subfault size. Artefacts of this kind are not unexpected, but given that they are nearly two orders of magnitude in size they are quite undesirable if the synthetic seismograms are to be of practical use. We believe that the artefact is related to the frequencies produced by the starting and stopping of the ruptures across each individual subfault.
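To make the summation step concrete, the sketch below shows the general structure of a finite-fault summation: each subfault contributes a slip-weighted wavelet, delayed by its rupture time, to the seismogram at the site. It is only a schematic with a placeholder Green's function and toy input values; the actual calculation uses full wavefield Green's functions for a layered crustal model, which is what makes it so computationally expensive.

```python
# Schematic finite-fault summation: each subfault contributes a delayed,
# slip-weighted wavelet to the site seismogram. The Green's function here is a
# crude placeholder, not the full wavefield response used in the study.
import numpy as np

dt = 0.01                      # sample interval (s)
nt = 8192                      # number of samples
seismogram = np.zeros(nt)

def greens_function(distance_m, nt, dt):
    """Placeholder impulse response: a delayed, geometrically spread spike."""
    g = np.zeros(nt)
    t_arrival = distance_m / 3500.0                  # assumed S-wave speed (m/s)
    g[min(int(t_arrival / dt), nt - 1)] = 1.0 / max(distance_m, 1.0)
    return g

def source_time_function(rise_time_s, nt, dt):
    """Boxcar slip-rate function with the given rise time."""
    stf = np.zeros(nt)
    n = max(int(rise_time_s / dt), 1)
    stf[:n] = 1.0 / n
    return stf

# Toy subfault list: (slip in m, rupture time in s, distance to site in m).
# In the real calculation these come from one synthetic-seismicity rupture
# over the 1,500 patches of the fault model.
subfaults = [(2.0, 0.0, 12000.0), (3.5, 4.0, 15000.0), (1.2, 8.5, 20000.0)]

for slip, t_rupture, dist in subfaults:
    wavelet = np.convolve(source_time_function(1.0, nt, dt),
                          greens_function(dist, nt, dt))[:nt]
    shift = int(t_rupture / dt)                      # delay by the patch rupture time
    seismogram[shift:] += slip * wavelet[:nt - shift]
```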
As a more practical alternative to the finite fault approach, we then treated each subfault as a point source, but included directivity through adding correction factors for the P and S waves. This reduced computation times on the PC cluster to about five hours, and also greatly reduced the artefact related to subfault size. This method, however, also seems to have some shortcomings. Waveforms and spectra differ significantly from the finite fault results. This is most likely to be due to the use of the same rise time as for the finite fault approach and can be remedied by adjusting the rise time so that the frequency content of the two approaches is matched.
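Directivity corrections for a point source are commonly written in the form 1/(1 - (v_r/c) cos θ), applied with the appropriate P- or S-wave speed. The sketch below illustrates that standard factor with assumed velocities; it is illustrative only and does not reproduce the exact correction factors used in our calculations.

```python
# Illustrative directivity correction for a point-source subfault: the classic
# factor 1 / (1 - (v_rupture / c) * cos(theta)), evaluated for the P- and
# S-wave speeds. Velocities below are assumed values for demonstration.
import math

def directivity_factor(rupture_velocity, wave_speed, theta_rad):
    """Amplification of radiation at angle theta from the rupture direction."""
    return 1.0 / (1.0 - (rupture_velocity / wave_speed) * math.cos(theta_rad))

v_rupture = 2800.0        # assumed rupture velocity (m/s), ~0.8 of S-wave speed
vp, vs = 6000.0, 3500.0   # assumed P- and S-wave speeds (m/s)

for theta_deg in (0, 45, 90, 135, 180):
    theta = math.radians(theta_deg)
    print(f"theta={theta_deg:3d}  P factor={directivity_factor(v_rupture, vp, theta):.2f}"
          f"  S factor={directivity_factor(v_rupture, vs, theta):.2f}")
```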
Both methods of rupture summation produce displacement and velocity seismograms with peak amplitudes near to those expected, based on observations from global earthquakes. The only exceptions to this are the high accelerations due to very close asperities (regions of high slip). These warrant further investigation for a variety of rupture scenarios. Peak accelerations tend to be a little high, but this is most likely to be due to using a model with high velocities (akin to very hard rock) near to the surface and the lack of any attenuation terms. A high priority for further work would be to improve the velocity model and to include the effects of attenuation. The models correctly reproduce clear fault-normal directivity pulses at low frequencies. In future work we hope to examine these in detail to look at their behaviour for a range of earthquake magnitudes and to see when the breakdown in directivity is occurring as a function of frequency.
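If attenuation is added in future work, one standard way to include it is a frequency-domain t* operator that damps high frequencies along the travel path. The sketch below is purely illustrative, with assumed Q and travel-time values, and is not part of the current implementation.

```python
# Illustrative anelastic attenuation: multiply the spectrum by exp(-pi * f * t*)
# with t* = travel_time / Q. The Q and travel-time values are assumptions chosen
# only to show how high frequencies would be damped.
import numpy as np

def attenuate(seismogram, dt, travel_time_s, q_factor):
    """Apply a whole-path t* attenuation operator to a time series."""
    spectrum = np.fft.rfft(seismogram)
    freqs = np.fft.rfftfreq(len(seismogram), d=dt)
    t_star = travel_time_s / q_factor
    return np.fft.irfft(spectrum * np.exp(-np.pi * freqs * t_star),
                        n=len(seismogram))

# e.g. a 10 s path through rock with Q ~ 200 noticeably damps energy above a few hertz
damped = attenuate(np.random.randn(4096), dt=0.01, travel_time_s=10.0, q_factor=200.0)
```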
We see the progress we have made in implementing two methods for summing subfault ruptures produced from a synthetic seismicity catalogue as a big step forward over other techniques that have had to rely on the synthesis of separate deterministic and stochastic approaches for low and high frequencies, respectively. We are very encouraged that both of our summation methods give reasonable values for near-fault motions in terms of peak displacement, velocity, and acceleration and, when more fully developed, will lead to more insight into the generation of strong ground motions. Both of our approaches, however, have some shortcomings that need further work to overcome. The most significant of these, and perhaps the most easily overcome, are the need for lower velocity surface layers (softer rock), the inclusion of attenuation terms, and suitable matching of frequency content across the two approaches by adjusting rise time duration.