Can we trust the density of earthquake events as an indicator for future events?
We can plot the density of earthquake events.
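A minimal sketch of such a density plot, assuming only lists of epicenter longitudes and latitudes; the synthetic catalog below (a cluster plus uniform background) is invented for illustration:

```python
import numpy as np

def event_density(lons, lats, bins=50):
    """Event counts per grid cell, normalized to events per unit area."""
    counts, xedges, yedges = np.histogram2d(lons, lats, bins=bins)
    cell_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
    return counts / cell_area, xedges, yedges

# Synthetic catalog: a tight cluster plus scattered background events.
rng = np.random.default_rng(0)
lons = np.concatenate([rng.normal(23.0, 0.1, 500), rng.uniform(22.0, 24.0, 100)])
lats = np.concatenate([rng.normal(38.2, 0.1, 500), rng.uniform(37.0, 39.0, 100)])

density, xe, ye = event_density(lons, lats, bins=40)
cell_area = (xe[1] - xe[0]) * (ye[1] - ye[0])
print(round(float(density.sum() * cell_area)))  # total events recovered from the grid
```

The resulting grid can be passed to any contour or heat-map routine; the normalization by cell area is what makes it a density rather than a raw count.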
In a similar way as in the laboratory experiments, I suggest that you go back in the past and find the mean time elapsed between two main events in the wider region of the same location. Then you may use your judgement to define the period you need; the longer the period, the better the approximation. Rare events follow the geometric distribution, and the times between them the exponential one. A period of two years is very short if the mean time between main events is 20 years. Good luck with your research.

Consider whether you are dealing with point processes (memoryless) or feedback-feedforward processes (memoryful, in the sense that seismicity triggers future seismicity via short- and long-range stress interactions, and the whole system gets self-organized, sometimes culminating in a large main shock). In the former case you have a normal distribution of interevent times and random occurrence of large earthquakes. In the latter case you have a q-exponential distribution, because the system is non-extensive. Moreover, you have to consider the effects of aftershocks. Aftershock sequences are correlated in time, their distribution being some variant of Pareto, with the q-exponential the most likely candidate. However, are they only seismicity bursts contained within the interaction radius of their parent main event, or do they contribute to the evolution of seismicity via long-range interactions? And how does this affect the expectation values of earthquake occurrence in the long term?

Could we define another, more suitable index than spatial density? Since, as you wrote, "...events, say, with magnitudes >5 are difficult to include in point density plots keeping a physical meaning of a hypocenter processes", we should probably treat those events a little differently. For example, we could agree that a main event (say magnitude 5) has to be treated as a set which contains at least its aftershocks.
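The interevent-time reasoning above can be sketched numerically. A minimal example, assuming exponentially distributed interevent times; the main-shock years below are invented for illustration, not a real catalog:

```python
import numpy as np

# Hypothetical main-shock years for one region (illustrative, not real data).
main_shock_years = np.array([1904, 1923, 1948, 1965, 1981, 2003, 2021])

intervals = np.diff(main_shock_years)   # years between consecutive main events
mean_dt = intervals.mean()              # MLE of the exponential mean interevent time
rate = 1.0 / mean_dt                    # events per year

# Probability of at least one main event in a 2-year observation window:
p_2yr = 1.0 - np.exp(-rate * 2.0)
print(mean_dt, round(p_2yr, 3))
```

With a mean recurrence near 20 years, the 2-year window captures an event with probability of only about 10%, which is the quantitative content of the warning that two years is "very short".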
I think it could be done via any statistical package if we agree on restrictions for the spatial zone of the event. In this way we could avoid treating small, irrelevant events on an equal footing and track only the scale we are interested in.

The seismicity in northern Tien Shan is "high" compared to seismicity in Central Europe, so far in relative terms. To give you a better impression, the parameter a of the well-known Gutenberg-Richter equation (log N(M) = a - bM) is about 8 for zones of "high" activity and about 5 on average, so the total number of earthquakes is quite high: a 30-year seismic catalogue contains about 20k events.

Be careful in such a plot: the density of stations may be quite different in different regions, and the number of seismic events reported strongly depends on such constraints. Determining the global magnitude of completeness of the catalogue is a good way to get a more global view. There are a lot of instruments in the Corinth area and the NOA catalogue has improved strongly over the last few years. But even taking this into account, it is true that there have been quite a lot of events in this area in recent years.

The density of earthquake events may be used as an indicator for future events (taking into account the well-known long duration of geodynamic activity). At the same time, a dangerous geodynamic event may appear in an area with a low density of earthquake events. For this aim different statistical methodologies are used.
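The Gutenberg-Richter counts quoted above can be checked directly. A short sketch of log10 N(M) = a - bM, with the a-values from the text; the b-value of 1.0 is an assumption for illustration:

```python
# Gutenberg-Richter relation: N(M) is the number of events with magnitude >= M.
# a-values (8 for "high" activity zones, 5 on average) are from the discussion;
# b = 1.0 is an assumed, typical value, not taken from the text.
def gr_count(a, b, M):
    """Expected number of events with magnitude >= M."""
    return 10.0 ** (a - b * M)

for a in (8.0, 5.0):
    print(f"a={a}: N(M>=5) = {gr_count(a, b=1.0, M=5.0):.0f}")
```

With b = 1, a high-activity zone (a = 8) yields on the order of a thousand M >= 5 events, versus a single one for the average zone (a = 5), which illustrates why the total event counts differ so sharply between regions.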
Statistical and informational methodologies are sufficiently widely applied in geophysics. However, I believe that we should work very carefully with statistics in earthquake prognosis. Two quantities must be distinguished: 1) stress and 2) movement. Your cumulative magnitude (Benioff's graph) is proportional to the movement (or cumulative deformation) at the specific fault. This is not the same as the stress. During a period of high deformation rate close to the focal area, the stress field itself is decreasing. The maximum of stress is right before the mainshock. Therefore it is not possible "to predict" the mainshock from the deformation rate, because the deformation rate is a consequence of the mainshock. On my graph there is the tilt, which is proportional to the first derivative of the main component (complex amplitude) of the stress tensor. I can then observe the increase of stress after the Taiwan EQ (which was a trigger of the process); the maximum of stress was observed right before the mainshock of the Sumatra EQ M8.5.

Dear Dragomir, since the magnitude is a measure of the energy released at every event, we can cumulatively add all magnitudes over a large time period. If you do this you will notice sigmoid curves which denote seismic cycles. The relative inflection points indicate the significant changes in released energy that are closest to a big event. The problem is to find other suitable quantities that inflect at least one week before the main event.

It is spatial density, i.e. number of events per unit area, not exactly frequency.

Dear Theodore, my amateur studies show that the main event happens in a less dense area: it seems that geological energy is blowing off through many small events in one area, so the big event probably occurs in another area where the density is small.
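The cumulative-magnitude idea above can be sketched with a crude inflection detector: on the cumulative curve, an inflection shows up as a change in the discrete second derivative. The magnitude series is synthetic (a quiet phase, an accelerated-release phase, then quiet again); real use would take a catalog ordered in time:

```python
import numpy as np

# Synthetic magnitude sequence: background, accelerated release, background.
mags = np.concatenate([np.full(30, 3.0),   # quiet phase
                       np.full(10, 5.0),   # accelerated energy release
                       np.full(30, 3.0)])  # quiet again

cum = np.cumsum(mags)                      # cumulative magnitude ("Benioff-style" curve)
d2 = np.diff(cum, n=2)                     # discrete second derivative of the curve

# Indices in the event sequence where the release rate changes (inflections):
change_idx = np.where(np.abs(d2) > 1e-9)[0] + 2
print(change_idx)
```

The detector recovers the two phase boundaries (events 30 and 40), i.e. the points where the sigmoid bends. On a real catalog the second derivative is noisy, so some smoothing would be needed before this test.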
The pendulum measures tilt, which is proportional to the horizontal component of deformation, which in turn is proportional to the first derivative of the appropriate component of the stress tensor. At the local minima and maxima of tilt, the stress is maximal or minimal (depending on the local stress-field orientation). We want to detect the "stress waves" which are generated in the focal area before the mainshock. These "stress waves" have an S-form, and their periods are proportional to the magnitude of the mainshock. Then we must only localise the focal area where these "stress waves" originated; in our case this is the most difficult task.
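A hedged sketch of the tilt-extrema idea: at local minima and maxima of the tilt record, the stress is (per the description above) extremal, so the first step is simply locating those extrema. The signal here is a synthetic S-form pulse with an oscillation superposed, not real pendulum data:

```python
import numpy as np

# Synthetic tilt record: an S-form (tanh) transition plus a small oscillation,
# standing in for a pendulum time series. Purely illustrative.
t = np.linspace(0.0, 10.0, 1001)
tilt = np.tanh(t - 5.0) + 0.05 * np.sin(8.0 * t)

d = np.diff(tilt)
# Local extrema: points where the first difference changes sign.
extrema = np.where(np.diff(np.sign(d)) != 0)[0] + 1
print(len(extrema))
```

Each detected index marks a candidate point of extremal stress; picking out the S-form events among them, and then locating their source area, is the harder problem the text refers to.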
Slip velocity and slip pattern of a fault are discontinuous by nature. So in a scenario where we need to analyze nucleation initiation, we will see significant changes in both of these parameters for the fault.
Are earthquakes predictable? Starting with the fundamental question, finding an answer through nucleation will involve two things: nucleation arrest and nucleation initiation. We need to define how earthquake genesis is going to proceed for a fault patch.

What is an earthquake's behavioral pattern? Earthquakes occur when the stress on the fault overcomes the frictional resistance: rupture nucleates with a directivity, and the fault slip propagates with a certain velocity over the fault patch until it ruptures the fault length. When the stress wave reaches a barrier it diverts direction and continues in an offset, with a profound increase of stress close to the asperity, where future large earthquakes are bound to occur in large numbers.

Why study nucleation, then? The study of nucleation is essential because of the non-causal behavior of stress variability. To understand the structure of the Earth's crust, the mechanical nucleation models prevalent in geo-analysis, and the initiation of seismogenesis, we need to analyze slip velocity locally, as there are gradients. Delayed dynamic triggering is difficult to analyze, as there are small ruptures and larger jumps around fluid barriers. A sudden jump in slip velocity is imposed, as there is also an instantaneous change in rupture length uniformly around the depth of the rock. The basic analysis of nucleation for a fault patch will depend on how it is arrested, not on how it is initiated.

My inference so far, which I hope a few may add to: this suggests that earthquake forecasting is possible when we talk of the initiation process. Earthquakes do not nucleate merely under the initiation of a breakage due to excessive stress and strain behavior, or just because of a weak point in the fault. When we talk of the genesis of nucleation, there is a nucleation arrest for the fault patch.
This supports the first statement by Prof. Demetris Christopoulos, that the density of earthquake events for a source will be an effective model for analyzing the nucleation process. How? Let me explain with an analogy. If there are points of conflict in a society, the denser the source of conflict, the more events will occur in the conflict zone. It may sound like common sense; what I will add is that the conflict will be more violent if it is arrested to that place and not dissipated. In other terms, say (with E an expectation parameter):

1. A conflict X is initiated with L probable points; the L probable points dissipate over time, but the conflict X stays the same.
2. A conflict X is initiated with L probable points, where E(L) adding to the conflict increases over time.
3. X decreases over time but E(L) increases over time, which means dissipation in nucleation, or nucleation arrest.
4. E(X) increasing over time is independent of the nucleation points.

The deformation measurement of rock mass at depth and mathematical modelling solved the old question of Wegener's theory: what is the main engine of lithosphere movement? The solar energy that reaches the Earth is two orders of magnitude higher than the energy of all earthquakes and volcanoes. Only a small part of the solar energy is accumulated in the rocks, and the thermal wave created by solar irradiation penetrates the subsurface layers. The thermal expansion of rocks gives rise to the excitation of thermoelastic waves, which are observable at depth as well as across the whole lithospheric plate. The thermoelastic waves with diurnal and annual periods are well observable. The upper limit corresponds to slow-slip events, tremors, creep, or earthquakes; the lower limit corresponds to the opening of cracks and faults, which can be filled by ratchets. Such a mechanism leads to the non-reversible expansion of rocks and the spreading of the ocean floor.
The most important fact is that no new sources of energy are necessary to explain the entire energy output of earthquakes and volcanoes. The energy accumulated in the rocks amounts to only about 4% of the total energy input coming from the Sun (on the order of 10^24 J/year). The 2-D numerical model of the thermoelastic waves shows that at any time a small volume of the rock mass is close to or above the strength limits. This leads to irreversible deformations: creep, slow-slip events, tremors, and earthquakes. The ratcheting mechanism can explain the irreversibility of the process, and especially the westward drift of the lithosphere against the mantle. The ratcheting mechanism leads to the spreading of the ocean floor as well as to the expansion of the continental crust. The most tectonically active periods should be the winters, when microseismic activity is high and the strain of rocks and the deformation noise are maximal. The same could be seen in the global cooling periods.
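The energy budget quoted above is a one-line check: an annual solar input on the order of 10^24 J, of which 4% is accumulated, gives a stored-energy budget on the order of 4 x 10^22 J/year (both numbers taken from the text, order-of-magnitude only):

```python
# Order-of-magnitude check of the energy budget described above.
solar_input_j_per_year = 1e24   # annual solar input, from the text
accumulated_fraction = 0.04     # "only 4% of the total energy input"

budget = accumulated_fraction * solar_input_j_per_year
print(f"{budget:.0e} J/year")   # prints 4e+22 J/year
```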
1) The rock mass is a heterogeneous environment. Therefore there exist parts of it that are close to the limits. This can be measured as conflict density. These parts (from a macroscopic point of view) generate the stress waves, because they are destroyed as the stress increases and their stress load spreads to their surroundings. A denser and larger area generates higher and larger stress waves. 2) It is correct that conflict X decreases over time, because of the rheological parameters of the rock mass (creep, slow-slip events, microseisms, ...). E(L) increases over time due to ratcheting, which can accumulate elastic energy (a one-way function only).
The reaction of the environment to external forces can be different; here is the example of strain. There are upper and lower limits of the linear parts of the strain. We found (in this case) that 0.3% of the material exceeded the strength limit and will be destroyed (microscopically). On the other hand, the lower limits will be exceeded in some cases and cracks will open. I understood the X parameter in this sense: as the density of limit exceedances (both upper and lower).
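A hedged sketch of that reading of X: given a strain field, count the fraction of material points above the upper (strength) limit or below the lower (crack-opening) limit. The limits and the synthetic strain field below are illustrative assumptions chosen so that roughly 0.3% of the material exceeds the strength limit, echoing the figure in the text; they are not measured values:

```python
import numpy as np

# Synthetic strain field over ~100k material points (illustrative only).
rng = np.random.default_rng(1)
strain = rng.normal(0.0, 1.0, 100_000)

# Assumed limits: ~0.3% of a standard normal lies above 2.75.
upper_limit, lower_limit = 2.75, -2.75

frac_upper = np.mean(strain > upper_limit)   # destroyed (microscopically)
frac_lower = np.mean(strain < lower_limit)   # cracks opening
x_density = frac_upper + frac_lower          # "X": density of limit exceedances
print(round(frac_upper * 100, 2), round(x_density * 100, 2))
```

The same two-sided count applies to any modelled or measured strain field; only the limits and the field itself change.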