Tuesday, 03 July 2018



Earthquake prediction is a branch of seismology concerned with specifying the time, location, and magnitude of future earthquakes within stated limits, and particularly with determining the parameters of the next strong earthquake to occur in a region. Earthquake prediction is sometimes distinguished from earthquake forecasting, which can be defined as the probabilistic assessment of general seismic hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. Prediction can be further distinguished from earthquake warning systems, which, upon detecting an earthquake, provide a real-time warning of seconds to neighboring regions that might be affected.

In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether prediction was even possible. Demonstrably successful predictions of large earthquakes have not occurred, and the few claims of success are controversial. For example, the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake; later study concluded that there was no valid short-term prediction. Extensive searches have reported many possible earthquake precursors, but, so far, such precursors have not been reliably identified across significant spatial and temporal scales. While part of the scientific community holds that, taking into account non-seismic precursors and given enough resources to study them extensively, prediction might be possible, most scientists are pessimistic, and some maintain that earthquake prediction is inherently impossible.





Evaluating earthquake predictions

Predictions are deemed significant only if they succeed beyond random chance. The methods of statistical hypothesis testing are therefore used to determine the probability that an earthquake such as the one predicted would happen anyway (the null hypothesis). Predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis.

In many instances, however, the statistical nature of earthquake occurrence is not simply homogeneous: clustering occurs in both space and time. In southern California, about 6% of M ≥ 3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km." In central Italy, 9.5% of M ≥ 3.0 earthquakes are followed by a larger event within 48 hours and 30 km. While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction), they will skew the results of any analysis that assumes earthquakes occur randomly in time, for example, as realized from a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes, "far better than 'chance'".
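The role of the Poisson null hypothesis can be illustrated with a back-of-the-envelope sketch. The regional earthquake rate below is an assumed illustrative value, not a figure from the text; only the ~6% clustering statistic comes from the paragraph above.

```python
import math

def prob_at_least_one(rate_per_year, window_days):
    """P(at least one event in the window) under a homogeneous Poisson null."""
    lam = rate_per_year * window_days / 365.0   # expected events in the window
    return 1.0 - math.exp(-lam)

# Assumed illustrative rate: 2.0 qualifying earthquakes per year in the region.
p_null = prob_at_least_one(2.0, 5)
print(f"Poisson null, 5-day window: {p_null:.3f}")

# Clustering statistic quoted above for southern California: ~6% of M >= 3.0
# events are followed by a larger one within 5 days and 10 km.
p_cluster = 0.06
print("clustering alarm beats the null rate:", p_cluster > p_null)
```

A prediction method only shows skill if its success rate exceeds the null-hypothesis rate for the same space-time windows.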

As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability, or even political purging. For example, it has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976" (Wade 1977). Following the L'Aquila earthquake of 2009, seven scientists and technicians in Italy were convicted of manslaughter, not so much for failing to predict the 2009 L'Aquila earthquake (where some 300 people died) as for giving undue assurance to the populace - one victim called it "anaesthetizing" - that there would not be a serious earthquake, and therefore no need to take precautions. But warnings of earthquakes that do not occur also incur a cost: not only the cost of the emergency measures themselves, but of civil and economic disruption. False alarms, including alarms that are canceled, also undermine the credibility, and thus the effectiveness, of future warnings. In 1999 it was reported (Saegusa 1999) that China was introducing "tough regulations intended to stamp out 'false' earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major quakes." This was prompted by "more than 30 unofficial earthquake warnings... in the past three years, none of which has been accurate." The acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. The rate of occurrence of both must be considered when evaluating any prediction method.

In a 1997 study of the cost-benefit ratio of earthquake prediction research in Greece, Stathis Stiros suggested that even a (hypothetically) excellent prediction method would be of questionable social utility, because "organized evacuation of urban centers is unlikely to be successfully accomplished", while "panic and other undesirable side-effects can also be anticipated." He found that earthquakes kill fewer than ten people per year in Greece (on average), and that most of those fatalities occurred in large buildings with identifiable structural problems. Therefore, Stiros stated, it would be far more cost-effective to focus efforts on identifying and upgrading unsafe buildings. Since the death toll on Greek highways averages more than 2300 per year, he argued that more lives would also be saved if Greece's entire budget for earthquake prediction were used for street and highway safety instead.




Prediction method

Earthquake prediction is an immature science - it has not yet led to a successful prediction of an earthquake from first physical principles. Research into prediction methods therefore focuses on empirical analysis, with two general approaches: either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake. Precursor methods are pursued largely because of their potential utility for short-term prediction or forecasting, while 'trend' methods are generally thought useful for forecasting, long-term prediction (10- to 100-year time scales) or intermediate-term prediction (1- to 10-year time scales).

Precursors

Earthquake precursors are anomalous phenomena that might give effective warning of an impending earthquake. Reports of these - though generally recognized as such only after the event - number in the thousands, some dating back to antiquity. There have been around 400 reports of possible precursors in the scientific literature, of roughly twenty different types, running the gamut from aeronomy to zoology. None has been found to be reliable for the purposes of earthquake prediction.

In the early 1990s, the IASPEI solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each.

Following a critical review of the scientific literature, the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvement in this type of research." In particular, many reported cases of precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for rigorous statistical evaluation. Published results are biased towards positive results, so the rate of false negatives (earthquake but no precursory signal) is unclear.

Animal behavior

For centuries there have been anecdotal accounts of anomalous animal behavior preceding and associated with earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested that they are responding to the P-wave, which travels through the ground about twice as fast as the S-wave that causes the most severe shaking. The animals predict not the earthquake itself - which has already happened - but only the imminent arrival of the more destructive S-wave.
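The head start provided by the faster P-wave can be sketched from the velocity difference alone. The velocities below are assumed typical crustal values, not figures from the text:

```python
# Assumed typical crustal seismic velocities (illustrative, not from the text).
VP = 6.0  # km/s, P-wave
VS = 3.5  # km/s, S-wave (roughly half the P-wave speed, as noted above)

def sp_lead_time(distance_km):
    """Seconds between P-wave arrival and the more damaging S-wave arrival."""
    return distance_km / VS - distance_km / VP

for d in (10, 50, 100):
    print(f"{d:>3} km from the epicenter: ~{sp_lead_time(d):.1f} s of lead time")
```

At distances of tens of kilometers this gives only seconds of warning, consistent with the "tens of seconds" of anomalous behavior described above.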

It has also been suggested that unusual behavior hours or even days beforehand could be triggered by foreshock activity at magnitudes that most people do not notice. Another confounding factor is that accounts of unusual phenomena are skewed by "flashbulb memory": otherwise unremarkable details become more memorable and more significant when associated with an emotionally powerful event such as an earthquake. A study that attempted to control for these factors found an increase in unusual animal behavior (possibly triggered by foreshocks) in one case, but not in four other cases of seemingly similar earthquakes.

Dilatancy-diffusion

In the 1970s, the dilatancy-diffusion hypothesis was highly regarded as providing a physical basis for various phenomena seen as plausible earthquake precursors. It was based on "solid and repeatable evidence" from laboratory experiments that highly stressed crystalline rock experiences a change in volume, or dilatancy, which causes other characteristic changes, such as in seismic velocities and electrical resistivities, and even large-scale uplift of topography. It was believed that this happened in a 'preparation phase' just prior to the earthquake, and that suitable monitoring could therefore warn of an impending quake.

Detection of variations in the relative velocities of the primary and secondary seismic waves - expressed as Vp/Vs - as they passed through a certain zone was the basis for predicting the 1973 Blue Mountain Lake (NY) and 1974 Riverside (CA) quakes. Although these predictions were informal and even trivial, their apparent success was seen as confirmation of both dilatancy and the existence of a preparation process, and led to what were later called "wildly over-optimistic statements" that successful earthquake prediction "appears to be on the verge of practical reality."

However, many studies questioned these results, and the hypothesis eventually languished. Subsequent study showed that it "failed for several reasons, largely associated with the validity of the assumptions on which it was based", including the assumption that laboratory results can be scaled up to the real world. Another factor was the bias of retrospective selection of criteria. Other studies have shown dilatancy to be so negligible that Main et al. (2012) concluded: "The concept of a large-scale 'preparation zone' indicating the likely magnitude of a future event remains as ethereal as the ether that went undetected in the Michelson-Morley experiment."

Changes in Vp/Vs

Vp is the symbol for the velocity of a P (primary or pressure) seismic wave passing through rock, while Vs is the symbol for the velocity of the S (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities - represented as Vp/Vs - changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes (later discounted) in the region of a subsequent earthquake. This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.
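Because both waves traverse the same path, the ratio can be estimated from arrival times alone, with no need to know the path length. The arrival times below are illustrative, not real data:

```python
# For one station-event path of length L: travel time t = L / v, so
# Vp/Vs = tS/tP, where tP and tS are travel times measured from the
# earthquake origin time.
def vp_vs_ratio(origin_time, p_arrival, s_arrival):
    tp = p_arrival - origin_time   # P travel time (s)
    ts = s_arrival - origin_time   # S travel time (s)
    return ts / tp

# Illustrative picks: origin at t = 0 s, P arrives at 8.0 s, S at 13.9 s.
ratio = vp_vs_ratio(0.0, 8.0, 13.9)
print(f"Vp/Vs = {ratio:.2f}")
```

A drop of a few percent in this ratio, recovering just before the quake, was the pattern the dilatancy model predicted.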

Study of this phenomenon near Blue Mountain Lake in New York State led to an unsuccessful though unofficial prediction in 1973, and it was credited with predicting the 1974 Riverside (CA) earthquake. However, further successes did not follow, and it has been suggested that these predictions were flukes. A Vp/Vs anomaly was also the basis of a 1976 prediction of an M 5.5 to 6.5 earthquake near Los Angeles, which did not occur.

Radon emission

Most rock contains small amounts of gases that can be isotopically distinguished from normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.

Radon is attractive as a potential earthquake predictor because it is radioactive and thus easily detected, and its short half-life (3.8 days) makes radon levels sensitive to short-term fluctuations. A 2009 review found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes were supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation". Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.
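The sensitivity conferred by the short half-life can be seen from the decay law. The half-life is the figure given above; the rest is a standard exponential-decay sketch:

```python
import math

HALF_LIFE_DAYS = 3.8  # radon-222 half-life, as given in the text

def fraction_remaining(t_days):
    """Fraction of an initial radon quantity still undecayed after t days."""
    return math.exp(-math.log(2.0) * t_days / HALF_LIFE_DAYS)

# Without continuous resupply from the rock, radon disappears within weeks,
# so a sustained elevated reading implies an ongoing change in gas release.
for t in (3.8, 7.6, 19.0):
    print(f"after {t:4.1f} days: {fraction_remaining(t):.4f} remains")
```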

Electromagnetic Anomalies

Observations of electromagnetic disturbances and their attribution to the earthquake failure process go back as far as the Great Lisbon earthquake of 1755, but practically all such observations prior to the mid-1960s are invalid because the instruments used were sensitive to physical movement. Since then, various anomalous electrical, electric-resistive, and magnetic phenomena have been attributed to precursory stress and strain changes that precede earthquakes, raising hopes of finding a reliable earthquake precursor. While a handful of researchers have gained much attention with either theories of how such phenomena might be generated or claims of having observed them prior to an earthquake, no such phenomenon has been shown to be an actual precursor.

A 2011 review found the most "convincing" electromagnetic precursors to be ULF magnetic anomalies, such as the Corralitos event (discussed below) recorded before the 1989 Loma Prieta earthquake. However, it is now believed that that observation was a system malfunction. Study of the closely monitored 2004 Parkfield earthquake found no evidence of precursory electromagnetic signals of any type; further study showed that earthquakes with magnitudes less than 5 do not produce significant transient signals. The International Commission on Earthquake Forecasting for Civil Protection (ICEF) considered the search for useful precursors to have been unsuccessful.

* VAN seismic electric signals

The most touted, and most criticized, claim of an electromagnetic precursor is the VAN method of physics professors Panayiotis Varotsos, Kessar Alexopoulos and Konstantine Nomicos (VAN) of the University of Athens. In a 1981 paper they claimed that by measuring geoelectric voltages - what they called "seismic electric signals" (SES) - they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to seven hours beforehand.

In 1984 they claimed there was a "one-to-one correspondence" between SES and earthquakes - that is, that "every sizable EQ is preceded by an SES and inversely every SES is always followed by an EQ the magnitude and the epicenter of which can be reliably predicted" - the SES appearing between 6 and 115 hours before the earthquake. As proof of their method they claimed a series of successful predictions.

Although their report was "hailed by some as a major breakthrough" - one enthusiastic supporter (Uyeda) was reported as saying "VAN is the biggest invention since the time of Archimedes" - among seismologists it was greeted by "a wave of generalized skepticism". In 1996 a paper VAN submitted to the journal Geophysical Research Letters was given an unprecedented public peer review by a broad group of reviewers, with the paper and reviews published in a special issue; the majority of reviewers found the methods of VAN to be flawed. Additional criticism was raised the same year in a public debate between some of the principals.

A primary criticism was that the method is geophysically implausible and scientifically unsound. Additional objections included the demonstrable falsity of the claimed one-to-one relationship between earthquakes and SES, the unlikelihood of a precursory process generating signals stronger than any observed from the actual earthquakes, and the very strong likelihood that the signals were man-made. Further work in Greece has tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES.

The validity of the VAN method, and therefore the predictive significance of SES, rests primarily on the empirical claim of demonstrated successful predictions. Numerous weaknesses have been uncovered in the VAN methodology, and in 2011 the ICEF concluded that the prediction capability claimed by VAN could not be validated. Most seismologists consider VAN to have been convincingly refuted.

* Corralitos anomaly

Probably the most celebrated seismo-electromagnetic event ever, and one of the most frequently cited examples of a possible earthquake precursor, is the 1989 Corralitos anomaly. In the month before the 1989 Loma Prieta earthquake, measurements of the earth's magnetic field at ultra-low frequencies by a magnetometer in Corralitos, California, just 7 km from the epicenter of the impending earthquake, started showing anomalous increases in amplitude. Just three hours before the quake, the measurements soared to about thirty times greater than normal, with amplitudes tapering off after the quake. Such amplitudes had not been seen in two years of operation, nor in a similar instrument located 54 km away. To many people such apparent locality in time and space suggested an association with the earthquake.

Additional magnetometers were subsequently deployed across northern and southern California, but after ten years, and several large earthquakes, no similar signals have been observed. More recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance or, even more simply, to sensor-system malfunction.

* Freund Physics

In his investigations of crystalline physics, Friedemann Freund found that water molecules embedded in rock can dissociate into ions if the rock is under intense stress. The resulting charge carriers can generate battery currents under certain conditions. Freund suggested that these currents could be responsible for earthquake precursors such as electromagnetic radiation, earthquake lights, and disturbances of the plasma in the ionosphere. The study of such currents and interactions is known as "Freund physics".

Most seismologists reject Freund's suggestion that stress-generated signals can be detected and put to use as precursors, for a number of reasons. First, it is believed that stress does not accumulate rapidly before a major earthquake, and thus there is no reason to expect large currents to be rapidly generated. Secondly, seismologists have extensively searched for statistically reliable electrical precursors, using sophisticated instrumentation, and have not identified any such precursors. And thirdly, water in the earth's crust would cause any generated currents to be absorbed before reaching the surface.

Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, another approach to predicting earthquakes is to look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; these are therefore sometimes called statistical methods. These approaches also tend to be more probabilistic, and to operate over larger time periods, and so they merge into earthquake forecasting.

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other), the earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves. The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake), the existence of long strike-slip offsets of hundreds of miles shows a long-running seismic cycle.

Characteristic earthquakes

The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments. As the lengths and other properties of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals. For a given fault segment, identifying these characteristic earthquakes and timing their recurrence rate (or conversely the return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard. UCERF3 is a notable example of such a forecast, prepared for the state of California. Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to the frequency observed to date.

The idea of characteristic earthquakes was the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years. Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence interval). The appeal of such a method is that the prediction is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that Parkfield earthquakes are quasi-periodic, and the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common.
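The arithmetic behind the Parkfield extrapolation is simple enough to sketch. This naive version uses all six dates; the published figures (and their confidence window) came from a more elaborate treatment that handled the anomalous 1934 event specially, so the numbers differ slightly from those quoted above:

```python
import statistics

# Parkfield M ~6 earthquake years cited in the text.
events = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(events, events[1:])]  # recurrence intervals

mean_iv = statistics.mean(intervals)    # average return period (years)
std_iv = statistics.stdev(intervals)    # sample standard deviation

next_expected = events[-1] + mean_iv
print(f"intervals: {intervals}")
print(f"mean {mean_iv:.1f} +/- {std_iv:.1f} yr; next expected ~{next_expected:.0f}")
```

Even this crude average lands on a predicted date of about 1988, matching the published extrapolation.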

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself. Some studies have questioned its various assumptions, including the key one that earthquakes are constrained within segments, and suggested that "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles). Other studies have considered whether additional factors need to be taken into account, such as the age of the fault. Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of seismic hazard: earthquakes are larger where multiple segments break, but, relieving more strain, such earthquakes will be rarer.

Seismic gap

At the contact where two tectonic plates slip past each other, every section must eventually slip, because (in the long run) none gets left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model, the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is greatest. This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific forecasts in 1979 and 1989-1991.

However, some of the underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region"; statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well". Another study concluded that a long quiescent period did not increase earthquake potential.

Seismicity pattern

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years. Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance.

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP. In 1999, Keilis-Borok's group published a claim to have achieved statistically significant intermediate-term results using their M8 and MSc models, as far as the world's largest earthquakes are regarded. However, Geller et al. are skeptical of prediction claims over any period shorter than 30 years. A widely publicized TIP for an M 6.4 earthquake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs. A deep study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway). It concluded that "RTP is not significantly different from a naive method of guessing based on the historical rates of seismicity."

Accelerating moment release (AMR; "moment" being a measure of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on the observation that foreshock activity prior to a major earthquake not only increases, but increases at an exponential rate. In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock.

Following its formulation by Bowman et al. (1998) into a testable hypothesis, and a number of positive reports, AMR seemed promising despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep. But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done, and from failing to account for spatiotemporal clustering of earthquakes. The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.
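AMR analyses typically track cumulative Benioff strain (the square root of radiated seismic energy) and fit it with a power law of the form A + B(tf − t)^m with m < 1, so the curve steepens toward the failure time tf. The catalog below is invented for illustration; the energy-magnitude relation used is the standard Gutenberg-Richter one:

```python
import math

def benioff_strain(magnitude):
    """Square root of radiated energy (log10 E = 1.5*M + 4.8, E in joules),
    the quantity whose running sum AMR analyses track."""
    return math.sqrt(10 ** (1.5 * magnitude + 4.8))

# Hypothetical foreshock catalog: (days before the main shock, magnitude).
catalog = [(300, 3.1), (200, 3.3), (120, 3.6), (60, 3.9), (20, 4.3), (5, 4.8)]

cumulative, total = [], 0.0
for _, mag in catalog:
    total += benioff_strain(mag)
    cumulative.append(total)

# Under the AMR hypothesis this running sum would be well fit by
# A + B*(tf - t)**m with m < 1; the critique above is that such fits
# can emerge from the fitting procedure and event clustering alone.
print([f"{c:.3g}" for c in cumulative])
```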



Notable predictions

These are predictions, or significant claims of predictions, that are notable either scientifically or because of public fame, and that claim a scientific or quasi-scientific basis. As many predictions are held privately, or published in obscure venues, and become notable only when they are claimed, there may be a selection bias in that successes get more attention than misses. The predictions listed here are discussed in Hough's book and Geller's paper.

1975: Haicheng, China

The M 7.3 Haicheng earthquake of 1975 is the most widely cited "success" of earthquake prediction. Study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974, and the political authorities therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of films out-of-doors. The quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes. However, the "effective preventative measures taken" were said to have kept the death toll under 300 in an area with a population of about 1.6 million, where otherwise tens of thousands of fatalities might have been expected.

However, although a major earthquake occurred, there has been some skepticism about the narrative of measures taken on the basis of a timely prediction. This event occurred during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right-wing deviationists". Record-keeping was disordered, making it difficult to verify details, including whether there was any ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line") has not been specified. The evacuation may have been spontaneous, following the strong (M 4.7) foreshock that occurred the day before.

A 2006 study that had access to an extensive range of records found the standard account flawed: in particular, there was no official short-term prediction, although such a prediction was made by individual scientists. Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local styles of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end in practical failure."

1981: Lima, Peru (Brady)

In 1976, Dr. Brian Brady, a physicist then at the U.S. Bureau of Mines, where he had studied how rocks fracture, "concluded a series of four articles on the theory of earthquakes with the deduction that strain building in the subduction zone off the coast of Peru might result in an earthquake of large magnitude." In an internal memo written in June 1978 he narrowed the time window to "October to November, 1981", with a main shock in the range of magnitude 9.2 ± 0.2. In a 1980 memo he was reported as specifying "mid-September 1980". This was discussed at a scientific seminar in San Juan, Argentina, in October 1980, where Brady's colleague, Dr. W. Spence, presented a paper. Brady and Spence then met with government officials from the U.S. and Peru on October 29, and "forecast a series of large-magnitude earthquakes in the second half of 1981." The prediction became widely known in Peru, following what the U.S. embassy described as "front-page headline coverage given in most of Lima's daily papers" on January 26, 1981.

On January 27, 1981, after reviewing the Brady-Spence prediction, the U.S. National Earthquake Prediction Evaluation Council (NEPEC) announced that it was "unconvinced of the scientific validity" of the prediction, and had been "shown nothing in the observed seismicity data, or in the theory insofar as presented, that lends substance to the predicted times, locations, and magnitudes of the earthquakes." It went on to say that while there was a probability of major earthquakes at the predicted times, that probability was low, and it recommended that "the prediction not be given serious consideration."

Undeterred, Brady subsequently revised his forecast, stating there would be at least three earthquakes on or about July 6, August 18 and September 24, 1981, leading one USGS official to complain: "If he is allowed to continue to play this game... he will eventually score a hit and his theories will be considered valid by many."

On June 28 (the date most widely taken as that of the first predicted earthquake), it was reported that: "the population of Lima passed a quiet Sunday". The headline in one Peruvian newspaper: "NO PASO NADA" ("Nothing happened").

In July, Brady formally withdrew his prediction on the grounds that prerequisite seismic activity had not occurred. Economic losses due to reduced tourism during this episode have been roughly estimated at one hundred million dollars.

1985-1993: Parkfield, USA (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever. It was based on the observation that the Parkfield segment of the San Andreas Fault breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966. More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred. The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world", in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent. In the words of The Economist: "never has an ambush been more carefully laid for such an event."

1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake on the Parkfield segment of the fault, on September 28, 2004, but without prior warning or obvious precursors. While the experiment in catching an earthquake is considered by many scientists to have been successful, the prediction was unsuccessful in that the eventual event was a decade late.

1983-1995: Greece (VAN)

In 1981, the "VAN" group, led by Panayiotis Varotsos, claimed to have found a relationship between earthquakes and "seismic electric signals" (SES). In 1984 they presented a table of 23 earthquakes from 19 January 1983 to 19 September 1983, of which they claimed to have successfully predicted 18. Other lists followed, such as their 1991 claim of predicting six of seven earthquakes with Ms ≥ 5.5 in the period of 1 April 1987 through 10 August 1989, or five of seven earthquakes with Ms ≥ 5.3 in the overlapping period of 15 May 1988 to 10 August 1989. In 1996 they published a "Summary of all Predictions issued from 1 January 1987 to 15 June 1995", amounting to 94 predictions. Matching this against a list of "All earthquakes with MS(ATH)" within geographical bounds including most of Greece, they came up with a list of 14 earthquakes they should have predicted. Of these they claimed ten successes, for a success rate of 70%, but also a false alarm rate of 89%.
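The two headline statistics can be reconstructed from the figures quoted above (94 predictions issued, 14 qualifying earthquakes, 10 claimed hits). The matching rules used in the original tally were more involved; this is just the arithmetic:

```python
# Reconstructing the quoted VAN success and false-alarm rates from the
# summary figures given above. The original matching criteria are not
# reproduced here -- only the final ratios.
predictions_issued = 94
qualifying_quakes = 14
claimed_hits = 10

success_rate = claimed_hits / qualifying_quakes              # fraction of quakes predicted
false_alarm_rate = (predictions_issued - claimed_hits) / predictions_issued

print(f"success rate: {success_rate:.0%}")        # ~71%, reported as 70%
print(f"false alarm rate: {false_alarm_rate:.0%}")  # ~89%
```

Note the asymmetry: a method can "catch" most qualifying earthquakes while still issuing mostly false alarms, which is why both rates matter when evaluating a prediction scheme.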

VAN's predictions have been criticized on various grounds, including being geophysically implausible, "vague and ambiguous", failing to satisfy prediction criteria, and retroactive adjustment of parameters. It has been objected that "VAN 'predictions' never specify a window and never state an unambiguous expiration date, [and thus] VAN does not make earthquake predictions in the first place". It has also been objected that no one can "state with confidence, except in the most general terms, what VAN's hypothesis is, because its authors have nowhere presented a thorough formulation of it."

A critical review of the 14 cases in which VAN claimed 10 successes found only one case in which an earthquake occurred within the predicted parameters. Not only did VAN's predictions fail to do better than chance, but they showed "a much better association with the events which occurred before them", according to Mulargia and Gasperini. Other early reviews found that VAN's results, when evaluated by certain parameters, were statistically significant. Both positive and negative views of VAN predictions from this period were summarized in the 1996 book A Critical Review of VAN, edited by Sir James Lighthill, and in a debate issue presented by the journal Geophysical Research Letters, focused on the statistical significance of the VAN method. VAN had the opportunity to reply to their critics in those review publications. In 2011, the ICEF reviewed the 1996 debate, and concluded that the optimistic SES prediction capability claimed by VAN could not be validated.

A crucial issue is the large and often indeterminate parameters of the predictions, such that some critics say these are not predictions, and should not be recognized as such. Much of the controversy with VAN arises from this failure to adequately specify these parameters. Some of VAN's telegrams included predictions of two distinct earthquake events, such as (typically) one earthquake predicted at 300 km "NW" of Athens, and another at 240 km "W", "with magnitudes [sic] 5.3 and 5.8", with no time limit.

VAN has disputed the "pessimistic" conclusions of their critics, but the critics have not relented. It has been suggested that VAN failed to account for clustering of earthquakes, or that they interpreted their data differently during periods of heightened seismic activity.

VAN has been criticized on several occasions for causing public panic and widespread unrest. This has been exacerbated by the breadth of their predictions, which cover large areas of Greece (up to 240 kilometers across, and often pairs of areas), much larger than the areas actually affected by earthquakes of the magnitudes predicted (usually several tens of kilometers across). The magnitudes are similarly broad: a predicted magnitude of "6.0" represents a range from a benign magnitude 5.3 to a broadly destructive 6.7. Coupled with indeterminate time windows of a month or more, such predictions "cannot be practically utilized" to determine an appropriate level of preparedness, whether to curtail usual societal functioning, or even to issue public warnings. An example of the quandary public officials face: in 1995 Professor Varotsos reportedly filed a complaint with the public prosecutor accusing government officials of negligence in not responding to his supposed prediction of an earthquake. A government official was quoted as saying "VAN's predictions are of no use" because they covered two-thirds of the area of Greece.
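The practical spread inside such a magnitude window can be quantified with the standard Gutenberg-Richter energy relation, log10(E) = 1.5 M + const. This is general seismology, not part of the VAN method itself:

```python
# Energy ratio between the extremes of a "6.0 +/- 0.7" magnitude window,
# using the standard relation log10(E) = 1.5*M + const. Illustrates why
# such a window spans benign to destructive events.
def energy_ratio(m_high, m_low):
    """Seismic energy of an M=m_high quake relative to an M=m_low quake."""
    return 10 ** (1.5 * (m_high - m_low))

print(f"M 6.7 releases ~{energy_ratio(6.7, 5.3):.0f}x the energy of M 5.3")
```

Roughly a factor of 126 in radiated energy separates the two ends of the window, which is why a single predicted magnitude of "6.0" gives officials so little to act on.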

1989: Loma Prieta, USA

The 1989 Loma Prieta earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay Area of California. The US Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "predicted" this earthquake in a report the previous year. USGS staff later claimed this quake had been "anticipated"; various other claims of prediction have also been made.

Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this case no distinction is made between a forecast, which is limited to a probabilistic estimate of an earthquake happening over some time period, and a more specific prediction.) None of these forecasts can be rigorously tested due to lack of specificity, and where a forecast did bracket the correct time and location, the window was so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Forecasts that came close (but with a probability of only 30%) had ten- or twenty-year windows.

One debated prediction came from the M8 algorithm used by Keilis-Borok and associates in four forecasts. The first of these forecasts missed both the magnitude (M 7.5) and the time (a five-year window from 1 January 1984 to 31 December 1988). They did get the location, by including most of California and half of Nevada. A subsequent revision, presented to NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended into 1990, and so included Loma Prieta.

When discussing the success or failure of prediction of the Loma Prieta earthquake, some scientists argue that it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) rather than strike-slip (horizontal) motion, the type predicted. Other scientists argue that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct. Hough states that "most seismologists" do not believe this quake was predicted "per se". In a strict sense there were no predictions, only forecasts, which were only partially successful.

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

1990: New Madrid, USA (Browning)

Dr. Iben Browning (a scientist with a Ph.D. in zoology and training as a biophysicist, but no experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses. He supported the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal forces of the sun and moon coincide to exert maximum stress on the earth's crust (syzygy). Having calculated when these tidal forces are maximized, Browning then "projected" which areas were most at risk of a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811-12, which he coupled with the date of December 3, 1990.

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events) the Loma Prieta earthquake of October 17, 1989. The National Earthquake Prediction Evaluation Council (NEPEC) formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued October 18, 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake. A transcript of his talk in San Francisco on October 10 showed he had said: "there will probably be several earthquakes around the world, Richter 6, and there may be a volcano or two" - which, on a global scale, is about average for a week - with no mention of any earthquake in California.

Although the AHWG report rejected both Browning's claims of prior success and the basis of his "projection", it made little impact after a year of continued claims of success. Browning's prediction received the support of geophysicist David Stewart, and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in the news media. Nothing happened on December 3, and Browning died of a heart attack seven months later.

2004 & 2005: Southern California, USA (Keilis-Borok)

The M8 algorithm (developed under the leadership of Dr. Vladimir Keilis-Borok at UCLA) gained respect by the apparently successful predictions of the San Simeon and Hokkaido earthquakes of 2003. Great interest was therefore generated by the prediction in early 2004 of an M ≥ 6.4 earthquake to occur somewhere within a region of southern California of approximately 12,000 square miles, on or before September 5, 2004. In evaluating this prediction, the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no special "public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state." The predicted earthquake did not occur.

A very similar prediction was made for an earthquake on or before August 14, 2005, in approximately the same area of southern California. CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled. This prediction also failed.

2009: L'Aquila, Italy (Giuliani)

At 3:32 am on April 6, 2009, the Abruzzo region of central Italy was rocked by a magnitude 6.3 earthquake. In the city of L'Aquila and the surrounding area, around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless. Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake, had tried to warn the public, but had been muzzled by the Italian government.

Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any scientific work. He had been interviewed on March 24 by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that the swarm was normal and would diminish by the end of March. On March 30, L'Aquila was struck by a magnitude 4.0 quake, the largest to date.

On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake of about M 2.3 occurred. On March 29 he made a second prediction. He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" - or even "catastrophic" - earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, with consequent panic. No quake ensued and Giuliani was cited for inciting public alarm and enjoined from making future public predictions.

After the L'Aquila event Giuliani claimed to have found alarming rises in radon levels just hours before. He said he had warned relatives, friends, and colleagues on the evening before the earthquake struck. He was subsequently interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that Giuliani had not transmitted a valid prediction of the mainshock to the civil authorities before its occurrence.

Difficulty or impossibility

As the preceding examples show, the record of earthquake prediction has been disappointing. The optimism of the 1970s that routine prediction of earthquakes would be "soon", perhaps within ten years, was coming up disappointingly short by the 1990s, and many scientists began wondering why. By 1997 it was being positively stated that earthquakes cannot be predicted, leading to a notable debate in 1999 on whether prediction of individual earthquakes is a realistic scientific goal.

Earthquake prediction may have failed only because it is "fiendishly difficult" and still beyond the current competency of science. Despite the confident announcement four decades ago that seismology was "on the verge" of making reliable predictions, there may yet be an underestimation of the difficulties. As early as 1978 it was reported that earthquake rupture might be complicated by a "heterogeneous distribution of mechanical properties along the fault", and in 1986 that geometrical irregularities in the fault surface "appear to exert major controls on the starting and stopping of ruptures". Another study attributed significant differences in fault behavior to the maturity of the fault. These kinds of complexities are not reflected in current prediction methods.

Seismology may not yet adequately grasp its most central concept, elastic rebound theory. A simulation that explored assumptions regarding the distribution of slip found results "not in agreement with the classical view of the elastic rebound theory". (This was attributed to details of fault heterogeneity not accounted for in the theory.)

Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality "where any small earthquake has some probability of cascading into a large event". It has also been argued, on decision-theoretic grounds, that "prediction of major earthquakes is, in any practical sense, impossible."

That earthquake prediction might be intrinsically impossible has been disputed. But the best disproof of impossibility, effective earthquake prediction, has yet to be demonstrated.

External links

  • US Geological Survey: Earthquake Prediction Topics
  • Nature magazine debate on whether earthquake prediction is a realistic scientific goal
  • Dr. Stuart Robbins (September 1, 2012). "Moon Earthquake: Do Tides Cause Earthquakes?". Exposing PseudoAstronomy Podcast. A podcast discussing why claims that earthquakes can be predicted are wrong.

Source of the article: Wikipedia
