Python + Elasticsearch: easily exploring 2.3 million earthquake records spanning thousands of years

Tianyuan prodigal son 2020-11-13 09:09:33


1 Preface

At 17:00 on February 18, 2020, a magnitude 4.1 earthquake struck Changqing District, Jinan, at a depth of 10 km, and a string of aftershocks followed over the next two days. I had always thought of Jinan as a blessed place: whether natural disasters or man-made calamities and wars, almost nothing had ever hurt her. The earthquake in Changqing District, however, shook that belief. So, has Jinan ever experienced a stronger earthquake in its history?

With this question in mind, I collected 2.3 million earthquake records from various sources on the Internet and, with the help of Python and Elasticsearch, analyzed the distribution of earthquakes worldwide and in China, and compared the historical seismic records of several Chinese provinces and regions. All source code and sample data have been uploaded to GitHub; interested readers can download the source code and then use the download script to fetch the full data set themselves.

2 Setting up the Python + Elasticsearch environment

2.1 Installing and starting Elasticsearch

Elasticsearch is a distributed, highly scalable, near-real-time search and analytics engine. It makes it very convenient to search, analyze, and explore large volumes of data, and its horizontal scalability makes data even more valuable in production environments.

Elasticsearch is a NoSQL database, and its basic concepts differ from those of a traditional relational database. Let's look at two of them:

  • Document
    A NoSQL database is also called a document database. A document corresponds to a record (row) in a relational database.

  • Index
    In a relational database, an index is a data structure built to speed up queries. Elasticsearch's index is different: Elasticsearch builds such an index for every field, and this kind of index is transparent to the user, so Elasticsearch rarely talks about it in that sense. Instead, it gives the word "index" two meanings of its own (see the short sketch after this list):

    • Noun: an index is similar to a database in a traditional relational database; it is where related documents are stored
    • Verb: to index a document is to store it into an index (the noun) so that it can be retrieved and queried, roughly equivalent to an insert in a relational database
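
A minimal sketch of both meanings, using the official Python client (assuming a local Elasticsearch on the default port, as in the rest of this article; the index name "demo" is made up for illustration):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Noun: create an index (roughly "create a database/table")
es.indices.create(index="demo", ignore=400)   # ignore=400 skips "index already exists" errors

# Verb: index a document (roughly "insert a row")
es.index(index="demo", body={"msg": "hello"})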

Elasticsearch is written in Java, so a Java runtime must be installed before installing Elasticsearch. To keep the machine light, you do not need the full JDK; installing only the JRE is enough.

After installation, set the environment variables. I installed jre1.8.0_241 under C:\Program Files\Java\; if your version or path differs, adjust the values to your actual installation (a quick Python sanity check follows the list):

  • JAVA_HOME: C:\Program Files\Java\jre1.8.0_241
  • CLASSPATH: .;%JAVA_HOME%\lib
  • PATH: %JAVA_HOME%\bin
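
If you like, a quick check (purely optional, just a convenience sketch) confirms that the JRE is reachable before starting Elasticsearch:

import os
import subprocess

print('JAVA_HOME =', os.environ.get('JAVA_HOME'))  # should print the JRE path set above
subprocess.run(['java', '-version'])               # prints the installed Java version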

Once the environment variables are set, you can install Elasticsearch. Installation is easy: download the archive from the official website and unzip it, then run elasticsearch.bat in the bin folder of the unzipped directory to start the Elasticsearch service. For example, my Elasticsearch is unzipped to D:\Tools\elasticsearch-7.6.0, so the start command looks like this:

PS D:\Tools\elasticsearch-7.6.0\bin> .\elasticsearch.bat

2.2 Installing the Python Elasticsearch client

The almighty pip does the job:

pip install elasticsearch

After a successful installation, you can use the client to connect to the Elasticsearch server:

>>> from elasticsearch import Elasticsearch
>>> es = Elasticsearch()
>>> es.info()
{'name': 'LAPTOP-FN4A44A5', 'cluster_name': 'elasticsearch', 'cluster_uuid': 'FUBGxHB4QC2dMccTlAvphg',
 'version': {'number': '7.6.0', 'build_flavor': 'default', 'build_type': 'zip',
             'build_hash': '7f634e9f44834fbc12724506cc1da681b0c3b1e3', 'build_date': '2020-02-06T00:09:00.449973Z',
             'build_snapshot': False, 'lucene_version': '8.4.0', 'minimum_wire_compatibility_version': '6.8.0',
             'minimum_index_compatibility_version': '6.0.0-beta1'},
 'tagline': 'You Know, for Search'}

3 Data download and parsing

3.1 Data source

Initially I planned to download data from the official website of the China Earthquake Administration, but I later found that the data there was far less rich than that of the global earthquake history query website. That site contains data from two sources: the China Earthquake Administration (CEA) and the United States Geological Survey (USGS). The CEA data for smaller (felt) earthquakes starts in 2012, while its records of magnitude 5 and above go back as far as AD 1000. The CEA data focuses on earthquakes in China; smaller foreign events have been removed. The USGS data starts in 1900 and focuses on the United States; outside the US it mostly contains only the larger earthquakes.

3.2 Download and parse

Downloading from the global earthquake history query website is very simple: no registration is required, and a plain GET request is enough. The URL has two parameters: dizhen_ly selects the data source and page is the page number. dizhen_ly=china downloads the China Earthquake Administration (CEA) data, and dizhen_ly=usa downloads the USGS data. Below is the download and parsing source code; the downloaded data is saved as a .csv file.

import time, re, requests
from bs4 import BeautifulSoup

def Crawl_data(url, csv_file):
    '''Grab earthquake data and save it to a csv file'''
    resp = requests.get(url)
    # The pattern must match the site's "found N records, shown in M pages" banner
    r = re.compile(r' Query to (\d+) Bar record , branch (\d+) Page display ')
    pcount = int(r.findall(resp.text)[0][1])  # total number of pages
    # Fetch the data table of each page and write it to the csv file
    with open(csv_file, 'w', encoding='utf-8') as fp:
        fp.write('time, magnitude(M), longitude(deg), latitude(deg), depth(km), location\n')
        for page in range(1, pcount + 1):
            print('page %d / %d' % (page, pcount), '...', end='')
            try_count = 0
            resp = requests.get(url + '&page=%d' % page)
            while not resp.ok and try_count < 2:  # retry a failed page at most twice
                try_count += 1
                time.sleep(try_count * 1)
                resp = requests.get(url + '&page=%d' % page)
            if not resp.ok:
                print('Error:', url + '&page=%d' % page)
                continue
            soup = BeautifulSoup(resp.text, 'lxml')
            for tr in soup.find_all('tr')[1:]:    # skip the table header row
                tds = tr.find_all('td')
                dt = tds[0].text                  # time of the earthquake
                level = tds[1].text               # magnitude
                lon = tds[2].text                 # longitude
                lat = tds[3].text                 # latitude
                deep = tds[4].text                # depth
                location = tds[5].find('a').text  # reference location
                fp.write('%s, %s, %s, %s, %s, %s\n' % (dt, level, lon, lat, deep, location))
            print('Done')

Call the Crawl_data() function with the download URL and the name of the data file, and the data will be downloaded and parsed:

Crawl_data('http://ditu.92cha.com/dizhen.php?dizhen_ly=china', 'earthQuake_china.csv')
Crawl_data('http://ditu.92cha.com/dizhen.php?dizhen_ly=usa', 'earthQuake_usa.csv')

A friendly tip: the CEA download finishes in about a minute, but grabbing the USGS data may take 10 hours or even longer, so be prepared and schedule the download accordingly.

4 Data cleaning and storage

4.1 Data cleansing strategy

Because the data spans more than 1,000 years, the date, time, and location of many historical earthquakes are not precise enough, which causes exceptions during processing. We therefore adopt the following conventions (a small sketch of these rules follows the list):

  • If the time is missing, it is treated as 00:00:00
  • If the date is incomplete, a missing month is treated as January and a missing day as the 1st
  • If the magnitude, longitude/latitude, or focal depth cannot be converted to a floating-point number, the record is considered invalid
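
Below is a minimal sketch of these three rules; clean_record() is a hypothetical helper introduced only for illustration, and the real logic is embedded in insert_doc() in section 4.3:

from datetime import datetime

def clean_record(date_str, time_str, level, lon, lat, deep):
    """Return a cleaned tuple, or None if the record is invalid."""
    yy, mm, dd = date_str.split('-')
    mm = '01' if mm in ('', '00') else mm   # missing month -> January
    dd = '01' if dd in ('', '00') else dd   # missing day   -> the 1st
    time_str = time_str or '00:00:00'       # missing time  -> midnight
    try:
        dt = datetime.strptime('%s-%s-%s %s' % (yy, mm, dd, time_str), '%Y-%m-%d %H:%M:%S')
        return dt.isoformat(), float(level), float(lon), float(lat), float(deep)
    except ValueError:
        return None                         # bad date, or non-numeric magnitude/coordinates/depth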

4.2 Creating the index (building the database)

Elasticsearch is very powerful, but many of its concepts are confusing, and the most irritating one is the overloaded word "index". The operation that creates a database or table in a relational database, or a collection in MongoDB, is called creating an index in Elasticsearch. That's not all: the insert operation of a relational database is also called indexing in Elasticsearch. Well, no more complaining; it is powerful enough to be forgiven a little quirkiness.

from elasticsearch import Elasticsearch, client

def create_index():
    """Create the index"""
    es = Elasticsearch()
    ic = client.IndicesClient(es)
    # Delete the index if it already exists
    try:
        ic.delete(index="earthquake")
    except:
        pass
    # Create the index
    ic.create(
        index="earthquake",
        body={
            "mappings": {
                "properties": {
                    "time": {"type": "date"},       # time of the earthquake
                    "level": {"type": "float"},     # magnitude
                    "geo": {"type": "geo_point"},   # longitude/latitude
                    "deep": {"type": "float"},      # depth
                    "location": {"type": "text"},   # reference location
                    "source": {"type": "keyword"}   # data source
                }
            }
        }
    )

For each earthquake record we define six fields: time, magnitude, longitude/latitude, depth, location, and data source. Calling create_index() creates the index in the Elasticsearch database. Note that before the index is created, any existing index with the same name is deleted.
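
Optionally (a sketch using standard client calls, assuming the default local cluster), you can confirm that the index exists and that its mapping matches the definition above:

from elasticsearch import Elasticsearch

es = Elasticsearch()
print(es.indices.exists(index="earthquake"))       # True once create_index() has run
print(es.indices.get_mapping(index="earthquake"))  # shows the field types defined above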

4.3 Data cleaning and storage

from datetime import datetime
from elasticsearch import Elasticsearch

def insert_doc(csv_file, source):
    """Clean the data and store it in Elasticsearch"""
    with open(csv_file, "r", encoding="utf-8") as fp:
        lines = fp.readlines()
    total = len(lines) - 1              # total number of records in the file
    success, failure = 0, 0             # cumulative numbers of successes and failures
    section = 10000                     # bulk insert, number of records per batch
    rank = list(range(1, len(lines), section))
    rank.append(len(lines))
    es = Elasticsearch()
    for i in range(len(rank) - 1):
        print(rank[i], rank[i + 1])
        docs = []
        fail = 0                        # number of failures in this batch
        for line in lines[rank[i]:rank[i + 1]]:
            data = line.split(",")
            try:
                dt = datetime.strptime(data[0], "%Y-%m-%d %H:%M:%S").isoformat()
            except:
                try:
                    # Incomplete date: treat a missing month/day as January/the 1st
                    d, t = data[0].split()
                    yy, mm, dd = d.split('-')
                    if mm == '00':
                        mm = '01'
                    if dd == '00':
                        dd = '01'
                    data[0] = '%s-%s-%s %s' % (yy, mm, dd, t)
                    dt = datetime.strptime(data[0], "%Y-%m-%d %H:%M:%S").isoformat()
                except:
                    print("Error:", i, data[0])
                    fail += 1
                    continue
            try:
                cmd = {"index": {"_index": "earthquake"}}
                doc = {
                    "time": dt,
                    "level": float(data[1]),
                    "geo": [float(data[2]), float(data[3])],   # [longitude, latitude]
                    "deep": float(data[4]),
                    "location": data[5],
                    "source": source
                }
                docs.append(cmd)
                docs.append(doc)
            except:
                print("Error:", line)
                fail += 1
        ret = es.bulk(index='earthquake', body=docs)
        success += len(docs) // 2       # docs holds only the successfully parsed records
        failure += fail
    print("%s: %d records in total, %d stored, %d failed" % (csv_file, total, success, failure))

The next two lines import the downloaded CEA and USGS data files into the Elasticsearch database. Importing roughly 2.3 million records takes about 3 minutes.

insert_doc("earthQuake_china.csv", 'CEA')
insert_doc("earthQuake_usa.csv", 'USGS')

5 Data analysis and visualization

5.1 Source code for analysis and visualization

The data is ready, and the exciting part can finally begin. Here is the full source code:

#!/usr/bin/env python
# coding:utf-8
from datetime import datetime
from elasticsearch import Elasticsearch
from pyecharts import options as opts
from pyecharts.charts import Geo
from pyecharts.globals import ChartType
import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['FangSong']  # default font (needed for CJK labels)
plt.rcParams['axes.unicode_minus'] = False      # render '-' correctly in saved images

def get_data(level, year, source):
    '''Return geohash-aggregated data for earthquakes of magnitude >= level since the given year'''
    es = Elasticsearch()
    dt = datetime.strptime(str(year), '%Y').isoformat()
    condition = {
        'size': 0,
        'track_total_hits': True,
        'query': {
            'bool': {
                'must': [
                    {'range': {'level': {'gte': level}}},
                    {'range': {'time': {'gt': dt}}}
                ]
            }
        },
        'aggregations': {
            'heatmap': {
                'geohash_grid': {'field': 'geo', 'precision': 5},
                'aggs': {
                    'centroid': {'geo_centroid': {'field': 'geo'}}
                }
            }
        }
    }
    if source == 'CEA' or source == 'USGS':
        condition['query']['bool']['must'].append({'term': {'source': source}})
    return es.search(index='earthquake', body=condition)

def plot_heatmap(level, maptype, source, year=1900, cb=(0, 10)):
    '''Plot an earthquake heat map
    level   - only plot earthquakes of magnitude >= level
    maptype - map type: china|world
    source  - data source: CEA|USGS
    year    - starting year
    cb      - minimum and maximum values shown on the ColorBar
    '''
    zone = 'China' if maptype == 'china' else 'the world'
    subject = 'Heat map of M%d+ earthquakes in %s since AD %d (%s)' % (level, zone, year, source)
    data = get_data(level, year, source)
    c = Geo(init_opts={'width': '1700px', 'height': '800px'})
    c.add_schema(maptype=maptype)
    values = []
    for bucket in data['aggregations']['heatmap']['buckets']:
        c.add_coordinate(bucket['key'], bucket['centroid']['location']['lon'],
                         bucket['centroid']['location']['lat'])
        values.append((bucket['key'], bucket['doc_count']))
    c.add(subject, values, type_=ChartType.HEATMAP)
    c.set_series_opts(label_opts=opts.LabelOpts(is_show=False))
    c.set_global_opts(
        visualmap_opts=opts.VisualMapOpts(min_=cb[0], max_=cb[1], is_calculable=True,
                                          orient='horizontal', pos_left='center'),
        title_opts=opts.TitleOpts(title='Geo-HeatMap'),
    )
    c.render('%s.html' % subject)

def top_10(year, source):
    """Return the 10 strongest earthquakes since the given year"""
    dt = datetime.strptime(str(year), '%Y').isoformat()
    condition = {
        'size': 10,
        'query': {
            'bool': {
                'must': [
                    {'range': {'time': {'gt': dt}}},
                    {'term': {'source': source}}
                ]
            }
        },
        'sort': {'level': {'order': 'desc'}},
        'highlight': {'fields': {'time': {}, 'level': {}, 'location': {}}}
    }
    es = Elasticsearch()
    ret = es.search(index='earthquake', body=condition)
    result = list()
    for item in ret['hits']['hits']:
        result.append((item['_source']['time'], item['_source']['level'], item['_source']['location'].strip()))
    return result

def search_by_condition(location, level, year=1900, source='CEA', size=200):
    """Search earthquakes by location, magnitude, starting year and source"""
    dt = datetime.strptime(str(year), '%Y').isoformat()
    condition = {
        'size': size,
        'query': {
            'bool': {
                'must': [
                    {'range': {'level': {'gte': level}}},
                    {'range': {'time': {'gt': dt}}},
                    {'match_phrase': {'location': {'query': location, 'slop': 0}}},
                    {'term': {'source': source}}
                ]
            }
        },
        'sort': {'time': {'order': 'desc'}},
        'highlight': {'fields': {'time': {}, 'level': {}, 'location': {}}}
    }
    es = Elasticsearch()
    ret = es.search(index='earthquake', body=condition)
    result = list()
    for item in ret['hits']['hits']:
        result.append((item['_source']['time'], item['_source']['level'], item['_source']['location'].strip()))
    return result

def plot_bar(city_list, level_list, year=1900, source='CEA', size=200):
    """Plot a histogram of earthquake counts by province"""
    title = 'Number of earthquakes in selected Chinese provinces since AD %d (%s)' % (year, source)
    fig, ax = plt.subplots()
    fig.set_size_inches(12, 6)
    for level in level_list:
        data = list()
        for city in city_list:
            data.append(len(search_by_condition(city, level, year=year, source=source, size=size)))
        ax.bar(city_list, data, 0.35, label='M%d and above' % level)
    ax.legend(loc='upper left')
    ax.set_ylabel('Number of earthquakes')
    ax.set_title(title)
    fig.savefig('%s.png' % title)

if __name__ == '__main__':
    # Heat map of M5+ earthquakes worldwide since AD 1000 (CEA)
    plot_heatmap(5, 'world', source='CEA', year=1000, cb=(0, 10))
    # Heat map of M7+ earthquakes worldwide since AD 1900 (USGS)
    plot_heatmap(7, 'world', source='USGS', year=1900, cb=(0, 5))
    # Heat map of M7+ earthquakes worldwide since AD 1900 (CEA)
    plot_heatmap(7, 'world', source='CEA', year=1900, cb=(0, 5))
    # Heat map of M5+ earthquakes in China since AD 1000 (CEA)
    plot_heatmap(5, 'china', source='CEA', year=1000, cb=(0, 20))
    # Heat map of M7+ earthquakes in China since AD 1000 (CEA)
    plot_heatmap(7, 'china', source='CEA', year=1000, cb=(0, 3))
    # Heat map of M7+ earthquakes in China since AD 1900 (CEA)
    plot_heatmap(7, 'china', source='CEA', year=1900, cb=(0, 1))
    # Histogram of earthquake counts in selected Chinese provinces since AD 1900 (CEA)
    city_list = ['Beijing', 'Shanghai', 'Guangdong', 'Jiangsu', 'Zhejiang', 'Shandong', 'Taiwan',
                 'Henan', 'Anhui', 'Yunnan', 'Guizhou', 'Sichuan', 'Hubei', 'Shaanxi', 'Xinjiang',
                 'Hebei', 'Gansu', 'Jiangxi', 'Jilin', 'Liaoning']
    level_list = [6, 7]
    plot_bar(city_list, level_list, year=1900, source='CEA', size=2000)
    # Top 10 strongest earthquakes
    for year, source in [(1900, 'USGS'), (1900, 'CEA'), (1000, 'CEA')]:
        top = top_10(year=year, source=source)
        print('Top 10 strongest earthquakes since AD %d (%s)' % (year, source))
        for i, item in enumerate(top):
            print('|%d|%s|%.1f|%s|' % ((i + 1), *item))
        print('----------------------------------')
    # Earthquake history of Jinan since AD 1000
    print('Earthquake history of Jinan since AD 1000:')
    res = search_by_condition('Jinan', 0, 1000)
    for i, item in enumerate(res):
        print('%d. %s %.1f %s' % ((i + 1), *item))

5.2 Heat map of magnitude 5+ earthquakes worldwide since AD 1000 (CEA)

This heat map of magnitude 5+ earthquakes since AD 1000 is drawn from the China Earthquake Administration (CEA) data; red marks areas with 10 or more magnitude 5+ earthquakes.
[Figure: world heat map of M5+ earthquakes since AD 1000 (CEA)]

5.3 Heat map of magnitude 7+ earthquakes worldwide since AD 1900 (USGS)

This heat map of magnitude 7+ earthquakes since AD 1900 is drawn from the USGS data; red marks areas with 5 or more magnitude 7+ earthquakes.
[Figure: world heat map of M7+ earthquakes since AD 1900 (USGS)]

5.4 Heat map of magnitude 7+ earthquakes worldwide since AD 1900 (CEA)

This heat map of magnitude 7+ earthquakes since AD 1900 is drawn from the CEA data; red marks areas with 5 or more magnitude 7+ earthquakes. Compared with the previous map, the CEA data clearly contains far fewer records than the USGS data.
[Figure: world heat map of M7+ earthquakes since AD 1900 (CEA)]

5.5 Heat map of magnitude 5+ earthquakes in China since AD 1000 (CEA)

This heat map of magnitude 5+ earthquakes in China since AD 1000 is drawn from the CEA data; red marks areas with 20 or more magnitude 5+ earthquakes.
[Figure: heat map of M5+ earthquakes in China since AD 1000 (CEA)]

5.6 Heat map of magnitude 7+ earthquakes in China since AD 1000 (CEA)

This heat map of magnitude 7+ earthquakes in China since AD 1000 is drawn from the CEA data; red marks areas with 3 or more magnitude 7+ earthquakes.
[Figure: heat map of M7+ earthquakes in China since AD 1000 (CEA)]

5.7 Heat map of magnitude 7+ earthquakes in China since AD 1900 (CEA)

This heat map of magnitude 7+ earthquakes in China since AD 1900 is drawn from the CEA data; red marks areas with at least one magnitude 7+ earthquake.
[Figure: heat map of M7+ earthquakes in China since AD 1900 (CEA)]

5.8 Histogram of the number of earthquakes in selected Chinese provinces since AD 1900 (CEA)

This histogram shows the number of earthquakes in selected Chinese provinces since AD 1900. In just over a century, Taiwan has had nearly 50 earthquakes of magnitude 7 or above, which is truly shocking!
[Figure: histogram of earthquake counts in selected Chinese provinces since AD 1900 (CEA)]

5.9 Top 10 strongest earthquakes (USGS, since AD 1900)

| No. | Time | Magnitude | Place |
| --- | --- | --- | --- |
| 1 | 1960-05-22T19:11:20 | 9.5 | Bio-Bio (the Chile earthquake) |
| 2 | 1964-03-28T03:36:16 | 9.2 | Southern Alaska |
| 3 | 2011-03-11T05:46:24 | 9.1 | near the east coast of Honshu |
| 4 | 2004-12-26T00:58:53 | 9.1 | off the west coast of northern Sumatra |
| 5 | 1952-11-04T16:58:30 | 9.0 | off the east coast of the Kamchatka Peninsula |
| 6 | 2010-02-27T06:34:11 | 8.8 | offshore Bio-Bio |
| 7 | 1965-02-04T05:01:22 | 8.7 | Rat Islands |
| 8 | 2012-04-11T08:38:36 | 8.6 | off the west coast of northern Sumatra |
| 9 | 2005-03-28T16:09:36 | 8.6 | northern Sumatra |
| 10 | 1957-03-09T14:22:33 | 8.6 | Andreanof Islands |

5.10 Top 10 strongest earthquakes (CEA, since AD 1000)

| No. | Time | Magnitude | Place |
| --- | --- | --- | --- |
| 1 | 2012-04-11T16:38:36 | 8.6 | waters near northern Sumatra |
| 2 | 1950-08-15T22:09:34 | 8.6 | Chayu and Motuo, Tibet, >Ⅹ |
| 3 | 1920-12-16T20:05:53 | 8.5 | Haiyuan, Ningxia, Ⅻ |
| 4 | 1668-07-25T00:00:00 | 8.5 | Tancheng, Shandong, ≥Ⅺ |
| 5 | 1902-08-22T11:00:00 | 8.3 | north of Artux, Xinjiang, >Ⅹ |
| 6 | 1556-02-02T00:00:00 | 8.3 | Huaxian County, Shaanxi, Ⅺ |
| 7 | 2017-09-08T12:49:15 | 8.2 | off the coast of Mexico |
| 8 | 2015-09-17T06:54:31 | 8.2 | off the coast of central Chile |
| 9 | 2013-05-24T13:44:49 | 8.2 | Sea of Okhotsk |
| 10 | 2012-04-11T18:43:12 | 8.2 | waters near northern Sumatra |

The CEA list looks a bit unscientific: the top entry is not the 1960 Chile earthquake. On the bright side, the CEA data does show two great earthquakes in Chinese history, the Tancheng earthquake in Shandong and the Huaxian earthquake in Shaanxi, which saves it a little face.

6 Conclusion

6.1 Earthquake history of Jinan

Time to answer my question: has Jinan ever experienced a stronger earthquake in its history? Let's search all records since AD 1000 whose location contains "Jinan":

# Earthquake history of Jinan since AD 1000
print('Earthquake history of Jinan since AD 1000:')
res = search_by_condition('Jinan', 0, 1000)
for i, item in enumerate(res):
    print(i + 1, *item)

The results show that in the past 1,000 years, Jinan has had only two recorded earthquakes, and both of them just happened!

Earthquake history of Jinan since AD 1000:

  1. 2020-02-20T04:44:34 3.1 Changqing District, Jinan City, Shandong Province
  2. 2020-02-18T17:07:16 4.1 Changqing District, Jinan City, Shandong Province

6.2 Tancheng earthquake

Historically, earthquakes in Shandong have been rare, yet the most destructive earthquake in eastern China occurred there. On July 25, 1668, an unprecedented earthquake of magnitude 8.5 struck southern Shandong. The epicentral region covered the Tancheng, Linshu, and Juxian area, with the epicenter at 34.8° N, 118.5° E (Gangouyuan Village, Linshu County), and the intensity in the epicentral area reached degree Ⅻ. Because most of the affected area lay in Tancheng County, it is known as the Tancheng earthquake. It is one of the strongest earthquakes ever recorded on the eastern China plate and caused heavy casualties and economic losses.

Mr. Pu Songling described this earthquake in Liaozhai (Strange Tales from a Chinese Studio): In the seventh year of the Kangxi reign, on the seventeenth day of the sixth month, there was a great earthquake. I was then a guest at Jixia (the ancient capital of Qi, present-day Linzi, very close to my hometown), drinking by candlelight with my cousin Li Duzhi. Suddenly we heard a sound like thunder, coming from the southeast and rolling away to the northwest. Everyone was startled and could not tell what it was. In a moment the tables and desks rocked and tossed, wine cups overturned, and the beams, rafters, and pillars creaked and snapped as they broke. We looked at each other and turned pale. Only after a long while did we realize it was an earthquake, and each of us rushed outside. We saw towers and houses fall and rise again; the crash of collapsing walls and roofs mingled with the wailing of children and women into an uproar like a boiling cauldron. People were too dizzy to stand; they sat on the ground and rolled with it as it turned. The river surged and spilled over by more than ten feet, and the clamor filled the whole city. After a while longer, things began to settle. Looking out into the streets, one saw men and women gathered naked, eagerly telling each other what had happened, quite forgetting that they were not dressed.

6.3 Chile earthquake

At about 3 p.m. local time on May 22, 1960, a magnitude 9.5 earthquake struck Chile. From May 21 until May 30, the country was hit by a succession of earthquakes; during the sequence, six dormant volcanoes erupted again and three new ones appeared. The magnitude 9.5 shock triggered the largest tsunami of the 20th century and is the largest earthquake ever recorded by instruments. It killed more than 2,000 people in Chile. Days later, the earthquake's energy had crossed the Pacific: a tsunami struck the western Pacific coast, causing serious damage to the eastern coastal areas of Japan and the Philippines.

Copyright notice
This article was written by [Tianyuan prodigal son]. Please include a link to the original when reposting. Thank you.
