How to deal with "dirty, messy and poor" Excel data? Click here to learn how to use Python to standardize Excel table data (data cleaning)

SunriseCai 2020-11-13 11:28:31


This blog is just where I record articles in my spare time and publish them for readers to read. If there is any infringement, please let me know and I will delete it.
This article is entirely original, with no reference to or plagiarism of anyone else's articles. I insist on originality!!

The demo data is attached at the beginning of the article; readers can download it as needed, free of charge (0 points)!!!


1. Preface

Hello. This is [Data analysis] Python helps you standardize Excel table data (data cleaning). I am SunriseCai.

This article mainly introduces how to use Python to standardize Excel table data; it is a simple data cleaning exercise!!!

I believe it can help some readers organize their Excel sheets effectively.


2. Data cleaning concept

The following figure is taken from Baidu Encyclopedia: data cleaning.
[Figure: Baidu Encyclopedia's definition of data cleaning]
To sum up, data cleaning comes down to the following four characteristics:

  1. Integrity
    . If there are missing values (NaN), they need to be filled in.
  2. Rationality
    . If someone's age is 200 years or their weight is 800 kg, it is obviously unreasonable.
  3. Uniqueness
    . If individual records are duplicated, they need to be deduplicated.
  4. Uniformity
    . If units are mixed, such as kg and pounds, they need to be converted to a consistent unit.

The sections below use code to handle each of these four characteristics.
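As a quick preview, here is a minimal sketch, using a small made-up table (not the article's data), of what each of the four characteristics looks like in pandas:

import pandas as pd

# A tiny made-up table with all four kinds of problems
toy = pd.DataFrame({
    'name':   ['Kobe', 'Kobe', 'Shaq', None],
    'age':    [41, 41, 200, None],           # an age of 200 is obviously unreasonable
    'weight': ['96kg', '96kg', '325lb', None],
})
print(toy.isna().sum())                  # integrity: missing values per column
print(toy[toy['age'] > 120])             # rationality: flag impossible ages
print(toy.duplicated().sum())            # uniqueness: count fully duplicated rows
print(toy['weight'].str[-2:].unique())   # uniformity: mixed units ('kg' / 'lb')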

3. Data presentation

The sample data is shown in the figure :

[Figure: the raw sample data]
The picture above looks completely baffling!! There are no column labels, so what is it all about??? This is truly messy, unlabeled data!!!

In fact, this is a worksheet of player data.

Column          Meaning         Column name
First column    Full name       name
Second column   Height          height
Third column    Weight          weight
Fourth column   Working days    working
Fifth column    Wages           salary

The stars' heights and weights are real; I made up the working days and wages...

Look at the name column: some first letters are uppercase and some are lowercase. The units for height and weight are inconsistent, and there are also blank rows, null values (NaN), non-ASCII characters and so on!!! It really is painful to look at.

A special note: English names are used here just to illustrate the case better!! It is not about worshiping foreign things!!

Also!! You need to give each column a name, otherwise the subsequent operations cannot be carried out.
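The read_excel call used later assumes the header row has already been added in the file. If your file has no header row, pandas can also assign the column names while reading it; a minimal sketch under that assumption, using the names from the table above:

import pandas as pd

# header=None tells pandas the file has no header row;
# names supplies the column names listed in the table above.
df = pd.read_excel('data.xls', header=None,
                   names=['name', 'height', 'weight', 'working', 'salary'])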

4. Code execution

The main module used here is pandas; only pandas is needed.
If you don't understand what a piece of code means, please read on patiently; all the demo code in the article is explained further below!!!


Install the module

pip install pandas

Import the module

import pandas as pd

Here, the Excel file is named data.xls.

# Load the Excel file
df = pd.read_excel('data.xls')
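After loading, a quick look at the shape, the inferred column types and the first few rows helps confirm the import worked as expected:

print(df.shape)    # number of rows and columns
print(df.dtypes)   # the type pandas inferred for each column
print(df.head())   # preview of the first five rows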

4.1 Integrity

  • Delete blank rows
df.dropna(how='all', inplace=True)
  • Fill in missing values (two options: the column average or the most frequent value)
# Fill the working and salary columns with each column's rounded average
df['working'].fillna(round(df['working'].mean()), inplace=True)
df['salary'].fillna(round(df['salary'].mean()), inplace=True)
# Alternative: fill with the most frequent value instead
# work_maxf = df['working'].value_counts().index[0]
# df['working'].fillna(work_maxf, inplace=True)
# salary_maxf = df['salary'].value_counts().index[0]
# df['salary'].fillna(salary_maxf, inplace=True)
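To confirm the blank rows are gone and the two numeric columns were filled, an optional sanity check of the remaining missing values:

# Columns that should now be complete ought to show 0 here
print(df.isna().sum())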

4.2 Rationality

  • The entry for Shaquille O'Neal appears to contain non-ASCII characters, which need to be removed
# Use a regular expression to delete non-ASCII characters
df['name'].replace({r'[^\x00-\x7F]+': ''}, regex=True, inplace=True)
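For intuition, here is the same regular expression applied outside pandas to a made-up string (the exact cell contents in your file may differ):

import re

raw = "Shaquille Oneal 大鲨鱼"                    # hypothetical example value
print(re.sub(r'[^\x00-\x7F]+', '', raw).strip())  # -> 'Shaquille Oneal'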

4.3 Uniqueness

  • Some rows are repeated and need to be deleted
# Delete rows where the three fields name, height and weight are all identical; keep the first occurrence
df.drop_duplicates(['name', 'height', 'weight'], inplace=True)
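To inspect which rows count as duplicates before removing them, run something like the following ahead of the drop_duplicates call above (a small optional check):

# keep='first' flags every occurrence after the first one
dup_mask = df.duplicated(subset=['name', 'height', 'weight'], keep='first')
print(df[dup_mask])                       # the rows that drop_duplicates would remove
print(dup_mask.sum(), 'duplicate rows')   # how many of them there are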

4.4 Uniformity

  • Normalize name capitalization (first letter of each word uppercase)
# title() capitalizes the first letter of each word; upper() converts to all uppercase; lower() to all lowercase
df['name'] = df['name'].str.title()
  • Convert the height unit to centimetres (cm)
for index, data in df.iterrows():                   # 1
    height = data['height']
    if 'cm' not in height:                          # 2
        height = round(float(height[:-1]) * 100)    # 3
        df.at[index, 'height'] = f'{height}cm'      # 4
## Code interpretation:
1 -- iterrows() returns an (index, row) tuple for each row: the index and the row data
2 -- Check whether 'cm' is already in the 'height' string
3 -- Heights in 'm' are floating-point values; remove the trailing 'm' and multiply by 100 to convert to cm
4 -- df.at sets the value at a given position, e.g. row 0, column 'a' is df.at[0, 'a']

  • Convert the weight unit to kilograms (kg)
rows_with_lb = df['weight'].str.contains('lb', na=False)   # 1
for index, data in df[rows_with_lb].iterrows():            # 2
    weight = int(float(data['weight'][:-2]) / 2.2)         # 3
    df.at[index, 'weight'] = f'{weight}kg'                 # 4
## Code interpretation:
1 -- Select the rows in the weight column whose unit is lb
2 -- iterrows() returns an (index, row) tuple for each row: the index and the row data
3 -- Strip the trailing 'lb' (the last two characters) and convert pounds to kilograms: kg = pounds / 2.2
4 -- df.at sets the value at a given position, e.g. row 0, column 'a' is df.at[0, 'a']
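The two row-by-row loops above are easy to follow; on a larger table, the same conversions can also be done with vectorized string operations. This is only a sketch and assumes the raw values look like '2.16m' / '216cm' and '325lb' / '147kg':

# Heights: rows not already in cm are treated as metres and converted
in_metres = ~df['height'].str.contains('cm', na=False)
df.loc[in_metres, 'height'] = (
    df.loc[in_metres, 'height'].str.extract(r'([\d.]+)', expand=False)
      .astype(float).mul(100).round().astype(int).astype(str) + 'cm'
)

# Weights: rows in lb are converted to kg (kg = pounds / 2.2)
in_pounds = df['weight'].str.contains('lb', na=False)
df.loc[in_pounds, 'weight'] = (
    df.loc[in_pounds, 'weight'].str.extract(r'([\d.]+)', expand=False)
      .astype(float).div(2.2).round().astype(int).astype(str) + 'kg'
)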

4.5 Processed data sheet

The processed data table is shown below. Doesn't it look much better!!!
[Figure: the cleaned data]

4.6 Save the processed data

# Save as an Excel file
df.to_excel('clean_data.xls')
# Save as a CSV file
df.to_csv('clean_data.csv')
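One caveat: newer pandas releases no longer ship the xlwt engine that writes legacy .xls files, so if the .xls call above fails on your setup, writing .xlsx and adding index=False (to keep the DataFrame index out of the file) is a common workaround:

# Fallback if your pandas version cannot write .xls anymore
df.to_excel('clean_data.xlsx', index=False)
df.to_csv('clean_data.csv', index=False)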

4.7 Summary

Looking back, these are the operations this article performed on the sample data:

  1. Deleted blank rows
  2. Filled in missing values
  3. Deleted non-ASCII characters
  4. Deleted duplicate rows
  5. Normalized the capitalization of names
  6. Converted the height unit to centimetres (cm)
  7. Converted the weight unit to kilograms (kg)
  8. Finally, saved the cleaned data in the file format of your choice.

Admittedly, this article cannot serve as a formal data cleaning tutorial, but it still has some reference value!!!

With that, I'm sure you can now clean up this kind of messy, unlabeled data yourself.


5. Consolidated pandas code used in this article

5.1 Delete blank rows

df.dropna(how='all', inplace=True)  # delete blank rows

5.2 Delete duplicate rows

The argument is the list of columns used to decide whether rows count as duplicates.
By default the first occurrence of duplicated data is kept; keep specifies which occurrence to keep: 'first' keeps the first, 'last' keeps the last, and False deletes all duplicates.

# Delete duplicate rows
df.drop_duplicates(['name', 'height', 'weight'], keep='last', inplace=True)

5.3 Fill in empty values

# Fill the salary column with its rounded average
df['salary'].fillna(round(df['salary'].mean()), inplace=True)
# Or fill the salary column with its most frequent value
salary_maxf = df['salary'].value_counts().index[0]
df['salary'].fillna(salary_maxf, inplace=True)

5.4 Find rows that contain a given string

'lb' queries for rows whose value contains 'lb';
na=False makes missing values count as non-matches instead of returning NaN.

rows_with_lb = df['weight'].str.contains('lb', na=False)
print(df[rows_with_lb])

5.5 Replace characters

df.replace can replace values in the whole DataFrame, or in a single column.

regex defaults to False; when regex=True the pattern is treated as a regular expression, otherwise the whole value must match exactly.

5.5.1 Replace a single character in a column

df['name'].replace('xxx', '1', regex=True, inplace=True)

5.5.2 Replace more than one character in a column

A dictionary is used to match multiple patterns at once.

# Replace all non-ASCII characters with an empty string
df['name'].replace({r'[^\x00-\x7F]+': ''}, regex=True, inplace=True)

5.6 Get the value of a single cell

Purpose: get the value at a given position, for example row 0, column 'a':

data = df.at[0, 'a']  # i.e. index=0, column='a'

5.7 Iterate over the rows of a DataFrame

iterrows() yields an (index, row) tuple for each row: the index and the row data.

Usually two variables are used to receive the result: the first is the index, the second is the row data.

for index, value in df.iterrows():
    print(index, value)  # a single column can be read with e.g. value['salary']

5.8 Split one column and generate new columns

expand expands the split strings into separate columns; the default is False.

df[['first_name', 'last_name']] = df['name'].str.split(expand=True)
df.drop('name', axis=1, inplace=True)  # delete the name column; axis=1 means drop a column
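A small made-up usage example: without a limit on the split, a name with more than two words would produce more than two columns, so n=1 keeps it to exactly two:

import pandas as pd

demo = pd.DataFrame({'name': ['Kobe Bryant', 'Earvin Magic Johnson']})
# n=1 performs at most one split, so every row yields exactly two columns
demo[['first_name', 'last_name']] = demo['name'].str.split(n=1, expand=True)
print(demo)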

6. Closing words

Okay, that's it for this share.
If you have any questions, please leave a comment.
You are welcome to join the QQ group: 648696280.

Copyright notice
This article was written by [SunriseCai]. Please include a link to the original when reprinting. Thank you.
