[Data Analysis] Predicting the Titanic survival rate with a Python decision tree

SunriseCai 2020-11-13 11:28:30
data analysis prediction titanic survival


This blog is just where I record articles in my spare time, published for readers only. If anything here infringes on your rights, please let me know and I'll delete it.


Preface

I've recently been studying Titanic survival prediction. It's a well-worn project: the classic entry-level data analysis case on the data science competition site Kaggle.

The figure below is referenced from the Kaggle Titanic survival prediction project.
[Figure: Kaggle Titanic survival prediction project page]

Article process

To keep this article to a reasonable length, the analysis of who is more likely to survive is not covered here; I'll write a separate article for that.

  1. An overview of the data

  2. Data cleaning

    2.1 Data preprocessing

    2.2 feature selection

  3. Decision tree modeling (for what a decision tree is, see Baidu Encyclopedia – Decision tree)

  4. Predicted results

  5. Decision tree visualization

1. An overview of the data

The data comes from the data science competition platform Kaggle: the Kaggle Titanic survival prediction project.

As shown in the figure below, click Download.

The download contains three files: gender_submission.csv, test.csv, and train.csv.

  • train.csv is the training set, used to analyze and build the machine learning model; it contains the survival information;
  • test.csv is the test set; it provides no survival information, and the goal of the project is to predict survival for the passengers in this file;
  • gender_submission.csv is a sample submission file and isn't needed for the modeling; you can ignore it for now.

[Figure: the Download button on the Kaggle data page]


An overview of the data in the two files:
[Figure: previews of train.csv and test.csv]
You can see the data is broadly the same; the only difference is that train.csv has an extra Survived column (i.e., whether the passenger survived).
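If you'd rather verify that in code than by eye, comparing the column sets of the two files works; a minimal sketch:

import pandas as pd

train_pd = pd.read_csv('train.csv')
test_pd = pd.read_csv('test.csv')
# The only column present in train but not in test should be Survived
print(set(train_pd.columns) - set(test_pd.columns))  # {'Survived'}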

What do the fields in the files mean?

Field        Definition                       Key
PassengerId  Passenger ID
Survived     Survival                         0 = No, 1 = Yes
Pclass       Ticket class                     1 = 1st, 2 = 2nd, 3 = 3rd
Name         Passenger name
Sex          Sex
Age          Age in years
SibSp        # of siblings / spouses aboard
Parch        # of parents / children aboard
Ticket       Ticket number
Fare         Passenger fare
Cabin        Cabin number
Embarked     Port of embarkation              C = Cherbourg, Q = Queenstown, S = Southampton

2. Data cleaning

2.1 Data preprocessing

Import the data:

import pandas as pd
test_pd = pd.read_csv('test.csv')
train_pd = pd.read_csv('train.csv')

View basic information about the dataset:

# View the number of rows and columns
print('Rows and columns of the train dataset:', train_pd.shape)
# Separator
print('- ' * 20)
# View the index dtype, column dtypes, non-null counts and memory usage
# (info() prints directly and returns None, hence the None in the output)
print(train_pd.info())
# Separator
print('- ' * 20)
# View the total number of missing values in each column
print(train_pd.isnull().sum())

Result:

  • You can see the train file has 891 rows and 12 columns;
  • the Age, Cabin, and Embarked fields all contain null values, which will need to be filled later.

Rows and columns of the train dataset: (891, 12)
- - - - - - - - - - - - - - - - - - - -
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype 
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
None
- - - - - - - - - - - - - - - - - - - -
PassengerId 0
Survived 0
Pclass 0
Name 0
Sex 0
Age 177
SibSp 0
Parch 0
Ticket 0
Fare 0
Cabin 687
Embarked 2
dtype: int64

Age can be filled with the mean:

# Fill NaN values with the mean age
train_pd['Age'].fillna(train_pd['Age'].mean(), inplace=True)
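If you're worried about outliers skewing the mean, the median is a common alternative; a sketch (either choice is a reasonable imputation here):

# Alternative: fill NaN ages with the median, which is less sensitive to outliers
train_pd['Age'].fillna(train_pd['Age'].median(), inplace=True)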

Cabin has too many missing values to impute sensibly, so fill them with the string 'unknown':

# Fill missing values with 'unknown'
train_pd['Cabin'].fillna('unknown', inplace=True)

Embarked is missing only 2 values; you can either drop those rows or fill them with the most frequent value.

# Count the occurrences of each Embarked value
print(train_pd['Embarked'].value_counts())

Result:

S    646
C    168
Q     77
Name: Embarked, dtype: int64

S occurs most often, so fill the missing values with 'S':

# Fill missing values with 'S'
train_pd['Embarked'].fillna('S', inplace=True)
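Rather than hardcoding 'S', you can fill with whatever value turns out to be most frequent; a sketch using mode():

# Equivalent fill without hardcoding: mode()[0] is the most common value (here 'S')
train_pd['Embarked'].fillna(train_pd['Embarked'].mode()[0], inplace=True)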

After the filling, train_pd.isnull().sum() shows that no null values remain.


2.2 Feature selection

The goal of the project is to predict passenger survival, so this step selects the features in the dataset that matter most for that prediction.

Apart from the target column (Survived), 11 features remain.
My (admittedly rough) reasoning:

  1. PassengerId just numbers the passengers and has no bearing on survival; it can be removed;
  2. Name is the passenger's name and contributes little to classification; it can be removed;
  3. Ticket is the ticket number, with no consistent naming pattern; it can be removed;
  4. Cabin is the cabin number; it has too many missing values, so remove it as well.

That leaves 7 features: Pclass, Sex, Age, SibSp, Parch, Fare, and Embarked. They may all relate to whether a passenger survived, but figuring out exactly how is the machine learning model's job!

(1) Drop the removable features identified above:

train_pd.drop(["PassengerId", "Cabin", "Name", "Ticket"], inplace=True, axis=1)

The data now contains only Survived plus the 7 selected features.
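You can check the remaining columns with head():

# Inspect the first few rows after dropping the four columns
print(train_pd.head())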


Some of the feature values are not numeric; to make the later steps easier, they need to be converted to numbers.
For the Sex field, represent male and female as 0 and 1.
For Embarked, represent S, C, and Q as 0, 1, and 2.

(2) Convert 'Sex' to a numeric type.

train_pd['Sex'] = train_pd['Sex'].map({'male': 0, 'female': 1}).astype(int)

(3) Convert 'Embarked' to a numeric type (use one method or the other).

# Method 1
train_pd['Embarked'] = train_pd['Embarked'].map({'S': 0, 'C': 1, 'Q': 2}).astype(int)
# Method 2
# Use unique() to turn the distinct Embarked values into a list,
# then replace each value with its index in that list.
# Note: the indices follow order of first appearance, so the mapping
# may differ from Method 1's.
labels = train_pd["Embarked"].unique().tolist()
train_pd["Embarked"] = train_pd["Embarked"].apply(lambda x: labels.index(x))
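A third option, not used in the rest of this article, is one-hot encoding, which avoids implying an artificial order among S, C, and Q; a sketch:

# One-hot encode Embarked into Embarked_C / Embarked_Q / Embarked_S columns
# (shown only as an alternative; the steps below assume the integer mapping above)
train_pd = pd.get_dummies(train_pd, columns=['Embarked'])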

After processing, every feature column is numeric.

3. Decision tree modeling

At this point, we just need to feed the feature matrix and the labels into the decision tree model.

The feature matrix must not include the target, so drop the Survived column here.

The classification labels are the Survived column.

# Classification labels
train_labels = train_pd["Survived"]
# Feature matrix (drop the Survived column)
train_pd = train_pd.drop(['Survived'], axis=1)

Import the decision tree module and train the model:

from sklearn.tree import DecisionTreeClassifier
# Build decision tree 
clf = DecisionTreeClassifier(criterion="entropy")
# Decision tree training 
clf.fit(train_pd, train_labels)
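As a quick sanity check, you can score the fitted tree on its own training data; an unpruned tree will fit it almost perfectly, which is exactly why cross-validation comes next:

# Accuracy on the training data itself; expect a value near 1.0 (overfitting)
print(clf.score(train_pd, train_labels))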

(For background, see Baidu Encyclopedia – Cross validation.)

Simply put, K-fold cross-validation gives a more reliable estimate of the decision tree's accuracy.
The cv parameter of the cross_val_score function sets how many folds the original data is split into, i.e. the K value; cv=10 is used here.

from sklearn.model_selection import cross_val_score

# Compute the 10 fold scores once, then report the spread
scores = cross_val_score(clf, train_pd, train_labels, cv=10)
print(scores.max())   # maximum
print(scores.min())   # minimum
print(scores.mean())  # mean
# 0.8539325842696629
# 0.6853932584269663
# 0.7744569288389513

You can then tune the parameters with GridSearchCV. Its whole purpose is automatic parameter tuning: give it candidate parameters and it returns the best score and the parameter combination that produced it. Interested readers can look it up in more depth.

Since I'm not familiar with it yet, I won't mislead anyone; I'll simply use someone else's tuned answer!

After GridSearchCV tuning, the highest accuracy comes from criterion='gini', max_depth=9, min_samples_leaf=5.
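For reference, a tuning run along those lines might look like the sketch below (the parameter grid is my assumption; the cited values came from someone else's search, not this exact code):

from sklearn.model_selection import GridSearchCV

# Candidate parameters to search over (assumed grid)
param_grid = {
    'criterion': ['gini', 'entropy'],
    'max_depth': range(3, 12),
    'min_samples_leaf': range(1, 10),
}
grid = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=10)
grid.fit(train_pd, train_labels)
# Best parameter combination and its mean cross-validated score
print(grid.best_params_, grid.best_score_)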

The improvement in accuracy is plain to see here.

clf = DecisionTreeClassifier(criterion='gini', max_depth=9, min_samples_leaf=5)
scores = cross_val_score(clf, train_pd, train_labels, cv=10)
print(scores.max())
print(scores.min())
print(scores.mean())
# 0.8876404494382022
# 0.7528089887640449
# 0.8317103620474408

# Refit the tuned tree on the full training set before predicting:
# cross_val_score only fits clones, it never fits clf itself
clf.fit(train_pd, train_labels)

4. Predicted results

Clean the test set in the same way as the training set:

# Import the test set
test_pd = pd.read_csv('test.csv')
# Fill in missing values
test_pd["Age"] = test_pd["Age"].fillna(test_pd["Age"].mean())
test_pd["Fare"] = test_pd["Fare"].fillna(test_pd["Fare"].mean())
test_pd['Embarked'] = test_pd['Embarked'].fillna('S')
# Drop the unwanted features
test_pd.drop(["PassengerId", "Cabin", "Name", "Ticket"], inplace=True, axis=1)
# Convert Sex and Embarked to numeric, matching the training set
test_pd['Sex'] = test_pd['Sex'].map({'male': 0, 'female': 1}).astype(int)
test_pd['Embarked'] = test_pd['Embarked'].map({'S': 0, 'C': 1, 'Q': 2}).astype(int)

Predict the results:

test_pd['Survived'] = clf.predict(test_pd)

test_pd now has a Survived column holding the predictions.
Now compare the survival rates:

# Training set survival rate
# (Survived was dropped from train_pd earlier, so use train_labels)
Survived = train_labels.value_counts(normalize=True)
print(f'Training set: death rate {Survived[0]:.3%}, survival rate {Survived[1]:.3%}')
# Test set survival rate
Survived = test_pd['Survived'].value_counts(normalize=True)
print(f'Test set: death rate {Survived[0]:.3%}, survival rate {Survived[1]:.3%}')

Result:

  • You can see the training set and test set are quite close.

Training set: death rate 61.616%, survival rate 38.384%
Test set: death rate 62.679%, survival rate 37.321%
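If you want to submit the predictions to Kaggle, they need to go into a file shaped like gender_submission.csv (PassengerId plus Survived); a minimal sketch reusing the predictions already stored in test_pd:

# Build a submission file in the gender_submission.csv format
submission = pd.DataFrame({
    'PassengerId': pd.read_csv('test.csv')['PassengerId'],
    'Survived': test_pd['Survived'],
})
submission.to_csv('submission.csv', index=False)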

5. Decision tree visualization

I'll fill this section in later.
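In the meantime, here is a minimal sketch with scikit-learn's built-in plot_tree (one possible approach, not necessarily what the follow-up article will use):

import matplotlib.pyplot as plt
from sklearn import tree

# Draw the fitted tree; limiting the displayed depth keeps the plot readable
plt.figure(figsize=(20, 10))
tree.plot_tree(clf, feature_names=list(train_pd.columns),
               class_names=['Died', 'Survived'], max_depth=3, filled=True)
plt.savefig('titanic_tree.png')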

Afterword

Titanic survivor prediction is the classic classification tree case; parts of this post reference that article.

That's all. My study hasn't gone very deep yet, so what I can share is still fairly shallow.

There are surely plenty of shortcomings in this article; please don't hesitate to point them out.

That's it for this post. These are purely personal notes, not a reference for systematic study!!!

Copyright notice
This article was written by [SunriseCai]. Please include a link to the original when reposting. Thanks.
