Handwritten machine learning algorithms in Python -- GBDT gradient boosting classification

2020-11-13


Gradient boosting trains a series of weak learners, where each learner is fitted to the pseudo-residuals of the previous learner (instead of to y), in order to improve the performance of the algorithm.

This is how Wikipedia describes gradient boosting:

Gradient boosting is a machine learning technique for regression and classification problems. It produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees; when decision trees are used as the weak learners, the result is called gradient boosted trees (GBT or GBDT). Like other boosting methods, it builds the model in stages, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.

Necessary knowledge

1. Logistic regression
2. Linear regression
3. Gradient descent
4. Decision trees
5. Gradient boosting regression

After reading this article, you will learn:

1. How gradient boosting is applied to classification
2. How to write a gradient boosting classifier from scratch

Algorithm

The figure below gives a good overview of the gradient boosting classification algorithm.

[Figure: overview of gradient boosting classification, from the YouTube channel StatQuest]

The first part of the figure is a stump whose value is the log of odds of y; we denote it $l$. It is followed by a series of trees. When the later trees are trained, their target is not y itself but the residual of y.

$$\text{residual} = \text{true value} - \text{predicted value}$$

This figure is a little more complicated than the one for gradient boosting regression. The green part is the residuals, the red part is called $\gamma$, and the black part is the learning rate. Here $\gamma$ is not simply the average of the residuals in a leaf, and $\gamma$ is what is used to update $l$.


Procedure

Step 1: Compute the log of odds $l_0$; this is the first prediction for y. Here $n_1$ is the number of samples with y=1 and $n_0$ is the number of samples with y=0.

$$l_0(x)=\log \frac{n_1}{n_0}$$

For each $x_i$, the probability is:

$$p_{0i}=\frac{e^{l_{0i}}}{1+e^{l_{0i}}}$$

The prediction is:

$$f_{0i}=\begin{cases} 0 & p_{0i}<0.5 \\ 1 & p_{0i}\geq 0.5 \end{cases}$$

Step 2: for m = 1 to M:

  • Step 2.1: Calculate the pseudo-residuals:

$$r_{im}=y_i-p_{(m-1)i}$$

  • Step 2.2: Fit a regression tree $t_m(x)$ to the pseudo-residuals, and identify its terminal leaf nodes $R_{jm}$ for $j=1...J_m$.

  • Step 2.3: Calculate the $\gamma$ of each leaf node:

$$\gamma_{jm}=\frac{\sum_{x_i\in R_{jm}} r_{im}}{\sum_{x_i\in R_{jm}} p_{(m-1)i}\,(1-p_{(m-1)i})}$$

  • Step 2.4: Update $l$, $p$ and $f$, where $\alpha$ is the learning rate:
    $$l_m(x)=l_{m-1}(x)+\alpha \gamma_m$$

$$p_{mi}=\frac{e^{l_{mi}}}{1+e^{l_{mi}}}$$

$$f_{mi}=\begin{cases} 0 & p_{mi}<0.5 \\ 1 & p_{mi}\geq 0.5 \end{cases}$$
Step 3: Output $f_M(x)$.
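Putting Steps 1 to 3 together, the whole procedure fits in a short class. The sketch below is only an illustration, not the notebook code shown later in this post: the class name GradientBoostingClassifierScratch is made up, sklearn's DecisionTreeRegressor is assumed as the weak learner, and the defaults (5 rounds, learning rate 0.8, depth-1 stumps) simply mirror the worked example further down.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

class GradientBoostingClassifierScratch:
    """Minimal gradient boosting classifier following Steps 1-3 above."""

    def __init__(self, n_iterations=5, learning_rate=0.8, max_depth=1):
        self.n_iterations = n_iterations
        self.learning_rate = learning_rate
        self.max_depth = max_depth

    def fit(self, X, y):
        y = np.asarray(y, dtype=float)
        # Step 1: initial log of odds l_0 = log(n1 / n0)
        self.l0 = np.log(y.sum() / (len(y) - y.sum()))
        l = np.full(len(y), self.l0)
        self.stumps, self.leaf_gammas = [], []
        for m in range(self.n_iterations):                      # Step 2
            p = np.exp(l) / (1 + np.exp(l))
            r = y - p                                            # Step 2.1: pseudo-residuals
            stump = DecisionTreeRegressor(max_depth=self.max_depth).fit(X, r)  # Step 2.2
            leaves = stump.apply(X)
            gammas = {}
            for leaf in np.unique(leaves):                       # Step 2.3: per-leaf gamma
                mask = leaves == leaf
                gammas[leaf] = r[mask].sum() / (p[mask] * (1 - p[mask])).sum()
            # Step 2.4: update the log of odds
            l = l + self.learning_rate * np.array([gammas[leaf] for leaf in leaves])
            self.stumps.append(stump)
            self.leaf_gammas.append(gammas)
        return self

    def predict_proba(self, X):
        l = np.full(len(X), self.l0)
        for stump, gammas in zip(self.stumps, self.leaf_gammas):
            leaves = stump.apply(X)
            l = l + self.learning_rate * np.array([gammas[leaf] for leaf in leaves])
        return np.exp(l) / (1 + np.exp(l))

    def predict(self, X):                                        # Step 3: output f_M(x)
        return (self.predict_proba(X) >= 0.5).astype(int)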


(Optional) Deriving gradient boosting classification from gradient boosting

The simplified procedure above is enough for writing the gradient boosting classifier by hand. If you have spare energy, you can follow along as we derive gradient boosting classification (GBC) from gradient boosting (GB).

First let's look at the steps of GB.


Steps of the gradient boosting algorithm

Input: training data $\{(x_i, y_i)\}_{i=1}^{n}$, a differentiable loss function $L(y, F(x))$, and the number of iterations $M$.

Algorithm :

Step 1: Start the algorithm with a constant $F_0(x)$ that satisfies:

$$F_0(x)=\underset{\gamma}{\operatorname{argmin}}\sum_{i=1}^{n}L(y_i, \gamma)$$

Step 2: for m in 1 to M:

  • Step 2.1: Calculate the pseudo-residuals:

$$r_{im}=-\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x)=F_{m-1}(x)}$$

  • Step 2.2: Fit a weak learner $h_m(x)$ to the pseudo-residuals and determine the terminal regions $R_{jm}$ ($j=1...J_m$).

  • Step 2.3: For each terminal region (that is, each leaf), calculate $\gamma$:

$$\gamma_{jm}=\underset{\gamma}{\operatorname{argmin}}\sum_{x_i \in R_{jm}}L(y_i, F_{m-1}(x_i)+\gamma)$$

  • Step 2.4: Update the model (with learning rate $\alpha$):
    $$F_m(x)=F_{m-1}(x)+\alpha\gamma_m$$

Step 3: Output $F_M(x)$.
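To make these generic steps concrete, here is a minimal, loss-agnostic sketch of the loop. Everything named here is illustrative rather than taken from any library: gradient_boost, negative_gradient and leaf_gamma are hypothetical hooks supplied by the caller, and sklearn's DecisionTreeRegressor stumps stand in for the weak learner $h_m$.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, F0, negative_gradient, leaf_gamma, M=100, learning_rate=0.1):
    """Generic gradient boosting loop (Steps 1-3 above).

    F0                      -- constant initial prediction          (Step 1)
    negative_gradient(y, F) -- pseudo-residuals r_m                  (Step 2.1)
    leaf_gamma(y, F)        -- optimal gamma for one leaf's samples  (Step 2.3)
    """
    F = np.full(len(y), F0, dtype=float)                         # Step 1
    stages = []
    for m in range(M):                                           # Step 2
        r = negative_gradient(y, F)                              # Step 2.1
        h = DecisionTreeRegressor(max_depth=1).fit(X, r)         # Step 2.2
        leaves = h.apply(X)
        gammas = {leaf: leaf_gamma(y[leaves == leaf], F[leaves == leaf])
                  for leaf in np.unique(leaves)}                 # Step 2.3
        F = F + learning_rate * np.array([gammas[leaf] for leaf in leaves])  # Step 2.4
        stages.append((h, gammas))
    return F, stages                                             # Step 3

# For the classification case derived below, with p = exp(F) / (1 + exp(F)):
#   negative_gradient(y, F) = y - p
#   leaf_gamma(y, F)        = sum(y - p) / sum(p * (1 - p))
#   F0                      = log(mean(y) / (1 - mean(y)))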


Loss function

To go from gradient boosting to gradient boosting classification, we need a loss function to plug into Step 1, Step 2.1 and Step 2.3. Here we use the (negative) log likelihood as the loss function.

$$L(y, F(x))=-\sum_{i=1}^{N}\left(y_i\log(p) + (1-y_i)\log(1-p)\right)$$

This is a function of the probability p, not of the log of odds $l$, so we need to rewrite it.

Let's take the summand and transform it:

$$
\begin{aligned}
-(y\log(p)+(1-y)\log(1-p)) &= -y\log(p)-(1-y)\log(1-p) \\
&= -y\log(p)-\log(1-p)+y\log(1-p) \\
&= -y\left(\log(p)-\log(1-p)\right)-\log(1-p) \\
&= -y\log\left(\frac{p}{1-p}\right)-\log(1-p) \\
&= -y\log(odds)-\log(1-p)
\end{aligned}
$$
Because

$$
\begin{aligned}
\log(1-p)&=\log\left(1-\frac{e^{\log(odds)}}{1+e^{\log(odds)}}\right) \\
&=\log\left(\frac{1+e^l}{1+e^l}-\frac{e^l}{1+e^l}\right) \\
&=\log\left(\frac{1}{1+e^l}\right) \\
&=\log(1)-\log(1+e^l) \\
&=-\log(1+e^{\log(odds)})
\end{aligned}
$$

Substituting this back in, we obtain

$$-(y\log(p)+(1-y)\log(1-p)) =-y\log(odds)+\log(1+e^{\log(odds)})$$

Finally, we get the loss function expressed in terms of $l$:

$$L=-\sum_{i=1}^{N}\left(y\,l-\log(1+e^l)\right)$$
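As a quick sanity check, we can confirm numerically that the two forms of the loss agree for arbitrary values of y and l (the values below are just made up for the check):

import numpy as np

y = np.array([1, 1, 0, 0, 1, 1])
l = np.array([0.5, -1.2, 0.3, 2.0, -0.7, 1.1])   # arbitrary log-of-odds values
p = np.exp(l) / (1 + np.exp(l))

loss_in_p = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
loss_in_l = -np.sum(y * l - np.log(1 + np.exp(l)))
print(np.isclose(loss_in_p, loss_in_l))   # True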

Step 1:

To find the minimum of the loss function, we only need to set its first derivative equal to 0.
$$
\begin{aligned}
\frac{\partial L(y, F_0)}{\partial F_0}
&= -\frac{\partial \sum_{i=1}^{N}\left(y_i\log(odds)-\log(1+e^{\log(odds)})\right)}{\partial \log(odds)} \\
&= -\sum_{i=1}^{N} y_i+\sum_{i=1}^{N} \frac{\partial \log(1+e^{\log(odds)})}{\partial \log(odds)} \\
&= -\sum_{i=1}^{N} y_i+\sum_{i=1}^{N} \frac{1}{1+e^{l}}\,\frac{\partial (1+e^l)}{\partial l} \\
&= -\sum_{i=1}^{N} y_i+\sum_{i=1}^{N} \frac{1}{1+e^{l}}\,\frac{\partial e^l}{\partial l} \\
&= -\sum_{i=1}^{N} y_i+\sum_{i=1}^{N} \frac{e^l}{1+e^l} \\
&= -\sum_{i=1}^{N} y_i+N\,\frac{e^l}{1+e^l} = 0
\end{aligned}
$$
We get ($p$ here is the observed proportion of positive samples):
$$
\begin{aligned}
\frac{e^l}{1+e^l}&=\frac{\sum_{i=1}^{N}y_i}{N}=p \\
e^l&=p+p\,e^l \\
(1-p)\,e^l&=p \\
e^l&=\frac{p}{1-p} \\
l=\log(odds)&=\log\left(\frac{p}{1-p}\right)
\end{aligned}
$$

From here on, we simply write $l$ for $\log(odds)$.

Step 2.1

$$r_{im}=-\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x)=F_{m-1}(x)}$$

$$=-\left[\frac{\partial \left(-\left(y_i\log(p)+(1-y_i)\log(1-p)\right)\right)}{\partial F_{m-1}(x_i)}\right]_{F(x)=F_{m-1}(x)}$$

Rewriting the loss in terms of $l$ as above and differentiating, we get

$$=y_i-p_{(m-1)i}$$

where $p_{(m-1)i}=\frac{e^{F_{m-1}(x_i)}}{1+e^{F_{m-1}(x_i)}}$ is the probability implied by the current log of odds.

Step 2.3:

$$\gamma_{jm}=\underset{\gamma}{\operatorname{argmin}}\sum_{x_i \in R_{jm}}L(y_i, F_{m-1}(x_i)+\gamma)$$

Plugging in the loss function:

$$
\begin{aligned}
\gamma_{jm}&=\underset{\gamma}{\operatorname{argmin}}\sum_{x_i \in R_{jm}}L(y_i, F_{m-1}(x_i)+\gamma) \\
&=\underset{\gamma}{\operatorname{argmin}}\sum_{x_i \in R_{jm}}\left(-y_i\left(F_{m-1}+\gamma\right)+\log\left(1+e^{F_{m-1}+\gamma}\right)\right)
\end{aligned}
$$

Let's work on the term inside the sum.

$$-y_i\left(F_{m-1}+\gamma\right)+\log\left(1+e^{F_{m-1}+\gamma}\right)$$

We use a second-order Taylor expansion:

$$L(y,F+\gamma) \approx L(y, F)+\frac{d L(y, F)}{d F}\,\gamma+\frac{1}{2}\,\frac{d^2 L(y, F)}{d F^2}\,\gamma^2$$

Derivation:

$$
\begin{aligned}
\because\ & \frac{d L(y, F+\gamma)}{d\gamma} \approx \frac{d L(y, F)}{d F}+\frac{d^2 L(y, F)}{d F^2}\,\gamma = 0 \\
\therefore\ & \gamma=-\frac{d L(y, F)/d F}{d^2 L(y, F)/d F^2} \\
\therefore\ & \gamma = \frac{y-p}{\dfrac{d^2\left(-y\,l+\log(1+e^l)\right)}{d l^2}} \\
\therefore\ & \gamma = \frac{y-p}{\dfrac{d\left(-y+\frac{e^l}{1+e^l}\right)}{d l}} \\
\therefore\ & \gamma = \frac{y-p}{\dfrac{d}{d l}\left(\frac{e^l}{1+e^l}\right)}
\end{aligned}
$$

(using the product rule $(ab)'=a'b+ab'$ on $e^l\cdot\frac{1}{1+e^l}$)

$$
\begin{aligned}
\therefore\ \gamma&=\frac{y-p}{\frac{e^l}{1+e^l}-e^l\cdot\frac{e^l}{(1+e^l)^2}} \\
&=\frac{y-p}{\frac{e^l(1+e^l)-(e^l)^2}{(1+e^l)^2}} \\
&=\frac{y-p}{\frac{e^l}{(1+e^l)^2}} \\
&=\frac{y-p}{p(1-p)}
\end{aligned}
$$

Summing over the samples in a leaf, we finally get $\gamma$ as follows:

$$\gamma = \frac{\sum (y-p)}{\sum p(1-p)}$$
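A quick finite-difference check confirms the two derivatives used above, namely $dL/dl = -(y-p)$ and $d^2L/dl^2 = p(1-p)$, for a single sample (the values of y, l and eps below are arbitrary):

import numpy as np

def sample_loss(y, l):
    # per-sample loss L = -y*l + log(1 + e^l)
    return -y * l + np.log(1 + np.exp(l))

y, l, eps = 1.0, 0.4, 1e-4
p = np.exp(l) / (1 + np.exp(l))

grad_fd = (sample_loss(y, l + eps) - sample_loss(y, l - eps)) / (2 * eps)
hess_fd = (sample_loss(y, l + eps) - 2 * sample_loss(y, l) + sample_loss(y, l - eps)) / eps**2

print(np.isclose(grad_fd, -(y - p)))     # True: dL/dl   = -(y - p)
print(np.isclose(hess_fd, p * (1 - p)))  # True: d2L/dl2 = p(1 - p)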

Handwritten code

First, create a table of the data. Here we want to predict whether a person loves the movie Troll 2.

| no | name | likes_popcorn | age | favorite_color | loves_troll2 |
|----|------|---------------|-----|----------------|--------------|
| 0 | Alex | 1 | 10 | Blue | 1 |
| 1 | Brunei | 1 | 90 | Green | 1 |
| 2 | Candy | 0 | 30 | Blue | 0 |
| 3 | David | 1 | 30 | Red | 0 |
| 4 | Eric | 0 | 30 | Green | 1 |
| 5 | Felicity | 0 | 10 | Blue | 1 |
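This table can be built as a pandas DataFrame. The exact feature preparation is not shown in the post, so the sketch below makes an assumption: X uses only the two numeric columns (likes_popcorn and age), which is enough to reproduce the tree splits shown in the iterations below.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'name': ['Alex', 'Brunei', 'Candy', 'David', 'Eric', 'Felicity'],
    'likes_popcorn': [1, 1, 0, 1, 0, 0],
    'age': [10, 90, 30, 30, 30, 10],
    'favorite_color': ['Blue', 'Green', 'Blue', 'Red', 'Green', 'Blue'],
    'loves_troll2': [1, 1, 0, 0, 1, 1],
})

# assumed feature matrix and target
X = df[['likes_popcorn', 'age']]
y = df['loves_troll2'].values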

Step 1: Compute $l_0$, $p_0$, $f_0$

# Step 1: initial log of odds, from n1 = 4 samples with y=1 and n0 = 2 with y=0
log_of_odds0 = np.log(4 / 2)
probability0 = np.exp(log_of_odds0) / (np.exp(log_of_odds0) + 1)
print(f'the log_of_odds is : {log_of_odds0}')
print(f'the probability is : {probability0}')
predict0 = 1  # probability0 >= 0.5, so the first prediction is 1 for everyone
print(f'the prediction is : 1')
n_samples = 6
# per-sample loss at iteration 0 (y is the loves_troll2 column)
loss0 = -(y * np.log(probability0) + (1 - y) * np.log(1 - probability0))

Output

the log_of_odds is : 0.6931471805599453
the probability is : 0.6666666666666666
the prediction is : 1

Step 2

Let's first define a function, which we call iteration; running the algorithm is then just a for loop over it. Splitting it out like this makes each step easier to explain.
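Before defining iteration, some bookkeeping is needed that the post does not show explicitly: the learning rate and the arrays that hold the per-iteration state. The sketch below assumes 5 boosting rounds and a learning rate of 0.8 (inferred from the updates in the tables that follow):

import graphviz
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
from IPython.display import display

n_iterations = 5        # assumed number of boosting rounds
learning_rate = 0.8     # inferred from the l updates shown below
max_leaves = 4          # upper bound on leaves per stump, for the gamma table

log_of_odds = np.zeros((n_iterations + 1, n_samples))
probabilities = np.zeros((n_iterations + 1, n_samples))
predictions = np.zeros((n_iterations + 1, n_samples))
score = np.zeros(n_iterations + 1)
loss = np.zeros(n_iterations + 1)
residuals = np.zeros((n_iterations, n_samples))
gamma = np.zeros((n_iterations, n_samples))
gamma_value = np.zeros((n_iterations, max_leaves))
trees = []

# state at iteration 0, from Step 1 above
log_of_odds[0] = log_of_odds0
probabilities[0] = probability0
predictions[0] = predict0
score[0] = np.sum(predictions[0] == y) / n_samples
loss[0] = np.sum(-y * log_of_odds[0] + np.log(1 + np.exp(log_of_odds[0])))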

def iteration(i):
    # Step 2.1: calculate the residuals
    residuals[i] = y - probabilities[i]
    # Step 2.2: fit a regression tree (a stump) to the residuals
    dt = DecisionTreeRegressor(max_depth=1, max_leaf_nodes=3)
    dt = dt.fit(X, residuals[i])
    trees.append(dt.tree_)
    # Step 2.3: calculate gamma for each leaf
    leaf_indices = dt.apply(X)
    print(leaf_indices)
    unique_leaves = np.unique(leaf_indices)
    n_leaves = len(unique_leaves)
    for ileaf in range(n_leaves):
        leaf_index = unique_leaves[ileaf]
        n_leaf_samples = np.sum(leaf_indices == leaf_index)
        previous_probability = probabilities[i][leaf_indices == leaf_index]
        denominator = np.sum(previous_probability * (1 - previous_probability))
        # tree_.value holds the mean residual of the leaf; multiplying by the
        # sample count gives the sum of residuals (the gamma numerator)
        igamma = dt.tree_.value[leaf_index][0][0] * n_leaf_samples / denominator
        gamma_value[i][ileaf] = igamma
        print(f'for leaf {leaf_index}, we have {n_leaf_samples} related samples. and gamma is {igamma}')
    # map every sample to the gamma of its leaf
    gamma[i] = [gamma_value[i][np.where(unique_leaves == index)[0][0]] for index in leaf_indices]
    # Step 2.4: update F(x) (the log of odds), probabilities and predictions
    log_of_odds[i + 1] = log_of_odds[i] + learning_rate * gamma[i]
    probabilities[i + 1] = np.array([np.exp(odds) / (np.exp(odds) + 1) for odds in log_of_odds[i + 1]])
    predictions[i + 1] = (probabilities[i + 1] > 0.5) * 1.0
    score[i + 1] = np.sum(predictions[i + 1] == y) / n_samples
    loss[i + 1] = np.sum(-y * log_of_odds[i + 1] + np.log(1 + np.exp(log_of_odds[i + 1])))
    # show the state of this iteration
    new_df = df.copy()
    new_df.columns = ['name', 'popcorn', 'age', 'color', 'y']
    new_df[f'$p_{i}$'] = probabilities[i]
    new_df[f'$l_{i}$'] = log_of_odds[i]
    new_df[f'$r_{i}$'] = residuals[i]
    new_df[f'$\\gamma_{i}$'] = gamma[i]
    new_df[f'$l_{i+1}$'] = log_of_odds[i + 1]
    new_df[f'$p_{i+1}$'] = probabilities[i + 1]
    display(new_df)
    dot_data = tree.export_graphviz(dt, out_file=None, filled=True, rounded=True, feature_names=X.columns)
    graph = graphviz.Source(dot_data)
    display(graph)

Iteration 0

iteration(0)

Output :

[1 2 2 2 2 1]
for leaf 1, we have 2 related samples. and gamma is 1.5
for leaf 2, we have 4 related samples. and gamma is -0.7499999999999998

| no | name | popcorn | age | color | y | $p_0$ | $l_0$ | $r_0$ | $\gamma_0$ | $l_1$ | $p_1$ |
|----|------|---------|-----|-------|---|-------|-------|-------|------------|-------|-------|
| 0 | Alex | 1 | 10 | Blue | 1 | 0.666667 | 0.693147 | 0.333333 | 1.50 | 1.893147 | 0.869114 |
| 1 | Brunei | 1 | 90 | Green | 1 | 0.666667 | 0.693147 | 0.333333 | -0.75 | 0.093147 | 0.523270 |
| 2 | Candy | 0 | 30 | Blue | 0 | 0.666667 | 0.693147 | -0.666667 | -0.75 | 0.093147 | 0.523270 |
| 3 | David | 1 | 30 | Red | 0 | 0.666667 | 0.693147 | -0.666667 | -0.75 | 0.093147 | 0.523270 |
| 4 | Eric | 0 | 30 | Green | 1 | 0.666667 | 0.693147 | 0.333333 | -0.75 | 0.093147 | 0.523270 |
| 5 | Felicity | 0 | 10 | Blue | 1 | 0.666667 | 0.693147 | 0.333333 | 1.50 | 1.893147 | 0.869114 |

[Figure: the regression tree fitted in iteration 0]

Let's look at each step separately

Step 2.1: compute the residuals $y - p_0$.

Step 2.2: fit a regression tree to the residuals.

Step 2.3: compute $\gamma$ for each leaf.

  • For leaf 1, we have two samples (Alex and Felicity). $\gamma$ is: (1/3 + 1/3) / ((1 - 2/3)·(2/3) + (1 - 2/3)·(2/3)) = 1.5

  • For leaf 2, we have four samples. $\gamma$ is: (1/3 - 2/3 - 2/3 + 1/3) / (4·(1 - 2/3)·(2/3)) = -0.75

Step 2.4: update F(x).
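The two leaf values from Step 2.3 can be checked with a couple of lines of arithmetic (every sample starts with $p_0 = 2/3$):

# leaf 1: Alex and Felicity, residuals 1/3 each
gamma_leaf1 = (1/3 + 1/3) / (2 * (2/3) * (1 - 2/3))
# leaf 2: Brunei, Candy, David and Eric
gamma_leaf2 = (1/3 - 2/3 - 2/3 + 1/3) / (4 * (2/3) * (1 - 2/3))
print(gamma_leaf1, gamma_leaf2)   # approximately 1.5 and -0.75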

Iteration 1

iteration(1)

Output :

[1 2 1 1 1 1]
for leaf 1, we have 5 related samples. and gamma is -0.31564962030401844
for leaf 2, we have 1 related samples. and gamma is 1.9110594001952543

| no | name | popcorn | age | color | y | $p_1$ | $l_1$ | $r_1$ | $\gamma_1$ | $l_2$ | $p_2$ |
|----|------|---------|-----|-------|---|-------|-------|-------|------------|-------|-------|
| 0 | Alex | 1 | 10 | Blue | 1 | 0.869114 | 1.893147 | 0.130886 | -0.315650 | 1.640627 | 0.837620 |
| 1 | Brunei | 1 | 90 | Green | 1 | 0.523270 | 0.093147 | 0.476730 | 1.911059 | 1.621995 | 0.835070 |
| 2 | Candy | 0 | 30 | Blue | 0 | 0.523270 | 0.093147 | -0.523270 | -0.315650 | -0.159373 | 0.460241 |
| 3 | David | 1 | 30 | Red | 0 | 0.523270 | 0.093147 | -0.523270 | -0.315650 | -0.159373 | 0.460241 |
| 4 | Eric | 0 | 30 | Green | 1 | 0.523270 | 0.093147 | 0.476730 | -0.315650 | -0.159373 | 0.460241 |
| 5 | Felicity | 0 | 10 | Blue | 1 | 0.869114 | 1.893147 | 0.130886 | -0.315650 | 1.640627 | 0.837620 |

[Figure: the regression tree fitted in iteration 1]

For leaf 1, there are 5 samples. $\gamma$ is:

(0.130886 - 0.523270 - 0.523270 + 0.476730 + 0.130886) / (2·0.869114·(1 - 0.869114) + 3·0.523270·(1 - 0.523270)) = -0.3156498224562022

For leaf 2, there is 1 sample. $\gamma$ is:

0.476730 / (0.523270·(1 - 0.523270)) = 1.9110593001700842

Iteration 2

iteration(2)

Output

| no | name | popcorn | age | color | y | $p_2$ | $l_2$ | $r_2$ | $\gamma_2$ | $l_3$ | $p_3$ |
|----|------|---------|-----|-------|---|-------|-------|-------|------------|-------|-------|
| 0 | Alex | 1 | 10 | Blue | 1 | 0.837620 | 1.640627 | 0.162380 | 1.193858 | 2.595714 | 0.930585 |
| 1 | Brunei | 1 | 90 | Green | 1 | 0.835070 | 1.621995 | 0.164930 | -0.244390 | 1.426483 | 0.806353 |
| 2 | Candy | 0 | 30 | Blue | 0 | 0.460241 | -0.159373 | -0.460241 | -0.244390 | -0.354885 | 0.412198 |
| 3 | David | 1 | 30 | Red | 0 | 0.460241 | -0.159373 | -0.460241 | -0.244390 | -0.354885 | 0.412198 |
| 4 | Eric | 0 | 30 | Green | 1 | 0.460241 | -0.159373 | 0.539759 | -0.244390 | -0.354885 | 0.412198 |
| 5 | Felicity | 0 | 10 | Blue | 1 | 0.837620 | 1.640627 | 0.162380 | 1.193858 | 2.595714 | 0.930585 |

[Figure: the regression tree fitted in iteration 2 (/images/gradient_boosting_classification/tree2.svg)]

Iterations 3 and 4 are omitted...

iteration(3)
iteration(4)
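The accuracy and loss plots below can be reproduced from the score and loss arrays; a minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(score)
ax1.set_xlabel('iteration')
ax1.set_ylabel('accuracy')
ax2.plot(loss)
ax2.set_xlabel('iteration')
ax2.set_ylabel('loss')
plt.show()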

Accuracy :

[Figure: accuracy over the iterations]

Loss :

[Figure: loss over the iterations]

Code address :

https://github.com/EricWebsmith/machine_learning_from_scrach/blob/master/gradiant_boosting_classification.ipynb

References

Gradient boosting (Wikipedia)

Gradient Boost Part 3: Classification – StatQuest (YouTube)

Gradient Boost Part 4: Classification Details – StatQuest (YouTube)

sklearn.tree.DecisionTreeRegressor – scikit-learn 0.21.3 documentation

Understanding the decision tree structure – scikit-learn 0.21.3 documentation
