Python Machine Learning Handwritten Algorithm Series: Optimizers

By "There are several evidences" · 2020-11-13 12:40:55


In this article, we take simple linear regression as the running example and implement gradient descent (SGD), Momentum, Nesterov Accelerated Gradient, AdaGrad, RMSProp, Adam, and Nadam by hand.

Gradient descent

Let's first review gradient descent, using the first article of this series, "Python Machine Learning Handwritten Algorithm Series: Linear Regression", as the example.

The objective function:

$$y = f(\theta, x)$$

For linear regression, the objective function is:

$$y = ax + b + \varepsilon$$
$$\hat{y} = ax + b$$

The loss function:

$$J(\theta)$$

For linear regression, the loss function is:

$$J(a,b)=\frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$$

The optimization (update) function:

$$\theta = \theta - \alpha \frac{\partial J}{\partial \theta}$$

For univariate linear regression, the update rules are:

$$a = a - \alpha \frac{\partial J}{\partial a}$$

$$b = b - \alpha \frac{\partial J}{\partial b}$$

Here, $\frac{\partial J}{\partial a}$ and $\frac{\partial J}{\partial b}$ are:

$$\frac{\partial J}{\partial a} = \frac{1}{n}\sum_{i=1}^{n}x_i(\hat{y}_i-y_i)$$

$$\frac{\partial J}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)$$

Here $\hat{y}$ is the estimate of $y$, $\theta$ denotes the parameters, $J$ is the loss function, and $\alpha$ is the learning rate. The same notation is used throughout.
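
For completeness, these two gradients follow from the loss above by the chain rule:

$$\frac{\partial J}{\partial a} = \frac{\partial}{\partial a}\,\frac{1}{2n}\sum_{i=1}^{n}(y_i - a x_i - b)^2 = \frac{1}{n}\sum_{i=1}^{n}(a x_i + b - y_i)\,x_i = \frac{1}{n}\sum_{i=1}^{n}x_i(\hat{y}_i - y_i)$$

$$\frac{\partial J}{\partial b} = \frac{\partial}{\partial b}\,\frac{1}{2n}\sum_{i=1}^{n}(y_i - a x_i - b)^2 = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)$$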

The Python implementation looks like this (here n = 5 is the number of points in our toy dataset):

import numpy as np

def model(a, b, x):
    return a * x + b

def cost_function(a, b, x, y):
    n = 5  # number of data points
    return 0.5 / n * (np.square(y - a * x - b)).sum()

def sgd(a, b, x, y):
    n = 5
    alpha = 1e-1  # learning rate
    y_hat = model(a, b, x)
    da = (1.0 / n) * ((y_hat - y) * x).sum()  # dJ/da
    db = (1.0 / n) * ((y_hat - y).sum())      # dJ/db
    a = a - alpha * da
    b = b - alpha * db
    return a, b

After 100 iterations of optimization, the result is as follows:

[Figure: fitting result after 100 iterations of gradient descent]

The loss is 0.00035532090622957674. Let's note this value; later, we will drive each of the other optimizers to this same loss and see how many iterations they need.
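
For reference, here is a minimal training-loop sketch showing how sgd can be iterated. The five-point dataset below is a made-up example (the original notebook's data is not shown in this article), so the exact loss values will differ:

# Hypothetical toy data: five points roughly on y = 2x + 1 (not the original dataset)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.1, 10.8])

a, b = 0.0, 0.0
for step in range(100):
    a, b = sgd(a, b, x, y)
print(a, b, cost_function(a, b, x, y))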

Momentum

Momentum, compared with gradient descent, also takes the previous gradients into account: when updating the parameters, besides the current gradient, we add the velocity accumulated from earlier steps (the momentum).

$$m = \beta m - \alpha \frac{\partial J}{\partial \theta}$$

$$\theta = \theta + m$$

For univariate linear regression:

$$m_a = \beta m_a - \alpha \frac{\partial J}{\partial a}$$

$$a = a + m_a$$

$$m_b = \beta m_b - \alpha \frac{\partial J}{\partial b}$$

$$b = b + m_b$$

where:

$$\frac{\partial J}{\partial a} = \frac{1}{n}\sum_{i=1}^{n}x_i(\hat{y}_i-y_i)$$

$$\frac{\partial J}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)$$

Python implementation

def momentum(a, b, ma, mb, x, y):
    n = 5
    alpha = 1e-1
    beta = 0.9
    y_hat = model(a, b, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    ma = beta * ma - alpha * da  # update the velocity for a
    mb = beta * mb - alpha * db  # update the velocity for b
    a = a + ma
    b = b + mb
    return a, b, ma, mb

To reach the same loss, Momentum needed 46 iterations, less than half of what gradient descent needed.
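
Note that Momentum carries extra state (the velocities). As a usage sketch, continuing with the toy data above and assuming the velocities start at zero:

a, b, ma, mb = 0.0, 0.0, 0.0, 0.0
for step in range(46):
    a, b, ma, mb = momentum(a, b, ma, mb, x, y)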

Nesterov Accelerated Gradient

Nesterov Accelerated Gradient (NAG), also called Nesterov momentum optimization, was proposed by Yurii Nesterov in 1983. Compared with Momentum, it "looks ahead" a little: when computing the gradient of the loss, the momentum term has already been added to the parameters, so the gradient is evaluated slightly ahead of the current position.

$$m = \beta m - \alpha \frac{\partial J(\theta+\beta m)}{\partial \theta}$$

$$\theta = \theta + m$$
[Figure: illustration of Nesterov momentum, from Hands-On Machine Learning (2nd Edition)]

For univariate linear regression:

$$m_a = \beta m_a - \alpha \frac{\partial J(a+\beta m_a)}{\partial a}$$

$$a = a + m_a$$

$$m_b = \beta m_b - \alpha \frac{\partial J(b+\beta m_b)}{\partial b}$$

$$b = b + m_b$$

where:

$$\frac{\partial J}{\partial a} = \frac{1}{n}\sum_{i=1}^{n}x_i(\hat{y}_i-y_i)$$

$$\frac{\partial J}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)$$

Here, we only need to set

$$\hat{y}=(a+m_a)x+(b+m_b)$$

and everything else is the same as Momentum.

Python implementation

def nesterov(a, b, ma, mb, x, y):
    n = 5
    alpha = 1e-1
    beta = 0.9
    # "look ahead": evaluate the model at the parameters shifted by the momentum
    y_hat = model(a + ma, b + mb, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    ma = beta * ma - alpha * da
    mb = beta * mb - alpha * db
    a = a + ma
    b = b + mb
    return a, b, ma, mb

This time, we reached the same loss in 21 iterations.

AdaGrad

In the following illustration, the blue path is gradient descent: it rushes in the direction of the steepest gradient instead of heading straight toward the global optimum. The yellow path is AdaGrad, which points more directly toward the global optimum. It achieves this by scaling down the updates along the dimensions with the largest gradients.

[Figure: AdaGrad versus gradient descent, from Hands-On Machine Learning (2nd Edition)]

The formulas:

$$\epsilon = 10^{-10}$$

$$s = s + \frac{\partial J}{\partial \theta} \odot \frac{\partial J}{\partial \theta}$$

$$\theta = \theta - \alpha \frac{\partial J}{\partial \theta} \oslash \sqrt{s+\epsilon}$$

Again, we just replace $\theta$ with $a$ and $b$.

Python code:

def ada_grad(a, b, sa, sb, x, y):
    epsilon = 1e-10
    n = 5
    alpha = 1e-1
    y_hat = model(a, b, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    sa = sa + da * da + epsilon
    sb = sb + db * db + epsilon
    a = a - alpha * da / np.sqrt(sa)
    b = b - alpha * db / np.sqrt(sb)
    return a, b, sa, sb

This time we needed 114 iterations, 14 more than gradient descent.

RMSProp

The risk with AdaGrad is that it slows down too quickly, so it takes longer to reach the global optimum and may even never get there. RMSProp reduces this deceleration by decaying the accumulated squared gradients with a factor $\beta$, usually set to 0.9.

$$\epsilon = 10^{-10}$$

$$s = \beta s + (1-\beta) \frac{\partial J}{\partial \theta} \odot \frac{\partial J}{\partial \theta}$$

$$\theta = \theta - \alpha \frac{\partial J}{\partial \theta} \oslash \sqrt{s+\epsilon}$$

Python code:

def rmsprop(a, b, sa, sb, x, y):
    epsilon = 1e-10
    beta = 0.9
    n = 5
    alpha = 1e-1
    y_hat = model(a, b, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    sa = beta * sa + (1 - beta) * da * da + epsilon
    sb = beta * sb + (1 - beta) * db * db + epsilon
    a = a - alpha * da / np.sqrt(sa)
    b = b - alpha * db / np.sqrt(sb)
    return a, b, sa, sb

This time, only 11 iterations were needed to reach the same loss as gradient descent.

Adam

Finally, Adam. Adam stands for adaptive moment estimation; it is a combination of Momentum and RMSProp.

$$m = \beta_1 m - (1-\beta_1)\frac{\partial J}{\partial \theta}$$

$$s = \beta_2 s + (1-\beta_2) \frac{\partial J}{\partial \theta} \odot \frac{\partial J}{\partial \theta}$$

$$\hat{m} = \frac{m}{1-\beta_1^t}$$

$$\hat{s} = \frac{s}{1-\beta_2^t}$$

$$\theta = \theta + \alpha \hat{m} \oslash \sqrt{\hat{s}+\epsilon}$$

Here, $\odot$ and $\oslash$ denote the element-wise product and element-wise division.
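
In NumPy, these element-wise operations are simply the * and / operators on arrays; a quick illustration (not part of the original code):

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(u * v)  # element-wise product:  [ 4. 10. 18.]
print(u / v)  # element-wise division: [0.25 0.4  0.5 ]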

Python implementation

def adam(a, b, ma, mb, sa, sb, t, x, y):
    epsilon = 1e-10
    beta1 = 0.9
    beta2 = 0.9
    n = 5
    alpha = 1e-1
    y_hat = model(a, b, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    ma = beta1 * ma - (1 - beta1) * da       # first moment (momentum)
    mb = beta1 * mb - (1 - beta1) * db
    sa = beta2 * sa + (1 - beta2) * da * da  # second moment (RMSProp-style)
    sb = beta2 * sb + (1 - beta2) * db * db
    ma_hat = ma / (1 - beta1**t)             # bias correction
    mb_hat = mb / (1 - beta1**t)
    sa_hat = sa / (1 - beta2**t)
    sb_hat = sb / (1 - beta2**t)
    a = a + alpha * ma_hat / np.sqrt(sa_hat + epsilon)
    b = b + alpha * mb_hat / np.sqrt(sb_hat + epsilon)
    return a, b, ma, mb, sa, sb

Adam needed 25 iterations.
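
One caveat when calling adam: the bias-correction terms divide by 1 - beta**t, so the step counter t must start at 1 (t = 0 would divide by zero). A usage sketch, with the state assumed to start at zero:

a, b = 0.0, 0.0
ma = mb = sa = sb = 0.0
for t in range(1, 26):  # t starts at 1 because of the bias correction
    a, b, ma, mb, sa, sb = adam(a, b, ma, mb, sa, sb, t, x, y)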

Nadam

Nadam is Nesterov + Adam. Compared with Adam, the only difference is that the gradient of the loss is computed with $\theta + m$ in place of $\theta$.

$$m = \beta_1 m - (1-\beta_1)\frac{\partial J(\theta+\beta_1 m)}{\partial \theta}$$

$$s = \beta_2 s + (1-\beta_2) \frac{\partial J(\theta+\beta_1 m)}{\partial \theta} \odot \frac{\partial J(\theta+\beta_1 m)}{\partial \theta}$$

$$\hat{m} = \frac{m}{1-\beta_1^t}$$

$$\hat{s} = \frac{s}{1-\beta_2^t}$$

$$\theta = \theta + \alpha \hat{m} \oslash \sqrt{\hat{s}+\epsilon}$$

def nadam(a, b, ma, mb, sa, sb, t, x, y):
    epsilon = 1e-10
    beta1 = 0.9
    beta2 = 0.9
    n = 5
    alpha = 1e-1
    # To turn adam into nadam, we only modify this line:
    # the model is evaluated at the shifted parameters a + ma and b + mb.
    y_hat = model(a + ma, b + mb, x)
    da = (1.0/n) * ((y_hat - y) * x).sum()
    db = (1.0/n) * ((y_hat - y).sum())
    ma = beta1 * ma - (1 - beta1) * da
    mb = beta1 * mb - (1 - beta1) * db
    sa = beta2 * sa + (1 - beta2) * da * da
    sb = beta2 * sb + (1 - beta2) * db * db
    ma_hat = ma / (1 - beta1**t)
    mb_hat = mb / (1 - beta1**t)
    sa_hat = sa / (1 - beta2**t)
    sb_hat = sb / (1 - beta2**t)
    a = a + alpha * ma_hat / np.sqrt(sa_hat + epsilon)
    b = b + alpha * mb_hat / np.sqrt(sb_hat + epsilon)
    return a, b, ma, mb, sa, sb

Summary

For our univariate linear regression, the number of iterations each optimizer needed to reach the same loss is listed below (a sketch of how such counts can be measured follows the table):

| Optimizer | Iterations | Remarks |
| --- | --- | --- |
| SGD | 100 | |
| Momentum | 46 | |
| Nesterov | 21 | |
| AdaGrad | 114 | |
| RMSProp | 11 | |
| Adam | 25 | common default in deep learning |
| Nadam | 14 | |
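
The counts depend on the dataset and hyperparameters. One way such counts can be measured is to run each optimizer until the loss drops to the target SGD reached earlier; the sketch below is my own (the zero state initialization and loop structure are assumptions, the target value is the one quoted above):

target = 0.00035532090622957674  # the loss SGD reached after 100 iterations

def iterations_to_target(step_fn, state, needs_t=False):
    # step_fn performs one update and returns (a, b, *new_state)
    a, b = 0.0, 0.0
    for t in range(1, 10000):
        if needs_t:
            a, b, *state = step_fn(a, b, *state, t, x, y)
        else:
            a, b, *state = step_fn(a, b, *state, x, y)
        if cost_function(a, b, x, y) <= target:
            return t
    return None

print(iterations_to_target(sgd, []))
print(iterations_to_target(momentum, [0.0, 0.0]))
print(iterations_to_target(adam, [0.0, 0.0, 0.0, 0.0], needs_t=True))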

Note that these numbers are specific to my dataset; this article is not a guide to choosing optimizers, only a brief introduction. The optimizers whose names carry "ada" (adaptive), i.e. AdaGrad (and its variant RMSProp), Adam, and Nadam, effectively decay the learning rate. But if the learning rate decays too fast, they slow down and may take a long time to reach the optimum, or never reach it at all. When I use TensorFlow, I usually try Adam first; if it does not work well, I use learning rate scheduling to find a suitable learning rate and then switch to SGD. Gradient clipping can also be used. Please refer to the experts' works for details.
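
As an illustrative sketch of those last two techniques (this snippet is mine, not from the original article, and the decay numbers are arbitrary), learning rate scheduling and gradient clipping in TensorFlow/Keras might look like:

import tensorflow as tf

# An exponentially decaying learning rate schedule (arbitrary example values)
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.9)

# SGD with the schedule and gradient clipping by norm
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, clipnorm=1.0)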

Reference material

When writing this article, I was reading Hands-On Machine Learning (2nd Edition), so I used that book as the reference; the screenshots in this article are all from it.

I have never read the "watermelon book" people often mention, and I have no plan to. Machine learning should be learned from English materials rather than Chinese ones: English is hard at the beginning and easy later, while Chinese is easy at the beginning and hard later.

You can write the optimizers too

Implementing these optimizers is mostly a matter of applying the formulas from the book. I spent about three hours in total implementing all of them; a bug in one function wasted an hour, and looking up the LaTeX/Markdown for element-wise product and division wasted another half hour.

The LaTeX/Markdown for element-wise product and division:

\odot
\oslash

I think the optimizers are the simplest of all the algorithms I have hand-written so far: there is basically no formula derivation needed, you just apply the formulas directly.

You can try writing them yourself; if you get stuck, you can refer to my code.

I wish you all the best in machine learning.

May the machine learn with you.

GitHub code address

https://github.com/EricWebsmith/machine_learning_from_scrach

Users in mainland China who cannot render the GitHub Jupyter notebook can use nbviewer:
https://nbviewer.jupyter.org/github/EricWebsmith/machine_learning_from_scrach/blob/master/optimizers.ipynb

Python Machine Learning Handwritten Algorithm Series

Python Machine Learning Handwritten Algorithm Series: Linear Regression

Python Machine Learning Handwritten Algorithm Series: Logistic Regression

Python Machine Learning Handwritten Algorithm Series: Optimizers

Python Machine Learning Handwritten Algorithm Series: Decision Trees

Python Machine Learning Handwritten Algorithm Series: k-means Clustering

Python Machine Learning Handwritten Algorithm Series: GBDT Gradient Boosting Classification

Python Machine Learning Handwritten Algorithm Series: GBDT Gradient Boosting Regression

Python Machine Learning Handwritten Algorithm Series: Bayesian Optimization

Python Machine Learning Handwritten Algorithm Series: PageRank

Copyright notice
This article was written by [There are several evidences]. Please include a link to the original when reposting. Thank you.
