How to Tune and Optimize Hyperparameters with Python

InfoQ 2021-10-29 11:31:17

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"color","attrs":{"color":"#000000","name":"user"}},{"type":"strong"}],"text":"本文最初发布于rubikscode.com网站,经原作者授权由InfoQ中文站翻译并分享。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"围绕模型优化这一主题发展出来的许多子分支之间的差异之大往往令人难以置信。其中的一个子分支叫做超参数优化,或超参数调优。"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在本文中你会学到:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"机器学习中的超参数"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"前置条件和数据"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"网格搜索超参数调优"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"随机搜索超参数调优"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"贝叶斯超参数优化"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":6,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"减半网格搜索和减半随机搜索"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":7,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"替代选项"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"机器学习中的超参数"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"超参数是所有机器学习和深度学习算法都包含的一部分。与由算法本身学习的标准机器学习参数(如线性回归中的w和b,或神经网络中的连接权重)不同,"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text
":"超参数由工程师在训练流程之前设置"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"它们是完全由工程师定义的一项外部因素,用来控制学习算法的行为。想看些例子?学习率是最著名的超参数之一,SVM中的C也是超参数,决策树的最大深度同样是一个超参数,等等。这些超参数都可以由工程师手动设置。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"但是,如果我们想运行多个测试,超参数用起来可能会很麻烦。于是我们就需要对超参数做优化了。这些技术的主要目标是找到给定机器学习算法的最佳超参数,以在验证集上获得最佳评估性能。在本教程中,我们探索了几种可以为你提供最佳超参数的技术。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"前置条件和数据"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"前置条件和库"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"请安装以下Python库,为本文接下来的内容做准备:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"NumPy——如果你需要安装帮助,请参考这份"},{"type":"link","attrs":{"href":"https:\/\/numpy.org\/install\/","title":null,"type":null},"content":[{"type":"text","text":"指南"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"SciKit 
## Prerequisites and Data

### Prerequisites and libraries

Please install the following Python libraries to prepare for the rest of this article:

- NumPy: if you need help installing it, refer to this [guide](https://numpy.org/install/).
- SciKit Learn: if you need help installing it, refer to this [guide](https://scikit-learn.org/stable/install.html).
- SciPy: if you need help installing it, refer to this [guide](https://www.scipy.org/install.html).
- Sci-Kit Optimization: if you need help installing it, refer to this [guide](https://scikit-optimize.github.io/stable/install.html).

Once the installation is complete, make sure you have imported all the modules used in this tutorial:

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingGridSearchCV, HalvingRandomSearchCV

from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor

from scipy import stats
from skopt import BayesSearchCV
from skopt.space import Real, Categorical
```

Beyond that, it helps to be at least somewhat familiar with the basics of linear algebra, calculus, and probability theory.

### Preparing the data

The data we use in this article comes from the PalmerPenguins dataset. It was released recently as an alternative to the famous Iris dataset, and was created by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER. You can get the dataset [here](https://github.com/allisonhorst/palmerpenguins) or via Kaggle.

The dataset essentially consists of two datasets, each containing data for 344 penguins. Just like Iris, it covers 3 penguin species from 3 islands of the Palmer Archipelago. The datasets also contain the **culmen** dimensions for each species. The culmen is the upper ridge of a bird's beak. In the simplified penguin data, culmen length and depth are renamed to the variables culmen_length_mm and culmen_depth_mm.

![PalmerPenguins culmen illustration](https://static001.infoq.cn/resource/image/fd/91/fd136bce4f63dd478585782620a2d791.jpg)

Because this dataset is already labeled, we will be able to verify our experimental results. In practice that is rarely the case, and validating the results of clustering algorithms is usually a difficult and complicated process.

Let's load and prepare the PalmerPenguins dataset. First we load the data and drop the features we won't use in this article:

```python
data = pd.read_csv('./data/penguins_size.csv')

data = data.dropna()
data = data.drop(['sex', 'island', 'flipper_length_mm', 'body_mass_g'], axis=1)
```

Then we separate the input data and scale it:

```python
X = data.drop(['species'], axis=1)

ss = StandardScaler()
X = ss.fit_transform(X)

y = data['species']
spicies = {'Adelie': 0, 'Chinstrap': 1, 'Gentoo': 2}
y = [spicies[item] for item in y]
y = np.array(y)
```

Finally, we split the data into training and test datasets:

```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=33)
```

When we plot the data, it looks like this:

![Scatter plot of the scaled PalmerPenguins data](https://static001.infoq.cn/resource/image/90/41/9085ab63af5f7a5a1e47d867ab4b8c41.jpg)
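The plotting code itself is not shown in the article; a minimal sketch along the following lines (assuming the two scaled culmen features and the integer labels defined above) produces a similar scatter plot:

```python
# Scatter the two scaled culmen features, colored by species label.
fig, ax = plt.subplots(figsize=(8, 6))
scatter = ax.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis')
ax.set_xlabel('culmen_length_mm (scaled)')
ax.set_ylabel('culmen_depth_mm (scaled)')
ax.legend(*scatter.legend_elements(), title='species')
plt.show()
```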
## Grid Search Hyperparameter Tuning

Tuning hyperparameters by hand is slow and tedious, so we start with the first and simplest hyperparameter optimization technique: grid search. This technique speeds up the tuning work and is one of the most commonly used hyperparameter optimization techniques. In essence, it automates the trial-and-error process. We provide a list of values for every hyperparameter, and the algorithm builds a model for each possible combination, evaluates it, and selects the values that give the best result. It is a general technique that can be applied to any model.

In our example we use the SVM algorithm for classification. We consider three hyperparameters: C, gamma, and kernel. For a more detailed look at them, check out this [article](https://rubikscode.net/2020/08/10/back-to-machine-learning-basics-support-vector-machines/). For C we check the values 0.1, 1, 100, and 1000; for gamma we use 0.0001, 0.001, 0.005, 0.1, 1, 3, and 5; and for kernel we use 'linear' and 'rbf'.

### Grid search implementation

Here is what that looks like in code:

```python
hyperparameters = {
    'C': [0.1, 1, 100, 1000],
    'gamma': [0.0001, 0.001, 0.005, 0.1, 1, 3, 5],
    'kernel': ('linear', 'rbf')
}
```

We use the Sci-Kit Learn library and its SVC class, which contains the SVM classification implementation. On top of that, we use the GridSearchCV class, which performs the grid search optimization. Combined, it looks like this:

```python
grid = GridSearchCV(
    estimator=SVC(),
    param_grid=hyperparameters,
    cv=5,
    scoring='f1_micro',
    n_jobs=-1)
```

The class takes several parameters through its constructor:

- estimator: an instance of the machine learning algorithm itself. We pass a new instance of the SVC class here.
- param_grid: a dictionary containing the hyperparameters.
- cv: determines the cross-validation splitting strategy.
- scoring: the validation metric used to evaluate the predictions. We use the F1 score.
- n_jobs: the number of jobs to run in parallel. A value of -1 means all processors are used.

All that's left to do is run the training process using the fit method:

```python
grid.fit(X_train, y_train)
```

When training is done, we can inspect the best hyperparameters and the score they achieved:

```python
print(f'Best parameters: {grid.best_params_}')
print(f'Best score: {grid.best_score_}')
```

```
Best parameters: {'C': 1000, 'gamma': 0.1, 'kernel': 'rbf'}
Best score: 0.9626834381551361
```

We can also print out all the results:

```python
print(f'All results: {grid.cv_results_}')
```

```
All results: {'mean_fit_time': array([0.00780015, 0.00280147, 0.00120015, 0.00219998, 0.0240006 ,
       0.00739942, 0.00059962, 0.00600033, 0.0009994 , 0.00279789,
       ...
```
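The raw cv_results_ dictionary is hard to read. One convenient way to inspect it (a small sketch, not part of the original article) is to load it into a pandas DataFrame and sort by rank:

```python
# Put the search results into a DataFrame and look at the best combinations first.
results = pd.DataFrame(grid.cv_results_)
columns = ['param_C', 'param_gamma', 'param_kernel', 'mean_test_score', 'std_test_score']
print(results.sort_values('rank_test_score')[columns].head(10))
```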
OK, now let's build this model and check how it performs on the test dataset:

```python
model = SVC(C=500, gamma=0.1, kernel='rbf')
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f1_score(predictions, y_test, average='micro'))
```

```
0.9701492537313433
```

The result is quite good: with the suggested hyperparameters our model reaches an F1 score of roughly 97% on the test set. Here is what the model looks like when plotted:

![Decision regions of the grid-search model](https://static001.infoq.cn/resource/image/2a/20/2a4a17fba74d3731a28a9e2270d07420.jpg)

## Random Search Hyperparameter Tuning

Grid search is very simple, but it is also computationally expensive, especially in deep learning, where training can take a lot of time. Moreover, some hyperparameters matter more than others. That is where the idea of random search, covered next, comes from. Research has shown that random search is computationally cheaper than grid search for hyperparameter optimization, and it also lets us pin down the ideal values of the important hyperparameters more precisely.

Just like grid search, random search creates a grid of hyperparameter values, but it selects random combinations to train the model. This approach can miss the optimal combination; however, compared with grid search, it surprisingly has a higher chance of finding the best result, and it needs only a fraction of the time.

### Random search implementation

Let's see how this looks in code. We again use the SVC class from the Sci-Kit Learn library, but this time we use the RandomizedSearchCV class for random search optimization.

```python
hyperparameters = {
    "C": stats.uniform(500, 1500),
    "gamma": stats.uniform(0, 1),
    'kernel': ('linear', 'rbf')
}
random = RandomizedSearchCV(
    estimator=SVC(),
    param_distributions=hyperparameters,
    n_iter=100,
    cv=3,
    random_state=42,
    n_jobs=-1)
random.fit(X_train, y_train)
```

Note that we used uniform distributions for C and gamma.
Again, we can print out the results:

```python
print(f'Best parameters: {random.best_params_}')
print(f'Best score: {random.best_score_}')
```

```
Best parameters: {'C': 510.5994578295761, 'gamma': 0.023062425041415757, 'kernel': 'linear'}
Best score: 0.9700374531835205
```

We can see that the results are close to those of grid search, but not the same. The grid-search model used a value of 500 for the hyperparameter C, while random search arrived at 510.59. From that alone you can see the benefit of random search, since we would be unlikely to put that exact value in a grid search list. Similarly for gamma, random search gave us roughly 0.023 versus 0.1 from grid search. What is really surprising is that random search picked a linear kernel rather than RBF, and it achieved a higher validation F1 score. To print all the results, we use the cv_results_ attribute:

```python
print(f'All results: {random.cv_results_}')
```

```
All results: {'mean_fit_time': array([0.00200065, 0.00233404, 0.00100454, 0.00233777, 0.00100009,
       0.00033339, 0.00099715, 0.00132942, 0.00099921, 0.00066725,
       ...
```

Let's repeat the steps we followed for grid search: create a model with the suggested hyperparameters, check its score on the test dataset, and plot the model.

```python
model = SVC(C=510.5994578295761, gamma=0.023062425041415757, kernel='linear')
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(f1_score(predictions, y_test, average='micro'))
```

```
0.9701492537313433
```

Wow, the F1 score on the test dataset is exactly the same as the one we got with grid search. Here is the model:

![Decision regions of the random-search model](https://static001.infoq.cn/resource/image/58/08/58b9ce80e945215903c1f5e83a6fb708.jpg)

## Bayesian Hyperparameter Optimization

One great thing about the previous two algorithms is that all the experiments with different hyperparameter values can be run in **parallel**, which can save a lot of time. However, that is also their biggest flaw: because every experiment runs in isolation, we cannot use information from past experiments in the current one. There is an entire field dedicated to this kind of sequential optimization problem, called sequential model-based optimization (SMBO). The algorithms explored in that field use previous experiments and observations of the loss function to determine the next best point to try. One of those algorithms is Bayesian optimization.

Like the other algorithms from the SMBO family, it uses previously evaluated points (here, hyperparameter values, though the idea generalizes) to compute the posterior expectation of the loss function. The algorithm relies on two important mathematical concepts: **Gaussian processes** and the **acquisition function**. Just as the Gaussian distribution is defined over random variables, the Gaussian process is its generalization to functions; and just as a Gaussian distribution is described by a mean and a covariance, a Gaussian process is described by a mean function and a covariance function.

The acquisition function is the function we use to evaluate current loss values; you can think of it as a loss function for the loss function. It is a function of the posterior distribution of the loss function and describes the utility of every set of hyperparameter values. The most popular acquisition function is Expected Improvement (EI):

![Expected Improvement formula](https://static001.infoq.cn/resource/image/d6/1d/d6424ce7e6eaff7dfae82c785728de1d.jpg)
where f is the loss function and x' is the current optimal set of hyperparameters. Putting it all together, Bayesian optimization proceeds in 3 steps:

- Using the previously evaluated points of the loss function, compute the posterior expectation with a Gaussian process.
- Pick a new set of points that maximizes EI.
- Compute the loss function at the newly chosen points.

### Bayesian optimization implementation

The easiest way to bring this into code is to use the Sci-Kit optimization library, usually referred to as skopt. Following the same process we used in the previous examples, we can do the following:

```python
hyperparameters = {
    "C": Real(1e-6, 1e+6, prior='log-uniform'),
    "gamma": Real(1e-6, 1e+1, prior='log-uniform'),
    "kernel": Categorical(['linear', 'rbf']),
}
bayesian = BayesSearchCV(
    estimator=SVC(),
    search_spaces=hyperparameters,
    n_iter=100,
    cv=5,
    random_state=42,
    n_jobs=-1)
bayesian.fit(X_train, y_train)
```

Again, we defined a dictionary for the hyperparameter search space. Note that we used the Real and Categorical classes from the Sci-Kit optimization library. Then we use the BayesSearchCV class in exactly the same way we used GridSearchCV or RandomizedSearchCV. Once training is done, we can print out the best results:

```python
print(f'Best parameters: {bayesian.best_params_}')
print(f'Best score: {bayesian.best_score_}')
```

```
Best parameters:
OrderedDict([('C', 3932.2516133086), ('gamma', 0.0011646737978730447), ('kernel', 'rbf')])
Best score: 0.9625468164794008
```

Interesting, isn't it? This optimization gave us quite different results, and the validation score is a bit lower than the one we got with random search. We can even print out all the results:

```python
print(f'All results: {bayesian.cv_results_}')
```

```
All results: defaultdict(, {'split0_test_score': [0.9629629629629629,
 0.9444444444444444, 0.9444444444444444, 0.9444444444444444, 0.9444444444444444,
 ...
```

How does a model with these hyperparameters perform on the test dataset? Let's find out:

```python
model = SVC(C=3932.2516133086, gamma=0.0011646737978730447, kernel='rbf')
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(f1_score(predictions, y_test, average='micro'))
```

```
0.9850746268656716
```

Very interesting: even though the result on the validation dataset was a bit worse, we got a better score on the test dataset. Here is the model:

![Decision regions of the Bayesian-optimization model](https://static001.infoq.cn/resource/image/01/d0/01da0073yydcc8a02f0d7bb8b280a8d0.jpg)
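BayesSearchCV hides the Gaussian process and the acquisition function behind a familiar cross-validation interface. If you want to see the machinery more directly, skopt also provides gp_minimize. The following is a small sketch of how it might be used on the same data (my own illustration with an explicit EI acquisition function, not code from the original article):

```python
from skopt import gp_minimize
from skopt.space import Real
from sklearn.model_selection import cross_val_score

# Search space: C and gamma on log-uniform scales; the kernel is fixed to rbf for simplicity.
space = [Real(1e-6, 1e+6, prior='log-uniform', name='C'),
         Real(1e-6, 1e+1, prior='log-uniform', name='gamma')]

def objective(params):
    C, gamma = params
    model = SVC(C=C, gamma=gamma, kernel='rbf')
    # gp_minimize minimizes, so return the negative mean cross-validated F1 score.
    return -cross_val_score(model, X_train, y_train, cv=5, scoring='f1_micro').mean()

result = gp_minimize(objective, space, acq_func='EI', n_calls=30, random_state=42)
print(f'Best C: {result.x[0]}, best gamma: {result.x[1]}')
print(f'Best cross-validated F1: {-result.fun}')
```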
Just for fun, we can put all of these models side by side:

![Side-by-side comparison of the tuned models](https://static001.infoq.cn/resource/image/b6/f0/b62e8590fce37eab5f71dd5f9629aff0.jpg)

## Halving Grid Search and Halving Random Search

A few months ago, the Sci-Kit Learn library introduced two new classes, HalvingGridSearchCV and HalvingRandomSearchCV, which are claimed to "find a good parameter combination much faster". These classes search the specified parameter values using successive halving: the technique starts by evaluating all candidates on a small amount of resources, then iteratively selects the best candidates and gives them more and more resources.

From the halving grid search point of view, this means that in the first iteration all candidates are trained on a small amount of the training data. The next iteration includes only the candidates that performed best in the previous one, and those models get more resources, i.e. more training data, before being evaluated again. The process continues, with halving grid search keeping only the best candidates from each iteration, until a single candidate is left.

The whole process is controlled by two parameters, min_resources and factor. The first one, min_resources, is the amount of resources (by default, the number of training samples) the process starts with; on every iteration that amount grows by the multiplier defined by factor. The process is analogous for HalvingRandomSearchCV.

### Halving grid search and halving random search implementation

The code here is similar to the previous examples; we just use different classes. Let's start with HalvingGridSearchCV:

```python
hyperparameters = {
    'C': [0.1, 1, 100, 500, 1000],
    'gamma': [0.0001, 0.001, 0.01, 0.005, 0.1, 1, 3, 5],
    'kernel': ('linear', 'rbf')
}

grid = HalvingGridSearchCV(
    estimator=SVC(),
    param_grid=hyperparameters,
    cv=5,
    scoring='f1_micro',
    n_jobs=-1)
grid.fit(X_train, y_train)
```
The interesting thing is that this code ran in only 0.7 seconds; by comparison, the same code with the GridSearchCV class took 3.6 seconds. That is much faster, but the results are slightly different:

```python
print(f'Best parameters: {grid.best_params_}')
print(f'Best score: {grid.best_score_}')
```

```
Best parameters: {'C': 500, 'gamma': 0.005, 'kernel': 'rbf'}
Best score: 0.9529411764705882
```

We got similar, though not identical, results. If we create a model with these values, we get the following score and plot:

```python
model = SVC(C=500, gamma=0.005, kernel='rbf')
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(f1_score(predictions, y_test, average='micro'))
```

```
0.9850746268656716
```

![Decision regions of the halving-grid-search model](https://static001.infoq.cn/resource/image/d6/7a/d63a4d52435405e8d476a73ba309137a.jpg)

Then we repeat the process with halving random search. Interestingly, this approach gave us the strangest results; the model built this way appears to overfit badly:

```python
hyperparameters = {
    "C": stats.uniform(500, 1500),
    "gamma": stats.uniform(0, 1),
    'kernel': ('linear', 'rbf')
}
random = HalvingRandomSearchCV(
    estimator=SVC(),
    param_distributions=hyperparameters,
    cv=3,
    random_state=42,
    n_jobs=-1)
random.fit(X_train, y_train)
print(f'Best parameters: {random.best_params_}')
print(f'Best score: {random.best_score_}')
```

```
Best parameters: {'C': 530.8767414437036, 'gamma': 0.9699098521619943, 'kernel': 'rbf'}
Best score: 0.9506172839506174
```

![Decision regions of the halving-random-search model](https://static001.infoq.cn/resource/image/2c/29/2c4a3b1f9613afb1252c5af6c1ab1729.jpg)

## Alternatives

In general, the methods described above are the most popular and most commonly used. However, if none of them suits you, there are several alternatives you can consider. One of them is gradient-based optimization of hyperparameter values: this technique computes the gradient with respect to the hyperparameters and then optimizes them with the gradient descent algorithm. The problem with this approach is that gradient descent needs a convex, smooth function to work well, which is often not the case in hyperparameter space. Another approach is to use evolutionary algorithms for the optimization.
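To give a flavor of the evolutionary idea, here is a deliberately tiny mutate-and-select loop over C and gamma (my own toy illustration on the same training data, not an algorithm from the original article or a production-grade evolutionary search):

```python
import numpy as np
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def fitness(c, gamma):
    # Mean cross-validated F1 score of an SVC with the given hyperparameters.
    model = SVC(C=c, gamma=gamma, kernel='rbf')
    return cross_val_score(model, X_train, y_train, cv=3, scoring='f1_micro').mean()

# Start from a small random population of (C, gamma) candidates on log scales.
population = [(10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-3, 0)) for _ in range(6)]

for generation in range(10):
    scored = sorted(population, key=lambda p: fitness(*p), reverse=True)
    parents = scored[:3]  # keep the best candidates
    children = [(c * 10 ** rng.normal(0, 0.2),   # mutate multiplicatively
                 g * 10 ** rng.normal(0, 0.2)) for c, g in parents]
    population = parents + children

best_c, best_gamma = max(population, key=lambda p: fitness(*p))
print(best_c, best_gamma, fitness(best_c, best_gamma))
```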
## Summary

In this article we covered several well-known hyperparameter optimization and tuning algorithms. We learned how to use grid search, random search, and Bayesian optimization to obtain the best values for our hyperparameters, and how to do all of this in code with Sci-Kit Learn classes and methods.

Thanks for reading!

**Original article:** https://rubikscode.net/2021/08/17/ml-optimization-pt-3-hyperparameter-optimization-with-python/
Copyright notice
This article was created by [InfoQ]. When republishing, please include a link to the original article. Thank you.
https://www.infoq.cn/article/dLIUVWWWQMDL0sHd6rig?utm_source=rss&utm_medium=article
