Author: Khuyen Tran | Compiled by: VK | Source: Towards Data Science
Sklearn is a great library, with a variety of machine learning models you can use to train on your data. But if your dataset is large, training can take a long time, especially when you are searching for the best model across different hyperparameters.

Is there a way to make machine learning models up to 150 times faster than with Sklearn? The answer is to use cuML.

The chart below compares the time needed to train the same model with Sklearn's RandomForestClassifier and cuML's RandomForestClassifier.

cuML is a suite of fast, GPU-accelerated machine learning algorithms designed for data science and analytics tasks. Its API is similar to Sklearn's, which means the code you use to train a Sklearn model will also train a cuML model.
from cuml.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=10)
clf.fit(X, y)
In this article, I'll compare the performance of the two libraries across several models. I'll also show how a better graphics card can make training 10 times faster.

To install cuML, follow the instructions on the RAPIDS page. Make sure to check the prerequisites before installing the library. You can install all the RAPIDS packages, or just the ones you need. If disk space on your machine is limited, I suggest installing only cuDF and cuML.

Although in many cases you don't need cuDF to use cuML, cuDF is a good complement to cuML because it is a GPU DataFrame.

Make sure you choose the options that match your machine.
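For reference, a conda-based install of just cuDF and cuML typically looks something like the sketch below. This is illustrative only: the exact channels, package versions, and CUDA pin depend on your driver and environment, so copy the command generated by the RAPIDS release selector page rather than this one.

```shell
# Illustrative only -- use the command from the RAPIDS release selector,
# which pins versions matching your CUDA driver.
conda install -c rapidsai -c conda-forge -c nvidia cudf cuml
```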
Since cuML tends to beat Sklearn when there is a lot of data, we will generate a large dataset with sklearn.datasets.
from sklearn import datasets
X, y = datasets.make_classification(n_samples=40000)
Convert the data to np.float32, because some cuML models require the input to be np.float32.

import numpy as np

X = X.astype(np.float32)
y = y.astype(np.float32)
Next, we create a function for training models. This function will make it easier to compare different models.
def train_data(model, X=X, y=y):
    clf = model
    clf.fit(X, y)
We use IPython's magic command %timeit, which runs each function multiple times (7 runs by default) and reports the average over all runs.
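If you are working in a plain Python script rather than a notebook, %timeit is not available; a rough equivalent can be sketched with the standard timeit module (a sketch only -- the 7-run average mimics %timeit's default, and Ridge on a small dataset stands in for the heavier models timed below):

```python
import timeit

from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000)

def train_data(model, X=X, y=y):
    clf = model
    clf.fit(X, y)

# Run training 7 times (one call per run) and average, mimicking %timeit -o
runs = timeit.repeat(lambda: train_data(Ridge()), number=1, repeat=7)
average = sum(runs) / len(runs)
print(f"Average fit time: {average:.4f} s")
```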
from sklearn.svm import SVC
from cuml.svm import SVC as SVC_gpu
clf_svc = SVC(kernel='poly', degree=2, gamma='auto', C=1)
sklearn_time_svc = %timeit -o train_data(clf_svc)
clf_svc = SVC_gpu(kernel='poly', degree=2, gamma='auto', C=1)
cuml_time_svc = %timeit -o train_data(clf_svc)
print(f"""Average time of sklearn's {clf_svc.__class__.__name__}""", sklearn_time_svc.average, 's')
print(f"""Average time of cuml's {clf_svc.__class__.__name__}""", cuml_time_svc.average, 's')
print('Ratio between sklearn and cuml is', sklearn_time_svc.average/cuml_time_svc.average)
Average time of sklearn's SVC 48.56009825014287 s
Average time of cuml's SVC 19.611496431714304 s
Ratio between sklearn and cuml is 2.476103668030909
cuML's SVC is 2.5 times faster than Sklearn's SVC!

Let's visualize this with a chart. We create a function that plots the speed of the models.
!pip install cutecharts
import cutecharts.charts as ctc
def plot(sklearn_time, cuml_time):
    chart = ctc.Bar('Sklearn vs cuml')
    chart.set_options(
        labels=['sklearn', 'cuml'],
        x_label='library',
        y_label='time (s)',
    )
    chart.add_series('time', data=[round(sklearn_time.average, 2), round(cuml_time.average, 2)])
    return chart
plot(sklearn_time_svc, cuml_time_svc).render_notebook()
cuML models are faster than Sklearn models on big data because they are trained on the GPU. So what happens if we roughly triple the GPU memory?

In the previous comparison, I used an Alienware M15 laptop with a GeForce RTX 2060 and 6.3 GB of graphics card memory.

Now I'll use a Dell Precision 7740 with a Quadro RTX 5000 and 17 GB of graphics card memory to test how training speed changes as GPU memory increases.
Average time of sklearn's SVC 35.791008955999914 s
Average time of cuml's SVC 1.9953700327142931 s
Ratio between sklearn and cuml is 17.93702840535976
When trained on the machine with 17 GB of graphics card memory, cuML's support vector machine is 18 times faster than Sklearn's! It is also 10 times faster than training on the laptop with 6.3 GB of graphics card memory.

This is why it pays to use a GPU acceleration library like cuML.
from sklearn.ensemble import RandomForestClassifier
from cuml.ensemble import RandomForestClassifier as RandomForestClassifier_gpu

clf_rf = RandomForestClassifier(max_features=1.0,
                                n_estimators=40)
sklearn_time_rf = %timeit -o train_data(clf_rf)

clf_rf = RandomForestClassifier_gpu(max_features=1.0,
                                    n_estimators=40)
cuml_time_rf = %timeit -o train_data(clf_rf)
print(f"""Average time of sklearn's {clf_rf.__class__.__name__}""", sklearn_time_rf.average, 's')
print(f"""Average time of cuml's {clf_rf.__class__.__name__}""", cuml_time_rf.average, 's')
print('Ratio between sklearn and cuml is', sklearn_time_rf.average/cuml_time_rf.average)
Average time of sklearn's RandomForestClassifier 29.824075075857113 s
Average time of cuml's RandomForestClassifier 0.49404465585715635 s
Ratio between sklearn and cuml is 60.3671646323408
cuML's RandomForestClassifier is 60 times faster than Sklearn's RandomForestClassifier! If training Sklearn's RandomForestClassifier takes 30 seconds, training cuML's RandomForestClassifier takes less than half a second!
Average time of sklearn's RandomForestClassifier 24.006061030143037 s
Average time of cuml's RandomForestClassifier 0.15141178591425808 s
Ratio between sklearn and cuml is 158.54816641379068
Trained on my Dell Precision 7740 laptop, cuML's RandomForestClassifier is 158 times faster than Sklearn's RandomForestClassifier!
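The KNeighborsClassifier comparison follows the same pattern as the earlier ones (a sketch, reusing the train_data helper; the GPU version lives in cuml.neighbors). This requires a CUDA-capable GPU and a notebook for the %timeit magic:

```python
from sklearn.neighbors import KNeighborsClassifier
from cuml.neighbors import KNeighborsClassifier as KNeighborsClassifier_gpu

clf_knn = KNeighborsClassifier(n_neighbors=10)
sklearn_time_knn = %timeit -o train_data(clf_knn)

clf_knn = KNeighborsClassifier_gpu(n_neighbors=10)
cuml_time_knn = %timeit -o train_data(clf_knn)
```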
Average time of sklearn's KNeighborsClassifier 0.07836367340000508 s
Average time of cuml's KNeighborsClassifier 0.004251259535714585 s
Ratio between sklearn and cuml is 18.43304854518441
Note: 20m on the y-axis means 20 ms.

cuML's KNeighborsClassifier is 18 times faster than Sklearn's KNeighborsClassifier.
Average time of sklearn's KNeighborsClassifier 0.07511190322854547 s
Average time of cuml's KNeighborsClassifier 0.0015137992111426033 s
Ratio between sklearn and cuml is 49.618141346401956
Trained on my Dell Precision 7740 laptop, cuML's KNeighborsClassifier is 50 times faster than Sklearn's KNeighborsClassifier.

You can find the code for the other comparisons here.

The two tables below summarize the speeds of the different models in the two libraries:
Alienware M15 (GeForce RTX 2060, 6.3 GB):

Model | sklearn (s) | cuml (s) | sklearn/cuml
---|---|---|---
SVC | 50.24 | 23.69 | 2.121
RandomForestClassifier | 29.82 | 0.443 | 67.32
KNeighborsClassifier | 0.078 | 0.004 | 19.5
LinearRegression | 0.005 | 0.006 | 0.8333
Ridge | 0.021 | 0.006 | 3.5
KNeighborsRegressor | 0.076 | 0.002 | 38
Dell Precision 7740 (Quadro RTX 5000, 17 GB):

Model | sklearn (s) | cuml (s) | sklearn/cuml
---|---|---|---
SVC | 35.79 | 1.995 | 17.94
RandomForestClassifier | 24.01 | 0.151 | 159
KNeighborsClassifier | 0.075 | 0.002 | 37.5
LinearRegression | 0.006 | 0.002 | 3
Ridge | 0.005 | 0.002 | 2.5
KNeighborsRegressor | 0.069 | 0.001 | 69
Quite impressive, isn't it?

You have just seen how fast training different models in cuML is compared to Sklearn. If training your models with Sklearn takes a long time, I strongly recommend trying cuML: because its API is nearly identical to Sklearn's, there is almost no code to change.

Of course, since a library like cuML runs the code on the GPU, the better the graphics card you have, the faster your training will be.

For other machine learning models, see the cuML documentation: https://docs.rapids.ai/api/cuml/stable/

Link to the original article: https://towardsdatascience.com/train-your-machine-learning-model-150x-faster-with-cuml-69d0768a047a