- # Usually, hyperparameter tuning is combined with cross-validation. Sometimes, however, we want to run cross-validation on its own to see whether a candidate model generalizes well enough on a given dataset. To illustrate this, let's use a linear regressor as an example.
- #
- # Code Listing 9.01. Import all the necessary packages for the cross-validation example. We use the first 150 data points of the diabetes dataset and make a linear regressor with default parameters.
- from sklearn import datasets, linear_model
- from sklearn.model_selection import cross_validate
- diabetes = datasets.load_diabetes()
- X = diabetes.data[:150]
- y = diabetes.target[:150]
- lr = linear_model.LinearRegression()
- # Next, we will use the cross_validate() method to apply cross-validation to the linear regressor.
- #
- # Code Listing 9.02. Use the cross_validate() method to apply 5-fold cross-validation on the linear regressor, and then print the test scores.
- scores = cross_validate(lr, X, y, cv=5, scoring=('r2', 'neg_mean_squared_error'),
-                         return_train_score=True)
- print("negative mean squared errors: ", scores["test_neg_mean_squared_error"])
- print("r2 scores: ", scores["test_r2"])
- negative mean squared errors: [-2547.29219945 -4523.25983124 -2301.49369105 -4378.07848216 -2409.19372015]
- r2 scores: [0.36324841 0.28239194 0.4211776 0.30071196 0.61240533]
- # We use 5-fold cross-validation with r2 and negative mean squared error as the metrics. As we can see from the output, the linear regressor performs differently on each fold. This is exactly why cross-validation is useful: it lets us observe how the performance fluctuates as the training data changes.
- #
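- # To summarize this fluctuation in a single number, we can aggregate the per-fold scores. This small snippet is not part of the original listing; it only reuses the scores dictionary returned above.
- print("mean r2: ", scores["test_r2"].mean())
- print("std of r2: ", scores["test_r2"].std())
- #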
- # Now we can incorporate hyperparameter tuning and see whether it improves performance over a model trained with default parameters. Random forest models are known to perform very well with default settings. Can we still make improvements with hyperparameter tuning and cross-validation?
- #
- # First, we fetch the California housing dataset for this exercise. As usual, we randomly select 80% of the data for training.
- #
- # Code Listing 9.03. Fetch the California housing dataset and split it into training/test sets.
- import numpy as np
- from sklearn.model_selection import RandomizedSearchCV
-
- from sklearn.datasets import fetch_california_housing
- from sklearn.model_selection import train_test_split
- from sklearn.metrics import mean_squared_error, r2_score
-
- california_housing_bunch = fetch_california_housing()
- california_housing_X, california_housing_y = california_housing_bunch.data, california_housing_bunch.target
- x_train, x_test, y_train, y_test = train_test_split(california_housing_X, california_housing_y, test_size=0.2)
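- # As a quick check (not part of the original listing), we can confirm the split sizes. The California housing dataset has 20,640 samples with 8 features, so an 80/20 split leaves 16,512 samples for training.
- print(x_train.shape, x_test.shape)  # expected: (16512, 8) (4128, 8)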
- # For the second step, we need to create a basic estimator. If you want to fix some hyperparameters, you can set them at this stage. For example, we can fix the number of neighbors when creating kNN regressors:
- #
- from sklearn.neighbors import KNeighborsRegressor
- 
- # train a kNN regressor with k = 10
- knn_10_regr = KNeighborsRegressor(n_neighbors=10)
- knn_10_regr.fit(x_train, y_train)
- 
- # train a kNN regressor with k = 100, weighting neighbors by their distance
- knn_100_regr = KNeighborsRegressor(n_neighbors=100, weights="distance")
- knn_100_regr.fit(x_train, y_train)
- 
- # For the random search below, the basic estimator is a RandomForestRegressor with default parameters.
- from sklearn.ensemble import RandomForestRegressor
- rf = RandomForestRegressor()
- # Now it is time to create a hyperparameter grid for a random search.
- #
- # Code Listing 9.05. Create a hyperparameter grid for three RandomForestRegressor parameters: n_estimators, max_depth, and bootstrap.
- #
- # Number of trees in random forest
- n_estimators = [int(x) for x in np.linspace(start = 600, stop = 2000, num = 15)]
- # Maximum number of levels in tree
- max_depth = [int(x) for x in np.linspace(10, 80, num = 8)]
- max_depth.append(None)
- # Method of selecting samples for training each tree
- bootstrap = [True, False]
-
- random_grid = {'n_estimators': n_estimators,
-                'max_depth': max_depth,
-                'bootstrap': bootstrap}
- # NumPy's linspace() method generates evenly spaced numbers between a pre-defined start and stop. Now we can start the random search with cross-validation. Once we have a basic regressor, we pass it as the estimator parameter of RandomizedSearchCV(), which then randomly tries different combinations of the hyperparameters we want to test.
- #
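- # As a quick sanity check (these prints are not part of the original listings), we can inspect the values linspace generated and the size of the grid. Only n_iter of the possible combinations will actually be sampled by the random search.
- print(n_estimators)   # 15 evenly spaced values: 600, 700, ..., 2000
- print(max_depth)      # 10, 20, ..., 80, plus None
- print(len(n_estimators) * len(max_depth) * len(bootstrap))   # 270 possible combinations
- #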
- # Code Listing 9.06. Use training data to process the randomized search and cross-validation.
- #
- # Random search of parameters, using 3-fold cross-validation,
- # searching across 10 different combinations and using all available cores
- rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 10, cv = 3, n_jobs = -1)
- # Fit the random search model
- rf_random.fit(x_train, y_train)
- # Each of the 10 randomly sampled hyperparameter combinations is evaluated with 3-fold cross-validation, so fitting trains 30 models before refitting the best one on the full training set.
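- # The listings stop after the fit, but the metrics imported earlier (mean_squared_error, r2_score) suggest a final comparison on the held-out test set. The following is a minimal sketch of that step, assuming we simply compare the best estimator found by the search against a RandomForestRegressor trained with default parameters; it is not part of the original code.
- print(rf_random.best_params_)
- 
- # evaluate the tuned model on the test set
- tuned_pred = rf_random.best_estimator_.predict(x_test)
- print("tuned r2: ", r2_score(y_test, tuned_pred))
- print("tuned mse: ", mean_squared_error(y_test, tuned_pred))
- 
- # evaluate a default random forest on the same test set for comparison
- rf_default = RandomForestRegressor()
- rf_default.fit(x_train, y_train)
- default_pred = rf_default.predict(x_test)
- print("default r2: ", r2_score(y_test, default_pred))
- print("default mse: ", mean_squared_error(y_test, default_pred))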