# Usually, hyperparameter tuning is combined with cross-validation. Sometimes, however, we want to run cross-validation on its own to see whether a candidate model generalizes well enough on a dataset. To this end, let's use a linear regressor as an example.
#
# Code Listing 9.01. Import all the necessary packages for the cross-validation example. We use the first 150 data points of the diabetes dataset and make a linear regressor with default parameters.
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_validate
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
lr = linear_model.LinearRegression()
# Next, we will use the cross_validate() method to apply cross-validation to the linear regressor.
#
# Code Listing 9.02. Use the cross_validate() method to apply 5-fold cross-validation on the linear regressor, and then print the test scores.
scores = cross_validate(lr, X, y, cv=5, scoring=('r2', 'neg_mean_squared_error'),
                        return_train_score=True)
print("negative mean squared errors: ", scores["test_neg_mean_squared_error"])
print("r2 scores: ", scores["test_r2"])
negative mean squared errors:  [-2547.29219945 -4523.25983124 -2301.49369105 -4378.07848216
 -2409.19372015]
r2 scores:  [0.36324841 0.28239194 0.4211776  0.30071196 0.61240533]
# We use 5-fold cross-validation with r2 and negative mean squared error as the metrics. As we can see from the output, the linear regressor performs differently on each fold. That is why cross-validation helps us observe how the performance varies as the data changes.
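# To summarize that variation in a single number, we can report the mean and standard deviation of the fold scores. A minimal sketch based on the scores dictionary above (the values returned by cross_validate() are NumPy arrays):
print("mean r2: %.3f (+/- %.3f)" % (scores["test_r2"].mean(), scores["test_r2"].std()))
print("mean MSE: %.2f" % -scores["test_neg_mean_squared_error"].mean())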
#
# Now we can try to incorporate hyperparameter tuning and see how it improves the performance over a model with default parameters. We know random forest models perform very well with default settings. Can we still make improvements with hyperparameter tuning and cross-validation?
#
# First, we will fetch the California housing dataset for this exercise. As usual, we will randomly take 80% of the data for training.
#
# Code Listing 9.03. Fetch the California housing dataset and split it into training/test sets.
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
california_housing_bunch = fetch_california_housing()
california_housing_X, california_housing_y = california_housing_bunch.data, california_housing_bunch.target
x_train, x_test, y_train, y_test = train_test_split(california_housing_X, california_housing_y, test_size=0.2)
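# Note that train_test_split() shuffles the data differently on every run, so your numbers may differ from ours. If you need a reproducible split, you can pass a fixed seed through the optional random_state parameter, for example:
# x_train, x_test, y_train, y_test = train_test_split(california_housing_X, california_housing_y, test_size=0.2, random_state=42)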
# For the second step, we need to create a basic estimator. If you want to fix some hyperparameters (that is, keep them out of the search), you can set them at this stage.
#
# Code Listing 9.04. Create a basic RandomForestRegressor with default parameters and train it as our baseline.
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
rf.fit(x_train, y_train)
# Now it is time to create a hyperparameter grid for the random search.
#
# Code Listing 9.05. Create a hyperparameter grid for 3 parameters of RandomForestRegressor: n_estimators, max_depth, and bootstrap.
#
# Number of trees in the random forest
n_estimators = [int(x) for x in np.linspace(start=600, stop=2000, num=15)]
# Maximum number of levels in each tree
max_depth = [int(x) for x in np.linspace(10, 80, num=8)]
max_depth.append(None)
# Method of selecting samples for training each tree
bootstrap = [True, False]
random_grid = {'n_estimators': n_estimators,
               'max_depth': max_depth,
               'bootstrap': bootstrap}
# NumPy's linspace() method creates an evenly spaced list of numbers between a predefined start and stop. Now we can start the random search with cross-validation: once we have a basic regressor, we pass it as the estimator parameter of RandomizedSearchCV(), which randomly tries combinations of the hyperparameters we want to test.
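# As a quick sanity check, we can count how many candidate combinations the grid above contains; the random search will sample only n_iter of them:
print(len(n_estimators) * len(max_depth) * len(bootstrap))  # 15 * 9 * 2 = 270 candidates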
#
# Code Listing 9.06. Use the training data to run the randomized search with cross-validation.
#
# Random search of parameters, using 3-fold cross-validation;
# search across 10 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=10, cv=3, n_jobs=-1)
# Fit the random search model
rf_random.fit(x_train, y_train)
# The parameter n_iter determines how many hyperparameter combinations we want to try. Initially, we should set it to 1, see how long that takes, and use the result to estimate the time cost of a larger n_iter. cv=3 means we will do 3-fold cross-validation. In total we will train/test n_iter * cv times and then pick the optimal hyperparameter combination. Given the California housing dataset, running Code Listing 9.06 takes about 7.5 minutes on a MacBook Pro laptop.
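# As a minimal sketch of that timing advice (reusing rf and random_grid from the listings above), we can time a single-candidate search and extrapolate the cost of a larger n_iter:
import time
start = time.time()
RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=1, cv=3, n_jobs=-1).fit(x_train, y_train)
print("one candidate took %.1f seconds" % (time.time() - start))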
#
# Once the procedure has finished, we can compare the best (fine-tuned) estimator we obtained against the basic estimator.
#
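# Before the final evaluation, it is worth peeking at the winning combination and its score; best_params_, best_score_, and best_estimator_ are standard attributes of a fitted RandomizedSearchCV:
print(rf_random.best_params_)  # the sampled combination with the best mean CV score
print(rf_random.best_score_)   # that combination's mean cross-validated score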
# Code Listing 9.07. Evaluate the basic estimator and the fine-tuned estimator.
# Evaluate the two models on the test set with the metrics MSE and R2
from sklearn.metrics import mean_squared_error, r2_score
print("Performance of the basic random forest regressor")
california_housing_y_pred = rf.predict(x_test)
print("Mean squared error: %.2f" % mean_squared_error(y_test, california_housing_y_pred))
print("Coefficient of determination: %.2f" % r2_score(y_test, california_housing_y_pred))
print()
print("Performance of the fine-tuned random forest regressor")
california_housing_y_pred = rf_random.best_estimator_.predict(x_test)
print("Mean squared error: %.2f" % mean_squared_error(y_test, california_housing_y_pred))
print("Coefficient of determination: %.2f" % r2_score(y_test, california_housing_y_pred))
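# By default, RandomizedSearchCV refits the best combination on the whole training set (refit=True), so best_estimator_ can be used directly for predictions. As a final hedged sketch, the fine-tuned model can be persisted with joblib (the file name model.joblib is just an example):
from joblib import dump, load
dump(rf_random.best_estimator_, "model.joblib")  # save the fine-tuned model
best_rf = load("model.joblib")  # load it back later for predictions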