import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri

session = sagemaker.Session()
prefix = 'xgboost-model'  # S3 key prefix under the default bucket (placeholder name)

# Our current execution role is required when creating the model, as the
# training and inference code will need to access the model artifacts.
role = get_execution_role()

# Retrieve the location of the XGBoost container provided by Amazon.
# For convenience, the training and inference code both use the same container.
container = get_image_uri(session.boto_region_name, 'xgboost')

# Create a SageMaker estimator using the container location determined above.
# A single training instance of type ml.m4.xlarge is sufficient here, and the
# model artifacts are written to 's3://{}/{}/output'.format(session.default_bucket(), prefix).
xgb = sagemaker.estimator.Estimator(container,  # the container image we wish to use
                                    role,       # our current IAM role
                                    train_instance_count=1,              # number of compute instances
                                    train_instance_type='ml.m4.xlarge',  # type of compute instance
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# Set the XGBoost hyperparameters on the xgb estimator. Since the label is
# binary, we use the 'binary:logistic' objective.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
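For intuition on what the `binary:logistic` objective does: XGBoost sums the raw scores of its trees into a margin, maps that margin to a probability with the logistic (sigmoid) function, and minimizes log loss on that probability. A minimal pure-Python sketch, independent of SageMaker (the margin values below are made up for illustration):

```python
import math

def sigmoid(margin):
    # binary:logistic maps the raw boosted score (margin) to a probability
    return 1.0 / (1.0 + math.exp(-margin))

def log_loss(y_true, prob):
    # The per-example loss XGBoost minimizes under binary:logistic
    return -(y_true * math.log(prob) + (1 - y_true) * math.log(1 - prob))

p_confident = sigmoid(2.0)    # ~0.88: a fairly confident positive prediction
p_uncertain = sigmoid(-0.3)   # ~0.43: an uncertain, slightly-negative prediction

loss_good = log_loss(1, p_confident)  # small loss: confident and correct
loss_meh = log_loss(0, p_uncertain)   # larger loss: barely on the right side
```

The `early_stopping_rounds=10` setting halts boosting once this validation loss fails to improve for 10 consecutive rounds, so `num_round=500` is an upper bound rather than a fixed count.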