
# coding: utf-8

# # Your first neural network
#
# In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
#
#

# In[68]:


get_ipython().magic('matplotlib inline')
get_ipython().magic("config InlineBackend.figure_format = 'retina'")

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt


# ## Load and prepare the data
#
# A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!

# In[69]:


data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)


# In[70]:


rides.head()


# ## Checking out the data
#
# This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, and summed up in the `cnt` column. You can see the first few rows of the data above.
#
# Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.

# In[71]:


rides[:24*10].plot(x='dteday', y='cnt')


# ### Dummy variables
# Here we have some categorical variables like season, weather, and month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.

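# As a quick illustration (not part of the project pipeline), `get_dummies()` expands a
# categorical column into one binary indicator column per category, which is exactly what
# the loop in the next cell does for each categorical field:

# In[ ]:


example_season = pd.Series([1, 2, 3, 1], name='season')  # hypothetical toy data
pd.get_dummies(example_season, prefix='season')
# Result has columns season_1, season_2, season_3 with a 0/1 indicator per row.
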
# In[72]:


dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()


# ### Scaling target variables
# To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
#
# The scaling factors are saved so we can go backwards when we use the network for predictions.

# In[73]:


quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std

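
# A quick note on going backwards (a sketch only; the same inverse transform is used when
# we plot predictions at the end of the notebook): a standardized value is recovered by
# multiplying by the saved std and adding back the saved mean.

# In[ ]:


mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean  # recovers the original ride counts
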

# ### Splitting the data into training, testing, and validation sets
#
# We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.

# In[74]:


# Save data for approximately the last 21 days
test_data = data[-21*24:]

# Now remove the test data from the data set
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]


# We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).

# In[75]:


# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]


# ## Time to build the network
#
# Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
#
# <img src="assets/neural_network.png" width=300px>
#
# The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
#
# We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
#
# > **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
#
# Below, you have these tasks (a short sketch of the sigmoid activation follows this list):
# 1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
# 2. Implement the forward pass in the `train` method.
# 3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
# 4. Implement the forward pass in the `run` method.
#

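# A minimal sketch of the sigmoid activation and its derivative, assuming the usual
# logistic definition (the class below wires the same sigmoid in as
# `self.activation_function`). Note also that the output activation $f(x) = x$ has
# derivative 1, which is why the output error term in `train` is just the error.

# In[ ]:


def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, expressed in terms of the sigmoid itself
    s = sigmoid(x)
    return s * (1 - s)
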
# In[79]:


class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                                         (self.input_nodes, self.hidden_nodes))

        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                          (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        #### Set self.activation_function to the sigmoid function ####
        #
        # Note: in Python, you can define a function with a lambda expression,
        # as shown below.
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))  # sigmoid

        ### If the lambda code above is not something you're familiar with,
        # you can use an equivalent named function instead:
        #
        # def sigmoid(x):
        #     return 1 / (1 + np.exp(-x))
        # self.activation_function = sigmoid


    def train(self, features, targets):
        ''' Train the network on a batch of features and targets.

            Arguments
            ---------

            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values

        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            # Hidden layer
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)  # signals into hidden layer
            hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer

            # Output layer
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
            final_outputs = final_inputs  # signals from final output layer (f(x) = x)

            ### Backward pass ###

            # Output layer error is the difference between desired target and actual output.
            error = y - final_outputs

            # Backpropagated error term (delta) for the output; since f(x) = x, f'(x) = 1.
            output_error_term = error

            # The hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)

            # Backpropagated error term (delta) for the hidden layer
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:, None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]

        # Update the weights with one gradient descent step, averaged over the batch
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records  # update hidden-to-output weights
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records  # update input-to-hidden weights

    def run(self, features):
        ''' Run a forward pass through the network with input features

            Arguments
            ---------
            features: 1D array of feature values
        '''

        ### Forward pass ###
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer

        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs  # signals from final output layer

        return final_outputs


# In[80]:


def MSE(y, Y):
    return np.mean((y-Y)**2)


# ## Unit tests
#
# Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.

# In[82]:


import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328],
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996],
                                              [ 0.39775194,  0.50074398],
                                              [-0.29887597,  0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)


# ## Training the network
#
# Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
#
# You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
#
# ### Choose the number of iterations
# This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
#
# ### Choose the learning rate
# This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1 (see the short sketch after this cell). In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
#
# ### Choose the number of hidden nodes
# In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
#
# Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.

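# A short sketch of why a learning rate of 1 can be reasonable here (illustrative values
# only): the weight update in `train` divides by n_records, so the effective per-record
# step is learning_rate / n_records.

# In[ ]:


example_learning_rate = 1.0   # hypothetical value for illustration
example_batch_size = 128      # matches the batch size used in the training loop below
effective_step = example_learning_rate / example_batch_size
print(effective_step)  # 0.0078125 -- comparable to a small per-record learning rate
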
# In[ ]:


import sys

### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations))
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)


# In[ ]:


plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()


# ## Check out your predictions
#
# Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.

# In[ ]:


fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)


# ## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).
#
# Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
#
# > **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.
#
# #### Your answer below