# coding: utf-8

# # Deep Neural Network for Image Classification: Application
#
# When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
#
# You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
#
# **After this assignment you will be able to:**
# - Build and apply a deep neural network to supervised learning.
#
# Let's get started!

# ## 1 - Packages

# Let's first import all the packages that you will need during this assignment.
# - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.
# - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
# - [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
# - [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
# - dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
# - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.

# In[1]:

import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *

get_ipython().magic('matplotlib inline')
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')

np.random.seed(1)


# ## 2 - Dataset
#
# You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built then had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform better!
#
# **Problem Statement**: You are given a dataset ("data.h5") containing:
#     - a training set of m_train images labelled as cat (1) or non-cat (0)
#     - a test set of m_test images labelled as cat or non-cat
#     - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
#
# Let's get more familiar with the dataset. Load the data by running the cell below.

# In[2]:

train_x_orig, train_y, test_x_orig, test_y, classes = load_data()


# The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.

# In[3]:

# Example of a picture
index = 17
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")


# In[4]:

# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]

print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))


# As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
#
#
# In[5]:

# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T   # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T

# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.

print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))


# $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
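#
# As a quick sanity check (not part of the assignment), here is a minimal sketch of the same flatten-and-scale step applied to a small dummy array, so you can see why the feature dimension becomes num_px * num_px * 3. The names `dummy_images` and `dummy_flat` are illustrative only:
#
# ```python
# import numpy as np
#
# # Two tiny 4x4 RGB "images" standing in for the real dataset
# dummy_images = np.random.randint(0, 256, size=(2, 4, 4, 3))
#
# # Flatten each image into a column vector, then scale pixel values to [0, 1]
# dummy_flat = dummy_images.reshape(dummy_images.shape[0], -1).T / 255.
#
# print(dummy_flat.shape)   # (48, 2): 4 * 4 * 3 features, 2 examples
# ```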

# ## 3 - Architecture of your model

# Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
#
# You will build two different models:
# - A 2-layer neural network
# - An L-layer deep neural network
#
# You will then compare the performance of these models, and also try out different values for $L$.
#
# Let's look at the two architectures.
#
# ### 3.1 - 2-layer neural network
#
# <caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***. </center></caption>
#
# <u>Detailed Architecture of figure 2</u>:
# - The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
# - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
# - You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
# - You then repeat the same process.
# - You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
# - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat (see the numpy sketch below).
#
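# To make the shapes in this list concrete, here is a minimal numpy sketch of the 2-layer forward pass on random data. It deliberately uses plain numpy instead of the graded helper functions, and the sizes (`n_h = 7`, `m = 5`) are illustrative only:
#
# ```python
# import numpy as np
#
# np.random.seed(1)
# n_x, n_h, m = 12288, 7, 5               # input size, hidden units, examples (illustrative)
# X = np.random.rand(n_x, m)              # stand-in for the flattened, scaled images
#
# W1 = np.random.randn(n_h, n_x) * 0.01   # layer 1 weights, shape (n_h, n_x)
# b1 = np.zeros((n_h, 1))
# W2 = np.random.randn(1, n_h) * 0.01     # layer 2 weights, shape (1, n_h)
# b2 = np.zeros((1, 1))
#
# A1 = np.maximum(0, W1 @ X + b1)           # LINEAR -> RELU
# A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))    # LINEAR -> SIGMOID
# predictions = (A2 > 0.5).astype(int)      # classify as cat if probability > 0.5
# print(A2.shape)                           # (1, 5)
# ```
#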
# ### 3.2 - L-layer deep neural network
#
# It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
#
# <img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
# <caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID***</center></caption>
#
# <u>Detailed Architecture of figure 3</u>:
# - The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
# - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
# - Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
# - Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat (see the loop sketch below).
#
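# The same idea written as a loop: a minimal numpy sketch of the ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID*** pattern. It assumes a `parameters` dictionary keyed "W1", "b1", ..., "WL", "bL" (the convention used by the helper functions); the layer sizes are illustrative only:
#
# ```python
# import numpy as np
#
# def forward_sketch(X, parameters):
#     """Illustrative forward pass: [LINEAR -> RELU] * (L-1) -> LINEAR -> SIGMOID."""
#     L = len(parameters) // 2          # number of layers with parameters
#     A = X
#     for l in range(1, L):             # hidden layers: LINEAR -> RELU
#         A = np.maximum(0, parameters["W" + str(l)] @ A + parameters["b" + str(l)])
#     ZL = parameters["W" + str(L)] @ A + parameters["b" + str(L)]
#     return 1 / (1 + np.exp(-ZL))      # output layer: LINEAR -> SIGMOID
#
# # Tiny example with layer sizes [4, 3, 1]
# np.random.seed(1)
# params = {"W1": np.random.randn(3, 4) * 0.01, "b1": np.zeros((3, 1)),
#           "W2": np.random.randn(1, 3) * 0.01, "b2": np.zeros((1, 1))}
# print(forward_sketch(np.random.rand(4, 2), params).shape)  # (1, 2)
# ```
#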
# ### 3.3 - General methodology
#
# As usual you will follow the Deep Learning methodology to build the model:
#     1. Initialize parameters / Define hyperparameters
#     2. Loop for num_iterations:
#         a. Forward propagation
#         b. Compute cost function
#         c. Backward propagation
#         d. Update parameters (using parameters, and grads from backprop)
#     3. Use trained parameters to predict labels
#
# Let's now implement those two models!

# ## 4 - Two-layer neural network
#
# **Question**: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
# ```python
# def initialize_parameters(n_x, n_h, n_y):
#     ...
#     return parameters
# def linear_activation_forward(A_prev, W, b, activation):
#     ...
#     return A, cache
# def compute_cost(AL, Y):
#     ...
#     return cost
# def linear_activation_backward(dA, cache, activation):
#     ...
#     return dA_prev, dW, db
# def update_parameters(parameters, grads, learning_rate):
#     ...
#     return parameters
# ```

# In[6]:

### CONSTANTS DEFINING THE MODEL ###
n_x = 12288     # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)


# In[ ]:

# GRADED FUNCTION: two_layer_model

def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """

    np.random.seed(1)
    grads = {}
    costs = []                               # to keep track of the cost
    m = X.shape[1]                           # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)

    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation = "relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation = "sigmoid")
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost

    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters


# Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) in the upper bar of the notebook to stop the cell and try to find your error.

# In[ ]:

parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)


# Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
#
# Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.

# In[ ]:

predictions_train = predict(train_x, train_y, parameters)


# In[ ]:

predictions_test = predict(test_x, test_y, parameters)

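# `predict` is provided by `dnn_app_utils_v2`, so its implementation is not shown here. As an assumption for illustration only (not the graded code), a helper with this signature typically forward-propagates, thresholds at 0.5, and reports accuracy, along these lines:
#
# ```python
# import numpy as np
#
# def predict_sketch(X, y, parameters):
#     """Illustrative only: forward-propagate, threshold at 0.5, report accuracy."""
#     AL, _ = L_model_forward(X, parameters)     # probabilities, shape (1, m)
#     p = (AL > 0.5).astype(int)                 # 1 = cat, 0 = non-cat
#     print("Accuracy: " + str(np.mean(p == y)))
#     return p
# ```
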
# **Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
#
# Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.

# ## 5 - L-layer Neural Network
#
# **Question**: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
# ```python
# def initialize_parameters_deep(layer_dims):
#     ...
#     return parameters
# def L_model_forward(X, parameters):
#     ...
#     return AL, caches
# def compute_cost(AL, Y):
#     ...
#     return cost
# def L_model_backward(AL, Y, caches):
#     ...
#     return grads
# def update_parameters(parameters, grads, learning_rate):
#     ...
#     return parameters
# ```

# In[ ]:

### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] #  4-layer model

# In[ ]:

# GRADED FUNCTION: L_layer_model

def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
    """
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- input data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    np.random.seed(1)
    costs = []                         # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters


# You will now train the model as a 4-layer neural network.
#
# Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) in the upper bar of the notebook to stop the cell and try to find your error.

# In[ ]:

parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)


pred_train = predict(train_x, train_y, parameters)


pred_test = predict(test_x, test_y, parameters)


# ## 6 - Results Analysis
#
# First, let's take a look at some images the L-layer model labeled incorrectly. The cell below will show a few mislabeled images.

# In[ ]:

print_mislabeled_images(classes, test_x, test_y, pred_test)

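# `print_mislabeled_images` is provided by `dnn_app_utils_v2`. If you want to inspect the misclassified examples yourself, a minimal sketch along these lines would work, assuming `pred_test` and `test_y` are both (1, m) arrays of 0/1 labels:
#
# ```python
# import numpy as np
# import matplotlib.pyplot as plt
#
# # Indices of test images where the prediction disagrees with the label
# mislabeled = np.where(pred_test[0] != test_y[0])[0]
# print("Mislabeled test images:", mislabeled)
#
# # Look at the first mislabeled image (if any)
# if mislabeled.size > 0:
#     idx = mislabeled[0]
#     plt.imshow(test_x[:, idx].reshape(num_px, num_px, 3))
#     plt.title("Predicted " + str(int(pred_test[0, idx])) + ", actual " + str(int(test_y[0, idx])))
#     plt.show()
# ```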

# **A few types of images the model tends to do poorly on include:**
# - Cat body in an unusual position
# - Cat appears against a background of a similar color
# - Unusual cat color and species
# - Camera angle
# - Brightness of the picture
# - Scale variation (cat is very large or small in image)