import numpy as np


class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network. The net has an input dimension of
    N, a hidden layer dimension of H, and performs classification over C classes.
    We train the network with a softmax loss function and L2 regularization on the
    weight matrices. The network uses a ReLU nonlinearity after the first fully
    connected layer.

    In other words, the network has the following architecture:

    input - fully connected layer - ReLU - fully connected layer - softmax

    The outputs of the second fully-connected layer are the scores for each class.
    """

    def __init__(self, input_size, hidden_size, output_size, std=1e-4):
        """
        Initialize the model. Weights are initialized to small random values and
        biases are initialized to zero. Weights and biases are stored in the
        variable self.params, which is a dictionary with the following keys:

        W1: First layer weights; has shape (D, H)
        b1: First layer biases; has shape (H,)
        W2: Second layer weights; has shape (H, C)
        b2: Second layer biases; has shape (C,)

        Inputs:
        - input_size: The dimension D of the input data.
        - hidden_size: The number of neurons H in the hidden layer.
        - output_size: The number of classes C.
        """
        self.params = {}
        self.params['W1'] = std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

    def loss(self, X, y=None, reg=0.0):
        """
        Compute the loss and gradients for a two-layer fully connected neural
        network.

        Inputs:
        - X: Input data of shape (N, D). Each X[i] is a training sample.
        - y: Vector of training labels. y[i] is the label for X[i], and each y[i] is
          an integer in the range 0 <= y[i] < C. This parameter is optional; if it
          is not passed then we only return scores, and if it is passed then we
          instead return the loss and gradients.
        - reg: Regularization strength.

        Returns:
        If y is None, return a matrix scores of shape (N, C) where scores[i, c] is
        the score for class c on input X[i].

        If y is not None, instead return a tuple of:
        - loss: Loss (data loss and regularization loss) for this batch of training
          samples.
        - grads: Dictionary mapping parameter names to gradients of those parameters
          with respect to the loss function; has the same keys as self.params.
        """
        # Unpack variables from the params dictionary
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape

        # Compute the forward pass
        scores = None
        #############################################################################
        # TODO#1: Perform the forward pass, computing the class scores for the      #
        # input. Store the result in the scores variable, which should be an array  #
        # of shape (N, C). Note that this does not include the softmax.             #
        # HINT: This is just a series of matrix multiplications.                    #
        #############################################################################
        h = np.dot(X, W1) + b1        # hidden layer pre-activations, shape (N, H)
        hR = np.maximum(0, h)         # ReLU nonlinearity
        scores = np.dot(hR, W2) + b2  # class scores, shape (N, C)
        #############################################################################
        #                              END OF TODO#1                                #
        #############################################################################

        # If the targets are not given then jump out, we're done
        if y is None:
            return scores

        # Compute the loss
        loss = None
        #############################################################################
        # TODO#2: Finish the forward pass, and compute the loss. This should include#
        # both the data loss and L2 regularization for W1 and W2. Store the result  #
        # in the variable loss, which should be a scalar. Use the Softmax           #
        # classifier loss.                                                          #
        #############################################################################

        #############################################################################
        #                              END OF TODO#2                                #
        #############################################################################
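
        # One possible way to fill in TODO#2 (a sketch, not part of the original
        # paste): numerically stable softmax cross-entropy averaged over the batch,
        # plus L2 regularization on W1 and W2. Some versions of this assignment put
        # a factor of 0.5 in front of the regularization term; if so, adjust here
        # and keep the backward pass consistent.
        shifted = scores - np.max(scores, axis=1, keepdims=True)        # stability
        exp_scores = np.exp(shifted)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # (N, C)
        data_loss = -np.sum(np.log(probs[np.arange(N), y])) / N
        reg_loss = reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
        loss = data_loss + reg_loss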

        # Backward pass: compute gradients
        grads = {}
        #############################################################################
        # TODO#3: Compute the backward pass, computing derivatives of the weights   #
        # and biases. Store the results in the grads dictionary. For example,       #
        # grads['W1'] should store the gradient on W1, and be a matrix of the same  #
        # size. Don't forget about the regularization term.                         #
        #############################################################################

        #############################################################################
        #                              END OF TODO#3                                #
        #############################################################################
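
        # One possible way to fill in TODO#3 (a sketch, not part of the original
        # paste), reusing `probs` and `hR` from above. The gradient of the data loss
        # flows back through the second layer, the ReLU, and the first layer; the
        # 2 * reg terms match the reg * sum(W**2) loss written above (drop the
        # factor of 2 if the loss uses 0.5 * reg instead).
        dscores = probs.copy()
        dscores[np.arange(N), y] -= 1
        dscores /= N                                    # (N, C)

        grads['W2'] = np.dot(hR.T, dscores) + 2 * reg * W2
        grads['b2'] = np.sum(dscores, axis=0)

        dhidden = np.dot(dscores, W2.T)                 # (N, H)
        dhidden[hR <= 0] = 0                            # backprop through the ReLU

        grads['W1'] = np.dot(X.T, dhidden) + 2 * reg * W1
        grads['b1'] = np.sum(dhidden, axis=0)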

        return loss, grads

    def train(self, X, y, X_val, y_val,
              learning_rate=1e-3, learning_rate_decay=0.95,
              reg=5e-6, num_iters=100,
              batch_size=200, verbose=False):
        """
        Train this neural network using stochastic gradient descent.

        Inputs:
        - X: A numpy array of shape (N, D) giving training data.
        - y: A numpy array of shape (N,) giving training labels; y[i] = c means that
          X[i] has label c, where 0 <= c < C.
        - X_val: A numpy array of shape (N_val, D) giving validation data.
        - y_val: A numpy array of shape (N_val,) giving validation labels.
        - learning_rate: Scalar giving learning rate for optimization.
        - learning_rate_decay: Scalar giving factor used to decay the learning rate
          after each epoch.
        - reg: Scalar giving regularization strength.
        - num_iters: Number of steps to take when optimizing.
        - batch_size: Number of training examples to use per step.
        - verbose: boolean; if true print progress during optimization.
        """
        num_train = X.shape[0]
        iterations_per_epoch = max(num_train // batch_size, 1)

        # Use SGD to optimize the parameters in self.params
        loss_history = []
        train_acc_history = []
        val_acc_history = []

        for it in range(num_iters):
            X_batch = None
            y_batch = None

            #########################################################################
            # TODO#4: Create a random minibatch of training data and labels, storing#
            # them in X_batch and y_batch respectively.                             #
            # You might find np.random.choice() helpful.                            #
            #########################################################################

            #########################################################################
            #                             END OF YOUR TODO#4                        #
            #########################################################################
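
            # One possible way to fill in TODO#4 (a sketch, not part of the
            # original paste): sample batch_size indices (with replacement, which
            # keeps things simple when batch_size > num_train) and gather the
            # corresponding rows.
            batch_idx = np.random.choice(num_train, batch_size, replace=True)
            X_batch = X[batch_idx]
            y_batch = y[batch_idx]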

            # Compute loss and gradients using the current minibatch
            loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
            loss_history.append(loss)

            #########################################################################
            # TODO#5: Use the gradients in the grads dictionary to update the       #
            # parameters of the network (stored in the dictionary self.params)      #
            # using stochastic gradient descent. You'll need to use the gradients   #
            # stored in the grads dictionary defined above.                         #
            #########################################################################

            #########################################################################
            #                             END OF YOUR TODO#5                        #
            #########################################################################
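
            # One possible way to fill in TODO#5 (a sketch, not part of the
            # original paste): vanilla SGD, stepping every parameter in the
            # direction opposite its gradient.
            for param_name in self.params:
                self.params[param_name] -= learning_rate * grads[param_name]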

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

            # Every epoch, check train and val accuracy and decay learning rate.
            if it % iterations_per_epoch == 0:
                # Check accuracy
                train_acc = (self.predict(X_batch) == y_batch).mean()
                val_acc = (self.predict(X_val) == y_val).mean()
                train_acc_history.append(train_acc)
                val_acc_history.append(val_acc)

                # Decay learning rate
                #######################################################################
                # TODO#6: Decay learning rate (exponentially) after each epoch        #
                #######################################################################

                #######################################################################
                #                             END OF YOUR TODO#6                      #
                #######################################################################
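
                # One possible way to fill in TODO#6 (a sketch, not part of the
                # original paste): scale the learning rate by the decay factor once
                # per epoch, giving exponential decay over epochs.
                learning_rate *= learning_rate_decay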

        return {
            'loss_history': loss_history,
            'train_acc_history': train_acc_history,
            'val_acc_history': val_acc_history,
        }

    def predict(self, X):
        """
        Use the trained weights of this two-layer network to predict labels for
        data points. For each data point we predict scores for each of the C
        classes, and assign each data point to the class with the highest score.

        Inputs:
        - X: A numpy array of shape (N, D) giving N D-dimensional data points to
          classify.

        Returns:
        - y_pred: A numpy array of shape (N,) giving predicted labels for each of
          the elements of X. For all i, y_pred[i] = c means that X[i] is predicted
          to have class c, where 0 <= c < C.
        """
        y_pred = None

        ###########################################################################
        # TODO#7: Implement this function; it should be VERY simple!              #
        ###########################################################################

        ###########################################################################
        #                              END OF YOUR TODO#7                         #
        ###########################################################################
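
        # One possible way to fill in TODO#7 (a sketch, not part of the original
        # paste): calling loss() with no labels returns the class scores, and the
        # prediction is the argmax over classes.
        scores = self.loss(X)
        y_pred = np.argmax(scores, axis=1)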

        return y_pred
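

# A minimal usage sketch (not part of the original paste): train the network on a
# tiny synthetic problem and report training accuracy. It assumes the TODO
# sketches above are used to fill in the missing pieces.
if __name__ == '__main__':
    np.random.seed(0)
    X_toy = np.random.randn(100, 5)            # 100 points, 5 features
    y_toy = np.random.randint(0, 3, size=100)  # 3 classes
    net = TwoLayerNet(input_size=5, hidden_size=10, output_size=3)
    stats = net.train(X_toy, y_toy, X_toy, y_toy,
                      num_iters=200, batch_size=20, verbose=True)
    train_acc = (net.predict(X_toy) == y_toy).mean()
    print('final training accuracy: %f' % train_acc)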