function [J, grad] = nnCostFunction(nn_params, ...
                                    input_layer_size, ...
                                    hidden_layer_size, ...
                                    num_labels, ...
                                    X, y, lambda)

% Reshape nn_params back into the weight matrices Theta1 and Theta2
% for our two-layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

% Setup some useful variables
m = size(X, 1);
X = [ones(m, 1) X];          % add the bias column to the input layer

% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial derivatives of
%         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
%         Theta2_grad, respectively. After implementing Part 2, you can check
%         that your implementation is correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector into a
%               binary vector of 1's and 0's to be used with the neural network
%               cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for the
%               first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients for
%               the regularization separately and then add them to Theta1_grad
%               and Theta2_grad from Part 2.
%
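% As a reference for Part 1, the regularized cost computed below is
%   J = (1/m) * sum_i sum_k [ -y_k(i)*log(h_k(i)) - (1 - y_k(i))*log(1 - h_k(i)) ]
%       + (lambda/(2*m)) * ( sum(Theta1(:,2:end).^2) + sum(Theta2(:,2:end).^2) )
% where h is the output of the feedforward pass and y has first been mapped
% from labels 1..K to 1-of-K row vectors.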
% Part 1: feedforward pass
h1_out = [ones(m, 1) sigmoid(X * Theta1')];   % hidden-layer activations with bias, m x (hidden+1)
h = sigmoid(h1_out * Theta2');                % output-layer activations, m x num_labels

% Map the label vector y (values 1..K) to 1-of-K binary row vectors
tmp_y = zeros(size(h));
for i = 1:m
    tmp_y(i, y(i)) = 1;
end
y = tmp_y;

% Copies of the weights with the bias columns zeroed out, so the bias
% terms are not regularized
tmp_theta1 = Theta1;
tmp_theta2 = Theta2;
tmp_theta1(:,1) = 0;
tmp_theta2(:,1) = 0;

% Regularized cross-entropy cost
J = (-1/m) * sum(sum(y .* log(h)) + sum((1 - y) .* log(1 - h))) + ...
    (lambda/(2*m)) * (sum(tmp_theta1(:) .^ 2) + sum(tmp_theta2(:) .^ 2));

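% The backpropagation below is vectorized over all m examples at once
% (rather than the per-example loop suggested in the hint above); the
% quantities it computes are
%   diff_3 = a_3 - y                                      (output-layer error)
%   diff_2 = (diff_3 * Theta2) .* g'(z_2)                 (hidden-layer error, bias column dropped)
%   Theta_grad = (1/m) * diff' * a + (lambda/m) * Theta   (regularized, bias column excluded)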
% Part 2: backpropagation
a_1 = X;                      % input activations (with bias), m x (input+1)

z_2 = a_1 * Theta1';          % hidden-layer pre-activations
a_2 = h1_out;                 % hidden-layer activations (with bias), computed above

z_3 = h1_out * Theta2';       % output-layer pre-activations
a_3 = h;                      % network output

diff_3 = a_3 - y;                                                          % output-layer error
diff_2 = diff_3 * Theta2 .* sigmoidGradient([ones(size(z_2, 1), 1) z_2]);  % hidden-layer error
diff_2 = diff_2(:, 2:end);                                                 % drop the bias column

% Accumulate the gradients over all examples
delta_1 = diff_2' * a_1;
delta_2 = diff_3' * a_2;

% Part 3: average and add the regularization term (bias columns excluded
% via tmp_theta1/tmp_theta2)
Theta1_grad = delta_1 ./ m + (lambda/m) .* tmp_theta1;
Theta2_grad = delta_2 ./ m + (lambda/m) .* tmp_theta2;

% =========================================================================

% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];

end
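
% ------------------------------------------------------------------------
% Minimal usage sketch (this belongs in a separate script such as ex4.m,
% not inside this function file). It assumes the other exercise files --
% sigmoid.m, sigmoidGradient.m, checkNNGradients.m, fmincg.m and the
% ex4data1.mat training set -- are on the path; the layer sizes and lambda
% below are illustrative only.
%
%   input_layer_size  = 400;
%   hidden_layer_size = 25;
%   num_labels        = 10;
%   lambda            = 1;
%   load('ex4data1.mat');                 % provides the training matrix X and label vector y
%
%   % Small random initial weights, unrolled into a single parameter vector
%   initial_Theta1 = rand(hidden_layer_size, input_layer_size + 1) * 0.24 - 0.12;
%   initial_Theta2 = rand(num_labels, hidden_layer_size + 1) * 0.24 - 0.12;
%   initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
%
%   % Verify the analytic gradients against numerical gradients
%   checkNNGradients(lambda);
%
%   % Train with fmincg, using nnCostFunction as the cost/gradient routine
%   costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
%                                      num_labels, X, y, lambda);
%   options = optimset('MaxIter', 50);
%   [nn_params, cost] = fmincg(costFunction, initial_nn_params, options);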