%
% Assume a network input layer of size s1,
% one hidden layer of size s2,
% and an output layer of size s3.
%
% Note that a bias unit is added to the input and hidden layers.
%
% Theta1 and Theta2 are pre-calculated weights.
% (This example just performs forward propagation.)
%
% Theta1 ∈ R^(s2 x (s1+1))
% Theta2 ∈ R^(s3 x (s2+1))
% X ∈ R^(n x s1)
%

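% ---- Setup sketch (not in the original paste): hypothetical layer sizes
% and random weights, just so the script can run end to end. In practice
% Theta1 and Theta2 would come from training.
s1 = 4; s2 = 5; s3 = 3;         % hypothetical layer sizes
n = 10;                         % hypothetical number of examples
Theta1 = rand(s2, s1+1) - 0.5;  % R^(s2 x (s1+1))
Theta2 = rand(s3, s2+1) - 0.5;  % R^(s3 x (s2+1))
X = rand(n, s1);                % R^(n x s1)
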
% Prepend a row of ones (bias units) to an activation matrix.
function A = addBiasRow(X)
  biases = ones(1, size(X,2));
  A = [biases; X];
endfunction
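% Quick check: addBiasRow([2 4; 3 5]) yields [1 1; 2 4; 3 5].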

% Element-wise logistic sigmoid.
function h = sigmoid(Z)
  h = 1 ./ (1 + exp(-Z));
endfunction
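% Sanity check: sigmoid(0) = 0.5, and e.g. sigmoid([-10 0 10])
% ≈ [0.0000 0.5000 1.0000].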

% Add bias unit to layer 1
X = X';             % R^(s1 x n)
A1 = addBiasRow(X); % R^((s1+1) x n)

% Determine activation of layer 2
Z2 = Theta1 * A1; % R^(s2 x n)
A2 = sigmoid(Z2); % R^(s2 x n)

% Add bias unit to layer 2
A2 = addBiasRow(A2); % R^((s2+1) x n)

% Determine activation of layer 3
Z3 = Theta2 * A2; % R^(s3 x n)
A3 = sigmoid(Z3); % R^(s3 x n)

% Find the index of the most probable class in each column,
% which is the predicted classification.
% (vals contains the corresponding conditional probabilities.)
[vals, idxs] = max(A3, [], 1);
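
% ---- Usage sketch (not in the original paste): if true labels were
% available as a 1 x n row vector y of class indices in 1..s3, the
% prediction accuracy could be estimated like this.
% y = ...;                    % hypothetical labels
% accuracy = mean(idxs == y); % fraction of correct predictions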