%
% Assume a network input layer of size s1,
% one hidden layer of size s2,
% and an output layer of size s3.
%
% Note a bias unit is added to the input and hidden layers.
%
% Theta1 and Theta2 are pre-calculated weights.
% (This example just performs forward propagation.)
%
% Theta1 ∈ R^s2,(s1+1)
% Theta2 ∈ R^s3,(s2+1)
% X ∈ R^n,s1
%
1; % script-file marker so Octave does not parse this file as a function file

function A = addBiasRow(X)
  biases = ones(1, size(X, 2));
  A = [biases; X];
endfunction

% element-wise logistic function
function h = sigmoid(Z)
  h = 1 ./ (1 + exp(-Z));
endfunction
% Add bias unit to layer 1
X = X';               % R^s1,n
A1 = addBiasRow(X);   % R^(s1+1),n

% Determine activation of layer 2
Z2 = Theta1 * A1;     % R^s2,n
A2 = sigmoid(Z2);     % R^s2,n

% Add bias unit to layer 2
A2 = addBiasRow(A2);  % R^(s2+1),n

% Determine activation of layer 3
Z3 = Theta2 * A2;     % R^s3,n
A3 = sigmoid(Z3);     % R^s3,n

% Find the index of the most probable classification in each column;
% that index is the predicted class.
% (vals contains the corresponding conditional probabilities.)
[vals, idxs] = max(A3, [], 1);
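% As a sanity check on the dimension comments above, the same pipeline can be
% sketched in NumPy. The layer sizes (s1=4, s2=5, s3=3, n=10) and the random
% weights are arbitrary assumptions for illustration, not values from this
% script; only the shapes are being verified.

```python
import numpy as np

def add_bias_row(X):
    # Prepend a row of ones (bias units), mirroring addBiasRow.
    return np.vstack([np.ones((1, X.shape[1])), X])

def sigmoid(Z):
    # Element-wise logistic function.
    return 1.0 / (1.0 + np.exp(-Z))

s1, s2, s3, n = 4, 5, 3, 10            # assumed layer sizes and sample count
rng = np.random.default_rng(0)
Theta1 = rng.standard_normal((s2, s1 + 1))   # R^s2,(s1+1)
Theta2 = rng.standard_normal((s3, s2 + 1))   # R^s3,(s2+1)
X = rng.standard_normal((n, s1))             # R^n,s1

A1 = add_bias_row(X.T)                       # (s1+1, n)
A2 = add_bias_row(sigmoid(Theta1 @ A1))      # (s2+1, n)
A3 = sigmoid(Theta2 @ A2)                    # (s3, n)
idxs = A3.argmax(axis=0)                     # predicted class per column
```

% Each column of A3 holds one sample's output activations, so argmax along
% axis 0 corresponds to max(A3, [], 1) in the Octave code.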