import numpy as np

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradients for logistic regression.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    Write your code step by step for the propagation. np.log(), np.dot()
    """
    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    p = np.dot(w.T, X) + b
    A = 1 / (1 + np.exp(-p))                                           # compute activation (sigmoid)
    cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = (1 / m) * np.dot(X, (A - Y).T)
    db = (1 / m) * np.sum(A - Y)
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
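
# A minimal sketch of how propagate might be exercised. The arrays below are
# illustrative values made up for this example, not data from the paste:
# w has shape (2, 1), X has shape (2, 3), and Y holds one label per example.
w = np.array([[1.0], [2.0]])
b = 2.0
X = np.array([[1.0, 2.0, -1.0],
              [3.0, 4.0, -3.2]])
Y = np.array([[1, 0, 1]])

grads, cost = propagate(w, b, X, Y)
print(grads["dw"])   # gradient w.r.t. w, same shape as w, i.e. (2, 1)
print(grads["db"])   # scalar gradient w.r.t. the bias
print(cost)          # scalar negative log-likelihood cost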