import numpy as np


def propagate(w, b, X, Y):
    # Signature inferred from the Arguments/Return sections of the docstring.
    """
    Implement the cost function and its gradient for logistic regression.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]  # number of examples

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    p = np.dot(w.T, X) + b                      # linear combination w.T x + b, shape (1, m)
    A = 1 / (1 + np.exp(-p))                    # compute activation (sigmoid)
    cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = (1 / m) * np.dot(X, (A - Y).T)
    db = (1 / m) * np.sum(A - Y)
    ### END CODE HERE ###
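
    # Why these formulas: for the cross-entropy cost above, the derivative of the
    # per-example loss with respect to the linear term is a - y. Stacked over the
    # m columns this is A - Y (shape (1, m)), so averaging over examples gives
    # dw = (1/m) * X @ (A - Y).T and db = (1/m) * sum(A - Y), as computed above.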

    assert dw.shape == w.shape
    assert db.dtype == float
    cost = np.squeeze(cost)  # make sure cost is a plain scalar, shape ()
    assert cost.shape == ()

    grads = {"dw": dw,
             "db": db}

    return grads, cost
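

# A minimal usage sketch: the toy arrays below are made-up assumptions, chosen
# only to exercise the shapes described in the docstring (w is (2, 1), X holds
# 3 examples of 2 features, Y is (1, 3)).
if __name__ == "__main__":
    w = np.array([[1.0], [2.0]])
    b = 2.0
    X = np.array([[1.0, 2.0, -1.0],
                  [3.0, 4.0, -3.2]])
    Y = np.array([[1, 0, 1]])

    grads, cost = propagate(w, b, X, Y)
    print("dw =", grads["dw"])    # gradient with the same shape as w: (2, 1)
    print("db =", grads["db"])    # scalar gradient for the bias
    print("cost =", cost)         # negative log-likelihood cost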