import numpy as np
import celerite


def log_probability(theta, x, y, dy):
    """
    Log-probability function.

    theta : free parameters describing the kernel.
    x : sampling locations
    y : observable
    dy : uncertainty on the observable

    Should return the log-likelihood.
    """

    # Priors.
    ln_priors = log_priors(theta)
    if not np.isfinite(ln_priors):
        return -np.inf

    # Make a GP and sample it to numerically calculate the gradient.
    kernel = celerite.terms.Matern32Term(log_sigma=theta[0], log_rho=theta[1])
    kernel += celerite.terms.JitterTerm(log_sigma=theta[2])
    GP = celerite.GP(kernel)
    GP.compute(x, dy)
    P = GP.predict(y, x, return_cov=False)  # This step is the one I'm not sure of.

    # Calculate the model.
    model = np.gradient(P, x) + 5. * x**3.0

    # Calculate the log-likelihood and return.
    ln_likelihood = log_likelihood(x, y, dy, model)
    return ln_likelihood
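
The paste calls two helpers, log_priors and log_likelihood, that are not included. Below is a minimal sketch of how they might look, together with an emcee driver, assuming flat box priors and an independent Gaussian likelihood; the prior bounds, the synthetic data, and the sampler settings are illustrative assumptions, not part of the original.

import numpy as np
import emcee

def log_priors(theta):
    # Hypothetical flat priors: zero inside a broad box, -inf outside (assumed bounds).
    if all(-10.0 < t < 10.0 for t in theta):
        return 0.0
    return -np.inf

def log_likelihood(x, y, dy, model):
    # Gaussian log-likelihood with independent, known uncertainties.
    return -0.5 * np.sum(((y - model) / dy)**2 + np.log(2.0 * np.pi * dy**2))

# Synthetic data (assumed) so the example runs end to end.
rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0.0, 10.0, 100))
dy = np.full(100, 10.0)
y = 5. * x**3.0 + rng.normal(0.0, dy)

# Sample the posterior with emcee (three parameters: log_sigma, log_rho, jitter log_sigma).
ndim, nwalkers = 3, 32
p0 = rng.normal(0.0, 0.1, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(x, y, dy))
sampler.run_mcmc(p0, 1000)

Note that with flat priors, returning ln_likelihood from log_probability is equivalent to the full log-posterior whenever the priors are finite, since the constant prior term only shifts the log-probability.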