Untitled
a guest
Aug 17th, 2019
1. A latent variable is a hidden variable, unobserved in both the training and testing phases.
2. Why a probabilistic model?
   1. Quantifies the uncertainty of the prediction.
   2. Handles missing values.
3. Introducing a latent variable may simplify the model (fewer edges in the graph).
   1. Fewer parameters.
   2. Sometimes meaningful.
   3. Can be harder to work with.
4. Probabilistic clustering
   1. Hard clustering: a deterministic assignment, cluster idx = f(x).
   2. Soft clustering: a probability p(cluster idx | x) instead of a single cluster idx.
   3. Can be used in hyperparameter tuning, e.g. to determine the number of clusters.
   4. Gives a generative model of the data.
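The soft assignment p(cluster idx | x) from item 4.2 can be computed with Bayes' rule once the per-cluster densities and weights are known. A minimal 1-D sketch; the two-component parameters below are illustrative, not from the notes:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Illustrative 1-D mixture: two clusters with prior weights p(t).
weights = np.array([0.4, 0.6])   # p(t=k); must sum to 1
means   = np.array([-2.0, 3.0])
sigmas  = np.array([1.0, 1.5])

x = 0.5                                             # a single observation
joint = weights * gaussian_pdf(x, means, sigmas)    # p(t=k) * p(x | t=k)
posterior = joint / joint.sum()                     # p(t=k | x) by Bayes' rule

print(posterior)           # soft clustering: a full distribution over clusters
print(posterior.argmax())  # hard clustering would collapse it to the argmax
```

Soft assignments like this are exactly what the E-step of EM computes for every data point.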
5. Gaussian Mixture Model (GMM)
   1. A weighted sum of multiple Gaussian distributions.
   2. Training a GMM
      1. Maximum likelihood estimation (MLE).
      2. Hard to fit with a stochastic optimizer:
         1. hard to enforce the constraints (mixture weights sum to 1, variances are positive);
         2. the Expectation-Maximization (EM) algorithm is much faster and more efficient.
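The MLE objective for a GMM is the log-likelihood of the data under the weighted sum of Gaussians. A minimal 1-D sketch with made-up parameters (the comments mark the constraints that a plain stochastic optimizer would have to enforce by hand):

```python
import numpy as np

def gmm_log_likelihood(data, weights, means, sigmas):
    """Log-likelihood of 1-D data under the mixture
    p(x) = sum_k weights[k] * N(x | means[k], sigmas[k]^2)."""
    x = np.asarray(data, dtype=float)[:, None]            # shape (n, 1)
    comp = (np.exp(-0.5 * ((x - means) / sigmas) ** 2)
            / (sigmas * np.sqrt(2 * np.pi)))              # shape (n, K)
    return np.log(comp @ weights).sum()

# Constraints on theta that make plain SGD awkward:
#   weights >= 0 and weights.sum() == 1;  sigmas > 0.
weights = np.array([0.3, 0.7])
means   = np.array([0.0, 4.0])
sigmas  = np.array([1.0, 2.0])
data    = [0.1, 3.5, 4.2, -0.3]

ll = gmm_log_likelihood(data, weights, means, sigmas)
print(ll)
```

EM sidesteps the constraint problem: its closed-form M-step updates produce valid weights and variances automatically.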
6. Training a GMM with EM
   1. Latent variable t, with p(t) = mixture weight.
   2. p(x | t) = Gaussian(x).
   3. EM algorithm:
      1. start with randomly initialized Gaussian parameters theta (e.g. 2 randomly placed Gaussians);
      2. until convergence, repeat the updates of the Gaussian parameters.
   4. Finding the global optimum is NP-hard.
   5. EM is a heuristic: it won't find the global optimum in general and suffers from local optima.
   6. Choose the best run among several training attempts with different random initializations:
   7. pick the one with the highest training log-likelihood, or the highest validation log-likelihood.
   8. EM can train a GMM faster than SGD and also handles the complicated constraints, but it suffers from local maxima.
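The training loop above (random initialization, alternate E/M steps until convergence, keep the restart with the highest log-likelihood) can be sketched as follows. This is a minimal 1-D sketch; `n_components`, `n_restarts`, the tolerance, and the toy data are illustrative choices, not part of the notes:

```python
import numpy as np

def fit_gmm_em(data, n_components=2, n_iters=100, tol=1e-6, seed=None):
    """One EM run for a 1-D Gaussian mixture; returns (params, log-likelihood)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float)[:, None]                 # shape (n, 1)
    # Random initialization of theta = (weights, means, sigmas).
    weights = np.full(n_components, 1.0 / n_components)
    means = rng.choice(x.ravel(), n_components, replace=False)
    sigmas = np.full(n_components, x.std() + 1e-6)
    prev_ll = -np.inf
    ll = prev_ll
    for _ in range(n_iters):
        # E-step: responsibilities r[i, k] = p(t=k | x_i).
        comp = (np.exp(-0.5 * ((x - means) / sigmas) ** 2)
                / (sigmas * np.sqrt(2 * np.pi))) * weights     # shape (n, K)
        ll = np.log(comp.sum(axis=1)).sum()
        r = comp / comp.sum(axis=1, keepdims=True)
        # M-step: closed-form updates that satisfy the constraints automatically.
        nk = r.sum(axis=0)
        weights = nk / len(x)
        means = (r * x).sum(axis=0) / nk
        sigmas = np.sqrt((r * (x - means) ** 2).sum(axis=0) / nk) + 1e-9
        if ll - prev_ll < tol:                                 # converged
            break
        prev_ll = ll
    return (weights, means, sigmas), ll

def best_of_restarts(data, n_restarts=5, seed=0):
    """EM is a local method: keep the run with the highest log-likelihood."""
    runs = [fit_gmm_em(data, seed=seed + i) for i in range(n_restarts)]
    return max(runs, key=lambda run: run[1])

# Toy data drawn from two well-separated clusters.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 300)])
params, ll = best_of_restarts(data)
```

With well-separated clusters the best run should recover means near -3 and 3 and weights near 0.4 and 0.6; on harder data, more restarts (or smarter initialization such as k-means) are needed precisely because of the local-optima issue the notes describe.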