# Machine Learning Quick Notes

`One plus 2 is 4 minus 1 that's 3 quick maths`

# Conjugate Distributions:

### Beta Distribution:
A family of continuous distributions parameterised by two parameters, `alpha` and `beta`. These appear as exponents of the random variable and are the main factors determining the shape of the distribution.

The Beta distribution acts as the **conjugate prior** for:
1. Bernoulli Distribution
2. Binomial Distribution
3. Geometric Distribution
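As a quick sketch of what this conjugacy buys you, assuming a `Beta(a, b)` prior over a coin's bias and binomially distributed observations (the function name and toy numbers here are illustrative), the posterior is just another Beta with updated counts:

```python
def beta_binomial_update(a, b, successes, failures):
    """Beta(a, b) prior + binomial likelihood -> Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

# Toy usage: uniform Beta(1, 1) prior, observe 7 heads and 3 tails.
print(beta_binomial_update(1, 1, 7, 3))  # (8, 4)
```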

### Dirichlet Distributions:
Multivariate versions of the Beta distribution, parameterised by a vector `alpha`.

Dirichlet distributions act as the **conjugate prior** for:
1. Categorical Distributions
2. Multinomial Distributions
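The same style of update applies here, assuming a `Dirichlet(alpha)` prior over category probabilities and observed category counts (an illustrative sketch):

```python
import numpy as np

def dirichlet_multinomial_update(alpha, counts):
    """Dirichlet(alpha) prior + multinomial counts -> Dirichlet(alpha + counts) posterior."""
    return np.asarray(alpha) + np.asarray(counts)

# Toy usage: symmetric Dirichlet(1, 1, 1) prior, observe counts [5, 2, 3] over three categories.
print(dirichlet_multinomial_update([1, 1, 1], [5, 2, 3]))  # [6 3 4]
```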

# Normal / Gaussian Distributions:
1. Gaussians are convenient largely because of the central limit theorem: sums of many independent random variables tend towards a Gaussian.
2. Gaussians are **self-conjugate**.
3. The conjugate prior for the `mean` is a `Gaussian`.
4. The conjugate prior for the `variance` is the `Inverse-Gamma`; for the covariance matrix of a multivariate Gaussian it is the `Inverse-Wishart`, a distribution defined over positive-definite matrices.
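A minimal sketch of self-conjugacy for the mean, assuming the noise variance is known (the standard Normal-Normal closed-form update; variable names and toy numbers are illustrative):

```python
import numpy as np

def gaussian_mean_update(prior_mean, prior_var, data, noise_var):
    """Gaussian prior on the mean + Gaussian likelihood -> Gaussian posterior on the mean."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)                        # precisions add
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

# Toy usage: N(0, 10) prior over the mean, 20 observations with noise variance 1.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=20)
print(gaussian_mean_update(0.0, 10.0, data, 1.0))  # posterior mean close to 3
```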

# Exponential Family:
The exponential family defines a _natural_ parametrisation of distributions that we can work with.
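For reference, the standard general form (where `eta` are the natural parameters, `T(x)` the sufficient statistics, `h(x)` the base measure, and `A(eta)` the log-normaliser) is:
> p(x | eta) = h(x) exp( eta^T T(x) - A(eta) )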

# Linear Regression

So, linear regression is a method of finding the relation between a set of data points. Suppose you have the model `Y = wX + e`, where `e` is noise. Linear regression is used to:
* Find the values of `w` such that the model matches the data well
* Predict values of `Y` based on the `w` values found above

The steps for doing this involve Bayes' theorem:
1. Create the likelihood function `p(Y|X,w)`.
2. Place a prior belief over the `w` values -> `p(w)`.
3. Obtain a posterior belief over the values of `w`, giving a closer approximation that matches the data -> `p(w|Y,X)`.
> p(w|Y,X) is proportional to p(Y|X,w) p(w)

Use conjugate priors to ensure that the posterior distribution has a closed form in the same family as the prior.
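A minimal sketch of these steps, assuming a zero-mean isotropic Gaussian prior `p(w) = N(w | 0, alpha^-1 I)` and Gaussian noise with precision `beta` (the standard conjugate closed-form posterior; the function name and toy numbers are illustrative):

```python
import numpy as np

def bayesian_linear_regression(X, y, alpha=1.0, beta=25.0):
    """Closed-form posterior p(w | Y, X) for Y = Xw + e with Gaussian noise.

    alpha: precision of the zero-mean isotropic Gaussian prior over w.
    beta:  precision (1 / variance) of the observation noise e.
    """
    n_features = X.shape[1]
    # Posterior covariance: S_N = (alpha * I + beta * X^T X)^-1
    S_N = np.linalg.inv(alpha * np.eye(n_features) + beta * X.T @ X)
    # Posterior mean: m_N = beta * S_N X^T y
    m_N = beta * S_N @ X.T @ y
    return m_N, S_N

# Toy usage: recover w close to 2 from noisy 1-D data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.2, size=50)
m_N, S_N = bayesian_linear_regression(X, y)
print(m_N)  # posterior mean over w, close to [2.0]
```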

This approach, however, isn't the best for predicting values, especially as the dimensionality of `w`, `X` and `Y` increases.

## Dual Linear Regression
Dual linear regression rewrites the model in terms of a dual representation of the data, which allows for better predictions in higher dimensions.

This is done using Kernel Regression / Kernel Methods.

## Kernel Regression
Basically the same as Dual Linear Regression / Linear Regression, but it includes the **mapping of the data from N dimensions to an easier representation in M dimensions using a feature function `phi(.)`**.

The steps for kernel regression / dual linear regression are:
1. Formulate the posterior.
2. Find the stationary point of the posterior.
3. Rewrite the parameters in terms of the data (the stationary point lets you express `w` as a weighted combination of the training points).
4. Do kernel regression using the resulting formula.

The general kernel regression formula being used is:
> y(x*) = k(x*, X) (K + LI)^-1 t

Here, `x*` is the new point being predicted, `k(x*, X)` is the vector of kernel values relating `x*` to every training point, and `K` is the kernel (Gram) matrix over _all_ pairs of points in the data. `L` acts as the noise parameter and `t` is the vector of target values.

This is the general formula; a small sketch of it in code follows below.
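Assuming an RBF kernel (the kernel choice, function names and the `noise` value are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """K[i, j] = exp(-||A_i - B_j||^2 / (2 * lengthscale^2))."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * lengthscale**2))

def kernel_regression_predict(X_train, t, X_new, noise=0.1):
    """y(x*) = k(x*, X) (K + L I)^-1 t, with L = noise."""
    K = rbf_kernel(X_train, X_train)        # Gram matrix over all training points
    k_star = rbf_kernel(X_new, X_train)     # kernel values between new and training points
    return k_star @ np.linalg.solve(K + noise * np.eye(len(X_train)), t)

# Toy usage: fit a noisy sine curve and predict at a few new inputs.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(30, 1))
t = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=30)
X_new = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)
print(kernel_regression_predict(X, t, X_new))  # roughly sin at the new inputs
```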

# Blurb on Unsupervised Learning
So basically, when you have a function `y = f(x) + noise`, with normal linear regression you learn the parameters of the function `f` from the pairs of values `(x, y)`. This is called supervised learning.

Now for unsupervised learning, you're supposed to infer the parameters of `f` and the latent variables `x` just from looking at the observed values `y`.

Normally this would involve marginalising out both `f` and `x`, thereby obtaining a likelihood for the remaining parameters. However, marginalising out both `f` and `x` is computationally intractable.

So instead we use the method of `maximum likelihood type 2`, which is essentially a compromise for the fact that you can't marginalise out both of these things.

What we do is marginalise out the values with higher dimensionality, which in our case are the latent variables `x`. After this we obtain a formula for `p(y | mu, W, variance)`.

The next step for maximum likelihood type 2 is to maximise this function with respect to its parameters.

This is done to find a `point estimate` of the parameters of the function, giving you the best values for the parameters given your data `y`.
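A rough sketch of the marginalised likelihood being maximised, assuming the linear-Gaussian case `y = W x + mu + noise` with `x ~ N(0, I)` (the probabilistic-PCA form of this idea; the function name is illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal_likelihood(Y, W, mu, noise_var):
    """log p(Y | mu, W, noise_var) with the latent x integrated out.

    For y = W x + mu + e, with x ~ N(0, I) and e ~ N(0, noise_var * I),
    the marginal is y ~ N(mu, W W^T + noise_var * I).
    """
    D = Y.shape[1]
    cov = W @ W.T + noise_var * np.eye(D)
    return np.sum(multivariate_normal.logpdf(Y, mean=mu, cov=cov))
```

Maximum likelihood type 2 then maximises this quantity with respect to `W`, `mu` and the noise variance (e.g. with a gradient-based optimiser), which gives the point estimates described above.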

In simple words:
> You've been given only the `y` values.
>
> You know there are some values `x` that produced them after some function `f` was applied.
>
> You take all the possible values of `x` and push them through the formula (marginalise them out), obtaining a relation between `f`'s parameters and the `y` values.
>
> Now you've got a probability function that relates these `y` values to `f`'s parameter values.
>
> To find a final solution, take the maximum likelihood of this probability function, which gives the point estimate of the parameters of `f`.
>
> This means that you now have the best estimate of these parameters given the values of `y` and the assumed distribution over `x`.

# Random Points
* The **assumed prior for linear regression** is a **Zero-Mean Isotropic Gaussian**, i.e. a Normal distribution over `w` with mean `0` and covariance `alpha^-1 I`, where `alpha` is a single precision parameter (written out below this list).
* **Parameter Distribution** is the process of placing Gaussian priors over some parameters and inferring their values from the data. This is different from individually tuning parameters to fit the given data when building a model, which would cause _overfitting_ on the data.
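Written out, the zero-mean isotropic Gaussian prior mentioned above is:
> p(w) = N(w | 0, alpha^-1 I)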