Natural Language Processing:

State-of-the-art (SOTA) results: cutting-edge results.

Visual question answering (QA), i.e. answering questions about an image such as:
What is in the image?
Are there any humans?
What sport is being played?
Who has the ball?
How many players are in the image?
Who are the teams?
Is it raining?

The word2vec model:
1. First we convert words to a one-hot vector representation (OneHotVector - localist):
[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0] -> This can be very long, depending on the vocabulary list we choose to work with.
It doesn't capture any relationships between the words.
2. Distributional-similarity-based representation:
It creates a DENSE vector in such a way that similar words get similar vectors, i.e. each word's vector representation should be able to predict the probability of the words that appear around it in the sentence (see the sketch after this list).
[0.286, 0.154, -0.678, 0.986, ...]
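
A minimal sketch contrasting the two representations. The toy vocabulary, the embedding dimension, and the random dense values are illustrative assumptions, not from the notes; word2vec learns the dense values from data.

```python
import numpy as np

# Toy vocabulary and embedding size -- illustrative assumptions only.
vocab = ["this", "table", "belongs", "to", "me"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Localist one-hot vector: all zeros except a 1 at the word's index."""
    vec = np.zeros(len(vocab))
    vec[word_to_idx[word]] = 1.0
    return vec

print(one_hot("table"))  # [0. 1. 0. 0. 0.] -- length grows with the vocabulary

# Dense distributional representation: a short real-valued vector per word.
# The values here are random placeholders; word2vec would learn them.
embedding_dim = 4
rng = np.random.default_rng(0)
dense = rng.normal(size=(len(vocab), embedding_dim))
print(dense[word_to_idx["table"]])  # e.g. a short vector like [0.13, -0.13, 0.64, 0.10]
```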
The word2vec model is based on two algorithms:
Skip-gram algorithm: Take the sentence "This table belongs to me" and pick "table" as the center word. The algorithm then predicts the probability of each word to the left and right of that center word, e.g. P("me" | "table").
So it considers only one context word at a time when computing a probability (see the pair-extraction sketch below).
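
A minimal sketch of how skip-gram extracts (center, context) training pairs from the example sentence; the helper name and the window radius m=2 are assumptions for illustration.

```python
def skipgram_pairs(tokens, m=2):
    """Yield (center, context) pairs for every position in the sentence."""
    for t, center in enumerate(tokens):
        for j in range(-m, m + 1):
            if j != 0 and 0 <= t + j < len(tokens):
                yield center, tokens[t + j]

sentence = "this table belongs to me".split()
for center, context in skipgram_pairs(sentence, m=2):
    print(f"P({context!r} | {center!r})")
# For center "table" this prints P('this' | 'table'), P('belongs' | 'table'),
# P('to' | 'table'), ... -- one context word at a time.
```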
Objective Function:
J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{-m \le j \le m,\ j \ne 0} \log P(w_{t+j} \mid w_t)
where T is the length of the sentence (the number of word positions) and m is the radius, i.e. how many words to the left and right of the center word we consider.
Each word has two vector representations: one as a center word and one as a context word.
Note: position has no significance in this model; we are concerned only with the identity of the word. A minimal sketch of the resulting softmax probability follows.
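
A minimal sketch of the probability P(w_{t+j} | w_t) as a softmax, assuming the two-vector setup above: V holds each word's center-word vector v_w and U its context-word vector u_w. The vocabulary and random values are placeholders.

```python
import numpy as np

vocab = ["this", "table", "belongs", "to", "me"]
idx = {w: i for i, w in enumerate(vocab)}
d = 4
rng = np.random.default_rng(42)
V = rng.normal(size=(len(vocab), d))  # v_w: vectors used as center words
U = rng.normal(size=(len(vocab), d))  # u_w: vectors used as context words

def p_context_given_center(context, center):
    """P(o | c) = exp(u_o . v_c) / sum over all words w of exp(u_w . v_c)."""
    scores = U @ V[idx[center]]   # dot product with every context vector
    scores -= scores.max()        # subtract the max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[idx[context]]

print(p_context_given_center("me", "table"))  # P("me" | "table")
```

In practice, word2vec avoids computing this full softmax over the vocabulary (e.g. via negative sampling), since the normalizing sum is expensive for large vocabularies.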
How this neural network works: