- Natural Language Processing:
- State-of-the-art (SOTA) results: cutting-edge results
- Visual question answering (QA), meaning questions such as:
- What is in the image?
- Are there any humans?
- What sport is being played?
- Who has the ball?
- How many players are in the image?
- Who are the teams?
- Is it raining?
- Model: word2vec
- 1. First we convert words to a vector representation. (One-hot vector - localist)
- [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0] -> This could be very long depending on the vocabulary we choose to work with.
- It doesn't capture any relationship between the words at all.
- 2. Distributional-similarity-based representation:
- It creates a DENSE vector in such a way that there is similarity between the words, i.e. each word's vector representation can be used to predict the probability of the other words that appear around it in a sentence.
- [0.286, 0.154, -0.678, 0.986, ...]
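The one-hot idea above can be sketched in a few lines. This is a minimal illustration with a made-up toy vocabulary (the real vocabulary would be far larger):

```python
# One-hot (localist) word vectors over a toy vocabulary.
vocab = ["this", "table", "belongs", "to", "me"]

def one_hot(word):
    # Vector length equals vocabulary size; a single 1 marks the word's index.
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("table"))  # [0, 1, 0, 0, 0]
# The dot product of any two distinct one-hot vectors is 0,
# so this encoding carries no notion of word similarity.
```

With a real vocabulary of, say, 100,000 words, each vector would have 100,000 entries with a single 1, which is exactly the length problem noted above.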
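The advantage of dense vectors can be sketched with cosine similarity. The vector values below are made up for illustration; real word2vec vectors are learned from data:

```python
import math

# Hypothetical dense vectors (made-up numbers for illustration only).
vectors = {
    "table": [0.286, 0.154, -0.678, 0.986],
    "chair": [0.301, 0.142, -0.650, 0.910],
    "rain":  [-0.512, 0.877, 0.102, -0.334],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Related words get a high similarity score; unrelated words a low one.
print(cosine(vectors["table"], vectors["chair"]))
print(cosine(vectors["table"], vectors["rain"]))
```

Unlike one-hot vectors, where every pair of distinct words has similarity zero, dense vectors let similar words end up close together.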
- The word2vec model is based on two algorithms:
- Skip-gram algorithm: "This table belongs to me" -> In this context, let's take "table" as the center word; the algorithm then
- predicts the probability of the words to the left and right of that word, e.g. P("me" | "table").
- So it only considers one context word at a time when computing a probability.
- Objective Function:
- J(theta) = -(1/T) * (sum over t from 1 to T) (sum over j from -m to m, j != 0) log P(w(t+j) | w(t))
- where "T" is the length of the sentence and "m" is the radius, i.e. how many words to the left and right of the center word we consider.
- Each word has two vector representations: one as a center word and one as a context word.
- Note: Position doesn't have any significance in this model. We are only concerned with the identity of each word.
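The objective above can be sketched directly. This is a toy illustration with random (untrained) vectors just to show how the two vector sets and the loss fit together; names like `skipgram_loss` are mine, not from the notes:

```python
import math
import random

# Toy sketch of the skip-gram objective with random, untrained vectors.
# Each word gets TWO vectors: v (as center word) and u (as context word).
vocab = ["this", "table", "belongs", "to", "me"]
dim = 4
random.seed(0)
v = {w: [random.gauss(0, 0.1) for _ in range(dim)] for w in vocab}  # center vectors
u = {w: [random.gauss(0, 0.1) for _ in range(dim)] for w in vocab}  # context vectors

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def prob(context, center):
    # Softmax over the whole vocabulary: P(context | center).
    scores = {w: math.exp(dot(u[w], v[center])) for w in vocab}
    return scores[context] / sum(scores.values())

def skipgram_loss(sentence, m):
    # J(theta) = -(1/T) * sum_t sum_{-m<=j<=m, j!=0} log P(w(t+j) | w(t))
    T = len(sentence)
    total = 0.0
    for t, center in enumerate(sentence):
        for j in range(-m, m + 1):
            if j != 0 and 0 <= t + j < T:
                total += math.log(prob(sentence[t + j], center))
    return -total / T

print(skipgram_loss(["this", "table", "belongs", "to", "me"], m=2))
```

Training would adjust the u and v vectors to make this loss small; here they are random, so the loss is simply whatever the untrained probabilities give.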
- How this neural network works: