a guest
Aug 21st, 2019
from tensorflow import keras

max_words = 1000  # cap the vocabulary at the 1000 most frequent words
tokenize = keras.preprocessing.text.Tokenizer(num_words=max_words, char_level=False)
tokenize.fit_on_texts(train_text)  # fit tokenizer to our training text data
x_train = tokenize.texts_to_matrix(train_text)  # shape: (num_docs, max_words)
x_test = tokenize.texts_to_matrix(test_text)

# x_train is a binary document-term matrix (1.0 where a word occurs in a document):
# array([[0., 1., 1., ..., 0., 0., 0.],
#        [0., 1., 1., ..., 0., 0., 0.],
#        [0., 1., 1., ..., 0., 0., 0.],
#        ...,
#        [0., 1., 1., ..., 0., 0., 0.],
#        [0., 1., 1., ..., 0., 0., 0.],
#        [0., 1., 1., ..., 0., 0., 0.]])
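To make the matrix above concrete without needing TensorFlow installed, here is a minimal pure-Python sketch of what `texts_to_matrix` does in its default "binary" mode: words are ranked by corpus frequency (index 0 is reserved, as in Keras), and each row marks which of the top `num_words` indices appear in a document. The function name and sample texts are illustrative, not part of the Keras API.

```python
from collections import Counter

def texts_to_binary_matrix(texts, num_words):
    # Rank words by frequency across the corpus; index 0 stays reserved.
    counts = Counter(w for t in texts for w in t.lower().split())
    index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}
    matrix = []
    for t in texts:
        row = [0.0] * num_words
        for w in t.lower().split():
            i = index[w]
            if i < num_words:
                row[i] = 1.0  # mark presence only; counts are ignored in binary mode
        matrix.append(row)
    return matrix

texts = ["the cat sat", "the dog"]
m = texts_to_binary_matrix(texts, num_words=6)
# m[0] -> [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]  ("the", "cat", "sat" present)
# m[1] -> [0.0, 1.0, 0.0, 0.0, 1.0, 0.0]  ("the", "dog" present)
```

Note that, as in Keras, words ranked beyond `num_words` are simply dropped, which is why column 0 and the tail columns of the real `x_train` above are all zeros.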