import tensorflow as tf

# Copy the tokenizer's word-to-index mapping and reserve index 0 for padding.
word_index = dict(tokenizer.word_index)
word_index["<PAD>"] = 0
vocab_size = len(word_index)

# Pad every tokenized sentence (post-padding) to the length of the longest one.
maxLen = length_longest_sentence
data = tf.keras.preprocessing.sequence.pad_sequences(sent_numeric,
                                                     value=word_index["<PAD>"],
                                                     padding='post',
                                                     maxlen=maxLen)
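For context, the snippet above assumes a fitted `tokenizer`, a list of integer-encoded sentences `sent_numeric`, and a precomputed `length_longest_sentence`. A minimal end-to-end sketch with a hypothetical toy corpus might look like this:

```python
import tensorflow as tf

# Hypothetical toy corpus to illustrate the padding step end to end.
sentences = ["the cat sat", "the cat sat on the mat"]

tokenizer = tf.keras.preprocessing.text.Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)

word_index = dict(tokenizer.word_index)
word_index["<PAD>"] = 0

sent_numeric = tokenizer.texts_to_sequences(sentences)
length_longest_sentence = max(len(s) for s in sent_numeric)

data = tf.keras.preprocessing.sequence.pad_sequences(
    sent_numeric,
    value=word_index["<PAD>"],
    padding='post',
    maxlen=length_longest_sentence)

print(data.shape)  # (2, 6): two sentences, padded to six tokens
```

With `padding='post'`, the shorter sentence is extended with trailing `0` values (the `<PAD>` index) rather than leading ones, which keeps the real tokens at the start of each row.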