# Build the LSTM model
# Assumes nWords, sequenceLength, kTokenizer, and labelToName are defined earlier
# (vocabulary size, padded sequence length, fitted Tokenizer, label-index mapping).
import keras

def getModel():
    units1, units2 = int(nWords / 4), int(nWords / 8)
    model = keras.models.Sequential()
    # Trainable embedding: maps each token index to a units1-dimensional vector
    model.add(keras.layers.Embedding(input_dim=len(kTokenizer.word_index) + 1, output_dim=units1,
                                     input_length=sequenceLength, trainable=True))
    # LSTM layer; only the final hidden state is passed on
    model.add(keras.layers.LSTM(units=units2, return_sequences=False))
    # Softmax output over the label classes
    model.add(keras.layers.Dense(len(labelToName), activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
    return model
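
# Minimal usage sketch: the toy texts, labels, and preprocessing below are
# illustrative assumptions standing in for the real pipeline that defines
# nWords, kTokenizer, sequenceLength, and labelToName; they are not part of
# the original snippet.
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

sampleTexts = ['the plot was engaging and the acting superb',
               'a dull script with wooden performances throughout',
               'beautiful cinematography and a moving soundtrack',
               'tedious pacing ruined an otherwise promising story']
sampleLabels = [1, 0, 1, 0]
labelToName = {0: 'negative', 1: 'positive'}

kTokenizer = Tokenizer()
kTokenizer.fit_on_texts(sampleTexts)
nWords = len(kTokenizer.word_index) + 1
sequenceLength = 10

X = pad_sequences(kTokenizer.texts_to_sequences(sampleTexts), maxlen=sequenceLength)
Y = keras.utils.to_categorical(sampleLabels, num_classes=len(labelToName))

model = getModel()
model.summary()
model.fit(X, Y, epochs=2, batch_size=2)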