sreejith2904

LSTMs & LSTMs

Apr 4th, 2017
# Assumes word_indices, input_layer_matrix (pretrained embedding matrix),
# qn1 and qn2 (padded word-index sequences, length 40) and responses
# (binary labels) are defined earlier in the script.
from keras.models import Sequential
from keras.layers import (Embedding, LSTM, Merge, Dense, Activation,
                          Reshape, BatchNormalization)

# Encoder for the first question: frozen pretrained embeddings -> LSTM.
model_qn1 = Sequential()
model_qn1.add(Embedding(len(word_indices) + 1,
                        300,
                        weights=[input_layer_matrix],
                        input_length=40,
                        trainable=False))
model_qn1.add(LSTM(300, dropout_W=0.2, dropout_U=0.2))

# Identical encoder for the second question.
model_qn2 = Sequential()
model_qn2.add(Embedding(len(word_indices) + 1,
                        300,
                        weights=[input_layer_matrix],
                        input_length=40,
                        trainable=False))
model_qn2.add(LSTM(300, dropout_W=0.2, dropout_U=0.2))

# Concatenate the two 300-d question encodings into one 600-d vector.
mixed_model = Sequential()
mixed_model.add(Merge([model_qn1, model_qn2], mode='concat'))
print(mixed_model.layers[-1].output_shape)  # (None, 600)

mixed_model.add(BatchNormalization())
print(mixed_model.layers[-1].output_shape)  # (None, 600)

# Reshape to a length-1 sequence so a second LSTM can consume it.
mixed_model.add(Reshape((1, 600)))
print(mixed_model.layers[-1].output_shape)  # (None, 1, 600)

mixed_model.add(LSTM(600, dropout_W=0.2, dropout_U=0.2))

# Single sigmoid unit for the binary (duplicate / not-duplicate) output.
mixed_model.add(Dense(1))
mixed_model.add(Activation('sigmoid'))

mixed_model.compile(loss='binary_crossentropy', optimizer='adam',
                    metrics=['accuracy'])
mixed_model.fit([qn1, qn2], y=responses, batch_size=300, nb_epoch=1,
                verbose=1, validation_split=0.1, shuffle=True)
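The script above targets the Keras 1.x API: `Merge`, `dropout_W`/`dropout_U`, and `nb_epoch` were all removed in Keras 2. A minimal sketch of the same two-branch architecture in the modern `tf.keras` functional API, assuming hypothetical stand-ins for `word_indices` and `input_layer_matrix` (this is a rough equivalent, not the author's code):

```python
import numpy as np
from tensorflow.keras import layers, Model

# Hypothetical stand-ins for the paste's word_indices / input_layer_matrix.
vocab_size, embed_dim, seq_len = 1000, 300, 40
embedding_matrix = np.random.rand(vocab_size + 1, embed_dim).astype("float32")

def question_branch(name):
    """Frozen-embedding + LSTM encoder, one per question input."""
    inp = layers.Input(shape=(seq_len,), name=name)
    x = layers.Embedding(vocab_size + 1, embed_dim,
                         weights=[embedding_matrix],
                         trainable=False)(inp)
    # dropout / recurrent_dropout replace Keras 1's dropout_W / dropout_U.
    x = layers.LSTM(embed_dim, dropout=0.2, recurrent_dropout=0.2)(x)
    return inp, x

in1, enc1 = question_branch("qn1")
in2, enc2 = question_branch("qn2")

# Concatenate replaces the removed Merge(mode='concat') layer.
merged = layers.Concatenate()([enc1, enc2])
merged = layers.BatchNormalization()(merged)
merged = layers.Reshape((1, 2 * embed_dim))(merged)
merged = layers.LSTM(2 * embed_dim, dropout=0.2, recurrent_dropout=0.2)(merged)
out = layers.Dense(1, activation="sigmoid")(merged)

model = Model([in1, in2], out)
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])
```

Training then uses `model.fit([qn1, qn2], responses, epochs=1, ...)`, with `epochs` replacing the old `nb_epoch` argument.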