from keras.layers import Input, Embedding, LSTM, RepeatVector, TimeDistributed, Dense
from keras.models import Model

vocab_size = 1000
src_txt_length = 500
sum_txt_length = 100
batch_size = 64  # not defined in the original paste; assumed

inputs = Input(shape=(src_txt_length,))
# Encoder: embed the source tokens, then compress the sequence into one vector
encoder1 = Embedding(vocab_size, 128)(inputs)
encoder2 = LSTM(128)(encoder1)
# Repeat the encoding once per output timestep to seed the decoder
encoder3 = RepeatVector(sum_txt_length)(encoder2)
# Decoder: emit a distribution over the vocabulary at every timestep
decoder1 = LSTM(128, return_sequences=True)(encoder3)
# Was Dense(100) in the original paste: the output layer must match vocab_size
outputs = TimeDistributed(Dense(vocab_size, activation='softmax'))(decoder1)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
hist = model.fit(x_train, y_train, verbose=1, validation_data=(x_test, y_test), batch_size=batch_size, epochs=5)
ValueError: Error when checking target: expected time_distributed_27 to have 3 dimensions, but got array with shape (28500, 100)
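The error means the targets, not the model, are the problem: with `TimeDistributed(Dense(..., activation='softmax'))` and `categorical_crossentropy`, Keras expects `y_train` to be 3-D, `(samples, sum_txt_length, vocab_size)`, but it is being given a 2-D array of shape `(28500, 100)`, i.e. integer word indices per timestep. One fix is to one-hot encode the targets (what `keras.utils.to_categorical` does). A minimal numpy sketch, assuming `y_train` holds integer indices of shape `(samples, sum_txt_length)`:

```python
import numpy as np

def one_hot_targets(y, vocab_size):
    """Convert integer targets (samples, seq_len) into one-hot
    targets (samples, seq_len, vocab_size) for categorical_crossentropy."""
    out = np.zeros((y.shape[0], y.shape[1], vocab_size), dtype="float32")
    rows = np.arange(y.shape[0])[:, None]   # sample index, broadcast over timesteps
    cols = np.arange(y.shape[1])[None, :]   # timestep index, broadcast over samples
    out[rows, cols, y] = 1.0                # set the index of the true word to 1
    return out

# Tiny example: 2 summaries of length 3 over a 4-word vocabulary
y = np.array([[1, 0, 2], [3, 2, 1]])
y3d = one_hot_targets(y, vocab_size=4)
print(y3d.shape)  # (2, 3, 4)
```

Alternatively, keeping the integer targets and switching the loss to `sparse_categorical_crossentropy` avoids materialising the (often huge) one-hot array.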