- X_train.shape : (243, 100, 4) # Samples * Time steps * Features
- Y_train.shape : (243,) # either 0 or 1 for each sample
- X_validate.shape : (31, 100, 4) # Samples * Time steps * Features
- Y_validate.shape : (31,) # either 0 or 1 for each sample
- X_test.shape : (28, 100, 4) # Samples * Time steps * Features
- Y_test.shape : (28,) # either 0 or 1 for each sample
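As a quick sanity check, placeholder arrays with the shapes listed above can be built with NumPy (random values, purely illustrative — not the real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins matching the shapes described above
X_train = rng.normal(size=(243, 100, 4)).astype('float32')
Y_train = rng.integers(0, 2, size=(243,))
X_validate = rng.normal(size=(31, 100, 4)).astype('float32')
Y_validate = rng.integers(0, 2, size=(31,))

print(X_train.shape, Y_train.shape)  # (243, 100, 4) (243,)
```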
- 1. Train the model on batches with a random time length
- 2. Predict the class when a batch of arbitrary time length is given as input to the model
from keras.layers import Input, LSTM, TimeDistributed, Dense
from keras.models import Model
from keras.optimizers import Adam

input_ = Input(shape=(None, 4))  # None in the time dimension allows variable-length sequences
x = LSTM(16, return_sequences=True)(input_)
x = LSTM(8, return_sequences=True)(x)
output = TimeDistributed(Dense(2, activation='sigmoid'))(x)

# Model
model = Model(inputs=input_, outputs=output)
print(model.summary())

# Compile
model.compile(
    loss='binary_crossentropy',
    optimizer=Adam(lr=1e-4),
    metrics=['accuracy']
)
import random
import numpy as np
from keras.utils import to_categorical

def common_generator(X, Y):
    while True:
        # I want my model to be trained with a random time length between 50 and 100, in multiples of 5
        sequence_length = random.randrange(50, 105, 5)
        x_train = X[:, :sequence_length, :]
        # For convenience the labels are expanded from (243,) to (243, sequence_length, 2)
        y = to_categorical(Y)
        y_train = np.repeat(y[:, np.newaxis], sequence_length, axis=1)
        yield (x_train, y_train)
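The label-expansion step inside the generator can be verified with plain NumPy, using `np.eye(2)` in place of Keras' `to_categorical` (they produce the same one-hot encoding for 0/1 labels):

```python
import numpy as np

Y = np.array([0, 1, 1, 0])  # toy labels standing in for Y_train
sequence_length = 60

y = np.eye(2)[Y]  # one-hot encode, equivalent to to_categorical(Y)
y_train = np.repeat(y[:, np.newaxis], sequence_length, axis=1)

print(y_train.shape)  # (4, 60, 2): Samples * Time steps * Classes
# Every time step carries the same one-hot label as its sample
assert (y_train[2] == y[2]).all()
```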
trainGen = common_generator(X_train, Y_train)
ValGen = common_generator(X_validate, Y_validate)

H = model.fit_generator(trainGen, steps_per_epoch=25,
                        validation_data=ValGen, validation_steps=3, epochs=150)
Epoch 150/150
25/25 [==============================] - 5s 187ms/step - loss: 0.3706 - acc: 0.8574 - val_loss: 0.3254 - val_acc: 0.8733
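Since the model returns one probability pair per time step, a single class per sample still has to be derived at prediction time. One possible way (an assumption, not from the original post — `collapse_predictions` is a hypothetical helper) is to average the per-timestep probabilities over time and take the argmax:

```python
import numpy as np

def collapse_predictions(per_step_probs):
    """Average per-timestep class probabilities over the time axis and
    take the argmax, yielding one class label per sample.
    per_step_probs: array of shape (samples, time_steps, 2),
    e.g. the output of model.predict() on a variable-length batch."""
    return per_step_probs.mean(axis=1).argmax(axis=-1)

# Toy stand-in for model output on a 2-sample, 5-time-step batch
probs = np.array([
    [[0.9, 0.1]] * 5,  # class 0 favoured at every step
    [[0.2, 0.8]] * 5,  # class 1 favoured at every step
])
print(collapse_predictions(probs))  # [0 1]
```

This works for any `time_steps`, so the same helper applies whatever random sequence length was fed to the model.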