'''
An extremely hacky way of using Keras optimizers to optimize model inputs.

This can't be done through the normal Keras API (TensorFlow backend), since
plain Tensors do not have the `assign` attribute that optimizers normally use
to update variables. The gradient with respect to the tensor is still
computed, though, so the optimizer's update step can be extracted and applied
by hand.

model    : some Keras model
loss     : some loss defined on the model's graph
my_input : a numpy array used as input (i.e. model.predict(my_input))
num_iter : number of iterations
'''
import keras
from keras import backend as K


def get_input_updates(optimizer, loss, model):
    # Optimizers build their updates via K.update(p, new_p), which (on the
    # TF 1.x backend) falls back to p.assign(new_p) when p is not a variable.
    # Patch a no-op `assign` onto the input tensor so that call succeeds.
    def fake_assign(new_val, name=None):
        return new_val

    setattr(model.input, 'assign', fake_assign)
    updates = optimizer.get_updates(loss=[loss],
                                    params=[model.input])
    # Pull out the update targeting the input tensor; the rest (e.g. Adam's
    # moment estimates) are left to run as ordinary graph updates.
    for idx, update in enumerate(updates):
        if update.op.inputs[0].name == model.input.name:
            updated_param = updates.pop(idx)
            break
    # Note: this is the optimizer's step (new value minus old value),
    # not the raw gradient.
    grad = updated_param - model.input
    return grad, updates
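# --- Hypothetical setup (not in the original paste): a toy model, an
# activation-maximization loss, and a random starting input, just so the
# example below has concrete `model`, `loss`, `my_input`, and `num_iter`.
# Any Keras model and any loss defined on its graph should work the same way.
import numpy as np

model = keras.models.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    keras.layers.Dense(1),
])
loss = -K.mean(model.output)  # maximize the output by minimizing its negation
my_input = np.random.normal(size=(1, 8)).astype('float32')
num_iter = 100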
# Here is an example.
optimizer = keras.optimizers.Adam(lr=0.9)  # any Keras optimizer works
grad, optimizer_updates = get_input_updates(optimizer, loss, model)
opt_func = K.function([model.input],
                      [loss, grad],
                      updates=optimizer_updates,
                      name='input_optimizer')
for i in range(num_iter):
    this_loss, this_grad = opt_func([my_input])
    my_input += this_grad
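# Since `my_input` is fed in as data rather than stored as a graph variable,
# the optimizer's in-graph assignment can never reach it. Instead, the step
# returned by the graph is added to the numpy array by hand each iteration,
# while the remaining optimizer updates run inside K.function.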