'''
An extremely hacky way of using Keras optimizers to optimize model inputs.
This can't be done with the normal Keras library (TensorFlow backend), since
Tensors do not have the "assign" attribute that is normally used for updating
variables. Nevertheless, the gradient associated with the Tensor is still
calculated.

model    : some Keras model
loss     : some defined loss tensor
my_input : some numpy array used as input (i.e. model.predict(my_input))
num_iter : number of iterations

(A minimal, hypothetical setup for these placeholders is sketched after the
function definition below.)
'''
import keras
from keras import backend as K


def get_input_updates(optimizer, loss, model):
    # Patch a no-op "assign" onto the input tensor so the optimizer can build
    # its update ops without failing on a plain (non-Variable) Tensor.
    def fake_assign(new_val, name=None):
        return new_val
    setattr(model.input, 'assign', fake_assign)

    updates = optimizer.get_updates(loss=[loss],
                                    params=[model.input])

    # Pull out the "update" that targets the input tensor itself; it cannot be
    # applied to a Tensor, so it is turned into an additive step instead.
    for idx, update in enumerate(updates):
        if update.op.inputs[0].name == model.input.name:
            updated_param = updates.pop(idx)
            break

    grad = updated_param - model.input
    return grad, updates
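
# A minimal, hypothetical setup sketch (not part of the original snippet) for
# the placeholders described in the docstring: a small dense model, a loss
# that maximizes one output unit, and a random starting input. The concrete
# names, layer sizes, and shapes here are assumptions chosen only to make the
# example below runnable.
import numpy as np

model = keras.models.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    keras.layers.Dense(3, activation='softmax'),
])

# Maximize the activation of output unit 0 by minimizing its negative mean.
loss = -K.mean(model.output[:, 0])

my_input = np.random.randn(1, 10).astype('float32')
num_iter = 100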

# Here is an example

optimizer = keras.optimizers.Adam(lr=0.9)  # any Keras optimizer
grad, optimizer_updates = get_input_updates(optimizer, loss, model)
opt_func = K.function([model.input],
                      [loss, grad],
                      updates=optimizer_updates,
                      name='input_optimizer')
for i in range(num_iter):
    # Each call runs one optimizer step and returns the input delta to apply.
    this_loss, this_grad = opt_func([my_input])
    my_input += this_grad
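
# As the docstring notes, the optimized array can then be fed back through the
# model (model.predict(my_input)) to inspect the result; this follow-up is
# illustrative and not part of the original snippet.
print('final loss:', this_loss)
preds = model.predict(my_input)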