from keras.utils import multi_gpu_model, to_categorical
from keras.models import Sequential
from keras.layers import Dense, InputLayer
import numpy as np

# number of GPUs to replicate the model across; also used as the batch size,
# so each GPU receives exactly one sample per step
split = 4

# dummy data: 1200 samples of 1200 features, one-hot targets over 10 classes
data_size = 1200
classes = 10
X = np.random.rand(data_size, data_size)
Y = to_categorical(np.random.randint(classes, size=data_size), num_classes=classes)

# 1.2k samples of 1.2k features, 10-class one-hot targets
print(X.shape, Y.shape)
# create the network
model = Sequential()
model.add(InputLayer(input_shape=(data_size,)))
model.add(Dense(units=8292, activation='relu'))
model.add(Dense(units=4096, activation='relu'))
model.add(Dense(units=2048, activation='relu'))
model.add(Dense(units=1024, activation='relu'))
model.add(Dense(units=512, activation='relu'))
model.add(Dense(units=256, activation='relu'))
model.add(Dense(units=classes, activation='softmax'))
model.summary()

# replicate the model across `split` GPUs
parallel_model = multi_gpu_model(model, gpus=split)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop',
                       metrics=['accuracy'])

# fit the model; each batch of `split` samples is split one-per-GPU
parallel_model.fit(X, Y, epochs=100, batch_size=split)
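
If the machine exposes fewer than `split` GPUs, multi_gpu_model will raise an error, so a quick device check before building the parallel model can save a failed run. The sketch below is not part of the original paste and assumes standalone Keras on a TensorFlow 1.x backend (as the imports above suggest); it uses TensorFlow's device_lib utility to list visible devices.

# optional sanity check (assumes a TensorFlow backend; not in the original paste)
from tensorflow.python.client import device_lib

gpu_names = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print('Visible GPUs:', gpu_names)
assert len(gpu_names) >= split, 'multi_gpu_model(gpus=%d) needs at least %d visible GPUs' % (split, split)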