a guest, Jun 20th, 2019
from keras import layers, models, optimizers, regularizers

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Dropout(0.15))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Dropout(0.2))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.BatchNormalization())

model.add(layers.Flatten())
model.add(layers.Dropout(0.6))
model.add(layers.Dense(150, activation='relu',
                       kernel_regularizer=regularizers.l2(0.002)))
model.add(layers.Dense(5, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-3),
              metrics=['acc'])
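As a sanity check on the architecture, the feature-map size entering Flatten can be worked out by hand (this tracing sketch is an addition, not part of the paste): each 3x3 convolution with 'valid' padding shrinks the spatial size by 2, and each 2x2 max-pool halves it with floor division.

```python
# Trace the spatial size of the feature maps through the five
# conv/pool blocks above, starting from a 150x150 input.
def trace_shapes(size, n_blocks):
    sizes = []
    for _ in range(n_blocks):
        size = size - 2        # Conv2D(3x3, padding='valid'): -2 per side pair
        size = size // 2       # MaxPooling2D(2x2): halve, floor
        sizes.append(size)
    return sizes

print(trace_shapes(150, 5))  # [74, 36, 17, 7, 2]
```

With 256 channels in the last block, Flatten therefore emits 2 * 2 * 256 = 1024 features into the 150-unit Dense layer.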
Epoch 00067: val_loss did not improve from 0.08283
Epoch 68/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0893 - acc: 0.9793 - val_loss: 0.0876 - val_acc: 0.9784

Epoch 00068: val_loss did not improve from 0.08283
Epoch 69/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0874 - acc: 0.9774 - val_loss: 0.1209 - val_acc: 0.9684

Epoch 00069: val_loss did not improve from 0.08283
Epoch 70/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0879 - acc: 0.9803 - val_loss: 0.1384 - val_acc: 0.9706

Epoch 00070: val_loss did not improve from 0.08283
Epoch 71/200
230/230 [==============================] - 59s 257ms/step - loss: 0.0903 - acc: 0.9783 - val_loss: 0.1352 - val_acc: 0.9728

Epoch 00071: val_loss did not improve from 0.08283
Epoch 72/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0852 - acc: 0.9798 - val_loss: 0.1324 - val_acc: 0.9621

Epoch 00072: val_loss did not improve from 0.08283
Epoch 73/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0831 - acc: 0.9815 - val_loss: 0.1634 - val_acc: 0.9574

Epoch 00073: val_loss did not improve from 0.08283
Epoch 74/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0824 - acc: 0.9816 - val_loss: 0.1280 - val_acc: 0.9640

Epoch 00074: val_loss did not improve from 0.08283
Epoch 75/200
230/230 [==============================] - 57s 247ms/step - loss: 0.0869 - acc: 0.9774 - val_loss: 0.0777 - val_acc: 0.9882

Epoch 00075: val_loss improved from 0.08283 to 0.07765, saving model to C:/Users/xxx/Desktop/best_model_7.h5
Epoch 76/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0739 - acc: 0.9851 - val_loss: 0.0683 - val_acc: 0.9851

Epoch 00076: val_loss improved from 0.07765 to 0.06826, saving model to C:/Users/xxx/Desktop/best_model_7.h5
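The fit call and callback are not shown in the paste, but the "val_loss did not improve / improved from ... saving model to ..." lines match Keras's `ModelCheckpoint` callback with `monitor='val_loss'`, `save_best_only=True`, `verbose=1`. The selection logic it implements can be sketched in plain Python, using the val_loss values from epochs 68 through 76 above:

```python
# Track the best (lowest) validation loss seen so far, as ModelCheckpoint
# with save_best_only=True does: save (here, record) only on improvement.
def track_best(val_losses, best):
    events = []
    for vl in val_losses:
        if vl < best:
            events.append(('improved', best, vl))
            best = vl
        else:
            events.append(('no_improve', best, vl))
    return events, best

# val_loss per epoch, copied from the log; best before epoch 68 was 0.08283.
events, best = track_best(
    [0.0876, 0.1209, 0.1384, 0.1352, 0.1324, 0.1634, 0.1280, 0.0777, 0.0683],
    best=0.08283,
)
# Only the last two epochs improve, matching the two "saving model" lines.
```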