lamiastella

sarcasm detection

Mar 3rd, 2018
[jalal@goku src]$ python sarcasm_detection_model_CNN_LSTM_DNN.py
Using TensorFlow backend.
2018-03-03 19:17:30.752038: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-03 19:17:30.752070: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-03 19:17:30.752113: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-03-03 19:17:30.752125: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-03 19:17:30.752135: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2018-03-03 19:17:31.052613: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:05:00.0
Total memory: 10.92GiB
Free memory: 10.20GiB
2018-03-03 19:17:31.274314: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x56038c9d2f30 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2018-03-03 19:17:31.275118: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:06:00.0
Total memory: 10.92GiB
Free memory: 10.76GiB
2018-03-03 19:17:31.275859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1
2018-03-03 19:17:31.275877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y Y
2018-03-03 19:17:31.275886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1: Y Y
2018-03-03 19:17:31.275900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0)
2018-03-03 19:17:31.275910: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
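The SSE/AVX/FMA lines are only build warnings: the stock TensorFlow wheel was not compiled with those CPU instruction sets, and the run continues on the two GTX 1080 Ti cards regardless. If the log noise is unwanted, or the job should be pinned to a single GPU, both can be controlled from the environment before TensorFlow is imported. A minimal sketch, assuming the environment is set inside the script itself (setting the same variables on the shell command line works too):

import os

# Hide TensorFlow's INFO and WARNING messages from the C++ side (cpu_feature_guard included).
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
# Expose only the first GTX 1080 Ti (PCI bus 0000:05:00.0) to TensorFlow.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # import only after the environment variables are set

Equivalent shell form: TF_CPP_MIN_LOG_LEVEL=2 CUDA_VISIBLE_DEVICES=0 python sarcasm_detection_model_CNN_LSTM_DNN.py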
Loading resource...
split entry found: 10831
0...100...200...300...400...500...600...700...800...900...1000...1100...1200...1300...1400...1500...1600...1700...1800...1900...2000...2100...2200...2300...2400...2500...2600...2700...2800...2900...3000...3100...3200...3300...3400...3500...3600...3700...3800...3900...4000...4100...4200...4300...#더쇼
4400...4500...4600...4700...4800...4900...5000...5100...#갓제븐
5200...5300...5400...5500...5600...5700...5800...5900...6000...6100...6200...6300...6400...6500...6600...6700...6800...6900...7000...7100...7200...7300...7400...7500...7600...7700...7800...7900...8000...8100...#1122_기현데이
8200...8300...8400...8500...8600...8700...8800...8900...9000...9100...9200...9300...9400...9500...9600...9700...9800...9900...10000...10100...10200...10300...10400...10500...10600...10700...10800...10900...11000...11100...11200...11300...11400...11500...11600...11700...11800...11900...12000...12100...12200...12300...12400...12500...12600...12700...12800...12900...13000...13100...13200...13300...13400...13500...13600...13700...13800...13900...14000...14100...14200...14300...14400...14500...14600...14700...14800...14900...15000...15100...15200...15300...15400...15500...15600...15700...15800...#더쇼
15900...16000...16100...16200...16300...16400...16500...16600...16700...16800...16900...17000...17100...17200...17300...17400...17500...17600...17700...17800...17900...18000...18100...18200...18300...18400...18500...18600...18700...18800...18900...19000...19100...19200...19300...19400...19500...19600...19700...19800...#김재중
19900...20000...20100...20200...20300...20400...20500...20600...20700...20800...20900...21000...21100...21200...21300...21400...21500...21600...21700...21800...21900...22000...22100...22200...22300...22400...22500...22600...22700...22800...22900...23000...23100...23200...23300...23400...23500...23600...23700...23800...23900...24000...24100...24200...24300...24400...24500...24600...24700...24800...24900...25000...25100...25200...25300...25400...#debate
25500...25600...25700...25800...25900...26000...26100...26200...26300...26400...26500...26600...26700...26800...26900...27000...27100...27200...27300...27400...27500...27600...27700...27800...27900...28000...28100...28200...28300...28400...28500...28600...28700...28800...28900...29000...29100...29200...29300...29400...29500...29600...29700...29800...29900...30000...30100...30200...30300...30400...30500...30600...30700...30800...30900...31000...31100...31200...31300...31400...31500...31600...31700...31800...31900...32000...32100...32200...32300...32400...32500...32600...32700...32800...32900...33000...33100...33200...33300...33400...33500...33600...33700...33800...33900...34000...34100...34200...34300...34400...34500...34600...34700...34800...34900...35000...35100...35200...35300...35400...35500...35600...35700...35800...35900...36000...36100...36200...36300...36400...36500...36600...36700...36800...36900...37000...37100...37200...37300...37400...37500...37600...37700...37800...37900...38000...38100...38200...38300...38400...38500...38600...38700...38800...38900...39000...39100...39200...39300...39400...39500...39600...39700...
Training data loading finished...
split entry found: 10831
0...100...200...300...400...500...600...700...800...#bey
900...1000...1100...1200...1300...1400...1500...1600...
Validation data loading finished...
30
33919
unk:: 33918
Token coverage: 1.0
Word coverage: 0.9999705171295478
Token coverage: 0.9855469291556911
Word coverage: 0.1304617017512825
class ratio:: [1.0, 1.151665945478148]
train_X (39780, 30)
train_Y (39780, 2)
validation_X (1605, 30)
validation_Y (1605, 2)
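The printed shapes are consistent with tweets padded to a fixed length of 30 tokens over a 33,919-entry vocabulary (with id 33918 reserved for unknown words) and two-class one-hot labels: 39,780 training rows of shape (39780, 30)/(39780, 2) and 1,605 validation rows. A sketch of how such arrays are usually produced in Keras; the function and variable names are illustrative assumptions, not the script's actual code:

from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

maxlen = 30  # matches the printed sequence length

def vectorize(tweets, labels, word_index, unk_id=33918):
    # Map each token to its vocabulary id, falling back to the unknown-word id.
    seqs = [[word_index.get(tok, unk_id) for tok in tweet] for tweet in tweets]
    X = pad_sequences(seqs, maxlen=maxlen)          # e.g. (39780, 30) for the training set
    Y = to_categorical(labels, num_classes=2)       # e.g. (39780, 2) one-hot labels
    return X, Y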
Build model...
No of parameter: 10193922
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 30, 256)           8683264
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 28, 256)           196864
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 26, 256)           196864
_________________________________________________________________
lstm_1 (LSTM)                (None, 26, 256)           525312
_________________________________________________________________
lstm_2 (LSTM)                (None, 256)               525312
_________________________________________________________________
dense_1 (Dense)              (None, 256)               65792
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 514
_________________________________________________________________
activation_1 (Activation)    (None, 2)                 0
=================================================================
Total params: 10,193,922
Trainable params: 10,193,922
Non-trainable params: 0
_________________________________________________________________
None
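The layer names, output shapes, and parameter counts pin the architecture down almost exactly: a 256-dimensional embedding over the 33,919-word vocabulary, two valid-padded Conv1D layers with 256 filters of width 3 (hence sequence length 30 -> 28 -> 26), a stacked pair of 256-unit LSTMs with the first returning sequences, and two dense layers followed by a softmax over the two classes. A sketch that reproduces those shapes and parameter counts; anything not visible in the summary (inner activations, the optimizer, any dropout) is an assumption:

from keras.models import Sequential
from keras.layers import Embedding, Conv1D, LSTM, Dense, Activation

vocab_size, embed_dim, maxlen = 33919, 256, 30

model = Sequential()
model.add(Embedding(vocab_size, embed_dim, input_length=maxlen))   # (None, 30, 256), 8,683,264 params
model.add(Conv1D(256, 3, padding='valid', activation='relu'))      # (None, 28, 256), 196,864 params
model.add(Conv1D(256, 3, padding='valid', activation='relu'))      # (None, 26, 256), 196,864 params
model.add(LSTM(256, return_sequences=True))                        # (None, 26, 256), 525,312 params
model.add(LSTM(256))                                               # (None, 256),     525,312 params
model.add(Dense(256, activation='relu'))                           # (None, 256),     65,792 params
model.add(Dense(2))                                                # (None, 2),       514 params
model.add(Activation('softmax'))                                   # two-class probabilities, 0 params

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())  # summary() returns None, which explains the stray "None" printed above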
Train on 39780 samples, validate on 1605 samples
Epoch 1/10
2018-03-03 19:17:38.285936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0)
2018-03-03 19:17:38.285970: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
39780/39780 [==============================] - 342s 9ms/step - loss: 0.6432 - acc: 0.6058 - val_loss: 0.6743 - val_acc: 0.5751
Epoch 2/10
39780/39780 [==============================] - 343s 9ms/step - loss: 0.4645 - acc: 0.7847 - val_loss: 0.7511 - val_acc: 0.5894
Epoch 3/10
39780/39780 [==============================] - 335s 8ms/step - loss: 0.3850 - acc: 0.8335 - val_loss: 1.1752 - val_acc: 0.4498
Epoch 4/10
39780/39780 [==============================] - 356s 9ms/step - loss: 0.3398 - acc: 0.8567 - val_loss: 0.9377 - val_acc: 0.5396
Epoch 5/10
39780/39780 [==============================] - 368s 9ms/step - loss: 0.3061 - acc: 0.8736 - val_loss: 0.5764 - val_acc: 0.7078
Epoch 6/10
39780/39780 [==============================] - 362s 9ms/step - loss: 0.2805 - acc: 0.8860 - val_loss: 0.7124 - val_acc: 0.6517
Epoch 7/10
39780/39780 [==============================] - 363s 9ms/step - loss: 0.2604 - acc: 0.8963 - val_loss: 0.7493 - val_acc: 0.6667
Epoch 8/10
39780/39780 [==============================] - 354s 9ms/step - loss: 0.2409 - acc: 0.9054 - val_loss: 0.9538 - val_acc: 0.6305
Epoch 9/10
39780/39780 [==============================] - 326s 8ms/step - loss: 0.2253 - acc: 0.9112 - val_loss: 0.9765 - val_acc: 0.6274
Epoch 10/10
39780/39780 [==============================] - 328s 8ms/step - loss: 0.2101 - acc: 0.9182 - val_loss: 0.9260 - val_acc: 0.6374
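Training accuracy climbs steadily to about 0.92 while validation accuracy peaks at 0.7078 in epoch 5 and then drifts back down, the usual signature of overfitting on a fixed 10-epoch run. One common remedy is to checkpoint the best validation epoch instead of keeping the final weights. A sketch of such a fit call under that assumption; the callbacks, batch size, and checkpoint file name are not shown in the log and are illustrative only:

from keras.callbacks import ModelCheckpoint, EarlyStopping

callbacks = [
    # Keep only the weights from the epoch with the lowest validation loss.
    ModelCheckpoint('sarcasm_cnn_lstm_dnn.hdf5', monitor='val_loss', save_best_only=True),
    # Stop early if validation loss has not improved for 3 epochs.
    EarlyStopping(monitor='val_loss', patience=3),
]

model.fit(train_X, train_Y,
          batch_size=32,            # assumed; the log does not show the batch size
          epochs=10,
          validation_data=(validation_X, validation_Y),
          callbacks=callbacks)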
initializing...
test_maxlen 30
model loaded from file...
model weights loaded from file...
model loading time:: 0.6579561233520508
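The test phase reloads the trained network from disk rather than reusing the in-memory object: first the architecture, then the weights, with the elapsed time printed. In Keras this is typically a JSON-plus-HDF5 pair; a sketch under that assumption, with placeholder file names:

import time
from keras.models import model_from_json

start = time.time()
with open('model.json') as f:
    model = model_from_json(f.read())        # restore the architecture ("model loaded from file...")
model.load_weights('model_weights.hdf5')     # restore the trained weights
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print('model loading time::', time.time() - start)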
split entry found: 10831
0...#woe
#poop
#helpful
100...200...#humbled
300...400...#fluent
#accounting
#conditioning
500...600...700...800...900...1000...1100...1200...1300...1400...#bahrain
#ichiro
1500...#multinational
#flames
#woo
1600...#off
#beer
1700...#tracking
1800...#resilience
#libya
1900...#shh

vocab loaded...
Token coverage: 0.9879144538538825
Word coverage: 0.125331682292588
2000/2000 [==============================] - 40s 20ms/step

accuracy:: 0.922
precision:: 0.922168867547
recall:: 0.922
f_score:: 0.92199219922
f_score::    precision    recall  f1-score   support

          0       0.91      0.93      0.92      1000
          1       0.93      0.91      0.92      1000

avg / total       0.92      0.92      0.92      2000

[jalal@goku src]$
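The closing block of metrics (accuracy 0.922, weighted precision/recall/F1, and a per-class report over 1,000 examples of each class) matches what scikit-learn produces from predicted and true labels. A sketch of that evaluation step, assuming the test arrays are named test_X and y_true; the names and the 'weighted' averaging are assumptions:

import numpy as np
from sklearn import metrics

# Turn the softmax outputs into hard class labels; predict prints the 2000/2000 progress bar.
y_prob = model.predict(test_X, batch_size=32, verbose=1)
y_pred = np.argmax(y_prob, axis=1)

print('accuracy::', metrics.accuracy_score(y_true, y_pred))
print('precision::', metrics.precision_score(y_true, y_pred, average='weighted'))
print('recall::', metrics.recall_score(y_true, y_pred, average='weighted'))
print('f_score::', metrics.f1_score(y_true, y_pred, average='weighted'))
print('f_score::', metrics.classification_report(y_true, y_pred))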