- I0114 17:30:17.014989 86948 caffe.cpp:184] Using GPUs 0, 1
- I0114 17:30:18.139459 86948 solver.cpp:48] Initializing solver from parameters:
- test_iter: 10
- test_interval: 500
- base_lr: 0.001
- display: 500
- max_iter: 850000
- lr_policy: "fixed"
- gamma: 0.5
- momentum: 0.9
- weight_decay: 0.0005
- snapshot: 5000
- snapshot_prefix: "models/mv16f/mv16f1_"
- solver_mode: GPU
- device_id: 0
- net: "models/mv16f/mv_train1.prototxt"
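
The solver parameters logged above correspond to a solver definition along these lines. This is a sketch reconstructed from the log only; the solver file's actual name and path are not shown in the log and are an assumption:

```protobuf
# Reconstructed solver definition (values taken verbatim from the log above).
net: "models/mv16f/mv_train1.prototxt"
test_iter: 10
test_interval: 500
base_lr: 0.001
lr_policy: "fixed"
gamma: 0.5          # has no effect under the "fixed" policy
momentum: 0.9
weight_decay: 0.0005
display: 500
max_iter: 850000
snapshot: 5000
snapshot_prefix: "models/mv16f/mv16f1_"
solver_mode: GPU
device_id: 0
```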
- I0114 17:30:18.139631 86948 solver.cpp:91] Creating training net from net file: models/mv16f/mv_train1.prototxt
- I0114 17:30:18.140243 86948 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
- I0114 17:30:18.140266 86948 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
- I0114 17:30:18.140447 86948 net.cpp:49] Initializing net from parameters:
- name: "mv_16f1"
- state {
- phase: TRAIN
- }
- layer {
- name: "data"
- type: "HDF5Data"
- top: "data"
- top: "label"
- include {
- phase: TRAIN
- }
- hdf5_data_param {
- source: "/home/fe/anilil/caffe/models/mv16f/train.txt"
- batch_size: 150
- }
- }
- layer {
- name: "conv1"
- type: "Convolution"
- bottom: "data"
- top: "conv1"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 64
- kernel_size: 3
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu1"
- type: "ReLU"
- bottom: "conv1"
- top: "conv1"
- }
- layer {
- name: "pool1"
- type: "Pooling"
- bottom: "conv1"
- top: "pool1"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 1
- }
- }
- layer {
- name: "conv2"
- type: "Convolution"
- bottom: "pool1"
- top: "conv2"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 128
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu2"
- type: "ReLU"
- bottom: "conv2"
- top: "conv2"
- }
- layer {
- name: "pool2"
- type: "Pooling"
- bottom: "conv2"
- top: "pool2"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv3"
- type: "Convolution"
- bottom: "pool2"
- top: "conv3"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu3"
- type: "ReLU"
- bottom: "conv3"
- top: "conv3"
- }
- layer {
- name: "pool3"
- type: "Pooling"
- bottom: "conv3"
- top: "pool3"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv4"
- type: "Convolution"
- bottom: "pool3"
- top: "conv4"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu4"
- type: "ReLU"
- bottom: "conv4"
- top: "conv4"
- }
- layer {
- name: "pool4"
- type: "Pooling"
- bottom: "conv4"
- top: "pool4"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv5"
- type: "Convolution"
- bottom: "pool4"
- top: "conv5"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu5"
- type: "ReLU"
- bottom: "conv5"
- top: "conv5"
- }
- layer {
- name: "pool5"
- type: "Pooling"
- bottom: "conv5"
- top: "pool5"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "fc6"
- type: "InnerProduct"
- bottom: "pool5"
- top: "fc6"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 2048
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu6"
- type: "ReLU"
- bottom: "fc6"
- top: "fc6"
- }
- layer {
- name: "fc7"
- type: "InnerProduct"
- bottom: "fc6"
- top: "fc7"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 2048
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu7"
- type: "ReLU"
- bottom: "fc7"
- top: "fc7"
- }
- layer {
- name: "fc8"
- type: "InnerProduct"
- bottom: "fc7"
- top: "fc8"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 101
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "loss"
- type: "SoftmaxWithLoss"
- bottom: "fc8"
- bottom: "label"
- top: "loss"
- }
- I0114 17:30:18.140599 86948 layer_factory.hpp:77] Creating layer data
- I0114 17:30:18.140625 86948 net.cpp:106] Creating Layer data
- I0114 17:30:18.140630 86948 net.cpp:411] data -> data
- I0114 17:30:18.140650 86948 net.cpp:411] data -> label
- I0114 17:30:18.140666 86948 hdf5_data_layer.cpp:79] Loading list of HDF5 filenames from: /home/fe/anilil/caffe/models/mv16f/train.txt
- I0114 17:30:18.140861 86948 hdf5_data_layer.cpp:93] Number of HDF5 files: 477
- I0114 17:30:18.141713 86948 hdf5.cpp:32] Datatype class: H5T_FLOAT
- I0114 17:30:19.121565 86948 net.cpp:150] Setting up data
- I0114 17:30:19.121650 86948 net.cpp:157] Top shape: 150 48 58 58 (24220800)
- I0114 17:30:19.121657 86948 net.cpp:157] Top shape: 150 1 (150)
- I0114 17:30:19.121660 86948 net.cpp:165] Memory required for data: 96883800
- I0114 17:30:19.121668 86948 layer_factory.hpp:77] Creating layer conv1
- I0114 17:30:19.121704 86948 net.cpp:106] Creating Layer conv1
- I0114 17:30:19.121721 86948 net.cpp:454] conv1 <- data
- I0114 17:30:19.121743 86948 net.cpp:411] conv1 -> conv1
- I0114 17:30:19.253258 86948 net.cpp:150] Setting up conv1
- I0114 17:30:19.253304 86948 net.cpp:157] Top shape: 150 64 56 56 (30105600)
- I0114 17:30:19.253307 86948 net.cpp:165] Memory required for data: 217306200
- I0114 17:30:19.253322 86948 layer_factory.hpp:77] Creating layer relu1
- I0114 17:30:19.253340 86948 net.cpp:106] Creating Layer relu1
- I0114 17:30:19.253345 86948 net.cpp:454] relu1 <- conv1
- I0114 17:30:19.253350 86948 net.cpp:397] relu1 -> conv1 (in-place)
- I0114 17:30:19.253569 86948 net.cpp:150] Setting up relu1
- I0114 17:30:19.253577 86948 net.cpp:157] Top shape: 150 64 56 56 (30105600)
- I0114 17:30:19.253592 86948 net.cpp:165] Memory required for data: 337728600
- I0114 17:30:19.253595 86948 layer_factory.hpp:77] Creating layer pool1
- I0114 17:30:19.253607 86948 net.cpp:106] Creating Layer pool1
- I0114 17:30:19.253612 86948 net.cpp:454] pool1 <- conv1
- I0114 17:30:19.253615 86948 net.cpp:411] pool1 -> pool1
- I0114 17:30:19.253981 86948 net.cpp:150] Setting up pool1
- I0114 17:30:19.253993 86948 net.cpp:157] Top shape: 150 64 55 55 (29040000)
- I0114 17:30:19.253995 86948 net.cpp:165] Memory required for data: 453888600
- I0114 17:30:19.253998 86948 layer_factory.hpp:77] Creating layer conv2
- I0114 17:30:19.254022 86948 net.cpp:106] Creating Layer conv2
- I0114 17:30:19.254025 86948 net.cpp:454] conv2 <- pool1
- I0114 17:30:19.254031 86948 net.cpp:411] conv2 -> conv2
- I0114 17:30:19.256091 86948 net.cpp:150] Setting up conv2
- I0114 17:30:19.256105 86948 net.cpp:157] Top shape: 150 128 53 53 (53932800)
- I0114 17:30:19.256108 86948 net.cpp:165] Memory required for data: 669619800
- I0114 17:30:19.256117 86948 layer_factory.hpp:77] Creating layer relu2
- I0114 17:30:19.256122 86948 net.cpp:106] Creating Layer relu2
- I0114 17:30:19.256125 86948 net.cpp:454] relu2 <- conv2
- I0114 17:30:19.256131 86948 net.cpp:397] relu2 -> conv2 (in-place)
- I0114 17:30:19.256299 86948 net.cpp:150] Setting up relu2
- I0114 17:30:19.256310 86948 net.cpp:157] Top shape: 150 128 53 53 (53932800)
- I0114 17:30:19.256325 86948 net.cpp:165] Memory required for data: 885351000
- I0114 17:30:19.256327 86948 layer_factory.hpp:77] Creating layer pool2
- I0114 17:30:19.256363 86948 net.cpp:106] Creating Layer pool2
- I0114 17:30:19.256367 86948 net.cpp:454] pool2 <- conv2
- I0114 17:30:19.256372 86948 net.cpp:411] pool2 -> pool2
- I0114 17:30:19.256733 86948 net.cpp:150] Setting up pool2
- I0114 17:30:19.256744 86948 net.cpp:157] Top shape: 150 128 27 27 (13996800)
- I0114 17:30:19.256747 86948 net.cpp:165] Memory required for data: 941338200
- I0114 17:30:19.256750 86948 layer_factory.hpp:77] Creating layer conv3
- I0114 17:30:19.256760 86948 net.cpp:106] Creating Layer conv3
- I0114 17:30:19.256763 86948 net.cpp:454] conv3 <- pool2
- I0114 17:30:19.256770 86948 net.cpp:411] conv3 -> conv3
- I0114 17:30:19.260486 86948 net.cpp:150] Setting up conv3
- I0114 17:30:19.260499 86948 net.cpp:157] Top shape: 150 256 25 25 (24000000)
- I0114 17:30:19.260501 86948 net.cpp:165] Memory required for data: 1037338200
- I0114 17:30:19.260509 86948 layer_factory.hpp:77] Creating layer relu3
- I0114 17:30:19.260517 86948 net.cpp:106] Creating Layer relu3
- I0114 17:30:19.260520 86948 net.cpp:454] relu3 <- conv3
- I0114 17:30:19.260524 86948 net.cpp:397] relu3 -> conv3 (in-place)
- I0114 17:30:19.260691 86948 net.cpp:150] Setting up relu3
- I0114 17:30:19.260699 86948 net.cpp:157] Top shape: 150 256 25 25 (24000000)
- I0114 17:30:19.260713 86948 net.cpp:165] Memory required for data: 1133338200
- I0114 17:30:19.260716 86948 layer_factory.hpp:77] Creating layer pool3
- I0114 17:30:19.260725 86948 net.cpp:106] Creating Layer pool3
- I0114 17:30:19.260728 86948 net.cpp:454] pool3 <- conv3
- I0114 17:30:19.260733 86948 net.cpp:411] pool3 -> pool3
- I0114 17:30:19.261076 86948 net.cpp:150] Setting up pool3
- I0114 17:30:19.261086 86948 net.cpp:157] Top shape: 150 256 13 13 (6489600)
- I0114 17:30:19.261090 86948 net.cpp:165] Memory required for data: 1159296600
- I0114 17:30:19.261093 86948 layer_factory.hpp:77] Creating layer conv4
- I0114 17:30:19.261103 86948 net.cpp:106] Creating Layer conv4
- I0114 17:30:19.261106 86948 net.cpp:454] conv4 <- pool3
- I0114 17:30:19.261113 86948 net.cpp:411] conv4 -> conv4
- I0114 17:30:19.266593 86948 net.cpp:150] Setting up conv4
- I0114 17:30:19.266607 86948 net.cpp:157] Top shape: 150 256 11 11 (4646400)
- I0114 17:30:19.266610 86948 net.cpp:165] Memory required for data: 1177882200
- I0114 17:30:19.266616 86948 layer_factory.hpp:77] Creating layer relu4
- I0114 17:30:19.266621 86948 net.cpp:106] Creating Layer relu4
- I0114 17:30:19.266624 86948 net.cpp:454] relu4 <- conv4
- I0114 17:30:19.266630 86948 net.cpp:397] relu4 -> conv4 (in-place)
- I0114 17:30:19.266991 86948 net.cpp:150] Setting up relu4
- I0114 17:30:19.267014 86948 net.cpp:157] Top shape: 150 256 11 11 (4646400)
- I0114 17:30:19.267016 86948 net.cpp:165] Memory required for data: 1196467800
- I0114 17:30:19.267019 86948 layer_factory.hpp:77] Creating layer pool4
- I0114 17:30:19.267027 86948 net.cpp:106] Creating Layer pool4
- I0114 17:30:19.267041 86948 net.cpp:454] pool4 <- conv4
- I0114 17:30:19.267045 86948 net.cpp:411] pool4 -> pool4
- I0114 17:30:19.267223 86948 net.cpp:150] Setting up pool4
- I0114 17:30:19.267231 86948 net.cpp:157] Top shape: 150 256 6 6 (1382400)
- I0114 17:30:19.267246 86948 net.cpp:165] Memory required for data: 1201997400
- I0114 17:30:19.267248 86948 layer_factory.hpp:77] Creating layer conv5
- I0114 17:30:19.267258 86948 net.cpp:106] Creating Layer conv5
- I0114 17:30:19.267273 86948 net.cpp:454] conv5 <- pool4
- I0114 17:30:19.267280 86948 net.cpp:411] conv5 -> conv5
- I0114 17:30:19.272622 86948 net.cpp:150] Setting up conv5
- I0114 17:30:19.272635 86948 net.cpp:157] Top shape: 150 256 4 4 (614400)
- I0114 17:30:19.272639 86948 net.cpp:165] Memory required for data: 1204455000
- I0114 17:30:19.272647 86948 layer_factory.hpp:77] Creating layer relu5
- I0114 17:30:19.272652 86948 net.cpp:106] Creating Layer relu5
- I0114 17:30:19.272655 86948 net.cpp:454] relu5 <- conv5
- I0114 17:30:19.272660 86948 net.cpp:397] relu5 -> conv5 (in-place)
- I0114 17:30:19.272987 86948 net.cpp:150] Setting up relu5
- I0114 17:30:19.273010 86948 net.cpp:157] Top shape: 150 256 4 4 (614400)
- I0114 17:30:19.273011 86948 net.cpp:165] Memory required for data: 1206912600
- I0114 17:30:19.273026 86948 layer_factory.hpp:77] Creating layer pool5
- I0114 17:30:19.273056 86948 net.cpp:106] Creating Layer pool5
- I0114 17:30:19.273061 86948 net.cpp:454] pool5 <- conv5
- I0114 17:30:19.273064 86948 net.cpp:411] pool5 -> pool5
- I0114 17:30:19.273246 86948 net.cpp:150] Setting up pool5
- I0114 17:30:19.273253 86948 net.cpp:157] Top shape: 150 256 2 2 (153600)
- I0114 17:30:19.273267 86948 net.cpp:165] Memory required for data: 1207527000
- I0114 17:30:19.273270 86948 layer_factory.hpp:77] Creating layer fc6
- I0114 17:30:19.273298 86948 net.cpp:106] Creating Layer fc6
- I0114 17:30:19.273301 86948 net.cpp:454] fc6 <- pool5
- I0114 17:30:19.273306 86948 net.cpp:411] fc6 -> fc6
- I0114 17:30:19.289192 86948 net.cpp:150] Setting up fc6
- I0114 17:30:19.289223 86948 net.cpp:157] Top shape: 150 2048 (307200)
- I0114 17:30:19.289237 86948 net.cpp:165] Memory required for data: 1208755800
- I0114 17:30:19.289244 86948 layer_factory.hpp:77] Creating layer relu6
- I0114 17:30:19.289252 86948 net.cpp:106] Creating Layer relu6
- I0114 17:30:19.289257 86948 net.cpp:454] relu6 <- fc6
- I0114 17:30:19.289260 86948 net.cpp:397] relu6 -> fc6 (in-place)
- I0114 17:30:19.289674 86948 net.cpp:150] Setting up relu6
- I0114 17:30:19.289696 86948 net.cpp:157] Top shape: 150 2048 (307200)
- I0114 17:30:19.289700 86948 net.cpp:165] Memory required for data: 1209984600
- I0114 17:30:19.289702 86948 layer_factory.hpp:77] Creating layer fc7
- I0114 17:30:19.289722 86948 net.cpp:106] Creating Layer fc7
- I0114 17:30:19.289726 86948 net.cpp:454] fc7 <- fc6
- I0114 17:30:19.289732 86948 net.cpp:411] fc7 -> fc7
- I0114 17:30:19.322726 86948 net.cpp:150] Setting up fc7
- I0114 17:30:19.322765 86948 net.cpp:157] Top shape: 150 2048 (307200)
- I0114 17:30:19.322769 86948 net.cpp:165] Memory required for data: 1211213400
- I0114 17:30:19.322777 86948 layer_factory.hpp:77] Creating layer relu7
- I0114 17:30:19.322787 86948 net.cpp:106] Creating Layer relu7
- I0114 17:30:19.322790 86948 net.cpp:454] relu7 <- fc7
- I0114 17:30:19.322796 86948 net.cpp:397] relu7 -> fc7 (in-place)
- I0114 17:30:19.323101 86948 net.cpp:150] Setting up relu7
- I0114 17:30:19.323110 86948 net.cpp:157] Top shape: 150 2048 (307200)
- I0114 17:30:19.323124 86948 net.cpp:165] Memory required for data: 1212442200
- I0114 17:30:19.323127 86948 layer_factory.hpp:77] Creating layer fc8
- I0114 17:30:19.323142 86948 net.cpp:106] Creating Layer fc8
- I0114 17:30:19.323144 86948 net.cpp:454] fc8 <- fc7
- I0114 17:30:19.323149 86948 net.cpp:411] fc8 -> fc8
- I0114 17:30:19.325206 86948 net.cpp:150] Setting up fc8
- I0114 17:30:19.325217 86948 net.cpp:157] Top shape: 150 101 (15150)
- I0114 17:30:19.325220 86948 net.cpp:165] Memory required for data: 1212502800
- I0114 17:30:19.325227 86948 layer_factory.hpp:77] Creating layer loss
- I0114 17:30:19.325237 86948 net.cpp:106] Creating Layer loss
- I0114 17:30:19.325240 86948 net.cpp:454] loss <- fc8
- I0114 17:30:19.325244 86948 net.cpp:454] loss <- label
- I0114 17:30:19.325250 86948 net.cpp:411] loss -> loss
- I0114 17:30:19.325264 86948 layer_factory.hpp:77] Creating layer loss
- I0114 17:30:19.326247 86948 net.cpp:150] Setting up loss
- I0114 17:30:19.326261 86948 net.cpp:157] Top shape: (1)
- I0114 17:30:19.326263 86948 net.cpp:160] with loss weight 1
- I0114 17:30:19.326287 86948 net.cpp:165] Memory required for data: 1212502804
- I0114 17:30:19.326289 86948 net.cpp:226] loss needs backward computation.
- I0114 17:30:19.326292 86948 net.cpp:226] fc8 needs backward computation.
- I0114 17:30:19.326295 86948 net.cpp:226] relu7 needs backward computation.
- I0114 17:30:19.326297 86948 net.cpp:226] fc7 needs backward computation.
- I0114 17:30:19.326300 86948 net.cpp:226] relu6 needs backward computation.
- I0114 17:30:19.326303 86948 net.cpp:226] fc6 needs backward computation.
- I0114 17:30:19.326305 86948 net.cpp:226] pool5 needs backward computation.
- I0114 17:30:19.326309 86948 net.cpp:226] relu5 needs backward computation.
- I0114 17:30:19.326311 86948 net.cpp:226] conv5 needs backward computation.
- I0114 17:30:19.326314 86948 net.cpp:226] pool4 needs backward computation.
- I0114 17:30:19.326318 86948 net.cpp:226] relu4 needs backward computation.
- I0114 17:30:19.326320 86948 net.cpp:226] conv4 needs backward computation.
- I0114 17:30:19.326339 86948 net.cpp:226] pool3 needs backward computation.
- I0114 17:30:19.326342 86948 net.cpp:226] relu3 needs backward computation.
- I0114 17:30:19.326344 86948 net.cpp:226] conv3 needs backward computation.
- I0114 17:30:19.326347 86948 net.cpp:226] pool2 needs backward computation.
- I0114 17:30:19.326350 86948 net.cpp:226] relu2 needs backward computation.
- I0114 17:30:19.326352 86948 net.cpp:226] conv2 needs backward computation.
- I0114 17:30:19.326355 86948 net.cpp:226] pool1 needs backward computation.
- I0114 17:30:19.326359 86948 net.cpp:226] relu1 needs backward computation.
- I0114 17:30:19.326361 86948 net.cpp:226] conv1 needs backward computation.
- I0114 17:30:19.326364 86948 net.cpp:228] data does not need backward computation.
- I0114 17:30:19.326366 86948 net.cpp:270] This network produces output loss
- I0114 17:30:19.326382 86948 net.cpp:283] Network initialization done.
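
The "Top shape" lines above follow from Caffe's output-size arithmetic: the convolutions here use kernel 3, stride 1, no padding (H goes to H - 2), while pooling rounds up, out = ceil((H - k) / s) + 1. A minimal sketch, reproducing the spatial sizes reported in the log:

```python
import math

def conv_out(h, k, s=1, p=0):
    # Caffe convolution: floor((H + 2p - k) / s) + 1
    return (h + 2 * p - k) // s + 1

def pool_out(h, k, s):
    # Caffe pooling rounds up: ceil((H - k) / s) + 1
    return math.ceil((h - k) / s) + 1

h = 58                 # spatial size from the data layer: 150 x 48 x 58 x 58
h = conv_out(h, 3)     # conv1 -> 56
h = pool_out(h, 2, 1)  # pool1 -> 55
h = conv_out(h, 3)     # conv2 -> 53
h = pool_out(h, 2, 2)  # pool2 -> 27
h = conv_out(h, 3)     # conv3 -> 25
h = pool_out(h, 2, 2)  # pool3 -> 13
h = conv_out(h, 3)     # conv4 -> 11
h = pool_out(h, 2, 2)  # pool4 -> 6
h = conv_out(h, 3)     # conv5 -> 4
h = pool_out(h, 2, 2)  # pool5 -> 2
print(h)  # 2, matching "Top shape: 150 256 2 2" in the log
```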
- I0114 17:30:19.327203 86948 solver.cpp:181] Creating test net (#0) specified by net file: models/mv16f/mv_train1.prototxt
- I0114 17:30:19.327262 86948 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer data
- I0114 17:30:19.327458 86948 net.cpp:49] Initializing net from parameters:
- name: "mv_16f1"
- state {
- phase: TEST
- }
- layer {
- name: "data"
- type: "HDF5Data"
- top: "data"
- top: "label"
- include {
- phase: TEST
- }
- hdf5_data_param {
- source: "/home/fe/anilil/caffe/models/mv16f/test.txt"
- batch_size: 50
- }
- }
- layer {
- name: "conv1"
- type: "Convolution"
- bottom: "data"
- top: "conv1"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 64
- kernel_size: 3
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu1"
- type: "ReLU"
- bottom: "conv1"
- top: "conv1"
- }
- layer {
- name: "pool1"
- type: "Pooling"
- bottom: "conv1"
- top: "pool1"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 1
- }
- }
- layer {
- name: "conv2"
- type: "Convolution"
- bottom: "pool1"
- top: "conv2"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 128
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu2"
- type: "ReLU"
- bottom: "conv2"
- top: "conv2"
- }
- layer {
- name: "pool2"
- type: "Pooling"
- bottom: "conv2"
- top: "pool2"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv3"
- type: "Convolution"
- bottom: "pool2"
- top: "conv3"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu3"
- type: "ReLU"
- bottom: "conv3"
- top: "conv3"
- }
- layer {
- name: "pool3"
- type: "Pooling"
- bottom: "conv3"
- top: "pool3"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv4"
- type: "Convolution"
- bottom: "pool3"
- top: "conv4"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu4"
- type: "ReLU"
- bottom: "conv4"
- top: "conv4"
- }
- layer {
- name: "pool4"
- type: "Pooling"
- bottom: "conv4"
- top: "pool4"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "conv5"
- type: "Convolution"
- bottom: "pool4"
- top: "conv5"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 1
- decay_mult: 1
- }
- convolution_param {
- num_output: 256
- kernel_size: 3
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu5"
- type: "ReLU"
- bottom: "conv5"
- top: "conv5"
- }
- layer {
- name: "pool5"
- type: "Pooling"
- bottom: "conv5"
- top: "pool5"
- pooling_param {
- pool: AVE
- kernel_size: 2
- stride: 2
- }
- }
- layer {
- name: "fc6"
- type: "InnerProduct"
- bottom: "pool5"
- top: "fc6"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 2048
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu6"
- type: "ReLU"
- bottom: "fc6"
- top: "fc6"
- }
- layer {
- name: "fc7"
- type: "InnerProduct"
- bottom: "fc6"
- top: "fc7"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 2048
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "relu7"
- type: "ReLU"
- bottom: "fc7"
- top: "fc7"
- }
- layer {
- name: "fc8"
- type: "InnerProduct"
- bottom: "fc7"
- top: "fc8"
- param {
- lr_mult: 1
- decay_mult: 1
- }
- param {
- lr_mult: 2
- decay_mult: 0
- }
- inner_product_param {
- num_output: 101
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "xavier"
- }
- }
- }
- layer {
- name: "accuracy"
- type: "Accuracy"
- bottom: "fc8"
- bottom: "label"
- top: "accuracy"
- include {
- phase: TEST
- }
- }
- layer {
- name: "loss"
- type: "SoftmaxWithLoss"
- bottom: "fc8"
- bottom: "label"
- top: "loss"
- }
- I0114 17:30:19.327607 86948 layer_factory.hpp:77] Creating layer data
- I0114 17:30:19.327620 86948 net.cpp:106] Creating Layer data
- I0114 17:30:19.327623 86948 net.cpp:411] data -> data
- I0114 17:30:19.327630 86948 net.cpp:411] data -> label
- I0114 17:30:19.327636 86948 hdf5_data_layer.cpp:79] Loading list of HDF5 filenames from: /home/fe/anilil/caffe/models/mv16f/test.txt
- I0114 17:30:19.327733 86948 hdf5_data_layer.cpp:93] Number of HDF5 files: 205
- I0114 17:30:20.281930 86948 net.cpp:150] Setting up data
- I0114 17:30:20.281970 86948 net.cpp:157] Top shape: 50 48 58 58 (8073600)
- I0114 17:30:20.281988 86948 net.cpp:157] Top shape: 50 1 (50)
- I0114 17:30:20.281991 86948 net.cpp:165] Memory required for data: 32294600
- I0114 17:30:20.281996 86948 layer_factory.hpp:77] Creating layer label_data_1_split
- I0114 17:30:20.282017 86948 net.cpp:106] Creating Layer label_data_1_split
- I0114 17:30:20.282021 86948 net.cpp:454] label_data_1_split <- label
- I0114 17:30:20.282027 86948 net.cpp:411] label_data_1_split -> label_data_1_split_0
- I0114 17:30:20.282037 86948 net.cpp:411] label_data_1_split -> label_data_1_split_1
- I0114 17:30:20.282089 86948 net.cpp:150] Setting up label_data_1_split
- I0114 17:30:20.282109 86948 net.cpp:157] Top shape: 50 1 (50)
- I0114 17:30:20.282112 86948 net.cpp:157] Top shape: 50 1 (50)
- I0114 17:30:20.282126 86948 net.cpp:165] Memory required for data: 32295000
- I0114 17:30:20.282130 86948 layer_factory.hpp:77] Creating layer conv1
- I0114 17:30:20.282142 86948 net.cpp:106] Creating Layer conv1
- I0114 17:30:20.282146 86948 net.cpp:454] conv1 <- data
- I0114 17:30:20.282151 86948 net.cpp:411] conv1 -> conv1
- I0114 17:30:20.283648 86948 net.cpp:150] Setting up conv1
- I0114 17:30:20.283661 86948 net.cpp:157] Top shape: 50 64 56 56 (10035200)
- I0114 17:30:20.283664 86948 net.cpp:165] Memory required for data: 72435800
- I0114 17:30:20.283674 86948 layer_factory.hpp:77] Creating layer relu1
- I0114 17:30:20.283679 86948 net.cpp:106] Creating Layer relu1
- I0114 17:30:20.283682 86948 net.cpp:454] relu1 <- conv1
- I0114 17:30:20.283686 86948 net.cpp:397] relu1 -> conv1 (in-place)
- I0114 17:30:20.283843 86948 net.cpp:150] Setting up relu1
- I0114 17:30:20.283851 86948 net.cpp:157] Top shape: 50 64 56 56 (10035200)
- I0114 17:30:20.283865 86948 net.cpp:165] Memory required for data: 112576600
- I0114 17:30:20.283869 86948 layer_factory.hpp:77] Creating layer pool1
- I0114 17:30:20.283875 86948 net.cpp:106] Creating Layer pool1
- I0114 17:30:20.283879 86948 net.cpp:454] pool1 <- conv1
- I0114 17:30:20.283882 86948 net.cpp:411] pool1 -> pool1
- I0114 17:30:20.284235 86948 net.cpp:150] Setting up pool1
- I0114 17:30:20.284245 86948 net.cpp:157] Top shape: 50 64 55 55 (9680000)
- I0114 17:30:20.284247 86948 net.cpp:165] Memory required for data: 151296600
- I0114 17:30:20.284250 86948 layer_factory.hpp:77] Creating layer conv2
- I0114 17:30:20.284260 86948 net.cpp:106] Creating Layer conv2
- I0114 17:30:20.284262 86948 net.cpp:454] conv2 <- pool1
- I0114 17:30:20.284266 86948 net.cpp:411] conv2 -> conv2
- I0114 17:30:20.285594 86948 net.cpp:150] Setting up conv2
- I0114 17:30:20.285606 86948 net.cpp:157] Top shape: 50 128 53 53 (17977600)
- I0114 17:30:20.285609 86948 net.cpp:165] Memory required for data: 223207000
- I0114 17:30:20.285617 86948 layer_factory.hpp:77] Creating layer relu2
- I0114 17:30:20.285621 86948 net.cpp:106] Creating Layer relu2
- I0114 17:30:20.285624 86948 net.cpp:454] relu2 <- conv2
- I0114 17:30:20.285629 86948 net.cpp:397] relu2 -> conv2 (in-place)
- I0114 17:30:20.285780 86948 net.cpp:150] Setting up relu2
- I0114 17:30:20.285787 86948 net.cpp:157] Top shape: 50 128 53 53 (17977600)
- I0114 17:30:20.285801 86948 net.cpp:165] Memory required for data: 295117400
- I0114 17:30:20.285804 86948 layer_factory.hpp:77] Creating layer pool2
- I0114 17:30:20.285809 86948 net.cpp:106] Creating Layer pool2
- I0114 17:30:20.285812 86948 net.cpp:454] pool2 <- conv2
- I0114 17:30:20.285816 86948 net.cpp:411] pool2 -> pool2
- I0114 17:30:20.286202 86948 net.cpp:150] Setting up pool2
- I0114 17:30:20.286224 86948 net.cpp:157] Top shape: 50 128 27 27 (4665600)
- I0114 17:30:20.286226 86948 net.cpp:165] Memory required for data: 313779800
- I0114 17:30:20.286231 86948 layer_factory.hpp:77] Creating layer conv3
- I0114 17:30:20.286250 86948 net.cpp:106] Creating Layer conv3
- I0114 17:30:20.286253 86948 net.cpp:454] conv3 <- pool2
- I0114 17:30:20.286259 86948 net.cpp:411] conv3 -> conv3
- I0114 17:30:20.289516 86948 net.cpp:150] Setting up conv3
- I0114 17:30:20.289530 86948 net.cpp:157] Top shape: 50 256 25 25 (8000000)
- I0114 17:30:20.289532 86948 net.cpp:165] Memory required for data: 345779800
- I0114 17:30:20.289541 86948 layer_factory.hpp:77] Creating layer relu3
- I0114 17:30:20.289547 86948 net.cpp:106] Creating Layer relu3
- I0114 17:30:20.289551 86948 net.cpp:454] relu3 <- conv3
- I0114 17:30:20.289554 86948 net.cpp:397] relu3 -> conv3 (in-place)
- I0114 17:30:20.289873 86948 net.cpp:150] Setting up relu3
- I0114 17:30:20.289896 86948 net.cpp:157] Top shape: 50 256 25 25 (8000000)
- I0114 17:30:20.289898 86948 net.cpp:165] Memory required for data: 377779800
- I0114 17:30:20.289901 86948 layer_factory.hpp:77] Creating layer pool3
- I0114 17:30:20.289919 86948 net.cpp:106] Creating Layer pool3
- I0114 17:30:20.289922 86948 net.cpp:454] pool3 <- conv3
- I0114 17:30:20.289927 86948 net.cpp:411] pool3 -> pool3
- I0114 17:30:20.290241 86948 net.cpp:150] Setting up pool3
- I0114 17:30:20.290263 86948 net.cpp:157] Top shape: 50 256 13 13 (2163200)
- I0114 17:30:20.290266 86948 net.cpp:165] Memory required for data: 386432600
- I0114 17:30:20.290269 86948 layer_factory.hpp:77] Creating layer conv4
- I0114 17:30:20.290292 86948 net.cpp:106] Creating Layer conv4
- I0114 17:30:20.290294 86948 net.cpp:454] conv4 <- pool3
- I0114 17:30:20.290299 86948 net.cpp:411] conv4 -> conv4
- I0114 17:30:20.296136 86948 net.cpp:150] Setting up conv4
- I0114 17:30:20.296149 86948 net.cpp:157] Top shape: 50 256 11 11 (1548800)
- I0114 17:30:20.296152 86948 net.cpp:165] Memory required for data: 392627800
- I0114 17:30:20.296159 86948 layer_factory.hpp:77] Creating layer relu4
- I0114 17:30:20.296165 86948 net.cpp:106] Creating Layer relu4
- I0114 17:30:20.296169 86948 net.cpp:454] relu4 <- conv4
- I0114 17:30:20.296172 86948 net.cpp:397] relu4 -> conv4 (in-place)
- I0114 17:30:20.296542 86948 net.cpp:150] Setting up relu4
- I0114 17:30:20.296553 86948 net.cpp:157] Top shape: 50 256 11 11 (1548800)
- I0114 17:30:20.296567 86948 net.cpp:165] Memory required for data: 398823000
- I0114 17:30:20.296571 86948 layer_factory.hpp:77] Creating layer pool4
- I0114 17:30:20.296589 86948 net.cpp:106] Creating Layer pool4
- I0114 17:30:20.296592 86948 net.cpp:454] pool4 <- conv4
- I0114 17:30:20.296598 86948 net.cpp:411] pool4 -> pool4
- I0114 17:30:20.296790 86948 net.cpp:150] Setting up pool4
- I0114 17:30:20.296799 86948 net.cpp:157] Top shape: 50 256 6 6 (460800)
- I0114 17:30:20.296813 86948 net.cpp:165] Memory required for data: 400666200
- I0114 17:30:20.296816 86948 layer_factory.hpp:77] Creating layer conv5
- I0114 17:30:20.296838 86948 net.cpp:106] Creating Layer conv5
- I0114 17:30:20.296841 86948 net.cpp:454] conv5 <- pool4
- I0114 17:30:20.296847 86948 net.cpp:411] conv5 -> conv5
- I0114 17:30:20.302322 86948 net.cpp:150] Setting up conv5
- I0114 17:30:20.302346 86948 net.cpp:157] Top shape: 50 256 4 4 (204800)
- I0114 17:30:20.302350 86948 net.cpp:165] Memory required for data: 401485400
- I0114 17:30:20.302369 86948 layer_factory.hpp:77] Creating layer relu5
- I0114 17:30:20.302376 86948 net.cpp:106] Creating Layer relu5
- I0114 17:30:20.302379 86948 net.cpp:454] relu5 <- conv5
- I0114 17:30:20.302383 86948 net.cpp:397] relu5 -> conv5 (in-place)
- I0114 17:30:20.302727 86948 net.cpp:150] Setting up relu5
- I0114 17:30:20.302738 86948 net.cpp:157] Top shape: 50 256 4 4 (204800)
- I0114 17:30:20.302741 86948 net.cpp:165] Memory required for data: 402304600
- I0114 17:30:20.302743 86948 layer_factory.hpp:77] Creating layer pool5
- I0114 17:30:20.302752 86948 net.cpp:106] Creating Layer pool5
- I0114 17:30:20.302755 86948 net.cpp:454] pool5 <- conv5
- I0114 17:30:20.302762 86948 net.cpp:411] pool5 -> pool5
- I0114 17:30:20.302942 86948 net.cpp:150] Setting up pool5
- I0114 17:30:20.302963 86948 net.cpp:157] Top shape: 50 256 2 2 (51200)
- I0114 17:30:20.302965 86948 net.cpp:165] Memory required for data: 402509400
- I0114 17:30:20.302968 86948 layer_factory.hpp:77] Creating layer fc6
- I0114 17:30:20.302974 86948 net.cpp:106] Creating Layer fc6
- I0114 17:30:20.302978 86948 net.cpp:454] fc6 <- pool5
- I0114 17:30:20.302995 86948 net.cpp:411] fc6 -> fc6
- I0114 17:30:20.331744 86948 net.cpp:150] Setting up fc6
- I0114 17:30:20.331787 86948 net.cpp:157] Top shape: 50 2048 (102400)
- I0114 17:30:20.331796 86948 net.cpp:165] Memory required for data: 402919000
- I0114 17:30:20.331814 86948 layer_factory.hpp:77] Creating layer relu6
- I0114 17:30:20.331830 86948 net.cpp:106] Creating Layer relu6
- I0114 17:30:20.331838 86948 net.cpp:454] relu6 <- fc6
- I0114 17:30:20.331852 86948 net.cpp:397] relu6 -> fc6 (in-place)
- I0114 17:30:20.332582 86948 net.cpp:150] Setting up relu6
- I0114 17:30:20.332607 86948 net.cpp:157] Top shape: 50 2048 (102400)
- I0114 17:30:20.332613 86948 net.cpp:165] Memory required for data: 403328600
- I0114 17:30:20.332620 86948 layer_factory.hpp:77] Creating layer fc7
- I0114 17:30:20.332638 86948 net.cpp:106] Creating Layer fc7
- I0114 17:30:20.332645 86948 net.cpp:454] fc7 <- fc6
- I0114 17:30:20.332656 86948 net.cpp:411] fc7 -> fc7
- I0114 17:30:20.391887 86948 net.cpp:150] Setting up fc7
- I0114 17:30:20.391923 86948 net.cpp:157] Top shape: 50 2048 (102400)
- I0114 17:30:20.391929 86948 net.cpp:165] Memory required for data: 403738200
- I0114 17:30:20.391943 86948 layer_factory.hpp:77] Creating layer relu7
- I0114 17:30:20.391959 86948 net.cpp:106] Creating Layer relu7
- I0114 17:30:20.391966 86948 net.cpp:454] relu7 <- fc7
- I0114 17:30:20.391979 86948 net.cpp:397] relu7 -> fc7 (in-place)
- I0114 17:30:20.392313 86948 net.cpp:150] Setting up relu7
- I0114 17:30:20.392329 86948 net.cpp:157] Top shape: 50 2048 (102400)
- I0114 17:30:20.392334 86948 net.cpp:165] Memory required for data: 404147800
- I0114 17:30:20.392339 86948 layer_factory.hpp:77] Creating layer fc8
- I0114 17:30:20.392351 86948 net.cpp:106] Creating Layer fc8
- I0114 17:30:20.392356 86948 net.cpp:454] fc8 <- fc7
- I0114 17:30:20.392366 86948 net.cpp:411] fc8 -> fc8
- I0114 17:30:20.395522 86948 net.cpp:150] Setting up fc8
- I0114 17:30:20.395540 86948 net.cpp:157] Top shape: 50 101 (5050)
- I0114 17:30:20.395545 86948 net.cpp:165] Memory required for data: 404168000
- I0114 17:30:20.395555 86948 layer_factory.hpp:77] Creating layer fc8_fc8_0_split
- I0114 17:30:20.395565 86948 net.cpp:106] Creating Layer fc8_fc8_0_split
- I0114 17:30:20.395570 86948 net.cpp:454] fc8_fc8_0_split <- fc8
- I0114 17:30:20.395576 86948 net.cpp:411] fc8_fc8_0_split -> fc8_fc8_0_split_0
- I0114 17:30:20.395611 86948 net.cpp:411] fc8_fc8_0_split -> fc8_fc8_0_split_1
- I0114 17:30:20.395666 86948 net.cpp:150] Setting up fc8_fc8_0_split
- I0114 17:30:20.395676 86948 net.cpp:157] Top shape: 50 101 (5050)
- I0114 17:30:20.395681 86948 net.cpp:157] Top shape: 50 101 (5050)
- I0114 17:30:20.395685 86948 net.cpp:165] Memory required for data: 404208400
- I0114 17:30:20.395689 86948 layer_factory.hpp:77] Creating layer accuracy
- I0114 17:30:20.395707 86948 net.cpp:106] Creating Layer accuracy
- I0114 17:30:20.395712 86948 net.cpp:454] accuracy <- fc8_fc8_0_split_0
- I0114 17:30:20.395719 86948 net.cpp:454] accuracy <- label_data_1_split_0
- I0114 17:30:20.395727 86948 net.cpp:411] accuracy -> accuracy
- I0114 17:30:20.395741 86948 net.cpp:150] Setting up accuracy
- I0114 17:30:20.395746 86948 net.cpp:157] Top shape: (1)
- I0114 17:30:20.395750 86948 net.cpp:165] Memory required for data: 404208404
- I0114 17:30:20.395756 86948 layer_factory.hpp:77] Creating layer loss
- I0114 17:30:20.395762 86948 net.cpp:106] Creating Layer loss
- I0114 17:30:20.395766 86948 net.cpp:454] loss <- fc8_fc8_0_split_1
- I0114 17:30:20.395772 86948 net.cpp:454] loss <- label_data_1_split_1
- I0114 17:30:20.395781 86948 net.cpp:411] loss -> loss
- I0114 17:30:20.395792 86948 layer_factory.hpp:77] Creating layer loss
- I0114 17:30:20.396368 86948 net.cpp:150] Setting up loss
- I0114 17:30:20.396384 86948 net.cpp:157] Top shape: (1)
- I0114 17:30:20.396389 86948 net.cpp:160] with loss weight 1
- I0114 17:30:20.396401 86948 net.cpp:165] Memory required for data: 404208408
- I0114 17:30:20.396406 86948 net.cpp:226] loss needs backward computation.
- I0114 17:30:20.396412 86948 net.cpp:228] accuracy does not need backward computation.
- I0114 17:30:20.396417 86948 net.cpp:226] fc8_fc8_0_split needs backward computation.
- I0114 17:30:20.396421 86948 net.cpp:226] fc8 needs backward computation.
- I0114 17:30:20.396425 86948 net.cpp:226] relu7 needs backward computation.
- I0114 17:30:20.396430 86948 net.cpp:226] fc7 needs backward computation.
- I0114 17:30:20.396433 86948 net.cpp:226] relu6 needs backward computation.
- I0114 17:30:20.396437 86948 net.cpp:226] fc6 needs backward computation.
- I0114 17:30:20.396442 86948 net.cpp:226] pool5 needs backward computation.
- I0114 17:30:20.396446 86948 net.cpp:226] relu5 needs backward computation.
- I0114 17:30:20.396451 86948 net.cpp:226] conv5 needs backward computation.
- I0114 17:30:20.396456 86948 net.cpp:226] pool4 needs backward computation.
- I0114 17:30:20.396461 86948 net.cpp:226] relu4 needs backward computation.
- I0114 17:30:20.396466 86948 net.cpp:226] conv4 needs backward computation.
- I0114 17:30:20.396469 86948 net.cpp:226] pool3 needs backward computation.
- I0114 17:30:20.396474 86948 net.cpp:226] relu3 needs backward computation.
- I0114 17:30:20.396478 86948 net.cpp:226] conv3 needs backward computation.
- I0114 17:30:20.396482 86948 net.cpp:226] pool2 needs backward computation.
- I0114 17:30:20.396487 86948 net.cpp:226] relu2 needs backward computation.
- I0114 17:30:20.396492 86948 net.cpp:226] conv2 needs backward computation.
- I0114 17:30:20.396495 86948 net.cpp:226] pool1 needs backward computation.
- I0114 17:30:20.396500 86948 net.cpp:226] relu1 needs backward computation.
- I0114 17:30:20.396504 86948 net.cpp:226] conv1 needs backward computation.
- I0114 17:30:20.396510 86948 net.cpp:228] label_data_1_split does not need backward computation.
- I0114 17:30:20.396517 86948 net.cpp:228] data does not need backward computation.
- I0114 17:30:20.396520 86948 net.cpp:270] This network produces output accuracy
- I0114 17:30:20.396525 86948 net.cpp:270] This network produces output loss
- I0114 17:30:20.396551 86948 net.cpp:283] Network initialization done.
- I0114 17:30:20.396718 86948 solver.cpp:60] Solver scaffolding done.
- I0114 17:30:20.397513 86948 caffe.cpp:128] Finetuning from models/mv16f/mv16f1__iter_5000.caffemodel
- I0114 17:30:31.195835 86948 parallel.cpp:391] GPUs pairs 0:1
- I0114 17:30:31.513195 86948 net.cpp:99] Sharing layer data from root net
- I0114 17:30:31.514991 86948 net.cpp:143] Created top blob 0 (shape: 150 48 58 58 (24220800)) for shared layer data
- I0114 17:30:31.515151 86948 net.cpp:143] Created top blob 1 (shape: 150 1 (150)) for shared layer data
- I0114 17:30:31.810956 86948 parallel.cpp:419] Starting Optimization
- I0114 17:30:31.811024 86948 solver.cpp:288] Solving mv_16f1
- I0114 17:30:31.811029 86948 solver.cpp:289] Learning Rate Policy: fixed
- I0114 17:30:31.811134 86948 solver.cpp:341] Iteration 0, Testing net (#0)
- I0114 17:30:32.401675 86948 solver.cpp:409] Test net output #0: accuracy = 0.03
- I0114 17:30:32.401713 86948 solver.cpp:409] Test net output #1: loss = 6.22612 (* 1 = 6.22612 loss)
- I0114 17:30:32.615201 86948 solver.cpp:237] Iteration 0, loss = 6.00407
- I0114 17:30:32.615250 86948 solver.cpp:253] Train net output #0: loss = 6.00407 (* 1 = 6.00407 loss)
- I0114 17:30:32.930352 86948 sgd_solver.cpp:106] Iteration 0, lr = 0.001
- I0114 17:53:19.297849 86948 solver.cpp:341] Iteration 500, Testing net (#0)
- I0114 17:53:20.085278 86948 solver.cpp:409] Test net output #0: accuracy = 0.08
- I0114 17:53:20.085350 86948 solver.cpp:409] Test net output #1: loss = 4.21846 (* 1 = 4.21846 loss)
- I0114 17:53:33.010169 86948 solver.cpp:237] Iteration 500, loss = 3.1388
- I0114 17:53:33.010236 86948 solver.cpp:253] Train net output #0: loss = 3.1388 (* 1 = 3.1388 loss)
- I0114 17:53:33.311209 86948 sgd_solver.cpp:106] Iteration 500, lr = 0.001
- I0114 18:15:45.039989 86948 solver.cpp:341] Iteration 1000, Testing net (#0)
- I0114 18:15:45.830772 86948 solver.cpp:409] Test net output #0: accuracy = 0.106
- I0114 18:15:45.830831 86948 solver.cpp:409] Test net output #1: loss = 4.29256 (* 1 = 4.29256 loss)
- I0114 18:16:07.472872 86948 solver.cpp:237] Iteration 1000, loss = 2.89553
- I0114 18:16:07.472937 86948 solver.cpp:253] Train net output #0: loss = 2.89553 (* 1 = 2.89553 loss)
- I0114 18:16:07.806648 86948 sgd_solver.cpp:106] Iteration 1000, lr = 0.001
- I0114 18:38:54.700490 86948 solver.cpp:341] Iteration 1500, Testing net (#0)
- I0114 18:38:55.191038 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0114 18:38:55.191095 86948 solver.cpp:409] Test net output #1: loss = 4.38897 (* 1 = 4.38897 loss)
- I0114 18:39:08.405194 86948 solver.cpp:237] Iteration 1500, loss = 2.40281
- I0114 18:39:08.405258 86948 solver.cpp:253] Train net output #0: loss = 2.40281 (* 1 = 2.40281 loss)
- I0114 18:39:08.707820 86948 sgd_solver.cpp:106] Iteration 1500, lr = 0.001
- I0114 19:01:43.483299 86948 solver.cpp:341] Iteration 2000, Testing net (#0)
- I0114 19:01:57.033280 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0114 19:01:57.033339 86948 solver.cpp:409] Test net output #1: loss = 4.58599 (* 1 = 4.58599 loss)
- I0114 19:02:15.622921 86948 solver.cpp:237] Iteration 2000, loss = 2.65678
- I0114 19:02:15.623080 86948 solver.cpp:253] Train net output #0: loss = 2.65678 (* 1 = 2.65678 loss)
- I0114 19:02:16.000957 86948 sgd_solver.cpp:106] Iteration 2000, lr = 0.001
- I0114 19:25:49.882880 86948 solver.cpp:341] Iteration 2500, Testing net (#0)
- I0114 19:25:50.668056 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0114 19:25:50.668131 86948 solver.cpp:409] Test net output #1: loss = 4.66895 (* 1 = 4.66895 loss)
- I0114 19:26:04.158541 86948 solver.cpp:237] Iteration 2500, loss = 2.29065
- I0114 19:26:04.158601 86948 solver.cpp:253] Train net output #0: loss = 2.29065 (* 1 = 2.29065 loss)
- I0114 19:26:04.158622 86948 sgd_solver.cpp:106] Iteration 2500, lr = 0.001
- I0114 19:48:19.053063 86948 solver.cpp:341] Iteration 3000, Testing net (#0)
- I0114 19:48:19.546254 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0114 19:48:19.546349 86948 solver.cpp:409] Test net output #1: loss = 5.19176 (* 1 = 5.19176 loss)
- I0114 19:48:32.406949 86948 solver.cpp:237] Iteration 3000, loss = 1.92223
- I0114 19:48:32.407004 86948 solver.cpp:253] Train net output #0: loss = 1.92223 (* 1 = 1.92223 loss)
- I0114 19:48:32.708853 86948 sgd_solver.cpp:106] Iteration 3000, lr = 0.001
- I0114 20:11:15.198561 86948 solver.cpp:341] Iteration 3500, Testing net (#0)
- I0114 20:11:15.988100 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0114 20:11:15.988169 86948 solver.cpp:409] Test net output #1: loss = 4.85761 (* 1 = 4.85761 loss)
- I0114 20:11:29.490351 86948 solver.cpp:237] Iteration 3500, loss = 1.88295
- I0114 20:11:29.490416 86948 solver.cpp:253] Train net output #0: loss = 1.88295 (* 1 = 1.88295 loss)
- I0114 20:11:29.864094 86948 sgd_solver.cpp:106] Iteration 3500, lr = 0.001
- I0114 20:33:36.204324 86948 solver.cpp:341] Iteration 4000, Testing net (#0)
- I0114 20:33:49.203364 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0114 20:33:49.203444 86948 solver.cpp:409] Test net output #1: loss = 5.02526 (* 1 = 5.02526 loss)
- I0114 20:34:01.464068 86948 solver.cpp:237] Iteration 4000, loss = 1.94949
- I0114 20:34:01.464118 86948 solver.cpp:253] Train net output #0: loss = 1.94949 (* 1 = 1.94949 loss)
- I0114 20:34:01.813761 86948 sgd_solver.cpp:106] Iteration 4000, lr = 0.001
- I0114 20:56:38.518110 86948 solver.cpp:341] Iteration 4500, Testing net (#0)
- I0114 20:56:39.302815 86948 solver.cpp:409] Test net output #0: accuracy = 0.146
- I0114 20:56:39.302886 86948 solver.cpp:409] Test net output #1: loss = 5.19903 (* 1 = 5.19903 loss)
- I0114 20:56:56.997396 86948 solver.cpp:237] Iteration 4500, loss = 1.50638
- I0114 20:56:56.997449 86948 solver.cpp:253] Train net output #0: loss = 1.50638 (* 1 = 1.50638 loss)
- I0114 20:56:57.367223 86948 sgd_solver.cpp:106] Iteration 4500, lr = 0.001
- I0114 21:19:51.173880 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_5000.caffemodel
- I0114 21:19:54.054301 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_5000.solverstate
- I0114 21:19:54.105180 86948 solver.cpp:341] Iteration 5000, Testing net (#0)
- I0114 21:19:54.591950 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0114 21:19:54.592030 86948 solver.cpp:409] Test net output #1: loss = 5.67052 (* 1 = 5.67052 loss)
- I0114 21:20:07.774837 86948 solver.cpp:237] Iteration 5000, loss = 1.57842
- I0114 21:20:07.774885 86948 solver.cpp:253] Train net output #0: loss = 1.57842 (* 1 = 1.57842 loss)
- I0114 21:20:08.038023 86948 sgd_solver.cpp:106] Iteration 5000, lr = 0.001
- I0114 21:43:29.756516 86948 solver.cpp:341] Iteration 5500, Testing net (#0)
- I0114 21:43:30.251178 86948 solver.cpp:409] Test net output #0: accuracy = 0.134
- I0114 21:43:30.251236 86948 solver.cpp:409] Test net output #1: loss = 6.00113 (* 1 = 6.00113 loss)
- I0114 21:43:42.861708 86948 solver.cpp:237] Iteration 5500, loss = 1.49909
- I0114 21:43:42.861762 86948 solver.cpp:253] Train net output #0: loss = 1.49909 (* 1 = 1.49909 loss)
- I0114 21:43:43.164937 86948 sgd_solver.cpp:106] Iteration 5500, lr = 0.001
- I0114 22:05:53.412967 86948 solver.cpp:341] Iteration 6000, Testing net (#0)
- I0114 22:06:06.204192 86948 solver.cpp:409] Test net output #0: accuracy = 0.134
- I0114 22:06:06.204249 86948 solver.cpp:409] Test net output #1: loss = 6.14714 (* 1 = 6.14714 loss)
- I0114 22:06:19.697239 86948 solver.cpp:237] Iteration 6000, loss = 1.08655
- I0114 22:06:19.697300 86948 solver.cpp:253] Train net output #0: loss = 1.08655 (* 1 = 1.08655 loss)
- I0114 22:06:19.999053 86948 sgd_solver.cpp:106] Iteration 6000, lr = 0.001
- I0114 22:28:44.623858 86948 solver.cpp:341] Iteration 6500, Testing net (#0)
- I0114 22:28:45.117502 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0114 22:28:45.117586 86948 solver.cpp:409] Test net output #1: loss = 6.22176 (* 1 = 6.22176 loss)
- I0114 22:29:02.504175 86948 solver.cpp:237] Iteration 6500, loss = 1.09802
- I0114 22:29:02.504230 86948 solver.cpp:253] Train net output #0: loss = 1.09802 (* 1 = 1.09802 loss)
- I0114 22:29:02.504263 86948 sgd_solver.cpp:106] Iteration 6500, lr = 0.001
- I0114 22:51:08.072340 86948 solver.cpp:341] Iteration 7000, Testing net (#0)
- I0114 22:51:08.570003 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0114 22:51:08.570077 86948 solver.cpp:409] Test net output #1: loss = 7.06356 (* 1 = 7.06356 loss)
- I0114 22:51:21.792124 86948 solver.cpp:237] Iteration 7000, loss = 1.38146
- I0114 22:51:21.792181 86948 solver.cpp:253] Train net output #0: loss = 1.38146 (* 1 = 1.38146 loss)
- I0114 22:51:22.154963 86948 sgd_solver.cpp:106] Iteration 7000, lr = 0.001
- I0114 23:13:57.562847 86948 solver.cpp:341] Iteration 7500, Testing net (#0)
- I0114 23:13:58.056743 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0114 23:13:58.056813 86948 solver.cpp:409] Test net output #1: loss = 6.39689 (* 1 = 6.39689 loss)
- I0114 23:14:19.756793 86948 solver.cpp:237] Iteration 7500, loss = 1.23898
- I0114 23:14:19.756858 86948 solver.cpp:253] Train net output #0: loss = 1.23898 (* 1 = 1.23898 loss)
- I0114 23:14:20.112123 86948 sgd_solver.cpp:106] Iteration 7500, lr = 0.001
- I0114 23:37:58.058513 86948 solver.cpp:341] Iteration 8000, Testing net (#0)
- I0114 23:38:11.531455 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0114 23:38:11.531529 86948 solver.cpp:409] Test net output #1: loss = 6.81792 (* 1 = 6.81792 loss)
- I0114 23:38:25.274824 86948 solver.cpp:237] Iteration 8000, loss = 1.02914
- I0114 23:38:25.274878 86948 solver.cpp:253] Train net output #0: loss = 1.02914 (* 1 = 1.02914 loss)
- I0114 23:38:25.577234 86948 sgd_solver.cpp:106] Iteration 8000, lr = 0.001
- I0115 00:05:17.607841 86948 solver.cpp:341] Iteration 8500, Testing net (#0)
- I0115 00:05:18.392987 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 00:05:18.393034 86948 solver.cpp:409] Test net output #1: loss = 6.75926 (* 1 = 6.75926 loss)
- I0115 00:05:45.961122 86948 solver.cpp:237] Iteration 8500, loss = 1.06676
- I0115 00:05:45.961179 86948 solver.cpp:253] Train net output #0: loss = 1.06676 (* 1 = 1.06676 loss)
- I0115 00:05:46.262384 86948 sgd_solver.cpp:106] Iteration 8500, lr = 0.001
- I0115 00:32:18.132803 86948 solver.cpp:341] Iteration 9000, Testing net (#0)
- I0115 00:32:18.625879 86948 solver.cpp:409] Test net output #0: accuracy = 0.134
- I0115 00:32:18.625929 86948 solver.cpp:409] Test net output #1: loss = 7.14524 (* 1 = 7.14524 loss)
- I0115 00:32:30.816737 86948 solver.cpp:237] Iteration 9000, loss = 0.860762
- I0115 00:32:30.816799 86948 solver.cpp:253] Train net output #0: loss = 0.860762 (* 1 = 0.860762 loss)
- I0115 00:32:31.143739 86948 sgd_solver.cpp:106] Iteration 9000, lr = 0.001
- I0115 01:00:46.037446 86948 solver.cpp:341] Iteration 9500, Testing net (#0)
- I0115 01:00:46.825520 86948 solver.cpp:409] Test net output #0: accuracy = 0.148
- I0115 01:00:46.825567 86948 solver.cpp:409] Test net output #1: loss = 7.651 (* 1 = 7.651 loss)
- I0115 01:01:12.079197 86948 solver.cpp:237] Iteration 9500, loss = 0.76086
- I0115 01:01:12.079242 86948 solver.cpp:253] Train net output #0: loss = 0.76086 (* 1 = 0.76086 loss)
- I0115 01:01:12.424202 86948 sgd_solver.cpp:106] Iteration 9500, lr = 0.001
- I0115 01:23:29.878836 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_10000.caffemodel
- I0115 01:23:31.686995 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_10000.solverstate
- I0115 01:23:31.741595 86948 solver.cpp:341] Iteration 10000, Testing net (#0)
- I0115 01:23:53.724689 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0115 01:23:53.724733 86948 solver.cpp:409] Test net output #1: loss = 7.54584 (* 1 = 7.54584 loss)
- I0115 01:24:14.223534 86948 solver.cpp:237] Iteration 10000, loss = 0.774985
- I0115 01:24:14.223733 86948 solver.cpp:253] Train net output #0: loss = 0.774985 (* 1 = 0.774985 loss)
- I0115 01:24:14.602334 86948 sgd_solver.cpp:106] Iteration 10000, lr = 0.001
- I0115 01:47:00.340564 86948 solver.cpp:341] Iteration 10500, Testing net (#0)
- I0115 01:47:01.126672 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0115 01:47:01.126751 86948 solver.cpp:409] Test net output #1: loss = 7.71585 (* 1 = 7.71585 loss)
- I0115 01:47:13.787405 86948 solver.cpp:237] Iteration 10500, loss = 0.845633
- I0115 01:47:13.787449 86948 solver.cpp:253] Train net output #0: loss = 0.845633 (* 1 = 0.845633 loss)
- I0115 01:47:13.787469 86948 sgd_solver.cpp:106] Iteration 10500, lr = 0.001
- I0115 02:15:35.751904 86948 solver.cpp:341] Iteration 11000, Testing net (#0)
- I0115 02:15:36.241739 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 02:15:36.241788 86948 solver.cpp:409] Test net output #1: loss = 9.06797 (* 1 = 9.06797 loss)
- I0115 02:15:48.996397 86948 solver.cpp:237] Iteration 11000, loss = 0.546424
- I0115 02:15:48.996453 86948 solver.cpp:253] Train net output #0: loss = 0.546424 (* 1 = 0.546424 loss)
- I0115 02:15:48.996476 86948 sgd_solver.cpp:106] Iteration 11000, lr = 0.001
- I0115 02:43:31.419049 86948 solver.cpp:341] Iteration 11500, Testing net (#0)
- I0115 02:43:32.210687 86948 solver.cpp:409] Test net output #0: accuracy = 0.104
- I0115 02:43:32.210752 86948 solver.cpp:409] Test net output #1: loss = 8.80403 (* 1 = 8.80403 loss)
- I0115 02:43:44.821365 86948 solver.cpp:237] Iteration 11500, loss = 0.706103
- I0115 02:43:44.821415 86948 solver.cpp:253] Train net output #0: loss = 0.706103 (* 1 = 0.706103 loss)
- I0115 02:43:45.123746 86948 sgd_solver.cpp:106] Iteration 11500, lr = 0.001
- I0115 03:14:23.524358 86948 solver.cpp:341] Iteration 12000, Testing net (#0)
- I0115 03:14:46.023252 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0115 03:14:46.023300 86948 solver.cpp:409] Test net output #1: loss = 8.83962 (* 1 = 8.83962 loss)
- I0115 03:14:59.884909 86948 solver.cpp:237] Iteration 12000, loss = 0.786133
- I0115 03:14:59.885141 86948 solver.cpp:253] Train net output #0: loss = 0.786133 (* 1 = 0.786133 loss)
- I0115 03:15:00.243067 86948 sgd_solver.cpp:106] Iteration 12000, lr = 0.001
- I0115 03:41:47.587093 86948 solver.cpp:341] Iteration 12500, Testing net (#0)
- I0115 03:41:48.073796 86948 solver.cpp:409] Test net output #0: accuracy = 0.098
- I0115 03:41:48.073854 86948 solver.cpp:409] Test net output #1: loss = 9.32967 (* 1 = 9.32967 loss)
- I0115 03:42:09.332177 86948 solver.cpp:237] Iteration 12500, loss = 0.707963
- I0115 03:42:09.332263 86948 solver.cpp:253] Train net output #0: loss = 0.707963 (* 1 = 0.707963 loss)
- I0115 03:42:09.676595 86948 sgd_solver.cpp:106] Iteration 12500, lr = 0.001
- I0115 04:06:02.889426 86948 solver.cpp:341] Iteration 13000, Testing net (#0)
- I0115 04:06:03.378060 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 04:06:03.378123 86948 solver.cpp:409] Test net output #1: loss = 9.17314 (* 1 = 9.17314 loss)
- I0115 04:06:22.668329 86948 solver.cpp:237] Iteration 13000, loss = 0.509333
- I0115 04:06:22.668419 86948 solver.cpp:253] Train net output #0: loss = 0.509333 (* 1 = 0.509333 loss)
- I0115 04:06:22.970981 86948 sgd_solver.cpp:106] Iteration 13000, lr = 0.001
- I0115 04:32:01.955811 86948 solver.cpp:341] Iteration 13500, Testing net (#0)
- I0115 04:32:02.444028 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0115 04:32:02.444092 86948 solver.cpp:409] Test net output #1: loss = 9.12479 (* 1 = 9.12479 loss)
- I0115 04:32:20.073575 86948 solver.cpp:237] Iteration 13500, loss = 0.500275
- I0115 04:32:20.073634 86948 solver.cpp:253] Train net output #0: loss = 0.500275 (* 1 = 0.500275 loss)
- I0115 04:32:20.435401 86948 sgd_solver.cpp:106] Iteration 13500, lr = 0.001
- I0115 04:57:03.939975 86948 solver.cpp:341] Iteration 14000, Testing net (#0)
- I0115 04:57:24.463392 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0115 04:57:24.463464 86948 solver.cpp:409] Test net output #1: loss = 9.76042 (* 1 = 9.76042 loss)
- I0115 04:57:48.684335 86948 solver.cpp:237] Iteration 14000, loss = 0.390494
- I0115 04:57:48.684521 86948 solver.cpp:253] Train net output #0: loss = 0.390494 (* 1 = 0.390494 loss)
- I0115 04:57:49.035643 86948 sgd_solver.cpp:106] Iteration 14000, lr = 0.001
- I0115 05:26:57.706825 86948 solver.cpp:341] Iteration 14500, Testing net (#0)
- I0115 05:26:58.193374 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 05:26:58.193425 86948 solver.cpp:409] Test net output #1: loss = 10.1548 (* 1 = 10.1548 loss)
- I0115 05:27:13.510231 86948 solver.cpp:237] Iteration 14500, loss = 0.692382
- I0115 05:27:13.510294 86948 solver.cpp:253] Train net output #0: loss = 0.692382 (* 1 = 0.692382 loss)
- I0115 05:27:13.812265 86948 sgd_solver.cpp:106] Iteration 14500, lr = 0.001
- I0115 05:55:32.336146 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_15000.caffemodel
- I0115 05:55:34.037616 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_15000.solverstate
- I0115 05:55:34.086087 86948 solver.cpp:341] Iteration 15000, Testing net (#0)
- I0115 05:55:34.573166 86948 solver.cpp:409] Test net output #0: accuracy = 0.106
- I0115 05:55:34.573215 86948 solver.cpp:409] Test net output #1: loss = 9.8564 (* 1 = 9.8564 loss)
- I0115 05:55:55.581955 86948 solver.cpp:237] Iteration 15000, loss = 0.507724
- I0115 05:55:55.582023 86948 solver.cpp:253] Train net output #0: loss = 0.507724 (* 1 = 0.507724 loss)
- I0115 05:55:55.884019 86948 sgd_solver.cpp:106] Iteration 15000, lr = 0.001
- I0115 06:23:44.102008 86948 solver.cpp:341] Iteration 15500, Testing net (#0)
- I0115 06:23:44.889958 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0115 06:23:44.890012 86948 solver.cpp:409] Test net output #1: loss = 10.4728 (* 1 = 10.4728 loss)
- I0115 06:24:01.230536 86948 solver.cpp:237] Iteration 15500, loss = 0.417666
- I0115 06:24:01.230588 86948 solver.cpp:253] Train net output #0: loss = 0.417666 (* 1 = 0.417666 loss)
- I0115 06:24:01.606312 86948 sgd_solver.cpp:106] Iteration 15500, lr = 0.001
- I0115 06:48:03.508607 86948 solver.cpp:341] Iteration 16000, Testing net (#0)
- I0115 06:48:26.399644 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0115 06:48:26.399693 86948 solver.cpp:409] Test net output #1: loss = 10.979 (* 1 = 10.979 loss)
- I0115 06:48:27.674831 86948 solver.cpp:237] Iteration 16000, loss = 0.40489
- I0115 06:48:27.674890 86948 solver.cpp:253] Train net output #0: loss = 0.40489 (* 1 = 0.40489 loss)
- I0115 06:48:28.055383 86948 sgd_solver.cpp:106] Iteration 16000, lr = 0.001
- I0115 07:15:44.285596 86948 solver.cpp:341] Iteration 16500, Testing net (#0)
- I0115 07:15:45.069802 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 07:15:45.069869 86948 solver.cpp:409] Test net output #1: loss = 10.3104 (* 1 = 10.3104 loss)
- I0115 07:16:09.445392 86948 solver.cpp:237] Iteration 16500, loss = 0.29393
- I0115 07:16:09.445464 86948 solver.cpp:253] Train net output #0: loss = 0.29393 (* 1 = 0.29393 loss)
- I0115 07:16:09.445511 86948 sgd_solver.cpp:106] Iteration 16500, lr = 0.001
- I0115 07:38:32.618374 86948 solver.cpp:341] Iteration 17000, Testing net (#0)
- I0115 07:38:33.405944 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0115 07:38:33.405997 86948 solver.cpp:409] Test net output #1: loss = 11.4087 (* 1 = 11.4087 loss)
- I0115 07:39:02.668887 86948 solver.cpp:237] Iteration 17000, loss = 0.475379
- I0115 07:39:02.669137 86948 solver.cpp:253] Train net output #0: loss = 0.475379 (* 1 = 0.475379 loss)
- I0115 07:39:03.016024 86948 sgd_solver.cpp:106] Iteration 17000, lr = 0.001
- I0115 08:07:22.610738 86948 solver.cpp:341] Iteration 17500, Testing net (#0)
- I0115 08:07:23.399281 86948 solver.cpp:409] Test net output #0: accuracy = 0.098
- I0115 08:07:23.399346 86948 solver.cpp:409] Test net output #1: loss = 11.3343 (* 1 = 11.3343 loss)
- I0115 08:07:35.556828 86948 solver.cpp:237] Iteration 17500, loss = 0.412808
- I0115 08:07:35.556901 86948 solver.cpp:253] Train net output #0: loss = 0.412808 (* 1 = 0.412808 loss)
- I0115 08:07:35.929672 86948 sgd_solver.cpp:106] Iteration 17500, lr = 0.001
- I0115 08:35:21.397052 86948 solver.cpp:341] Iteration 18000, Testing net (#0)
- I0115 08:35:37.317168 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0115 08:35:37.317226 86948 solver.cpp:409] Test net output #1: loss = 11.6325 (* 1 = 11.6325 loss)
- I0115 08:35:54.273187 86948 solver.cpp:237] Iteration 18000, loss = 0.330421
- I0115 08:35:54.273433 86948 solver.cpp:253] Train net output #0: loss = 0.330421 (* 1 = 0.330421 loss)
- I0115 08:35:54.644815 86948 sgd_solver.cpp:106] Iteration 18000, lr = 0.001
- I0115 09:04:32.714617 86948 solver.cpp:341] Iteration 18500, Testing net (#0)
- I0115 09:04:33.499600 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0115 09:04:33.499655 86948 solver.cpp:409] Test net output #1: loss = 11.2051 (* 1 = 11.2051 loss)
- I0115 09:04:45.729574 86948 solver.cpp:237] Iteration 18500, loss = 0.248244
- I0115 09:04:45.729639 86948 solver.cpp:253] Train net output #0: loss = 0.248244 (* 1 = 0.248244 loss)
- I0115 09:04:46.056254 86948 sgd_solver.cpp:106] Iteration 18500, lr = 0.001
- I0115 09:33:10.057044 86948 solver.cpp:341] Iteration 19000, Testing net (#0)
- I0115 09:33:10.552685 86948 solver.cpp:409] Test net output #0: accuracy = 0.088
- I0115 09:33:10.552734 86948 solver.cpp:409] Test net output #1: loss = 12.5566 (* 1 = 12.5566 loss)
- I0115 09:33:27.995590 86948 solver.cpp:237] Iteration 19000, loss = 0.353737
- I0115 09:33:27.995659 86948 solver.cpp:253] Train net output #0: loss = 0.353737 (* 1 = 0.353737 loss)
- I0115 09:33:28.297780 86948 sgd_solver.cpp:106] Iteration 19000, lr = 0.001
- I0115 09:59:55.573832 86948 solver.cpp:341] Iteration 19500, Testing net (#0)
- I0115 09:59:56.362241 86948 solver.cpp:409] Test net output #0: accuracy = 0.1
- I0115 09:59:56.362289 86948 solver.cpp:409] Test net output #1: loss = 12.3916 (* 1 = 12.3916 loss)
- I0115 10:00:18.836007 86948 solver.cpp:237] Iteration 19500, loss = 0.343227
- I0115 10:00:18.836064 86948 solver.cpp:253] Train net output #0: loss = 0.343227 (* 1 = 0.343227 loss)
- I0115 10:00:19.138316 86948 sgd_solver.cpp:106] Iteration 19500, lr = 0.001
- I0115 10:28:15.445359 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_20000.caffemodel
- I0115 10:28:17.336303 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_20000.solverstate
- I0115 10:28:17.381590 86948 solver.cpp:341] Iteration 20000, Testing net (#0)
- I0115 10:28:39.768867 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0115 10:28:39.768931 86948 solver.cpp:409] Test net output #1: loss = 12.533 (* 1 = 12.533 loss)
- I0115 10:28:54.931838 86948 solver.cpp:237] Iteration 20000, loss = 0.288482
- I0115 10:28:54.932060 86948 solver.cpp:253] Train net output #0: loss = 0.288482 (* 1 = 0.288482 loss)
- I0115 10:28:55.283051 86948 sgd_solver.cpp:106] Iteration 20000, lr = 0.001
- I0115 10:57:32.207489 86948 solver.cpp:341] Iteration 20500, Testing net (#0)
- I0115 10:57:32.996423 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0115 10:57:32.996495 86948 solver.cpp:409] Test net output #1: loss = 12.6864 (* 1 = 12.6864 loss)
- I0115 10:57:47.032009 86948 solver.cpp:237] Iteration 20500, loss = 0.232661
- I0115 10:57:47.032083 86948 solver.cpp:253] Train net output #0: loss = 0.232661 (* 1 = 0.232661 loss)
- I0115 10:57:47.383950 86948 sgd_solver.cpp:106] Iteration 20500, lr = 0.001
- I0115 11:25:23.241185 86948 solver.cpp:341] Iteration 21000, Testing net (#0)
- I0115 11:25:24.027587 86948 solver.cpp:409] Test net output #0: accuracy = 0.088
- I0115 11:25:24.027698 86948 solver.cpp:409] Test net output #1: loss = 13.6841 (* 1 = 13.6841 loss)
- I0115 11:25:39.803107 86948 solver.cpp:237] Iteration 21000, loss = 0.163174
- I0115 11:25:39.803169 86948 solver.cpp:253] Train net output #0: loss = 0.163174 (* 1 = 0.163174 loss)
- I0115 11:25:40.157781 86948 sgd_solver.cpp:106] Iteration 21000, lr = 0.001
- I0115 11:54:44.030809 86948 solver.cpp:341] Iteration 21500, Testing net (#0)
- I0115 11:54:44.521996 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 11:54:44.522053 86948 solver.cpp:409] Test net output #1: loss = 12.5438 (* 1 = 12.5438 loss)
- I0115 11:54:57.836741 86948 solver.cpp:237] Iteration 21500, loss = 0.339656
- I0115 11:54:57.836797 86948 solver.cpp:253] Train net output #0: loss = 0.339656 (* 1 = 0.339656 loss)
- I0115 11:54:58.139484 86948 sgd_solver.cpp:106] Iteration 21500, lr = 0.001
- I0115 12:22:13.480896 86948 solver.cpp:341] Iteration 22000, Testing net (#0)
- I0115 12:22:28.147697 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 12:22:28.147763 86948 solver.cpp:409] Test net output #1: loss = 12.3577 (* 1 = 12.3577 loss)
- I0115 12:22:49.409201 86948 solver.cpp:237] Iteration 22000, loss = 0.13327
- I0115 12:22:49.409493 86948 solver.cpp:253] Train net output #0: loss = 0.133271 (* 1 = 0.133271 loss)
- I0115 12:22:49.774583 86948 sgd_solver.cpp:106] Iteration 22000, lr = 0.001
- I0115 12:47:31.931730 86948 solver.cpp:341] Iteration 22500, Testing net (#0)
- I0115 12:47:32.418171 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0115 12:47:32.418236 86948 solver.cpp:409] Test net output #1: loss = 13.1006 (* 1 = 13.1006 loss)
- I0115 12:47:54.692035 86948 solver.cpp:237] Iteration 22500, loss = 0.171656
- I0115 12:47:54.692101 86948 solver.cpp:253] Train net output #0: loss = 0.171656 (* 1 = 0.171656 loss)
- I0115 12:47:55.034996 86948 sgd_solver.cpp:106] Iteration 22500, lr = 0.001
- I0115 13:15:58.244537 86948 solver.cpp:341] Iteration 23000, Testing net (#0)
- I0115 13:15:59.031538 86948 solver.cpp:409] Test net output #0: accuracy = 0.106
- I0115 13:15:59.031599 86948 solver.cpp:409] Test net output #1: loss = 13.0743 (* 1 = 13.0743 loss)
- I0115 13:16:18.517042 86948 solver.cpp:237] Iteration 23000, loss = 0.137712
- I0115 13:16:18.517102 86948 solver.cpp:253] Train net output #0: loss = 0.137712 (* 1 = 0.137712 loss)
- I0115 13:16:18.517133 86948 sgd_solver.cpp:106] Iteration 23000, lr = 0.001
- I0115 13:45:23.165734 86948 solver.cpp:341] Iteration 23500, Testing net (#0)
- I0115 13:45:23.656873 86948 solver.cpp:409] Test net output #0: accuracy = 0.092
- I0115 13:45:23.656924 86948 solver.cpp:409] Test net output #1: loss = 13.9083 (* 1 = 13.9083 loss)
- I0115 13:45:48.137435 86948 solver.cpp:237] Iteration 23500, loss = 0.265021
- I0115 13:45:48.137493 86948 solver.cpp:253] Train net output #0: loss = 0.265021 (* 1 = 0.265021 loss)
- I0115 13:45:48.466833 86948 sgd_solver.cpp:106] Iteration 23500, lr = 0.001
- I0115 14:15:13.209718 86948 solver.cpp:341] Iteration 24000, Testing net (#0)
- I0115 14:15:27.127425 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0115 14:15:27.127475 86948 solver.cpp:409] Test net output #1: loss = 13.4618 (* 1 = 13.4618 loss)
- I0115 14:15:52.682905 86948 solver.cpp:237] Iteration 24000, loss = 0.0797336
- I0115 14:15:52.683089 86948 solver.cpp:253] Train net output #0: loss = 0.0797339 (* 1 = 0.0797339 loss)
- I0115 14:15:53.035676 86948 sgd_solver.cpp:106] Iteration 24000, lr = 0.001
- I0115 14:44:07.245959 86948 solver.cpp:341] Iteration 24500, Testing net (#0)
- I0115 14:44:08.031157 86948 solver.cpp:409] Test net output #0: accuracy = 0.13
- I0115 14:44:08.031208 86948 solver.cpp:409] Test net output #1: loss = 13.2707 (* 1 = 13.2707 loss)
- I0115 14:44:20.815029 86948 solver.cpp:237] Iteration 24500, loss = 0.194503
- I0115 14:44:20.815084 86948 solver.cpp:253] Train net output #0: loss = 0.194503 (* 1 = 0.194503 loss)
- I0115 14:44:21.117302 86948 sgd_solver.cpp:106] Iteration 24500, lr = 0.001
- I0115 15:13:06.487923 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_25000.caffemodel
- I0115 15:13:08.724500 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_25000.solverstate
- I0115 15:13:08.772209 86948 solver.cpp:341] Iteration 25000, Testing net (#0)
- I0115 15:13:09.255282 86948 solver.cpp:409] Test net output #0: accuracy = 0.14
- I0115 15:13:09.255313 86948 solver.cpp:409] Test net output #1: loss = 13.9014 (* 1 = 13.9014 loss)
- I0115 15:13:28.586627 86948 solver.cpp:237] Iteration 25000, loss = 0.198213
- I0115 15:13:28.586673 86948 solver.cpp:253] Train net output #0: loss = 0.198213 (* 1 = 0.198213 loss)
- I0115 15:13:28.586694 86948 sgd_solver.cpp:106] Iteration 25000, lr = 0.001
- I0115 15:39:11.728093 86948 solver.cpp:341] Iteration 25500, Testing net (#0)
- I0115 15:39:12.220155 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 15:39:12.220222 86948 solver.cpp:409] Test net output #1: loss = 14.6033 (* 1 = 14.6033 loss)
- I0115 15:39:13.502130 86948 solver.cpp:237] Iteration 25500, loss = 0.250862
- I0115 15:39:13.502204 86948 solver.cpp:253] Train net output #0: loss = 0.250862 (* 1 = 0.250862 loss)
- I0115 15:39:13.805196 86948 sgd_solver.cpp:106] Iteration 25500, lr = 0.001
- I0115 16:04:51.572655 86948 solver.cpp:341] Iteration 26000, Testing net (#0)
- I0115 16:05:10.281999 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0115 16:05:10.282049 86948 solver.cpp:409] Test net output #1: loss = 15.0046 (* 1 = 15.0046 loss)
- I0115 16:05:32.037425 86948 solver.cpp:237] Iteration 26000, loss = 0.0719195
- I0115 16:05:32.037680 86948 solver.cpp:253] Train net output #0: loss = 0.0719199 (* 1 = 0.0719199 loss)
- I0115 16:05:32.395720 86948 sgd_solver.cpp:106] Iteration 26000, lr = 0.001
- I0115 16:34:12.324597 86948 solver.cpp:341] Iteration 26500, Testing net (#0)
- I0115 16:34:12.813323 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0115 16:34:12.813388 86948 solver.cpp:409] Test net output #1: loss = 14.6345 (* 1 = 14.6345 loss)
- I0115 16:34:29.782642 86948 solver.cpp:237] Iteration 26500, loss = 0.0966772
- I0115 16:34:29.782699 86948 solver.cpp:253] Train net output #0: loss = 0.0966777 (* 1 = 0.0966777 loss)
- I0115 16:34:30.084247 86948 sgd_solver.cpp:106] Iteration 26500, lr = 0.001
- I0115 17:03:03.270098 86948 solver.cpp:341] Iteration 27000, Testing net (#0)
- I0115 17:03:03.758162 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0115 17:03:03.758208 86948 solver.cpp:409] Test net output #1: loss = 14.6089 (* 1 = 14.6089 loss)
- I0115 17:03:24.800110 86948 solver.cpp:237] Iteration 27000, loss = 0.11815
- I0115 17:03:24.800163 86948 solver.cpp:253] Train net output #0: loss = 0.118151 (* 1 = 0.118151 loss)
- I0115 17:03:25.180635 86948 sgd_solver.cpp:106] Iteration 27000, lr = 0.001
- I0115 17:30:54.107142 86948 solver.cpp:341] Iteration 27500, Testing net (#0)
- I0115 17:30:54.893143 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 17:30:54.893173 86948 solver.cpp:409] Test net output #1: loss = 15.1023 (* 1 = 15.1023 loss)
- I0115 17:31:15.332089 86948 solver.cpp:237] Iteration 27500, loss = 0.166894
- I0115 17:31:15.332123 86948 solver.cpp:253] Train net output #0: loss = 0.166895 (* 1 = 0.166895 loss)
- I0115 17:31:15.332142 86948 sgd_solver.cpp:106] Iteration 27500, lr = 0.001
- I0115 17:59:33.270324 86948 solver.cpp:341] Iteration 28000, Testing net (#0)
- I0115 17:59:47.750169 86948 solver.cpp:409] Test net output #0: accuracy = 0.098
- I0115 17:59:47.750216 86948 solver.cpp:409] Test net output #1: loss = 15.66 (* 1 = 15.66 loss)
- I0115 18:00:00.982518 86948 solver.cpp:237] Iteration 28000, loss = 0.0917245
- I0115 18:00:00.982571 86948 solver.cpp:253] Train net output #0: loss = 0.0917249 (* 1 = 0.0917249 loss)
- I0115 18:00:00.982594 86948 sgd_solver.cpp:106] Iteration 28000, lr = 0.001
- I0115 18:27:58.396294 86948 solver.cpp:341] Iteration 28500, Testing net (#0)
- I0115 18:27:58.884178 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 18:27:58.884222 86948 solver.cpp:409] Test net output #1: loss = 15.1945 (* 1 = 15.1945 loss)
- I0115 18:28:10.991932 86948 solver.cpp:237] Iteration 28500, loss = 0.233415
- I0115 18:28:10.991986 86948 solver.cpp:253] Train net output #0: loss = 0.233416 (* 1 = 0.233416 loss)
- I0115 18:28:11.294566 86948 sgd_solver.cpp:106] Iteration 28500, lr = 0.001
- I0115 18:53:29.752965 86948 solver.cpp:341] Iteration 29000, Testing net (#0)
- I0115 18:53:30.241336 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 18:53:30.241379 86948 solver.cpp:409] Test net output #1: loss = 15.9175 (* 1 = 15.9175 loss)
- I0115 18:53:45.822639 86948 solver.cpp:237] Iteration 29000, loss = 0.180933
- I0115 18:53:45.822686 86948 solver.cpp:253] Train net output #0: loss = 0.180933 (* 1 = 0.180933 loss)
- I0115 18:53:46.178261 86948 sgd_solver.cpp:106] Iteration 29000, lr = 0.001
- I0115 19:20:57.301084 86948 solver.cpp:341] Iteration 29500, Testing net (#0)
- I0115 19:20:58.086940 86948 solver.cpp:409] Test net output #0: accuracy = 0.108
- I0115 19:20:58.086983 86948 solver.cpp:409] Test net output #1: loss = 15.0374 (* 1 = 15.0374 loss)
- I0115 19:21:15.846012 86948 solver.cpp:237] Iteration 29500, loss = 0.126706
- I0115 19:21:15.846066 86948 solver.cpp:253] Train net output #0: loss = 0.126706 (* 1 = 0.126706 loss)
- I0115 19:21:16.226754 86948 sgd_solver.cpp:106] Iteration 29500, lr = 0.001
- I0115 19:50:20.279408 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_30000.caffemodel
- I0115 19:50:23.981377 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_30000.solverstate
- I0115 19:50:24.027272 86948 solver.cpp:341] Iteration 30000, Testing net (#0)
- I0115 19:50:41.670526 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0115 19:50:41.670569 86948 solver.cpp:409] Test net output #1: loss = 14.7405 (* 1 = 14.7405 loss)
- I0115 19:51:10.223618 86948 solver.cpp:237] Iteration 30000, loss = 0.115344
- I0115 19:51:10.223824 86948 solver.cpp:253] Train net output #0: loss = 0.115344 (* 1 = 0.115344 loss)
- I0115 19:51:10.554566 86948 sgd_solver.cpp:106] Iteration 30000, lr = 0.001
- I0115 20:19:14.663985 86948 solver.cpp:341] Iteration 30500, Testing net (#0)
- I0115 20:19:15.150887 86948 solver.cpp:409] Test net output #0: accuracy = 0.108
- I0115 20:19:15.150933 86948 solver.cpp:409] Test net output #1: loss = 14.5146 (* 1 = 14.5146 loss)
- I0115 20:19:27.349344 86948 solver.cpp:237] Iteration 30500, loss = 0.176101
- I0115 20:19:27.349388 86948 solver.cpp:253] Train net output #0: loss = 0.176101 (* 1 = 0.176101 loss)
- I0115 20:19:27.715607 86948 sgd_solver.cpp:106] Iteration 30500, lr = 0.001
- I0115 20:48:49.768007 86948 solver.cpp:341] Iteration 31000, Testing net (#0)
- I0115 20:48:50.554081 86948 solver.cpp:409] Test net output #0: accuracy = 0.146
- I0115 20:48:50.554113 86948 solver.cpp:409] Test net output #1: loss = 15.8795 (* 1 = 15.8795 loss)
- I0115 20:49:03.957561 86948 solver.cpp:237] Iteration 31000, loss = 0.152509
- I0115 20:49:03.957617 86948 solver.cpp:253] Train net output #0: loss = 0.15251 (* 1 = 0.15251 loss)
- I0115 20:49:04.340080 86948 sgd_solver.cpp:106] Iteration 31000, lr = 0.001
- I0115 21:17:05.424391 86948 solver.cpp:341] Iteration 31500, Testing net (#0)
- I0115 21:17:06.212334 86948 solver.cpp:409] Test net output #0: accuracy = 0.14
- I0115 21:17:06.212385 86948 solver.cpp:409] Test net output #1: loss = 15.1155 (* 1 = 15.1155 loss)
- I0115 21:17:24.824414 86948 solver.cpp:237] Iteration 31500, loss = 0.106935
- I0115 21:17:24.824450 86948 solver.cpp:253] Train net output #0: loss = 0.106935 (* 1 = 0.106935 loss)
- I0115 21:17:24.824468 86948 sgd_solver.cpp:106] Iteration 31500, lr = 0.001
- I0115 21:42:54.570103 86948 solver.cpp:341] Iteration 32000, Testing net (#0)
- I0115 21:43:11.151145 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0115 21:43:11.151206 86948 solver.cpp:409] Test net output #1: loss = 15.0276 (* 1 = 15.0276 loss)
- I0115 21:43:37.811069 86948 solver.cpp:237] Iteration 32000, loss = 0.105481
- I0115 21:43:37.811275 86948 solver.cpp:253] Train net output #0: loss = 0.105482 (* 1 = 0.105482 loss)
- I0115 21:43:38.146634 86948 sgd_solver.cpp:106] Iteration 32000, lr = 0.001
- I0115 22:10:46.791887 86948 solver.cpp:341] Iteration 32500, Testing net (#0)
- I0115 22:10:47.576876 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0115 22:10:47.576911 86948 solver.cpp:409] Test net output #1: loss = 16.8682 (* 1 = 16.8682 loss)
- I0115 22:11:00.348250 86948 solver.cpp:237] Iteration 32500, loss = 0.136433
- I0115 22:11:00.348315 86948 solver.cpp:253] Train net output #0: loss = 0.136434 (* 1 = 0.136434 loss)
- I0115 22:11:00.703428 86948 sgd_solver.cpp:106] Iteration 32500, lr = 0.001
- I0115 22:40:09.482702 86948 solver.cpp:341] Iteration 33000, Testing net (#0)
- I0115 22:40:09.970733 86948 solver.cpp:409] Test net output #0: accuracy = 0.106
- I0115 22:40:09.970774 86948 solver.cpp:409] Test net output #1: loss = 15.7237 (* 1 = 15.7237 loss)
- I0115 22:40:33.196830 86948 solver.cpp:237] Iteration 33000, loss = 0.0419159
- I0115 22:40:33.196878 86948 solver.cpp:253] Train net output #0: loss = 0.0419162 (* 1 = 0.0419162 loss)
- I0115 22:40:33.520575 86948 sgd_solver.cpp:106] Iteration 33000, lr = 0.001
- I0115 23:09:33.160050 86948 solver.cpp:341] Iteration 33500, Testing net (#0)
- I0115 23:09:33.949401 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0115 23:09:33.949445 86948 solver.cpp:409] Test net output #1: loss = 16.4421 (* 1 = 16.4421 loss)
- I0115 23:09:55.919976 86948 solver.cpp:237] Iteration 33500, loss = 0.0335577
- I0115 23:09:55.920012 86948 solver.cpp:253] Train net output #0: loss = 0.0335579 (* 1 = 0.0335579 loss)
- I0115 23:09:55.920037 86948 sgd_solver.cpp:106] Iteration 33500, lr = 0.001
- I0115 23:38:55.816653 86948 solver.cpp:341] Iteration 34000, Testing net (#0)
- I0115 23:39:11.819865 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0115 23:39:11.819916 86948 solver.cpp:409] Test net output #1: loss = 15.4734 (* 1 = 15.4734 loss)
- I0115 23:39:39.288936 86948 solver.cpp:237] Iteration 34000, loss = 0.0957365
- I0115 23:39:39.289160 86948 solver.cpp:253] Train net output #0: loss = 0.0957366 (* 1 = 0.0957366 loss)
- I0115 23:39:39.647867 86948 sgd_solver.cpp:106] Iteration 34000, lr = 0.001
- I0116 00:07:39.981012 86948 solver.cpp:341] Iteration 34500, Testing net (#0)
- I0116 00:07:40.466253 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0116 00:07:40.466292 86948 solver.cpp:409] Test net output #1: loss = 16.4261 (* 1 = 16.4261 loss)
- I0116 00:07:53.536689 86948 solver.cpp:237] Iteration 34500, loss = 0.0408948
- I0116 00:07:53.536746 86948 solver.cpp:253] Train net output #0: loss = 0.040895 (* 1 = 0.040895 loss)
- I0116 00:07:53.899355 86948 sgd_solver.cpp:106] Iteration 34500, lr = 0.001
- I0116 00:36:33.319710 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_35000.caffemodel
- I0116 00:36:35.887605 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_35000.solverstate
- I0116 00:36:35.931835 86948 solver.cpp:341] Iteration 35000, Testing net (#0)
- I0116 00:36:36.414018 86948 solver.cpp:409] Test net output #0: accuracy = 0.13
- I0116 00:36:36.414049 86948 solver.cpp:409] Test net output #1: loss = 15.4727 (* 1 = 15.4727 loss)
- I0116 00:36:56.310194 86948 solver.cpp:237] Iteration 35000, loss = 0.043666
- I0116 00:36:56.310250 86948 solver.cpp:253] Train net output #0: loss = 0.0436661 (* 1 = 0.0436661 loss)
- I0116 00:36:56.666497 86948 sgd_solver.cpp:106] Iteration 35000, lr = 0.001
- I0116 01:03:28.900647 86948 solver.cpp:341] Iteration 35500, Testing net (#0)
- I0116 01:03:29.686399 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0116 01:03:29.686441 86948 solver.cpp:409] Test net output #1: loss = 15.9959 (* 1 = 15.9959 loss)
- I0116 01:03:59.950546 86948 solver.cpp:237] Iteration 35500, loss = 0.0498535
- I0116 01:03:59.950760 86948 solver.cpp:253] Train net output #0: loss = 0.0498535 (* 1 = 0.0498535 loss)
- I0116 01:04:00.292171 86948 sgd_solver.cpp:106] Iteration 35500, lr = 0.001
- I0116 01:30:57.973039 86948 solver.cpp:341] Iteration 36000, Testing net (#0)
- I0116 01:31:12.973151 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0116 01:31:12.973201 86948 solver.cpp:409] Test net output #1: loss = 16.1145 (* 1 = 16.1145 loss)
- I0116 01:31:27.247730 86948 solver.cpp:237] Iteration 36000, loss = 0.0438099
- I0116 01:31:27.247781 86948 solver.cpp:253] Train net output #0: loss = 0.04381 (* 1 = 0.04381 loss)
- I0116 01:31:27.622767 86948 sgd_solver.cpp:106] Iteration 36000, lr = 0.001
- I0116 02:00:12.971065 86948 solver.cpp:341] Iteration 36500, Testing net (#0)
- I0116 02:00:13.755668 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0116 02:00:13.755719 86948 solver.cpp:409] Test net output #1: loss = 15.5805 (* 1 = 15.5805 loss)
- I0116 02:00:36.261240 86948 solver.cpp:237] Iteration 36500, loss = 0.132418
- I0116 02:00:36.261309 86948 solver.cpp:253] Train net output #0: loss = 0.132418 (* 1 = 0.132418 loss)
- I0116 02:00:36.563027 86948 sgd_solver.cpp:106] Iteration 36500, lr = 0.001
- I0116 02:29:10.770051 86948 solver.cpp:341] Iteration 37000, Testing net (#0)
- I0116 02:29:11.554711 86948 solver.cpp:409] Test net output #0: accuracy = 0.1
- I0116 02:29:11.554755 86948 solver.cpp:409] Test net output #1: loss = 17.2159 (* 1 = 17.2159 loss)
- I0116 02:29:33.752339 86948 solver.cpp:237] Iteration 37000, loss = 0.049209
- I0116 02:29:33.752393 86948 solver.cpp:253] Train net output #0: loss = 0.049209 (* 1 = 0.049209 loss)
- I0116 02:29:34.055008 86948 sgd_solver.cpp:106] Iteration 37000, lr = 0.001
- I0116 02:59:24.341718 86948 solver.cpp:341] Iteration 37500, Testing net (#0)
- I0116 02:59:24.835041 86948 solver.cpp:409] Test net output #0: accuracy = 0.108
- I0116 02:59:24.835085 86948 solver.cpp:409] Test net output #1: loss = 16.7629 (* 1 = 16.7629 loss)
- I0116 02:59:43.998987 86948 solver.cpp:237] Iteration 37500, loss = 0.0429082
- I0116 02:59:43.999044 86948 solver.cpp:253] Train net output #0: loss = 0.0429082 (* 1 = 0.0429082 loss)
- I0116 02:59:44.358476 86948 sgd_solver.cpp:106] Iteration 37500, lr = 0.001
- I0116 03:25:54.589052 86948 solver.cpp:341] Iteration 38000, Testing net (#0)
- I0116 03:26:07.712508 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0116 03:26:07.712555 86948 solver.cpp:409] Test net output #1: loss = 16.8652 (* 1 = 16.8652 loss)
- I0116 03:26:25.959569 86948 solver.cpp:237] Iteration 38000, loss = 0.0664696
- I0116 03:26:25.959764 86948 solver.cpp:253] Train net output #0: loss = 0.0664696 (* 1 = 0.0664696 loss)
- I0116 03:26:25.959852 86948 sgd_solver.cpp:106] Iteration 38000, lr = 0.001
- I0116 03:52:32.271384 86948 solver.cpp:341] Iteration 38500, Testing net (#0)
- I0116 03:52:32.757511 86948 solver.cpp:409] Test net output #0: accuracy = 0.102
- I0116 03:52:32.757550 86948 solver.cpp:409] Test net output #1: loss = 16.6012 (* 1 = 16.6012 loss)
- I0116 03:52:45.400449 86948 solver.cpp:237] Iteration 38500, loss = 0.0561813
- I0116 03:52:45.400507 86948 solver.cpp:253] Train net output #0: loss = 0.0561813 (* 1 = 0.0561813 loss)
- I0116 03:52:45.703629 86948 sgd_solver.cpp:106] Iteration 38500, lr = 0.001
- I0116 04:19:38.358794 86948 solver.cpp:341] Iteration 39000, Testing net (#0)
- I0116 04:19:39.143568 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0116 04:19:39.143609 86948 solver.cpp:409] Test net output #1: loss = 17.4246 (* 1 = 17.4246 loss)
- I0116 04:19:51.897547 86948 solver.cpp:237] Iteration 39000, loss = 0.0750932
- I0116 04:19:51.897608 86948 solver.cpp:253] Train net output #0: loss = 0.0750932 (* 1 = 0.0750932 loss)
- I0116 04:19:52.250996 86948 sgd_solver.cpp:106] Iteration 39000, lr = 0.001
- I0116 04:48:40.934551 86948 solver.cpp:341] Iteration 39500, Testing net (#0)
- I0116 04:48:41.424099 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0116 04:48:41.424131 86948 solver.cpp:409] Test net output #1: loss = 16.8196 (* 1 = 16.8196 loss)
- I0116 04:49:06.289463 86948 solver.cpp:237] Iteration 39500, loss = 0.0240466
- I0116 04:49:06.289499 86948 solver.cpp:253] Train net output #0: loss = 0.0240466 (* 1 = 0.0240466 loss)
- I0116 04:49:06.289518 86948 sgd_solver.cpp:106] Iteration 39500, lr = 0.001
- I0116 05:18:16.326318 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_40000.caffemodel
- I0116 05:18:19.084363 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_40000.solverstate
- I0116 05:18:19.128511 86948 solver.cpp:341] Iteration 40000, Testing net (#0)
- I0116 05:18:41.247922 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0116 05:18:41.247972 86948 solver.cpp:409] Test net output #1: loss = 16.2713 (* 1 = 16.2713 loss)
- I0116 05:18:53.945335 86948 solver.cpp:237] Iteration 40000, loss = 0.0314111
- I0116 05:18:53.945523 86948 solver.cpp:253] Train net output #0: loss = 0.0314112 (* 1 = 0.0314112 loss)
- I0116 05:18:54.285737 86948 sgd_solver.cpp:106] Iteration 40000, lr = 0.001
- I0116 05:48:00.650080 86948 solver.cpp:341] Iteration 40500, Testing net (#0)
- I0116 05:48:01.433145 86948 solver.cpp:409] Test net output #0: accuracy = 0.13
- I0116 05:48:01.433185 86948 solver.cpp:409] Test net output #1: loss = 16.1484 (* 1 = 16.1484 loss)
- I0116 05:48:16.489727 86948 solver.cpp:237] Iteration 40500, loss = 0.0346045
- I0116 05:48:16.489781 86948 solver.cpp:253] Train net output #0: loss = 0.0346046 (* 1 = 0.0346046 loss)
- I0116 05:48:16.792301 86948 sgd_solver.cpp:106] Iteration 40500, lr = 0.001
- I0116 06:16:05.988806 86948 solver.cpp:341] Iteration 41000, Testing net (#0)
- I0116 06:16:06.773151 86948 solver.cpp:409] Test net output #0: accuracy = 0.146
- I0116 06:16:06.773193 86948 solver.cpp:409] Test net output #1: loss = 16.7342 (* 1 = 16.7342 loss)
- I0116 06:16:34.219686 86948 solver.cpp:237] Iteration 41000, loss = 0.0419025
- I0116 06:16:34.219740 86948 solver.cpp:253] Train net output #0: loss = 0.0419026 (* 1 = 0.0419026 loss)
- I0116 06:16:34.522406 86948 sgd_solver.cpp:106] Iteration 41000, lr = 0.001
- I0116 06:42:35.883358 86948 solver.cpp:341] Iteration 41500, Testing net (#0)
- I0116 06:42:36.669574 86948 solver.cpp:409] Test net output #0: accuracy = 0.102
- I0116 06:42:36.669616 86948 solver.cpp:409] Test net output #1: loss = 17.7167 (* 1 = 17.7167 loss)
- I0116 06:42:49.773461 86948 solver.cpp:237] Iteration 41500, loss = 0.185499
- I0116 06:42:49.773519 86948 solver.cpp:253] Train net output #0: loss = 0.185499 (* 1 = 0.185499 loss)
- I0116 06:42:50.104356 86948 sgd_solver.cpp:106] Iteration 41500, lr = 0.001
- I0116 07:11:21.097192 86948 solver.cpp:341] Iteration 42000, Testing net (#0)
- I0116 07:11:46.179417 86948 solver.cpp:409] Test net output #0: accuracy = 0.092
- I0116 07:11:46.179472 86948 solver.cpp:409] Test net output #1: loss = 17.2835 (* 1 = 17.2835 loss)
- I0116 07:11:58.591433 86948 solver.cpp:237] Iteration 42000, loss = 0.0348851
- I0116 07:11:58.591639 86948 solver.cpp:253] Train net output #0: loss = 0.0348851 (* 1 = 0.0348851 loss)
- I0116 07:11:58.952178 86948 sgd_solver.cpp:106] Iteration 42000, lr = 0.001
- I0116 07:40:17.186990 86948 solver.cpp:341] Iteration 42500, Testing net (#0)
- I0116 07:40:17.672905 86948 solver.cpp:409] Test net output #0: accuracy = 0.106
- I0116 07:40:17.672950 86948 solver.cpp:409] Test net output #1: loss = 15.8583 (* 1 = 15.8583 loss)
- I0116 07:40:41.425642 86948 solver.cpp:237] Iteration 42500, loss = 0.199413
- I0116 07:40:41.425703 86948 solver.cpp:253] Train net output #0: loss = 0.199413 (* 1 = 0.199413 loss)
- I0116 07:40:41.761833 86948 sgd_solver.cpp:106] Iteration 42500, lr = 0.001
- I0116 08:09:26.900053 86948 solver.cpp:341] Iteration 43000, Testing net (#0)
- I0116 08:09:27.686225 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0116 08:09:27.686270 86948 solver.cpp:409] Test net output #1: loss = 15.5427 (* 1 = 15.5427 loss)
- I0116 08:09:41.389711 86948 solver.cpp:237] Iteration 43000, loss = 0.0642446
- I0116 08:09:41.389744 86948 solver.cpp:253] Train net output #0: loss = 0.0642446 (* 1 = 0.0642446 loss)
- I0116 08:09:41.389763 86948 sgd_solver.cpp:106] Iteration 43000, lr = 0.001
- I0116 08:38:30.601132 86948 solver.cpp:341] Iteration 43500, Testing net (#0)
- I0116 08:38:31.386473 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0116 08:38:31.386512 86948 solver.cpp:409] Test net output #1: loss = 17.499 (* 1 = 17.499 loss)
- I0116 08:38:50.902880 86948 solver.cpp:237] Iteration 43500, loss = 0.0141587
- I0116 08:38:50.902937 86948 solver.cpp:253] Train net output #0: loss = 0.0141588 (* 1 = 0.0141588 loss)
- I0116 08:38:51.279814 86948 sgd_solver.cpp:106] Iteration 43500, lr = 0.001
- I0116 09:07:12.407667 86948 solver.cpp:341] Iteration 44000, Testing net (#0)
- I0116 09:07:34.280910 86948 solver.cpp:409] Test net output #0: accuracy = 0.102
- I0116 09:07:34.280952 86948 solver.cpp:409] Test net output #1: loss = 16.8864 (* 1 = 16.8864 loss)
- I0116 09:07:47.130345 86948 solver.cpp:237] Iteration 44000, loss = 0.0879925
- I0116 09:07:47.130524 86948 solver.cpp:253] Train net output #0: loss = 0.0879925 (* 1 = 0.0879925 loss)
- I0116 09:07:47.479935 86948 sgd_solver.cpp:106] Iteration 44000, lr = 0.001
- I0116 09:36:28.176921 86948 solver.cpp:341] Iteration 44500, Testing net (#0)
- I0116 09:36:28.962368 86948 solver.cpp:409] Test net output #0: accuracy = 0.096
- I0116 09:36:28.962402 86948 solver.cpp:409] Test net output #1: loss = 17.2819 (* 1 = 17.2819 loss)
- I0116 09:36:58.008808 86948 solver.cpp:237] Iteration 44500, loss = 0.0444473
- I0116 09:36:58.008863 86948 solver.cpp:253] Train net output #0: loss = 0.0444473 (* 1 = 0.0444473 loss)
- I0116 09:36:58.356963 86948 sgd_solver.cpp:106] Iteration 44500, lr = 0.001
- I0116 10:01:30.690524 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_45000.caffemodel
- I0116 10:01:32.237660 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_45000.solverstate
- I0116 10:01:32.281229 86948 solver.cpp:341] Iteration 45000, Testing net (#0)
- I0116 10:01:32.763458 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0116 10:01:32.763500 86948 solver.cpp:409] Test net output #1: loss = 17.0017 (* 1 = 17.0017 loss)
- I0116 10:01:57.153758 86948 solver.cpp:237] Iteration 45000, loss = 0.0694832
- I0116 10:01:57.153815 86948 solver.cpp:253] Train net output #0: loss = 0.0694833 (* 1 = 0.0694833 loss)
- I0116 10:01:57.495278 86948 sgd_solver.cpp:106] Iteration 45000, lr = 0.001
- I0116 10:29:44.696324 86948 solver.cpp:341] Iteration 45500, Testing net (#0)
- I0116 10:29:45.185446 86948 solver.cpp:409] Test net output #0: accuracy = 0.096
- I0116 10:29:45.185494 86948 solver.cpp:409] Test net output #1: loss = 17.6054 (* 1 = 17.6054 loss)
- I0116 10:30:04.526257 86948 solver.cpp:237] Iteration 45500, loss = 0.0797003
- I0116 10:30:04.526335 86948 solver.cpp:253] Train net output #0: loss = 0.0797004 (* 1 = 0.0797004 loss)
- I0116 10:30:04.827440 86948 sgd_solver.cpp:106] Iteration 45500, lr = 0.001
- I0116 10:59:38.546458 86948 solver.cpp:341] Iteration 46000, Testing net (#0)
- I0116 10:59:51.304450 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0116 10:59:51.304498 86948 solver.cpp:409] Test net output #1: loss = 16.352 (* 1 = 16.352 loss)
- I0116 11:00:03.451951 86948 solver.cpp:237] Iteration 46000, loss = 0.032807
- I0116 11:00:03.451997 86948 solver.cpp:253] Train net output #0: loss = 0.0328071 (* 1 = 0.0328071 loss)
- I0116 11:00:03.806876 86948 sgd_solver.cpp:106] Iteration 46000, lr = 0.001
- I0116 11:28:33.824901 86948 solver.cpp:341] Iteration 46500, Testing net (#0)
- I0116 11:28:34.310847 86948 solver.cpp:409] Test net output #0: accuracy = 0.102
- I0116 11:28:34.310897 86948 solver.cpp:409] Test net output #1: loss = 18.0733 (* 1 = 18.0733 loss)
- I0116 11:29:01.760802 86948 solver.cpp:237] Iteration 46500, loss = 0.0403756
- I0116 11:29:01.760859 86948 solver.cpp:253] Train net output #0: loss = 0.0403757 (* 1 = 0.0403757 loss)
- I0116 11:29:02.063665 86948 sgd_solver.cpp:106] Iteration 46500, lr = 0.001
- I0116 11:58:13.606930 86948 solver.cpp:341] Iteration 47000, Testing net (#0)
- I0116 11:58:14.391067 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0116 11:58:14.391113 86948 solver.cpp:409] Test net output #1: loss = 17.0273 (* 1 = 17.0273 loss)
- I0116 11:58:26.920436 86948 solver.cpp:237] Iteration 47000, loss = 0.010671
- I0116 11:58:26.920490 86948 solver.cpp:253] Train net output #0: loss = 0.0106711 (* 1 = 0.0106711 loss)
- I0116 11:58:27.222714 86948 sgd_solver.cpp:106] Iteration 47000, lr = 0.001
- I0116 12:25:42.220270 86948 solver.cpp:341] Iteration 47500, Testing net (#0)
- I0116 12:25:42.707748 86948 solver.cpp:409] Test net output #0: accuracy = 0.136
- I0116 12:25:42.707782 86948 solver.cpp:409] Test net output #1: loss = 17.2746 (* 1 = 17.2746 loss)
- I0116 12:26:04.379840 86948 solver.cpp:237] Iteration 47500, loss = 0.00723714
- I0116 12:26:04.379889 86948 solver.cpp:253] Train net output #0: loss = 0.00723719 (* 1 = 0.00723719 loss)
- I0116 12:26:04.719759 86948 sgd_solver.cpp:106] Iteration 47500, lr = 0.001
- I0116 12:50:40.256570 86948 solver.cpp:341] Iteration 48000, Testing net (#0)
- I0116 12:51:04.559180 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0116 12:51:04.559231 86948 solver.cpp:409] Test net output #1: loss = 17.5079 (* 1 = 17.5079 loss)
- I0116 12:51:30.824625 86948 solver.cpp:237] Iteration 48000, loss = 0.011206
- I0116 12:51:30.824817 86948 solver.cpp:253] Train net output #0: loss = 0.011206 (* 1 = 0.011206 loss)
- I0116 12:51:31.175508 86948 sgd_solver.cpp:106] Iteration 48000, lr = 0.001
- I0116 13:18:06.765019 86948 solver.cpp:341] Iteration 48500, Testing net (#0)
- I0116 13:18:07.413496 86948 solver.cpp:409] Test net output #0: accuracy = 0.136
- I0116 13:18:07.413537 86948 solver.cpp:409] Test net output #1: loss = 16.5992 (* 1 = 16.5992 loss)
- I0116 13:18:22.153182 86948 solver.cpp:237] Iteration 48500, loss = 0.0693408
- I0116 13:18:22.153237 86948 solver.cpp:253] Train net output #0: loss = 0.0693408 (* 1 = 0.0693408 loss)
- I0116 13:18:22.454764 86948 sgd_solver.cpp:106] Iteration 48500, lr = 0.001
- I0116 13:46:26.325832 86948 solver.cpp:341] Iteration 49000, Testing net (#0)
- I0116 13:46:26.812815 86948 solver.cpp:409] Test net output #0: accuracy = 0.082
- I0116 13:46:26.812847 86948 solver.cpp:409] Test net output #1: loss = 17.991 (* 1 = 17.991 loss)
- I0116 13:46:47.049461 86948 solver.cpp:237] Iteration 49000, loss = 0.0393182
- I0116 13:46:47.049496 86948 solver.cpp:253] Train net output #0: loss = 0.0393183 (* 1 = 0.0393183 loss)
- I0116 13:46:47.049516 86948 sgd_solver.cpp:106] Iteration 49000, lr = 0.001
- I0116 14:15:12.316920 86948 solver.cpp:341] Iteration 49500, Testing net (#0)
- I0116 14:15:12.803696 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0116 14:15:12.803738 86948 solver.cpp:409] Test net output #1: loss = 18.0324 (* 1 = 18.0324 loss)
- I0116 14:15:32.959098 86948 solver.cpp:237] Iteration 49500, loss = 0.00838045
- I0116 14:15:32.959175 86948 solver.cpp:253] Train net output #0: loss = 0.0083806 (* 1 = 0.0083806 loss)
- I0116 14:15:32.959216 86948 sgd_solver.cpp:106] Iteration 49500, lr = 0.001
- I0116 14:45:00.736219 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_50000.caffemodel
- I0116 14:45:04.874878 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_50000.solverstate
- I0116 14:45:04.918927 86948 solver.cpp:341] Iteration 50000, Testing net (#0)
- I0116 14:45:24.349766 86948 solver.cpp:409] Test net output #0: accuracy = 0.156
- I0116 14:45:24.349812 86948 solver.cpp:409] Test net output #1: loss = 17.3452 (* 1 = 17.3452 loss)
- I0116 14:45:46.836374 86948 solver.cpp:237] Iteration 50000, loss = 0.0652684
- I0116 14:45:46.836606 86948 solver.cpp:253] Train net output #0: loss = 0.0652686 (* 1 = 0.0652686 loss)
- I0116 14:45:47.176255 86948 sgd_solver.cpp:106] Iteration 50000, lr = 0.001
- I0116 15:14:51.746951 86948 solver.cpp:341] Iteration 50500, Testing net (#0)
- I0116 15:14:52.531561 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0116 15:14:52.531596 86948 solver.cpp:409] Test net output #1: loss = 18.264 (* 1 = 18.264 loss)
- I0116 15:15:14.273367 86948 solver.cpp:237] Iteration 50500, loss = 0.00904272
- I0116 15:15:14.273427 86948 solver.cpp:253] Train net output #0: loss = 0.00904288 (* 1 = 0.00904288 loss)
- I0116 15:15:14.617744 86948 sgd_solver.cpp:106] Iteration 50500, lr = 0.001
- I0116 15:40:51.166714 86948 solver.cpp:341] Iteration 51000, Testing net (#0)
- I0116 15:40:51.652114 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0116 15:40:51.652145 86948 solver.cpp:409] Test net output #1: loss = 18.104 (* 1 = 18.104 loss)
- I0116 15:41:06.815246 86948 solver.cpp:237] Iteration 51000, loss = 0.0139255
- I0116 15:41:06.815307 86948 solver.cpp:253] Train net output #0: loss = 0.0139257 (* 1 = 0.0139257 loss)
- I0116 15:41:07.148931 86948 sgd_solver.cpp:106] Iteration 51000, lr = 0.001
- I0116 16:10:14.715281 86948 solver.cpp:341] Iteration 51500, Testing net (#0)
- I0116 16:10:15.497902 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0116 16:10:15.497944 86948 solver.cpp:409] Test net output #1: loss = 16.8469 (* 1 = 16.8469 loss)
- I0116 16:10:30.471583 86948 solver.cpp:237] Iteration 51500, loss = 0.00585824
- I0116 16:10:30.471635 86948 solver.cpp:253] Train net output #0: loss = 0.0058584 (* 1 = 0.0058584 loss)
- I0116 16:10:30.773869 86948 sgd_solver.cpp:106] Iteration 51500, lr = 0.001
- I0116 16:40:01.208590 86948 solver.cpp:341] Iteration 52000, Testing net (#0)
- I0116 16:40:14.902446 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0116 16:40:14.902493 86948 solver.cpp:409] Test net output #1: loss = 17.7235 (* 1 = 17.7235 loss)
- I0116 16:40:35.560999 86948 solver.cpp:237] Iteration 52000, loss = 0.00612289
- I0116 16:40:35.561203 86948 solver.cpp:253] Train net output #0: loss = 0.00612306 (* 1 = 0.00612306 loss)
- I0116 16:40:35.912436 86948 sgd_solver.cpp:106] Iteration 52000, lr = 0.001
- I0116 17:08:55.274165 86948 solver.cpp:341] Iteration 52500, Testing net (#0)
- I0116 17:08:55.761176 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0116 17:08:55.761209 86948 solver.cpp:409] Test net output #1: loss = 17.5617 (* 1 = 17.5617 loss)
- I0116 17:09:21.139521 86948 solver.cpp:237] Iteration 52500, loss = 0.014499
- I0116 17:09:21.139580 86948 solver.cpp:253] Train net output #0: loss = 0.0144992 (* 1 = 0.0144992 loss)
- I0116 17:09:21.471717 86948 sgd_solver.cpp:106] Iteration 52500, lr = 0.001
- I0116 17:39:18.728796 86948 solver.cpp:341] Iteration 53000, Testing net (#0)
- I0116 17:39:19.215486 86948 solver.cpp:409] Test net output #0: accuracy = 0.14
- I0116 17:39:19.215517 86948 solver.cpp:409] Test net output #1: loss = 17.877 (* 1 = 17.877 loss)
- I0116 17:39:31.311669 86948 solver.cpp:237] Iteration 53000, loss = 0.00112286
- I0116 17:39:31.311717 86948 solver.cpp:253] Train net output #0: loss = 0.00112301 (* 1 = 0.00112301 loss)
- I0116 17:39:31.657688 86948 sgd_solver.cpp:106] Iteration 53000, lr = 0.001
- I0116 18:08:52.516023 86948 solver.cpp:341] Iteration 53500, Testing net (#0)
- I0116 18:08:53.300496 86948 solver.cpp:409] Test net output #0: accuracy = 0.146
- I0116 18:08:53.300536 86948 solver.cpp:409] Test net output #1: loss = 17.146 (* 1 = 17.146 loss)
- I0116 18:09:07.123847 86948 solver.cpp:237] Iteration 53500, loss = 0.0122715
- I0116 18:09:07.123903 86948 solver.cpp:253] Train net output #0: loss = 0.0122717 (* 1 = 0.0122717 loss)
- I0116 18:09:07.487051 86948 sgd_solver.cpp:106] Iteration 53500, lr = 0.001
- I0116 18:38:23.766234 86948 solver.cpp:341] Iteration 54000, Testing net (#0)
- I0116 18:38:42.114236 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0116 18:38:42.114279 86948 solver.cpp:409] Test net output #1: loss = 17.6793 (* 1 = 17.6793 loss)
- I0116 18:38:55.071338 86948 solver.cpp:237] Iteration 54000, loss = 0.00152475
- I0116 18:38:55.071554 86948 solver.cpp:253] Train net output #0: loss = 0.00152491 (* 1 = 0.00152491 loss)
- I0116 18:38:55.444756 86948 sgd_solver.cpp:106] Iteration 54000, lr = 0.001
- I0116 19:04:11.690522 86948 solver.cpp:341] Iteration 54500, Testing net (#0)
- I0116 19:04:12.474172 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0116 19:04:12.474216 86948 solver.cpp:409] Test net output #1: loss = 18.4609 (* 1 = 18.4609 loss)
- I0116 19:04:26.059206 86948 solver.cpp:237] Iteration 54500, loss = 0.00278085
- I0116 19:04:26.059262 86948 solver.cpp:253] Train net output #0: loss = 0.00278101 (* 1 = 0.00278101 loss)
- I0116 19:04:26.362669 86948 sgd_solver.cpp:106] Iteration 54500, lr = 0.001
- I0116 19:32:57.336834 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_55000.caffemodel
- I0116 19:33:02.048396 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_55000.solverstate
- I0116 19:33:02.093430 86948 solver.cpp:341] Iteration 55000, Testing net (#0)
- I0116 19:33:02.575043 86948 solver.cpp:409] Test net output #0: accuracy = 0.1
- I0116 19:33:02.575074 86948 solver.cpp:409] Test net output #1: loss = 17.5898 (* 1 = 17.5898 loss)
- I0116 19:33:14.721052 86948 solver.cpp:237] Iteration 55000, loss = 0.00244798
- I0116 19:33:14.721104 86948 solver.cpp:253] Train net output #0: loss = 0.00244815 (* 1 = 0.00244815 loss)
- I0116 19:33:15.023622 86948 sgd_solver.cpp:106] Iteration 55000, lr = 0.001
- I0116 20:00:52.280625 86948 solver.cpp:341] Iteration 55500, Testing net (#0)
- I0116 20:00:53.064491 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0116 20:00:53.064527 86948 solver.cpp:409] Test net output #1: loss = 18.1016 (* 1 = 18.1016 loss)
- I0116 20:01:10.699457 86948 solver.cpp:237] Iteration 55500, loss = 0.00185457
- I0116 20:01:10.699515 86948 solver.cpp:253] Train net output #0: loss = 0.00185475 (* 1 = 0.00185475 loss)
- I0116 20:01:11.001893 86948 sgd_solver.cpp:106] Iteration 55500, lr = 0.001
- I0116 20:30:32.961246 86948 solver.cpp:341] Iteration 56000, Testing net (#0)
- I0116 20:30:46.017097 86948 solver.cpp:409] Test net output #0: accuracy = 0.138
- I0116 20:30:46.017145 86948 solver.cpp:409] Test net output #1: loss = 17.466 (* 1 = 17.466 loss)
- I0116 20:31:08.532201 86948 solver.cpp:237] Iteration 56000, loss = 0.00167043
- I0116 20:31:08.532402 86948 solver.cpp:253] Train net output #0: loss = 0.00167062 (* 1 = 0.00167062 loss)
- I0116 20:31:08.892717 86948 sgd_solver.cpp:106] Iteration 56000, lr = 0.001
- I0116 21:00:52.884006 86948 solver.cpp:341] Iteration 56500, Testing net (#0)
- I0116 21:00:53.370746 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0116 21:00:53.370777 86948 solver.cpp:409] Test net output #1: loss = 18.5152 (* 1 = 18.5152 loss)
- I0116 21:01:29.835618 86948 solver.cpp:237] Iteration 56500, loss = 0.00127041
- I0116 21:01:29.835860 86948 solver.cpp:253] Train net output #0: loss = 0.0012706 (* 1 = 0.0012706 loss)
- I0116 21:01:30.164361 86948 sgd_solver.cpp:106] Iteration 56500, lr = 0.001
- I0116 21:29:50.693728 86948 solver.cpp:341] Iteration 57000, Testing net (#0)
- I0116 21:29:51.477064 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0116 21:29:51.477094 86948 solver.cpp:409] Test net output #1: loss = 17.2161 (* 1 = 17.2161 loss)
- I0116 21:30:16.758760 86948 solver.cpp:237] Iteration 57000, loss = 0.00125112
- I0116 21:30:16.758816 86948 solver.cpp:253] Train net output #0: loss = 0.00125128 (* 1 = 0.00125128 loss)
- I0116 21:30:17.094545 86948 sgd_solver.cpp:106] Iteration 57000, lr = 0.001
- I0116 21:56:25.575891 86948 solver.cpp:341] Iteration 57500, Testing net (#0)
- I0116 21:56:26.359499 86948 solver.cpp:409] Test net output #0: accuracy = 0.102
- I0116 21:56:26.359541 86948 solver.cpp:409] Test net output #1: loss = 17.1281 (* 1 = 17.1281 loss)
- I0116 21:56:45.616204 86948 solver.cpp:237] Iteration 57500, loss = 0.000995827
- I0116 21:56:45.616246 86948 solver.cpp:253] Train net output #0: loss = 0.000995983 (* 1 = 0.000995983 loss)
- I0116 21:56:45.616267 86948 sgd_solver.cpp:106] Iteration 57500, lr = 0.001
- I0116 22:24:15.822245 86948 solver.cpp:341] Iteration 58000, Testing net (#0)
- I0116 22:24:28.277140 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0116 22:24:28.277187 86948 solver.cpp:409] Test net output #1: loss = 17.7945 (* 1 = 17.7945 loss)
- I0116 22:24:46.763841 86948 solver.cpp:237] Iteration 58000, loss = 0.00149116
- I0116 22:24:46.764065 86948 solver.cpp:253] Train net output #0: loss = 0.00149132 (* 1 = 0.00149132 loss)
- I0116 22:24:47.134806 86948 sgd_solver.cpp:106] Iteration 58000, lr = 0.001
- I0116 22:53:01.435055 86948 solver.cpp:341] Iteration 58500, Testing net (#0)
- I0116 22:53:02.219271 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0116 22:53:02.219316 86948 solver.cpp:409] Test net output #1: loss = 17.5235 (* 1 = 17.5235 loss)
- I0116 22:53:21.272271 86948 solver.cpp:237] Iteration 58500, loss = 0.000840927
- I0116 22:53:21.272325 86948 solver.cpp:253] Train net output #0: loss = 0.000841088 (* 1 = 0.000841088 loss)
- I0116 22:53:21.631062 86948 sgd_solver.cpp:106] Iteration 58500, lr = 0.001
- I0116 23:22:36.109424 86948 solver.cpp:341] Iteration 59000, Testing net (#0)
- I0116 23:22:36.595427 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0116 23:22:36.595466 86948 solver.cpp:409] Test net output #1: loss = 17.8167 (* 1 = 17.8167 loss)
- I0116 23:22:58.007833 86948 solver.cpp:237] Iteration 59000, loss = 0.000619527
- I0116 23:22:58.007884 86948 solver.cpp:253] Train net output #0: loss = 0.000619694 (* 1 = 0.000619694 loss)
- I0116 23:22:58.388481 86948 sgd_solver.cpp:106] Iteration 59000, lr = 0.001
- I0116 23:51:47.412817 86948 solver.cpp:341] Iteration 59500, Testing net (#0)
- I0116 23:51:47.905879 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0116 23:51:47.905912 86948 solver.cpp:409] Test net output #1: loss = 17.7125 (* 1 = 17.7125 loss)
- I0116 23:52:21.172386 86948 solver.cpp:237] Iteration 59500, loss = 0.000833786
- I0116 23:52:21.172557 86948 solver.cpp:253] Train net output #0: loss = 0.000833953 (* 1 = 0.000833953 loss)
- I0116 23:52:21.172621 86948 sgd_solver.cpp:106] Iteration 59500, lr = 0.001
- I0117 00:19:51.501459 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_60000.caffemodel
- I0117 00:19:53.112973 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_60000.solverstate
- I0117 00:19:53.159576 86948 solver.cpp:341] Iteration 60000, Testing net (#0)
- I0117 00:20:13.622352 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0117 00:20:13.622414 86948 solver.cpp:409] Test net output #1: loss = 16.4636 (* 1 = 16.4636 loss)
- I0117 00:20:38.179111 86948 solver.cpp:237] Iteration 60000, loss = 0.000698102
- I0117 00:20:38.179344 86948 solver.cpp:253] Train net output #0: loss = 0.000698267 (* 1 = 0.000698267 loss)
- I0117 00:20:38.518753 86948 sgd_solver.cpp:106] Iteration 60000, lr = 0.001
- I0117 00:48:01.094512 86948 solver.cpp:341] Iteration 60500, Testing net (#0)
- I0117 00:48:01.583814 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0117 00:48:01.583847 86948 solver.cpp:409] Test net output #1: loss = 16.9208 (* 1 = 16.9208 loss)
- I0117 00:48:02.826798 86948 solver.cpp:237] Iteration 60500, loss = 0.00134723
- I0117 00:48:02.826843 86948 solver.cpp:253] Train net output #0: loss = 0.00134739 (* 1 = 0.00134739 loss)
- I0117 00:48:03.129700 86948 sgd_solver.cpp:106] Iteration 60500, lr = 0.001
- I0117 01:15:35.234369 86948 solver.cpp:341] Iteration 61000, Testing net (#0)
- I0117 01:15:36.017329 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0117 01:15:36.017371 86948 solver.cpp:409] Test net output #1: loss = 16.7038 (* 1 = 16.7038 loss)
- I0117 01:15:48.144557 86948 solver.cpp:237] Iteration 61000, loss = 0.00100287
- I0117 01:15:48.144616 86948 solver.cpp:253] Train net output #0: loss = 0.00100303 (* 1 = 0.00100303 loss)
- I0117 01:15:48.447521 86948 sgd_solver.cpp:106] Iteration 61000, lr = 0.001
- I0117 01:43:27.736707 86948 solver.cpp:341] Iteration 61500, Testing net (#0)
- I0117 01:43:28.520886 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0117 01:43:28.520928 86948 solver.cpp:409] Test net output #1: loss = 16.7259 (* 1 = 16.7259 loss)
- I0117 01:43:43.177824 86948 solver.cpp:237] Iteration 61500, loss = 0.00111827
- I0117 01:43:43.177875 86948 solver.cpp:253] Train net output #0: loss = 0.00111844 (* 1 = 0.00111844 loss)
- I0117 01:43:43.177896 86948 sgd_solver.cpp:106] Iteration 61500, lr = 0.001
- I0117 02:12:18.621260 86948 solver.cpp:341] Iteration 62000, Testing net (#0)
- I0117 02:12:39.971473 86948 solver.cpp:409] Test net output #0: accuracy = 0.142
- I0117 02:12:39.971523 86948 solver.cpp:409] Test net output #1: loss = 15.8327 (* 1 = 15.8327 loss)
- I0117 02:12:57.808661 86948 solver.cpp:237] Iteration 62000, loss = 0.000379236
- I0117 02:12:57.808817 86948 solver.cpp:253] Train net output #0: loss = 0.000379402 (* 1 = 0.000379402 loss)
- I0117 02:12:58.166721 86948 sgd_solver.cpp:106] Iteration 62000, lr = 0.001
- I0117 02:41:14.882915 86948 solver.cpp:341] Iteration 62500, Testing net (#0)
- I0117 02:41:15.368656 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0117 02:41:15.368685 86948 solver.cpp:409] Test net output #1: loss = 16.6882 (* 1 = 16.6882 loss)
- I0117 02:41:34.613448 86948 solver.cpp:237] Iteration 62500, loss = 0.00201296
- I0117 02:41:34.613498 86948 solver.cpp:253] Train net output #0: loss = 0.00201312 (* 1 = 0.00201312 loss)
- I0117 02:41:34.967422 86948 sgd_solver.cpp:106] Iteration 62500, lr = 0.001
- I0117 03:09:53.952474 86948 solver.cpp:341] Iteration 63000, Testing net (#0)
- I0117 03:09:54.738646 86948 solver.cpp:409] Test net output #0: accuracy = 0.118
- I0117 03:09:54.738690 86948 solver.cpp:409] Test net output #1: loss = 15.5435 (* 1 = 15.5435 loss)
- I0117 03:10:19.191061 86948 solver.cpp:237] Iteration 63000, loss = 0.00052981
- I0117 03:10:19.191112 86948 solver.cpp:253] Train net output #0: loss = 0.000529975 (* 1 = 0.000529975 loss)
- I0117 03:10:19.493536 86948 sgd_solver.cpp:106] Iteration 63000, lr = 0.001
- I0117 03:38:18.031046 86948 solver.cpp:341] Iteration 63500, Testing net (#0)
- I0117 03:38:18.815105 86948 solver.cpp:409] Test net output #0: accuracy = 0.154
- I0117 03:38:18.815142 86948 solver.cpp:409] Test net output #1: loss = 16.497 (* 1 = 16.497 loss)
- I0117 03:38:34.141821 86948 solver.cpp:237] Iteration 63500, loss = 0.000767982
- I0117 03:38:34.141876 86948 solver.cpp:253] Train net output #0: loss = 0.000768148 (* 1 = 0.000768148 loss)
- I0117 03:38:34.443259 86948 sgd_solver.cpp:106] Iteration 63500, lr = 0.001
- I0117 04:03:25.591889 86948 solver.cpp:341] Iteration 64000, Testing net (#0)
- I0117 04:03:48.695626 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0117 04:03:48.695669 86948 solver.cpp:409] Test net output #1: loss = 16.1517 (* 1 = 16.1517 loss)
- I0117 04:04:01.971127 86948 solver.cpp:237] Iteration 64000, loss = 0.000629346
- I0117 04:04:01.971350 86948 solver.cpp:253] Train net output #0: loss = 0.000629513 (* 1 = 0.000629513 loss)
- I0117 04:04:02.324965 86948 sgd_solver.cpp:106] Iteration 64000, lr = 0.001
- I0117 04:32:27.745319 86948 solver.cpp:341] Iteration 64500, Testing net (#0)
- I0117 04:32:28.231277 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0117 04:32:28.231320 86948 solver.cpp:409] Test net output #1: loss = 16.2351 (* 1 = 16.2351 loss)
- I0117 04:32:54.835930 86948 solver.cpp:237] Iteration 64500, loss = 0.00131145
- I0117 04:32:54.835990 86948 solver.cpp:253] Train net output #0: loss = 0.00131162 (* 1 = 0.00131162 loss)
- I0117 04:32:55.187325 86948 sgd_solver.cpp:106] Iteration 64500, lr = 0.001
- I0117 05:00:55.586280 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_65000.caffemodel
- I0117 05:01:00.939674 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_65000.solverstate
- I0117 05:01:02.193336 86948 solver.cpp:341] Iteration 65000, Testing net (#0)
- I0117 05:01:02.676085 86948 solver.cpp:409] Test net output #0: accuracy = 0.134
- I0117 05:01:02.676128 86948 solver.cpp:409] Test net output #1: loss = 15.8553 (* 1 = 15.8553 loss)
- I0117 05:01:26.228950 86948 solver.cpp:237] Iteration 65000, loss = 0.000773424
- I0117 05:01:26.229154 86948 solver.cpp:253] Train net output #0: loss = 0.000773591 (* 1 = 0.000773591 loss)
- I0117 05:01:26.229217 86948 sgd_solver.cpp:106] Iteration 65000, lr = 0.001
- I0117 05:29:58.098109 86948 solver.cpp:341] Iteration 65500, Testing net (#0)
- I0117 05:29:58.882112 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0117 05:29:58.882163 86948 solver.cpp:409] Test net output #1: loss = 16.5149 (* 1 = 16.5149 loss)
- I0117 05:30:31.572700 86948 solver.cpp:237] Iteration 65500, loss = 0.000899338
- I0117 05:30:31.572927 86948 solver.cpp:253] Train net output #0: loss = 0.000899504 (* 1 = 0.000899504 loss)
- I0117 05:30:31.903906 86948 sgd_solver.cpp:106] Iteration 65500, lr = 0.001
- I0117 06:00:43.981590 86948 solver.cpp:341] Iteration 66000, Testing net (#0)
- I0117 06:01:08.399309 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0117 06:01:08.399371 86948 solver.cpp:409] Test net output #1: loss = 15.3885 (* 1 = 15.3885 loss)
- I0117 06:01:44.210752 86948 solver.cpp:237] Iteration 66000, loss = 0.00092309
- I0117 06:01:44.211006 86948 solver.cpp:253] Train net output #0: loss = 0.000923256 (* 1 = 0.000923256 loss)
- I0117 06:01:44.571835 86948 sgd_solver.cpp:106] Iteration 66000, lr = 0.001
- I0117 06:30:41.497572 86948 solver.cpp:341] Iteration 66500, Testing net (#0)
- I0117 06:30:42.280725 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0117 06:30:42.280772 86948 solver.cpp:409] Test net output #1: loss = 14.6409 (* 1 = 14.6409 loss)
- I0117 06:30:58.502143 86948 solver.cpp:237] Iteration 66500, loss = 0.000562749
- I0117 06:30:58.502197 86948 solver.cpp:253] Train net output #0: loss = 0.000562914 (* 1 = 0.000562914 loss)
- I0117 06:30:58.870307 86948 sgd_solver.cpp:106] Iteration 66500, lr = 0.001
- I0117 06:56:45.541666 86948 solver.cpp:341] Iteration 67000, Testing net (#0)
- I0117 06:56:46.032340 86948 solver.cpp:409] Test net output #0: accuracy = 0.11
- I0117 06:56:46.032380 86948 solver.cpp:409] Test net output #1: loss = 16.1808 (* 1 = 16.1808 loss)
- I0117 06:57:01.526554 86948 solver.cpp:237] Iteration 67000, loss = 0.00102099
- I0117 06:57:01.526605 86948 solver.cpp:253] Train net output #0: loss = 0.00102115 (* 1 = 0.00102115 loss)
- I0117 06:57:01.865414 86948 sgd_solver.cpp:106] Iteration 67000, lr = 0.001
- I0117 07:24:53.334848 86948 solver.cpp:341] Iteration 67500, Testing net (#0)
- I0117 07:24:54.120802 86948 solver.cpp:409] Test net output #0: accuracy = 0.112
- I0117 07:24:54.120841 86948 solver.cpp:409] Test net output #1: loss = 16.0582 (* 1 = 16.0582 loss)
- I0117 07:25:07.953392 86948 solver.cpp:237] Iteration 67500, loss = 0.00112115
- I0117 07:25:07.953441 86948 solver.cpp:253] Train net output #0: loss = 0.00112132 (* 1 = 0.00112132 loss)
- I0117 07:25:08.307104 86948 sgd_solver.cpp:106] Iteration 67500, lr = 0.001
- I0117 07:53:18.890822 86948 solver.cpp:341] Iteration 68000, Testing net (#0)
- I0117 07:53:32.259802 86948 solver.cpp:409] Test net output #0: accuracy = 0.148
- I0117 07:53:32.259845 86948 solver.cpp:409] Test net output #1: loss = 15.5333 (* 1 = 15.5333 loss)
- I0117 07:53:47.024539 86948 solver.cpp:237] Iteration 68000, loss = 0.000853823
- I0117 07:53:47.024593 86948 solver.cpp:253] Train net output #0: loss = 0.000853988 (* 1 = 0.000853988 loss)
- I0117 07:53:47.327491 86948 sgd_solver.cpp:106] Iteration 68000, lr = 0.001
- I0117 08:22:41.051843 86948 solver.cpp:341] Iteration 68500, Testing net (#0)
- I0117 08:22:41.542336 86948 solver.cpp:409] Test net output #0: accuracy = 0.126
- I0117 08:22:41.542385 86948 solver.cpp:409] Test net output #1: loss = 14.4018 (* 1 = 14.4018 loss)
- I0117 08:22:57.310125 86948 solver.cpp:237] Iteration 68500, loss = 0.0009701
- I0117 08:22:57.310174 86948 solver.cpp:253] Train net output #0: loss = 0.000970266 (* 1 = 0.000970266 loss)
- I0117 08:22:57.310201 86948 sgd_solver.cpp:106] Iteration 68500, lr = 0.001
- I0117 08:51:31.132508 86948 solver.cpp:341] Iteration 69000, Testing net (#0)
- I0117 08:51:31.918012 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0117 08:51:31.918045 86948 solver.cpp:409] Test net output #1: loss = 15.011 (* 1 = 15.011 loss)
- I0117 08:51:55.438107 86948 solver.cpp:237] Iteration 69000, loss = 0.00144422
- I0117 08:51:55.438156 86948 solver.cpp:253] Train net output #0: loss = 0.00144438 (* 1 = 0.00144438 loss)
- I0117 08:51:55.438179 86948 sgd_solver.cpp:106] Iteration 69000, lr = 0.001
- I0117 09:20:06.035789 86948 solver.cpp:341] Iteration 69500, Testing net (#0)
- I0117 09:20:06.530084 86948 solver.cpp:409] Test net output #0: accuracy = 0.144
- I0117 09:20:06.530124 86948 solver.cpp:409] Test net output #1: loss = 15.2529 (* 1 = 15.2529 loss)
- I0117 09:20:24.312736 86948 solver.cpp:237] Iteration 69500, loss = 0.00116137
- I0117 09:20:24.312789 86948 solver.cpp:253] Train net output #0: loss = 0.00116154 (* 1 = 0.00116154 loss)
- I0117 09:20:24.615401 86948 sgd_solver.cpp:106] Iteration 69500, lr = 0.001
- I0117 09:48:31.437814 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_70000.caffemodel
- I0117 09:48:33.344377 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_70000.solverstate
- I0117 09:48:33.388301 86948 solver.cpp:341] Iteration 70000, Testing net (#0)
- I0117 09:48:46.943537 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0117 09:48:46.943588 86948 solver.cpp:409] Test net output #1: loss = 14.5244 (* 1 = 14.5244 loss)
- I0117 09:49:03.112295 86948 solver.cpp:237] Iteration 70000, loss = 0.00122999
- I0117 09:49:03.112499 86948 solver.cpp:253] Train net output #0: loss = 0.00123015 (* 1 = 0.00123015 loss)
- I0117 09:49:03.471725 86948 sgd_solver.cpp:106] Iteration 70000, lr = 0.001
- I0117 10:16:26.051054 86948 solver.cpp:341] Iteration 70500, Testing net (#0)
- I0117 10:16:26.835522 86948 solver.cpp:409] Test net output #0: accuracy = 0.13
- I0117 10:16:26.835569 86948 solver.cpp:409] Test net output #1: loss = 15.0903 (* 1 = 15.0903 loss)
- I0117 10:16:43.701015 86948 solver.cpp:237] Iteration 70500, loss = 0.00145832
- I0117 10:16:43.701076 86948 solver.cpp:253] Train net output #0: loss = 0.00145848 (* 1 = 0.00145848 loss)
- I0117 10:16:43.701113 86948 sgd_solver.cpp:106] Iteration 70500, lr = 0.001
- I0117 10:44:12.330981 86948 solver.cpp:341] Iteration 71000, Testing net (#0)
- I0117 10:44:13.115999 86948 solver.cpp:409] Test net output #0: accuracy = 0.148
- I0117 10:44:13.116049 86948 solver.cpp:409] Test net output #1: loss = 14.8792 (* 1 = 14.8792 loss)
- I0117 10:44:41.000583 86948 solver.cpp:237] Iteration 71000, loss = 0.00174855
- I0117 10:44:41.000628 86948 solver.cpp:253] Train net output #0: loss = 0.00174872 (* 1 = 0.00174872 loss)
- I0117 10:44:41.000649 86948 sgd_solver.cpp:106] Iteration 71000, lr = 0.001
- I0117 11:14:19.942044 86948 solver.cpp:341] Iteration 71500, Testing net (#0)
- I0117 11:14:20.427345 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0117 11:14:20.427377 86948 solver.cpp:409] Test net output #1: loss = 14.2639 (* 1 = 14.2639 loss)
- I0117 11:14:37.532204 86948 solver.cpp:237] Iteration 71500, loss = 0.000676939
- I0117 11:14:37.532251 86948 solver.cpp:253] Train net output #0: loss = 0.000677105 (* 1 = 0.000677105 loss)
- I0117 11:14:37.879487 86948 sgd_solver.cpp:106] Iteration 71500, lr = 0.001
- I0117 11:43:06.430202 86948 solver.cpp:341] Iteration 72000, Testing net (#0)
- I0117 11:43:22.537875 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0117 11:43:22.537925 86948 solver.cpp:409] Test net output #1: loss = 15.7033 (* 1 = 15.7033 loss)
- I0117 11:43:34.772270 86948 solver.cpp:237] Iteration 72000, loss = 0.00116305
- I0117 11:43:34.772325 86948 solver.cpp:253] Train net output #0: loss = 0.00116322 (* 1 = 0.00116322 loss)
- I0117 11:43:35.127001 86948 sgd_solver.cpp:106] Iteration 72000, lr = 0.001
- I0117 12:13:05.521544 86948 solver.cpp:341] Iteration 72500, Testing net (#0)
- I0117 12:13:06.007657 86948 solver.cpp:409] Test net output #0: accuracy = 0.114
- I0117 12:13:06.007705 86948 solver.cpp:409] Test net output #1: loss = 14.6482 (* 1 = 14.6482 loss)
- I0117 12:13:19.264045 86948 solver.cpp:237] Iteration 72500, loss = 0.00117693
- I0117 12:13:19.264101 86948 solver.cpp:253] Train net output #0: loss = 0.0011771 (* 1 = 0.0011771 loss)
- I0117 12:13:19.566165 86948 sgd_solver.cpp:106] Iteration 72500, lr = 0.001
- I0117 12:40:20.751565 86948 solver.cpp:341] Iteration 73000, Testing net (#0)
- I0117 12:40:21.238452 86948 solver.cpp:409] Test net output #0: accuracy = 0.14
- I0117 12:40:21.238520 86948 solver.cpp:409] Test net output #1: loss = 13.8224 (* 1 = 13.8224 loss)
- I0117 12:40:44.118749 86948 solver.cpp:237] Iteration 73000, loss = 0.00160221
- I0117 12:40:44.118808 86948 solver.cpp:253] Train net output #0: loss = 0.00160238 (* 1 = 0.00160238 loss)
- I0117 12:40:44.492357 86948 sgd_solver.cpp:106] Iteration 73000, lr = 0.001
- I0117 13:05:49.983078 86948 solver.cpp:341] Iteration 73500, Testing net (#0)
- I0117 13:05:50.769421 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0117 13:05:50.769461 86948 solver.cpp:409] Test net output #1: loss = 14.2741 (* 1 = 14.2741 loss)
- I0117 13:06:13.503284 86948 solver.cpp:237] Iteration 73500, loss = 0.00136268
- I0117 13:06:13.503340 86948 solver.cpp:253] Train net output #0: loss = 0.00136284 (* 1 = 0.00136284 loss)
- I0117 13:06:13.805341 86948 sgd_solver.cpp:106] Iteration 73500, lr = 0.001
- I0117 13:34:08.023531 86948 solver.cpp:341] Iteration 74000, Testing net (#0)
- I0117 13:34:27.022164 86948 solver.cpp:409] Test net output #0: accuracy = 0.104
- I0117 13:34:27.022241 86948 solver.cpp:409] Test net output #1: loss = 14.4191 (* 1 = 14.4191 loss)
- I0117 13:34:39.493510 86948 solver.cpp:237] Iteration 74000, loss = 0.00152815
- I0117 13:34:39.493753 86948 solver.cpp:253] Train net output #0: loss = 0.00152831 (* 1 = 0.00152831 loss)
- I0117 13:34:39.854380 86948 sgd_solver.cpp:106] Iteration 74000, lr = 0.001
- I0117 14:02:55.896353 86948 solver.cpp:341] Iteration 74500, Testing net (#0)
- I0117 14:02:56.681279 86948 solver.cpp:409] Test net output #0: accuracy = 0.124
- I0117 14:02:56.681329 86948 solver.cpp:409] Test net output #1: loss = 15.249 (* 1 = 15.249 loss)
- I0117 14:03:17.415796 86948 solver.cpp:237] Iteration 74500, loss = 0.00171971
- I0117 14:03:17.415844 86948 solver.cpp:253] Train net output #0: loss = 0.00171988 (* 1 = 0.00171988 loss)
- I0117 14:03:17.761025 86948 sgd_solver.cpp:106] Iteration 74500, lr = 0.001
- I0117 14:32:28.479389 86948 solver.cpp:459] Snapshotting to binary proto file models/mv16f/mv16f1__iter_75000.caffemodel
- I0117 14:32:32.218113 86948 sgd_solver.cpp:273] Snapshotting solver state to binary proto file models/mv16f/mv16f1__iter_75000.solverstate
- I0117 14:32:32.261855 86948 solver.cpp:341] Iteration 75000, Testing net (#0)
- I0117 14:32:32.778828 86948 solver.cpp:409] Test net output #0: accuracy = 0.13
- I0117 14:32:32.778861 86948 solver.cpp:409] Test net output #1: loss = 14.905 (* 1 = 14.905 loss)
- I0117 14:32:50.592114 86948 solver.cpp:237] Iteration 75000, loss = 0.000816114
- I0117 14:32:50.592182 86948 solver.cpp:253] Train net output #0: loss = 0.000816279 (* 1 = 0.000816279 loss)
- I0117 14:32:50.894603 86948 sgd_solver.cpp:106] Iteration 75000, lr = 0.001
- I0117 15:02:30.998363 86948 solver.cpp:341] Iteration 75500, Testing net (#0)
- I0117 15:02:31.485143 86948 solver.cpp:409] Test net output #0: accuracy = 0.12
- I0117 15:02:31.485188 86948 solver.cpp:409] Test net output #1: loss = 14.9655 (* 1 = 14.9655 loss)
- I0117 15:02:52.363260 86948 solver.cpp:237] Iteration 75500, loss = 0.00145175
- I0117 15:02:52.363328 86948 solver.cpp:253] Train net output #0: loss = 0.00145191 (* 1 = 0.00145191 loss)
- I0117 15:02:52.684723 86948 sgd_solver.cpp:106] Iteration 75500, lr = 0.001
- I0117 15:32:25.288990 86948 solver.cpp:341] Iteration 76000, Testing net (#0)
- I0117 15:32:38.957864 86948 solver.cpp:409] Test net output #0: accuracy = 0.116
- I0117 15:32:38.957921 86948 solver.cpp:409] Test net output #1: loss = 14.0129 (* 1 = 14.0129 loss)
- I0117 15:32:51.730339 86948 solver.cpp:237] Iteration 76000, loss = 0.00122516
- I0117 15:32:51.730393 86948 solver.cpp:253] Train net output #0: loss = 0.00122533 (* 1 = 0.00122533 loss)
- I0117 15:32:52.032361 86948 sgd_solver.cpp:106] Iteration 76000, lr = 0.001
- I0117 15:58:04.766777 86948 solver.cpp:341] Iteration 76500, Testing net (#0)
- I0117 15:58:05.253283 86948 solver.cpp:409] Test net output #0: accuracy = 0.128
- I0117 15:58:05.253428 86948 solver.cpp:409] Test net output #1: loss = 14.5003 (* 1 = 14.5003 loss)
- I0117 15:58:36.863298 86948 solver.cpp:237] Iteration 76500, loss = 0.00182002
- I0117 15:58:36.863498 86948 solver.cpp:253] Train net output #0: loss = 0.00182018 (* 1 = 0.00182018 loss)
- I0117 15:58:37.219053 86948 sgd_solver.cpp:106] Iteration 76500, lr = 0.001
- I0117 16:26:28.663875 86948 solver.cpp:341] Iteration 77000, Testing net (#0)
- I0117 16:26:29.149608 86948 solver.cpp:409] Test net output #0: accuracy = 0.158
- I0117 16:26:29.149651 86948 solver.cpp:409] Test net output #1: loss = 12.9062 (* 1 = 12.9062 loss)
- I0117 16:26:49.170310 86948 solver.cpp:237] Iteration 77000, loss = 0.00109042
- I0117 16:26:49.170364 86948 solver.cpp:253] Train net output #0: loss = 0.00109059 (* 1 = 0.00109059 loss)
- I0117 16:26:49.472210 86948 sgd_solver.cpp:106] Iteration 77000, lr = 0.001
- I0117 16:53:34.338349 86948 solver.cpp:341] Iteration 77500, Testing net (#0)
- I0117 16:53:34.824774 86948 solver.cpp:409] Test net output #0: accuracy = 0.122
- I0117 16:53:34.824815 86948 solver.cpp:409] Test net output #1: loss = 14.0104 (* 1 = 14.0104 loss)
- I0117 16:53:49.258014 86948 solver.cpp:237] Iteration 77500, loss = 0.00227474
- I0117 16:53:49.258074 86948 solver.cpp:253] Train net output #0: loss = 0.00227491 (* 1 = 0.00227491 loss)
- I0117 16:53:49.604596 86948 sgd_solver.cpp:106] Iteration 77500, lr = 0.001
- I0117 17:23:28.599503 86948 solver.cpp:341] Iteration 78000, Testing net (#0)
- I0117 17:23:42.401674 86948 solver.cpp:409] Test net output #0: accuracy = 0.134
- I0117 17:23:42.401721 86948 solver.cpp:409] Test net output #1: loss = 13.5035 (* 1 = 13.5035 loss)
- I0117 17:23:59.519179 86948 solver.cpp:237] Iteration 78000, loss = 0.00130983
- I0117 17:23:59.519390 86948 solver.cpp:253] Train net output #0: loss = 0.00131 (* 1 = 0.00131 loss)
- I0117 17:23:59.820915 86948 sgd_solver.cpp:106] Iteration 78000, lr = 0.001
- I0117 17:52:58.458806 86948 solver.cpp:341] Iteration 78500, Testing net (#0)
- I0117 17:52:58.944988 86948 solver.cpp:409] Test net output #0: accuracy = 0.132
- I0117 17:52:58.945024 86948 solver.cpp:409] Test net output #1: loss = 14.1663 (* 1 = 14.1663 loss)
- I0117 17:53:19.896749 86948 solver.cpp:237] Iteration 78500, loss = 0.00147733
- I0117 17:53:19.896806 86948 solver.cpp:253] Train net output #0: loss = 0.0014775 (* 1 = 0.0014775 loss)
- I0117 17:53:19.896831 86948 sgd_solver.cpp:106] Iteration 78500, lr = 0.001
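The test-phase entries in this log follow a regular glog pattern (an `Iteration N, Testing net` line followed by `Test net output #0: accuracy = …` and `Test net output #1: loss = …`). A minimal sketch of how one might extract (iteration, accuracy, loss) triples from such a log for plotting — the sample lines and the generator name are illustrative, not part of Caffe itself:

```python
import re

# Match the three kinds of test-phase lines seen in the log above.
test_iter_re = re.compile(r"solver\.cpp:341\] Iteration (\d+), Testing net")
acc_re = re.compile(r"Test net output #0: accuracy = ([0-9.]+)")
loss_re = re.compile(r"Test net output #1: loss = ([0-9.]+)")

def parse_test_metrics(lines):
    """Yield (iteration, accuracy, loss) tuples from Caffe glog lines."""
    it = acc = None
    for line in lines:
        m = test_iter_re.search(line)
        if m:
            it = int(m.group(1))
            continue
        m = acc_re.search(line)
        if m and it is not None:
            acc = float(m.group(1))
            continue
        m = loss_re.search(line)
        if m and it is not None and acc is not None:
            yield (it, acc, float(m.group(1)))
            it = acc = None  # reset for the next test block

# Example on three lines copied from the log:
sample = [
    "I0116 15:40:51.166714 86948 solver.cpp:341] Iteration 51000, Testing net (#0)",
    "I0116 15:40:51.652114 86948 solver.cpp:409] Test net output #0: accuracy = 0.11",
    "I0116 15:40:51.652145 86948 solver.cpp:409] Test net output #1: loss = 18.104 (* 1 = 18.104 loss)",
]
print(list(parse_test_metrics(sample)))  # [(51000, 0.11, 18.104)]
```

Plotting these triples makes the pattern in this log easy to see: train loss is near zero while test accuracy stays around 0.1–0.15 and test loss stays above 14, i.e. the net memorizes the training set without generalizing.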