Guest User
Jan 29th, 2019

[rraja@bzn build]$ gedit out/mon.b.log
2019-01-29 15:26:44.079 7f860f2c31c0 10 public_network
2019-01-29 15:26:44.079 7f860f2c31c0 10 public_addr
2019-01-29 15:26:44.097 7f860f2c31c0 1 imported monmap:
epoch 0
fsid 6ed38227-b5fb-47b5-8017-c3f6952380f8
last_changed 2019-01-29 15:26:43.871480
created 2019-01-29 15:26:43.871480
0: [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] mon.a
1: [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon.b
2: [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] mon.c

2019-01-29 15:26:44.097 7f860f2c31c0 0 /home/rraja/git/ceph/build/bin/ceph-mon: set fsid to 3b02750c-f104-4301-aa14-258d2b37f104
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option compression = kNoCompression
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option write_buffer_size = 33554432
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option compression = kNoCompression
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option write_buffer_size = 33554432
2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: RocksDB version: 5.17.2

2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: Compile date Jan 28 2019
2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: DB SUMMARY

2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:

2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db:

2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.error_if_exists: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.create_if_missing: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.paranoid_checks: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.env: 0x555a504171a0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.info_log: 0x555a51f49380
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_file_opening_threads: 16
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.statistics: (nil)
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_fsync: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_log_file_size: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.log_file_time_to_roll: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.keep_log_file_num: 1000
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.recycle_log_file_num: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_fallocate: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_mmap_reads: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_mmap_writes: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_direct_reads: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.create_missing_column_families: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.db_log_dir:
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.table_cache_numshardbits: 6
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_subcompactions: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_flushes: -1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.WAL_ttl_seconds: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.WAL_size_limit_MB: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.is_fd_close_on_exec: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.advise_random_on_open: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.db_write_buffer_size: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_buffer_manager: 0x555a51f4a0f0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_adaptive_mutex: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.rate_limiter: (nil)
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_recovery_mode: 2
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_thread_tracking: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_pipelined_write: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.row_cache: None
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_filter: None
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_ingest_behind: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.preserve_deletes: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.two_write_queues: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.manual_wal_flush: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_jobs: 2
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_compactions: -1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.delayed_write_rate : 16777216
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_total_wal_size: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.stats_dump_period_sec: 600
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_open_files: -1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.bytes_per_sync: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_bytes_per_sync: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.compaction_readahead_size: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Compression algorithms supported:
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZSTD supported: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kXpressCompression supported: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kLZ4HCCompression supported: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kLZ4Compression supported: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kBZip2Compression supported: 0
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZlibCompression supported: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kSnappyCompression supported: 1
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Fast CRC32 supported: Supported on x86
2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:230] Creating manifest 1

2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001

2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:

2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.merge_operator:
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_filter: None
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_filter_factory: None
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_factory: SkipListFactory
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.table_factory: BlockBasedTable
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555a51ba2b00)
cache_index_and_filter_blocks: 1
cache_index_and_filter_blocks_with_high_priority: 1
pin_l0_filter_and_index_blocks_in_cache: 1
pin_top_level_index_and_filter: 1
index_type: 0
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x555a52a61140
block_cache_name: BinnedLRUCache
block_cache_options:
capacity : 536870912
num_shard_bits : 4
strict_capacity_limit : 0
high_pri_pool_ratio: 0.000
block_cache_compressed: (nil)
persistent_cache: (nil)
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
index_block_restart_interval: 1
metadata_block_size: 4096
partition_filters: 0
use_delta_encoding: 1
filter_policy: rocksdb.BuiltinBloomFilter
whole_key_filtering: 1
verify_compression: 0
read_amp_bytes_per_bit: 0
format_version: 2
enable_index_compression: 1
block_align: 0

2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.write_buffer_size: 33554432
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_write_buffer_number: 2
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression: NoCompression
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression: Disabled
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.prefix_extractor: nullptr
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.num_levels: 7
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.window_bits: -14
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.level: 32767
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.strategy: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.enabled: false
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.target_file_size_base: 67108864
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.target_file_size_multiplier: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.arena_block_size: 4194304
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.disable_auto_compactions: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.table_properties_collectors:
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.inplace_update_support: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.inplace_update_num_locks: 10000
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_huge_page_size: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bloom_locality: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_successive_merges: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.optimize_filters_for_hits: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.paranoid_file_checks: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.force_consistency_checks: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.report_bg_io_stats: 0
2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.ttl: 0
2019-01-29 15:26:44.123 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0

2019-01-29 15:26:44.123 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0

2019-01-29 15:26:44.133 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x555a529e5600
2019-01-29 15:26:44.133 7f860f2c31c0 5 adding auth protocol: cephx
2019-01-29 15:26:44.133 7f860f2c31c0 5 adding auth protocol: cephx
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
2019-01-29 15:26:44.136 7f860f2c31c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/keyring
2019-01-29 15:26:44.136 7f860f2c31c0 10 mon.b@-1(probing) e0 extract_save_mon_key moving mon. key to separate keyring
2019-01-29 15:26:44.145 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl.cc:365] Shutdown: canceling all background work
2019-01-29 15:26:44.145 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl.cc:521] Shutdown complete
2019-01-29 15:26:44.145 7f860f2c31c0 0 /home/rraja/git/ceph/build/bin/ceph-mon: created monfs at /home/rraja/git/ceph/build/dev/mon.b for mon.b
2019-01-29 15:26:44.507 7f39deeb51c0 0 ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev), process ceph-mon, pid 613920
2019-01-29 15:26:44.545 7f39deeb51c0 0 load: jerasure load: lrc load: isa
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
2019-01-29 15:26:44.546 7f39deeb51c0 1 rocksdb: do_open column families: [default]
2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: RocksDB version: 5.17.2

2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compile date Jan 28 2019
2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: DB SUMMARY

2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: CURRENT file: CURRENT

2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: IDENTITY file: IDENTITY

2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes

2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:
  275.  
  276. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db: 000003.log size: 1091 ;
  277.  
  278. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.error_if_exists: 0
  279. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_if_missing: 0
  280. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.paranoid_checks: 1
  281. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.env: 0x563938d121a0
  282. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.info_log: 0x563939e69f40
  283. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_file_opening_threads: 16
  284. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.statistics: (nil)
  285. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_fsync: 0
  286. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_log_file_size: 0
  287. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
  288. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.log_file_time_to_roll: 0
  289. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.keep_log_file_num: 1000
  290. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.recycle_log_file_num: 0
  291. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_fallocate: 1
  292. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_reads: 0
  293. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_writes: 0
  294. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_reads: 0
  295. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
  296. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_missing_column_families: 0
  297. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_log_dir:
  298. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
  299. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.table_cache_numshardbits: 6
  300. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_subcompactions: 1
  301. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_flushes: -1
  302. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_ttl_seconds: 0
  303. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_size_limit_MB: 0
  304. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
  305. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.is_fd_close_on_exec: 1
  306. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.advise_random_on_open: 1
  307. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_write_buffer_size: 0
  308. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_buffer_manager: 0x563939e6a720
  309. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
  310. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
  311. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
  312. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_adaptive_mutex: 0
  313. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.rate_limiter: (nil)
  314. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
  315. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_recovery_mode: 2
  316. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_thread_tracking: 0
  317. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_pipelined_write: 0
  318. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
  319. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
  320. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
  321. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
  322. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.row_cache: None
  323. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_filter: None
  324. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
  325. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_ingest_behind: 0
  326. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.preserve_deletes: 0
  327. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.two_write_queues: 0
  328. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manual_wal_flush: 0
  329. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_jobs: 2
  330. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_compactions: -1
  331. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
  332. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
  333. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delayed_write_rate : 16777216
  334. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_total_wal_size: 0
  335. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
  336. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.stats_dump_period_sec: 600
  337. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_open_files: -1
  338. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.bytes_per_sync: 0
  339. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_bytes_per_sync: 0
  340. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.compaction_readahead_size: 0
  341. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compression algorithms supported:
  342. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
  343. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTD supported: 0
  344. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kXpressCompression supported: 0
  345. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4HCCompression supported: 1
  346. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4Compression supported: 1
  347. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kBZip2Compression supported: 0
  348. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZlibCompression supported: 1
  349. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kSnappyCompression supported: 1
  350. 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Fast CRC32 supported: Supported on x86
  351. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
  352.  
  353. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
  354.  
  355. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
  356. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.merge_operator:
  357. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter: None
  358. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter_factory: None
  359. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_factory: SkipListFactory
  360. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_factory: BlockBasedTable
  361. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563939ac2ab0)
  362. cache_index_and_filter_blocks: 1
  363. cache_index_and_filter_blocks_with_high_priority: 1
  364. pin_l0_filter_and_index_blocks_in_cache: 1
  365. pin_top_level_index_and_filter: 1
  366. index_type: 0
  367. hash_index_allow_collision: 1
  368. checksum: 1
  369. no_block_cache: 0
  370. block_cache: 0x56393a9752a0
  371. block_cache_name: BinnedLRUCache
  372. block_cache_options:
  373. capacity : 536870912
  374. num_shard_bits : 4
  375. strict_capacity_limit : 0
  376. high_pri_pool_ratio: 0.000
  377. block_cache_compressed: (nil)
  378. persistent_cache: (nil)
  379. block_size: 4096
  380. block_size_deviation: 10
  381. block_restart_interval: 16
  382. index_block_restart_interval: 1
  383. metadata_block_size: 4096
  384. partition_filters: 0
  385. use_delta_encoding: 1
  386. filter_policy: rocksdb.BuiltinBloomFilter
  387. whole_key_filtering: 1
  388. verify_compression: 0
  389. read_amp_bytes_per_bit: 0
  390. format_version: 2
  391. enable_index_compression: 1
  392. block_align: 0
  393.  
  394. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.write_buffer_size: 33554432
  395. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number: 2
  396. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression: NoCompression
  397. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression: Disabled
  398. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.prefix_extractor: nullptr
  399. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
  400. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.num_levels: 7
  401. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
  402. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
  403. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
  404. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
  405. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
  406. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
  407. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  408. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
  409. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.window_bits: -14
  410. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.level: 32767
  411. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.strategy: 0
  412. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
  413. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
  414. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.enabled: false
  415. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
  416. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
  417. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
  418. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_base: 67108864
  419. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_multiplier: 1
  420. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
  421. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
  422. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
  423. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
  424. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
  425. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
  426. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
  427. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
  428. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
  429. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
  430. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
  431. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
  432. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.arena_block_size: 4194304
  433. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
  434. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
  435. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
  436. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.disable_auto_compactions: 0
  437. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
  438. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
  439. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
  440. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
  441. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
  442. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
  443. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
  444. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
  445. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
  446. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
  447. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
  448. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_properties_collectors:
  449. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_support: 0
  450. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_num_locks: 10000
  451. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
  452. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_huge_page_size: 0
  453. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bloom_locality: 0
  454. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_successive_merges: 0
  455. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.optimize_filters_for_hits: 0
  456. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.paranoid_file_checks: 0
  457. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.force_consistency_checks: 0
  458. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.report_bg_io_stats: 0
  459. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.ttl: 0
  460. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
  461.  
  462. 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
  463.  
  464. 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804548935, "job": 1, "event": "recovery_started", "log_files": [3]}
  465. 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
  466. 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804555547, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1849, "table_properties": {"data_size": 1103, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 980, "raw_average_value_size": 196, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
  467. 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
  468.  
  469. 2019-01-29 15:26:44.563 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804564214, "job": 1, "event": "recovery_finished"}
  470. 2019-01-29 15:26:44.574 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x56393a906800
  471. 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap
  472. 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap found mkfs monmap
  473. 2019-01-29 15:26:44.574 7f39deeb51c0 10 main monmap:
  474. {
  475. "epoch": 0,
  476. "fsid": "3b02750c-f104-4301-aa14-258d2b37f104",
  477. "modified": "2019-01-29 15:26:43.871480",
  478. "created": "2019-01-29 15:26:43.871480",
  479. "features": {
  480. "persistent": [],
  481. "optional": []
  482. },
  483. "mons": [
  484. {
  485. "rank": 0,
  486. "name": "a",
  487. "public_addrs": {
  488. "addrvec": [
  489. {
  490. "type": "v2",
  491. "addr": "10.215.99.125:40363",
  492. "nonce": 0
  493. },
  494. {
  495. "type": "v1",
  496. "addr": "10.215.99.125:40364",
  497. "nonce": 0
  498. }
  499. ]
  500. },
  501. "addr": "10.215.99.125:40364/0",
  502. "public_addr": "10.215.99.125:40364/0"
  503. },
  504. {
  505. "rank": 1,
  506. "name": "b",
  507. "public_addrs": {
  508. "addrvec": [
  509. {
  510. "type": "v2",
  511. "addr": "10.215.99.125:40365",
  512. "nonce": 0
  513. },
  514. {
  515. "type": "v1",
  516. "addr": "10.215.99.125:40366",
  517. "nonce": 0
  518. }
  519. ]
  520. },
  521. "addr": "10.215.99.125:40366/0",
  522. "public_addr": "10.215.99.125:40366/0"
  523. },
  524. {
  525. "rank": 2,
  526. "name": "c",
  527. "public_addrs": {
  528. "addrvec": [
  529. {
  530. "type": "v2",
  531. "addr": "10.215.99.125:40367",
  532. "nonce": 0
  533. },
  534. {
  535. "type": "v1",
  536. "addr": "10.215.99.125:40368",
  537. "nonce": 0
  538. }
  539. ]
  540. },
  541. "addr": "10.215.99.125:40368/0",
  542. "public_addr": "10.215.99.125:40368/0"
  543. }
  544. ]
  545. }
  546.  
  547. 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
  548. 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
  549. 2019-01-29 15:26:44.575 7f39deeb51c0 0 starting mon.b rank 1 at public addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] at bind addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
  550. 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] learned_addr learned my addr [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] (peer_addr_for_me v2:10.215.99.125:40365/0)
  551. 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] _finish_bind bind my_addrs is [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0]
  552. 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  553. 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  554. 2019-01-29 15:26:44.576 7f39deeb51c0 0 starting mon.b rank 1 at [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
  555. 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  556. 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  557. 2019-01-29 15:26:44.577 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  558. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  559. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  560. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  561. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  562. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
  563. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  564. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  565. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  566. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  567. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  568. 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  569. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  570. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
  571. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  572. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  573. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  574. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  575. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  576. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  577. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  578. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  579. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
580. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  581. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  582. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  583. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  584. 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
  585. 2019-01-29 15:26:44.580 7f39deeb51c0 1 mon.b@-1(probing) e0 preinit fsid 3b02750c-f104-4301-aa14-258d2b37f104
  586. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 check_fsid cluster_uuid contains '3b02750c-f104-4301-aa14-258d2b37f104'
  587. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
  588. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 calc_quorum_requirements required_features 0
  589. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 required_features 0
  590. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 has_ever_joined = 0
  591. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_last_committed_floor 0
  592. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 init_paxos
  593. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
  594. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init
  595. 2019-01-29 15:26:44.580 7f39deeb51c0 5 mon.b@-1(probing).mds e0 Unable to load 'last_metadata'
  596. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).health init
  597. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config init
  598. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos
  599. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
  600. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) refresh
  601. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) refresh
  602. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) refresh
  603. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos
  604. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos version 0 summary v 0
  605. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) refresh
  606. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) refresh
  607. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).auth v0 update_from_paxos
  608. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) refresh
  609. 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config load_config got 0 keys
  610. 2019-01-29 15:26:44.580 7f39deeb51c0 20 mon.b@-1(probing).config load_config config map:
  611. {
  612. "global": {},
  613. "by_type": {},
  614. "by_id": {}
  615. }
  616.  
  617. 2019-01-29 15:26:44.584 7f39deeb51c0 20 mgrc handle_mgr_map mgrmap(e 0) v1
  618. 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Got map version 0
  619. 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Active mgr is now
  620. 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc reconnect No active mgr available yet
  621. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) refresh
  622. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat 0
  623. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat check_subs
  624. 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).mgrstat update_logger
  625. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) refresh
  626. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).health update_from_paxos
  627. 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).health dump:{
  628. "quorum_health": {},
  629. "leader_health": {}
  630. }
  631.  
  632. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) refresh
  633. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) post_refresh
  634. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) post_refresh
  635. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) post_refresh
  636. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) post_refresh
  637. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) post_refresh
  638. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) post_refresh
  639. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) post_refresh
  640. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) post_refresh
  641. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) post_refresh
  642. 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
  643. 2019-01-29 15:26:44.584 7f39deeb51c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/dev/mon.b/keyring
  644. 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] start start
  645. 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- start start
  646. 2019-01-29 15:26:44.585 7f39deeb51c0 2 mon.b@-1(probing) e0 init
  647. 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
  648. 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
  649. 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 bootstrap
  650. 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_reset_requester
  651. 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 unregister_cluster_logger - not registered
  652. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 cancel_probe_timeout (none scheduled)
  653. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
  654. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 monmap e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  655. 2019-01-29 15:26:44.586 7f39deeb51c0 0 mon.b@-1(probing) e0 my rank is now 1 (was -1)
  656. 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] shutdown_connections
  657. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 _reset
  658. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
  659. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 timecheck_finish
  660. 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_tick_stop
  661. 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_interval_stop
  662. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_event_cancel
  663. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_reset
  664. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  665. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
  666. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
  667. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(logm 0..0) restart
  668. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
  669. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(auth 0..0) restart
  670. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
  671. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
  672. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(health 0..0) restart
  673. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(config 0..0) restart
  674. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
  675. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 reset_probe_timeout 0x56393ab65d70 after 2 seconds
  676. 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 probing other monitors
  677. 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916840
  678. 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916840 con 0x563939c9c900
  679. 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916b00
  680. 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916b00 con 0x563939c9cd80
  681. 2019-01-29 15:26:44.586 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9cd80 msgr2=0x56393ab9a600 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  682. 2019-01-29 15:26:44.586 7f39c617f700 10 mon.b@1(probing) e0 ms_handle_refused 0x563939c9cd80 v2:10.215.99.125:40367/0
  683. 2019-01-29 15:26:44.586 7f39c417b700 10 mon.b@1(probing) e0 ms_get_authorizer for mon
  684. 2019-01-29 15:26:44.586 7f39c417b700 10 cephx: build_service_ticket service mon secret_id 18446744073709551615 ticket_info.ticket.name=mon.
  685. 2019-01-29 15:26:44.587 7f39c417b700 10 In get_auth_session_handler for protocol 2
  686. 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 14723405194060298632
  687. 2019-01-29 15:26:44.587 7f39c417b700 20 Putting signature in client message(seq # 1): sig = 14723405194060298632
  688. 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 2859661691 middle_crc = 0 data_crc = 0 sig = 12381238761605199092
  689. 2019-01-29 15:26:44.587 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 ==== 58+0+0 (2859661691 0 0) 0x56393a917080 con 0x563939c9c900
  690. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _ms_dispatch new session 0x56393abc0000 MonSession(mon.0 [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  691. 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(probing) e0 _ms_dispatch setting monitor caps on this connection
  692. 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
  693. 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
  694. 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
  695. 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
  696. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6
  697. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_probe mon.0 v2:10.215.99.125:40363/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 features 4611087854031667199
  698. 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917340 con 0x563939c9c900
  699. 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 2632229567743115917
  700. 2019-01-29 15:26:44.588 7f39c417b700 20 Putting signature in client message(seq # 2): sig = 2632229567743115917
  701. 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 137113040 middle_crc = 0 data_crc = 0 sig = 5942540075320245331
  702. 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (137113040 0 0) 0x56393a916840 con 0x563939c9c900
  703. 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
  704. 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
  705. 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
  706. 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
  707. 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
  708. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
  709. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_reply mon.0 v2:10.215.99.125:40363/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
  710. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  711. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 peer name is a
  712. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 mon.a is outside the quorum
  713. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 outside_quorum now a,b, need 2
  714. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 that's enough to form a new quorum, calling election
  715. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 start_election
  716. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _reset
  717. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 cancel_probe_timeout 0x56393ab65d70
  718. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 timecheck_finish
  719. 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_tick_stop
  720. 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_interval_stop
  721. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_event_cancel
  722. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_reset
  723. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  724. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
  725. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
  726. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
  727. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
  728. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
  729. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
  730. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
  731. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
  732. 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
  733. 2019-01-29 15:26:44.588 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
  734. 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 -- 0x56393abc06c0 con 0x563939d41600
  735. 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(electing).elector(0) start -- can i be leader?
  736. 2019-01-29 15:26:44.588 7f39c617f700 1 mon.b@1(electing).elector(0) init, first boot, initializing epoch at 1
  737. 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 415793505725448275
  738. 2019-01-29 15:26:44.605 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
  739. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0f800
  740. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0f800 con 0x563939c9c900
  741. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0fb00
  742. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0fb00 con 0x563939c9cd80
  743. 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 10881007314159201096
  744. 2019-01-29 15:26:44.605 7f39c417b700 20 Putting signature in client message(seq # 3): sig = 10881007314159201096
  745. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 ==== 0+0+0 (0 0 0) 0x56393abc06c0 con 0x563939d41600
  746. 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0d80 MonSession(mon.1 [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  747. 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
  748. 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  749. 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  750. 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:44.606661 lease_expire=0.000000 has v0 lc 0
  751. 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  752. 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0f500 con 0x563939c9c900
  753. 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
  754. 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  755. 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
  756. 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
  757. 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
  758. 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
  759. 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
  760. 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
  761. 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.0
  762. 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  763. 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) defer to 0
  764. 2019-01-29 15:26:44.606 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
  765. 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- ?+0 0x56393abd4000
  766. 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- 0x56393abd4000 con 0x563939c9c900
  767. 2019-01-29 15:26:44.606 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 3541336743 middle_crc = 0 data_crc = 0 sig = 8903611187062716984
  768. 2019-01-29 15:26:44.606 7f39c417b700 20 Putting signature in client message(seq # 4): sig = 8903611187062716984
  769. 2019-01-29 15:26:44.728 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=35 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53000/0 addrs are 145
  770. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
  771. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
  772. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 18425495964075312649
  773. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
  774. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
  775. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 18425495964075312650 expecting 18425495964075312650
  776. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
  777. 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
  778. 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=CLOSED pgs=0 cs=0 l=0).replace stop myself to swap existing
  779. 2019-01-29 15:26:44.729 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_reset 0x563939c9f600 v2:10.215.99.125:40367/0
  780. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
  781. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
  782. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 12808771302250315903
  783. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
  784. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
  785. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 12808771302250315904 expecting 12808771302250315904
  786. 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
  787. 2019-01-29 15:26:44.729 7f39c397a700 10 In get_auth_session_handler for protocol 2
  788. 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 4598065285107427218
  789. 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 1): sig = 4598065285107427218
  790. 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 12491993532019861601
  791. 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 2): sig = 12491993532019861601
  792. 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 4581398896165078038
  793. 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a917600 con 0x563939c9cd80
  794. 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0fc0 MonSession(mon.2 [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  795. 2019-01-29 15:26:44.730 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
  796. 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  797. 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  798. 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
  799. 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
  800. 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
  801. 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
  802. 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917b80 con 0x563939c9cd80
  803. 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 17022378831273095551
  804. 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 3): sig = 17022378831273095551
  805. 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1672072532 middle_crc = 0 data_crc = 0 sig = 8103692841582008270
2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (1672072532 0 0) 0x56393a917b80 con 0x563939c9cd80
2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:44.730 7f39c617f700 20 allow all
2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 peer name is c
2019-01-29 15:26:44.735 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4384171504794788302
2019-01-29 15:26:44.735 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0fb00 con 0x563939c9cd80
2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:44.735 7f39c617f700 20 allow all
2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:44.735 7f39c617f700 20 allow all
2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.2
2019-01-29 15:26:44.735 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) no, we already acked 0
2019-01-29 15:26:44.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9fa80 0x56393ab9b800 :53002 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=36 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53002/0 addrs are 145
2019-01-29 15:26:44.868 7f39c3179700 10 In get_auth_session_handler for protocol 0
2019-01-29 15:26:44.868 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393abc1b00 con 0x563939c9fa80
2019-01-29 15:26:44.868 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc1200 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
2019-01-29 15:26:44.868 7f39c617f700 20 mon.b@1(electing) e0 caps
2019-01-29 15:26:44.868 7f39c617f700 5 mon.b@1(electing) e0 waitlisting message auth(proto 0 30 bytes epoch 0) v1
2019-01-29 15:26:49.585 7f39c8984700 11 mon.b@1(electing) e0 tick
2019-01-29 15:26:49.585 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.585 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.586781 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.618 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 4020688038 middle_crc = 0 data_crc = 0 sig = 6749900040655150956
2019-01-29 15:26:49.618 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 4 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 victory 2) v7 ==== 41889+0+0 (4020688038 0 0) 0x56393abd4000 con 0x563939c9c900
2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.618 7f39c617f700 20 allow all
2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.618 7f39c617f700 20 allow all
2019-01-29 15:26:49.619 7f39c617f700 5 mon.b@1(electing).elector(1) handle_victory from mon.0 quorum_features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
2019-01-29 15:26:49.619 7f39c617f700 10 mon.b@1(electing).elector(1) bump_epoch 1 to 2
2019-01-29 15:26:49.623 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 2611882610 middle_crc = 0 data_crc = 0 sig = 10249081017414072692
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 join_election
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 _reset
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.625 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626824 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626971 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon) e0 lose_election, epoch 2 leader is mon0 quorum is 0,1 features are 4611087854031667199 mon_features are mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) peon_init -- i am a peon
2019-01-29 15:26:49.626 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627225 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627327 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) election_finished
2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active - not active
2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon) e0 apply_quorum_to_compatset_features
2019-01-29 15:26:49.626 7f39c617f700 1 mon.b@1(peon) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 calc_quorum_requirements required_features 549755813888
2019-01-29 15:26:49.632 7f39c617f700 5 mon.b@1(peon) e0 apply_monmap_to_compatset_features
2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 resend_routed_requests
2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 register_cluster_logger
2019-01-29 15:26:49.634 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 5 ==== paxos(collect lc 0 fc 0 pn 100 opn 0) v4 ==== 84+0+0 (2611882610 0 0) 0x563939b0f800 con 0x563939c9c900
2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.634 7f39c617f700 20 allow all
2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.634 7f39c617f700 20 allow all
2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) handle_collect paxos(collect lc 0 fc 0 pn 100 opn 0) v4
2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) accepting pn 100 from 0
2019-01-29 15:26:49.639 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(last lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4300 con 0x563939c9c900
2019-01-29 15:26:49.639 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 237159531 middle_crc = 0 data_crc = 0 sig = 17831234451164454257
2019-01-29 15:26:49.639 7f39c417b700 20 Putting signature in client message(seq # 5): sig = 17831234451164454257
2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 4255564515 middle_crc = 0 data_crc = 0 sig = 9698958856354677651
2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 6 ==== paxos(lease lc 0 fc 0 pn 0 opn 0) v4 ==== 84+0+0 (4255564515 0 0) 0x56393abd4300 con 0x563939c9c900
2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.641 7f39c617f700 20 allow all
2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.641 7f39c617f700 20 allow all
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_lease on 0 now 2019-01-29 15:26:54.641538
2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(lease_ack lc 0 fc 0 pn 0 opn 0) v4 -- 0x56393abd4600 con 0x563939c9c900
2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon).paxos(paxos active c 0..0) reset_lease_timeout - setting timeout event
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active
2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(mdsmap 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active
2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(osdmap 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 update_logger
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 take_all_failures on 0 osds
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 start_mapping no pools, no mapping job
2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 3188025122 middle_crc = 0 data_crc = 0 sig = 13621511732933224171
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active
2019-01-29 15:26:49.641 7f39c417b700 20 Putting signature in client message(seq # 6): sig = 13621511732933224171
2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(logm 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.641 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.642816 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(monmap 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).monmap v0 apply_mon_features wait for service to be writeable
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(auth 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643006 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).auth v0 AuthMonitor::on_active()
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgr 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgrstat 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 20 mon.b@1(peon).mgrstat update_logger
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(health 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active
2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(config 0..0) _active we are not the leader, hence we propose nothing!
2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643134 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
2019-01-29 15:26:49.652 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 359680839 middle_crc = 0 data_crc = 0 sig = 14961307058218807505
2019-01-29 15:26:49.653 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 7 ==== paxos(begin lc 0 fc 0 pn 100 opn 0) v4 ==== 2292+0+0 (359680839 0 0) 0x56393abd4600 con 0x563939c9c900
2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.653 7f39c617f700 20 allow all
2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.653 7f39c617f700 20 allow all
2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_begin paxos(begin lc 0 fc 0 pn 100 opn 0) v4
2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) accepting value for 1 pn 100
2019-01-29 15:26:49.657 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(accept lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4900 con 0x563939c9c900
2019-01-29 15:26:49.657 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 3516937211 middle_crc = 0 data_crc = 0 sig = 18198979658301600163
2019-01-29 15:26:49.657 7f39c417b700 20 Putting signature in client message(seq # 7): sig = 18198979658301600163
2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 1185258935604909951
2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 4 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a916b00 con 0x563939c9cd80
2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
2019-01-29 15:26:49.736 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:49.736 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.736 7f39c617f700 20 allow all
2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b quorum 0,1 paxos( fc 0 lc 0 ) new) v6 -- 0x56393affe000 con 0x563939c9cd80
2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 3631419306 middle_crc = 0 data_crc = 0 sig = 17101328230604225658
2019-01-29 15:26:49.736 7f39c397a700 20 Putting signature in client message(seq # 4): sig = 17101328230604225658
2019-01-29 15:26:49.754 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4909251354610610373
2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 5 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x56393abd5500 con 0x563939c9cd80
2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.754 7f39c617f700 20 allow all
2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.754 7f39c617f700 20 allow all
2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) handle_propose from mon.2
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).elector(2) handle_propose required features 549755813888 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) got propose from old epoch, quorum is 0,1, mon.2 must have just started
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 start_election
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 _reset
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 cancel_probe_timeout (none scheduled)
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_tick_stop
2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_interval_stop
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_event_cancel
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_reset
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) restart -- canceling timeouts
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755397 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755436 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) restart
2019-01-29 15:26:49.754 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 -- 0x56393b000000 con 0x563939d41600
2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(electing).elector(2) start -- can i be leader?
2019-01-29 15:26:49.754 7f39c617f700 1 mon.b@1(electing).elector(2) init, last seen epoch 2
2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(electing).elector(2) bump_epoch 2 to 3
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 join_election
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 _reset
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759261 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759298 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
2019-01-29 15:26:49.758 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4c00
2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4c00 con 0x563939c9c900
2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4f00
2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4f00 con 0x563939c9cd80
2019-01-29 15:26:49.759 7f39c417b700 10 _calc_signature seq 8 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 12490028364350923317
2019-01-29 15:26:49.759 7f39c417b700 20 Putting signature in client message(seq # 8): sig = 12490028364350923317
2019-01-29 15:26:49.759 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 13778264798706442571
2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 ==== 0+0+0 (0 0 0) 0x56393b000000 con 0x563939d41600
2019-01-29 15:26:49.759 7f39c397a700 20 Putting signature in client message(seq # 5): sig = 13778264798706442571
2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:49.759 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.760231 lease_expire=0.000000 has v0 lc 0
2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:49.764 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2637561887 middle_crc = 0 data_crc = 0 sig = 10520589114785615889
2019-01-29 15:26:49.764 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 6 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 3) v7 ==== 1190+0+0 (2637561887 0 0) 0x56393abd4f00 con 0x563939c9cd80
2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.764 7f39c617f700 20 allow all
2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:49.764 7f39c617f700 20 allow all
  1090. 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) handle_ack from mon.2
  1091. 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
  1092. 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk reading from fd=30 : Unknown error -104
  1093. 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed
  1094. 2019-01-29 15:26:50.038 7f39c417b700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).handle_message read tag failed
  1095. 2019-01-29 15:26:50.038 7f39c417b700 0 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).fault initiating reconnect
  1096. 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1097. 2019-01-29 15:26:50.038 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1098. 2019-01-29 15:26:50.239 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1099. 2019-01-29 15:26:50.239 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1100. 2019-01-29 15:26:50.639 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1101. 2019-01-29 15:26:50.640 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1102. 2019-01-29 15:26:51.441 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1103. 2019-01-29 15:26:51.441 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1104. 2019-01-29 15:26:53.043 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1105. 2019-01-29 15:26:53.043 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1106. 2019-01-29 15:26:54.586 7f39c8984700 11 mon.b@1(electing) e0 tick
  1107. 2019-01-29 15:26:54.586 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
  1108. 2019-01-29 15:26:54.759 7f39c8984700 5 mon.b@1(electing).elector(3) election timer expired
  1109. 2019-01-29 15:26:54.759 7f39c8984700 10 mon.b@1(electing).elector(3) bump_epoch 3 to 4
  1110. 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 join_election
  1111. 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 _reset
  1112. 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
  1113. 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 timecheck_finish
  1114. 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_tick_stop
  1115. 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_interval_stop
  1116. 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 scrub_event_cancel
  1117. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing) e0 scrub_reset
  1118. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  1119. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
  1120. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
  1121. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
  1122. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1123. 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772105 lease_expire=0.000000 has v0 lc 0
  1124. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1125. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1126. 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772223 lease_expire=0.000000 has v0 lc 0
  1127. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1128. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
  1129. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  1130. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1131. 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772325 lease_expire=0.000000 has v0 lc 0
  1132. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1133. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  1134. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  1135. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  1136. 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  1137. 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- ?+0 0x56393abd5800
  1138. 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- 0x56393abd5800 con 0x563939c9cd80
  1139. 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(electing) e0 win_election epoch 4 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  1140. 2019-01-29 15:26:54.772 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
  1141. 2019-01-29 15:26:54.772 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2368543008 middle_crc = 0 data_crc = 0 sig = 5116208503718269665
  1142. 2019-01-29 15:26:54.772 7f39c397a700 20 Putting signature in client message(seq # 6): sig = 5116208503718269665
  1143. 2019-01-29 15:26:54.772 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 -- 0x56393b000d80 con 0x563939d41600
  1144. 2019-01-29 15:26:54.772 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 ==== 0+0+0 (0 0 0) 0x56393b000d80 con 0x563939d41600
  1145. 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) leader_init -- starting paxos recovery
  1146. 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) learned uncommitted 1 pn 100 (2196 bytes) from myself
  1147. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) get_new_proposal_number = 201
  1148. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) collect with pn 201
  1149. 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5b00
  1150. 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5b00 con 0x563939c9cd80
  1151. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) election_finished
  1152. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) _active - not active
  1153. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
  1154. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
  1155. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
  1156. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
  1157. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
  1158. 2019-01-29 15:26:54.777 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 1789744833 middle_crc = 0 data_crc = 0 sig = 2735914805292709925
  1159. 2019-01-29 15:26:54.777 7f39c397a700 20 Putting signature in client message(seq # 7): sig = 2735914805292709925
  1160. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1161. 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778696 lease_expire=0.000000 has v0 lc 0
  1162. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1163. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1164. 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778814 lease_expire=0.000000 has v0 lc 0
  1165. 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1166. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
  1167. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
  1168. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1169. 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.779003 lease_expire=0.000000 has v0 lc 0
  1170. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1171. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
  1172. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
  1173. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
  1174. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
  1175. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
  1176. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
  1177. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
  1178. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
  1179. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
  1180. 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_quorum_to_compatset_features
  1181. 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_monmap_to_compatset_features
  1182. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 timecheck_finish
  1183. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 resend_routed_requests
  1184. 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 register_cluster_logger - already registered
  1185. 2019-01-29 15:26:54.778 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
  1186. 2019-01-29 15:26:54.779 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  1187. 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1188. 2019-01-29 15:26:54.779 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.780024 lease_expire=0.000000 has v0 lc 0
  1189. 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1190. 2019-01-29 15:26:54.794 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 2077645696 middle_crc = 0 data_crc = 0 sig = 16008821289831722732
  1191. 2019-01-29 15:26:54.795 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 7 ==== paxos(last lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (2077645696 0 0) 0x56393abd5b00 con 0x563939c9cd80
  1192. 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1193. 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  1194. 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1195. 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
  1196. 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
  1197. 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  1198. 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
  1199. 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
  1200. 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) handle_last paxos(last lc 0 fc 0 pn 201 opn 0) v4
  1201. 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) store_state nothing to commit
  1202. 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) they accepted our pn, we now have 2 peons
  1203. 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) that's everyone. begin on old learned value
  1204. 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) begin for 1 2196 bytes
  1205. 2019-01-29 15:26:54.800 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) sending begin to mon.2
  1206. 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5200
  1207. 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5200 con 0x563939c9cd80
  1208. 2019-01-29 15:26:54.800 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3535318905 middle_crc = 0 data_crc = 0 sig = 6870884659653601128
  1209. 2019-01-29 15:26:54.800 7f39c397a700 20 Putting signature in client message(seq # 8): sig = 6870884659653601128
  1210. 2019-01-29 15:26:54.807 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3909173416 middle_crc = 0 data_crc = 0 sig = 17406004129815643634
  1211. 2019-01-29 15:26:54.807 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 8 ==== paxos(accept lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (3909173416 0 0) 0x56393abd5200 con 0x563939c9cd80
  1212. 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1213. 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  1214. 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1215. 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
  1216. 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
  1217. 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  1218. 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
  1219. 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
  1220. 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) handle_accept paxos(accept lc 0 fc 0 pn 201 opn 0) v4
  1221. 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) now 1,2 have accepted
  1222. 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) got majority, committing, done with update
  1223. 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) commit_start 1
  1224. 2019-01-29 15:26:54.813 7f39c2978700 20 mon.b@1(leader).paxos(paxos writing-previous c 0..0) commit_finish 1
  1225. 2019-01-29 15:26:54.813 7f39c2978700 10 mon.b@1(leader).paxos(paxos writing-previous c 1..1) sending commit to mon.2
  1226. 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393b00a000
  1227. 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- 0x56393b00a000 con 0x563939c9cd80
  1228. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader) e0 refresh_from_paxos
  1229. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
  1230. 2019-01-29 15:26:54.814 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 558359806 middle_crc = 0 data_crc = 0 sig = 15537979346010902130
  1231. 2019-01-29 15:26:54.814 7f39c397a700 20 Putting signature in client message(seq # 9): sig = 15537979346010902130
  1232. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
  1233. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
  1234. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos
  1235. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
  1236. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
  1237. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos version 1, my v 0
  1238. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 signaling that we need a bootstrap
  1239. 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos got 1
  1240. 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
  1241. 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).auth v0 update_from_paxos
  1242. 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
  1243. 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).config load_config got 0 keys
  1244. 2019-01-29 15:26:54.819 7f39c2978700 20 mon.b@1(leader).config load_config config map:
  1245. {
  1246. "global": {},
  1247. "by_type": {},
  1248. "by_id": {}
  1249. }
  1250.  
  1251. 2019-01-29 15:26:54.830 7f39c2978700 20 mgrc handle_mgr_map mgrmap(e 0) v1
  1252. 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Got map version 0
  1253. 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Active mgr is now
  1254. 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc reconnect No active mgr available yet
  1255. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
  1256. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat 0
  1257. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat check_subs
  1258. 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).mgrstat update_logger
  1259. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
  1260. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).health update_from_paxos
  1261. 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).health dump:{
  1262. "quorum_health": {},
  1263. "leader_health": {}
  1264. }
  1265.  
  1266. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
  1267. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
  1268. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
  1269. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
  1270. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
  1271. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
  1272. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
  1273. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
  1274. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
  1275. 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
  1276. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader).paxos(paxos refresh c 1..1) doing requested bootstrap
  1277. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 bootstrap
  1278. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 sync_reset_requester
  1279. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 unregister_cluster_logger
  1280. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 cancel_probe_timeout (none scheduled)
  1281. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  1282. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 _reset
  1283. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
  1284. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 timecheck_finish
  1285. 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_tick_stop
  1286. 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_interval_stop
  1287. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_event_cancel
  1288. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_reset
  1289. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxos(paxos refresh c 1..1) restart -- canceling timeouts
  1290. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
  1291. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
  1292. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
  1293. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1294. 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832435 lease_expire=0.000000 has v0 lc 1
  1295. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1296. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1297. 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832553 lease_expire=0.000000 has v0 lc 1
  1298. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1299. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1300. 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832654 lease_expire=0.000000 has v0 lc 1
  1301. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1302. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
  1303. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
  1304. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1305. 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832778 lease_expire=0.000000 has v0 lc 1
  1306. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1307. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
  1308. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
  1309. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(health 0..0) restart
  1310. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(config 0..0) restart
  1311. 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
  1312. 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 reset_probe_timeout 0x56393affc480 after 2 seconds
  1313. 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 probing other monitors
  1314. 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affe840
  1315. 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affe840 con 0x563939c9c900
  1316. 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affeb00
  1317. 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affeb00 con 0x563939c9cd80
  1318. 2019-01-29 15:26:54.832 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 3184037569713953523
  1319. 2019-01-29 15:26:54.832 7f39c397a700 20 Putting signature in client message(seq # 10): sig = 3184037569713953523
  1320. 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 8540967715760223295
  1321. 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 9 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393affeb00 con 0x563939c9cd80
  1322. 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1323. 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
  1324. 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1325. 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
  1326. 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
  1327. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
  1328. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
  1329. 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 800172242 middle_crc = 0 data_crc = 0 sig = 7787347958796391197
  1330. 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 1 lc 1 ) new) v6 -- 0x56393affe2c0 con 0x563939c9cd80
  1331. 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 443766928 middle_crc = 0 data_crc = 0 sig = 3146968082825100645
  1332. 2019-01-29 15:26:54.835 7f39c397a700 20 Putting signature in client message(seq # 11): sig = 3146968082825100645
  1333. 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 10 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6 ==== 430+0+0 (800172242 0 0) 0x56393affe000 con 0x563939c9cd80
  1334. 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1335. 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
  1336. 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1337. 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
  1338. 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
  1339. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
  1340. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
  1341. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 monmap is e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  1342. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 peer name is c
  1343. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 mon.c is outside the quorum
  1344. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 outside_quorum now b,c, need 2
  1345. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 that's enough to form a new quorum, calling election
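The decision at this point — `outside_quorum now b,c, need 2` — is a simple majority check over the monmap: with 3 monitors known, 2 probing monitors that agree there is no quorum are enough to call an election. A minimal sketch (helper names are hypothetical, not Ceph's actual code):

```python
def quorum_needed(monmap_size: int) -> int:
    # A Paxos quorum is a strict majority of the monitors in the monmap.
    return monmap_size // 2 + 1

def enough_to_elect(outside_quorum: set, monmap_size: int) -> bool:
    # Enough monitors found each other outside any quorum: start an election.
    return len(outside_quorum) >= quorum_needed(monmap_size)

# From the log: monmap e1 has mons a, b, c; probes reached only b and c.
assert quorum_needed(3) == 2
assert enough_to_elect({"b", "c"}, 3)
```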
  1346. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 start_election
  1347. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 _reset
  1348. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 cancel_probe_timeout 0x56393affc480
  1349. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 timecheck_finish
  1350. 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_tick_stop
  1351. 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_interval_stop
  1352. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_event_cancel
  1353. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_reset
  1354. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
  1355. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
  1356. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
  1357. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
  1358. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1359. 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836598 lease_expire=0.000000 has v0 lc 1
  1360. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1361. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1362. 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836628 lease_expire=0.000000 has v0 lc 1
  1363. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1364. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1365. 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836663 lease_expire=0.000000 has v0 lc 1
  1366. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
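Every queued log/auth message above hits the same gate: `is_readable = 0`, because the monitor holds no read lease (`lease_expire=0.000000`) while it is still probing or electing, so each PaxosService parks the message as "waiting for paxos -> readable". A simplified sketch of that predicate (the real check involves more state, such as quorum membership; names here are assumptions):

```python
def is_readable(now: float, lease_expire: float, last_committed: int, active: bool) -> bool:
    # Readable only with at least one committed version, an active Paxos
    # machine, and an unexpired read lease granted by the leader.
    return last_committed > 0 and active and now < lease_expire

# From the log: lc 1, but lease_expire=0.0 and the mon is still probing,
# so reads are refused and dispatch is deferred.
assert is_readable(54.8366, 0.0, 1, False) is False
```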
  1367. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
  1368. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
  1369. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1370. 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836708 lease_expire=0.000000 has v0 lc 1
  1371. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1372. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
  1373. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
  1374. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
  1375. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
  1376. 2019-01-29 15:26:54.835 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
  1377. 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 -- 0x56393b000240 con 0x563939d41600
  1378. 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(electing).elector(4) start -- can i be leader?
  1379. 2019-01-29 15:26:54.835 7f39c617f700 1 mon.b@1(electing).elector(4) init, last seen epoch 4
  1380. 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(electing).elector(4) bump_epoch 4 to 5
  1381. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 join_election
  1382. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 _reset
  1383. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
  1384. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 timecheck_finish
  1385. 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_tick_stop
  1386. 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_interval_stop
  1387. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_event_cancel
  1388. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_reset
  1389. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
  1390. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
  1391. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
  1392. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
  1393. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1394. 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840738 lease_expire=0.000000 has v0 lc 1
  1395. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1396. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1397. 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840832 lease_expire=0.000000 has v0 lc 1
  1398. 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1399. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1400. 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840935 lease_expire=0.000000 has v0 lc 1
  1401. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1402. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
  1403. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  1404. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1405. 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.841040 lease_expire=0.000000 has v0 lc 1
  1406. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1407. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  1408. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  1409. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  1410. 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  1411. 2019-01-29 15:26:54.853 7f39c617f700 -1 mon.b@1(electing) e1 devname dm-0
  1412. 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00a900
  1413. 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00a900 con 0x563939c9c900
  1414. 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00ac00
  1415. 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00ac00 con 0x563939c9cd80
  1416. 2019-01-29 15:26:54.854 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12969980896879027673
  1417. 2019-01-29 15:26:54.854 7f39c397a700 20 Putting signature in client message(seq # 12): sig = 12969980896879027673
  1418. 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 ==== 0+0+0 (0 0 0) 0x56393b000240 con 0x563939d41600
  1419. 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
  1420. 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
  1421. 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1422. 2019-01-29 15:26:54.854 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.855846 lease_expire=0.000000 has v0 lc 1
  1423. 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1424. 2019-01-29 15:26:54.855 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12994639518338884118
  1425. 2019-01-29 15:26:54.855 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 11 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 ==== 450+0+0 (1196600255 0 0) 0x56393b00a000 con 0x563939c9cd80
  1426. 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1427. 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
  1428. 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1429. 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
  1430. 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
  1431. 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  1432. 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
  1433. 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
  1434. 2019-01-29 15:26:54.855 7f39c617f700 5 mon.b@1(electing).elector(5) handle_propose from mon.2
  1435. 2019-01-29 15:26:54.855 7f39c617f700 10 mon.b@1(electing).elector(5) handle_propose required features 549755813888 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
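`handle_propose` only accepts the proposal if the peer advertises every required feature bit; with the raw values above this is a bitmask superset test. A sketch using the exact numbers from the log (helper name is hypothetical):

```python
def peer_has_required_features(required: int, peer: int) -> bool:
    # The peer qualifies when every required feature bit is set in its mask.
    return (peer & required) == required

REQUIRED = 549755813888      # "required features" printed above
PEER = 4611087854031667199   # "peer features" printed above
assert peer_has_required_features(REQUIRED, PEER)  # so the propose is handled
```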
  1436. 2019-01-29 15:26:54.857 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 2042675853 middle_crc = 0 data_crc = 0 sig = 8776967150872131881
  1437. 2019-01-29 15:26:54.857 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 12 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 5) v7 ==== 1190+0+0 (2042675853 0 0) 0x56393b00ac00 con 0x563939c9cd80
  1438. 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  1439. 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
  1440. 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  1441. 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
  1442. 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
  1443. 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  1444. 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
  1445. 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
  1446. 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) handle_ack from mon.2
  1447. 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
  1448. 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 36
  1449. 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
  1450. 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).handle_message read tag failed
  1451. 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).fault on lossy channel, failing
  1452. 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x563939c9fa80 v2:10.215.99.125:53002/4155176800
  1453. 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
  1454. 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1200 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
  1455. 2019-01-29 15:26:54.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026000 0x56393ab9be00 :53042 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53042/0 addrs are 145
  1456. 2019-01-29 15:26:54.869 7f39c3179700 10 In get_auth_session_handler for protocol 0
  1457. 2019-01-29 15:26:54.870 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001b00 con 0x56393b026000
  1458. 2019-01-29 15:26:54.870 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393abc1d40 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  1459. 2019-01-29 15:26:54.870 7f39c617f700 20 mon.b@1(electing) e1 caps
  1460. 2019-01-29 15:26:54.870 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
  1461. 2019-01-29 15:26:56.244 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  1462. 2019-01-29 15:26:56.244 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  1463. 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 30
  1464. 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
  1465. 2019-01-29 15:26:57.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).handle_message read tag failed
  1466. 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).fault on lossy channel, failing
  1467. 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x56393b026000 v2:10.215.99.125:53002/4155176800
  1468. 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
  1469. 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1d40 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
  1470. 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026480 0x56393ab9d600 :53056 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53056/0 addrs are 145
  1471. 2019-01-29 15:26:57.871 7f39c3179700 10 In get_auth_session_handler for protocol 0
  1472. 2019-01-29 15:26:57.871 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001d40 con 0x56393b026480
  1473. 2019-01-29 15:26:57.872 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393b000480 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  1474. 2019-01-29 15:26:57.872 7f39c617f700 20 mon.b@1(electing) e1 caps
  1475. 2019-01-29 15:26:57.872 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
  1476. 2019-01-29 15:26:59.586 7f39c8984700 11 mon.b@1(electing) e1 tick
  1477. 2019-01-29 15:26:59.586 7f39c8984700 20 mon.b@1(electing) e1 sync_trim_providers
  1478. 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing) e1 session closed, dropping 0x56393b001b00
  1479. 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
  1480. 2019-01-29 15:26:59.586 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.587891 lease_expire=0.000000 has v0 lc 1
  1481. 2019-01-29 15:26:59.587 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1482. 2019-01-29 15:26:59.854 7f39c8984700 5 mon.b@1(electing).elector(5) election timer expired
  1483. 2019-01-29 15:26:59.854 7f39c8984700 10 mon.b@1(electing).elector(5) bump_epoch 5 to 6
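The two bumps in this log — 4 to 5 when the election starts and 5 to 6 on victory — follow the elector's epoch convention: odd epochs mean an election is in progress, even epochs mean an established quorum. A sketch of that convention (hypothetical helper names):

```python
def bump_for_election(epoch: int) -> int:
    # Move to the next odd epoch: an election is now in progress.
    return epoch + 1 if epoch % 2 == 0 else epoch + 2

def bump_for_victory(epoch: int) -> int:
    # Move to the next even epoch: a quorum has been established.
    return epoch + 1 if epoch % 2 == 1 else epoch + 2

assert bump_for_election(4) == 5   # matches "bump_epoch 4 to 5"
assert bump_for_victory(5) == 6    # matches "bump_epoch 5 to 6"
```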
  1484. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 join_election
  1485. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 _reset
  1486. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
  1487. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 timecheck_finish
  1488. 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_tick_stop
  1489. 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_interval_stop
  1490. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_event_cancel
  1491. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_reset
  1492. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
  1493. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
  1494. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
  1495. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
  1496. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1497. 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867221 lease_expire=0.000000 has v0 lc 1
  1498. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1499. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1500. 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867315 lease_expire=0.000000 has v0 lc 1
  1501. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1502. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1503. 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867394 lease_expire=0.000000 has v0 lc 1
  1504. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1505. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1506. 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867477 lease_expire=0.000000 has v0 lc 1
  1507. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1508. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
  1509. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  1510. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  1511. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) discarding message from disconnected client client.? v2:10.215.99.125:53002/4155176800 auth(proto 0 30 bytes epoch 0) v1
  1512. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
  1513. 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867654 lease_expire=0.000000 has v0 lc 1
  1514. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1515. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  1516. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  1517. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  1518. 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  1519. 2019-01-29 15:26:59.866 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- ?+0 0x56393b00b800
  1520. 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- 0x56393b00b800 con 0x563939c9cd80
  1521. 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(electing) e1 win_election epoch 6 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  1522. 2019-01-29 15:26:59.867 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
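mon.b wins with quorum 1,2 because mon.a (rank 0) never answered — its connection kept being refused earlier in the log — and once the election timer expires, the lowest rank among the monitors that took part wins. A sketch of that rule (hypothetical helper):

```python
def election_winner(participating_ranks: set) -> int:
    # Classic Ceph election: lowest rank among proposer plus ackers leads.
    return min(participating_ranks)

# mon.a (rank 0) was unreachable, so only mon.b (1) and mon.c (2) took part.
assert election_winner({1, 2}) == 1   # mon.b is the new leader
```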
  1523. 2019-01-29 15:26:59.867 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 178927411 middle_crc = 0 data_crc = 0 sig = 794234354135278605
  1524. 2019-01-29 15:26:59.867 7f39c397a700 20 Putting signature in client message(seq # 13): sig = 794234354135278605
  1525. 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 -- 0x56393b000fc0 con 0x563939d41600
  1526. 2019-01-29 15:26:59.867 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 ==== 0+0+0 (0 0 0) 0x56393b000fc0 con 0x563939d41600
  1527. 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) leader_init -- starting paxos recovery
  1528. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) get_new_proposal_number = 301
  1529. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) collect with pn 301
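`get_new_proposal_number = 301` reflects proposal numbers that encode the proposer's rank in the low digits: the previous pn is rounded up to the next multiple of 100 and the monitor's rank added, which keeps pns unique across monitors. A sketch consistent with this log (the prior pn in the 2xx range is an assumption):

```python
def new_proposal_number(last_pn: int, rank: int) -> int:
    # Round up to the next multiple of 100, then add the proposer's rank so
    # two monitors can never generate the same proposal number.
    return (last_pn // 100 + 1) * 100 + rank

# mon.b has rank 1; a previously seen pn of e.g. 200 yields 301, as logged.
assert new_proposal_number(200, 1) == 301
```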
  1530. 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- ?+0 0x56393b00bb00
  1531. 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- 0x56393b00bb00 con 0x563939c9cd80
  1532. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) election_finished
  1533. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active - not active
  1534. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
  1535. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
  1536. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
  1537. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
  1538. 2019-01-29 15:26:59.872 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 4116324937 middle_crc = 0 data_crc = 0 sig = 15318303623903287436
  1539. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
  1540. 2019-01-29 15:26:59.872 7f39c397a700 20 Putting signature in client message(seq # 14): sig = 15318303623903287436
  1541. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1542. 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873644 lease_expire=0.000000 has v0 lc 1
  1543. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1544. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1545. 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873776 lease_expire=0.000000 has v0 lc 1
  1546. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1547. 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1548. 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873916 lease_expire=0.000000 has v0 lc 1
  1549. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1550. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  1551. 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874034 lease_expire=0.000000 has v0 lc 1
  1552. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  1553. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
  1554. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
  1555. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
  1556. 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874163 lease_expire=0.000000 has v0 lc 1
  1557. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  1558. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
  1559. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
  1560. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
  1561. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
  1562. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
  1563. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
  1564. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
  1565. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
  1566. 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_quorum_to_compatset_features
2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_monmap_to_compatset_features
2019-01-29 15:26:59.873 7f39c8984700 1 mon.b@1(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout}
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 calc_quorum_requirements required_features 2449958747315912708
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_finish
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 resend_routed_requests
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 register_cluster_logger
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round curr 0
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round new 1
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck start timecheck epoch 6 round 1
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck send time_check( ping e 6 r 1 ) v1 to mon.2
2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- ?+0 0x56393b000b40
2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- 0x56393b000b40 con 0x563939c9cd80
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round setting up next event
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_reset_event delay 300 rounds_since_clean 0
2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_start
2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_stop
2019-01-29 15:26:59.879 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 72719240 middle_crc = 0 data_crc = 0 sig = 11523137460518662160
2019-01-29 15:26:59.879 7f39c397a700 20 Putting signature in client message(seq # 15): sig = 11523137460518662160
2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 scrub_event_start
2019-01-29 15:26:59.879 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
2019-01-29 15:26:59.880 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000fc0 log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
2019-01-29 15:26:59.880 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.881040 lease_expire=0.000000 has v0 lc 1
2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 4120674357 middle_crc = 0 data_crc = 0 sig = 15016538117729764366
2019-01-29 15:26:59.889 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 13 ==== paxos(last lc 1 fc 1 pn 301 opn 0) v4 ==== 84+0+0 (4120674357 0 0) 0x56393b00b800 con 0x563939c9cd80
2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 1460593560 middle_crc = 0 data_crc = 0 sig = 3747477798607265139
2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:59.890 7f39c617f700 20 allow all
2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
2019-01-29 15:26:59.890 7f39c617f700 20 allow all
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) handle_last paxos(last lc 1 fc 1 pn 301 opn 0) v4
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) store_state nothing to commit
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) they accepted our pn, we now have 2 peons
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) that's everyone. active!
2019-01-29 15:26:59.890 7f39c617f700 7 mon.b@1(leader).paxos(paxos recovering c 1..1) extend_lease now+5 (2019-01-29 15:27:04.891100)
2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- ?+0 0x56393b00af00
2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- 0x56393b00af00 con 0x563939c9cd80
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader) e1 refresh_from_paxos
2019-01-29 15:26:59.890 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 3892865509 middle_crc = 0 data_crc = 0 sig = 11834348323597999916
2019-01-29 15:26:59.890 7f39c397a700 20 Putting signature in client message(seq # 16): sig = 11834348323597999916
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).auth v0 update_from_paxos
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).config load_config got 0 keys
2019-01-29 15:26:59.891 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 3498592039 middle_crc = 0 data_crc = 0 sig = 6841074444368600247
2019-01-29 15:26:59.891 7f39c617f700 20 mon.b@1(leader).config load_config config map:
{
"global": {},
"by_type": {},
"by_id": {}
}

2019-01-29 15:26:59.897 7f39c617f700 20 mgrc handle_mgr_map mgrmap(e 0) v1
2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Got map version 0
2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Active mgr is now
2019-01-29 15:26:59.897 7f39c617f700 4 mgrc reconnect No active mgr available yet
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat 0
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat check_subs
2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).mgrstat update_logger
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).health update_from_paxos
2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).health dump:{
"quorum_health": {},
"leader_health": {}
}

2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) finish_round
2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).paxos(paxos active c 1..1) finish_round waiting_for_acting
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active
2019-01-29 15:26:59.897 7f39c617f700 7 mon.b@1(leader).paxosservice(monmap 1..1) _active creating new pending
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 create_pending monmap epoch 2
2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 noting that i was, once, part of an active quorum.
2019-01-29 15:26:59.902 7f39c617f700 0 log_channel(cluster) log [DBG] : monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
2019-01-29 15:26:59.902 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 6 at 2019-01-29 15:26:59.903091) v1 -- 0x56393b000900 con 0x563939d41600
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).monmap v1 apply_mon_features features match current pending: mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active
2019-01-29 15:26:59.902 7f39c617f700 7 mon.b@1(leader).paxosservice(mdsmap 0..0) _active creating new pending
2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) is_readable = 1 - now=2019-01-29 15:26:59.903204 lease_expire=2019-01-29 15:27:04.891100 has v0 lc 1
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_pending e1
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_initial
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) propose_pending
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 encode_pending e1
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader) e1 log_health updated 0 previous 0
2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) queue_pending_finisher 0x563939ab8950
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) trigger_propose active, proposing now
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) propose_pending 2 2867 bytes
2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) begin for 2 2867 bytes
2019-01-29 15:26:59.906 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) sending begin to mon.2
2019-01-29 15:26:59.906 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- ?+0 0x56393b00b200
2019-01-29 15:26:59.907 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- 0x56393b00b200 con 0x563939c9cd80
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active
2019-01-29 15:26:59.907 7f39c617f700 7 mon.b@1(leader).paxosservice(osdmap 0..0) _active creating new pending
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_pending e 1
2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting full_ratio = 0.99
2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting nearfull_ratio = 0.99
2019-01-29 15:26:59.907 7f39c397a700 10 _calc_signature seq 17 front_crc_ = 3823202930 middle_crc = 0 data_crc = 0 sig = 3998564941522667071
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_initial for 3b02750c-f104-4301-aa14-258d2b37f104
2019-01-29 15:26:59.907 7f39c397a700 20 Putting signature in client message(seq # 17): sig = 3998564941522667071
2019-01-29 15:26:59.907 7f39c617f700 20 mon.b@1(leader).osd e0 full crc 3491248425
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) propose_pending
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending e 1
2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 do_prune osdmap full prune enabled
2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pools queued
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs queue remaining: 0 pools
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first mimic+ epoch
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first nautilus+ epoch
2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
2019-01-29 15:26:59.908 7f39c617f700 20 mon.b@1(leader).osd e0 full_crc 3491248425 inc_crc 3830871662
2019-01-29 15:26:59.911 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 2534419215 middle_crc = 0 data_crc = 0 sig = 4957134661093499494
2019-01-29 15:26:59.940 7f39c617f700 -1 *** Caught signal (Segmentation fault) **
in thread 7f39c617f700 thread_name:ms_dispatch

ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev)
1: (()+0x13dd820) [0x5639382a1820]
2: (()+0x12080) [0x7f39d1683080]
3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7f39d6284cfb]
4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x56393811017b]
5: (PaxosService::propose_pending()+0x45a) [0x5639380fcb24]
6: (PaxosService::_active()+0x62b) [0x5639380fdba7]
7: (()+0x12394e9) [0x5639380fd4e9]
8: (Context::complete(int)+0x27) [0x563937e12037]
9: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0x2c8) [0x563937e3642c]
10: (Paxos::finish_round()+0x2ed) [0x5639380eb4e9]
11: (Paxos::handle_last(boost::intrusive_ptr<MonOpRequest>)+0x17ae) [0x5639380e5ac8]
12: (Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x392) [0x5639380ef7cc]
13: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x1119) [0x563937de61d9]
14: (Monitor::_ms_dispatch(Message*)+0xec6) [0x563937de4d9e]
15: (Monitor::ms_dispatch(Message*)+0x38) [0x563937e20d04]
16: (Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x5c) [0x563937e142c2]
17: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0xe9) [0x7f39d5f6f247]
18: (DispatchQueue::entry()+0x61c) [0x7f39d5f6dd3c]
19: (DispatchQueue::DispatchThread::entry()+0x1c) [0x7f39d60cf7f4]
20: (Thread::entry_wrapper()+0x78) [0x7f39d5d4cb4a]
21: (Thread::_entry_func(void*)+0x18) [0x7f39d5d4cac8]
22: (()+0x7594) [0x7f39d1678594]
23: (clone()+0x3f) [0x7f39d041bf4f]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-1424> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command assert hook 0x563939ab8540
-1423> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command abort hook 0x563939ab8540
-1422> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perfcounters_dump hook 0x563939ab8540
-1421> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command 1 hook 0x563939ab8540
-1420> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf dump hook 0x563939ab8540
-1419> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perfcounters_schema hook 0x563939ab8540
-1418> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf histogram dump hook 0x563939ab8540
-1417> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command 2 hook 0x563939ab8540
-1416> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf schema hook 0x563939ab8540
-1415> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf histogram schema hook 0x563939ab8540
-1414> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf reset hook 0x563939ab8540
-1413> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config show hook 0x563939ab8540
-1412> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config help hook 0x563939ab8540
-1411> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config set hook 0x563939ab8540
-1410> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config unset hook 0x563939ab8540
-1409> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config get hook 0x563939ab8540
-1408> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config diff hook 0x563939ab8540
-1407> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config diff get hook 0x563939ab8540
-1406> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log flush hook 0x563939ab8540
-1405> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log dump hook 0x563939ab8540
-1404> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log reopen hook 0x563939ab8540
-1403> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_mempools hook 0x56393a916068
-1402> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep start
-1401> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 0
-1400> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 1
-1399> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 2
-1398> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 3
-1397> 2019-01-29 15:26:44.507 7f39deeb51c0 0 ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev), process ceph-mon, pid 613920
-1396> 2019-01-29 15:26:44.526 7f39deeb51c0 1 lockdep using id 4
-1395> 2019-01-29 15:26:44.526 7f39deeb51c0 1 lockdep using id 5
-1394> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 6
-1393> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 7
-1392> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) init /tmp/ceph-asok.we8t9p/mon.b.asok
-1391> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) bind_and_listen /tmp/ceph-asok.we8t9p/mon.b.asok
-1390> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command 0 hook 0x563939ac2ad0
-1389> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command version hook 0x563939ac2ad0
-1388> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command git_version hook 0x563939ac2ad0
-1387> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command help hook 0x563939ab81b0
-1386> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command get_command_descriptions hook 0x563939ab81f0
-1385> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 8
-1384> 2019-01-29 15:26:44.527 7f39cca47700 5 asok(0x563939e2a000) entry start
-1383> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 9
-1382> 2019-01-29 15:26:44.545 7f39deeb51c0 0 load: jerasure load: lrc load: isa
-1381> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 10
-1380> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 11
-1379> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 12
-1378> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 13
-1377> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
-1376> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
-1375> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
-1374> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
-1373> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
-1372> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
-1371> 2019-01-29 15:26:44.546 7f39deeb51c0 1 rocksdb: do_open column families: [default]
-1370> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: RocksDB version: 5.17.2

-1369> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
-1368> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compile date Jan 28 2019
-1367> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: DB SUMMARY

-1366> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: CURRENT file: CURRENT

-1365> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: IDENTITY file: IDENTITY

-1364> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes

-1363> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:

-1362> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db: 000003.log size: 1091 ;

-1361> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.error_if_exists: 0
-1360> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_if_missing: 0
-1359> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.paranoid_checks: 1
-1358> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.env: 0x563938d121a0
-1357> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.info_log: 0x563939e69f40
-1356> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_file_opening_threads: 16
-1355> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.statistics: (nil)
-1354> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_fsync: 0
-1353> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_log_file_size: 0
-1352> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
-1351> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.log_file_time_to_roll: 0
-1350> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.keep_log_file_num: 1000
-1349> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.recycle_log_file_num: 0
-1348> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_fallocate: 1
-1347> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_reads: 0
-1346> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_writes: 0
-1345> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_reads: 0
-1344> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
-1343> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_missing_column_families: 0
-1342> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_log_dir:
-1341> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
-1340> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.table_cache_numshardbits: 6
-1339> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_subcompactions: 1
-1338> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_flushes: -1
-1337> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_ttl_seconds: 0
-1336> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_size_limit_MB: 0
-1335> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
-1334> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.is_fd_close_on_exec: 1
-1333> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.advise_random_on_open: 1
-1332> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_write_buffer_size: 0
-1331> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_buffer_manager: 0x563939e6a720
-1330> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
-1329> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
-1328> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
-1327> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_adaptive_mutex: 0
-1326> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.rate_limiter: (nil)
-1325> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
-1324> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_recovery_mode: 2
-1323> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_thread_tracking: 0
-1322> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_pipelined_write: 0
-1321> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
-1320> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
-1319> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
-1318> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
-1317> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.row_cache: None
-1316> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_filter: None
-1315> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
-1314> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_ingest_behind: 0
-1313> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.preserve_deletes: 0
-1312> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.two_write_queues: 0
-1311> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manual_wal_flush: 0
-1310> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_jobs: 2
-1309> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_compactions: -1
-1308> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
-1307> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
-1306> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delayed_write_rate : 16777216
-1305> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_total_wal_size: 0
-1304> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
-1303> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.stats_dump_period_sec: 600
-1302> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_open_files: -1
  1867. -1301> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.bytes_per_sync: 0
  1868. -1300> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_bytes_per_sync: 0
  1869. -1299> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.compaction_readahead_size: 0
  1870. -1298> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compression algorithms supported:
  1871. -1297> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
  1872. -1296> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTD supported: 0
  1873. -1295> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kXpressCompression supported: 0
  1874. -1294> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4HCCompression supported: 1
  1875. -1293> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4Compression supported: 1
  1876. -1292> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kBZip2Compression supported: 0
  1877. -1291> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZlibCompression supported: 1
  1878. -1290> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kSnappyCompression supported: 1
  1879. -1289> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Fast CRC32 supported: Supported on x86
  1880. -1288> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
  1881.  
  1882. -1287> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
  1883.  
  1884. -1286> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
  1885. -1285> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.merge_operator:
  1886. -1284> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter: None
  1887. -1283> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter_factory: None
  1888. -1282> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_factory: SkipListFactory
  1889. -1281> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_factory: BlockBasedTable
  1890. -1280> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563939ac2ab0)
  1891. cache_index_and_filter_blocks: 1
  1892. cache_index_and_filter_blocks_with_high_priority: 1
  1893. pin_l0_filter_and_index_blocks_in_cache: 1
  1894. pin_top_level_index_and_filter: 1
  1895. index_type: 0
  1896. hash_index_allow_collision: 1
  1897. checksum: 1
  1898. no_block_cache: 0
  1899. block_cache: 0x56393a9752a0
  1900. block_cache_name: BinnedLRUCache
  1901. block_cache_options:
  1902. capacity : 536870912
  1903. num_shard_bits : 4
  1904. strict_capacity_limit : 0
  1905. high_pri_pool_ratio: 0.000
  1906. block_cache_compressed: (nil)
  1907. persistent_cache: (nil)
  1908. block_size: 4096
  1909. block_size_deviation: 10
  1910. block_restart_interval: 16
  1911. index_block_restart_interval: 1
  1912. metadata_block_size: 4096
  1913. partition_filters: 0
  1914. use_delta_encoding: 1
  1915. filter_policy: rocksdb.BuiltinBloomFilter
  1916. whole_key_filtering: 1
  1917. verify_compression: 0
  1918. read_amp_bytes_per_bit: 0
  1919. format_version: 2
  1920. enable_index_compression: 1
  1921. block_align: 0
  1922.  
  1923. -1279> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.write_buffer_size: 33554432
  1924. -1278> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number: 2
  1925. -1277> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression: NoCompression
  1926. -1276> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression: Disabled
  1927. -1275> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.prefix_extractor: nullptr
  1928. -1274> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
  1929. -1273> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.num_levels: 7
  1930. -1272> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
  1931. -1271> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
  1932. -1270> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
  1933. -1269> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
  1934. -1268> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
  1935. -1267> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
  1936. -1266> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  1937. -1265> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
  1938. -1264> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.window_bits: -14
  1939. -1263> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.level: 32767
  1940. -1262> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.strategy: 0
  1941. -1261> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
  1942. -1260> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
  1943. -1259> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.enabled: false
  1944. -1258> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
  1945. -1257> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
  1946. -1256> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
  1947. -1255> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_base: 67108864
  1948. -1254> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_multiplier: 1
  1949. -1253> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
  1950. -1252> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
  1951. -1251> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
  1952. -1250> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
  1953. -1249> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
  1954. -1248> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
  1955. -1247> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
  1956. -1246> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
  1957. -1245> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
  1958. -1244> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
  1959. -1243> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
  1960. -1242> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
  1961. -1241> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.arena_block_size: 4194304
  1962. -1240> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
  1963. -1239> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
  1964. -1238> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
  1965. -1237> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.disable_auto_compactions: 0
  1966. -1236> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
  1967. -1235> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
  1968. -1234> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
  1969. -1233> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
  1970. -1232> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
  1971. -1231> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
  1972. -1230> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
  1973. -1229> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
  1974. -1228> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
  1975. -1227> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
  1976. -1226> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
  1977. -1225> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_properties_collectors:
  1978. -1224> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_support: 0
  1979. -1223> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_num_locks: 10000
  1980. -1222> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
  1981. -1221> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_huge_page_size: 0
  1982. -1220> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bloom_locality: 0
  1983. -1219> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_successive_merges: 0
  1984. -1218> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.optimize_filters_for_hits: 0
  1985. -1217> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.paranoid_file_checks: 0
  1986. -1216> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.force_consistency_checks: 0
  1987. -1215> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.report_bg_io_stats: 0
  1988. -1214> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.ttl: 0
  1989. -1213> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
  1990.  
  1991. -1212> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
  1992.  
  1993. -1211> 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804548935, "job": 1, "event": "recovery_started", "log_files": [3]}
  1994. -1210> 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
  1995. -1209> 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804555547, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1849, "table_properties": {"data_size": 1103, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 980, "raw_average_value_size": 196, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
  1996. -1208> 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
  1997.  
  1998. -1207> 2019-01-29 15:26:44.563 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804564214, "job": 1, "event": "recovery_finished"}
  1999. -1206> 2019-01-29 15:26:44.574 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x56393a906800
  2000. -1205> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 14
  2001. -1204> 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap
  2002. -1203> 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap found mkfs monmap
  2003. -1202> 2019-01-29 15:26:44.574 7f39deeb51c0 10 main monmap:
  2004. {
  2005.     "epoch": 0,
  2006.     "fsid": "3b02750c-f104-4301-aa14-258d2b37f104",
  2007.     "modified": "2019-01-29 15:26:43.871480",
  2008.     "created": "2019-01-29 15:26:43.871480",
  2009.     "features": {
  2010.         "persistent": [],
  2011.         "optional": []
  2012.     },
  2013.     "mons": [
  2014.         {
  2015.             "rank": 0,
  2016.             "name": "a",
  2017.             "public_addrs": {
  2018.                 "addrvec": [
  2019.                     {
  2020.                         "type": "v2",
  2021.                         "addr": "10.215.99.125:40363",
  2022.                         "nonce": 0
  2023.                     },
  2024.                     {
  2025.                         "type": "v1",
  2026.                         "addr": "10.215.99.125:40364",
  2027.                         "nonce": 0
  2028.                     }
  2029.                 ]
  2030.             },
  2031.             "addr": "10.215.99.125:40364/0",
  2032.             "public_addr": "10.215.99.125:40364/0"
  2033.         },
  2034.         {
  2035.             "rank": 1,
  2036.             "name": "b",
  2037.             "public_addrs": {
  2038.                 "addrvec": [
  2039.                     {
  2040.                         "type": "v2",
  2041.                         "addr": "10.215.99.125:40365",
  2042.                         "nonce": 0
  2043.                     },
  2044.                     {
  2045.                         "type": "v1",
  2046.                         "addr": "10.215.99.125:40366",
  2047.                         "nonce": 0
  2048.                     }
  2049.                 ]
  2050.             },
  2051.             "addr": "10.215.99.125:40366/0",
  2052.             "public_addr": "10.215.99.125:40366/0"
  2053.         },
  2054.         {
  2055.             "rank": 2,
  2056.             "name": "c",
  2057.             "public_addrs": {
  2058.                 "addrvec": [
  2059.                     {
  2060.                         "type": "v2",
  2061.                         "addr": "10.215.99.125:40367",
  2062.                         "nonce": 0
  2063.                     },
  2064.                     {
  2065.                         "type": "v1",
  2066.                         "addr": "10.215.99.125:40368",
  2067.                         "nonce": 0
  2068.                     }
  2069.                 ]
  2070.             },
  2071.             "addr": "10.215.99.125:40368/0",
  2072.             "public_addr": "10.215.99.125:40368/0"
  2073.         }
  2074.     ]
  2075. }
  2076.  
  2077. -1201> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 15
  2078. -1200> 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
  2079. -1199> 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
  2080. -1198> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 16
  2081. -1197> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 17
  2082. -1196> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 18
  2083. -1195> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 19
  2084. -1194> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 20
  2085. -1193> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 21
  2086. -1192> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 22
  2087. -1191> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 23
  2088. -1190> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 24
  2089. -1189> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 25
  2090. -1188> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 26
  2091. -1187> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 27
  2092. -1186> 2019-01-29 15:26:44.575 7f39deeb51c0 0 starting mon.b rank 1 at public addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] at bind addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
  2093. -1185> 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] learned_addr learned my addr [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] (peer_addr_for_me v2:10.215.99.125:40365/0)
  2094. -1184> 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] _finish_bind bind my_addrs is [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0]
  2095. -1183> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  2096. -1182> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  2097. -1181> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 28
  2098. -1180> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 29
  2099. -1179> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 30
  2100. -1178> 2019-01-29 15:26:44.576 7f39deeb51c0 0 starting mon.b rank 1 at [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
  2101. -1177> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 31
  2102. -1176> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 32
  2103. -1175> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 33
  2104. -1174> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 34
  2105. -1173> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 35
  2106. -1172> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 36
  2107. -1171> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  2108. -1170> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
  2109. -1169> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 37
  2110. -1168> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 38
  2111. -1167> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 39
  2112. -1166> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 40
  2113. -1165> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 41
  2114. -1164> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 42
  2115. -1163> 2019-01-29 15:26:44.576 7f39deeb51c0 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
  2116. -1162> 2019-01-29 15:26:44.576 7f39deeb51c0 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
  2117. -1161> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 43
  2118. -1160> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 44
  2119. -1159> 2019-01-29 15:26:44.577 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  2120. -1158> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  2121. -1157> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2122. -1156> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2123. -1155> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2124. -1154> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
  2125. -1153> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2126. -1152> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2127. -1151> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2128. -1150> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2129. -1149> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2130. -1148> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2131. -1147> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2132. -1146> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
  2133. -1145> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2134. -1144> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2135. -1143> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2136. -1142> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2137. -1141> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2138. -1140> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2139. -1139> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  2140. -1138> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  2141. -1137> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  2142. -1136> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname 
name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
-1135> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
-1134> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
-1133> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
-1132> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
-1131> 2019-01-29 15:26:44.580 7f39deeb51c0 1 mon.b@-1(probing) e0 preinit fsid 3b02750c-f104-4301-aa14-258d2b37f104
-1130> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 45
-1129> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 46
-1128> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 47
-1127> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 check_fsid cluster_uuid contains '3b02750c-f104-4301-aa14-258d2b37f104'
-1126> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
-1125> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 calc_quorum_requirements required_features 0
-1124> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 required_features 0
-1123> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 has_ever_joined = 0
-1122> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_last_committed_floor 0
-1121> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 init_paxos
-1120> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
-1119> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init
-1118> 2019-01-29 15:26:44.580 7f39deeb51c0 5 mon.b@-1(probing).mds e0 Unable to load 'last_metadata'
-1117> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).health init
-1116> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config init
-1115> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos
-1114> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
-1113> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) refresh
-1112> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) refresh
-1111> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) refresh
-1110> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos
-1109> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos version 0 summary v 0
-1108> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) refresh
-1107> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) refresh
-1106> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).auth v0 update_from_paxos
-1105> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) refresh
-1104> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config load_config got 0 keys
-1103> 2019-01-29 15:26:44.580 7f39deeb51c0 20 mon.b@-1(probing).config load_config config map:
{
"global": {},
"by_type": {},
"by_id": {}
}

-1102> 2019-01-29 15:26:44.581 7f39deeb51c0 4 set_mon_vals no callback set
-1101> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mgrc handle_mgr_map mgrmap(e 0) v1
-1100> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Got map version 0
-1099> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Active mgr is now
-1098> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc reconnect No active mgr available yet
-1097> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) refresh
-1096> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat 0
-1095> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat check_subs
-1094> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).mgrstat update_logger
-1093> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) refresh
-1092> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).health update_from_paxos
-1091> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).health dump:{
"quorum_health": {},
"leader_health": {}
}

-1090> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) refresh
-1089> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) post_refresh
-1088> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) post_refresh
-1087> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) post_refresh
-1086> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) post_refresh
-1085> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) post_refresh
-1084> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) post_refresh
-1083> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) post_refresh
-1082> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) post_refresh
-1081> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) post_refresh
-1080> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
-1079> 2019-01-29 15:26:44.584 7f39deeb51c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/dev/mon.b/keyring
-1078> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command mon_status hook 0x563939ab8750
-1077> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum_status hook 0x563939ab8750
-1076> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command sync_force hook 0x563939ab8750
-1075> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command add_bootstrap_peer_hint hook 0x563939ab8750
-1074> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command add_bootstrap_peer_hintv hook 0x563939ab8750
-1073> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum enter hook 0x563939ab8750
-1072> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum exit hook 0x563939ab8750
-1071> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command ops hook 0x563939ab8750
-1070> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command sessions hook 0x563939ab8750
-1069> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_ops hook 0x563939ab8750
-1068> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_ops_by_duration hook 0x563939ab8750
-1067> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_slow_ops hook 0x563939ab8750
-1066> 2019-01-29 15:26:44.585 7f39deeb51c0 1 finished global_init_daemonize
-1065> 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] start start
-1064> 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- start start
-1063> 2019-01-29 15:26:44.585 7f39deeb51c0 2 mon.b@-1(probing) e0 init
-1062> 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
-1061> 2019-01-29 15:26:44.585 7f39c6980700 1 lockdep using id 48
-1060> 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
-1059> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 bootstrap
-1058> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_reset_requester
-1057> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 unregister_cluster_logger - not registered
-1056> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 cancel_probe_timeout (none scheduled)
-1055> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
-1054> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 monmap e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
-1053> 2019-01-29 15:26:44.586 7f39deeb51c0 0 mon.b@-1(probing) e0 my rank is now 1 (was -1)
-1052> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] shutdown_connections
-1051> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 _reset
-1050> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
-1049> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 timecheck_finish
-1048> 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_tick_stop
-1047> 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_interval_stop
-1046> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_event_cancel
-1045> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_reset
-1044> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
-1043> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
-1042> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
-1041> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(logm 0..0) restart
-1040> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
-1039> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(auth 0..0) restart
-1038> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
-1037> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
-1036> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(health 0..0) restart
-1035> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(config 0..0) restart
-1034> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
-1033> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 reset_probe_timeout 0x56393ab65d70 after 2 seconds
-1032> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 probing other monitors
-1031> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916840
-1030> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916840 con 0x563939c9c900
-1029> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916b00
-1028> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916b00 con 0x563939c9cd80
-1027> 2019-01-29 15:26:44.586 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9cd80 msgr2=0x56393ab9a600 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
-1026> 2019-01-29 15:26:44.586 7f39c617f700 10 mon.b@1(probing) e0 ms_handle_refused 0x563939c9cd80 v2:10.215.99.125:40367/0
-1025> 2019-01-29 15:26:44.586 7f39c417b700 10 mon.b@1(probing) e0 ms_get_authorizer for mon
-1024> 2019-01-29 15:26:44.586 7f39c417b700 10 cephx: build_service_ticket service mon secret_id 18446744073709551615 ticket_info.ticket.name=mon.
-1023> 2019-01-29 15:26:44.587 7f39c417b700 10 In get_auth_session_handler for protocol 2
-1022> 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 14723405194060298632
-1021> 2019-01-29 15:26:44.587 7f39c417b700 20 Putting signature in client message(seq # 1): sig = 14723405194060298632
-1020> 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 2859661691 middle_crc = 0 data_crc = 0 sig = 12381238761605199092
-1019> 2019-01-29 15:26:44.587 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 ==== 58+0+0 (2859661691 0 0) 0x56393a917080 con 0x563939c9c900
-1018> 2019-01-29 15:26:44.588 7f39c617f700 1 lockdep using id 49
-1017> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _ms_dispatch new session 0x56393abc0000 MonSession(mon.0 [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
-1016> 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(probing) e0 _ms_dispatch setting monitor caps on this connection
-1015> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
-1014> 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-1013> 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
-1012> 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
-1011> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6
-1010> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_probe mon.0 v2:10.215.99.125:40363/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 features 4611087854031667199
-1009> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917340 con 0x563939c9c900
-1008> 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 2632229567743115917
-1007> 2019-01-29 15:26:44.588 7f39c417b700 20 Putting signature in client message(seq # 2): sig = 2632229567743115917
-1006> 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 137113040 middle_crc = 0 data_crc = 0 sig = 5942540075320245331
-1005> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (137113040 0 0) 0x56393a916840 con 0x563939c9c900
-1004> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
-1003> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
-1002> 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-1001> 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
-1000> 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
-999> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
-998> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_reply mon.0 v2:10.215.99.125:40363/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
-997> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
-996> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 peer name is a
-995> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 mon.a is outside the quorum
-994> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 outside_quorum now a,b, need 2
-993> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 that's enough to form a new quorum, calling election
-992> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 start_election
-991> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _reset
-990> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 cancel_probe_timeout 0x56393ab65d70
-989> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 timecheck_finish
-988> 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_tick_stop
-987> 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_interval_stop
-986> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_event_cancel
-985> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_reset
-984> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
-983> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
-982> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
-981> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
-980> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
-979> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
-978> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
-977> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
-976> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
-975> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
-974> 2019-01-29 15:26:44.588 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
-973> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client _send_to_mon log to self
-972> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client log_queue is 1 last_log 1 sent 0 num 1 unsent 1 sending 1
-971> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client will send 2019-01-29 15:26:44.589509 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
-970> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 -- 0x56393abc06c0 con 0x563939d41600
-969> 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(electing).elector(0) start -- can i be leader?
-968> 2019-01-29 15:26:44.588 7f39c617f700 1 mon.b@1(electing).elector(0) init, first boot, initializing epoch at 1
-967> 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 415793505725448275
-966> 2019-01-29 15:26:44.605 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
-965> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0f800
-964> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0f800 con 0x563939c9c900
-963> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0fb00
-962> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0fb00 con 0x563939c9cd80
-961> 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 10881007314159201096
-960> 2019-01-29 15:26:44.605 7f39c417b700 20 Putting signature in client message(seq # 3): sig = 10881007314159201096
-959> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 ==== 0+0+0 (0 0 0) 0x56393abc06c0 con 0x563939d41600
-958> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0d80 MonSession(mon.1 [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
-957> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
-956> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
-955> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-954> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:44.606661 lease_expire=0.000000 has v0 lc 0
-953> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-952> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0f500 con 0x563939c9c900
-951> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
-950> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
-949> 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-948> 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
-947> 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
-946> 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
-945> 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
-944> 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
-943> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.0
-942> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
-941> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) defer to 0
-940> 2019-01-29 15:26:44.606 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
-939> 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- ?+0 0x56393abd4000
-938> 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- 0x56393abd4000 con 0x563939c9c900
-937> 2019-01-29 15:26:44.606 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 3541336743 middle_crc = 0 data_crc = 0 sig = 8903611187062716984
-936> 2019-01-29 15:26:44.606 7f39c417b700 20 Putting signature in client message(seq # 4): sig = 8903611187062716984
-935> 2019-01-29 15:26:44.728 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=35 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53000/0 addrs are 145
-934> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
-933> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
-932> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 18425495964075312649
-931> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
-930> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
-929> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 18425495964075312650 expecting 18425495964075312650
-928> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
-927> 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
-926> 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=CLOSED pgs=0 cs=0 l=0).replace stop myself to swap existing
-925> 2019-01-29 15:26:44.729 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_reset 0x563939c9f600 v2:10.215.99.125:40367/0
-924> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
-923> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
-922> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 12808771302250315903
-921> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
-920> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
-919> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 12808771302250315904 expecting 12808771302250315904
-918> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
-917> 2019-01-29 15:26:44.729 7f39c397a700 10 In get_auth_session_handler for protocol 2
-916> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 4598065285107427218
-915> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 1): sig = 4598065285107427218
-914> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 12491993532019861601
-913> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 2): sig = 12491993532019861601
-912> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 4581398896165078038
-911> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a917600 con 0x563939c9cd80
-910> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0fc0 MonSession(mon.2 [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
-909> 2019-01-29 15:26:44.730 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
-908> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
-907> 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-906> 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
-905> 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
-904> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
-903> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
-902> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917b80 con 0x563939c9cd80
-901> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 17022378831273095551
-900> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 3): sig = 17022378831273095551
-899> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1672072532 middle_crc = 0 data_crc = 0 sig = 8103692841582008270
-898> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (1672072532 0 0) 0x56393a917b80 con 0x563939c9cd80
-897> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-896> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
-895> 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-894> 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
-893> 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
-892> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
-891> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
-890> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  2399. -889> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 peer name is c
  2400. -888> 2019-01-29 15:26:44.735 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4384171504794788302
  2401. -887> 2019-01-29 15:26:44.735 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0fb00 con 0x563939c9cd80
  2402. -886> 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  2403. -885> 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  2404. -884> 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  2405. -883> 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
  2406. -882> 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
  2407. -881> 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  2408. -880> 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
  2409. -879> 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
  2410. -878> 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.2
  2411. -877> 2019-01-29 15:26:44.735 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  2412. -876> 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) no, we already acked 0
  2413. -875> 2019-01-29 15:26:44.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9fa80 0x56393ab9b800 :53002 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=36 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53002/0 addrs are 145
  2414. -874> 2019-01-29 15:26:44.868 7f39c3179700 10 In get_auth_session_handler for protocol 0
  2415. -873> 2019-01-29 15:26:44.868 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393abc1b00 con 0x563939c9fa80
  2416. -872> 2019-01-29 15:26:44.868 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc1200 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
  2417. -871> 2019-01-29 15:26:44.868 7f39c617f700 20 mon.b@1(electing) e0 caps
  2418. -870> 2019-01-29 15:26:44.868 7f39c617f700 5 mon.b@1(electing) e0 waitlisting message auth(proto 0 30 bytes epoch 0) v1
  2419. -869> 2019-01-29 15:26:49.585 7f39c8984700 11 mon.b@1(electing) e0 tick
  2420. -868> 2019-01-29 15:26:49.585 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
  2421. -867> 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  2422. -866> 2019-01-29 15:26:49.585 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.586781 lease_expire=0.000000 has v0 lc 0
  2423. -865> 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  2424. -864> 2019-01-29 15:26:49.618 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 4020688038 middle_crc = 0 data_crc = 0 sig = 6749900040655150956
  2425. -863> 2019-01-29 15:26:49.618 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 4 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 victory 2) v7 ==== 41889+0+0 (4020688038 0 0) 0x56393abd4000 con 0x563939c9c900
  2426. -862> 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
  2427. -861> 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  2428. -860> 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
  2429. -859> 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
  2430. -858> 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
  2431. -857> 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
  2432. -856> 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
  2433. -855> 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
  2434. -854> 2019-01-29 15:26:49.619 7f39c617f700 5 mon.b@1(electing).elector(1) handle_victory from mon.0 quorum_features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  2435. -853> 2019-01-29 15:26:49.619 7f39c617f700 10 mon.b@1(electing).elector(1) bump_epoch 1 to 2
  2436. -852> 2019-01-29 15:26:49.623 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 2611882610 middle_crc = 0 data_crc = 0 sig = 10249081017414072692
  2437. -851> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 join_election
  2438. -850> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 _reset
  2439. -849> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
  2440. -848> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
  2441. -847> 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
  2442. -846> 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
  2443. -845> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
  2444. -844> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
  2445. -843> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  2446. -842> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
  2447. -841> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
  2448. -840> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
  2449. -839> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2450. -838> 2019-01-29 15:26:49.625 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626824 lease_expire=0.000000 has v0 lc 0
  2451. -837> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-836> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
-835> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
-834> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-833> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626971 lease_expire=0.000000 has v0 lc 0
-832> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-831> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
-830> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
-829> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
-828> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
-827> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon) e0 lose_election, epoch 2 leader is mon0 quorum is 0,1 features are 4611087854031667199 mon_features are mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
-826> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) peon_init -- i am a peon
-825> 2019-01-29 15:26:49.626 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
-824> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) election_finished
-823> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active - not active
-822> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) election_finished
-821> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active - not active
-820> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) election_finished
-819> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-818> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627225 lease_expire=0.000000 has v0 lc 0
-817> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-816> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active - not active
-815> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) election_finished
-814> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active - not active
-813> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) election_finished
-812> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-811> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627327 lease_expire=0.000000 has v0 lc 0
-810> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-809> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active - not active
-808> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) election_finished
-807> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active - not active
-806> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) election_finished
-805> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active - not active
-804> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) election_finished
-803> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active - not active
-802> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) election_finished
-801> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active - not active
-800> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon) e0 apply_quorum_to_compatset_features
-799> 2019-01-29 15:26:49.626 7f39c617f700 1 mon.b@1(peon) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
-798> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 calc_quorum_requirements required_features 549755813888
-797> 2019-01-29 15:26:49.632 7f39c617f700 5 mon.b@1(peon) e0 apply_monmap_to_compatset_features
-796> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
-795> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 resend_routed_requests
-794> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 register_cluster_logger
-793> 2019-01-29 15:26:49.634 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 5 ==== paxos(collect lc 0 fc 0 pn 100 opn 0) v4 ==== 84+0+0 (2611882610 0 0) 0x563939b0f800 con 0x563939c9c900
-792> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
-791> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
-790> 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-789> 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
-788> 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
-787> 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
-786> 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
-785> 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
-784> 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) handle_collect paxos(collect lc 0 fc 0 pn 100 opn 0) v4
-783> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
-782> 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) accepting pn 100 from 0
-781> 2019-01-29 15:26:49.639 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(last lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4300 con 0x563939c9c900
-780> 2019-01-29 15:26:49.639 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 237159531 middle_crc = 0 data_crc = 0 sig = 17831234451164454257
-779> 2019-01-29 15:26:49.639 7f39c417b700 20 Putting signature in client message(seq # 5): sig = 17831234451164454257
-778> 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 4255564515 middle_crc = 0 data_crc = 0 sig = 9698958856354677651
-777> 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 6 ==== paxos(lease lc 0 fc 0 pn 0 opn 0) v4 ==== 84+0+0 (4255564515 0 0) 0x56393abd4300 con 0x563939c9c900
-776> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
-775> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
-774> 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-773> 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
-772> 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
-771> 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
-770> 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
-769> 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
-768> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_lease on 0 now 2019-01-29 15:26:54.641538
-767> 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(lease_ack lc 0 fc 0 pn 0 opn 0) v4 -- 0x56393abd4600 con 0x563939c9c900
-766> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon).paxos(paxos active c 0..0) reset_lease_timeout - setting timeout event
-765> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active
-764> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(mdsmap 0..0) _active we are not the leader, hence we propose nothing!
-763> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active
-762> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(osdmap 0..0) _active we are not the leader, hence we propose nothing!
-761> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 update_logger
-760> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 take_all_failures on 0 osds
-759> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 start_mapping no pools, no mapping job
-758> 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 3188025122 middle_crc = 0 data_crc = 0 sig = 13621511732933224171
-757> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active
-756> 2019-01-29 15:26:49.641 7f39c417b700 20 Putting signature in client message(seq # 6): sig = 13621511732933224171
-755> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(logm 0..0) _active we are not the leader, hence we propose nothing!
-754> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-753> 2019-01-29 15:26:49.641 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.642816 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
-752> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-751> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active
-750> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(monmap 0..0) _active we are not the leader, hence we propose nothing!
-749> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).monmap v0 apply_mon_features wait for service to be writeable
-748> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active
-747> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(auth 0..0) _active we are not the leader, hence we propose nothing!
-746> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-745> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643006 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
-744> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-743> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).auth v0 AuthMonitor::on_active()
-742> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active
-741> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgr 0..0) _active we are not the leader, hence we propose nothing!
-740> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active
-739> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgrstat 0..0) _active we are not the leader, hence we propose nothing!
-738> 2019-01-29 15:26:49.642 7f39c617f700 20 mon.b@1(peon).mgrstat update_logger
-737> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active
-736> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(health 0..0) _active we are not the leader, hence we propose nothing!
-735> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active
-734> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(config 0..0) _active we are not the leader, hence we propose nothing!
-733> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643134 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
-732> 2019-01-29 15:26:49.652 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 359680839 middle_crc = 0 data_crc = 0 sig = 14961307058218807505
-731> 2019-01-29 15:26:49.653 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 7 ==== paxos(begin lc 0 fc 0 pn 100 opn 0) v4 ==== 2292+0+0 (359680839 0 0) 0x56393abd4600 con 0x563939c9c900
-730> 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
-729> 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
-728> 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
-727> 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
-726> 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
-725> 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
-724> 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
-723> 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
-722> 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_begin paxos(begin lc 0 fc 0 pn 100 opn 0) v4
-721> 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) accepting value for 1 pn 100
-720> 2019-01-29 15:26:49.657 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(accept lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4900 con 0x563939c9c900
-719> 2019-01-29 15:26:49.657 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 3516937211 middle_crc = 0 data_crc = 0 sig = 18198979658301600163
-718> 2019-01-29 15:26:49.657 7f39c417b700 20 Putting signature in client message(seq # 7): sig = 18198979658301600163
-717> 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 1185258935604909951
-716> 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 4 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a916b00 con 0x563939c9cd80
-715> 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-714> 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
-713> 2019-01-29 15:26:49.736 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-712> 2019-01-29 15:26:49.736 7f39c617f700 20 allow so far , doing grant allow *
-711> 2019-01-29 15:26:49.736 7f39c617f700 20 allow all
-710> 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
-709> 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
-708> 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b quorum 0,1 paxos( fc 0 lc 0 ) new) v6 -- 0x56393affe000 con 0x563939c9cd80
-707> 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 3631419306 middle_crc = 0 data_crc = 0 sig = 17101328230604225658
-706> 2019-01-29 15:26:49.736 7f39c397a700 20 Putting signature in client message(seq # 4): sig = 17101328230604225658
-705> 2019-01-29 15:26:49.754 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4909251354610610373
-704> 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 5 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x56393abd5500 con 0x563939c9cd80
-703> 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-702> 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
-701> 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-700> 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
-699> 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
-698> 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
-697> 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
-696> 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
-695> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) handle_propose from mon.2
-694> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).elector(2) handle_propose required features 549755813888 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
-693> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) got propose from old epoch, quorum is 0,1, mon.2 must have just started
-692> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 start_election
-691> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 _reset
-690> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 cancel_probe_timeout (none scheduled)
-689> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
-688> 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_tick_stop
-687> 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_interval_stop
-686> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_event_cancel
-685> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_reset
-684> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) restart -- canceling timeouts
-683> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) restart
-682> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) restart
-681> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) restart
-680> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-679> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755397 lease_expire=0.000000 has v0 lc 0
-678> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-677> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) restart
-676> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) restart
-675> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-674> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755436 lease_expire=0.000000 has v0 lc 0
-673> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-672> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) restart
-671> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) restart
-670> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) restart
-669> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) restart
-668> 2019-01-29 15:26:49.754 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
-667> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client _send_to_mon log to self
-666> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client log_queue is 2 last_log 2 sent 1 num 2 unsent 1 sending 1
-665> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client will send 2019-01-29 15:26:49.755480 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
-664> 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 -- 0x56393b000000 con 0x563939d41600
-663> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(electing).elector(2) start -- can i be leader?
-662> 2019-01-29 15:26:49.754 7f39c617f700 1 mon.b@1(electing).elector(2) init, last seen epoch 2
-661> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(electing).elector(2) bump_epoch 2 to 3
-660> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 join_election
-659> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 _reset
-658> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
-657> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
-656> 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
-655> 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
-654> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
-653> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
-652> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
-651> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
-650> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
-649> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
-648> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-647> 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759261 lease_expire=0.000000 has v0 lc 0
-646> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2643. -645> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
  2644. -644> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  2645. -643> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  2646. -642> 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759298 lease_expire=0.000000 has v0 lc 0
  2647. -641> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  2648. -640> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  2649. -639> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  2650. -638> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  2651. -637> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  2652. -636> 2019-01-29 15:26:49.758 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
  2653. -635> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4c00
  2654. -634> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4c00 con 0x563939c9c900
  2655. -633> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4f00
  2656. -632> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4f00 con 0x563939c9cd80
  2657. -631> 2019-01-29 15:26:49.759 7f39c417b700 10 _calc_signature seq 8 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 12490028364350923317
  2658. -630> 2019-01-29 15:26:49.759 7f39c417b700 20 Putting signature in client message(seq # 8): sig = 12490028364350923317
  2659. -629> 2019-01-29 15:26:49.759 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 13778264798706442571
  2660. -628> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 ==== 0+0+0 (0 0 0) 0x56393b000000 con 0x563939d41600
  2661. -627> 2019-01-29 15:26:49.759 7f39c397a700 20 Putting signature in client message(seq # 5): sig = 13778264798706442571
  2662. -626> 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
  2663. -625> 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  2664. -624> 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2665. -623> 2019-01-29 15:26:49.759 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.760231 lease_expire=0.000000 has v0 lc 0
  2666. -622> 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2667. -621> 2019-01-29 15:26:49.764 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2637561887 middle_crc = 0 data_crc = 0 sig = 10520589114785615889
  2668. -620> 2019-01-29 15:26:49.764 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 6 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 3) v7 ==== 1190+0+0 (2637561887 0 0) 0x56393abd4f00 con 0x563939c9cd80
  2669. -619> 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  2670. -618> 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
  2671. -617> 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  2672. -616> 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
  2673. -615> 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
  2674. -614> 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  2675. -613> 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
  2676. -612> 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
  2677. -611> 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) handle_ack from mon.2
  2678. -610> 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
  2679. -609> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk reading from fd=30 : Unknown error -104
  2680. -608> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed
  2681. -607> 2019-01-29 15:26:50.038 7f39c417b700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).handle_message read tag failed
  2682. -606> 2019-01-29 15:26:50.038 7f39c417b700 0 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).fault initiating reconnect
  2683. -605> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  2684. -604> 2019-01-29 15:26:50.038 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  2685. -603> 2019-01-29 15:26:50.239 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  2686. -602> 2019-01-29 15:26:50.239 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  2687. -601> 2019-01-29 15:26:50.639 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  2688. -600> 2019-01-29 15:26:50.640 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  2689. -599> 2019-01-29 15:26:51.441 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  2690. -598> 2019-01-29 15:26:51.441 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  2691. -597> 2019-01-29 15:26:53.043 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
  2692. -596> 2019-01-29 15:26:53.043 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
  2693. -595> 2019-01-29 15:26:54.586 7f39c8984700 11 mon.b@1(electing) e0 tick
  2694. -594> 2019-01-29 15:26:54.586 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
  2695. -593> 2019-01-29 15:26:54.759 7f39c8984700 5 mon.b@1(electing).elector(3) election timer expired
  2696. -592> 2019-01-29 15:26:54.759 7f39c8984700 10 mon.b@1(electing).elector(3) bump_epoch 3 to 4
  2697. -591> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 join_election
  2698. -590> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 _reset
  2699. -589> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
  2700. -588> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 timecheck_finish
  2701. -587> 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_tick_stop
  2702. -586> 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_interval_stop
  2703. -585> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 scrub_event_cancel
  2704. -584> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing) e0 scrub_reset
  2705. -583> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  2706. -582> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
  2707. -581> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
  2708. -580> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
  2709. -579> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2710. -578> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772105 lease_expire=0.000000 has v0 lc 0
  2711. -577> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2712. -576> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2713. -575> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772223 lease_expire=0.000000 has v0 lc 0
  2714. -574> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2715. -573> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
  2716. -572> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  2717. -571> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  2718. -570> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772325 lease_expire=0.000000 has v0 lc 0
  2719. -569> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  2720. -568> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  2721. -567> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  2722. -566> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  2723. -565> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  2724. -564> 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- ?+0 0x56393abd5800
  2725. -563> 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- 0x56393abd5800 con 0x563939c9cd80
  2726. -562> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(electing) e0 win_election epoch 4 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  2727. -561> 2019-01-29 15:26:54.772 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
  2728. -560> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client _send_to_mon log to self
  2729. -559> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client log_queue is 3 last_log 3 sent 2 num 3 unsent 1 sending 1
  2730. -558> 2019-01-29 15:26:54.772 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2368543008 middle_crc = 0 data_crc = 0 sig = 5116208503718269665
  2731. -557> 2019-01-29 15:26:54.772 7f39c397a700 20 Putting signature in client message(seq # 6): sig = 5116208503718269665
  2732. -556> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client will send 2019-01-29 15:26:54.773061 mon.b (mon.1) 3 : cluster [INF] mon.b is new leader, mons b,c in quorum (ranks 1,2)
  2733. -555> 2019-01-29 15:26:54.772 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 -- 0x56393b000d80 con 0x563939d41600
  2734. -554> 2019-01-29 15:26:54.772 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 ==== 0+0+0 (0 0 0) 0x56393b000d80 con 0x563939d41600
  2735. -553> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) leader_init -- starting paxos recovery
  2736. -552> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) learned uncommitted 1 pn 100 (2196 bytes) from myself
  2737. -551> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) get_new_proposal_number = 201
  2738. -550> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) collect with pn 201
  2739. -549> 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5b00
  2740. -548> 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5b00 con 0x563939c9cd80
  2741. -547> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) election_finished
  2742. -546> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) _active - not active
  2743. -545> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
  2744. -544> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
  2745. -543> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
  2746. -542> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
  2747. -541> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
  2748. -540> 2019-01-29 15:26:54.777 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 1789744833 middle_crc = 0 data_crc = 0 sig = 2735914805292709925
  2749. -539> 2019-01-29 15:26:54.777 7f39c397a700 20 Putting signature in client message(seq # 7): sig = 2735914805292709925
  2750. -538> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2751. -537> 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778696 lease_expire=0.000000 has v0 lc 0
  2752. -536> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2753. -535> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2754. -534> 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778814 lease_expire=0.000000 has v0 lc 0
  2755. -533> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2756. -532> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
  2757. -531> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
  2758. -530> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  2759. -529> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.779003 lease_expire=0.000000 has v0 lc 0
  2760. -528> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  2761. -527> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
  2762. -526> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
  2763. -525> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
  2764. -524> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
  2765. -523> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
  2766. -522> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
  2767. -521> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
  2768. -520> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
  2769. -519> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
  2770. -518> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_quorum_to_compatset_features
  2771. -517> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_monmap_to_compatset_features
  2772. -516> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 timecheck_finish
  2773. -515> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 resend_routed_requests
  2774. -514> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 register_cluster_logger - already registered
  2775. -513> 2019-01-29 15:26:54.778 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
  2776. -512> 2019-01-29 15:26:54.779 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  2777. -511> 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  2778. -510> 2019-01-29 15:26:54.779 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.780024 lease_expire=0.000000 has v0 lc 0
  2779. -509> 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  2780. -508> 2019-01-29 15:26:54.794 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 2077645696 middle_crc = 0 data_crc = 0 sig = 16008821289831722732
  2781. -507> 2019-01-29 15:26:54.795 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 7 ==== paxos(last lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (2077645696 0 0) 0x56393abd5b00 con 0x563939c9cd80
  2782. -506> 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  2783. -505> 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  2784. -504> 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  2785. -503> 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
  2786. -502> 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
  2787. -501> 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  2788. -500> 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
  2789. -499> 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
  2790. -498> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) handle_last paxos(last lc 0 fc 0 pn 201 opn 0) v4
  2791. -497> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) store_state nothing to commit
  2792. -496> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) they accepted our pn, we now have 2 peons
  2793. -495> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) that's everyone. begin on old learned value
  2794. -494> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) begin for 1 2196 bytes
  2795. -493> 2019-01-29 15:26:54.800 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) sending begin to mon.2
  2796. -492> 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5200
  2797. -491> 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5200 con 0x563939c9cd80
  2798. -490> 2019-01-29 15:26:54.800 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3535318905 middle_crc = 0 data_crc = 0 sig = 6870884659653601128
  2799. -489> 2019-01-29 15:26:54.800 7f39c397a700 20 Putting signature in client message(seq # 8): sig = 6870884659653601128
  2800. -488> 2019-01-29 15:26:54.807 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3909173416 middle_crc = 0 data_crc = 0 sig = 17406004129815643634
  2801. -487> 2019-01-29 15:26:54.807 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 8 ==== paxos(accept lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (3909173416 0 0) 0x56393abd5200 con 0x563939c9cd80
  2802. -486> 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  2803. -485> 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
  2804. -484> 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  2805. -483> 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
  2806. -482> 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
  2807. -481> 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  2808. -480> 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
  2809. -479> 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
  2810. -478> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) handle_accept paxos(accept lc 0 fc 0 pn 201 opn 0) v4
  2811. -477> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) now 1,2 have accepted
  2812. -476> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) got majority, committing, done with update
  2813. -475> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) commit_start 1
  2814. -474> 2019-01-29 15:26:54.813 7f39c2978700 20 mon.b@1(leader).paxos(paxos writing-previous c 0..0) commit_finish 1
  2815. -473> 2019-01-29 15:26:54.813 7f39c2978700 10 mon.b@1(leader).paxos(paxos writing-previous c 1..1) sending commit to mon.2
  2816. -472> 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393b00a000
  2817. -471> 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- 0x56393b00a000 con 0x563939c9cd80
  2818. -470> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader) e0 refresh_from_paxos
  2819. -469> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
  2820. -468> 2019-01-29 15:26:54.814 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 558359806 middle_crc = 0 data_crc = 0 sig = 15537979346010902130
  2821. -467> 2019-01-29 15:26:54.814 7f39c397a700 20 Putting signature in client message(seq # 9): sig = 15537979346010902130
  2822. -466> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
  2823. -465> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
  2824. -464> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos
  2825. -463> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
  2826. -462> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
  2827. -461> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos version 1, my v 0
  2828. -460> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 signaling that we need a bootstrap
  2829. -459> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos got 1
  2830. -458> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
  2831. -457> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).auth v0 update_from_paxos
  2832. -456> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
  2833. -455> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).config load_config got 0 keys
  2834. -454> 2019-01-29 15:26:54.819 7f39c2978700 20 mon.b@1(leader).config load_config config map:
  2835. {
  2836. "global": {},
  2837. "by_type": {},
  2838. "by_id": {}
  2839. }
  2840.  
-453> 2019-01-29 15:26:54.819 7f39c2978700 4 set_mon_vals no callback set
-452> 2019-01-29 15:26:54.830 7f39c2978700 20 mgrc handle_mgr_map mgrmap(e 0) v1
-451> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Got map version 0
-450> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Active mgr is now
-449> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc reconnect No active mgr available yet
-448> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
-447> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat 0
-446> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat check_subs
-445> 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).mgrstat update_logger
-444> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
-443> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).health update_from_paxos
-442> 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).health dump:{
"quorum_health": {},
"leader_health": {}
}

-441> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
-440> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
-439> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
-438> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
-437> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
-436> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
-435> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
-434> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
-433> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
-432> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
-431> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader).paxos(paxos refresh c 1..1) doing requested bootstrap
-430> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 bootstrap
-429> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 sync_reset_requester
-428> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 unregister_cluster_logger
-427> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 cancel_probe_timeout (none scheduled)
-426> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
-425> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 _reset
-424> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
-423> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 timecheck_finish
-422> 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_tick_stop
-421> 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_interval_stop
-420> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_event_cancel
-419> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_reset
-418> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxos(paxos refresh c 1..1) restart -- canceling timeouts
-417> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
-416> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
-415> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
-414> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-413> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832435 lease_expire=0.000000 has v0 lc 1
-412> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-411> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-410> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832553 lease_expire=0.000000 has v0 lc 1
-409> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-408> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-407> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832654 lease_expire=0.000000 has v0 lc 1
-406> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-405> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
-404> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
-403> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-402> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832778 lease_expire=0.000000 has v0 lc 1
-401> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-400> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
-399> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
-398> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(health 0..0) restart
-397> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(config 0..0) restart
-396> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
-395> 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 reset_probe_timeout 0x56393affc480 after 2 seconds
-394> 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 probing other monitors
-393> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affe840
-392> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affe840 con 0x563939c9c900
-391> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affeb00
-390> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affeb00 con 0x563939c9cd80
-389> 2019-01-29 15:26:54.832 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 3184037569713953523
-388> 2019-01-29 15:26:54.832 7f39c397a700 20 Putting signature in client message(seq # 10): sig = 3184037569713953523
-387> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 8540967715760223295
-386> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 9 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393affeb00 con 0x563939c9cd80
-385> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-384> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
-383> 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-382> 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
-381> 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
-380> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
-379> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
-378> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 800172242 middle_crc = 0 data_crc = 0 sig = 7787347958796391197
-377> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 1 lc 1 ) new) v6 -- 0x56393affe2c0 con 0x563939c9cd80
-376> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 443766928 middle_crc = 0 data_crc = 0 sig = 3146968082825100645
-375> 2019-01-29 15:26:54.835 7f39c397a700 20 Putting signature in client message(seq # 11): sig = 3146968082825100645
-374> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 10 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6 ==== 430+0+0 (800172242 0 0) 0x56393affe000 con 0x563939c9cd80
-373> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-372> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
-371> 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-370> 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
-369> 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
-368> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
-367> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
-366> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 monmap is e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
-365> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 peer name is c
-364> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 mon.c is outside the quorum
-363> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 outside_quorum now b,c, need 2
-362> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 that's enough to form a new quorum, calling election
-361> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 start_election
-360> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 _reset
-359> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 cancel_probe_timeout 0x56393affc480
-358> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 timecheck_finish
-357> 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_tick_stop
-356> 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_interval_stop
-355> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_event_cancel
-354> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_reset
-353> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
-352> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
-351> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
-350> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
-349> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-348> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836598 lease_expire=0.000000 has v0 lc 1
-347> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-346> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-345> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836628 lease_expire=0.000000 has v0 lc 1
-344> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-343> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-342> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836663 lease_expire=0.000000 has v0 lc 1
-341> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-340> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
-339> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
-338> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-337> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836708 lease_expire=0.000000 has v0 lc 1
-336> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-335> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
-334> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
-333> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
-332> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
-331> 2019-01-29 15:26:54.835 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
-330> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client _send_to_mon log to self
-329> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client log_queue is 4 last_log 4 sent 3 num 4 unsent 1 sending 1
-328> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client will send 2019-01-29 15:26:54.836762 mon.b (mon.1) 4 : cluster [INF] mon.b calling monitor election
-327> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 -- 0x56393b000240 con 0x563939d41600
-326> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(electing).elector(4) start -- can i be leader?
-325> 2019-01-29 15:26:54.835 7f39c617f700 1 mon.b@1(electing).elector(4) init, last seen epoch 4
-324> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(electing).elector(4) bump_epoch 4 to 5
-323> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 join_election
-322> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 _reset
-321> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
-320> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 timecheck_finish
-319> 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_tick_stop
-318> 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_interval_stop
-317> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_event_cancel
-316> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_reset
-315> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
-314> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
-313> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
-312> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
-311> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-310> 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840738 lease_expire=0.000000 has v0 lc 1
-309> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-308> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-307> 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840832 lease_expire=0.000000 has v0 lc 1
-306> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-305> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-304> 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840935 lease_expire=0.000000 has v0 lc 1
-303> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-302> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
-301> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
-300> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
-299> 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.841040 lease_expire=0.000000 has v0 lc 1
-298> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-297> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
-296> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
-295> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
-294> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
-293> 2019-01-29 15:26:54.853 7f39c617f700 -1 mon.b@1(electing) e1 devname dm-0
-292> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00a900
-291> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00a900 con 0x563939c9c900
-290> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00ac00
-289> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00ac00 con 0x563939c9cd80
-288> 2019-01-29 15:26:54.854 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12969980896879027673
-287> 2019-01-29 15:26:54.854 7f39c397a700 20 Putting signature in client message(seq # 12): sig = 12969980896879027673
-286> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 ==== 0+0+0 (0 0 0) 0x56393b000240 con 0x563939d41600
-285> 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
-284> 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
-283> 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-282> 2019-01-29 15:26:54.854 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.855846 lease_expire=0.000000 has v0 lc 1
-281> 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-280> 2019-01-29 15:26:54.855 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12994639518338884118
-279> 2019-01-29 15:26:54.855 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 11 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 ==== 450+0+0 (1196600255 0 0) 0x56393b00a000 con 0x563939c9cd80
-278> 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-277> 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
-276> 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-275> 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
-274> 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
-273> 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
-272> 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
-271> 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
-270> 2019-01-29 15:26:54.855 7f39c617f700 5 mon.b@1(electing).elector(5) handle_propose from mon.2
-269> 2019-01-29 15:26:54.855 7f39c617f700 10 mon.b@1(electing).elector(5) handle_propose required features 549755813888 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
-268> 2019-01-29 15:26:54.857 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 2042675853 middle_crc = 0 data_crc = 0 sig = 8776967150872131881
-267> 2019-01-29 15:26:54.857 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 12 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 5) v7 ==== 1190+0+0 (2042675853 0 0) 0x56393b00ac00 con 0x563939c9cd80
-266> 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
-265> 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
-264> 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
-263> 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
-262> 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
-261> 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
-260> 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
-259> 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
-258> 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) handle_ack from mon.2
-257> 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
-256> 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 36
-255> 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
-254> 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).handle_message read tag failed
-253> 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).fault on lossy channel, failing
-252> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x563939c9fa80 v2:10.215.99.125:53002/4155176800
-251> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
-250> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1200 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
-249> 2019-01-29 15:26:54.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026000 0x56393ab9be00 :53042 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53042/0 addrs are 145
-248> 2019-01-29 15:26:54.869 7f39c3179700 10 In get_auth_session_handler for protocol 0
-247> 2019-01-29 15:26:54.870 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001b00 con 0x56393b026000
-246> 2019-01-29 15:26:54.870 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393abc1d40 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
-245> 2019-01-29 15:26:54.870 7f39c617f700 20 mon.b@1(electing) e1 caps
-244> 2019-01-29 15:26:54.870 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
-243> 2019-01-29 15:26:56.244 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
-242> 2019-01-29 15:26:56.244 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
-241> 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 30
-240> 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
-239> 2019-01-29 15:26:57.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).handle_message read tag failed
-238> 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).fault on lossy channel, failing
-237> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x56393b026000 v2:10.215.99.125:53002/4155176800
-236> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
-235> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1d40 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
-234> 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026480 0x56393ab9d600 :53056 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53056/0 addrs are 145
-233> 2019-01-29 15:26:57.871 7f39c3179700 10 In get_auth_session_handler for protocol 0
-232> 2019-01-29 15:26:57.871 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001d40 con 0x56393b026480
-231> 2019-01-29 15:26:57.872 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393b000480 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
-230> 2019-01-29 15:26:57.872 7f39c617f700 20 mon.b@1(electing) e1 caps
-229> 2019-01-29 15:26:57.872 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
-228> 2019-01-29 15:26:59.586 7f39c8984700 11 mon.b@1(electing) e1 tick
-227> 2019-01-29 15:26:59.586 7f39c8984700 20 mon.b@1(electing) e1 sync_trim_providers
-226> 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing) e1 session closed, dropping 0x56393b001b00
-225> 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
-224> 2019-01-29 15:26:59.586 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.587891 lease_expire=0.000000 has v0 lc 1
-223> 2019-01-29 15:26:59.587 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
-222> 2019-01-29 15:26:59.854 7f39c8984700 5 mon.b@1(electing).elector(5) election timer expired
-221> 2019-01-29 15:26:59.854 7f39c8984700 10 mon.b@1(electing).elector(5) bump_epoch 5 to 6
-220> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 join_election
-219> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 _reset
-218> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
-217> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 timecheck_finish
-216> 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_tick_stop
-215> 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_interval_stop
-214> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_event_cancel
-213> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_reset
-212> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
-211> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
-210> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
-209> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
-208> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-207> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867221 lease_expire=0.000000 has v0 lc 1
-206> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
-205> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
-204> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867315 lease_expire=0.000000 has v0 lc 1
  3095. -203> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3096. -202> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3097. -201> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867394 lease_expire=0.000000 has v0 lc 1
  3098. -200> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3099. -199> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3100. -198> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867477 lease_expire=0.000000 has v0 lc 1
  3101. -197> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3102. -196> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
  3103. -195> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
  3104. -194> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
  3105. -193> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) discarding message from disconnected client client.? v2:10.215.99.125:53002/4155176800 auth(proto 0 30 bytes epoch 0) v1
  3106. -192> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
  3107. -191> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867654 lease_expire=0.000000 has v0 lc 1
  3108. -190> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  3109. -189> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
  3110. -188> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
  3111. -187> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
  3112. -186> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
  3113. -185> 2019-01-29 15:26:59.866 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- ?+0 0x56393b00b800
  3114. -184> 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- 0x56393b00b800 con 0x563939c9cd80
  3115. -183> 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(electing) e1 win_election epoch 6 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  3116. -182> 2019-01-29 15:26:59.867 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
  3117. -181> 2019-01-29 15:26:59.867 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 178927411 middle_crc = 0 data_crc = 0 sig = 794234354135278605
  3118. -180> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client _send_to_mon log to self
  3119. -179> 2019-01-29 15:26:59.867 7f39c397a700 20 Putting signature in client message(seq # 13): sig = 794234354135278605
  3120. -178> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client log_queue is 5 last_log 5 sent 4 num 5 unsent 1 sending 1
  3121. -177> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client will send 2019-01-29 15:26:59.868170 mon.b (mon.1) 5 : cluster [INF] mon.b is new leader, mons b,c in quorum (ranks 1,2)
  3122. -176> 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 -- 0x56393b000fc0 con 0x563939d41600
  3123. -175> 2019-01-29 15:26:59.867 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 ==== 0+0+0 (0 0 0) 0x56393b000fc0 con 0x563939d41600
  3124. -174> 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) leader_init -- starting paxos recovery
  3125. -173> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) get_new_proposal_number = 301
  3126. -172> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) collect with pn 301
  3127. -171> 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- ?+0 0x56393b00bb00
  3128. -170> 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- 0x56393b00bb00 con 0x563939c9cd80
  3129. -169> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) election_finished
  3130. -168> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active - not active
  3131. -167> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
  3132. -166> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
  3133. -165> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
  3134. -164> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
  3135. -163> 2019-01-29 15:26:59.872 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 4116324937 middle_crc = 0 data_crc = 0 sig = 15318303623903287436
  3136. -162> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
  3137. -161> 2019-01-29 15:26:59.872 7f39c397a700 20 Putting signature in client message(seq # 14): sig = 15318303623903287436
  3138. -160> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3139. -159> 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873644 lease_expire=0.000000 has v0 lc 1
  3140. -158> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3141. -157> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3142. -156> 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873776 lease_expire=0.000000 has v0 lc 1
  3143. -155> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3144. -154> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3145. -153> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873916 lease_expire=0.000000 has v0 lc 1
  3146. -152> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3147. -151> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3148. -150> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874034 lease_expire=0.000000 has v0 lc 1
  3149. -149> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3150. -148> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
  3151. -147> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
  3152. -146> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
  3153. -145> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874163 lease_expire=0.000000 has v0 lc 1
  3154. -144> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
  3155. -143> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
  3156. -142> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
  3157. -141> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
  3158. -140> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
  3159. -139> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
  3160. -138> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
  3161. -137> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
  3162. -136> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
  3163. -135> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
  3164. -134> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_quorum_to_compatset_features
  3165. -133> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_monmap_to_compatset_features
  3166. -132> 2019-01-29 15:26:59.873 7f39c8984700 1 mon.b@1(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout}
  3167. -131> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 calc_quorum_requirements required_features 2449958747315912708
  3168. -130> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_finish
  3169. -129> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 resend_routed_requests
  3170. -128> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 register_cluster_logger
  3171. -127> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start
  3172. -126> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round curr 0
  3173. -125> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round new 1
  3174. -124> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck
  3175. -123> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck start timecheck epoch 6 round 1
  3176. -122> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck send time_check( ping e 6 r 1 ) v1 to mon.2
  3177. -121> 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- ?+0 0x56393b000b40
  3178. -120> 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- 0x56393b000b40 con 0x563939c9cd80
  3179. -119> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round setting up next event
  3180. -118> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_reset_event delay 300 rounds_since_clean 0
  3181. -117> 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_start
  3182. -116> 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_stop
  3183. -115> 2019-01-29 15:26:59.879 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 72719240 middle_crc = 0 data_crc = 0 sig = 11523137460518662160
  3184. -114> 2019-01-29 15:26:59.879 7f39c397a700 20 Putting signature in client message(seq # 15): sig = 11523137460518662160
  3185. -113> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 scrub_event_start
  3186. -112> 2019-01-29 15:26:59.879 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
  3187. -111> 2019-01-29 15:26:59.880 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
  3188. -110> 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000fc0 log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
  3189. -109> 2019-01-29 15:26:59.880 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.881040 lease_expire=0.000000 has v0 lc 1
  3190. -108> 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
  3191. -107> 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 4120674357 middle_crc = 0 data_crc = 0 sig = 15016538117729764366
  3192. -106> 2019-01-29 15:26:59.889 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 13 ==== paxos(last lc 1 fc 1 pn 301 opn 0) v4 ==== 84+0+0 (4120674357 0 0) 0x56393b00b800 con 0x563939c9cd80
  3193. -105> 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 1460593560 middle_crc = 0 data_crc = 0 sig = 3747477798607265139
  3194. -104> 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
  3195. -103> 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
  3196. -102> 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
  3197. -101> 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
  3198. -100> 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
  3199. -99> 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
  3200. -98> 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
  3201. -97> 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
  3202. -96> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) handle_last paxos(last lc 1 fc 1 pn 301 opn 0) v4
  3203. -95> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) store_state nothing to commit
  3204. -94> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) they accepted our pn, we now have 2 peons
  3205. -93> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) that's everyone. active!
  3206. -92> 2019-01-29 15:26:59.890 7f39c617f700 7 mon.b@1(leader).paxos(paxos recovering c 1..1) extend_lease now+5 (2019-01-29 15:27:04.891100)
  3207. -91> 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- ?+0 0x56393b00af00
  3208. -90> 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- 0x56393b00af00 con 0x563939c9cd80
  3209. -89> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader) e1 refresh_from_paxos
  3210. -88> 2019-01-29 15:26:59.890 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 3892865509 middle_crc = 0 data_crc = 0 sig = 11834348323597999916
  3211. -87> 2019-01-29 15:26:59.890 7f39c397a700 20 Putting signature in client message(seq # 16): sig = 11834348323597999916
  3212. -86> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
  3213. -85> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
  3214. -84> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
  3215. -83> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos
  3216. -82> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
  3217. -81> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
  3218. -80> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
  3219. -79> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).auth v0 update_from_paxos
  3220. -78> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
  3221. -77> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).config load_config got 0 keys
  3222. -76> 2019-01-29 15:26:59.891 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 3498592039 middle_crc = 0 data_crc = 0 sig = 6841074444368600247
  3223. -75> 2019-01-29 15:26:59.891 7f39c617f700 20 mon.b@1(leader).config load_config config map:
  3224. {
  3225. "global": {},
  3226. "by_type": {},
  3227. "by_id": {}
  3228. }
  3229.  
  3230. -74> 2019-01-29 15:26:59.891 7f39c617f700 4 set_mon_vals no callback set
  3231. -73> 2019-01-29 15:26:59.897 7f39c617f700 20 mgrc handle_mgr_map mgrmap(e 0) v1
  3232. -72> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Got map version 0
  3233. -71> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Active mgr is now
  3234. -70> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc reconnect No active mgr available yet
  3235. -69> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
  3236. -68> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat 0
  3237. -67> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat check_subs
  3238. -66> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).mgrstat update_logger
  3239. -65> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
  3240. -64> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).health update_from_paxos
  3241. -63> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).health dump:{
  3242. "quorum_health": {},
  3243. "leader_health": {}
  3244. }
  3245.  
  3246. -62> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
  3247. -61> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
  3248. -60> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
  3249. -59> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
  3250. -58> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
  3251. -57> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
  3252. -56> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
  3253. -55> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
  3254. -54> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
  3255. -53> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
  3256. -52> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) finish_round
  3257. -51> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).paxos(paxos active c 1..1) finish_round waiting_for_acting
  3258. -50> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active
  3259. -49> 2019-01-29 15:26:59.897 7f39c617f700 7 mon.b@1(leader).paxosservice(monmap 1..1) _active creating new pending
  3260. -48> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 create_pending monmap epoch 2
  3261. -47> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 noting that i was, once, part of an active quorum.
  3262. -46> 2019-01-29 15:26:59.902 7f39c617f700 0 log_channel(cluster) log [DBG] : monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  3263. -45> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client _send_to_mon log to self
  3264. -44> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client log_queue is 6 last_log 6 sent 5 num 6 unsent 1 sending 1
  3265. -43> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client will send 2019-01-29 15:26:59.903091 mon.b (mon.1) 6 : cluster [DBG] monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
  3266. -42> 2019-01-29 15:26:59.902 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 6 at 2019-01-29 15:26:59.903091) v1 -- 0x56393b000900 con 0x563939d41600
  3267. -41> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).monmap v1 apply_mon_features features match current pending: mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  3268. -40> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active
  3269. -39> 2019-01-29 15:26:59.902 7f39c617f700 7 mon.b@1(leader).paxosservice(mdsmap 0..0) _active creating new pending
  3270. -38> 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) is_readable = 1 - now=2019-01-29 15:26:59.903204 lease_expire=2019-01-29 15:27:04.891100 has v0 lc 1
  3271. -37> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_pending e1
  3272. -36> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_initial
  3273. -35> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) propose_pending
  3274. -34> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 encode_pending e1
  3275. -33> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader) e1 log_health updated 0 previous 0
  3276. -32> 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) queue_pending_finisher 0x563939ab8950
  3277. -31> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) trigger_propose active, proposing now
  3278. -30> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) propose_pending 2 2867 bytes
  3279. -29> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) begin for 2 2867 bytes
  3280. -28> 2019-01-29 15:26:59.906 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) sending begin to mon.2
  3281. -27> 2019-01-29 15:26:59.906 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- ?+0 0x56393b00b200
  3282. -26> 2019-01-29 15:26:59.907 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- 0x56393b00b200 con 0x563939c9cd80
  3283. -25> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active
  3284. -24> 2019-01-29 15:26:59.907 7f39c617f700 7 mon.b@1(leader).paxosservice(osdmap 0..0) _active creating new pending
  3285. -23> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_pending e 1
  3286. -22> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
  3287. -21> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting full_ratio = 0.99
  3288. -20> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting nearfull_ratio = 0.99
  3289. -19> 2019-01-29 15:26:59.907 7f39c397a700 10 _calc_signature seq 17 front_crc_ = 3823202930 middle_crc = 0 data_crc = 0 sig = 3998564941522667071
  3290. -18> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_initial for 3b02750c-f104-4301-aa14-258d2b37f104
  3291. -17> 2019-01-29 15:26:59.907 7f39c397a700 20 Putting signature in client message(seq # 17): sig = 3998564941522667071
  3292. -16> 2019-01-29 15:26:59.907 7f39c617f700 20 mon.b@1(leader).osd e0 full crc 3491248425
  3293. -15> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) propose_pending
  3294. -14> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending e 1
  3295. -13> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 do_prune osdmap full prune enabled
  3296. -12> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
  3297. -11> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
  3298. -10> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs
  3299. -9> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pools queued
  3300. -8> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
  3301. -7> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs queue remaining: 0 pools
  3302. -6> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
  3303. -5> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first mimic+ epoch
  3304. -4> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first nautilus+ epoch
  3305. -3> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
  3306. -2> 2019-01-29 15:26:59.908 7f39c617f700 20 mon.b@1(leader).osd e0 full_crc 3491248425 inc_crc 3830871662
  3307. -1> 2019-01-29 15:26:59.911 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 2534419215 middle_crc = 0 data_crc = 0 sig = 4957134661093499494
  3308. 0> 2019-01-29 15:26:59.940 7f39c617f700 -1 *** Caught signal (Segmentation fault) **
  3309. in thread 7f39c617f700 thread_name:ms_dispatch
  3310.  
  3311. ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev)
  3312. 1: (()+0x13dd820) [0x5639382a1820]
  3313. 2: (()+0x12080) [0x7f39d1683080]
  3314. 3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7f39d6284cfb]
  3315. 4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x56393811017b]
  3316. 5: (PaxosService::propose_pending()+0x45a) [0x5639380fcb24]
  3317. 6: (PaxosService::_active()+0x62b) [0x5639380fdba7]
  3318. 7: (()+0x12394e9) [0x5639380fd4e9]
  3319. 8: (Context::complete(int)+0x27) [0x563937e12037]
  3320. 9: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0x2c8) [0x563937e3642c]
  3321. 10: (Paxos::finish_round()+0x2ed) [0x5639380eb4e9]
  3322. 11: (Paxos::handle_last(boost::intrusive_ptr<MonOpRequest>)+0x17ae) [0x5639380e5ac8]
  3323. 12: (Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x392) [0x5639380ef7cc]
  3324. 13: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x1119) [0x563937de61d9]
  3325. 14: (Monitor::_ms_dispatch(Message*)+0xec6) [0x563937de4d9e]
  3326. 15: (Monitor::ms_dispatch(Message*)+0x38) [0x563937e20d04]
  3327. 16: (Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x5c) [0x563937e142c2]
  3328. 17: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0xe9) [0x7f39d5f6f247]
  3329. 18: (DispatchQueue::entry()+0x61c) [0x7f39d5f6dd3c]
  3330. 19: (DispatchQueue::DispatchThread::entry()+0x1c) [0x7f39d60cf7f4]
  3331. 20: (Thread::entry_wrapper()+0x78) [0x7f39d5d4cb4a]
  3332. 21: (Thread::_entry_func(void*)+0x18) [0x7f39d5d4cac8]
  3333. 22: (()+0x7594) [0x7f39d1678594]
  3334. 23: (clone()+0x3f) [0x7f39d041bf4f]
  3335. NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
  3336.  
  3337. --- logging levels ---
  3338. 0/ 5 none
  3339. 0/ 1 lockdep
  3340. 0/ 1 context
  3341. 1/ 1 crush
  3342. 1/ 5 mds
  3343. 1/ 5 mds_balancer
  3344. 1/ 5 mds_locker
  3345. 1/ 5 mds_log
  3346. 1/ 5 mds_log_expire
  3347. 1/ 5 mds_migrator
  3348. 0/ 1 buffer
  3349. 0/ 1 timer
  3350. 0/ 1 filer
  3351. 0/ 1 striper
  3352. 0/ 1 objecter
  3353. 0/ 5 rados
  3354. 0/ 5 rbd
  3355. 0/ 5 rbd_mirror
  3356. 0/ 5 rbd_replay
  3357. 0/ 5 journaler
  3358. 0/ 5 objectcacher
  3359. 0/ 5 client
  3360. 1/ 5 osd
  3361. 0/ 5 optracker
  3362. 0/ 5 objclass
  3363. 1/ 3 filestore
  3364. 1/ 3 journal
  3365. 1/ 1 ms
  3366. 20/20 mon
  3367. 0/10 monc
  3368. 20/20 paxos
  3369. 0/ 5 tp
  3370. 20/20 auth
  3371. 1/ 5 crypto
  3372. 1/ 1 finisher
  3373. 1/ 1 reserver
  3374. 1/ 5 heartbeatmap
  3375. 1/ 5 perfcounter
  3376. 1/ 5 rgw
  3377. 1/ 5 rgw_sync
  3378. 1/10 civetweb
  3379. 1/ 5 javaclient
  3380. 1/ 5 asok
  3381. 1/ 1 throttle
  3382. 0/ 0 refs
  3383. 1/ 5 xio
  3384. 1/ 5 compressor
  3385. 1/ 5 bluestore
  3386. 1/ 5 bluefs
  3387. 1/ 3 bdev
  3388. 1/ 5 kstore
  3389. 4/ 5 rocksdb
  3390. 4/ 5 leveldb
  3391. 4/ 5 memdb
  3392. 1/ 5 kinetic
  3393. 1/ 5 fuse
  3394. 1/ 5 mgr
  3395. 20/20 mgrc
  3396. 1/ 5 dpdk
  3397. 1/ 5 eventtrace
  3398. -2/-2 (syslog threshold)
  3399. -1/-1 (stderr threshold)
  3400. max_recent 10000
  3401. max_new 1000
  3402. log_file /home/rraja/git/ceph/build/out/mon.b.log
  3403. --- end dump of recent events ---