$ gedit ./out/mon.a.log

  1. 2019-01-30 17:07:06.877 7f8fd6d67040 10 public_network
  2. 2019-01-30 17:07:06.877 7f8fd6d67040 10 public_addr
  3. 2019-01-30 17:07:06.895 7f8fd6d67040 1 imported monmap:
  4. epoch 0
  5. fsid 908016f6-62f9-4370-bdfc-86d21cadded4
  6. last_changed 2019-01-30 17:07:06.814321
  7. created 2019-01-30 17:07:06.814321
  8. 0: [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] mon.a
  9.  
  10. 2019-01-30 17:07:06.895 7f8fd6d67040 0 /home/rraja/git/ceph-29-01-2019/build/bin/ceph-mon: set fsid to 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  11. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option compression = kNoCompression
  12. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  13. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option write_buffer_size = 33554432
  14. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option compression = kNoCompression
  15. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  16. 2019-01-30 17:07:06.907 7f8fd6d67040 0 set rocksdb option write_buffer_size = 33554432
  17. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: RocksDB version: 5.17.2
  18.  
  19. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
  20. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Compile date Jan 30 2019
  21. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: DB SUMMARY
  22.  
  23. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: SST files in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db dir, Total Num: 0, files:
  24.  
  25. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db:
  26.  
  27. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.error_if_exists: 0
  28. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.create_if_missing: 1
  29. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.paranoid_checks: 1
  30. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.env: 0x55f7b70dde00
  31. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.info_log: 0x55f7b9754fe0
  32. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_file_opening_threads: 16
  33. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.statistics: (nil)
  34. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.use_fsync: 0
  35. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_log_file_size: 0
  36. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_manifest_file_size: 1073741824
  37. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.log_file_time_to_roll: 0
  38. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.keep_log_file_num: 1000
  39. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.recycle_log_file_num: 0
  40. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.allow_fallocate: 1
  41. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.allow_mmap_reads: 0
  42. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.allow_mmap_writes: 0
  43. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.use_direct_reads: 0
  44. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
  45. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.create_missing_column_families: 0
  46. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.db_log_dir:
  47. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db
  48. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.table_cache_numshardbits: 6
  49. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_subcompactions: 1
  50. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_background_flushes: -1
  51. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.WAL_ttl_seconds: 0
  52. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.WAL_size_limit_MB: 0
  53. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.manifest_preallocation_size: 4194304
  54. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.is_fd_close_on_exec: 1
  55. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.advise_random_on_open: 1
  56. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.db_write_buffer_size: 0
  57. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.write_buffer_manager: 0x55f7b972fdd0
  58. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.access_hint_on_compaction_start: 1
  59. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
  60. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.random_access_max_buffer_size: 1048576
  61. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.use_adaptive_mutex: 0
  62. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.rate_limiter: (nil)
  63. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
  64. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.wal_recovery_mode: 2
  65. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.enable_thread_tracking: 0
  66. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.enable_pipelined_write: 0
  67. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.allow_concurrent_memtable_write: 1
  68. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
  69. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.write_thread_max_yield_usec: 100
  70. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.write_thread_slow_yield_usec: 3
  71. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.row_cache: None
  72. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.wal_filter: None
  73. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.avoid_flush_during_recovery: 0
  74. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.allow_ingest_behind: 0
  75. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.preserve_deletes: 0
  76. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.two_write_queues: 0
  77. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.manual_wal_flush: 0
  78. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_background_jobs: 2
  79. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_background_compactions: -1
  80. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.avoid_flush_during_shutdown: 0
  81. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
  82. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.delayed_write_rate : 16777216
  83. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_total_wal_size: 0
  84. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
  85. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.stats_dump_period_sec: 600
  86. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.max_open_files: -1
  87. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.bytes_per_sync: 0
  88. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.wal_bytes_per_sync: 0
  89. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Options.compaction_readahead_size: 0
  90. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Compression algorithms supported:
  91. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kZSTDNotFinalCompression supported: 0
  92. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kZSTD supported: 0
  93. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kXpressCompression supported: 0
  94. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kLZ4HCCompression supported: 1
  95. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kLZ4Compression supported: 1
  96. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kBZip2Compression supported: 0
  97. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kZlibCompression supported: 1
  98. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: kSnappyCompression supported: 1
  99. 2019-01-30 17:07:06.908 7f8fd6d67040 4 rocksdb: Fast CRC32 supported: Supported on x86
  100. 2019-01-30 17:07:06.909 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:230] Creating manifest 1
  101.  
  102. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
  103.  
  104. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
  105.  
  106. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
  107. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.merge_operator:
  108. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_filter: None
  109. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_filter_factory: None
  110. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.memtable_factory: SkipListFactory
  111. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.table_factory: BlockBasedTable
  112. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f7b93b8750)
  113. cache_index_and_filter_blocks: 1
  114. cache_index_and_filter_blocks_with_high_priority: 1
  115. pin_l0_filter_and_index_blocks_in_cache: 1
  116. pin_top_level_index_and_filter: 1
  117. index_type: 0
  118. hash_index_allow_collision: 1
  119. checksum: 1
  120. no_block_cache: 0
  121. block_cache: 0x55f7ba275140
  122. block_cache_name: BinnedLRUCache
  123. block_cache_options:
  124. capacity : 536870912
  125. num_shard_bits : 4
  126. strict_capacity_limit : 0
  127. high_pri_pool_ratio: 0.000
  128. block_cache_compressed: (nil)
  129. persistent_cache: (nil)
  130. block_size: 4096
  131. block_size_deviation: 10
  132. block_restart_interval: 16
  133. index_block_restart_interval: 1
  134. metadata_block_size: 4096
  135. partition_filters: 0
  136. use_delta_encoding: 1
  137. filter_policy: rocksdb.BuiltinBloomFilter
  138. whole_key_filtering: 1
  139. verify_compression: 0
  140. read_amp_bytes_per_bit: 0
  141. format_version: 2
  142. enable_index_compression: 1
  143. block_align: 0
  144.  
  145. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.write_buffer_size: 33554432
  146. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_write_buffer_number: 2
  147. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression: NoCompression
  148. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression: Disabled
  149. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.prefix_extractor: nullptr
  150. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
  151. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.num_levels: 7
  152. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
  153. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
  154. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
  155. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.level: 32767
  156. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
  157. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
  158. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  159. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bottommost_compression_opts.enabled: false
  160. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.window_bits: -14
  161. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.level: 32767
  162. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.strategy: 0
  163. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
  164. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
  165. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compression_opts.enabled: false
  166. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
  167. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
  168. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.level0_stop_writes_trigger: 36
  169. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.target_file_size_base: 67108864
  170. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.target_file_size_multiplier: 1
  171. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_base: 268435456
  172. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
  173. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
  174. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
  175. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
  176. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
  177. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
  178. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
  179. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
  180. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
  181. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
  182. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_compaction_bytes: 1677721600
  183. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.arena_block_size: 4194304
  184. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
  185. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
  186. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
  187. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.disable_auto_compactions: 0
  188. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
  189. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_pri: kByCompensatedSize
  190. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
  191. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
  192. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
  193. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
  194. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
  195. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
  196. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
  197. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
  198. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.compaction_options_fifo.ttl: 0
  199. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.table_properties_collectors:
  200. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.inplace_update_support: 0
  201. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.inplace_update_num_locks: 10000
  202. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
  203. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.memtable_huge_page_size: 0
  204. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.bloom_locality: 0
  205. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.max_successive_merges: 0
  206. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.optimize_filters_for_hits: 0
  207. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.paranoid_file_checks: 0
  208. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.force_consistency_checks: 0
  209. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.report_bg_io_stats: 0
  210. 2019-01-30 17:07:06.924 7f8fd6d67040 4 rocksdb: Options.ttl: 0
  211. 2019-01-30 17:07:06.926 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
  212.  
  213. 2019-01-30 17:07:06.926 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
  214.  
  215. 2019-01-30 17:07:06.939 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x55f7ba1ef600
  216. 2019-01-30 17:07:06.940 7f8fd6d67040 5 adding auth protocol: cephx
  217. 2019-01-30 17:07:06.940 7f8fd6d67040 5 adding auth protocol: cephx
  218. 2019-01-30 17:07:06.942 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  219. 2019-01-30 17:07:06.942 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  220. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  221. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  222. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  223. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
  224. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  225. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  226. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  227. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  228. 2019-01-30 17:07:06.943 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  229. 2019-01-30 17:07:06.944 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  230. 2019-01-30 17:07:06.944 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  231. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
  232. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  233. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  234. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  235. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  236. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  237. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  238. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  239. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  240. 2019-01-30 17:07:06.945 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  241. 2019-01-30 17:07:06.946 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  242. 2019-01-30 17:07:06.946 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  243. 2019-01-30 17:07:06.946 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  244. 2019-01-30 17:07:06.946 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  245. 2019-01-30 17:07:06.946 7f8fd6d67040 20 mon.a@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
  246. 2019-01-30 17:07:06.948 7f8fd6d67040 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph-29-01-2019/build/keyring
  247. 2019-01-30 17:07:06.948 7f8fd6d67040 10 mon.a@-1(probing) e0 extract_save_mon_key moving mon. key to separate keyring
  248. 2019-01-30 17:07:06.957 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl.cc:365] Shutdown: canceling all background work
  249. 2019-01-30 17:07:06.958 7f8fd6d67040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl.cc:521] Shutdown complete
  250. 2019-01-30 17:07:06.958 7f8fd6d67040 0 /home/rraja/git/ceph-29-01-2019/build/bin/ceph-mon: created monfs at /home/rraja/git/ceph-29-01-2019/build/dev/mon.a for mon.a
  251. 2019-01-30 17:07:07.050 7eff006c5040 0 ceph version 14.0.1-3061-g3e6ff119e2 (3e6ff119e298a9269f7c66d8c1a9b87fab16d987) nautilus (dev), process ceph-mon, pid 703601
  252. 2019-01-30 17:07:07.089 7eff006c5040 0 load: jerasure load: lrc load: isa
  253. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option compression = kNoCompression
  254. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  255. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option write_buffer_size = 33554432
  256. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option compression = kNoCompression
  257. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  258. 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option write_buffer_size = 33554432
  259. 2019-01-30 17:07:07.090 7eff006c5040 1 rocksdb: do_open column families: [default]
  260. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: RocksDB version: 5.17.2
  261.  
  262. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
  263. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Compile date Jan 30 2019
  264. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: DB SUMMARY
  265.  
  266. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: CURRENT file: CURRENT
  267.  
  268. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: IDENTITY file: IDENTITY
  269.  
  270. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes
  271.  
  272. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: SST files in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db dir, Total Num: 0, files:
  273.  
  274. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db: 000003.log size: 895 ;
  275.  
  276. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.error_if_exists: 0
  277. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.create_if_missing: 0
  278. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.paranoid_checks: 1
  279. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.env: 0x555864487e00
  280. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.info_log: 0x55586582fb80
  281. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_file_opening_threads: 16
  282. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.statistics: (nil)
  283. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_fsync: 0
  284. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_log_file_size: 0
  285. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_manifest_file_size: 1073741824
  286. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.log_file_time_to_roll: 0
  287. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.keep_log_file_num: 1000
  288. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.recycle_log_file_num: 0
  289. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_fallocate: 1
  290. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_mmap_reads: 0
  291. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_mmap_writes: 0
  292. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_direct_reads: 0
  293. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
  294. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.create_missing_column_families: 0
  295. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.db_log_dir:
  296. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db
  297. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.table_cache_numshardbits: 6
  298. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_subcompactions: 1
  299. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_flushes: -1
  300. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.WAL_ttl_seconds: 0
  301. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.WAL_size_limit_MB: 0
  302. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.manifest_preallocation_size: 4194304
  303. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.is_fd_close_on_exec: 1
  304. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.advise_random_on_open: 1
  305. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.db_write_buffer_size: 0
  306. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_buffer_manager: 0x555865830360
  307. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.access_hint_on_compaction_start: 1
  308. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
  309. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.random_access_max_buffer_size: 1048576
  310. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_adaptive_mutex: 0
  311. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.rate_limiter: (nil)
  312. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
  313. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_recovery_mode: 2
  314. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_thread_tracking: 0
  315. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_pipelined_write: 0
  316. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_concurrent_memtable_write: 1
  317. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
  318. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_thread_max_yield_usec: 100
  319. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_thread_slow_yield_usec: 3
  320. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.row_cache: None
  321. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_filter: None
  322. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.avoid_flush_during_recovery: 0
  323. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_ingest_behind: 0
  324. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.preserve_deletes: 0
  325. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.two_write_queues: 0
  326. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.manual_wal_flush: 0
  327. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_jobs: 2
  328. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_compactions: -1
  329. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.avoid_flush_during_shutdown: 0
  330. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
  331. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.delayed_write_rate : 16777216
  332. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_total_wal_size: 0
  333. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
  334. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.stats_dump_period_sec: 600
  335. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_open_files: -1
  336. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bytes_per_sync: 0
  337. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_bytes_per_sync: 0
  338. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_readahead_size: 0
  339. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Compression algorithms supported:
  340. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZSTDNotFinalCompression supported: 0
  341. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZSTD supported: 0
  342. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kXpressCompression supported: 0
  343. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kLZ4HCCompression supported: 1
  344. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kLZ4Compression supported: 1
  345. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kBZip2Compression supported: 0
  346. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZlibCompression supported: 1
  347. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kSnappyCompression supported: 1
  348. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Fast CRC32 supported: Supported on x86
  349. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
  350.  
  351. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
  352.  
  353. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
  354. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.merge_operator:
  355. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_filter: None
  356. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_filter_factory: None
  357. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.memtable_factory: SkipListFactory
  358. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.table_factory: BlockBasedTable
  359. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558654926f8)
  360. cache_index_and_filter_blocks: 1
  361. cache_index_and_filter_blocks_with_high_priority: 1
  362. pin_l0_filter_and_index_blocks_in_cache: 1
  363. pin_top_level_index_and_filter: 1
  364. index_type: 0
  365. hash_index_allow_collision: 1
  366. checksum: 1
  367. no_block_cache: 0
  368. block_cache: 0x55586633b2a0
  369. block_cache_name: BinnedLRUCache
  370. block_cache_options:
  371. capacity : 536870912
  372. num_shard_bits : 4
  373. strict_capacity_limit : 0
  374. high_pri_pool_ratio: 0.000
  375. block_cache_compressed: (nil)
  376. persistent_cache: (nil)
  377. block_size: 4096
  378. block_size_deviation: 10
  379. block_restart_interval: 16
  380. index_block_restart_interval: 1
  381. metadata_block_size: 4096
  382. partition_filters: 0
  383. use_delta_encoding: 1
  384. filter_policy: rocksdb.BuiltinBloomFilter
  385. whole_key_filtering: 1
  386. verify_compression: 0
  387. read_amp_bytes_per_bit: 0
  388. format_version: 2
  389. enable_index_compression: 1
  390. block_align: 0
  391.  
  392. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_buffer_size: 33554432
  393. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_write_buffer_number: 2
  394. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression: NoCompression
  395. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression: Disabled
  396. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.prefix_extractor: nullptr
  397. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
  398. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.num_levels: 7
  399. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
  400. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
  401. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
  402. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.level: 32767
  403. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
  404. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
  405. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  406. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.enabled: false
  407. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.window_bits: -14
  408. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.level: 32767
  409. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.strategy: 0
  410. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
  411. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
  412. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.enabled: false
  413. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
  414. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
  415. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_stop_writes_trigger: 36
  416. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.target_file_size_base: 67108864
  417. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.target_file_size_multiplier: 1
  418. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_base: 268435456
  419. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
  420. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
  421. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
  422. 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
  423. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
  424. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
  425. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
  426. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
  427. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
  428. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
  429. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_compaction_bytes: 1677721600
  430. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.arena_block_size: 4194304
  431. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
  432. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
  433. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
  434. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.disable_auto_compactions: 0
  435. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
  436. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_pri: kByCompensatedSize
  437. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
  438. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
  439. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
  440. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
  441. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
  442. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
  443. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
  444. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
  445. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.ttl: 0
  446. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.table_properties_collectors:
  447. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.inplace_update_support: 0
  448. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.inplace_update_num_locks: 10000
  449. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
  450. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.memtable_huge_page_size: 0
  451. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.bloom_locality: 0
  452. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_successive_merges: 0
  453. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.optimize_filters_for_hits: 0
  454. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.paranoid_file_checks: 0
  455. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.force_consistency_checks: 0
  456. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.report_bg_io_stats: 0
  457. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.ttl: 0
  458. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
  459.  
  460. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
  461.  
  462. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227093966, "job": 1, "event": "recovery_started", "log_files": [3]}
  463. 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
  464. 2019-01-30 17:07:07.099 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227100661, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1653, "table_properties": {"data_size": 907, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 784, "raw_average_value_size": 156, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
  465. 2019-01-30 17:07:07.099 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
  466.  
  467. 2019-01-30 17:07:07.107 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227108398, "job": 1, "event": "recovery_finished"}
  468. 2019-01-30 17:07:07.118 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x5558662ca800
  469. 2019-01-30 17:07:07.118 7eff006c5040 10 obtain_monmap
  470. 2019-01-30 17:07:07.118 7eff006c5040 10 obtain_monmap found mkfs monmap
  471. 2019-01-30 17:07:07.118 7eff006c5040 10 main monmap:
  472. {
  473.     "epoch": 0,
  474.     "fsid": "9d1465a9-7817-4f0a-8abf-dbd152810aa8",
  475.     "modified": "2019-01-30 17:07:06.814321",
  476.     "created": "2019-01-30 17:07:06.814321",
  477.     "features": {
  478.         "persistent": [],
  479.         "optional": []
  480.     },
  481.     "mons": [
  482.         {
  483.             "rank": 0,
  484.             "name": "a",
  485.             "public_addrs": {
  486.                 "addrvec": [
  487.                     {
  488.                         "type": "v2",
  489.                         "addr": "127.0.0.1:40576",
  490.                         "nonce": 0
  491.                     },
  492.                     {
  493.                         "type": "v1",
  494.                         "addr": "127.0.0.1:40577",
  495.                         "nonce": 0
  496.                     }
  497.                 ]
  498.             },
  499.             "addr": "127.0.0.1:40577/0",
  500.             "public_addr": "127.0.0.1:40577/0"
  501.         }
  502.     ]
  503. }
  504.  
  505. 2019-01-30 17:07:07.118 7eff006c5040 5 adding auth protocol: cephx
  506. 2019-01-30 17:07:07.118 7eff006c5040 5 adding auth protocol: cephx
  507. 2019-01-30 17:07:07.119 7eff006c5040 0 starting mon.a rank 0 at public addrs [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] at bind addrs [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] mon_data /home/rraja/git/ceph-29-01-2019/build/dev/mon.a fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  508. 2019-01-30 17:07:07.119 7eff006c5040 1 -- [v2:127.0.0.1:0/0,v1:127.0.0.1:0/0] learned_addr learned my addr [v2:127.0.0.1:0/0,v1:127.0.0.1:0/0] (peer_addr_for_me v2:127.0.0.1:40576/0)
  509. 2019-01-30 17:07:07.119 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] _finish_bind bind my_addrs is [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0]
  510. 2019-01-30 17:07:07.119 7eff006c5040 5 adding auth protocol: cephx
  511. 2019-01-30 17:07:07.119 7eff006c5040 5 adding auth protocol: cephx
  512. 2019-01-30 17:07:07.119 7eff006c5040 0 starting mon.a rank 0 at [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] mon_data /home/rraja/git/ceph-29-01-2019/build/dev/mon.a fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  513. 2019-01-30 17:07:07.120 7eff006c5040 5 adding auth protocol: cephx
  514. 2019-01-30 17:07:07.120 7eff006c5040 5 adding auth protocol: cephx
  515. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  516. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  517. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  518. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  519. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  520. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
  521. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  522. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  523. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  524. 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  525. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  526. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  527. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  528. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
  529. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  530. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  531. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  532. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  533. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  534. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  535. 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  536. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  537. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  538. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  539. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  540. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  541. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  542. 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
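
The block of "pre-nautilus cmd" lines above records the monitor rewriting its command descriptions into a pre-nautilus-compatible form: every optional type=CephBool argument (e.g. yes_i_really_mean_it) is re-expressed as a type=CephChoices argument whose only accepted value is the dashed flag spelling (--yes-i-really-mean-it). The sketch below is a minimal Python rendering of that rewrite, inferred only from the before/after pairs printed in the log; the real translation happens inside ceph-mon itself.

    import re

    def pre_nautilus_cmd(desc):
        # Rewrite each optional CephBool argument into the pre-nautilus
        # CephChoices form, mirroring the before/after pairs printed above, e.g.
        #   name=force,type=CephBool,req=false
        #     -> name=force,req=false,strings=--force,type=CephChoices
        def repl(m):
            name = m.group("name")
            return "name={0},req=false,strings=--{1},type=CephChoices".format(
                name, name.replace("_", "-"))
        return re.sub(r"name=(?P<name>\w+),type=CephBool,req=false", repl, desc)

    print(pre_nautilus_cmd(
        "fs rm name=fs_name,type=CephString "
        "name=yes_i_really_mean_it,type=CephBool,req=false"))
    # -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
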
  543. 2019-01-30 17:07:07.123 7eff006c5040 1 mon.a@-1(probing) e0 preinit fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  544. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 check_fsid cluster_uuid contains '9d1465a9-7817-4f0a-8abf-dbd152810aa8'
  545. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
  546. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 calc_quorum_requirements required_features 0
  547. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 required_features 0
  548. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 has_ever_joined = 0
  549. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 sync_last_committed_floor 0
  550. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 init_paxos
  551. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
  552. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxos(paxos recovering c 0..0) init
  553. 2019-01-30 17:07:07.124 7eff006c5040 5 mon.a@-1(probing).mds e0 Unable to load 'last_metadata'
  554. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).health init
  555. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).config init
  556. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 refresh_from_paxos
  557. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
  558. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(mdsmap 0..0) refresh
  559. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(osdmap 0..0) refresh
  560. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(logm 0..0) refresh
  561. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).log v0 update_from_paxos
  562. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).log v0 update_from_paxos version 0 summary v 0
  563. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(monmap 0..0) refresh
  564. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(auth 0..0) refresh
  565. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).auth v0 update_from_paxos
  566. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgr 0..0) refresh
  567. 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).config load_config got 0 keys
  568. 2019-01-30 17:07:07.124 7eff006c5040 20 mon.a@-1(probing).config load_config config map:
  569. {
  570. "global": {},
  571. "by_type": {},
  572. "by_id": {}
  573. }
  574.  
  575. 2019-01-30 17:07:07.128 7eff006c5040 20 mgrc handle_mgr_map mgrmap(e 0) v1
  576. 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc handle_mgr_map Got map version 0
  577. 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc handle_mgr_map Active mgr is now
  578. 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc reconnect No active mgr available yet
  579. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgrstat 0..0) refresh
  580. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).mgrstat 0
  581. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).mgrstat check_subs
  582. 2019-01-30 17:07:07.128 7eff006c5040 20 mon.a@-1(probing).mgrstat update_logger
  583. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(health 0..0) refresh
  584. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).health update_from_paxos
  585. 2019-01-30 17:07:07.128 7eff006c5040 20 mon.a@-1(probing).health dump:{
  586. "quorum_health": {},
  587. "leader_health": {}
  588. }
  589.  
  590. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(config 0..0) refresh
  591. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mdsmap 0..0) post_refresh
  592. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(osdmap 0..0) post_refresh
  593. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(logm 0..0) post_refresh
  594. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(monmap 0..0) post_refresh
  595. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(auth 0..0) post_refresh
  596. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgr 0..0) post_refresh
  597. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgrstat 0..0) post_refresh
  598. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(health 0..0) post_refresh
  599. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(config 0..0) post_refresh
  600. 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
  601. 2019-01-30 17:07:07.128 7eff006c5040 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/keyring
  602. 2019-01-30 17:07:07.128 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] start start
  603. 2019-01-30 17:07:07.128 7eff006c5040 1 -- start start
  604. 2019-01-30 17:07:07.128 7eff006c5040 2 mon.a@-1(probing) e0 init
  605. 2019-01-30 17:07:07.128 7eff006c5040 1 Processor -- start
  606. 2019-01-30 17:07:07.129 7eff006c5040 1 Processor -- start
  607. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 bootstrap
  608. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 sync_reset_requester
  609. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 unregister_cluster_logger - not registered
  610. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 cancel_probe_timeout (none scheduled)
  611. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
  612. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 monmap e0: 1 mons at {a=[v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0]}
  613. 2019-01-30 17:07:07.129 7eff006c5040 0 mon.a@-1(probing) e0 my rank is now 0 (was -1)
  614. 2019-01-30 17:07:07.129 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] shutdown_connections
  615. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 _reset
  616. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 cancel_probe_timeout (none scheduled)
  617. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 timecheck_finish
  618. 2019-01-30 17:07:07.129 7eff006c5040 15 mon.a@0(probing) e0 health_tick_stop
  619. 2019-01-30 17:07:07.129 7eff006c5040 15 mon.a@0(probing) e0 health_interval_stop
  620. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 scrub_event_cancel
  621. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 scrub_reset
  622. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  623. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mdsmap 0..0) restart
  624. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(osdmap 0..0) restart
  625. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(logm 0..0) restart
  626. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(monmap 0..0) restart
  627. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(auth 0..0) restart
  628. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mgr 0..0) restart
  629. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mgrstat 0..0) restart
  630. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(health 0..0) restart
  631. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(config 0..0) restart
  632. 2019-01-30 17:07:07.129 7eff006c5040 1 mon.a@0(probing) e0 win_standalone_election
  633. 2019-01-30 17:07:07.129 7eff006c5040 1 mon.a@0(probing).elector(0) init, first boot, initializing epoch at 1
  634. 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).elector(1) bump_epoch 1 to 2
  635. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 join_election
  636. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 _reset
  637. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 cancel_probe_timeout (none scheduled)
  638. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 timecheck_finish
  639. 2019-01-30 17:07:07.133 7eff006c5040 15 mon.a@0(probing) e0 health_tick_stop
  640. 2019-01-30 17:07:07.133 7eff006c5040 15 mon.a@0(probing) e0 health_interval_stop
  641. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 scrub_event_cancel
  642. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 scrub_reset
  643. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  644. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mdsmap 0..0) restart
  645. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(osdmap 0..0) restart
  646. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(logm 0..0) restart
  647. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(monmap 0..0) restart
  648. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(auth 0..0) restart
  649. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mgr 0..0) restart
  650. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mgrstat 0..0) restart
  651. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(health 0..0) restart
  652. 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(config 0..0) restart
  653. 2019-01-30 17:07:07.133 7eff006c5040 -1 mon.a@0(electing) e0 devname dm-0
  654. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(electing) e0 win_election epoch 2 quorum 0 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  655. 2019-01-30 17:07:07.134 7eff006c5040 0 log_channel(cluster) log [INF] : mon.a is new leader, mons a in quorum (ranks 0)
  656. 2019-01-30 17:07:07.134 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] --> [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] -- log(1 entries from seq 1 at 2019-01-30 17:07:07.135354) v1 -- 0x555866522b40 con 0x555865711a80
  657. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) election_finished
  658. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) _active
  659. 2019-01-30 17:07:07.134 7eff006c5040 7 mon.a@0(leader).paxosservice(monmap 0..0) _active creating new pending
  660. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 create_pending monmap epoch 1
  661. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 create_initial using current monmap
  662. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) propose_pending
  663. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 encode_pending epoch 1
  664. 2019-01-30 17:07:07.134 7efee828e700 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] <== mon.0 v2:127.0.0.1:40576/0 0 ==== log(1 entries from seq 1 at 2019-01-30 17:07:07.135354) v1 ==== 0+0+0 (0 0 0) 0x555866522b40 con 0x555865711a80
  665. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader) e0 prepare_new_fingerprint proposing cluster_fingerprint cbe66b16-9e55-45b9-ad8e-294006ac0aa0
  666. 2019-01-30 17:07:07.134 7eff006c5040 5 mon.a@0(leader).paxos(paxos active c 0..0) queue_pending_finisher 0x55586547e6e0
  667. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos active c 0..0) trigger_propose active, proposing now
  668. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos active c 0..0) propose_pending 1 400 bytes
  669. 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos updating c 0..0) begin for 1 400 bytes
  670. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxos(paxos updating c 0..0) commit_start 1
  671. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) election_finished
  672. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) _active
  673. 2019-01-30 17:07:07.138 7eff006c5040 7 mon.a@0(leader).paxosservice(mdsmap 0..0) _active creating new pending
  674. 2019-01-30 17:07:07.138 7eff006c5040 5 mon.a@0(leader).paxos(paxos writing c 0..0) is_readable = 0 - now=2019-01-30 17:07:07.139412 lease_expire=0.000000 has v0 lc 0
  675. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 create_pending e1
  676. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 create_initial
  677. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) propose_pending
  678. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 encode_pending e1
  679. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader) e0 log_health updated 0 previous 0
  680. 2019-01-30 17:07:07.138 7eff006c5040 5 mon.a@0(leader).paxos(paxos writing c 0..0) queue_pending_finisher 0x55586547e650
  681. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxos(paxos writing c 0..0) trigger_propose not active, will propose later
  682. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) election_finished
  683. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) _active
  684. 2019-01-30 17:07:07.138 7eff006c5040 7 mon.a@0(leader).paxosservice(osdmap 0..0) _active creating new pending
  685. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 create_pending e 1
  686. 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
  687. 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting full_ratio = 0.99
  688. 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting nearfull_ratio = 0.99
  689. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 create_initial for 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  690. 2019-01-30 17:07:07.138 7eff006c5040 20 mon.a@0(leader).osd e0 full crc 2999819672
  691. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) propose_pending
  692. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending e 1
  693. 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 do_prune osdmap full prune enabled
  694. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
  695. 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
  696. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs
  697. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0 pools queued
  698. 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
  699. 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs queue remaining: 0 pools
  700. 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
  701. 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending first mimic+ epoch
  702. 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending first nautilus+ epoch
  703. 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
  704. 2019-01-30 17:07:07.139 7eff006c5040 20 mon.a@0(leader).osd e0 full_crc 2999819672 inc_crc 2902584528
  705. 2019-01-30 17:07:07.152 7eff006c5040 -1 *** Caught signal (Segmentation fault) **
  706. in thread 7eff006c5040 thread_name:ceph-mon
  707.  
  708. ceph version 14.0.1-3061-g3e6ff119e2 (3e6ff119e298a9269f7c66d8c1a9b87fab16d987) nautilus (dev)
  709. 1: (()+0x13cbcc6) [0x555863ae1cc6]
  710. 2: (()+0x12080) [0x7efef30d4080]
  711. 3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7efef7aa77db]
  712. 4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x555863950b7f]
  713. 5: (PaxosService::propose_pending()+0x45a) [0x55586393d528]
  714. 6: (PaxosService::_active()+0x62b) [0x55586393e5ab]
  715. 7: (PaxosService::election_finished()+0x183) [0x55586393dddf]
  716. 8: (Monitor::_finish_svc_election()+0xee) [0x55586360ddf8]
  717. 9: (Monitor::win_election(unsigned int, std::set<int, std::less<int>, std::allocator<int> >&, unsigned long, mon_feature_t const&, std::map<int, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >, std::less<int>, std::allocator<std::pair<int const, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > > > > const&)+0x418) [0x55586360e692]
  718. 10: (Monitor::win_standalone_election()+0x227) [0x55586360dc37]
  719. 11: (Monitor::bootstrap()+0xa1b) [0x5558636021db]
  720. 12: (Monitor::init()+0x1a4) [0x5558635ff258]
  721. 13: (main()+0x688f) [0x5558635d21fd]
  722. 14: (__libc_start_main()+0xeb) [0x7efef21c211b]
  723. 15: (_start()+0x2a) [0x5558635c890a]
  724. NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
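
The frames above are raw offsets into ceph-mon and its loaded libraries; the fault is hit in OSDMap::check_health() (frame 3) while OSDMonitor::encode_pending() runs during the standalone election (frames 4-10). As the NOTE says, the matching unstripped binary is needed to turn those offsets into source lines. One way to do that for frames inside the main executable is a small addr2line wrapper, sketched below on the assumption that it is run from the build directory so ./bin/ceph-mon is the binary that produced this log.

    import subprocess

    def resolve(offset, binary="./bin/ceph-mon"):
        # Map an in-binary offset from the trace (e.g. frame 1 above,
        # "(()+0x13cbcc6)") to a demangled function name and source line.
        out = subprocess.run(["addr2line", "-C", "-f", "-e", binary, offset],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(resolve("0x13cbcc6"))

Frames that land in shared libraries (like frame 3) would additionally need the library file and the offset relative to its load base, which is where the objdump -rdS output or a core file comes in.
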
  725.  
  726. --- begin dump of recent events ---
  727. -466> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command assert hook 0x55586547e430
  728. -465> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command abort hook 0x55586547e430
  729. -464> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perfcounters_dump hook 0x55586547e430
  730. -463> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command 1 hook 0x55586547e430
  731. -462> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perf dump hook 0x55586547e430
  732. -461> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perfcounters_schema hook 0x55586547e430
  733. -460> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perf histogram dump hook 0x55586547e430
  734. -459> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command 2 hook 0x55586547e430
  735. -458> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perf schema hook 0x55586547e430
  736. -457> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perf histogram schema hook 0x55586547e430
  737. -456> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command perf reset hook 0x55586547e430
  738. -455> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config show hook 0x55586547e430
  739. -454> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config help hook 0x55586547e430
  740. -453> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config set hook 0x55586547e430
  741. -452> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config unset hook 0x55586547e430
  742. -451> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config get hook 0x55586547e430
  743. -450> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config diff hook 0x55586547e430
  744. -449> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command config diff get hook 0x55586547e430
  745. -448> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command log flush hook 0x55586547e430
  746. -447> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command log dump hook 0x55586547e430
  747. -446> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command log reopen hook 0x55586547e430
  748. -445> 2019-01-30 17:07:07.035 7eff006c5040 5 asok(0x5558657f0000) register_command dump_mempools hook 0x5558662da068
  749. -444> 2019-01-30 17:07:07.049 7eff006c5040 1 lockdep start
  750. -443> 2019-01-30 17:07:07.049 7eff006c5040 1 lockdep using id 0
  751. -442> 2019-01-30 17:07:07.049 7eff006c5040 1 lockdep using id 1
  752. -441> 2019-01-30 17:07:07.050 7eff006c5040 1 lockdep using id 2
  753. -440> 2019-01-30 17:07:07.050 7eff006c5040 1 lockdep using id 3
  754. -439> 2019-01-30 17:07:07.050 7eff006c5040 0 ceph version 14.0.1-3061-g3e6ff119e2 (3e6ff119e298a9269f7c66d8c1a9b87fab16d987) nautilus (dev), process ceph-mon, pid 703601
  755. -438> 2019-01-30 17:07:07.069 7eff006c5040 1 lockdep using id 4
  756. -437> 2019-01-30 17:07:07.070 7eff006c5040 1 lockdep using id 5
  757. -436> 2019-01-30 17:07:07.070 7eff006c5040 1 lockdep using id 6
  758. -435> 2019-01-30 17:07:07.070 7eff006c5040 1 lockdep using id 7
  759. -434> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) init /tmp/ceph-asok.dptYdu/mon.a.asok
  760. -433> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) bind_and_listen /tmp/ceph-asok.dptYdu/mon.a.asok
  761. -432> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) register_command 0 hook 0x555865492718
  762. -431> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) register_command version hook 0x555865492718
  763. -430> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) register_command git_version hook 0x555865492718
  764. -429> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) register_command help hook 0x55586547e2a0
  765. -428> 2019-01-30 17:07:07.070 7eff006c5040 5 asok(0x5558657f0000) register_command get_command_descriptions hook 0x55586547e270
  766. -427> 2019-01-30 17:07:07.070 7efeeeb57700 5 asok(0x5558657f0000) entry start
  767. -426> 2019-01-30 17:07:07.071 7eff006c5040 1 lockdep using id 8
  768. -425> 2019-01-30 17:07:07.089 7eff006c5040 1 lockdep using id 9
  769. -424> 2019-01-30 17:07:07.089 7eff006c5040 0 load: jerasure load: lrc load: isa
  770. -423> 2019-01-30 17:07:07.089 7eff006c5040 1 lockdep using id 10
  771. -422> 2019-01-30 17:07:07.089 7eff006c5040 1 lockdep using id 11
  772. -421> 2019-01-30 17:07:07.090 7eff006c5040 1 lockdep using id 12
  773. -420> 2019-01-30 17:07:07.090 7eff006c5040 1 lockdep using id 13
  774. -419> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option compression = kNoCompression
  775. -418> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  776. -417> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option write_buffer_size = 33554432
  777. -416> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option compression = kNoCompression
  778. -415> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option level_compaction_dynamic_level_bytes = true
  779. -414> 2019-01-30 17:07:07.090 7eff006c5040 0 set rocksdb option write_buffer_size = 33554432
  780. -413> 2019-01-30 17:07:07.090 7eff006c5040 1 rocksdb: do_open column families: [default]
  781. -412> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: RocksDB version: 5.17.2
  782.  
  783. -411> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
  784. -410> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Compile date Jan 30 2019
  785. -409> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: DB SUMMARY
  786.  
  787. -408> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: CURRENT file: CURRENT
  788.  
  789. -407> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: IDENTITY file: IDENTITY
  790.  
  791. -406> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes
  792.  
  793. -405> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: SST files in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db dir, Total Num: 0, files:
  794.  
  795. -404> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db: 000003.log size: 895 ;
  796.  
  797. -403> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.error_if_exists: 0
  798. -402> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.create_if_missing: 0
  799. -401> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.paranoid_checks: 1
  800. -400> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.env: 0x555864487e00
  801. -399> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.info_log: 0x55586582fb80
  802. -398> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_file_opening_threads: 16
  803. -397> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.statistics: (nil)
  804. -396> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_fsync: 0
  805. -395> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_log_file_size: 0
  806. -394> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_manifest_file_size: 1073741824
  807. -393> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.log_file_time_to_roll: 0
  808. -392> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.keep_log_file_num: 1000
  809. -391> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.recycle_log_file_num: 0
  810. -390> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_fallocate: 1
  811. -389> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_mmap_reads: 0
  812. -388> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_mmap_writes: 0
  813. -387> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_direct_reads: 0
  814. -386> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
  815. -385> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.create_missing_column_families: 0
  816. -384> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.db_log_dir:
  817. -383> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db
  818. -382> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.table_cache_numshardbits: 6
  819. -381> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_subcompactions: 1
  820. -380> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_flushes: -1
  821. -379> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.WAL_ttl_seconds: 0
  822. -378> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.WAL_size_limit_MB: 0
  823. -377> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.manifest_preallocation_size: 4194304
  824. -376> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.is_fd_close_on_exec: 1
  825. -375> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.advise_random_on_open: 1
  826. -374> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.db_write_buffer_size: 0
  827. -373> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_buffer_manager: 0x555865830360
  828. -372> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.access_hint_on_compaction_start: 1
  829. -371> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
  830. -370> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.random_access_max_buffer_size: 1048576
  831. -369> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.use_adaptive_mutex: 0
  832. -368> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.rate_limiter: (nil)
  833. -367> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
  834. -366> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_recovery_mode: 2
  835. -365> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_thread_tracking: 0
  836. -364> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_pipelined_write: 0
  837. -363> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_concurrent_memtable_write: 1
  838. -362> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
  839. -361> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_thread_max_yield_usec: 100
  840. -360> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_thread_slow_yield_usec: 3
  841. -359> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.row_cache: None
  842. -358> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_filter: None
  843. -357> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.avoid_flush_during_recovery: 0
  844. -356> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.allow_ingest_behind: 0
  845. -355> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.preserve_deletes: 0
  846. -354> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.two_write_queues: 0
  847. -353> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.manual_wal_flush: 0
  848. -352> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_jobs: 2
  849. -351> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_background_compactions: -1
  850. -350> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.avoid_flush_during_shutdown: 0
  851. -349> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
  852. -348> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.delayed_write_rate : 16777216
  853. -347> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_total_wal_size: 0
  854. -346> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
  855. -345> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.stats_dump_period_sec: 600
  856. -344> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_open_files: -1
  857. -343> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bytes_per_sync: 0
  858. -342> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.wal_bytes_per_sync: 0
  859. -341> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_readahead_size: 0
  860. -340> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Compression algorithms supported:
  861. -339> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZSTDNotFinalCompression supported: 0
  862. -338> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZSTD supported: 0
  863. -337> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kXpressCompression supported: 0
  864. -336> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kLZ4HCCompression supported: 1
  865. -335> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kLZ4Compression supported: 1
  866. -334> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kBZip2Compression supported: 0
  867. -333> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kZlibCompression supported: 1
  868. -332> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: kSnappyCompression supported: 1
  869. -331> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Fast CRC32 supported: Supported on x86
  870. -330> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
  871.  
  872. -329> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
  873.  
  874. -328> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
  875. -327> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.merge_operator:
  876. -326> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_filter: None
  877. -325> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compaction_filter_factory: None
  878. -324> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.memtable_factory: SkipListFactory
  879. -323> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.table_factory: BlockBasedTable
  880. -322> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558654926f8)
  881. cache_index_and_filter_blocks: 1
  882. cache_index_and_filter_blocks_with_high_priority: 1
  883. pin_l0_filter_and_index_blocks_in_cache: 1
  884. pin_top_level_index_and_filter: 1
  885. index_type: 0
  886. hash_index_allow_collision: 1
  887. checksum: 1
  888. no_block_cache: 0
  889. block_cache: 0x55586633b2a0
  890. block_cache_name: BinnedLRUCache
  891. block_cache_options:
  892. capacity : 536870912
  893. num_shard_bits : 4
  894. strict_capacity_limit : 0
  895. high_pri_pool_ratio: 0.000
  896. block_cache_compressed: (nil)
  897. persistent_cache: (nil)
  898. block_size: 4096
  899. block_size_deviation: 10
  900. block_restart_interval: 16
  901. index_block_restart_interval: 1
  902. metadata_block_size: 4096
  903. partition_filters: 0
  904. use_delta_encoding: 1
  905. filter_policy: rocksdb.BuiltinBloomFilter
  906. whole_key_filtering: 1
  907. verify_compression: 0
  908. read_amp_bytes_per_bit: 0
  909. format_version: 2
  910. enable_index_compression: 1
  911. block_align: 0
  912.  
  913. -321> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.write_buffer_size: 33554432
  914. -320> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_write_buffer_number: 2
  915. -319> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression: NoCompression
  916. -318> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression: Disabled
  917. -317> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.prefix_extractor: nullptr
  918. -316> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
  919. -315> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.num_levels: 7
  920. -314> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
  921. -313> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
  922. -312> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
  923. -311> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.level: 32767
  924. -310> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
  925. -309> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
  926. -308> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
  927. -307> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.bottommost_compression_opts.enabled: false
  928. -306> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.window_bits: -14
  929. -305> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.level: 32767
  930. -304> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.strategy: 0
  931. -303> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
  932. -302> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
  933. -301> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.compression_opts.enabled: false
  934. -300> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
  935. -299> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
  936. -298> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level0_stop_writes_trigger: 36
  937. -297> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.target_file_size_base: 67108864
  938. -296> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.target_file_size_multiplier: 1
  939. -295> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_base: 268435456
  940. -294> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
  941. -293> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
  942. -292> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
  943. -291> 2019-01-30 17:07:07.091 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
  944. -290> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
  945. -289> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
  946. -288> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
  947. -287> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
  948. -286> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
  949. -285> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
  950. -284> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_compaction_bytes: 1677721600
  951. -283> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.arena_block_size: 4194304
  952. -282> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
  953. -281> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
  954. -280> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
  955. -279> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.disable_auto_compactions: 0
  956. -278> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
  957. -277> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_pri: kByCompensatedSize
  958. -276> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
  959. -275> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
  960. -274> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
  961. -273> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
  962. -272> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
  963. -271> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
  964. -270> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
  965. -269> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
  966. -268> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.compaction_options_fifo.ttl: 0
  967. -267> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.table_properties_collectors:
  968. -266> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.inplace_update_support: 0
  969. -265> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.inplace_update_num_locks: 10000
  970. -264> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
  971. -263> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.memtable_huge_page_size: 0
  972. -262> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.bloom_locality: 0
  973. -261> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.max_successive_merges: 0
  974. -260> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.optimize_filters_for_hits: 0
  975. -259> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.paranoid_file_checks: 0
  976. -258> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.force_consistency_checks: 0
  977. -257> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.report_bg_io_stats: 0
  978. -256> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: Options.ttl: 0
  979. -255> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph-29-01-2019/build/dev/mon.a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
  980.  
  981. -254> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
  982.  
  983. -253> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227093966, "job": 1, "event": "recovery_started", "log_files": [3]}
  984. -252> 2019-01-30 17:07:07.092 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
  985. -251> 2019-01-30 17:07:07.099 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227100661, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1653, "table_properties": {"data_size": 907, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 784, "raw_average_value_size": 156, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
  986. -250> 2019-01-30 17:07:07.099 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
  987.  
  988. -249> 2019-01-30 17:07:07.107 7eff006c5040 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548848227108398, "job": 1, "event": "recovery_finished"}
  989. -248> 2019-01-30 17:07:07.118 7eff006c5040 4 rocksdb: [/home/rraja/git/ceph-29-01-2019/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x5558662ca800
  990. -247> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 14
  991. -246> 2019-01-30 17:07:07.118 7eff006c5040 10 obtain_monmap
  992. -245> 2019-01-30 17:07:07.118 7eff006c5040 10 obtain_monmap found mkfs monmap
  993. -244> 2019-01-30 17:07:07.118 7eff006c5040 10 main monmap:
  994. {
  995.     "epoch": 0,
  996.     "fsid": "9d1465a9-7817-4f0a-8abf-dbd152810aa8",
  997.     "modified": "2019-01-30 17:07:06.814321",
  998.     "created": "2019-01-30 17:07:06.814321",
  999.     "features": {
  1000.         "persistent": [],
  1001.         "optional": []
  1002.     },
  1003.     "mons": [
  1004.         {
  1005.             "rank": 0,
  1006.             "name": "a",
  1007.             "public_addrs": {
  1008.                 "addrvec": [
  1009.                     {
  1010.                         "type": "v2",
  1011.                         "addr": "127.0.0.1:40576",
  1012.                         "nonce": 0
  1013.                     },
  1014.                     {
  1015.                         "type": "v1",
  1016.                         "addr": "127.0.0.1:40577",
  1017.                         "nonce": 0
  1018.                     }
  1019.                 ]
  1020.             },
  1021.             "addr": "127.0.0.1:40577/0",
  1022.             "public_addr": "127.0.0.1:40577/0"
  1023.         }
  1024.     ]
  1025. }
  1026.  
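
The monmap dump above is plain JSON: epoch 0, the cluster fsid, and a single mon "a" whose public_addrs addrvec advertises both a msgr v2 and a legacy v1 endpoint. A small sketch of pulling those endpoints back out of such a dump, assuming it has been saved to a hypothetical monmap.json:

    import json

    # monmap.json is a hypothetical copy of the JSON dump above.
    with open("monmap.json") as f:
        monmap = json.load(f)

    for mon in monmap["mons"]:
        addrs = ["{0}:{1}".format(a["type"], a["addr"])
                 for a in mon["public_addrs"]["addrvec"]]
        print("mon.{0} rank {1}: {2}".format(
            mon["name"], mon["rank"], ", ".join(addrs)))
    # mon.a rank 0: v2:127.0.0.1:40576, v1:127.0.0.1:40577
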
  1027. -243> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 15
  1028. -242> 2019-01-30 17:07:07.118 7eff006c5040 5 adding auth protocol: cephx
  1029. -241> 2019-01-30 17:07:07.118 7eff006c5040 5 adding auth protocol: cephx
  1030. -240> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 16
  1031. -239> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 17
  1032. -238> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 18
  1033. -237> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 19
  1034. -236> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 20
  1035. -235> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 21
  1036. -234> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 22
  1037. -233> 2019-01-30 17:07:07.118 7eff006c5040 1 lockdep using id 23
  1038. -232> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 24
  1039. -231> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 25
  1040. -230> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 26
  1041. -229> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 27
  1042. -228> 2019-01-30 17:07:07.119 7eff006c5040 0 starting mon.a rank 0 at public addrs [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] at bind addrs [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] mon_data /home/rraja/git/ceph-29-01-2019/build/dev/mon.a fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  1043. -227> 2019-01-30 17:07:07.119 7eff006c5040 1 -- [v2:127.0.0.1:0/0,v1:127.0.0.1:0/0] learned_addr learned my addr [v2:127.0.0.1:0/0,v1:127.0.0.1:0/0] (peer_addr_for_me v2:127.0.0.1:40576/0)
  1044. -226> 2019-01-30 17:07:07.119 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] _finish_bind bind my_addrs is [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0]
  1045. -225> 2019-01-30 17:07:07.119 7eff006c5040 5 adding auth protocol: cephx
  1046. -224> 2019-01-30 17:07:07.119 7eff006c5040 5 adding auth protocol: cephx
  1047. -223> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 28
  1048. -222> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 29
  1049. -221> 2019-01-30 17:07:07.119 7eff006c5040 1 lockdep using id 30
  1050. -220> 2019-01-30 17:07:07.119 7eff006c5040 0 starting mon.a rank 0 at [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] mon_data /home/rraja/git/ceph-29-01-2019/build/dev/mon.a fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  1051. -219> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 31
  1052. -218> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 32
  1053. -217> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 33
  1054. -216> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 34
  1055. -215> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 35
  1056. -214> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 36
  1057. -213> 2019-01-30 17:07:07.120 7eff006c5040 5 adding auth protocol: cephx
  1058. -212> 2019-01-30 17:07:07.120 7eff006c5040 5 adding auth protocol: cephx
  1059. -211> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 37
  1060. -210> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 38
  1061. -209> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 39
  1062. -208> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 40
  1063. -207> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 41
  1064. -206> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 42
  1065. -205> 2019-01-30 17:07:07.120 7eff006c5040 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
  1066. -204> 2019-01-30 17:07:07.120 7eff006c5040 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
  1067. -203> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 43
  1068. -202> 2019-01-30 17:07:07.120 7eff006c5040 1 lockdep using id 44
  1069. -201> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  1070. -200> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
  1071. -199> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1072. -198> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1073. -197> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1074. -196> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
  1075. -195> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1076. -194> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1077. -193> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1078. -192> 2019-01-30 17:07:07.121 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1079. -191> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1080. -190> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1081. -189> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1082. -188> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
  1083. -187> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1084. -186> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1085. -185> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1086. -184> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1087. -183> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1088. -182> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1089. -181> 2019-01-30 17:07:07.122 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1090. -180> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  1091. -179> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
  1092. -178> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1093. -177> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1094. -176> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1095. -175> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
  1096. -174> 2019-01-30 17:07:07.123 7eff006c5040 20 mon.a@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
  1097. -173> 2019-01-30 17:07:07.123 7eff006c5040 1 mon.a@-1(probing) e0 preinit fsid 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  1098. -172> 2019-01-30 17:07:07.123 7eff006c5040 1 lockdep using id 45
  1099. -171> 2019-01-30 17:07:07.123 7eff006c5040 1 lockdep using id 46
  1100. -170> 2019-01-30 17:07:07.124 7eff006c5040 1 lockdep using id 47
  1101. -169> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 check_fsid cluster_uuid contains '9d1465a9-7817-4f0a-8abf-dbd152810aa8'
  1102. -168> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
  1103. -167> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 calc_quorum_requirements required_features 0
  1104. -166> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 required_features 0
  1105. -165> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 has_ever_joined = 0
  1106. -164> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 sync_last_committed_floor 0
  1107. -163> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 init_paxos
  1108. -162> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
  1109. -161> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxos(paxos recovering c 0..0) init
  1110. -160> 2019-01-30 17:07:07.124 7eff006c5040 5 mon.a@-1(probing).mds e0 Unable to load 'last_metadata'
  1111. -159> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).health init
  1112. -158> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).config init
  1113. -157> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 refresh_from_paxos
  1114. -156> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
  1115. -155> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(mdsmap 0..0) refresh
  1116. -154> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(osdmap 0..0) refresh
  1117. -153> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(logm 0..0) refresh
  1118. -152> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).log v0 update_from_paxos
  1119. -151> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).log v0 update_from_paxos version 0 summary v 0
  1120. -150> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(monmap 0..0) refresh
  1121. -149> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(auth 0..0) refresh
  1122. -148> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).auth v0 update_from_paxos
  1123. -147> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgr 0..0) refresh
  1124. -146> 2019-01-30 17:07:07.124 7eff006c5040 10 mon.a@-1(probing).config load_config got 0 keys
  1125. -145> 2019-01-30 17:07:07.124 7eff006c5040 20 mon.a@-1(probing).config load_config config map:
  1126. {
  1127. "global": {},
  1128. "by_type": {},
  1129. "by_id": {}
  1130. }
  1131.  
  1132. -144> 2019-01-30 17:07:07.124 7eff006c5040 4 set_mon_vals no callback set
  1133. -143> 2019-01-30 17:07:07.128 7eff006c5040 20 mgrc handle_mgr_map mgrmap(e 0) v1
  1134. -142> 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc handle_mgr_map Got map version 0
  1135. -141> 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc handle_mgr_map Active mgr is now
  1136. -140> 2019-01-30 17:07:07.128 7eff006c5040 4 mgrc reconnect No active mgr available yet
  1137. -139> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgrstat 0..0) refresh
  1138. -138> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).mgrstat 0
  1139. -137> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).mgrstat check_subs
  1140. -136> 2019-01-30 17:07:07.128 7eff006c5040 20 mon.a@-1(probing).mgrstat update_logger
  1141. -135> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(health 0..0) refresh
  1142. -134> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).health update_from_paxos
  1143. -133> 2019-01-30 17:07:07.128 7eff006c5040 20 mon.a@-1(probing).health dump:{
  1144. "quorum_health": {},
  1145. "leader_health": {}
  1146. }
  1147.  
  1148. -132> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(config 0..0) refresh
  1149. -131> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mdsmap 0..0) post_refresh
  1150. -130> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(osdmap 0..0) post_refresh
  1151. -129> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(logm 0..0) post_refresh
  1152. -128> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(monmap 0..0) post_refresh
  1153. -127> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(auth 0..0) post_refresh
  1154. -126> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgr 0..0) post_refresh
  1155. -125> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(mgrstat 0..0) post_refresh
  1156. -124> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(health 0..0) post_refresh
  1157. -123> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing).paxosservice(config 0..0) post_refresh
  1158. -122> 2019-01-30 17:07:07.128 7eff006c5040 10 mon.a@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
  1159. -121> 2019-01-30 17:07:07.128 7eff006c5040 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph-29-01-2019/build/dev/mon.a/keyring
  1160. -120> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command mon_status hook 0x55586547e640
  1161. -119> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command quorum_status hook 0x55586547e640
  1162. -118> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command sync_force hook 0x55586547e640
  1163. -117> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command add_bootstrap_peer_hint hook 0x55586547e640
  1164. -116> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command add_bootstrap_peer_hintv hook 0x55586547e640
  1165. -115> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command quorum enter hook 0x55586547e640
  1166. -114> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command quorum exit hook 0x55586547e640
  1167. -113> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command ops hook 0x55586547e640
  1168. -112> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command sessions hook 0x55586547e640
  1169. -111> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command dump_historic_ops hook 0x55586547e640
  1170. -110> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command dump_historic_ops_by_duration hook 0x55586547e640
  1171. -109> 2019-01-30 17:07:07.128 7eff006c5040 5 asok(0x5558657f0000) register_command dump_historic_slow_ops hook 0x55586547e640
  1172. -108> 2019-01-30 17:07:07.128 7eff006c5040 1 finished global_init_daemonize
  1173. -107> 2019-01-30 17:07:07.128 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] start start
  1174. -106> 2019-01-30 17:07:07.128 7eff006c5040 1 -- start start
  1175. -105> 2019-01-30 17:07:07.128 7eff006c5040 2 mon.a@-1(probing) e0 init
  1176. -104> 2019-01-30 17:07:07.128 7eff006c5040 1 Processor -- start
  1177. -103> 2019-01-30 17:07:07.128 7efeea292700 1 lockdep using id 48
  1178. -102> 2019-01-30 17:07:07.129 7eff006c5040 1 Processor -- start
  1179. -101> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 bootstrap
  1180. -100> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 sync_reset_requester
  1181. -99> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 unregister_cluster_logger - not registered
  1182. -98> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 cancel_probe_timeout (none scheduled)
  1183. -97> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
  1184. -96> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@-1(probing) e0 monmap e0: 1 mons at {a=[v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0]}
  1185. -95> 2019-01-30 17:07:07.129 7eff006c5040 0 mon.a@-1(probing) e0 my rank is now 0 (was -1)
  1186. -94> 2019-01-30 17:07:07.129 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] shutdown_connections
  1187. -93> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 _reset
  1188. -92> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 cancel_probe_timeout (none scheduled)
  1189. -91> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 timecheck_finish
  1190. -90> 2019-01-30 17:07:07.129 7eff006c5040 15 mon.a@0(probing) e0 health_tick_stop
  1191. -89> 2019-01-30 17:07:07.129 7eff006c5040 15 mon.a@0(probing) e0 health_interval_stop
  1192. -88> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 scrub_event_cancel
  1193. -87> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing) e0 scrub_reset
  1194. -86> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  1195. -85> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mdsmap 0..0) restart
  1196. -84> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(osdmap 0..0) restart
  1197. -83> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(logm 0..0) restart
  1198. -82> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(monmap 0..0) restart
  1199. -81> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(auth 0..0) restart
  1200. -80> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mgr 0..0) restart
  1201. -79> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(mgrstat 0..0) restart
  1202. -78> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(health 0..0) restart
  1203. -77> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).paxosservice(config 0..0) restart
  1204. -76> 2019-01-30 17:07:07.129 7eff006c5040 1 mon.a@0(probing) e0 win_standalone_election
  1205. -75> 2019-01-30 17:07:07.129 7eff006c5040 1 mon.a@0(probing).elector(0) init, first boot, initializing epoch at 1
  1206. -74> 2019-01-30 17:07:07.129 7eff006c5040 10 mon.a@0(probing).elector(1) bump_epoch 1 to 2
  1207. -73> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 join_election
  1208. -72> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 _reset
  1209. -71> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 cancel_probe_timeout (none scheduled)
  1210. -70> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 timecheck_finish
  1211. -69> 2019-01-30 17:07:07.133 7eff006c5040 15 mon.a@0(probing) e0 health_tick_stop
  1212. -68> 2019-01-30 17:07:07.133 7eff006c5040 15 mon.a@0(probing) e0 health_interval_stop
  1213. -67> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 scrub_event_cancel
  1214. -66> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing) e0 scrub_reset
  1215. -65> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
  1216. -64> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mdsmap 0..0) restart
  1217. -63> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(osdmap 0..0) restart
  1218. -62> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(logm 0..0) restart
  1219. -61> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(monmap 0..0) restart
  1220. -60> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(auth 0..0) restart
  1221. -59> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mgr 0..0) restart
  1222. -58> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(mgrstat 0..0) restart
  1223. -57> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(health 0..0) restart
  1224. -56> 2019-01-30 17:07:07.133 7eff006c5040 10 mon.a@0(probing).paxosservice(config 0..0) restart
  1225. -55> 2019-01-30 17:07:07.133 7eff006c5040 -1 mon.a@0(electing) e0 devname dm-0
  1226. -54> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(electing) e0 win_election epoch 2 quorum 0 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
  1227. -53> 2019-01-30 17:07:07.134 7eff006c5040 0 log_channel(cluster) log [INF] : mon.a is new leader, mons a in quorum (ranks 0)
  1228. -52> 2019-01-30 17:07:07.134 7eff006c5040 10 log_client _send_to_mon log to self
  1229. -51> 2019-01-30 17:07:07.134 7eff006c5040 10 log_client log_queue is 1 last_log 1 sent 0 num 1 unsent 1 sending 1
  1230. -50> 2019-01-30 17:07:07.134 7eff006c5040 10 log_client will send 2019-01-30 17:07:07.135354 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
  1231. -49> 2019-01-30 17:07:07.134 7eff006c5040 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] --> [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] -- log(1 entries from seq 1 at 2019-01-30 17:07:07.135354) v1 -- 0x555866522b40 con 0x555865711a80
  1232. -48> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) election_finished
  1233. -47> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) _active
  1234. -46> 2019-01-30 17:07:07.134 7eff006c5040 7 mon.a@0(leader).paxosservice(monmap 0..0) _active creating new pending
  1235. -45> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 create_pending monmap epoch 1
  1236. -44> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 create_initial using current monmap
  1237. -43> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxosservice(monmap 0..0) propose_pending
  1238. -42> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).monmap v0 encode_pending epoch 1
  1239. -41> 2019-01-30 17:07:07.134 7efee828e700 1 -- [v2:127.0.0.1:40576/0,v1:127.0.0.1:40577/0] <== mon.0 v2:127.0.0.1:40576/0 0 ==== log(1 entries from seq 1 at 2019-01-30 17:07:07.135354) v1 ==== 0+0+0 (0 0 0) 0x555866522b40 con 0x555865711a80
  1240. -40> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader) e0 prepare_new_fingerprint proposing cluster_fingerprint cbe66b16-9e55-45b9-ad8e-294006ac0aa0
  1241. -39> 2019-01-30 17:07:07.134 7eff006c5040 5 mon.a@0(leader).paxos(paxos active c 0..0) queue_pending_finisher 0x55586547e6e0
  1242. -38> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos active c 0..0) trigger_propose active, proposing now
  1243. -37> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos active c 0..0) propose_pending 1 400 bytes
  1244. -36> 2019-01-30 17:07:07.134 7eff006c5040 10 mon.a@0(leader).paxos(paxos updating c 0..0) begin for 1 400 bytes
  1245. -35> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxos(paxos updating c 0..0) commit_start 1
  1246. -34> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) election_finished
  1247. -33> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) _active
  1248. -32> 2019-01-30 17:07:07.138 7eff006c5040 7 mon.a@0(leader).paxosservice(mdsmap 0..0) _active creating new pending
  1249. -31> 2019-01-30 17:07:07.138 7eff006c5040 5 mon.a@0(leader).paxos(paxos writing c 0..0) is_readable = 0 - now=2019-01-30 17:07:07.139412 lease_expire=0.000000 has v0 lc 0
  1250. -30> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 create_pending e1
  1251. -29> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 create_initial
  1252. -28> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(mdsmap 0..0) propose_pending
  1253. -27> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).mds e0 encode_pending e1
  1254. -26> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader) e0 log_health updated 0 previous 0
  1255. -25> 2019-01-30 17:07:07.138 7eff006c5040 5 mon.a@0(leader).paxos(paxos writing c 0..0) queue_pending_finisher 0x55586547e650
  1256. -24> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxos(paxos writing c 0..0) trigger_propose not active, will propose later
  1257. -23> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) election_finished
  1258. -22> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) _active
  1259. -21> 2019-01-30 17:07:07.138 7eff006c5040 7 mon.a@0(leader).paxosservice(osdmap 0..0) _active creating new pending
  1260. -20> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 create_pending e 1
  1261. -19> 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
  1262. -18> 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting full_ratio = 0.99
  1263. -17> 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 create_pending setting nearfull_ratio = 0.99
  1264. -16> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 create_initial for 9d1465a9-7817-4f0a-8abf-dbd152810aa8
  1265. -15> 2019-01-30 17:07:07.138 7eff006c5040 20 mon.a@0(leader).osd e0 full crc 2999819672
  1266. -14> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).paxosservice(osdmap 0..0) propose_pending
  1267. -13> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending e 1
  1268. -12> 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 do_prune osdmap full prune enabled
  1269. -11> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
  1270. -10> 2019-01-30 17:07:07.138 7eff006c5040 1 mon.a@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
  1271. -9> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs
  1272. -8> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0 pools queued
  1273. -7> 2019-01-30 17:07:07.138 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
  1274. -6> 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs queue remaining: 0 pools
  1275. -5> 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
  1276. -4> 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending first mimic+ epoch
  1277. -3> 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending first nautilus+ epoch
  1278. -2> 2019-01-30 17:07:07.139 7eff006c5040 10 mon.a@0(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
  1279. -1> 2019-01-30 17:07:07.139 7eff006c5040 20 mon.a@0(leader).osd e0 full_crc 2999819672 inc_crc 2902584528
  1280. 0> 2019-01-30 17:07:07.152 7eff006c5040 -1 *** Caught signal (Segmentation fault) **
  1281. in thread 7eff006c5040 thread_name:ceph-mon
  1282.  
  1283. ceph version 14.0.1-3061-g3e6ff119e2 (3e6ff119e298a9269f7c66d8c1a9b87fab16d987) nautilus (dev)
  1284. 1: (()+0x13cbcc6) [0x555863ae1cc6]
  1285. 2: (()+0x12080) [0x7efef30d4080]
  1286. 3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7efef7aa77db]
  1287. 4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x555863950b7f]
  1288. 5: (PaxosService::propose_pending()+0x45a) [0x55586393d528]
  1289. 6: (PaxosService::_active()+0x62b) [0x55586393e5ab]
  1290. 7: (PaxosService::election_finished()+0x183) [0x55586393dddf]
  1291. 8: (Monitor::_finish_svc_election()+0xee) [0x55586360ddf8]
  1292. 9: (Monitor::win_election(unsigned int, std::set<int, std::less<int>, std::allocator<int> >&, unsigned long, mon_feature_t const&, std::map<int, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >, std::less<int>, std::allocator<std::pair<int const, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > > > > const&)+0x418) [0x55586360e692]
  1293. 10: (Monitor::win_standalone_election()+0x227) [0x55586360dc37]
  1294. 11: (Monitor::bootstrap()+0xa1b) [0x5558636021db]
  1295. 12: (Monitor::init()+0x1a4) [0x5558635ff258]
  1296. 13: (main()+0x688f) [0x5558635d21fd]
  1297. 14: (__libc_start_main()+0xeb) [0x7efef21c211b]
  1298. 15: (_start()+0x2a) [0x5558635c890a]
  1299. NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
  1300.  
  1301. --- logging levels ---
  1302. 0/ 5 none
  1303. 0/ 1 lockdep
  1304. 0/ 1 context
  1305. 1/ 1 crush
  1306. 1/ 5 mds
  1307. 1/ 5 mds_balancer
  1308. 1/ 5 mds_locker
  1309. 1/ 5 mds_log
  1310. 1/ 5 mds_log_expire
  1311. 1/ 5 mds_migrator
  1312. 0/ 1 buffer
  1313. 0/ 1 timer
  1314. 0/ 1 filer
  1315. 0/ 1 striper
  1316. 0/ 1 objecter
  1317. 0/ 5 rados
  1318. 0/ 5 rbd
  1319. 0/ 5 rbd_mirror
  1320. 0/ 5 rbd_replay
  1321. 0/ 5 journaler
  1322. 0/ 5 objectcacher
  1323. 0/ 5 client
  1324. 1/ 5 osd
  1325. 0/ 5 optracker
  1326. 0/ 5 objclass
  1327. 1/ 3 filestore
  1328. 1/ 3 journal
  1329. 1/ 1 ms
  1330. 20/20 mon
  1331. 0/10 monc
  1332. 20/20 paxos
  1333. 0/ 5 tp
  1334. 20/20 auth
  1335. 1/ 5 crypto
  1336. 1/ 1 finisher
  1337. 1/ 1 reserver
  1338. 1/ 5 heartbeatmap
  1339. 1/ 5 perfcounter
  1340. 1/ 5 rgw
  1341. 1/ 5 rgw_sync
  1342. 1/10 civetweb
  1343. 1/ 5 javaclient
  1344. 1/ 5 asok
  1345. 1/ 1 throttle
  1346. 0/ 0 refs
  1347. 1/ 5 xio
  1348. 1/ 5 compressor
  1349. 1/ 5 bluestore
  1350. 1/ 5 bluefs
  1351. 1/ 3 bdev
  1352. 1/ 5 kstore
  1353. 4/ 5 rocksdb
  1354. 4/ 5 leveldb
  1355. 4/ 5 memdb
  1356. 1/ 5 kinetic
  1357. 1/ 5 fuse
  1358. 1/ 5 mgr
  1359. 20/20 mgrc
  1360. 1/ 5 dpdk
  1361. 1/ 5 eventtrace
  1362. -2/-2 (syslog threshold)
  1363. -1/-1 (stderr threshold)
  1364. max_recent 10000
  1365. max_new 1000
  1366. log_file /home/rraja/git/ceph-29-01-2019/build/out/mon.a.log
  1367. --- end dump of recent events ---