- 2019-01-29 15:26:44.079 7f860f2c31c0 10 public_network
- 2019-01-29 15:26:44.079 7f860f2c31c0 10 public_addr
- 2019-01-29 15:26:44.097 7f860f2c31c0 1 imported monmap:
- epoch 0
- fsid 6ed38227-b5fb-47b5-8017-c3f6952380f8
- last_changed 2019-01-29 15:26:43.871480
- created 2019-01-29 15:26:43.871480
- 0: [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] mon.a
- 1: [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon.b
- 2: [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] mon.c
- 2019-01-29 15:26:44.097 7f860f2c31c0 0 /home/rraja/git/ceph/build/bin/ceph-mon: set fsid to 3b02750c-f104-4301-aa14-258d2b37f104
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option compression = kNoCompression
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option write_buffer_size = 33554432
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option compression = kNoCompression
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- 2019-01-29 15:26:44.106 7f860f2c31c0 0 set rocksdb option write_buffer_size = 33554432
- 2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: RocksDB version: 5.17.2
- 2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
- 2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: Compile date Jan 28 2019
- 2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: DB SUMMARY
- 2019-01-29 15:26:44.106 7f860f2c31c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db:
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.error_if_exists: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.create_if_missing: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.paranoid_checks: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.env: 0x555a504171a0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.info_log: 0x555a51f49380
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_file_opening_threads: 16
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.statistics: (nil)
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_fsync: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_log_file_size: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.log_file_time_to_roll: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.keep_log_file_num: 1000
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.recycle_log_file_num: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_fallocate: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_mmap_reads: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_mmap_writes: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_direct_reads: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.create_missing_column_families: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.db_log_dir:
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.table_cache_numshardbits: 6
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_subcompactions: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_flushes: -1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.WAL_ttl_seconds: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.WAL_size_limit_MB: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.is_fd_close_on_exec: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.advise_random_on_open: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.db_write_buffer_size: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_buffer_manager: 0x555a51f4a0f0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.use_adaptive_mutex: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.rate_limiter: (nil)
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_recovery_mode: 2
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_thread_tracking: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_pipelined_write: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.row_cache: None
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_filter: None
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.allow_ingest_behind: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.preserve_deletes: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.two_write_queues: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.manual_wal_flush: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_jobs: 2
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_background_compactions: -1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.delayed_write_rate : 16777216
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_total_wal_size: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.stats_dump_period_sec: 600
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.max_open_files: -1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.bytes_per_sync: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.wal_bytes_per_sync: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Options.compaction_readahead_size: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Compression algorithms supported:
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZSTD supported: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kXpressCompression supported: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kLZ4HCCompression supported: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kLZ4Compression supported: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kBZip2Compression supported: 0
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kZlibCompression supported: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: kSnappyCompression supported: 1
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: Fast CRC32 supported: Supported on x86
- 2019-01-29 15:26:44.107 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:230] Creating manifest 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.merge_operator:
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_filter: None
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_filter_factory: None
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_factory: SkipListFactory
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.table_factory: BlockBasedTable
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555a51ba2b00)
- cache_index_and_filter_blocks: 1
- cache_index_and_filter_blocks_with_high_priority: 1
- pin_l0_filter_and_index_blocks_in_cache: 1
- pin_top_level_index_and_filter: 1
- index_type: 0
- hash_index_allow_collision: 1
- checksum: 1
- no_block_cache: 0
- block_cache: 0x555a52a61140
- block_cache_name: BinnedLRUCache
- block_cache_options:
- capacity : 536870912
- num_shard_bits : 4
- strict_capacity_limit : 0
- high_pri_pool_ratio: 0.000
- block_cache_compressed: (nil)
- persistent_cache: (nil)
- block_size: 4096
- block_size_deviation: 10
- block_restart_interval: 16
- index_block_restart_interval: 1
- metadata_block_size: 4096
- partition_filters: 0
- use_delta_encoding: 1
- filter_policy: rocksdb.BuiltinBloomFilter
- whole_key_filtering: 1
- verify_compression: 0
- read_amp_bytes_per_bit: 0
- format_version: 2
- enable_index_compression: 1
- block_align: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.write_buffer_size: 33554432
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_write_buffer_number: 2
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression: NoCompression
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression: Disabled
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.prefix_extractor: nullptr
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.num_levels: 7
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.window_bits: -14
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.level: 32767
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.strategy: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compression_opts.enabled: false
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.target_file_size_base: 67108864
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.target_file_size_multiplier: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.arena_block_size: 4194304
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.disable_auto_compactions: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.table_properties_collectors:
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.inplace_update_support: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.inplace_update_num_locks: 10000
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.memtable_huge_page_size: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.bloom_locality: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.max_successive_merges: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.optimize_filters_for_hits: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.paranoid_file_checks: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.force_consistency_checks: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.report_bg_io_stats: 0
- 2019-01-29 15:26:44.122 7f860f2c31c0 4 rocksdb: Options.ttl: 0
- 2019-01-29 15:26:44.123 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
- 2019-01-29 15:26:44.123 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
- 2019-01-29 15:26:44.133 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x555a529e5600
- 2019-01-29 15:26:44.133 7f860f2c31c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.133 7f860f2c31c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.134 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- 2019-01-29 15:26:44.135 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
- 2019-01-29 15:26:44.136 7f860f2c31c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/keyring
- 2019-01-29 15:26:44.136 7f860f2c31c0 10 mon.b@-1(probing) e0 extract_save_mon_key moving mon. key to separate keyring
- 2019-01-29 15:26:44.145 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl.cc:365] Shutdown: canceling all background work
- 2019-01-29 15:26:44.145 7f860f2c31c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl.cc:521] Shutdown complete
- 2019-01-29 15:26:44.145 7f860f2c31c0 0 /home/rraja/git/ceph/build/bin/ceph-mon: created monfs at /home/rraja/git/ceph/build/dev/mon.b for mon.b
- 2019-01-29 15:26:44.507 7f39deeb51c0 0 ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev), process ceph-mon, pid 613920
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 load: jerasure load: lrc load: isa
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
- 2019-01-29 15:26:44.546 7f39deeb51c0 1 rocksdb: do_open column families: [default]
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: RocksDB version: 5.17.2
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compile date Jan 28 2019
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: DB SUMMARY
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: CURRENT file: CURRENT
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: IDENTITY file: IDENTITY
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db: 000003.log size: 1091 ;
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.error_if_exists: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_if_missing: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.paranoid_checks: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.env: 0x563938d121a0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.info_log: 0x563939e69f40
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_file_opening_threads: 16
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.statistics: (nil)
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_fsync: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_log_file_size: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.log_file_time_to_roll: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.keep_log_file_num: 1000
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.recycle_log_file_num: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_fallocate: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_reads: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_writes: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_reads: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_missing_column_families: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_log_dir:
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.table_cache_numshardbits: 6
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_subcompactions: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_flushes: -1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_ttl_seconds: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_size_limit_MB: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.is_fd_close_on_exec: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.advise_random_on_open: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_write_buffer_size: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_buffer_manager: 0x563939e6a720
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_adaptive_mutex: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.rate_limiter: (nil)
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_recovery_mode: 2
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_thread_tracking: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_pipelined_write: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.row_cache: None
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_filter: None
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_ingest_behind: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.preserve_deletes: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.two_write_queues: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manual_wal_flush: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_jobs: 2
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_compactions: -1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delayed_write_rate : 16777216
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_total_wal_size: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.stats_dump_period_sec: 600
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_open_files: -1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.bytes_per_sync: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_bytes_per_sync: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.compaction_readahead_size: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compression algorithms supported:
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTD supported: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kXpressCompression supported: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4HCCompression supported: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4Compression supported: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kBZip2Compression supported: 0
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZlibCompression supported: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kSnappyCompression supported: 1
- 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Fast CRC32 supported: Supported on x86
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.merge_operator:
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter: None
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter_factory: None
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_factory: SkipListFactory
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_factory: BlockBasedTable
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563939ac2ab0)
- cache_index_and_filter_blocks: 1
- cache_index_and_filter_blocks_with_high_priority: 1
- pin_l0_filter_and_index_blocks_in_cache: 1
- pin_top_level_index_and_filter: 1
- index_type: 0
- hash_index_allow_collision: 1
- checksum: 1
- no_block_cache: 0
- block_cache: 0x56393a9752a0
- block_cache_name: BinnedLRUCache
- block_cache_options:
- capacity : 536870912
- num_shard_bits : 4
- strict_capacity_limit : 0
- high_pri_pool_ratio: 0.000
- block_cache_compressed: (nil)
- persistent_cache: (nil)
- block_size: 4096
- block_size_deviation: 10
- block_restart_interval: 16
- index_block_restart_interval: 1
- metadata_block_size: 4096
- partition_filters: 0
- use_delta_encoding: 1
- filter_policy: rocksdb.BuiltinBloomFilter
- whole_key_filtering: 1
- verify_compression: 0
- read_amp_bytes_per_bit: 0
- format_version: 2
- enable_index_compression: 1
- block_align: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.write_buffer_size: 33554432
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number: 2
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression: NoCompression
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression: Disabled
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.prefix_extractor: nullptr
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.num_levels: 7
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.window_bits: -14
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.level: 32767
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.strategy: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.enabled: false
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_base: 67108864
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_multiplier: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.arena_block_size: 4194304
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.disable_auto_compactions: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_properties_collectors:
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_support: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_num_locks: 10000
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_huge_page_size: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bloom_locality: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_successive_merges: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.optimize_filters_for_hits: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.paranoid_file_checks: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.force_consistency_checks: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.report_bg_io_stats: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.ttl: 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
- 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
- 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804548935, "job": 1, "event": "recovery_started", "log_files": [3]}
- 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
- 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804555547, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1849, "table_properties": {"data_size": 1103, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 980, "raw_average_value_size": 196, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
- 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
- 2019-01-29 15:26:44.563 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804564214, "job": 1, "event": "recovery_finished"}
- 2019-01-29 15:26:44.574 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x56393a906800
- 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap
- 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap found mkfs monmap
- 2019-01-29 15:26:44.574 7f39deeb51c0 10 main monmap:
- {
- "epoch": 0,
- "fsid": "3b02750c-f104-4301-aa14-258d2b37f104",
- "modified": "2019-01-29 15:26:43.871480",
- "created": "2019-01-29 15:26:43.871480",
- "features": {
- "persistent": [],
- "optional": []
- },
- "mons": [
- {
- "rank": 0,
- "name": "a",
- "public_addrs": {
- "addrvec": [
- {
- "type": "v2",
- "addr": "10.215.99.125:40363",
- "nonce": 0
- },
- {
- "type": "v1",
- "addr": "10.215.99.125:40364",
- "nonce": 0
- }
- ]
- },
- "addr": "10.215.99.125:40364/0",
- "public_addr": "10.215.99.125:40364/0"
- },
- {
- "rank": 1,
- "name": "b",
- "public_addrs": {
- "addrvec": [
- {
- "type": "v2",
- "addr": "10.215.99.125:40365",
- "nonce": 0
- },
- {
- "type": "v1",
- "addr": "10.215.99.125:40366",
- "nonce": 0
- }
- ]
- },
- "addr": "10.215.99.125:40366/0",
- "public_addr": "10.215.99.125:40366/0"
- },
- {
- "rank": 2,
- "name": "c",
- "public_addrs": {
- "addrvec": [
- {
- "type": "v2",
- "addr": "10.215.99.125:40367",
- "nonce": 0
- },
- {
- "type": "v1",
- "addr": "10.215.99.125:40368",
- "nonce": 0
- }
- ]
- },
- "addr": "10.215.99.125:40368/0",
- "public_addr": "10.215.99.125:40368/0"
- }
- ]
- }
- 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.575 7f39deeb51c0 0 starting mon.b rank 1 at public addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] at bind addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
- 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] learned_addr learned my addr [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] (peer_addr_for_me v2:10.215.99.125:40365/0)
- 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] _finish_bind bind my_addrs is [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0]
- 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.576 7f39deeb51c0 0 starting mon.b rank 1 at [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
- 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- 2019-01-29 15:26:44.577 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
- 2019-01-29 15:26:44.580 7f39deeb51c0 1 mon.b@-1(probing) e0 preinit fsid 3b02750c-f104-4301-aa14-258d2b37f104
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 check_fsid cluster_uuid contains '3b02750c-f104-4301-aa14-258d2b37f104'
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 calc_quorum_requirements required_features 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 required_features 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 has_ever_joined = 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_last_committed_floor 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 init_paxos
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init
- 2019-01-29 15:26:44.580 7f39deeb51c0 5 mon.b@-1(probing).mds e0 Unable to load 'last_metadata'
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).health init
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config init
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos version 0 summary v 0
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).auth v0 update_from_paxos
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) refresh
- 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config load_config got 0 keys
- 2019-01-29 15:26:44.580 7f39deeb51c0 20 mon.b@-1(probing).config load_config config map:
- {
- "global": {},
- "by_type": {},
- "by_id": {}
- }
- 2019-01-29 15:26:44.584 7f39deeb51c0 20 mgrc handle_mgr_map mgrmap(e 0) v1
- 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Got map version 0
- 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Active mgr is now
- 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc reconnect No active mgr available yet
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat 0
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat check_subs
- 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).mgrstat update_logger
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).health update_from_paxos
- 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).health dump:{
- "quorum_health": {},
- "leader_health": {}
- }
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) post_refresh
- 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
- 2019-01-29 15:26:44.584 7f39deeb51c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/dev/mon.b/keyring
- 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] start start
- 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- start start
- 2019-01-29 15:26:44.585 7f39deeb51c0 2 mon.b@-1(probing) e0 init
- 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
- 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
- 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 bootstrap
- 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_reset_requester
- 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 unregister_cluster_logger - not registered
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 monmap e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:44.586 7f39deeb51c0 0 mon.b@-1(probing) e0 my rank is now 1 (was -1)
- 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] shutdown_connections
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 _reset
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 timecheck_finish
- 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_tick_stop
- 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_interval_stop
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_event_cancel
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_reset
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 reset_probe_timeout 0x56393ab65d70 after 2 seconds
- 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 probing other monitors
- 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916840
- 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916840 con 0x563939c9c900
- 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916b00
- 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916b00 con 0x563939c9cd80
- 2019-01-29 15:26:44.586 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9cd80 msgr2=0x56393ab9a600 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:44.586 7f39c617f700 10 mon.b@1(probing) e0 ms_handle_refused 0x563939c9cd80 v2:10.215.99.125:40367/0
- 2019-01-29 15:26:44.586 7f39c417b700 10 mon.b@1(probing) e0 ms_get_authorizer for mon
- 2019-01-29 15:26:44.586 7f39c417b700 10 cephx: build_service_ticket service mon secret_id 18446744073709551615 ticket_info.ticket.name=mon.
- 2019-01-29 15:26:44.587 7f39c417b700 10 In get_auth_session_handler for protocol 2
- 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 14723405194060298632
- 2019-01-29 15:26:44.587 7f39c417b700 20 Putting signature in client message(seq # 1): sig = 14723405194060298632
- 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 2859661691 middle_crc = 0 data_crc = 0 sig = 12381238761605199092
- 2019-01-29 15:26:44.587 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 ==== 58+0+0 (2859661691 0 0) 0x56393a917080 con 0x563939c9c900
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _ms_dispatch new session 0x56393abc0000 MonSession(mon.0 [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(probing) e0 _ms_dispatch setting monitor caps on this connection
- 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_probe mon.0 v2:10.215.99.125:40363/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 features 4611087854031667199
- 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917340 con 0x563939c9c900
- 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 2632229567743115917
- 2019-01-29 15:26:44.588 7f39c417b700 20 Putting signature in client message(seq # 2): sig = 2632229567743115917
- 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 137113040 middle_crc = 0 data_crc = 0 sig = 5942540075320245331
- 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (137113040 0 0) 0x56393a916840 con 0x563939c9c900
- 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_reply mon.0 v2:10.215.99.125:40363/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 peer name is a
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 mon.a is outside the quorum
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 outside_quorum now a,b, need 2
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 that's enough to form a new quorum, calling election
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 start_election
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _reset
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 cancel_probe_timeout 0x56393ab65d70
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 timecheck_finish
- 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_tick_stop
- 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_interval_stop
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_event_cancel
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_reset
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:44.588 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 -- 0x56393abc06c0 con 0x563939d41600
- 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(electing).elector(0) start -- can i be leader?
- 2019-01-29 15:26:44.588 7f39c617f700 1 mon.b@1(electing).elector(0) init, first boot, initializing epoch at 1
- 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 415793505725448275
- 2019-01-29 15:26:44.605 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0f800
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0f800 con 0x563939c9c900
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0fb00
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0fb00 con 0x563939c9cd80
- 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 10881007314159201096
- 2019-01-29 15:26:44.605 7f39c417b700 20 Putting signature in client message(seq # 3): sig = 10881007314159201096
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 ==== 0+0+0 (0 0 0) 0x56393abc06c0 con 0x563939d41600
- 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0d80 MonSession(mon.1 [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
- 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:44.606661 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0f500 con 0x563939c9c900
- 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.0
- 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) defer to 0
- 2019-01-29 15:26:44.606 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- ?+0 0x56393abd4000
- 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- 0x56393abd4000 con 0x563939c9c900
- 2019-01-29 15:26:44.606 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 3541336743 middle_crc = 0 data_crc = 0 sig = 8903611187062716984
- 2019-01-29 15:26:44.606 7f39c417b700 20 Putting signature in client message(seq # 4): sig = 8903611187062716984
- 2019-01-29 15:26:44.728 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=35 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53000/0 addrs are 145
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 18425495964075312649
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 18425495964075312650 expecting 18425495964075312650
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
- 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
- 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=CLOSED pgs=0 cs=0 l=0).replace stop myself to swap existing
- 2019-01-29 15:26:44.729 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_reset 0x563939c9f600 v2:10.215.99.125:40367/0
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 12808771302250315903
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 12808771302250315904 expecting 12808771302250315904
- 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
- 2019-01-29 15:26:44.729 7f39c397a700 10 In get_auth_session_handler for protocol 2
- 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 4598065285107427218
- 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 1): sig = 4598065285107427218
- 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 12491993532019861601
- 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 2): sig = 12491993532019861601
- 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 4581398896165078038
- 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a917600 con 0x563939c9cd80
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0fc0 MonSession(mon.2 [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:44.730 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
- 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917b80 con 0x563939c9cd80
- 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 17022378831273095551
- 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 3): sig = 17022378831273095551
- 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1672072532 middle_crc = 0 data_crc = 0 sig = 8103692841582008270
- 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (1672072532 0 0) 0x56393a917b80 con 0x563939c9cd80
- 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 peer name is c
- 2019-01-29 15:26:44.735 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4384171504794788302
- 2019-01-29 15:26:44.735 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0fb00 con 0x563939c9cd80
- 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
- 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.2
- 2019-01-29 15:26:44.735 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) no, we already acked 0
- 2019-01-29 15:26:44.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9fa80 0x56393ab9b800 :53002 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=36 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53002/0 addrs are 145
- 2019-01-29 15:26:44.868 7f39c3179700 10 In get_auth_session_handler for protocol 0
- 2019-01-29 15:26:44.868 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393abc1b00 con 0x563939c9fa80
- 2019-01-29 15:26:44.868 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc1200 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:44.868 7f39c617f700 20 mon.b@1(electing) e0 caps
- 2019-01-29 15:26:44.868 7f39c617f700 5 mon.b@1(electing) e0 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- 2019-01-29 15:26:49.585 7f39c8984700 11 mon.b@1(electing) e0 tick
- 2019-01-29 15:26:49.585 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
- 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.585 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.586781 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.618 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 4020688038 middle_crc = 0 data_crc = 0 sig = 6749900040655150956
- 2019-01-29 15:26:49.618 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 4 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 victory 2) v7 ==== 41889+0+0 (4020688038 0 0) 0x56393abd4000 con 0x563939c9c900
- 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.619 7f39c617f700 5 mon.b@1(electing).elector(1) handle_victory from mon.0 quorum_features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:49.619 7f39c617f700 10 mon.b@1(electing).elector(1) bump_epoch 1 to 2
- 2019-01-29 15:26:49.623 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 2611882610 middle_crc = 0 data_crc = 0 sig = 10249081017414072692
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 join_election
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 _reset
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
- 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
- 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.625 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626824 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626971 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon) e0 lose_election, epoch 2 leader is mon0 quorum is 0,1 features are 4611087854031667199 mon_features are mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) peon_init -- i am a peon
- 2019-01-29 15:26:49.626 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627225 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627327 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) election_finished
- 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active - not active
- 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon) e0 apply_quorum_to_compatset_features
- 2019-01-29 15:26:49.626 7f39c617f700 1 mon.b@1(peon) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
- 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 calc_quorum_requirements required_features 549755813888
- 2019-01-29 15:26:49.632 7f39c617f700 5 mon.b@1(peon) e0 apply_monmap_to_compatset_features
- 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
- 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 resend_routed_requests
- 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 register_cluster_logger
- 2019-01-29 15:26:49.634 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 5 ==== paxos(collect lc 0 fc 0 pn 100 opn 0) v4 ==== 84+0+0 (2611882610 0 0) 0x563939b0f800 con 0x563939c9c900
- 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) handle_collect paxos(collect lc 0 fc 0 pn 100 opn 0) v4
- 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
- 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) accepting pn 100 from 0
- 2019-01-29 15:26:49.639 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(last lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4300 con 0x563939c9c900
- 2019-01-29 15:26:49.639 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 237159531 middle_crc = 0 data_crc = 0 sig = 17831234451164454257
- 2019-01-29 15:26:49.639 7f39c417b700 20 Putting signature in client message(seq # 5): sig = 17831234451164454257
- 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 4255564515 middle_crc = 0 data_crc = 0 sig = 9698958856354677651
- 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 6 ==== paxos(lease lc 0 fc 0 pn 0 opn 0) v4 ==== 84+0+0 (4255564515 0 0) 0x56393abd4300 con 0x563939c9c900
- 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_lease on 0 now 2019-01-29 15:26:54.641538
- 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(lease_ack lc 0 fc 0 pn 0 opn 0) v4 -- 0x56393abd4600 con 0x563939c9c900
- 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon).paxos(paxos active c 0..0) reset_lease_timeout - setting timeout event
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active
- 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(mdsmap 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active
- 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(osdmap 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 update_logger
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 take_all_failures on 0 osds
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 start_mapping no pools, no mapping job
- 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 3188025122 middle_crc = 0 data_crc = 0 sig = 13621511732933224171
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active
- 2019-01-29 15:26:49.641 7f39c417b700 20 Putting signature in client message(seq # 6): sig = 13621511732933224171
- 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(logm 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.641 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.642816 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(monmap 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).monmap v0 apply_mon_features wait for service to be writeable
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(auth 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643006 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).auth v0 AuthMonitor::on_active()
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgr 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgrstat 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 20 mon.b@1(peon).mgrstat update_logger
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(health 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active
- 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(config 0..0) _active we are not the leader, hence we propose nothing!
- 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643134 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- 2019-01-29 15:26:49.652 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 359680839 middle_crc = 0 data_crc = 0 sig = 14961307058218807505
- 2019-01-29 15:26:49.653 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 7 ==== paxos(begin lc 0 fc 0 pn 100 opn 0) v4 ==== 2292+0+0 (359680839 0 0) 0x56393abd4600 con 0x563939c9c900
- 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_begin paxos(begin lc 0 fc 0 pn 100 opn 0) v4
- 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) accepting value for 1 pn 100
- 2019-01-29 15:26:49.657 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(accept lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4900 con 0x563939c9c900
- 2019-01-29 15:26:49.657 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 3516937211 middle_crc = 0 data_crc = 0 sig = 18198979658301600163
- 2019-01-29 15:26:49.657 7f39c417b700 20 Putting signature in client message(seq # 7): sig = 18198979658301600163
- 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 1185258935604909951
- 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 4 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a916b00 con 0x563939c9cd80
- 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- 2019-01-29 15:26:49.736 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:49.736 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.736 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b quorum 0,1 paxos( fc 0 lc 0 ) new) v6 -- 0x56393affe000 con 0x563939c9cd80
- 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 3631419306 middle_crc = 0 data_crc = 0 sig = 17101328230604225658
- 2019-01-29 15:26:49.736 7f39c397a700 20 Putting signature in client message(seq # 4): sig = 17101328230604225658
- 2019-01-29 15:26:49.754 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4909251354610610373
- 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 5 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x56393abd5500 con 0x563939c9cd80
- 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) handle_propose from mon.2
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).elector(2) handle_propose required features 549755813888 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) got propose from old epoch, quorum is 0,1, mon.2 must have just started
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 start_election
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 _reset
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
- 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_tick_stop
- 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_interval_stop
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_event_cancel
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_reset
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755397 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755436 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) restart
- 2019-01-29 15:26:49.754 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 -- 0x56393b000000 con 0x563939d41600
- 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(electing).elector(2) start -- can i be leader?
- 2019-01-29 15:26:49.754 7f39c617f700 1 mon.b@1(electing).elector(2) init, last seen epoch 2
- 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(electing).elector(2) bump_epoch 2 to 3
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 join_election
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 _reset
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
- 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
- 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759261 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759298 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:49.758 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4c00
- 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4c00 con 0x563939c9c900
- 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4f00
- 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4f00 con 0x563939c9cd80
- 2019-01-29 15:26:49.759 7f39c417b700 10 _calc_signature seq 8 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 12490028364350923317
- 2019-01-29 15:26:49.759 7f39c417b700 20 Putting signature in client message(seq # 8): sig = 12490028364350923317
- 2019-01-29 15:26:49.759 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 13778264798706442571
- 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 ==== 0+0+0 (0 0 0) 0x56393b000000 con 0x563939d41600
- 2019-01-29 15:26:49.759 7f39c397a700 20 Putting signature in client message(seq # 5): sig = 13778264798706442571
- 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:49.759 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.760231 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:49.764 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2637561887 middle_crc = 0 data_crc = 0 sig = 10520589114785615889
- 2019-01-29 15:26:49.764 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 6 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 3) v7 ==== 1190+0+0 (2637561887 0 0) 0x56393abd4f00 con 0x563939c9cd80
- 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
- 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) handle_ack from mon.2
- 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
- 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk reading from fd=30 : Unknown error -104
- 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed
- 2019-01-29 15:26:50.038 7f39c417b700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).handle_message read tag failed
- 2019-01-29 15:26:50.038 7f39c417b700 0 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).fault initiating reconnect
- 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:50.038 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:50.239 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:50.239 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:50.639 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:50.640 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:51.441 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:51.441 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:53.043 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:53.043 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:54.586 7f39c8984700 11 mon.b@1(electing) e0 tick
- 2019-01-29 15:26:54.586 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
- 2019-01-29 15:26:54.759 7f39c8984700 5 mon.b@1(electing).elector(3) election timer expired
- 2019-01-29 15:26:54.759 7f39c8984700 10 mon.b@1(electing).elector(3) bump_epoch 3 to 4
- 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 join_election
- 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 _reset
- 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 timecheck_finish
- 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_tick_stop
- 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_interval_stop
- 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 scrub_event_cancel
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing) e0 scrub_reset
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772105 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772223 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772325 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- ?+0 0x56393abd5800
- 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- 0x56393abd5800 con 0x563939c9cd80
- 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(electing) e0 win_election epoch 4 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:54.772 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
- 2019-01-29 15:26:54.772 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2368543008 middle_crc = 0 data_crc = 0 sig = 5116208503718269665
- 2019-01-29 15:26:54.772 7f39c397a700 20 Putting signature in client message(seq # 6): sig = 5116208503718269665
- 2019-01-29 15:26:54.772 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 -- 0x56393b000d80 con 0x563939d41600
- 2019-01-29 15:26:54.772 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 ==== 0+0+0 (0 0 0) 0x56393b000d80 con 0x563939d41600
- 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) leader_init -- starting paxos recovery
- 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) learned uncommitted 1 pn 100 (2196 bytes) from myself
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) get_new_proposal_number = 201
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) collect with pn 201
- 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5b00
- 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5b00 con 0x563939c9cd80
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) election_finished
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) _active - not active
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
- 2019-01-29 15:26:54.777 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 1789744833 middle_crc = 0 data_crc = 0 sig = 2735914805292709925
- 2019-01-29 15:26:54.777 7f39c397a700 20 Putting signature in client message(seq # 7): sig = 2735914805292709925
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778696 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778814 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.779003 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
- 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_quorum_to_compatset_features
- 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_monmap_to_compatset_features
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 timecheck_finish
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 resend_routed_requests
- 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 register_cluster_logger - already registered
- 2019-01-29 15:26:54.778 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- 2019-01-29 15:26:54.779 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.779 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.780024 lease_expire=0.000000 has v0 lc 0
- 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.794 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 2077645696 middle_crc = 0 data_crc = 0 sig = 16008821289831722732
- 2019-01-29 15:26:54.795 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 7 ==== paxos(last lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (2077645696 0 0) 0x56393abd5b00 con 0x563939c9cd80
- 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) handle_last paxos(last lc 0 fc 0 pn 201 opn 0) v4
- 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) store_state nothing to commit
- 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) they accepted our pn, we now have 2 peons
- 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) that's everyone. begin on old learned value
- 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) begin for 1 2196 bytes
- 2019-01-29 15:26:54.800 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) sending begin to mon.2
- 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5200
- 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5200 con 0x563939c9cd80
- 2019-01-29 15:26:54.800 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3535318905 middle_crc = 0 data_crc = 0 sig = 6870884659653601128
- 2019-01-29 15:26:54.800 7f39c397a700 20 Putting signature in client message(seq # 8): sig = 6870884659653601128
- 2019-01-29 15:26:54.807 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3909173416 middle_crc = 0 data_crc = 0 sig = 17406004129815643634
- 2019-01-29 15:26:54.807 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 8 ==== paxos(accept lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (3909173416 0 0) 0x56393abd5200 con 0x563939c9cd80
- 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) handle_accept paxos(accept lc 0 fc 0 pn 201 opn 0) v4
- 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) now 1,2 have accepted
- 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) got majority, committing, done with update
- 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) commit_start 1
- 2019-01-29 15:26:54.813 7f39c2978700 20 mon.b@1(leader).paxos(paxos writing-previous c 0..0) commit_finish 1
- 2019-01-29 15:26:54.813 7f39c2978700 10 mon.b@1(leader).paxos(paxos writing-previous c 1..1) sending commit to mon.2
- 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393b00a000
- 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- 0x56393b00a000 con 0x563939c9cd80
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader) e0 refresh_from_paxos
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
- 2019-01-29 15:26:54.814 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 558359806 middle_crc = 0 data_crc = 0 sig = 15537979346010902130
- 2019-01-29 15:26:54.814 7f39c397a700 20 Putting signature in client message(seq # 9): sig = 15537979346010902130
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos version 1, my v 0
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 signaling that we need a bootstrap
- 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos got 1
- 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
- 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).auth v0 update_from_paxos
- 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
- 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).config load_config got 0 keys
- 2019-01-29 15:26:54.819 7f39c2978700 20 mon.b@1(leader).config load_config config map:
- {
- "global": {},
- "by_type": {},
- "by_id": {}
- }
- 2019-01-29 15:26:54.830 7f39c2978700 20 mgrc handle_mgr_map mgrmap(e 0) v1
- 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Got map version 0
- 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Active mgr is now
- 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc reconnect No active mgr available yet
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat 0
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat check_subs
- 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).mgrstat update_logger
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).health update_from_paxos
- 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).health dump:{
- "quorum_health": {},
- "leader_health": {}
- }
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
- 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader).paxos(paxos refresh c 1..1) doing requested bootstrap
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 bootstrap
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 sync_reset_requester
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 unregister_cluster_logger
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 _reset
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 timecheck_finish
- 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_tick_stop
- 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_interval_stop
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_event_cancel
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_reset
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxos(paxos refresh c 1..1) restart -- canceling timeouts
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832435 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832553 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832654 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832778 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 reset_probe_timeout 0x56393affc480 after 2 seconds
- 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 probing other monitors
- 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affe840
- 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affe840 con 0x563939c9c900
- 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affeb00
- 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affeb00 con 0x563939c9cd80
- 2019-01-29 15:26:54.832 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 3184037569713953523
- 2019-01-29 15:26:54.832 7f39c397a700 20 Putting signature in client message(seq # 10): sig = 3184037569713953523
- 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 8540967715760223295
- 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 9 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393affeb00 con 0x563939c9cd80
- 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 800172242 middle_crc = 0 data_crc = 0 sig = 7787347958796391197
- 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 1 lc 1 ) new) v6 -- 0x56393affe2c0 con 0x563939c9cd80
- 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 443766928 middle_crc = 0 data_crc = 0 sig = 3146968082825100645
- 2019-01-29 15:26:54.835 7f39c397a700 20 Putting signature in client message(seq # 11): sig = 3146968082825100645
- 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 10 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6 ==== 430+0+0 (800172242 0 0) 0x56393affe000 con 0x563939c9cd80
- 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 monmap is e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 peer name is c
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 mon.c is outside the quorum
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 outside_quorum now b,c, need 2
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 that's enough to form a new quorum, calling election
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 start_election
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 _reset
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 cancel_probe_timeout 0x56393affc480
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 timecheck_finish
- 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_tick_stop
- 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_interval_stop
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_event_cancel
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_reset
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836598 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836628 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836663 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836708 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:54.835 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 -- 0x56393b000240 con 0x563939d41600
- 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(electing).elector(4) start -- can i be leader?
- 2019-01-29 15:26:54.835 7f39c617f700 1 mon.b@1(electing).elector(4) init, last seen epoch 4
- 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(electing).elector(4) bump_epoch 4 to 5
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 join_election
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 _reset
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 timecheck_finish
- 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_tick_stop
- 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_interval_stop
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_event_cancel
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_reset
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840738 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840832 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840935 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.841040 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:54.853 7f39c617f700 -1 mon.b@1(electing) e1 devname dm-0
- 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00a900
- 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00a900 con 0x563939c9c900
- 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00ac00
- 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00ac00 con 0x563939c9cd80
- 2019-01-29 15:26:54.854 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12969980896879027673
- 2019-01-29 15:26:54.854 7f39c397a700 20 Putting signature in client message(seq # 12): sig = 12969980896879027673
- 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 ==== 0+0+0 (0 0 0) 0x56393b000240 con 0x563939d41600
- 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:54.854 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.855846 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:54.855 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12994639518338884118
- 2019-01-29 15:26:54.855 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 11 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 ==== 450+0+0 (1196600255 0 0) 0x56393b00a000 con 0x563939c9cd80
- 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.855 7f39c617f700 5 mon.b@1(electing).elector(5) handle_propose from mon.2
- 2019-01-29 15:26:54.855 7f39c617f700 10 mon.b@1(electing).elector(5) handle_propose required features 549755813888 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:54.857 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 2042675853 middle_crc = 0 data_crc = 0 sig = 8776967150872131881
- 2019-01-29 15:26:54.857 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 12 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 5) v7 ==== 1190+0+0 (2042675853 0 0) 0x56393b00ac00 con 0x563939c9cd80
- 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
- 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) handle_ack from mon.2
- 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
- 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 36
- 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
- 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).handle_message read tag failed
- 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).fault on lossy channel, failing
- 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x563939c9fa80 v2:10.215.99.125:53002/4155176800
- 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
- 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1200 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
- 2019-01-29 15:26:54.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026000 0x56393ab9be00 :53042 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53042/0 addrs are 145
- 2019-01-29 15:26:54.869 7f39c3179700 10 In get_auth_session_handler for protocol 0
- 2019-01-29 15:26:54.870 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001b00 con 0x56393b026000
- 2019-01-29 15:26:54.870 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393abc1d40 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:54.870 7f39c617f700 20 mon.b@1(electing) e1 caps
- 2019-01-29 15:26:54.870 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- 2019-01-29 15:26:56.244 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- 2019-01-29 15:26:56.244 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 30
- 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
- 2019-01-29 15:26:57.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).handle_message read tag failed
- 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).fault on lossy channel, failing
- 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x56393b026000 v2:10.215.99.125:53002/4155176800
- 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
- 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1d40 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
- 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026480 0x56393ab9d600 :53056 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53056/0 addrs are 145
- 2019-01-29 15:26:57.871 7f39c3179700 10 In get_auth_session_handler for protocol 0
- 2019-01-29 15:26:57.871 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001d40 con 0x56393b026480
- 2019-01-29 15:26:57.872 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393b000480 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- 2019-01-29 15:26:57.872 7f39c617f700 20 mon.b@1(electing) e1 caps
- 2019-01-29 15:26:57.872 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- 2019-01-29 15:26:59.586 7f39c8984700 11 mon.b@1(electing) e1 tick
- 2019-01-29 15:26:59.586 7f39c8984700 20 mon.b@1(electing) e1 sync_trim_providers
- 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing) e1 session closed, dropping 0x56393b001b00
- 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- 2019-01-29 15:26:59.586 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.587891 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.587 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.854 7f39c8984700 5 mon.b@1(electing).elector(5) election timer expired
- 2019-01-29 15:26:59.854 7f39c8984700 10 mon.b@1(electing).elector(5) bump_epoch 5 to 6
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 join_election
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 _reset
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 timecheck_finish
- 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_tick_stop
- 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_interval_stop
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_event_cancel
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_reset
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867221 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867315 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867394 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867477 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) discarding message from disconnected client client.? v2:10.215.99.125:53002/4155176800 auth(proto 0 30 bytes epoch 0) v1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867654 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- 2019-01-29 15:26:59.866 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- ?+0 0x56393b00b800
- 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- 0x56393b00b800 con 0x563939c9cd80
- 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(electing) e1 win_election epoch 6 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:59.867 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
- 2019-01-29 15:26:59.867 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 178927411 middle_crc = 0 data_crc = 0 sig = 794234354135278605
- 2019-01-29 15:26:59.867 7f39c397a700 20 Putting signature in client message(seq # 13): sig = 794234354135278605
- 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 -- 0x56393b000fc0 con 0x563939d41600
- 2019-01-29 15:26:59.867 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 ==== 0+0+0 (0 0 0) 0x56393b000fc0 con 0x563939d41600
- 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) leader_init -- starting paxos recovery
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) get_new_proposal_number = 301
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) collect with pn 301
- 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- ?+0 0x56393b00bb00
- 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- 0x56393b00bb00 con 0x563939c9cd80
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) election_finished
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active - not active
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
- 2019-01-29 15:26:59.872 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 4116324937 middle_crc = 0 data_crc = 0 sig = 15318303623903287436
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
- 2019-01-29 15:26:59.872 7f39c397a700 20 Putting signature in client message(seq # 14): sig = 15318303623903287436
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873644 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873776 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873916 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874034 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874163 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
- 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
- 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_quorum_to_compatset_features
- 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_monmap_to_compatset_features
- 2019-01-29 15:26:59.873 7f39c8984700 1 mon.b@1(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout}
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 calc_quorum_requirements required_features 2449958747315912708
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_finish
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 resend_routed_requests
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 register_cluster_logger
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round curr 0
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round new 1
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck start timecheck epoch 6 round 1
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck send time_check( ping e 6 r 1 ) v1 to mon.2
- 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- ?+0 0x56393b000b40
- 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- 0x56393b000b40 con 0x563939c9cd80
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round setting up next event
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_reset_event delay 300 rounds_since_clean 0
- 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_start
- 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_stop
- 2019-01-29 15:26:59.879 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 72719240 middle_crc = 0 data_crc = 0 sig = 11523137460518662160
- 2019-01-29 15:26:59.879 7f39c397a700 20 Putting signature in client message(seq # 15): sig = 11523137460518662160
- 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 scrub_event_start
- 2019-01-29 15:26:59.879 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- 2019-01-29 15:26:59.880 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
- 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000fc0 log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- 2019-01-29 15:26:59.880 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.881040 lease_expire=0.000000 has v0 lc 1
- 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 4120674357 middle_crc = 0 data_crc = 0 sig = 15016538117729764366
- 2019-01-29 15:26:59.889 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 13 ==== paxos(last lc 1 fc 1 pn 301 opn 0) v4 ==== 84+0+0 (4120674357 0 0) 0x56393b00b800 con 0x563939c9cd80
- 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 1460593560 middle_crc = 0 data_crc = 0 sig = 3747477798607265139
- 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
- 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
- 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
- 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) handle_last paxos(last lc 1 fc 1 pn 301 opn 0) v4
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) store_state nothing to commit
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) they accepted our pn, we now have 2 peons
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) that's everyone. active!
- 2019-01-29 15:26:59.890 7f39c617f700 7 mon.b@1(leader).paxos(paxos recovering c 1..1) extend_lease now+5 (2019-01-29 15:27:04.891100)
- 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- ?+0 0x56393b00af00
- 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- 0x56393b00af00 con 0x563939c9cd80
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader) e1 refresh_from_paxos
- 2019-01-29 15:26:59.890 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 3892865509 middle_crc = 0 data_crc = 0 sig = 11834348323597999916
- 2019-01-29 15:26:59.890 7f39c397a700 20 Putting signature in client message(seq # 16): sig = 11834348323597999916
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
- 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).auth v0 update_from_paxos
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
- 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).config load_config got 0 keys
- 2019-01-29 15:26:59.891 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 3498592039 middle_crc = 0 data_crc = 0 sig = 6841074444368600247
- 2019-01-29 15:26:59.891 7f39c617f700 20 mon.b@1(leader).config load_config config map:
- {
- "global": {},
- "by_type": {},
- "by_id": {}
- }
- 2019-01-29 15:26:59.897 7f39c617f700 20 mgrc handle_mgr_map mgrmap(e 0) v1
- 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Got map version 0
- 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Active mgr is now
- 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc reconnect No active mgr available yet
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat 0
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat check_subs
- 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).mgrstat update_logger
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).health update_from_paxos
- 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).health dump:{
- "quorum_health": {},
- "leader_health": {}
- }
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) finish_round
- 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).paxos(paxos active c 1..1) finish_round waiting_for_acting
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active
- 2019-01-29 15:26:59.897 7f39c617f700 7 mon.b@1(leader).paxosservice(monmap 1..1) _active creating new pending
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 create_pending monmap epoch 2
- 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 noting that i was, once, part of an active quorum.
- 2019-01-29 15:26:59.902 7f39c617f700 0 log_channel(cluster) log [DBG] : monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- 2019-01-29 15:26:59.902 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 6 at 2019-01-29 15:26:59.903091) v1 -- 0x56393b000900 con 0x563939d41600
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).monmap v1 apply_mon_features features match current pending: mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active
- 2019-01-29 15:26:59.902 7f39c617f700 7 mon.b@1(leader).paxosservice(mdsmap 0..0) _active creating new pending
- 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) is_readable = 1 - now=2019-01-29 15:26:59.903204 lease_expire=2019-01-29 15:27:04.891100 has v0 lc 1
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_pending e1
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_initial
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) propose_pending
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 encode_pending e1
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader) e1 log_health updated 0 previous 0
- 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) queue_pending_finisher 0x563939ab8950
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) trigger_propose active, proposing now
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) propose_pending 2 2867 bytes
- 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) begin for 2 2867 bytes
- 2019-01-29 15:26:59.906 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) sending begin to mon.2
- 2019-01-29 15:26:59.906 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- ?+0 0x56393b00b200
- 2019-01-29 15:26:59.907 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- 0x56393b00b200 con 0x563939c9cd80
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active
- 2019-01-29 15:26:59.907 7f39c617f700 7 mon.b@1(leader).paxosservice(osdmap 0..0) _active creating new pending
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_pending e 1
- 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
- 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting full_ratio = 0.99
- 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting nearfull_ratio = 0.99
- 2019-01-29 15:26:59.907 7f39c397a700 10 _calc_signature seq 17 front_crc_ = 3823202930 middle_crc = 0 data_crc = 0 sig = 3998564941522667071
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_initial for 3b02750c-f104-4301-aa14-258d2b37f104
- 2019-01-29 15:26:59.907 7f39c397a700 20 Putting signature in client message(seq # 17): sig = 3998564941522667071
- 2019-01-29 15:26:59.907 7f39c617f700 20 mon.b@1(leader).osd e0 full crc 3491248425
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) propose_pending
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending e 1
- 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 do_prune osdmap full prune enabled
- 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
- 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pools queued
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs queue remaining: 0 pools
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first mimic+ epoch
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first nautilus+ epoch
- 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
- 2019-01-29 15:26:59.908 7f39c617f700 20 mon.b@1(leader).osd e0 full_crc 3491248425 inc_crc 3830871662
- 2019-01-29 15:26:59.911 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 2534419215 middle_crc = 0 data_crc = 0 sig = 4957134661093499494
- 2019-01-29 15:26:59.940 7f39c617f700 -1 *** Caught signal (Segmentation fault) **
- in thread 7f39c617f700 thread_name:ms_dispatch
- ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev)
- 1: (()+0x13dd820) [0x5639382a1820]
- 2: (()+0x12080) [0x7f39d1683080]
- 3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7f39d6284cfb]
- 4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x56393811017b]
- 5: (PaxosService::propose_pending()+0x45a) [0x5639380fcb24]
- 6: (PaxosService::_active()+0x62b) [0x5639380fdba7]
- 7: (()+0x12394e9) [0x5639380fd4e9]
- 8: (Context::complete(int)+0x27) [0x563937e12037]
- 9: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0x2c8) [0x563937e3642c]
- 10: (Paxos::finish_round()+0x2ed) [0x5639380eb4e9]
- 11: (Paxos::handle_last(boost::intrusive_ptr<MonOpRequest>)+0x17ae) [0x5639380e5ac8]
- 12: (Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x392) [0x5639380ef7cc]
- 13: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x1119) [0x563937de61d9]
- 14: (Monitor::_ms_dispatch(Message*)+0xec6) [0x563937de4d9e]
- 15: (Monitor::ms_dispatch(Message*)+0x38) [0x563937e20d04]
- 16: (Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x5c) [0x563937e142c2]
- 17: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0xe9) [0x7f39d5f6f247]
- 18: (DispatchQueue::entry()+0x61c) [0x7f39d5f6dd3c]
- 19: (DispatchQueue::DispatchThread::entry()+0x1c) [0x7f39d60cf7f4]
- 20: (Thread::entry_wrapper()+0x78) [0x7f39d5d4cb4a]
- 21: (Thread::_entry_func(void*)+0x18) [0x7f39d5d4cac8]
- 22: (()+0x7594) [0x7f39d1678594]
- 23: (clone()+0x3f) [0x7f39d041bf4f]
- NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
- --- begin dump of recent events ---
- -1424> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command assert hook 0x563939ab8540
- -1423> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command abort hook 0x563939ab8540
- -1422> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perfcounters_dump hook 0x563939ab8540
- -1421> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command 1 hook 0x563939ab8540
- -1420> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf dump hook 0x563939ab8540
- -1419> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perfcounters_schema hook 0x563939ab8540
- -1418> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf histogram dump hook 0x563939ab8540
- -1417> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command 2 hook 0x563939ab8540
- -1416> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf schema hook 0x563939ab8540
- -1415> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf histogram schema hook 0x563939ab8540
- -1414> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command perf reset hook 0x563939ab8540
- -1413> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config show hook 0x563939ab8540
- -1412> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config help hook 0x563939ab8540
- -1411> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config set hook 0x563939ab8540
- -1410> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config unset hook 0x563939ab8540
- -1409> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config get hook 0x563939ab8540
- -1408> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config diff hook 0x563939ab8540
- -1407> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command config diff get hook 0x563939ab8540
- -1406> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log flush hook 0x563939ab8540
- -1405> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log dump hook 0x563939ab8540
- -1404> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command log reopen hook 0x563939ab8540
- -1403> 2019-01-29 15:26:44.490 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_mempools hook 0x56393a916068
- -1402> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep start
- -1401> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 0
- -1400> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 1
- -1399> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 2
- -1398> 2019-01-29 15:26:44.506 7f39deeb51c0 1 lockdep using id 3
- -1397> 2019-01-29 15:26:44.507 7f39deeb51c0 0 ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev), process ceph-mon, pid 613920
- -1396> 2019-01-29 15:26:44.526 7f39deeb51c0 1 lockdep using id 4
- -1395> 2019-01-29 15:26:44.526 7f39deeb51c0 1 lockdep using id 5
- -1394> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 6
- -1393> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 7
- -1392> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) init /tmp/ceph-asok.we8t9p/mon.b.asok
- -1391> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) bind_and_listen /tmp/ceph-asok.we8t9p/mon.b.asok
- -1390> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command 0 hook 0x563939ac2ad0
- -1389> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command version hook 0x563939ac2ad0
- -1388> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command git_version hook 0x563939ac2ad0
- -1387> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command help hook 0x563939ab81b0
- -1386> 2019-01-29 15:26:44.527 7f39deeb51c0 5 asok(0x563939e2a000) register_command get_command_descriptions hook 0x563939ab81f0
- -1385> 2019-01-29 15:26:44.527 7f39deeb51c0 1 lockdep using id 8
- -1384> 2019-01-29 15:26:44.527 7f39cca47700 5 asok(0x563939e2a000) entry start
- -1383> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 9
- -1382> 2019-01-29 15:26:44.545 7f39deeb51c0 0 load: jerasure load: lrc load: isa
- -1381> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 10
- -1380> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 11
- -1379> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 12
- -1378> 2019-01-29 15:26:44.545 7f39deeb51c0 1 lockdep using id 13
- -1377> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
- -1376> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- -1375> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
- -1374> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option compression = kNoCompression
- -1373> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option level_compaction_dynamic_level_bytes = true
- -1372> 2019-01-29 15:26:44.545 7f39deeb51c0 0 set rocksdb option write_buffer_size = 33554432
- -1371> 2019-01-29 15:26:44.546 7f39deeb51c0 1 rocksdb: do_open column families: [default]
- -1370> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: RocksDB version: 5.17.2
- -1369> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Git sha rocksdb_build_git_sha:@37828c548a886dccf58a7a93fc2ce13877884c0c@
- -1368> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compile date Jan 28 2019
- -1367> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: DB SUMMARY
- -1366> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: CURRENT file: CURRENT
- -1365> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: IDENTITY file: IDENTITY
- -1364> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes
- -1363> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: SST files in /home/rraja/git/ceph/build/dev/mon.b/store.db dir, Total Num: 0, files:
- -1362> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Write Ahead Log file in /home/rraja/git/ceph/build/dev/mon.b/store.db: 000003.log size: 1091 ;
- -1361> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.error_if_exists: 0
- -1360> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_if_missing: 0
- -1359> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.paranoid_checks: 1
- -1358> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.env: 0x563938d121a0
- -1357> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.info_log: 0x563939e69f40
- -1356> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_file_opening_threads: 16
- -1355> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.statistics: (nil)
- -1354> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_fsync: 0
- -1353> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_log_file_size: 0
- -1352> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_manifest_file_size: 1073741824
- -1351> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.log_file_time_to_roll: 0
- -1350> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.keep_log_file_num: 1000
- -1349> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.recycle_log_file_num: 0
- -1348> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_fallocate: 1
- -1347> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_reads: 0
- -1346> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_mmap_writes: 0
- -1345> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_reads: 0
- -1344> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
- -1343> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.create_missing_column_families: 0
- -1342> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_log_dir:
- -1341> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_dir: /home/rraja/git/ceph/build/dev/mon.b/store.db
- -1340> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.table_cache_numshardbits: 6
- -1339> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_subcompactions: 1
- -1338> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_flushes: -1
- -1337> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_ttl_seconds: 0
- -1336> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.WAL_size_limit_MB: 0
- -1335> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manifest_preallocation_size: 4194304
- -1334> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.is_fd_close_on_exec: 1
- -1333> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.advise_random_on_open: 1
- -1332> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.db_write_buffer_size: 0
- -1331> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_buffer_manager: 0x563939e6a720
- -1330> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.access_hint_on_compaction_start: 1
- -1329> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
- -1328> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.random_access_max_buffer_size: 1048576
- -1327> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.use_adaptive_mutex: 0
- -1326> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.rate_limiter: (nil)
- -1325> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
- -1324> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_recovery_mode: 2
- -1323> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_thread_tracking: 0
- -1322> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_pipelined_write: 0
- -1321> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_concurrent_memtable_write: 1
- -1320> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
- -1319> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_max_yield_usec: 100
- -1318> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.write_thread_slow_yield_usec: 3
- -1317> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.row_cache: None
- -1316> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_filter: None
- -1315> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_recovery: 0
- -1314> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.allow_ingest_behind: 0
- -1313> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.preserve_deletes: 0
- -1312> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.two_write_queues: 0
- -1311> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.manual_wal_flush: 0
- -1310> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_jobs: 2
- -1309> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_background_compactions: -1
- -1308> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.avoid_flush_during_shutdown: 0
- -1307> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
- -1306> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delayed_write_rate : 16777216
- -1305> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_total_wal_size: 0
- -1304> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
- -1303> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.stats_dump_period_sec: 600
- -1302> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.max_open_files: -1
- -1301> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.bytes_per_sync: 0
- -1300> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.wal_bytes_per_sync: 0
- -1299> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Options.compaction_readahead_size: 0
- -1298> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Compression algorithms supported:
- -1297> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTDNotFinalCompression supported: 0
- -1296> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZSTD supported: 0
- -1295> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kXpressCompression supported: 0
- -1294> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4HCCompression supported: 1
- -1293> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kLZ4Compression supported: 1
- -1292> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kBZip2Compression supported: 0
- -1291> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kZlibCompression supported: 1
- -1290> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: kSnappyCompression supported: 1
- -1289> 2019-01-29 15:26:44.546 7f39deeb51c0 4 rocksdb: Fast CRC32 supported: Supported on x86
- -1288> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
- -1287> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
- -1286> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
- -1285> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.merge_operator:
- -1284> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter: None
- -1283> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_filter_factory: None
- -1282> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_factory: SkipListFactory
- -1281> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_factory: BlockBasedTable
- -1280> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563939ac2ab0)
- cache_index_and_filter_blocks: 1
- cache_index_and_filter_blocks_with_high_priority: 1
- pin_l0_filter_and_index_blocks_in_cache: 1
- pin_top_level_index_and_filter: 1
- index_type: 0
- hash_index_allow_collision: 1
- checksum: 1
- no_block_cache: 0
- block_cache: 0x56393a9752a0
- block_cache_name: BinnedLRUCache
- block_cache_options:
- capacity : 536870912
- num_shard_bits : 4
- strict_capacity_limit : 0
- high_pri_pool_ratio: 0.000
- block_cache_compressed: (nil)
- persistent_cache: (nil)
- block_size: 4096
- block_size_deviation: 10
- block_restart_interval: 16
- index_block_restart_interval: 1
- metadata_block_size: 4096
- partition_filters: 0
- use_delta_encoding: 1
- filter_policy: rocksdb.BuiltinBloomFilter
- whole_key_filtering: 1
- verify_compression: 0
- read_amp_bytes_per_bit: 0
- format_version: 2
- enable_index_compression: 1
- block_align: 0
- -1279> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.write_buffer_size: 33554432
- -1278> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number: 2
- -1277> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression: NoCompression
- -1276> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression: Disabled
- -1275> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.prefix_extractor: nullptr
- -1274> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
- -1273> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.num_levels: 7
- -1272> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
- -1271> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
- -1270> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
- -1269> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.level: 32767
- -1268> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
- -1267> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
- -1266> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
- -1265> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bottommost_compression_opts.enabled: false
- -1264> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.window_bits: -14
- -1263> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.level: 32767
- -1262> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.strategy: 0
- -1261> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
- -1260> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
- -1259> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compression_opts.enabled: false
- -1258> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
- -1257> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
- -1256> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level0_stop_writes_trigger: 36
- -1255> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_base: 67108864
- -1254> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.target_file_size_multiplier: 1
- -1253> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_base: 268435456
- -1252> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
- -1251> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
- -1250> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
- -1249> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
- -1248> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
- -1247> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
- -1246> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
- -1245> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
- -1244> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
- -1243> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
- -1242> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_compaction_bytes: 1677721600
- -1241> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.arena_block_size: 4194304
- -1240> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
- -1239> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
- -1238> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
- -1237> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.disable_auto_compactions: 0
- -1236> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
- -1235> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_pri: kByCompensatedSize
- -1234> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
- -1233> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
- -1232> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
- -1231> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
- -1230> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
- -1229> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
- -1228> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
- -1227> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
- -1226> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.compaction_options_fifo.ttl: 0
- -1225> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.table_properties_collectors:
- -1224> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_support: 0
- -1223> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.inplace_update_num_locks: 10000
- -1222> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
- -1221> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.memtable_huge_page_size: 0
- -1220> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.bloom_locality: 0
- -1219> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.max_successive_merges: 0
- -1218> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.optimize_filters_for_hits: 0
- -1217> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.paranoid_file_checks: 0
- -1216> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.force_consistency_checks: 0
- -1215> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.report_bg_io_stats: 0
- -1214> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: Options.ttl: 0
- -1213> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/home/rraja/git/ceph/build/dev/mon.b/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
- -1212> 2019-01-29 15:26:44.547 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
- -1211> 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804548935, "job": 1, "event": "recovery_started", "log_files": [3]}
- -1210> 2019-01-29 15:26:44.548 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
- -1209> 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804555547, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1849, "table_properties": {"data_size": 1103, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 980, "raw_average_value_size": 196, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
- -1208> 2019-01-29 15:26:44.554 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
- -1207> 2019-01-29 15:26:44.563 7f39deeb51c0 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1548755804564214, "job": 1, "event": "recovery_finished"}
- -1206> 2019-01-29 15:26:44.574 7f39deeb51c0 4 rocksdb: [/home/rraja/git/ceph/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x56393a906800
- -1205> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 14
- -1204> 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap
- -1203> 2019-01-29 15:26:44.574 7f39deeb51c0 10 obtain_monmap found mkfs monmap
- -1202> 2019-01-29 15:26:44.574 7f39deeb51c0 10 main monmap:
- {
-     "epoch": 0,
-     "fsid": "3b02750c-f104-4301-aa14-258d2b37f104",
-     "modified": "2019-01-29 15:26:43.871480",
-     "created": "2019-01-29 15:26:43.871480",
-     "features": {
-         "persistent": [],
-         "optional": []
-     },
-     "mons": [
-         {
-             "rank": 0,
-             "name": "a",
-             "public_addrs": {
-                 "addrvec": [
-                     {
-                         "type": "v2",
-                         "addr": "10.215.99.125:40363",
-                         "nonce": 0
-                     },
-                     {
-                         "type": "v1",
-                         "addr": "10.215.99.125:40364",
-                         "nonce": 0
-                     }
-                 ]
-             },
-             "addr": "10.215.99.125:40364/0",
-             "public_addr": "10.215.99.125:40364/0"
-         },
-         {
-             "rank": 1,
-             "name": "b",
-             "public_addrs": {
-                 "addrvec": [
-                     {
-                         "type": "v2",
-                         "addr": "10.215.99.125:40365",
-                         "nonce": 0
-                     },
-                     {
-                         "type": "v1",
-                         "addr": "10.215.99.125:40366",
-                         "nonce": 0
-                     }
-                 ]
-             },
-             "addr": "10.215.99.125:40366/0",
-             "public_addr": "10.215.99.125:40366/0"
-         },
-         {
-             "rank": 2,
-             "name": "c",
-             "public_addrs": {
-                 "addrvec": [
-                     {
-                         "type": "v2",
-                         "addr": "10.215.99.125:40367",
-                         "nonce": 0
-                     },
-                     {
-                         "type": "v1",
-                         "addr": "10.215.99.125:40368",
-                         "nonce": 0
-                     }
-                 ]
-             },
-             "addr": "10.215.99.125:40368/0",
-             "public_addr": "10.215.99.125:40368/0"
-         }
-     ]
- }
- -1201> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 15
- -1200> 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
- -1199> 2019-01-29 15:26:44.574 7f39deeb51c0 5 adding auth protocol: cephx
- -1198> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 16
- -1197> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 17
- -1196> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 18
- -1195> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 19
- -1194> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 20
- -1193> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 21
- -1192> 2019-01-29 15:26:44.574 7f39deeb51c0 1 lockdep using id 22
- -1191> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 23
- -1190> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 24
- -1189> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 25
- -1188> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 26
- -1187> 2019-01-29 15:26:44.575 7f39deeb51c0 1 lockdep using id 27
- -1186> 2019-01-29 15:26:44.575 7f39deeb51c0 0 starting mon.b rank 1 at public addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] at bind addrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
- -1185> 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] learned_addr learned my addr [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] (peer_addr_for_me v2:10.215.99.125:40365/0)
- -1184> 2019-01-29 15:26:44.576 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] _finish_bind bind my_addrs is [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0]
- -1183> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- -1182> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- -1181> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 28
- -1180> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 29
- -1179> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 30
- -1178> 2019-01-29 15:26:44.576 7f39deeb51c0 0 starting mon.b rank 1 at [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] mon_data /home/rraja/git/ceph/build/dev/mon.b fsid 3b02750c-f104-4301-aa14-258d2b37f104
- -1177> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 31
- -1176> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 32
- -1175> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 33
- -1174> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 34
- -1173> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 35
- -1172> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 36
- -1171> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- -1170> 2019-01-29 15:26:44.576 7f39deeb51c0 5 adding auth protocol: cephx
- -1169> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 37
- -1168> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 38
- -1167> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 39
- -1166> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 40
- -1165> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 41
- -1164> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 42
- -1163> 2019-01-29 15:26:44.576 7f39deeb51c0 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
- -1162> 2019-01-29 15:26:44.576 7f39deeb51c0 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
- -1161> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 43
- -1160> 2019-01-29 15:26:44.576 7f39deeb51c0 1 lockdep using id 44
- -1159> 2019-01-29 15:26:44.577 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- -1158> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon sync force name=yes_i_really_mean_it,type=CephBool,req=false name=i_know_what_i_am_doing,type=CephBool,req=false -> mon sync force name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices name=i_know_what_i_am_doing,req=false,strings=--i-know-what-i-am-doing,type=CephChoices
- -1157> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds set name=var,type=CephChoices,strings=max_mds|max_file_size|inline_data|allow_new_snaps|allow_multimds|allow_multimds_snaps|allow_dirfrags name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1156> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mds rmfailed name=role,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1155> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,type=CephBool,req=false -> mds newfs name=metadata,type=CephInt,range=0 name=data,type=CephInt,range=0 name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1154> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,type=CephBool,req=false name=allow_dangerous_metadata_overlay,type=CephBool,req=false -> fs new name=fs_name,type=CephString name=metadata,type=CephString name=data,type=CephString name=force,req=false,strings=--force,type=CephChoices name=allow_dangerous_metadata_overlay,req=false,strings=--allow-dangerous-metadata-overlay,type=CephChoices
- -1153> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs rm name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1152> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs reset name=fs_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1151> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs set name=fs_name,type=CephString name=var,type=CephChoices,strings=max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1150> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> fs flag set name=flag_name,type=CephChoices,strings=enable_multiple name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1149> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> mon feature set name=feature_name,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1148> 2019-01-29 15:26:44.578 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd crush swap-bucket name=source,type=CephString,goodchars=[A-Za-z0-9-_.] name=dest,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1147> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd set-require-min-compat-client name=version,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1146> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,type=CephBool,req=false -> osd erasure-code-profile set name=name,type=CephString,goodchars=[A-Za-z0-9-_.] name=profile,type=CephString,n=N,req=false name=force,req=false,strings=--force,type=CephChoices
- -1145> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,type=CephBool,req=false -> osd set name=key,type=CephChoices,strings=full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1144> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,type=CephBool,req=false -> osd require-osd-release name=release,type=CephChoices,strings=luminous|mimic|nautilus name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1143> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,type=CephBool,req=false -> osd force-create-pg name=pgid,type=CephPgid name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1142> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd destroy-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1141> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-new name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1140> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd purge-actual name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1139> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,type=CephBool,req=false -> osd lost name=id,type=CephOsdName name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1138> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool delete name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- -1137> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,type=CephBool,req=false name=yes_i_really_really_mean_it_not_faking,type=CephBool,req=false -> osd pool rm name=pool,type=CephPoolname name=pool2,type=CephPoolname,req=false name=yes_i_really_really_mean_it,req=false,strings=--yes-i-really-really-mean-it,type=CephChoices name=yes_i_really_really_mean_it_not_faking,req=false,strings=--yes-i-really-really-mean-it-not-faking,type=CephChoices
- -1136> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool set name=pool,type=CephPoolname name=var,type=CephChoices,strings=size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_num_min|target_size_bytes|target_size_ratio name=val,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1135> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application enable name=pool,type=CephPoolname name=app,type=CephString,goodchars=[A-Za-z0-9-_.] name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1134> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,type=CephBool,req=false -> osd pool application disable name=pool,type=CephPoolname name=app,type=CephString name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1133> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,type=CephBool,req=false -> osd tier cache-mode name=pool,type=CephPoolname name=mode,type=CephChoices,strings=none|writeback|forward|readonly|readforward|proxy|readproxy name=yes_i_really_mean_it,req=false,strings=--yes-i-really-mean-it,type=CephChoices
- -1132> 2019-01-29 15:26:44.579 7f39deeb51c0 20 mon.b@-1(probing) e0 pre-nautilus cmd config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,type=CephBool,req=false -> config set name=who,type=CephString name=name,type=CephString name=value,type=CephString name=force,req=false,strings=--force,type=CephChoices
- -1131> 2019-01-29 15:26:44.580 7f39deeb51c0 1 mon.b@-1(probing) e0 preinit fsid 3b02750c-f104-4301-aa14-258d2b37f104
- -1130> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 45
- -1129> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 46
- -1128> 2019-01-29 15:26:44.580 7f39deeb51c0 1 lockdep using id 47
- -1127> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 check_fsid cluster_uuid contains '3b02750c-f104-4301-aa14-258d2b37f104'
- -1126> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 features compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos with k/v store (v0.?)}
- -1125> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 calc_quorum_requirements required_features 0
- -1124> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 required_features 0
- -1123> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 has_ever_joined = 0
- -1122> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_last_committed_floor 0
- -1121> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 init_paxos
- -1120> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init last_pn: 0 accepted_pn: 0 last_committed: 0 first_committed: 0
- -1119> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxos(paxos recovering c 0..0) init
- -1118> 2019-01-29 15:26:44.580 7f39deeb51c0 5 mon.b@-1(probing).mds e0 Unable to load 'last_metadata'
- -1117> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).health init
- -1116> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config init
- -1115> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos
- -1114> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing) e0 refresh_from_paxos no cluster_fingerprint
- -1113> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) refresh
- -1112> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) refresh
- -1111> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) refresh
- -1110> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos
- -1109> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).log v0 update_from_paxos version 0 summary v 0
- -1108> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) refresh
- -1107> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) refresh
- -1106> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).auth v0 update_from_paxos
- -1105> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) refresh
- -1104> 2019-01-29 15:26:44.580 7f39deeb51c0 10 mon.b@-1(probing).config load_config got 0 keys
- -1103> 2019-01-29 15:26:44.580 7f39deeb51c0 20 mon.b@-1(probing).config load_config config map:
- {
-     "global": {},
-     "by_type": {},
-     "by_id": {}
- }
- -1102> 2019-01-29 15:26:44.581 7f39deeb51c0 4 set_mon_vals no callback set
- -1101> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mgrc handle_mgr_map mgrmap(e 0) v1
- -1100> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Got map version 0
- -1099> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc handle_mgr_map Active mgr is now
- -1098> 2019-01-29 15:26:44.584 7f39deeb51c0 4 mgrc reconnect No active mgr available yet
- -1097> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) refresh
- -1096> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat 0
- -1095> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).mgrstat check_subs
- -1094> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).mgrstat update_logger
- -1093> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) refresh
- -1092> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).health update_from_paxos
- -1091> 2019-01-29 15:26:44.584 7f39deeb51c0 20 mon.b@-1(probing).health dump:{
-     "quorum_health": {},
-     "leader_health": {}
- }
- -1090> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) refresh
- -1089> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mdsmap 0..0) post_refresh
- -1088> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(osdmap 0..0) post_refresh
- -1087> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(logm 0..0) post_refresh
- -1086> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(monmap 0..0) post_refresh
- -1085> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(auth 0..0) post_refresh
- -1084> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgr 0..0) post_refresh
- -1083> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(mgrstat 0..0) post_refresh
- -1082> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(health 0..0) post_refresh
- -1081> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing).paxosservice(config 0..0) post_refresh
- -1080> 2019-01-29 15:26:44.584 7f39deeb51c0 10 mon.b@-1(probing) e0 loading initial keyring to bootstrap authentication for mkfs
- -1079> 2019-01-29 15:26:44.584 7f39deeb51c0 2 auth: KeyRing::load: loaded key file /home/rraja/git/ceph/build/dev/mon.b/keyring
- -1078> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command mon_status hook 0x563939ab8750
- -1077> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum_status hook 0x563939ab8750
- -1076> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command sync_force hook 0x563939ab8750
- -1075> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command add_bootstrap_peer_hint hook 0x563939ab8750
- -1074> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command add_bootstrap_peer_hintv hook 0x563939ab8750
- -1073> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum enter hook 0x563939ab8750
- -1072> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command quorum exit hook 0x563939ab8750
- -1071> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command ops hook 0x563939ab8750
- -1070> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command sessions hook 0x563939ab8750
- -1069> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_ops hook 0x563939ab8750
- -1068> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_ops_by_duration hook 0x563939ab8750
- -1067> 2019-01-29 15:26:44.585 7f39deeb51c0 5 asok(0x563939e2a000) register_command dump_historic_slow_ops hook 0x563939ab8750
- -1066> 2019-01-29 15:26:44.585 7f39deeb51c0 1 finished global_init_daemonize
- -1065> 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] start start
- -1064> 2019-01-29 15:26:44.585 7f39deeb51c0 1 -- start start
- -1063> 2019-01-29 15:26:44.585 7f39deeb51c0 2 mon.b@-1(probing) e0 init
- -1062> 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
- -1061> 2019-01-29 15:26:44.585 7f39c6980700 1 lockdep using id 48
- -1060> 2019-01-29 15:26:44.585 7f39deeb51c0 1 Processor -- start
- -1059> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 bootstrap
- -1058> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 sync_reset_requester
- -1057> 2019-01-29 15:26:44.585 7f39deeb51c0 10 mon.b@-1(probing) e0 unregister_cluster_logger - not registered
- -1056> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 cancel_probe_timeout (none scheduled)
- -1055> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 reverting to legacy ranks for seed monmap (epoch 0)
- -1054> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@-1(probing) e0 monmap e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -1053> 2019-01-29 15:26:44.586 7f39deeb51c0 0 mon.b@-1(probing) e0 my rank is now 1 (was -1)
- -1052> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] shutdown_connections
- -1051> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 _reset
- -1050> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
- -1049> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 timecheck_finish
- -1048> 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_tick_stop
- -1047> 2019-01-29 15:26:44.586 7f39deeb51c0 15 mon.b@1(probing) e0 health_interval_stop
- -1046> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_event_cancel
- -1045> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 scrub_reset
- -1044> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- -1043> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- -1042> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- -1041> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- -1040> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
- -1039> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- -1038> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- -1037> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- -1036> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(health 0..0) restart
- -1035> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing).paxosservice(config 0..0) restart
- -1034> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 cancel_probe_timeout (none scheduled)
- -1033> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 reset_probe_timeout 0x56393ab65d70 after 2 seconds
- -1032> 2019-01-29 15:26:44.586 7f39deeb51c0 10 mon.b@1(probing) e0 probing other monitors
- -1031> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916840
- -1030> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916840 con 0x563939c9c900
- -1029> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393a916b00
- -1028> 2019-01-29 15:26:44.586 7f39deeb51c0 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393a916b00 con 0x563939c9cd80
- -1027> 2019-01-29 15:26:44.586 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9cd80 msgr2=0x56393ab9a600 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -1026> 2019-01-29 15:26:44.586 7f39c617f700 10 mon.b@1(probing) e0 ms_handle_refused 0x563939c9cd80 v2:10.215.99.125:40367/0
- -1025> 2019-01-29 15:26:44.586 7f39c417b700 10 mon.b@1(probing) e0 ms_get_authorizer for mon
- -1024> 2019-01-29 15:26:44.586 7f39c417b700 10 cephx: build_service_ticket service mon secret_id 18446744073709551615 ticket_info.ticket.name=mon.
- -1023> 2019-01-29 15:26:44.587 7f39c417b700 10 In get_auth_session_handler for protocol 2
- -1022> 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 14723405194060298632
- -1021> 2019-01-29 15:26:44.587 7f39c417b700 20 Putting signature in client message(seq # 1): sig = 14723405194060298632
- -1020> 2019-01-29 15:26:44.587 7f39c417b700 10 _calc_signature seq 1 front_crc_ = 2859661691 middle_crc = 0 data_crc = 0 sig = 12381238761605199092
- -1019> 2019-01-29 15:26:44.587 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 ==== 58+0+0 (2859661691 0 0) 0x56393a917080 con 0x563939c9c900
- -1018> 2019-01-29 15:26:44.588 7f39c617f700 1 lockdep using id 49
- -1017> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _ms_dispatch new session 0x56393abc0000 MonSession(mon.0 [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -1016> 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(probing) e0 _ms_dispatch setting monitor caps on this connection
- -1015> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
- -1014> 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -1013> 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
- -1012> 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
- -1011> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6
- -1010> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_probe mon.0 v2:10.215.99.125:40363/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name a new) v6 features 4611087854031667199
- -1009> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917340 con 0x563939c9c900
- -1008> 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 2632229567743115917
- -1007> 2019-01-29 15:26:44.588 7f39c417b700 20 Putting signature in client message(seq # 2): sig = 2632229567743115917
- -1006> 2019-01-29 15:26:44.588 7f39c417b700 10 _calc_signature seq 2 front_crc_ = 137113040 middle_crc = 0 data_crc = 0 sig = 5942540075320245331
- -1005> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (137113040 0 0) 0x56393a916840 con 0x563939c9c900
- -1004> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -1003> 2019-01-29 15:26:44.588 7f39c617f700 20 mon.b@1(probing) e0 caps allow *
- -1002> 2019-01-29 15:26:44.588 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -1001> 2019-01-29 15:26:44.588 7f39c617f700 20 allow so far , doing grant allow *
- -1000> 2019-01-29 15:26:44.588 7f39c617f700 20 allow all
- -999> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
- -998> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 handle_probe_reply mon.0 v2:10.215.99.125:40363/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name a paxos( fc 0 lc 0 ) new) v6
- -997> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -996> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 peer name is a
- -995> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 mon.a is outside the quorum
- -994> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 outside_quorum now a,b, need 2
- -993> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 that's enough to form a new quorum, calling election
- -992> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 start_election
- -991> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 _reset
- -990> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 cancel_probe_timeout 0x56393ab65d70
- -989> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 timecheck_finish
- -988> 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_tick_stop
- -987> 2019-01-29 15:26:44.588 7f39c617f700 15 mon.b@1(probing) e0 health_interval_stop
- -986> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_event_cancel
- -985> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing) e0 scrub_reset
- -984> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- -983> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- -982> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- -981> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- -980> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 0..0) restart
- -979> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- -978> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- -977> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- -976> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- -975> 2019-01-29 15:26:44.588 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- -974> 2019-01-29 15:26:44.588 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- -973> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client _send_to_mon log to self
- -972> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client log_queue is 1 last_log 1 sent 0 num 1 unsent 1 sending 1
- -971> 2019-01-29 15:26:44.588 7f39c617f700 10 log_client will send 2019-01-29 15:26:44.589509 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
- -970> 2019-01-29 15:26:44.588 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 -- 0x56393abc06c0 con 0x563939d41600
- -969> 2019-01-29 15:26:44.588 7f39c617f700 5 mon.b@1(electing).elector(0) start -- can i be leader?
- -968> 2019-01-29 15:26:44.588 7f39c617f700 1 mon.b@1(electing).elector(0) init, first boot, initializing epoch at 1
- -967> 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 415793505725448275
- -966> 2019-01-29 15:26:44.605 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- -965> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0f800
- -964> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0f800 con 0x563939c9c900
- -963> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- ?+0 0x563939b0fb00
- -962> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 -- 0x563939b0fb00 con 0x563939c9cd80
- -961> 2019-01-29 15:26:44.605 7f39c417b700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 10881007314159201096
- -960> 2019-01-29 15:26:44.605 7f39c417b700 20 Putting signature in client message(seq # 3): sig = 10881007314159201096
- -959> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 ==== 0+0+0 (0 0 0) 0x56393abc06c0 con 0x563939d41600
- -958> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0d80 MonSession(mon.1 [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -957> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
- -956> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -955> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -954> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:44.606661 lease_expire=0.000000 has v0 lc 0
- -953> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -952> 2019-01-29 15:26:44.605 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0f500 con 0x563939c9c900
- -951> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -950> 2019-01-29 15:26:44.605 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -949> 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -948> 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
- -947> 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
- -946> 2019-01-29 15:26:44.605 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- -945> 2019-01-29 15:26:44.605 7f39c617f700 20 allow so far , doing grant allow *
- -944> 2019-01-29 15:26:44.605 7f39c617f700 20 allow all
- -943> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.0
- -942> 2019-01-29 15:26:44.605 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -941> 2019-01-29 15:26:44.605 7f39c617f700 5 mon.b@1(electing).elector(1) defer to 0
- -940> 2019-01-29 15:26:44.606 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- -939> 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- ?+0 0x56393abd4000
- -938> 2019-01-29 15:26:44.606 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 ack 1) v7 -- 0x56393abd4000 con 0x563939c9c900
- -937> 2019-01-29 15:26:44.606 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 3541336743 middle_crc = 0 data_crc = 0 sig = 8903611187062716984
- -936> 2019-01-29 15:26:44.606 7f39c417b700 20 Putting signature in client message(seq # 4): sig = 8903611187062716984
- -935> 2019-01-29 15:26:44.728 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=35 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53000/0 addrs are 145
- -934> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- -933> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- -932> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 18425495964075312649
- -931> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- -930> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- -929> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 18425495964075312650 expecting 18425495964075312650
- -928> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
- -927> 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
- -926> 2019-01-29 15:26:44.729 7f39c397a700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] conn(0x563939c9f600 0x56393ab9b200 :53000 s=CLOSED pgs=0 cs=0 l=0).replace stop myself to swap existing
- -925> 2019-01-29 15:26:44.729 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_reset 0x563939c9f600 v2:10.215.99.125:40367/0
- -924> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- -923> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- -922> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer adding server_challenge 12808771302250315903
- -921> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer decrypted service mon secret_id=18446744073709551615
- -920> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer global_id=0
- -919> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: cephx_verify_authorizer got server_challenge+1 12808771302250315904 expecting 12808771302250315904
- -918> 2019-01-29 15:26:44.729 7f39c397a700 10 cephx: verify_authorizer ok nonce 3ba81e136c9e6ffc reply_bl.length()=36
- -917> 2019-01-29 15:26:44.729 7f39c397a700 10 In get_auth_session_handler for protocol 2
- -916> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 4598065285107427218
- -915> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 1): sig = 4598065285107427218
- -914> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 12491993532019861601
- -913> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 2): sig = 12491993532019861601
- -912> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 1 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 4581398896165078038
- -911> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 1 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a917600 con 0x563939c9cd80
- -910> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc0fc0 MonSession(mon.2 [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -909> 2019-01-29 15:26:44.730 7f39c617f700 5 mon.b@1(electing) e0 _ms_dispatch setting monitor caps on this connection
- -908> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -907> 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -906> 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
- -905> 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
- -904> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- -903> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- -902> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 0 lc 0 ) new) v6 -- 0x56393a917b80 con 0x563939c9cd80
- -901> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1449868566 middle_crc = 0 data_crc = 0 sig = 17022378831273095551
- -900> 2019-01-29 15:26:44.730 7f39c397a700 20 Putting signature in client message(seq # 3): sig = 17022378831273095551
- -899> 2019-01-29 15:26:44.730 7f39c397a700 10 _calc_signature seq 2 front_crc_ = 1672072532 middle_crc = 0 data_crc = 0 sig = 8103692841582008270
- -898> 2019-01-29 15:26:44.730 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 2 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6 ==== 430+0+0 (1672072532 0 0) 0x56393a917b80 con 0x563939c9cd80
- -897> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -896> 2019-01-29 15:26:44.730 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -895> 2019-01-29 15:26:44.730 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -894> 2019-01-29 15:26:44.730 7f39c617f700 20 allow so far , doing grant allow *
- -893> 2019-01-29 15:26:44.730 7f39c617f700 20 allow all
- -892> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
- -891> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 0 lc 0 ) new) v6
- -890> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 monmap is e0: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -889> 2019-01-29 15:26:44.730 7f39c617f700 10 mon.b@1(electing) e0 peer name is c
- -888> 2019-01-29 15:26:44.735 7f39c397a700 10 _calc_signature seq 3 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4384171504794788302
- -887> 2019-01-29 15:26:44.735 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 3 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x563939b0fb00 con 0x563939c9cd80
- -886> 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -885> 2019-01-29 15:26:44.735 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -884> 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -883> 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
- -882> 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
- -881> 2019-01-29 15:26:44.735 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -880> 2019-01-29 15:26:44.735 7f39c617f700 20 allow so far , doing grant allow *
- -879> 2019-01-29 15:26:44.735 7f39c617f700 20 allow all
- -878> 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) handle_propose from mon.2
- -877> 2019-01-29 15:26:44.735 7f39c617f700 10 mon.b@1(electing).elector(1) handle_propose required features 0 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -876> 2019-01-29 15:26:44.735 7f39c617f700 5 mon.b@1(electing).elector(1) no, we already acked 0
- -875> 2019-01-29 15:26:44.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x563939c9fa80 0x56393ab9b800 :53002 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=36 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53002/0 addrs are 145
- -874> 2019-01-29 15:26:44.868 7f39c3179700 10 In get_auth_session_handler for protocol 0
- -873> 2019-01-29 15:26:44.868 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393abc1b00 con 0x563939c9fa80
- -872> 2019-01-29 15:26:44.868 7f39c617f700 10 mon.b@1(electing) e0 _ms_dispatch new session 0x56393abc1200 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -871> 2019-01-29 15:26:44.868 7f39c617f700 20 mon.b@1(electing) e0 caps
- -870> 2019-01-29 15:26:44.868 7f39c617f700 5 mon.b@1(electing) e0 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- -869> 2019-01-29 15:26:49.585 7f39c8984700 11 mon.b@1(electing) e0 tick
- -868> 2019-01-29 15:26:49.585 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
- -867> 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -866> 2019-01-29 15:26:49.585 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.586781 lease_expire=0.000000 has v0 lc 0
- -865> 2019-01-29 15:26:49.585 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -864> 2019-01-29 15:26:49.618 7f39c417b700 10 _calc_signature seq 4 front_crc_ = 4020688038 middle_crc = 0 data_crc = 0 sig = 6749900040655150956
- -863> 2019-01-29 15:26:49.618 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 4 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 victory 2) v7 ==== 41889+0+0 (4020688038 0 0) 0x56393abd4000 con 0x563939c9c900
- -862> 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -861> 2019-01-29 15:26:49.618 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -860> 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -859> 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
- -858> 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
- -857> 2019-01-29 15:26:49.618 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- -856> 2019-01-29 15:26:49.618 7f39c617f700 20 allow so far , doing grant allow *
- -855> 2019-01-29 15:26:49.618 7f39c617f700 20 allow all
- -854> 2019-01-29 15:26:49.619 7f39c617f700 5 mon.b@1(electing).elector(1) handle_victory from mon.0 quorum_features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -853> 2019-01-29 15:26:49.619 7f39c617f700 10 mon.b@1(electing).elector(1) bump_epoch 1 to 2
- -852> 2019-01-29 15:26:49.623 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 2611882610 middle_crc = 0 data_crc = 0 sig = 10249081017414072692
- -851> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 join_election
- -850> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 _reset
- -849> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- -848> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
- -847> 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
- -846> 2019-01-29 15:26:49.625 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
- -845> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
- -844> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
- -843> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- -842> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- -841> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- -840> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- -839> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -838> 2019-01-29 15:26:49.625 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626824 lease_expire=0.000000 has v0 lc 0
- -837> 2019-01-29 15:26:49.625 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -836> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- -835> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- -834> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -833> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.626971 lease_expire=0.000000 has v0 lc 0
- -832> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -831> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- -830> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- -829> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- -828> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- -827> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon) e0 lose_election, epoch 2 leader is mon0 quorum is 0,1 features are 4611087854031667199 mon_features are mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -826> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) peon_init -- i am a peon
- -825> 2019-01-29 15:26:49.626 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
- -824> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) election_finished
- -823> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active - not active
- -822> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) election_finished
- -821> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active - not active
- -820> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) election_finished
- -819> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -818> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627225 lease_expire=0.000000 has v0 lc 0
- -817> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -816> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active - not active
- -815> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) election_finished
- -814> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active - not active
- -813> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) election_finished
- -812> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -811> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.627327 lease_expire=0.000000 has v0 lc 0
- -810> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -809> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active - not active
- -808> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) election_finished
- -807> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active - not active
- -806> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) election_finished
- -805> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active - not active
- -804> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) election_finished
- -803> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active - not active
- -802> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) election_finished
- -801> 2019-01-29 15:26:49.626 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active - not active
- -800> 2019-01-29 15:26:49.626 7f39c617f700 5 mon.b@1(peon) e0 apply_quorum_to_compatset_features
- -799> 2019-01-29 15:26:49.626 7f39c617f700 1 mon.b@1(peon) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
- -798> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 calc_quorum_requirements required_features 549755813888
- -797> 2019-01-29 15:26:49.632 7f39c617f700 5 mon.b@1(peon) e0 apply_monmap_to_compatset_features
- -796> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
- -795> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 resend_routed_requests
- -794> 2019-01-29 15:26:49.632 7f39c617f700 10 mon.b@1(peon) e0 register_cluster_logger
- -793> 2019-01-29 15:26:49.634 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 5 ==== paxos(collect lc 0 fc 0 pn 100 opn 0) v4 ==== 84+0+0 (2611882610 0 0) 0x563939b0f800 con 0x563939c9c900
- -792> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -791> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- -790> 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -789> 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
- -788> 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
- -787> 2019-01-29 15:26:49.634 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- -786> 2019-01-29 15:26:49.634 7f39c617f700 20 allow so far , doing grant allow *
- -785> 2019-01-29 15:26:49.634 7f39c617f700 20 allow all
- -784> 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) handle_collect paxos(collect lc 0 fc 0 pn 100 opn 0) v4
- -783> 2019-01-29 15:26:49.634 7f39c617f700 20 mon.b@1(peon).paxos(paxos recovering c 0..0) reset_lease_timeout - setting timeout event
- -782> 2019-01-29 15:26:49.634 7f39c617f700 10 mon.b@1(peon).paxos(paxos recovering c 0..0) accepting pn 100 from 0
- -781> 2019-01-29 15:26:49.639 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(last lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4300 con 0x563939c9c900
- -780> 2019-01-29 15:26:49.639 7f39c417b700 10 _calc_signature seq 5 front_crc_ = 237159531 middle_crc = 0 data_crc = 0 sig = 17831234451164454257
- -779> 2019-01-29 15:26:49.639 7f39c417b700 20 Putting signature in client message(seq # 5): sig = 17831234451164454257
- -778> 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 4255564515 middle_crc = 0 data_crc = 0 sig = 9698958856354677651
- -777> 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 6 ==== paxos(lease lc 0 fc 0 pn 0 opn 0) v4 ==== 84+0+0 (4255564515 0 0) 0x56393abd4300 con 0x563939c9c900
- -776> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -775> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- -774> 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -773> 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
- -772> 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
- -771> 2019-01-29 15:26:49.641 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- -770> 2019-01-29 15:26:49.641 7f39c617f700 20 allow so far , doing grant allow *
- -769> 2019-01-29 15:26:49.641 7f39c617f700 20 allow all
- -768> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_lease on 0 now 2019-01-29 15:26:54.641538
- -767> 2019-01-29 15:26:49.641 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(lease_ack lc 0 fc 0 pn 0 opn 0) v4 -- 0x56393abd4600 con 0x563939c9c900
- -766> 2019-01-29 15:26:49.641 7f39c617f700 20 mon.b@1(peon).paxos(paxos active c 0..0) reset_lease_timeout - setting timeout event
- -765> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) _active
- -764> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(mdsmap 0..0) _active we are not the leader, hence we propose nothing!
- -763> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) _active
- -762> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(osdmap 0..0) _active we are not the leader, hence we propose nothing!
- -761> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 update_logger
- -760> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 take_all_failures on 0 osds
- -759> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).osd e0 start_mapping no pools, no mapping job
- -758> 2019-01-29 15:26:49.641 7f39c417b700 10 _calc_signature seq 6 front_crc_ = 3188025122 middle_crc = 0 data_crc = 0 sig = 13621511732933224171
- -757> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) _active
- -756> 2019-01-29 15:26:49.641 7f39c417b700 20 Putting signature in client message(seq # 6): sig = 13621511732933224171
- -755> 2019-01-29 15:26:49.641 7f39c617f700 7 mon.b@1(peon).paxosservice(logm 0..0) _active we are not the leader, hence we propose nothing!
- -754> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -753> 2019-01-29 15:26:49.641 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.642816 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- -752> 2019-01-29 15:26:49.641 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -751> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) _active
- -750> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(monmap 0..0) _active we are not the leader, hence we propose nothing!
- -749> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).monmap v0 apply_mon_features wait for service to be writeable
- -748> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) _active
- -747> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(auth 0..0) _active we are not the leader, hence we propose nothing!
- -746> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -745> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643006 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- -744> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -743> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).auth v0 AuthMonitor::on_active()
- -742> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) _active
- -741> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgr 0..0) _active we are not the leader, hence we propose nothing!
- -740> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) _active
- -739> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(mgrstat 0..0) _active we are not the leader, hence we propose nothing!
- -738> 2019-01-29 15:26:49.642 7f39c617f700 20 mon.b@1(peon).mgrstat update_logger
- -737> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) _active
- -736> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(health 0..0) _active we are not the leader, hence we propose nothing!
- -735> 2019-01-29 15:26:49.642 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) _active
- -734> 2019-01-29 15:26:49.642 7f39c617f700 7 mon.b@1(peon).paxosservice(config 0..0) _active we are not the leader, hence we propose nothing!
- -733> 2019-01-29 15:26:49.642 7f39c617f700 5 mon.b@1(peon).paxos(paxos active c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.643134 lease_expire=2019-01-29 15:26:54.641538 has v0 lc 0
- -732> 2019-01-29 15:26:49.652 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 359680839 middle_crc = 0 data_crc = 0 sig = 14961307058218807505
- -731> 2019-01-29 15:26:49.653 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.0 v2:10.215.99.125:40363/0 7 ==== paxos(begin lc 0 fc 0 pn 100 opn 0) v4 ==== 2292+0+0 (359680839 0 0) 0x56393abd4600 con 0x563939c9c900
- -730> 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0000 for mon.0
- -729> 2019-01-29 15:26:49.653 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- -728> 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40363/0 on cap allow *
- -727> 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
- -726> 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
- -725> 2019-01-29 15:26:49.653 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40363/0 on cap allow *
- -724> 2019-01-29 15:26:49.653 7f39c617f700 20 allow so far , doing grant allow *
- -723> 2019-01-29 15:26:49.653 7f39c617f700 20 allow all
- -722> 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos active c 0..0) handle_begin paxos(begin lc 0 fc 0 pn 100 opn 0) v4
- -721> 2019-01-29 15:26:49.653 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) accepting value for 1 pn 100
- -720> 2019-01-29 15:26:49.657 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- paxos(accept lc 0 fc 0 pn 100 opn 0) v4 -- 0x56393abd4900 con 0x563939c9c900
- -719> 2019-01-29 15:26:49.657 7f39c417b700 10 _calc_signature seq 7 front_crc_ = 3516937211 middle_crc = 0 data_crc = 0 sig = 18198979658301600163
- -718> 2019-01-29 15:26:49.657 7f39c417b700 20 Putting signature in client message(seq # 7): sig = 18198979658301600163
- -717> 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 1185258935604909951
- -716> 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 4 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393a916b00 con 0x563939c9cd80
- -715> 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -714> 2019-01-29 15:26:49.736 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- -713> 2019-01-29 15:26:49.736 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -712> 2019-01-29 15:26:49.736 7f39c617f700 20 allow so far , doing grant allow *
- -711> 2019-01-29 15:26:49.736 7f39c617f700 20 allow all
- -710> 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- -709> 2019-01-29 15:26:49.736 7f39c617f700 10 mon.b@1(peon) e0 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- -708> 2019-01-29 15:26:49.736 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b quorum 0,1 paxos( fc 0 lc 0 ) new) v6 -- 0x56393affe000 con 0x563939c9cd80
- -707> 2019-01-29 15:26:49.736 7f39c397a700 10 _calc_signature seq 4 front_crc_ = 3631419306 middle_crc = 0 data_crc = 0 sig = 17101328230604225658
- -706> 2019-01-29 15:26:49.736 7f39c397a700 20 Putting signature in client message(seq # 4): sig = 17101328230604225658
- -705> 2019-01-29 15:26:49.754 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 1459118475 middle_crc = 0 data_crc = 0 sig = 4909251354610610373
- -704> 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 5 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 1) v7 ==== 450+0+0 (1459118475 0 0) 0x56393abd5500 con 0x563939c9cd80
- -703> 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -702> 2019-01-29 15:26:49.754 7f39c617f700 20 mon.b@1(peon) e0 caps allow *
- -701> 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -700> 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
- -699> 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
- -698> 2019-01-29 15:26:49.754 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -697> 2019-01-29 15:26:49.754 7f39c617f700 20 allow so far , doing grant allow *
- -696> 2019-01-29 15:26:49.754 7f39c617f700 20 allow all
- -695> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) handle_propose from mon.2
- -694> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).elector(2) handle_propose required features 549755813888 mon_feature_t([none]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -693> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).elector(2) got propose from old epoch, quorum is 0,1, mon.2 must have just started
- -692> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 start_election
- -691> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 _reset
- -690> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 cancel_probe_timeout (none scheduled)
- -689> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 timecheck_finish
- -688> 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_tick_stop
- -687> 2019-01-29 15:26:49.754 7f39c617f700 15 mon.b@1(peon) e0 health_interval_stop
- -686> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_event_cancel
- -685> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon) e0 scrub_reset
- -684> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxos(paxos updating c 0..0) restart -- canceling timeouts
- -683> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mdsmap 0..0) restart
- -682> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(osdmap 0..0) restart
- -681> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) restart
- -680> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -679> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755397 lease_expire=0.000000 has v0 lc 0
- -678> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -677> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(monmap 0..0) restart
- -676> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) restart
- -675> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -674> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(peon).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.755436 lease_expire=0.000000 has v0 lc 0
- -673> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -672> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgr 0..0) restart
- -671> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(mgrstat 0..0) restart
- -670> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(health 0..0) restart
- -669> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(peon).paxosservice(config 0..0) restart
- -668> 2019-01-29 15:26:49.754 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- -667> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client _send_to_mon log to self
- -666> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client log_queue is 2 last_log 2 sent 1 num 2 unsent 1 sending 1
- -665> 2019-01-29 15:26:49.754 7f39c617f700 10 log_client will send 2019-01-29 15:26:49.755480 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
- -664> 2019-01-29 15:26:49.754 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 -- 0x56393b000000 con 0x563939d41600
- -663> 2019-01-29 15:26:49.754 7f39c617f700 5 mon.b@1(electing).elector(2) start -- can i be leader?
- -662> 2019-01-29 15:26:49.754 7f39c617f700 1 mon.b@1(electing).elector(2) init, last seen epoch 2
- -661> 2019-01-29 15:26:49.754 7f39c617f700 10 mon.b@1(electing).elector(2) bump_epoch 2 to 3
- -660> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 join_election
- -659> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 _reset
- -658> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- -657> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 timecheck_finish
- -656> 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_tick_stop
- -655> 2019-01-29 15:26:49.758 7f39c617f700 15 mon.b@1(electing) e0 health_interval_stop
- -654> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_event_cancel
- -653> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing) e0 scrub_reset
- -652> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- -651> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- -650> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- -649> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- -648> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -647> 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759261 lease_expire=0.000000 has v0 lc 0
- -646> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -645> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- -644> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- -643> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -642> 2019-01-29 15:26:49.758 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.759298 lease_expire=0.000000 has v0 lc 0
- -641> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -640> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- -639> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- -638> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- -637> 2019-01-29 15:26:49.758 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- -636> 2019-01-29 15:26:49.758 7f39c617f700 -1 mon.b@1(electing) e0 devname dm-0
- -635> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4c00
- -634> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4c00 con 0x563939c9c900
- -633> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- ?+0 0x56393abd4f00
- -632> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 3) v7 -- 0x56393abd4f00 con 0x563939c9cd80
- -631> 2019-01-29 15:26:49.759 7f39c417b700 10 _calc_signature seq 8 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 12490028364350923317
- -630> 2019-01-29 15:26:49.759 7f39c417b700 20 Putting signature in client message(seq # 8): sig = 12490028364350923317
- -629> 2019-01-29 15:26:49.759 7f39c397a700 10 _calc_signature seq 5 front_crc_ = 68489554 middle_crc = 0 data_crc = 0 sig = 13778264798706442571
- -628> 2019-01-29 15:26:49.759 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 ==== 0+0+0 (0 0 0) 0x56393b000000 con 0x563939d41600
- -627> 2019-01-29 15:26:49.759 7f39c397a700 20 Putting signature in client message(seq # 5): sig = 13778264798706442571
- -626> 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- -625> 2019-01-29 15:26:49.759 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -624> 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -623> 2019-01-29 15:26:49.759 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:49.760231 lease_expire=0.000000 has v0 lc 0
- -622> 2019-01-29 15:26:49.759 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -621> 2019-01-29 15:26:49.764 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2637561887 middle_crc = 0 data_crc = 0 sig = 10520589114785615889
- -620> 2019-01-29 15:26:49.764 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 6 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 3) v7 ==== 1190+0+0 (2637561887 0 0) 0x56393abd4f00 con 0x563939c9cd80
- -619> 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -618> 2019-01-29 15:26:49.764 7f39c617f700 20 mon.b@1(electing) e0 caps allow *
- -617> 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -616> 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
- -615> 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
- -614> 2019-01-29 15:26:49.764 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -613> 2019-01-29 15:26:49.764 7f39c617f700 20 allow so far , doing grant allow *
- -612> 2019-01-29 15:26:49.764 7f39c617f700 20 allow all
- -611> 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) handle_ack from mon.2
- -610> 2019-01-29 15:26:49.764 7f39c617f700 5 mon.b@1(electing).elector(3) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
- -609> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk reading from fd=30 : Unknown error -104
- -608> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed
- -607> 2019-01-29 15:26:50.038 7f39c417b700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).handle_message read tag failed
- -606> 2019-01-29 15:26:50.038 7f39c417b700 0 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 0x56393ab9a000 :-1 s=OPENED pgs=5 cs=1 l=0).fault initiating reconnect
- -605> 2019-01-29 15:26:50.038 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -604> 2019-01-29 15:26:50.038 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -603> 2019-01-29 15:26:50.239 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -602> 2019-01-29 15:26:50.239 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -601> 2019-01-29 15:26:50.639 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -600> 2019-01-29 15:26:50.640 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -599> 2019-01-29 15:26:51.441 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -598> 2019-01-29 15:26:51.441 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -597> 2019-01-29 15:26:53.043 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -596> 2019-01-29 15:26:53.043 7f39c617f700 10 mon.b@1(electing) e0 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -595> 2019-01-29 15:26:54.586 7f39c8984700 11 mon.b@1(electing) e0 tick
- -594> 2019-01-29 15:26:54.586 7f39c8984700 20 mon.b@1(electing) e0 sync_trim_providers
- -593> 2019-01-29 15:26:54.759 7f39c8984700 5 mon.b@1(electing).elector(3) election timer expired
- -592> 2019-01-29 15:26:54.759 7f39c8984700 10 mon.b@1(electing).elector(3) bump_epoch 3 to 4
- -591> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 join_election
- -590> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 _reset
- -589> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 cancel_probe_timeout (none scheduled)
- -588> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 timecheck_finish
- -587> 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_tick_stop
- -586> 2019-01-29 15:26:54.770 7f39c8984700 15 mon.b@1(electing) e0 health_interval_stop
- -585> 2019-01-29 15:26:54.770 7f39c8984700 10 mon.b@1(electing) e0 scrub_event_cancel
- -584> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing) e0 scrub_reset
- -583> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 0..0) restart -- canceling timeouts
- -582> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- -581> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- -580> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- -579> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -578> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772105 lease_expire=0.000000 has v0 lc 0
- -577> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -576> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -575> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772223 lease_expire=0.000000 has v0 lc 0
- -574> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -573> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 0..0) restart
- -572> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- -571> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -570> 2019-01-29 15:26:54.771 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.772325 lease_expire=0.000000 has v0 lc 0
- -569> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -568> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- -567> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- -566> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- -565> 2019-01-29 15:26:54.771 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- -564> 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- ?+0 0x56393abd5800
- -563> 2019-01-29 15:26:54.771 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 4) v7 -- 0x56393abd5800 con 0x563939c9cd80
- -562> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(electing) e0 win_election epoch 4 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -561> 2019-01-29 15:26:54.772 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
- -560> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client _send_to_mon log to self
- -559> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client log_queue is 3 last_log 3 sent 2 num 3 unsent 1 sending 1
- -558> 2019-01-29 15:26:54.772 7f39c397a700 10 _calc_signature seq 6 front_crc_ = 2368543008 middle_crc = 0 data_crc = 0 sig = 5116208503718269665
- -557> 2019-01-29 15:26:54.772 7f39c397a700 20 Putting signature in client message(seq # 6): sig = 5116208503718269665
- -556> 2019-01-29 15:26:54.772 7f39c8984700 10 log_client will send 2019-01-29 15:26:54.773061 mon.b (mon.1) 3 : cluster [INF] mon.b is new leader, mons b,c in quorum (ranks 1,2)
- -555> 2019-01-29 15:26:54.772 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 -- 0x56393b000d80 con 0x563939d41600
- -554> 2019-01-29 15:26:54.772 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 ==== 0+0+0 (0 0 0) 0x56393b000d80 con 0x563939d41600
- -553> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) leader_init -- starting paxos recovery
- -552> 2019-01-29 15:26:54.772 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) learned uncommitted 1 pn 100 (2196 bytes) from myself
- -551> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) get_new_proposal_number = 201
- -550> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) collect with pn 201
- -549> 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5b00
- -548> 2019-01-29 15:26:54.777 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5b00 con 0x563939c9cd80
- -547> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) election_finished
- -546> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 0..0) _active - not active
- -545> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
- -544> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
- -543> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
- -542> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
- -541> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
- -540> 2019-01-29 15:26:54.777 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 1789744833 middle_crc = 0 data_crc = 0 sig = 2735914805292709925
- -539> 2019-01-29 15:26:54.777 7f39c397a700 20 Putting signature in client message(seq # 7): sig = 2735914805292709925
- -538> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -537> 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778696 lease_expire=0.000000 has v0 lc 0
- -536> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -535> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -534> 2019-01-29 15:26:54.777 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.778814 lease_expire=0.000000 has v0 lc 0
- -533> 2019-01-29 15:26:54.777 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -532> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
- -531> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
- -530> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -529> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.779003 lease_expire=0.000000 has v0 lc 0
- -528> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -527> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
- -526> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
- -525> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
- -524> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
- -523> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
- -522> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
- -521> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
- -520> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
- -519> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
- -518> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_quorum_to_compatset_features
- -517> 2019-01-29 15:26:54.778 7f39c8984700 5 mon.b@1(leader) e0 apply_monmap_to_compatset_features
- -516> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 timecheck_finish
- -515> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 resend_routed_requests
- -514> 2019-01-29 15:26:54.778 7f39c8984700 10 mon.b@1(leader) e0 register_cluster_logger - already registered
- -513> 2019-01-29 15:26:54.778 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- -512> 2019-01-29 15:26:54.779 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- -511> 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -510> 2019-01-29 15:26:54.779 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 0..0) is_readable = 0 - now=2019-01-29 15:26:54.780024 lease_expire=0.000000 has v0 lc 0
- -509> 2019-01-29 15:26:54.779 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -508> 2019-01-29 15:26:54.794 7f39c397a700 10 _calc_signature seq 7 front_crc_ = 2077645696 middle_crc = 0 data_crc = 0 sig = 16008821289831722732
- -507> 2019-01-29 15:26:54.795 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 7 ==== paxos(last lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (2077645696 0 0) 0x56393abd5b00 con 0x563939c9cd80
- -506> 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -505> 2019-01-29 15:26:54.795 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- -504> 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -503> 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
- -502> 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
- -501> 2019-01-29 15:26:54.795 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -500> 2019-01-29 15:26:54.795 7f39c617f700 20 allow so far , doing grant allow *
- -499> 2019-01-29 15:26:54.795 7f39c617f700 20 allow all
- -498> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) handle_last paxos(last lc 0 fc 0 pn 201 opn 0) v4
- -497> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) store_state nothing to commit
- -496> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) they accepted our pn, we now have 2 peons
- -495> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 0..0) that's everyone. begin on old learned value
- -494> 2019-01-29 15:26:54.795 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) begin for 1 2196 bytes
- -493> 2019-01-29 15:26:54.800 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) sending begin to mon.2
- -492> 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393abd5200
- -491> 2019-01-29 15:26:54.800 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 0 fc 0 pn 201 opn 0) v4 -- 0x56393abd5200 con 0x563939c9cd80
- -490> 2019-01-29 15:26:54.800 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3535318905 middle_crc = 0 data_crc = 0 sig = 6870884659653601128
- -489> 2019-01-29 15:26:54.800 7f39c397a700 20 Putting signature in client message(seq # 8): sig = 6870884659653601128
- -488> 2019-01-29 15:26:54.807 7f39c397a700 10 _calc_signature seq 8 front_crc_ = 3909173416 middle_crc = 0 data_crc = 0 sig = 17406004129815643634
- -487> 2019-01-29 15:26:54.807 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 8 ==== paxos(accept lc 0 fc 0 pn 201 opn 0) v4 ==== 84+0+0 (3909173416 0 0) 0x56393abd5200 con 0x563939c9cd80
- -486> 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -485> 2019-01-29 15:26:54.807 7f39c617f700 20 mon.b@1(leader) e0 caps allow *
- -484> 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -483> 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
- -482> 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
- -481> 2019-01-29 15:26:54.807 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -480> 2019-01-29 15:26:54.807 7f39c617f700 20 allow so far , doing grant allow *
- -479> 2019-01-29 15:26:54.807 7f39c617f700 20 allow all
- -478> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) handle_accept paxos(accept lc 0 fc 0 pn 201 opn 0) v4
- -477> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) now 1,2 have accepted
- -476> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) got majority, committing, done with update
- -475> 2019-01-29 15:26:54.807 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating-previous c 0..0) commit_start 1
- -474> 2019-01-29 15:26:54.813 7f39c2978700 20 mon.b@1(leader).paxos(paxos writing-previous c 0..0) commit_finish 1
- -473> 2019-01-29 15:26:54.813 7f39c2978700 10 mon.b@1(leader).paxos(paxos writing-previous c 1..1) sending commit to mon.2
- -472> 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- ?+0 0x56393b00a000
- -471> 2019-01-29 15:26:54.813 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(commit lc 1 fc 0 pn 201 opn 0) v4 -- 0x56393b00a000 con 0x563939c9cd80
- -470> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader) e0 refresh_from_paxos
- -469> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
- -468> 2019-01-29 15:26:54.814 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 558359806 middle_crc = 0 data_crc = 0 sig = 15537979346010902130
- -467> 2019-01-29 15:26:54.814 7f39c397a700 20 Putting signature in client message(seq # 9): sig = 15537979346010902130
- -466> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
- -465> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
- -464> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos
- -463> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
- -462> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
- -461> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos version 1, my v 0
- -460> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 signaling that we need a bootstrap
- -459> 2019-01-29 15:26:54.814 7f39c2978700 10 mon.b@1(leader).monmap v0 update_from_paxos got 1
- -458> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
- -457> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).auth v0 update_from_paxos
- -456> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
- -455> 2019-01-29 15:26:54.819 7f39c2978700 10 mon.b@1(leader).config load_config got 0 keys
- -454> 2019-01-29 15:26:54.819 7f39c2978700 20 mon.b@1(leader).config load_config config map:
- {
- "global": {},
- "by_type": {},
- "by_id": {}
- }
- -453> 2019-01-29 15:26:54.819 7f39c2978700 4 set_mon_vals no callback set
- -452> 2019-01-29 15:26:54.830 7f39c2978700 20 mgrc handle_mgr_map mgrmap(e 0) v1
- -451> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Got map version 0
- -450> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc handle_mgr_map Active mgr is now
- -449> 2019-01-29 15:26:54.830 7f39c2978700 4 mgrc reconnect No active mgr available yet
- -448> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
- -447> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat 0
- -446> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).mgrstat check_subs
- -445> 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).mgrstat update_logger
- -444> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
- -443> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).health update_from_paxos
- -442> 2019-01-29 15:26:54.830 7f39c2978700 20 mon.b@1(leader).health dump:{
- "quorum_health": {},
- "leader_health": {}
- }
- -441> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
- -440> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
- -439> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
- -438> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
- -437> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
- -436> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
- -435> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
- -434> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
- -433> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
- -432> 2019-01-29 15:26:54.830 7f39c2978700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
- -431> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader).paxos(paxos refresh c 1..1) doing requested bootstrap
- -430> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 bootstrap
- -429> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 sync_reset_requester
- -428> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 unregister_cluster_logger
- -427> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 cancel_probe_timeout (none scheduled)
- -426> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(leader) e1 monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -425> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 _reset
- -424> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
- -423> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 timecheck_finish
- -422> 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_tick_stop
- -421> 2019-01-29 15:26:54.831 7f39c2978700 15 mon.b@1(probing) e1 health_interval_stop
- -420> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_event_cancel
- -419> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 scrub_reset
- -418> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxos(paxos refresh c 1..1) restart -- canceling timeouts
- -417> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- -416> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- -415> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- -414> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -413> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832435 lease_expire=0.000000 has v0 lc 1
- -412> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -411> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -410> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832553 lease_expire=0.000000 has v0 lc 1
- -409> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -408> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -407> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832654 lease_expire=0.000000 has v0 lc 1
- -406> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -405> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
- -404> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- -403> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -402> 2019-01-29 15:26:54.831 7f39c2978700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.832778 lease_expire=0.000000 has v0 lc 1
- -401> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -400> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- -399> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- -398> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- -397> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- -396> 2019-01-29 15:26:54.831 7f39c2978700 10 mon.b@1(probing) e1 cancel_probe_timeout (none scheduled)
- -395> 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 reset_probe_timeout 0x56393affc480 after 2 seconds
- -394> 2019-01-29 15:26:54.832 7f39c2978700 10 mon.b@1(probing) e1 probing other monitors
- -393> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affe840
- -392> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affe840 con 0x563939c9c900
- -391> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- ?+0 0x56393affeb00
- -390> 2019-01-29 15:26:54.832 7f39c2978700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name b new) v6 -- 0x56393affeb00 con 0x563939c9cd80
- -389> 2019-01-29 15:26:54.832 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 695166216 middle_crc = 0 data_crc = 0 sig = 3184037569713953523
- -388> 2019-01-29 15:26:54.832 7f39c397a700 20 Putting signature in client message(seq # 10): sig = 3184037569713953523
- -387> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 9 front_crc_ = 1469837017 middle_crc = 0 data_crc = 0 sig = 8540967715760223295
- -386> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 9 ==== mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 ==== 58+0+0 (1469837017 0 0) 0x56393affeb00 con 0x563939c9cd80
- -385> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -384> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
- -383> 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -382> 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
- -381> 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
- -380> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6
- -379> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_probe mon.2 v2:10.215.99.125:40367/0mon_probe(probe 3b02750c-f104-4301-aa14-258d2b37f104 name c new) v6 features 4611087854031667199
- -378> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 10 front_crc_ = 800172242 middle_crc = 0 data_crc = 0 sig = 7787347958796391197
- -377> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name b paxos( fc 1 lc 1 ) new) v6 -- 0x56393affe2c0 con 0x563939c9cd80
- -376> 2019-01-29 15:26:54.835 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 443766928 middle_crc = 0 data_crc = 0 sig = 3146968082825100645
- -375> 2019-01-29 15:26:54.835 7f39c397a700 20 Putting signature in client message(seq # 11): sig = 3146968082825100645
- -374> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 10 ==== mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6 ==== 430+0+0 (800172242 0 0) 0x56393affe000 con 0x563939c9cd80
- -373> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -372> 2019-01-29 15:26:54.835 7f39c617f700 20 mon.b@1(probing) e1 caps allow *
- -371> 2019-01-29 15:26:54.835 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -370> 2019-01-29 15:26:54.835 7f39c617f700 20 allow so far , doing grant allow *
- -369> 2019-01-29 15:26:54.835 7f39c617f700 20 allow all
- -368> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
- -367> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 handle_probe_reply mon.2 v2:10.215.99.125:40367/0 mon_probe(reply 3b02750c-f104-4301-aa14-258d2b37f104 name c paxos( fc 1 lc 1 ) new) v6
- -366> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 monmap is e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -365> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 peer name is c
- -364> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 mon.c is outside the quorum
- -363> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 outside_quorum now b,c, need 2
- -362> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 that's enough to form a new quorum, calling election
- -361> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 start_election
- -360> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 _reset
- -359> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 cancel_probe_timeout 0x56393affc480
- -358> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 timecheck_finish
- -357> 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_tick_stop
- -356> 2019-01-29 15:26:54.835 7f39c617f700 15 mon.b@1(probing) e1 health_interval_stop
- -355> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_event_cancel
- -354> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing) e1 scrub_reset
- -353> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- -352> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mdsmap 0..0) restart
- -351> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(osdmap 0..0) restart
- -350> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) restart
- -349> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -348> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836598 lease_expire=0.000000 has v0 lc 1
- -347> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -346> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -345> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836628 lease_expire=0.000000 has v0 lc 1
- -344> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -343> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -342> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836663 lease_expire=0.000000 has v0 lc 1
- -341> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -340> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(monmap 1..1) restart
- -339> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) restart
- -338> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -337> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(probing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.836708 lease_expire=0.000000 has v0 lc 1
- -336> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -335> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgr 0..0) restart
- -334> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(mgrstat 0..0) restart
- -333> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(health 0..0) restart
- -332> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(probing).paxosservice(config 0..0) restart
- -331> 2019-01-29 15:26:54.835 7f39c617f700 0 log_channel(cluster) log [INF] : mon.b calling monitor election
- -330> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client _send_to_mon log to self
- -329> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client log_queue is 4 last_log 4 sent 3 num 4 unsent 1 sending 1
- -328> 2019-01-29 15:26:54.835 7f39c617f700 10 log_client will send 2019-01-29 15:26:54.836762 mon.b (mon.1) 4 : cluster [INF] mon.b calling monitor election
- -327> 2019-01-29 15:26:54.835 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 -- 0x56393b000240 con 0x563939d41600
- -326> 2019-01-29 15:26:54.835 7f39c617f700 5 mon.b@1(electing).elector(4) start -- can i be leader?
- -325> 2019-01-29 15:26:54.835 7f39c617f700 1 mon.b@1(electing).elector(4) init, last seen epoch 4
- -324> 2019-01-29 15:26:54.835 7f39c617f700 10 mon.b@1(electing).elector(4) bump_epoch 4 to 5
- -323> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 join_election
- -322> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 _reset
- -321> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
- -320> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 timecheck_finish
- -319> 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_tick_stop
- -318> 2019-01-29 15:26:54.839 7f39c617f700 15 mon.b@1(electing) e1 health_interval_stop
- -317> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_event_cancel
- -316> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing) e1 scrub_reset
- -315> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- -314> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- -313> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- -312> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- -311> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -310> 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840738 lease_expire=0.000000 has v0 lc 1
- -309> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -308> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -307> 2019-01-29 15:26:54.839 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840832 lease_expire=0.000000 has v0 lc 1
- -306> 2019-01-29 15:26:54.839 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -305> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -304> 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.840935 lease_expire=0.000000 has v0 lc 1
- -303> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -302> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
- -301> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- -300> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -299> 2019-01-29 15:26:54.840 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.841040 lease_expire=0.000000 has v0 lc 1
- -298> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -297> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- -296> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- -295> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- -294> 2019-01-29 15:26:54.840 7f39c617f700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- -293> 2019-01-29 15:26:54.853 7f39c617f700 -1 mon.b@1(electing) e1 devname dm-0
- -292> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00a900
- -291> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00a900 con 0x563939c9c900
- -290> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- ?+0 0x56393b00ac00
- -289> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 -- 0x56393b00ac00 con 0x563939c9cd80
- -288> 2019-01-29 15:26:54.854 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12969980896879027673
- -287> 2019-01-29 15:26:54.854 7f39c397a700 20 Putting signature in client message(seq # 12): sig = 12969980896879027673
- -286> 2019-01-29 15:26:54.854 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 ==== 0+0+0 (0 0 0) 0x56393b000240 con 0x563939d41600
- -285> 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- -284> 2019-01-29 15:26:54.854 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- -283> 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -282> 2019-01-29 15:26:54.854 7f39c617f700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:54.855846 lease_expire=0.000000 has v0 lc 1
- -281> 2019-01-29 15:26:54.854 7f39c617f700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -280> 2019-01-29 15:26:54.855 7f39c397a700 10 _calc_signature seq 11 front_crc_ = 1196600255 middle_crc = 0 data_crc = 0 sig = 12994639518338884118
- -279> 2019-01-29 15:26:54.855 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 11 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 propose 5) v7 ==== 450+0+0 (1196600255 0 0) 0x56393b00a000 con 0x563939c9cd80
- -278> 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -277> 2019-01-29 15:26:54.855 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- -276> 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -275> 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
- -274> 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
- -273> 2019-01-29 15:26:54.855 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -272> 2019-01-29 15:26:54.855 7f39c617f700 20 allow so far , doing grant allow *
- -271> 2019-01-29 15:26:54.855 7f39c617f700 20 allow all
- -270> 2019-01-29 15:26:54.855 7f39c617f700 5 mon.b@1(electing).elector(5) handle_propose from mon.2
- -269> 2019-01-29 15:26:54.855 7f39c617f700 10 mon.b@1(electing).elector(5) handle_propose required features 549755813888 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), peer features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -268> 2019-01-29 15:26:54.857 7f39c397a700 10 _calc_signature seq 12 front_crc_ = 2042675853 middle_crc = 0 data_crc = 0 sig = 8776967150872131881
- -267> 2019-01-29 15:26:54.857 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 12 ==== election(3b02750c-f104-4301-aa14-258d2b37f104 ack 5) v7 ==== 1190+0+0 (2042675853 0 0) 0x56393b00ac00 con 0x563939c9cd80
- -266> 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -265> 2019-01-29 15:26:54.857 7f39c617f700 20 mon.b@1(electing) e1 caps allow *
- -264> 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -263> 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
- -262> 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
- -261> 2019-01-29 15:26:54.857 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -260> 2019-01-29 15:26:54.857 7f39c617f700 20 allow so far , doing grant allow *
- -259> 2019-01-29 15:26:54.857 7f39c617f700 20 allow all
- -258> 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) handle_ack from mon.2
- -257> 2019-01-29 15:26:54.858 7f39c617f700 5 mon.b@1(electing).elector(5) so far i have { mon.1: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]), mon.2: features 4611087854031667199 mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus]) }
- -256> 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 36
- -255> 2019-01-29 15:26:54.868 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 msgr2=0x56393ab9b800 :53002 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
- -254> 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).handle_message read tag failed
- -253> 2019-01-29 15:26:54.868 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x563939c9fa80 0x56393ab9b800 :53002 s=OPENED pgs=1 cs=1 l=1).fault on lossy channel, failing
- -252> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x563939c9fa80 v2:10.215.99.125:53002/4155176800
- -251> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
- -250> 2019-01-29 15:26:54.868 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1200 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
- -249> 2019-01-29 15:26:54.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026000 0x56393ab9be00 :53042 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53042/0 addrs are 145
- -248> 2019-01-29 15:26:54.869 7f39c3179700 10 In get_auth_session_handler for protocol 0
- -247> 2019-01-29 15:26:54.870 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001b00 con 0x56393b026000
- -246> 2019-01-29 15:26:54.870 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393abc1d40 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -245> 2019-01-29 15:26:54.870 7f39c617f700 20 mon.b@1(electing) e1 caps
- -244> 2019-01-29 15:26:54.870 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- -243> 2019-01-29 15:26:56.244 7f39c417b700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> [v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0] conn(0x563939c9c900 msgr2=0x56393ab9a000 :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed
- -242> 2019-01-29 15:26:56.244 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_refused 0x563939c9c900 v2:10.215.99.125:40363/0
- -241> 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 30
- -240> 2019-01-29 15:26:57.869 7f39c3179700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 msgr2=0x56393ab9be00 :53042 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
- -239> 2019-01-29 15:26:57.869 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).handle_message read tag failed
- -238> 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> v2:10.215.99.125:53002/4155176800 conn(0x56393b026000 0x56393ab9be00 :53042 s=OPENED pgs=6 cs=1 l=1).fault on lossy channel, failing
- -237> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 ms_handle_reset 0x56393b026000 v2:10.215.99.125:53002/4155176800
- -236> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 reset/close on session client.? v2:10.215.99.125:53002/4155176800
- -235> 2019-01-29 15:26:57.870 7f39c617f700 10 mon.b@1(electing) e1 remove_session 0x56393abc1d40 client.? v2:10.215.99.125:53002/4155176800 features 0x3ffddff8ffacffff
- -234> 2019-01-29 15:26:57.870 7f39c3179700 1 --2- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] >> conn(0x56393b026480 0x56393ab9d600 :53056 s=ACCEPTING pgs=0 cs=0 l=0).send_server_banner sd=30 v2:10.215.99.125:40365/0 myaddrs [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] target_addr v2:10.215.99.125:53056/0 addrs are 145
- -233> 2019-01-29 15:26:57.871 7f39c3179700 10 In get_auth_session_handler for protocol 0
- -232> 2019-01-29 15:26:57.871 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== client.? v2:10.215.99.125:53002/4155176800 1 ==== auth(proto 0 30 bytes epoch 0) v1 ==== 60+0+0 (673663173 0 0) 0x56393b001d40 con 0x56393b026480
- -231> 2019-01-29 15:26:57.872 7f39c617f700 10 mon.b@1(electing) e1 _ms_dispatch new session 0x56393b000480 MonSession(client.? v2:10.215.99.125:53002/4155176800 is open , features 0x3ffddff8ffacffff (luminous)) features 0x3ffddff8ffacffff
- -230> 2019-01-29 15:26:57.872 7f39c617f700 20 mon.b@1(electing) e1 caps
- -229> 2019-01-29 15:26:57.872 7f39c617f700 5 mon.b@1(electing) e1 waitlisting message auth(proto 0 30 bytes epoch 0) v1
- -228> 2019-01-29 15:26:59.586 7f39c8984700 11 mon.b@1(electing) e1 tick
- -227> 2019-01-29 15:26:59.586 7f39c8984700 20 mon.b@1(electing) e1 sync_trim_providers
- -226> 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing) e1 session closed, dropping 0x56393b001b00
- -225> 2019-01-29 15:26:59.586 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- -224> 2019-01-29 15:26:59.586 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.587891 lease_expire=0.000000 has v0 lc 1
- -223> 2019-01-29 15:26:59.587 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -222> 2019-01-29 15:26:59.854 7f39c8984700 5 mon.b@1(electing).elector(5) election timer expired
- -221> 2019-01-29 15:26:59.854 7f39c8984700 10 mon.b@1(electing).elector(5) bump_epoch 5 to 6
- -220> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 join_election
- -219> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 _reset
- -218> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 cancel_probe_timeout (none scheduled)
- -217> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 timecheck_finish
- -216> 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_tick_stop
- -215> 2019-01-29 15:26:59.866 7f39c8984700 15 mon.b@1(electing) e1 health_interval_stop
- -214> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_event_cancel
- -213> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing) e1 scrub_reset
- -212> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxos(paxos recovering c 1..1) restart -- canceling timeouts
- -211> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mdsmap 0..0) restart
- -210> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(osdmap 0..0) restart
- -209> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) restart
- -208> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -207> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867221 lease_expire=0.000000 has v0 lc 1
- -206> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -205> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -204> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867315 lease_expire=0.000000 has v0 lc 1
- -203> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -202> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -201> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867394 lease_expire=0.000000 has v0 lc 1
- -200> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -199> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -198> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867477 lease_expire=0.000000 has v0 lc 1
- -197> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -196> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(monmap 1..1) restart
- -195> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) restart
- -194> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393abc1b00 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x563939c9fa80
- -193> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) discarding message from disconnected client client.? v2:10.215.99.125:53002/4155176800 auth(proto 0 30 bytes epoch 0) v1
- -192> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- -191> 2019-01-29 15:26:59.866 7f39c8984700 5 mon.b@1(electing).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.867654 lease_expire=0.000000 has v0 lc 1
- -190> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -189> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgr 0..0) restart
- -188> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(mgrstat 0..0) restart
- -187> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(health 0..0) restart
- -186> 2019-01-29 15:26:59.866 7f39c8984700 10 mon.b@1(electing).paxosservice(config 0..0) restart
- -185> 2019-01-29 15:26:59.866 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- ?+0 0x56393b00b800
- -184> 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- election(3b02750c-f104-4301-aa14-258d2b37f104 victory 6) v7 -- 0x56393b00b800 con 0x563939c9cd80
- -183> 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(electing) e1 win_election epoch 6 quorum 1,2 features 4611087854031667199 mon_features mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -182> 2019-01-29 15:26:59.867 7f39c8984700 0 log_channel(cluster) log [INF] : mon.b is new leader, mons b,c in quorum (ranks 1,2)
- -181> 2019-01-29 15:26:59.867 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 178927411 middle_crc = 0 data_crc = 0 sig = 794234354135278605
- -180> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client _send_to_mon log to self
- -179> 2019-01-29 15:26:59.867 7f39c397a700 20 Putting signature in client message(seq # 13): sig = 794234354135278605
- -178> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client log_queue is 5 last_log 5 sent 4 num 5 unsent 1 sending 1
- -177> 2019-01-29 15:26:59.867 7f39c8984700 10 log_client will send 2019-01-29 15:26:59.868170 mon.b (mon.1) 5 : cluster [INF] mon.b is new leader, mons b,c in quorum (ranks 1,2)
- -176> 2019-01-29 15:26:59.867 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 -- 0x56393b000fc0 con 0x563939d41600
- -175> 2019-01-29 15:26:59.867 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.1 v2:10.215.99.125:40365/0 0 ==== log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 ==== 0+0+0 (0 0 0) 0x56393b000fc0 con 0x563939d41600
- -174> 2019-01-29 15:26:59.867 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) leader_init -- starting paxos recovery
- -173> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) get_new_proposal_number = 301
- -172> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) collect with pn 301
- -171> 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- ?+0 0x56393b00bb00
- -170> 2019-01-29 15:26:59.872 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(collect lc 1 fc 1 pn 301 opn 0) v4 -- 0x56393b00bb00 con 0x563939c9cd80
- -169> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) election_finished
- -168> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active - not active
- -167> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) election_finished
- -166> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active - not active
- -165> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) election_finished
- -164> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active - not active
- -163> 2019-01-29 15:26:59.872 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 4116324937 middle_crc = 0 data_crc = 0 sig = 15318303623903287436
- -162> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) election_finished
- -161> 2019-01-29 15:26:59.872 7f39c397a700 20 Putting signature in client message(seq # 14): sig = 15318303623903287436
- -160> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393abc06c0 log(1 entries from seq 1 at 2019-01-29 15:26:44.589509) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -159> 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873644 lease_expire=0.000000 has v0 lc 1
- -158> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -157> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000000 log(1 entries from seq 2 at 2019-01-29 15:26:49.755480) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -156> 2019-01-29 15:26:59.872 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873776 lease_expire=0.000000 has v0 lc 1
- -155> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -154> 2019-01-29 15:26:59.872 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000d80 log(1 entries from seq 3 at 2019-01-29 15:26:54.773061) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -153> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.873916 lease_expire=0.000000 has v0 lc 1
- -152> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -151> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000240 log(1 entries from seq 4 at 2019-01-29 15:26:54.836762) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -150> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874034 lease_expire=0.000000 has v0 lc 1
- -149> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -148> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(logm 0..0) _active - not active
- -147> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) election_finished
- -146> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) dispatch 0x56393b001d40 auth(proto 0 30 bytes epoch 0) v1 from client.? v2:10.215.99.125:53002/4155176800 con 0x56393b026480
- -145> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.874163 lease_expire=0.000000 has v0 lc 1
- -144> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) waiting for paxos -> readable (v0)
- -143> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(auth 0..0) _active - not active
- -142> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) election_finished
- -141> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgr 0..0) _active - not active
- -140> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) election_finished
- -139> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) _active - not active
- -138> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) election_finished
- -137> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(health 0..0) _active - not active
- -136> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) election_finished
- -135> 2019-01-29 15:26:59.873 7f39c8984700 10 mon.b@1(leader).paxosservice(config 0..0) _active - not active
- -134> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_quorum_to_compatset_features
- -133> 2019-01-29 15:26:59.873 7f39c8984700 5 mon.b@1(leader) e1 apply_monmap_to_compatset_features
- -132> 2019-01-29 15:26:59.873 7f39c8984700 1 mon.b@1(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout}
- -131> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 calc_quorum_requirements required_features 2449958747315912708
- -130> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_finish
- -129> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 resend_routed_requests
- -128> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 register_cluster_logger
- -127> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start
- -126> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round curr 0
- -125> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round new 1
- -124> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck
- -123> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck start timecheck epoch 6 round 1
- -122> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck send time_check( ping e 6 r 1 ) v1 to mon.2
- -121> 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- ?+0 0x56393b000b40
- -120> 2019-01-29 15:26:59.879 7f39c8984700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- time_check( ping e 6 r 1 ) v1 -- 0x56393b000b40 con 0x563939c9cd80
- -119> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_start_round setting up next event
- -118> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 timecheck_reset_event delay 300 rounds_since_clean 0
- -117> 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_start
- -116> 2019-01-29 15:26:59.879 7f39c8984700 15 mon.b@1(leader) e1 health_tick_stop
- -115> 2019-01-29 15:26:59.879 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 72719240 middle_crc = 0 data_crc = 0 sig = 11523137460518662160
- -114> 2019-01-29 15:26:59.879 7f39c397a700 20 Putting signature in client message(seq # 15): sig = 11523137460518662160
- -113> 2019-01-29 15:26:59.879 7f39c8984700 10 mon.b@1(leader) e1 scrub_event_start
- -112> 2019-01-29 15:26:59.879 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0d80 for mon.1
- -111> 2019-01-29 15:26:59.880 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
- -110> 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) dispatch 0x56393b000fc0 log(1 entries from seq 5 at 2019-01-29 15:26:59.868170) v1 from mon.1 v2:10.215.99.125:40365/0 con 0x563939d41600
- -109> 2019-01-29 15:26:59.880 7f39c617f700 5 mon.b@1(leader).paxos(paxos recovering c 1..1) is_readable = 0 - now=2019-01-29 15:26:59.881040 lease_expire=0.000000 has v0 lc 1
- -108> 2019-01-29 15:26:59.880 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) waiting for paxos -> readable (v0)
- -107> 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 13 front_crc_ = 4120674357 middle_crc = 0 data_crc = 0 sig = 15016538117729764366
- -106> 2019-01-29 15:26:59.889 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] <== mon.2 v2:10.215.99.125:40367/0 13 ==== paxos(last lc 1 fc 1 pn 301 opn 0) v4 ==== 84+0+0 (4120674357 0 0) 0x56393b00b800 con 0x563939c9cd80
- -105> 2019-01-29 15:26:59.889 7f39c397a700 10 _calc_signature seq 14 front_crc_ = 1460593560 middle_crc = 0 data_crc = 0 sig = 3747477798607265139
- -104> 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 _ms_dispatch existing session 0x56393abc0fc0 for mon.2
- -103> 2019-01-29 15:26:59.890 7f39c617f700 20 mon.b@1(leader) e1 caps allow *
- -102> 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= read addr v2:10.215.99.125:40367/0 on cap allow *
- -101> 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
- -100> 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
- -99> 2019-01-29 15:26:59.890 7f39c617f700 20 is_capable service=mon command= exec addr v2:10.215.99.125:40367/0 on cap allow *
- -98> 2019-01-29 15:26:59.890 7f39c617f700 20 allow so far , doing grant allow *
- -97> 2019-01-29 15:26:59.890 7f39c617f700 20 allow all
- -96> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) handle_last paxos(last lc 1 fc 1 pn 301 opn 0) v4
- -95> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) store_state nothing to commit
- -94> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) they accepted our pn, we now have 2 peons
- -93> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) that's everyone. active!
- -92> 2019-01-29 15:26:59.890 7f39c617f700 7 mon.b@1(leader).paxos(paxos recovering c 1..1) extend_lease now+5 (2019-01-29 15:27:04.891100)
- -91> 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- ?+0 0x56393b00af00
- -90> 2019-01-29 15:26:59.890 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(lease lc 1 fc 1 pn 0 opn 0) v4 -- 0x56393b00af00 con 0x563939c9cd80
- -89> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader) e1 refresh_from_paxos
- -88> 2019-01-29 15:26:59.890 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 3892865509 middle_crc = 0 data_crc = 0 sig = 11834348323597999916
- -87> 2019-01-29 15:26:59.890 7f39c397a700 20 Putting signature in client message(seq # 16): sig = 11834348323597999916
- -86> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) refresh
- -85> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) refresh
- -84> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) refresh
- -83> 2019-01-29 15:26:59.890 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos
- -82> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).log v0 update_from_paxos version 0 summary v 0
- -81> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) refresh
- -80> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) refresh
- -79> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).auth v0 update_from_paxos
- -78> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) refresh
- -77> 2019-01-29 15:26:59.891 7f39c617f700 10 mon.b@1(leader).config load_config got 0 keys
- -76> 2019-01-29 15:26:59.891 7f39c397a700 10 _calc_signature seq 15 front_crc_ = 3498592039 middle_crc = 0 data_crc = 0 sig = 6841074444368600247
- -75> 2019-01-29 15:26:59.891 7f39c617f700 20 mon.b@1(leader).config load_config config map:
- {
- "global": {},
- "by_type": {},
- "by_id": {}
- }
- -74> 2019-01-29 15:26:59.891 7f39c617f700 4 set_mon_vals no callback set
- -73> 2019-01-29 15:26:59.897 7f39c617f700 20 mgrc handle_mgr_map mgrmap(e 0) v1
- -72> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Got map version 0
- -71> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc handle_mgr_map Active mgr is now
- -70> 2019-01-29 15:26:59.897 7f39c617f700 4 mgrc reconnect No active mgr available yet
- -69> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) refresh
- -68> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat 0
- -67> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).mgrstat check_subs
- -66> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).mgrstat update_logger
- -65> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) refresh
- -64> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).health update_from_paxos
- -63> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).health dump:{
- "quorum_health": {},
- "leader_health": {}
- }
- -62> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) refresh
- -61> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) post_refresh
- -60> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) post_refresh
- -59> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(logm 0..0) post_refresh
- -58> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) post_refresh
- -57> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(auth 0..0) post_refresh
- -56> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgr 0..0) post_refresh
- -55> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(mgrstat 0..0) post_refresh
- -54> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(health 0..0) post_refresh
- -53> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(config 0..0) post_refresh
- -52> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxos(paxos recovering c 1..1) finish_round
- -51> 2019-01-29 15:26:59.897 7f39c617f700 20 mon.b@1(leader).paxos(paxos active c 1..1) finish_round waiting_for_acting
- -50> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).paxosservice(monmap 1..1) _active
- -49> 2019-01-29 15:26:59.897 7f39c617f700 7 mon.b@1(leader).paxosservice(monmap 1..1) _active creating new pending
- -48> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 create_pending monmap epoch 2
- -47> 2019-01-29 15:26:59.897 7f39c617f700 10 mon.b@1(leader).monmap v1 noting that i was, once, part of an active quorum.
- -46> 2019-01-29 15:26:59.902 7f39c617f700 0 log_channel(cluster) log [DBG] : monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -45> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client _send_to_mon log to self
- -44> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client log_queue is 6 last_log 6 sent 5 num 6 unsent 1 sending 1
- -43> 2019-01-29 15:26:59.902 7f39c617f700 10 log_client will send 2019-01-29 15:26:59.903091 mon.b (mon.1) 6 : cluster [DBG] monmap e1: 3 mons at {a=[v2:10.215.99.125:40363/0,v1:10.215.99.125:40364/0],b=[v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0],c=[v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0]}
- -42> 2019-01-29 15:26:59.902 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] -- log(1 entries from seq 6 at 2019-01-29 15:26:59.903091) v1 -- 0x56393b000900 con 0x563939d41600
- -41> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).monmap v1 apply_mon_features features match current pending: mon_feature_t([kraken,luminous,mimic,osdmap-prune,nautilus])
- -40> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) _active
- -39> 2019-01-29 15:26:59.902 7f39c617f700 7 mon.b@1(leader).paxosservice(mdsmap 0..0) _active creating new pending
- -38> 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) is_readable = 1 - now=2019-01-29 15:26:59.903204 lease_expire=2019-01-29 15:27:04.891100 has v0 lc 1
- -37> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_pending e1
- -36> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 create_initial
- -35> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxosservice(mdsmap 0..0) propose_pending
- -34> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).mds e0 encode_pending e1
- -33> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader) e1 log_health updated 0 previous 0
- -32> 2019-01-29 15:26:59.902 7f39c617f700 5 mon.b@1(leader).paxos(paxos active c 1..1) queue_pending_finisher 0x563939ab8950
- -31> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) trigger_propose active, proposing now
- -30> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos active c 1..1) propose_pending 2 2867 bytes
- -29> 2019-01-29 15:26:59.902 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) begin for 2 2867 bytes
- -28> 2019-01-29 15:26:59.906 7f39c617f700 10 mon.b@1(leader).paxos(paxos updating c 1..1) sending begin to mon.2
- -27> 2019-01-29 15:26:59.906 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] send_to--> mon [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- ?+0 0x56393b00b200
- -26> 2019-01-29 15:26:59.907 7f39c617f700 1 -- [v2:10.215.99.125:40365/0,v1:10.215.99.125:40366/0] --> [v2:10.215.99.125:40367/0,v1:10.215.99.125:40368/0] -- paxos(begin lc 1 fc 0 pn 301 opn 0) v4 -- 0x56393b00b200 con 0x563939c9cd80
- -25> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) _active
- -24> 2019-01-29 15:26:59.907 7f39c617f700 7 mon.b@1(leader).paxosservice(osdmap 0..0) _active creating new pending
- -23> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_pending e 1
- -22> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting backfillfull_ratio = 0.99
- -21> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting full_ratio = 0.99
- -20> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 create_pending setting nearfull_ratio = 0.99
- -19> 2019-01-29 15:26:59.907 7f39c397a700 10 _calc_signature seq 17 front_crc_ = 3823202930 middle_crc = 0 data_crc = 0 sig = 3998564941522667071
- -18> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 create_initial for 3b02750c-f104-4301-aa14-258d2b37f104
- -17> 2019-01-29 15:26:59.907 7f39c397a700 20 Putting signature in client message(seq # 17): sig = 3998564941522667071
- -16> 2019-01-29 15:26:59.907 7f39c617f700 20 mon.b@1(leader).osd e0 full crc 3491248425
- -15> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).paxosservice(osdmap 0..0) propose_pending
- -14> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending e 1
- -13> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 do_prune osdmap full prune enabled
- -12> 2019-01-29 15:26:59.907 7f39c617f700 10 mon.b@1(leader).osd e0 should_prune currently holding only 0 epochs (min osdmap epochs: 500); do not prune.
- -11> 2019-01-29 15:26:59.907 7f39c617f700 1 mon.b@1(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
- -10> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs
- -9> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pools queued
- -8> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0 pgs removed because they're created
- -7> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs queue remaining: 0 pools
- -6> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 update_pending_pgs 0/0 pgs added from queued pools
- -5> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first mimic+ epoch
- -4> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending first nautilus+ epoch
- -3> 2019-01-29 15:26:59.908 7f39c617f700 10 mon.b@1(leader).osd e0 encode_pending encoding full map with nautilus features 1080873256688298500
- -2> 2019-01-29 15:26:59.908 7f39c617f700 20 mon.b@1(leader).osd e0 full_crc 3491248425 inc_crc 3830871662
- -1> 2019-01-29 15:26:59.911 7f39c397a700 10 _calc_signature seq 16 front_crc_ = 2534419215 middle_crc = 0 data_crc = 0 sig = 4957134661093499494
- 0> 2019-01-29 15:26:59.940 7f39c617f700 -1 *** Caught signal (Segmentation fault) **
- in thread 7f39c617f700 thread_name:ms_dispatch
- ceph version 14.0.1-2971-g8b175ee4cc (8b175ee4cc2233625934faec055dba6a367b2275) nautilus (dev)
- 1: (()+0x13dd820) [0x5639382a1820]
- 2: (()+0x12080) [0x7f39d1683080]
- 3: (OSDMap::check_health(health_check_map_t*) const+0x1235) [0x7f39d6284cfb]
- 4: (OSDMonitor::encode_pending(std::shared_ptr<MonitorDBStore::Transaction>)+0x510d) [0x56393811017b]
- 5: (PaxosService::propose_pending()+0x45a) [0x5639380fcb24]
- 6: (PaxosService::_active()+0x62b) [0x5639380fdba7]
- 7: (()+0x12394e9) [0x5639380fd4e9]
- 8: (Context::complete(int)+0x27) [0x563937e12037]
- 9: (void finish_contexts<std::__cxx11::list<Context*, std::allocator<Context*> > >(CephContext*, std::__cxx11::list<Context*, std::allocator<Context*> >&, int)+0x2c8) [0x563937e3642c]
- 10: (Paxos::finish_round()+0x2ed) [0x5639380eb4e9]
- 11: (Paxos::handle_last(boost::intrusive_ptr<MonOpRequest>)+0x17ae) [0x5639380e5ac8]
- 12: (Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x392) [0x5639380ef7cc]
- 13: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x1119) [0x563937de61d9]
- 14: (Monitor::_ms_dispatch(Message*)+0xec6) [0x563937de4d9e]
- 15: (Monitor::ms_dispatch(Message*)+0x38) [0x563937e20d04]
- 16: (Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x5c) [0x563937e142c2]
- 17: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0xe9) [0x7f39d5f6f247]
- 18: (DispatchQueue::entry()+0x61c) [0x7f39d5f6dd3c]
- 19: (DispatchQueue::DispatchThread::entry()+0x1c) [0x7f39d60cf7f4]
- 20: (Thread::entry_wrapper()+0x78) [0x7f39d5d4cb4a]
- 21: (Thread::_entry_func(void*)+0x18) [0x7f39d5d4cac8]
- 22: (()+0x7594) [0x7f39d1678594]
- 23: (clone()+0x3f) [0x7f39d041bf4f]
- NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
- --- logging levels ---
- 0/ 5 none
- 0/ 1 lockdep
- 0/ 1 context
- 1/ 1 crush
- 1/ 5 mds
- 1/ 5 mds_balancer
- 1/ 5 mds_locker
- 1/ 5 mds_log
- 1/ 5 mds_log_expire
- 1/ 5 mds_migrator
- 0/ 1 buffer
- 0/ 1 timer
- 0/ 1 filer
- 0/ 1 striper
- 0/ 1 objecter
- 0/ 5 rados
- 0/ 5 rbd
- 0/ 5 rbd_mirror
- 0/ 5 rbd_replay
- 0/ 5 journaler
- 0/ 5 objectcacher
- 0/ 5 client
- 1/ 5 osd
- 0/ 5 optracker
- 0/ 5 objclass
- 1/ 3 filestore
- 1/ 3 journal
- 1/ 1 ms
- 20/20 mon
- 0/10 monc
- 20/20 paxos
- 0/ 5 tp
- 20/20 auth
- 1/ 5 crypto
- 1/ 1 finisher
- 1/ 1 reserver
- 1/ 5 heartbeatmap
- 1/ 5 perfcounter
- 1/ 5 rgw
- 1/ 5 rgw_sync
- 1/10 civetweb
- 1/ 5 javaclient
- 1/ 5 asok
- 1/ 1 throttle
- 0/ 0 refs
- 1/ 5 xio
- 1/ 5 compressor
- 1/ 5 bluestore
- 1/ 5 bluefs
- 1/ 3 bdev
- 1/ 5 kstore
- 4/ 5 rocksdb
- 4/ 5 leveldb
- 4/ 5 memdb
- 1/ 5 kinetic
- 1/ 5 fuse
- 1/ 5 mgr
- 20/20 mgrc
- 1/ 5 dpdk
- 1/ 5 eventtrace
- -2/-2 (syslog threshold)
- -1/-1 (stderr threshold)
- max_recent 10000
- max_new 1000
- log_file /home/rraja/git/ceph/build/out/mon.b.log
- --- end dump of recent events ---