- kafka_1 | 2017-12-20 09:01:13,214 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-28 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,214 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-28 broker=1] __consumer_offsets-28 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,217 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-38 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,217 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-38 broker=1] __consumer_offsets-38 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,218 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition my_connect_offsets-10 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,218 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition my_connect_offsets-10 broker=1] my_connect_offsets-10 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,225 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-35 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,225 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-35 broker=1] __consumer_offsets-35 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,227 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition my_connect_offsets-7 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,227 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition my_connect_offsets-7 broker=1] my_connect_offsets-7 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,230 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-44 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,230 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-44 broker=1] __consumer_offsets-44 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,235 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-6 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,235 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,238 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-25 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,239 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-25 broker=1] __consumer_offsets-25 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,244 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-16 with initial high watermark 41
- kafka_1 | 2017-12-20 09:01:13,244 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-16 broker=1] __consumer_offsets-16 starts at Leader Epoch 14 from offset 41. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,246 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition connect-status-4 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,246 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition connect-status-4 broker=1] connect-status-4 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,249 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-22 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,249 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-22 broker=1] __consumer_offsets-22 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,255 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-41 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,255 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-41 broker=1] __consumer_offsets-41 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,258 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition my_connect_offsets-4 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,259 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition my_connect_offsets-4 broker=1] my_connect_offsets-4 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,267 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-32 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,267 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-32 broker=1] __consumer_offsets-32 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,270 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-3 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,271 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,279 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition __consumer_offsets-13 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,279 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition __consumer_offsets-13 broker=1] __consumer_offsets-13 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,283 - INFO [kafka-request-handler-0:Logging$class@72] - Replica loaded for partition my_connect_offsets-23 with initial high watermark 0
- kafka_1 | 2017-12-20 09:01:13,284 - INFO [kafka-request-handler-0:Logging$class@72] - [Partition my_connect_offsets-23 broker=1] my_connect_offsets-23 starts at Leader Epoch 14 from offset 0. Previous Leader Epoch was: -1
- kafka_1 | 2017-12-20 09:01:13,318 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25
- kafka_1 | 2017-12-20 09:01:13,329 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31
- kafka_1 | 2017-12-20 09:01:13,330 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37
- kafka_1 | 2017-12-20 09:01:13,331 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43
- kafka_1 | 2017-12-20 09:01:13,332 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49
- kafka_1 | 2017-12-20 09:01:13,333 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44
- kafka_1 | 2017-12-20 09:01:13,333 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1
- kafka_1 | 2017-12-20 09:01:13,334 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7
- kafka_1 | 2017-12-20 09:01:13,335 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13
- kafka_1 | 2017-12-20 09:01:13,336 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19
- kafka_1 | 2017-12-20 09:01:13,336 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2
- kafka_1 | 2017-12-20 09:01:13,337 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8
- kafka_1 | 2017-12-20 09:01:13,338 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 19 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,338 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14
- kafka_1 | 2017-12-20 09:01:13,344 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20
- kafka_1 | 2017-12-20 09:01:13,344 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26
- kafka_1 | 2017-12-20 09:01:13,344 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32
- kafka_1 | 2017-12-20 09:01:13,345 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38
- kafka_1 | 2017-12-20 09:01:13,345 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3
- kafka_1 | 2017-12-20 09:01:13,346 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9
- kafka_1 | 2017-12-20 09:01:13,345 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,346 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15
- kafka_1 | 2017-12-20 09:01:13,347 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21
- kafka_1 | 2017-12-20 09:01:13,346 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,347 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27
- kafka_1 | 2017-12-20 09:01:13,348 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33
- kafka_1 | 2017-12-20 09:01:13,348 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,348 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39
- kafka_1 | 2017-12-20 09:01:13,351 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45
- kafka_1 | 2017-12-20 09:01:13,354 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22
- kafka_1 | 2017-12-20 09:01:13,354 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28
- kafka_1 | 2017-12-20 09:01:13,354 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34
- kafka_1 | 2017-12-20 09:01:13,355 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40
- kafka_1 | 2017-12-20 09:01:13,357 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46
- kafka_1 | 2017-12-20 09:01:13,357 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41
- kafka_1 | 2017-12-20 09:01:13,358 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47
- kafka_1 | 2017-12-20 09:01:13,358 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4
- kafka_1 | 2017-12-20 09:01:13,358 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10
- kafka_1 | 2017-12-20 09:01:13,359 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16
- kafka_1 | 2017-12-20 09:01:13,359 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5
- kafka_1 | 2017-12-20 09:01:13,359 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11
- kafka_1 | 2017-12-20 09:01:13,363 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17
- kafka_1 | 2017-12-20 09:01:13,363 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23
- kafka_1 | 2017-12-20 09:01:13,363 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29
- kafka_1 | 2017-12-20 09:01:13,364 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35
- kafka_1 | 2017-12-20 09:01:13,368 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0
- kafka_1 | 2017-12-20 09:01:13,368 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6
- kafka_1 | 2017-12-20 09:01:13,369 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12
- kafka_1 | 2017-12-20 09:01:13,369 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18
- kafka_1 | 2017-12-20 09:01:13,369 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24
- kafka_1 | 2017-12-20 09:01:13,369 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30
- kafka_1 | 2017-12-20 09:01:13,370 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36
- kafka_1 | 2017-12-20 09:01:13,371 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42
- kafka_1 | 2017-12-20 09:01:13,371 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48
- kafka_1 | 2017-12-20 09:01:13,457 - INFO [kafka-request-handler-2:Logging$class@72] - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-22,my_connect_offsets-7,__consumer_offsets-30,my_connect_offsets-15,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,my_connect_offsets-20,__consumer_offsets-25,my_connect_offsets-12,connect-status-4,__consumer_offsets-35,my_connect_offsets-2,__consumer_offsets-41,__consumer_offsets-33,my_connect_offsets-22,__consumer_offsets-23,__consumer_offsets-49,my_connect_offsets-1,my_connect_offsets-11,my_connect_offsets-24,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,my_connect_offsets-19,__consumer_offsets-31,__consumer_offsets-36,connect-status-3,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,my_connect_offsets-16,__consumer_offsets-15,__consumer_offsets-24,my_connect_configs-0,my_connect_offsets-18,connect-status-2,my_connect_offsets-10,__consumer_offsets-38,__consumer_offsets-17,my_connect_offsets-23,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,my_connect_offsets-5,__consumer_offsets-14,my_connect_offsets-13,my_connect_offsets-4,my_connect_offsets-17,my_connect_offsets-6,my_connect_offsets-3,my_connect_offsets-21,my_connect_offsets-14,connect-status-0,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,customers-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,connect-status-1,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,my_connect_offsets-0,my_connect_offsets-8,my_connect_offsets-9
- kafka_1 | 2017-12-20 09:01:13,457 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-0 broker=1] __consumer_offsets-0 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,459 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition connect-status-1 broker=1] connect-status-1 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,460 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-29 broker=1] __consumer_offsets-29 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,461 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-48 broker=1] __consumer_offsets-48 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,463 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-1 broker=1] my_connect_offsets-1 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,471 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-10 broker=1] __consumer_offsets-10 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,472 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_configs-0 broker=1] my_connect_configs-0 starts at Leader Epoch 15 from offset 16. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,472 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-20 broker=1] my_connect_offsets-20 starts at Leader Epoch 15 from offset 9. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,473 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-45 broker=1] __consumer_offsets-45 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,474 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-26 broker=1] __consumer_offsets-26 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,475 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-7 broker=1] __consumer_offsets-7 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,476 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-17 broker=1] my_connect_offsets-17 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,478 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-42 broker=1] __consumer_offsets-42 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,478 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupCoordinator 1]: Loading group metadata for 1 with generation 39
- kafka_1 | 2017-12-20 09:01:13,478 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-24 broker=1] my_connect_offsets-24 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,480 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-4 broker=1] __consumer_offsets-4 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,480 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 131 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,480 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-14 broker=1] my_connect_offsets-14 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,481 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-23 broker=1] __consumer_offsets-23 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,481 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-5 broker=1] my_connect_offsets-5 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,481 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,482 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,482 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,483 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,483 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,486 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-11 broker=1] my_connect_offsets-11 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,490 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,490 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-21 broker=1] my_connect_offsets-21 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,491 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-2 broker=1] my_connect_offsets-2 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,491 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,491 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-1 broker=1] __consumer_offsets-1 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,491 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,492 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,492 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,493 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,492 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-20 broker=1] __consumer_offsets-20 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,495 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-39 broker=1] __consumer_offsets-39 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,493 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,495 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,495 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,495 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-17 broker=1] __consumer_offsets-17 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,497 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-36 broker=1] __consumer_offsets-36 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,497 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-8 broker=1] my_connect_offsets-8 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,498 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-18 broker=1] my_connect_offsets-18 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,499 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-14 broker=1] __consumer_offsets-14 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,496 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,500 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,500 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,501 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,502 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,503 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,503 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,504 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,505 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,506 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,504 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-33 broker=1] __consumer_offsets-33 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,507 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-15 broker=1] my_connect_offsets-15 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,507 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-49 broker=1] __consumer_offsets-49 starts at Leader Epoch 15 from offset 39. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,509 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-12 broker=1] my_connect_offsets-12 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,508 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 2 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,509 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,511 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-11 broker=1] __consumer_offsets-11 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,511 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition connect-status-2 broker=1] connect-status-2 starts at Leader Epoch 15 from offset 35. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,512 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-30 broker=1] __consumer_offsets-30 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,513 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-46 broker=1] __consumer_offsets-46 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,511 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,514 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,515 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,519 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-27 broker=1] __consumer_offsets-27 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,519 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-9 broker=1] my_connect_offsets-9 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,519 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-8 broker=1] __consumer_offsets-8 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,521 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-6 broker=1] my_connect_offsets-6 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,522 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-24 broker=1] __consumer_offsets-24 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,522 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-43 broker=1] __consumer_offsets-43 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,523 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-5 broker=1] __consumer_offsets-5 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,524 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-22 broker=1] my_connect_offsets-22 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,525 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-3 broker=1] my_connect_offsets-3 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,525 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-21 broker=1] __consumer_offsets-21 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,527 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-2 broker=1] __consumer_offsets-2 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,527 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-40 broker=1] __consumer_offsets-40 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,528 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-37 broker=1] __consumer_offsets-37 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,528 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-18 broker=1] __consumer_offsets-18 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,529 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-19 broker=1] my_connect_offsets-19 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,530 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-0 broker=1] my_connect_offsets-0 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,531 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-16 broker=1] my_connect_offsets-16 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,532 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-15 broker=1] __consumer_offsets-15 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,532 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-34 broker=1] __consumer_offsets-34 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,532 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-12 broker=1] __consumer_offsets-12 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,533 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-31 broker=1] __consumer_offsets-31 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,535 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition connect-status-3 broker=1] connect-status-3 starts at Leader Epoch 15 from offset 114. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,535 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition customers-0 broker=1] customers-0 starts at Leader Epoch 12 from offset 51. Previous Leader Epoch was: 11
- kafka_1 | 2017-12-20 09:01:13,536 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-13 broker=1] my_connect_offsets-13 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,536 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-9 broker=1] __consumer_offsets-9 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,537 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition connect-status-0 broker=1] connect-status-0 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,537 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-47 broker=1] __consumer_offsets-47 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,538 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-19 broker=1] __consumer_offsets-19 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,539 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-28 broker=1] __consumer_offsets-28 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,540 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-38 broker=1] __consumer_offsets-38 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,540 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-10 broker=1] my_connect_offsets-10 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,542 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-35 broker=1] __consumer_offsets-35 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,542 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-7 broker=1] my_connect_offsets-7 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,542 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-44 broker=1] __consumer_offsets-44 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,543 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,544 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-25 broker=1] __consumer_offsets-25 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,544 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-16 broker=1] __consumer_offsets-16 starts at Leader Epoch 15 from offset 41. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,545 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition connect-status-4 broker=1] connect-status-4 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,545 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-22 broker=1] __consumer_offsets-22 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,546 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-41 broker=1] __consumer_offsets-41 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,547 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-4 broker=1] my_connect_offsets-4 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,548 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-32 broker=1] __consumer_offsets-32 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,548 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupCoordinator 1]: Loading group metadata for connect-jdbc-sink with generation 36
- kafka_1 | 2017-12-20 09:01:13,549 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 33 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,549 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,549 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,550 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,550 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition __consumer_offsets-13 broker=1] __consumer_offsets-13 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,550 - INFO [kafka-request-handler-2:Logging$class@72] - [Partition my_connect_offsets-23 broker=1] my_connect_offsets-23 starts at Leader Epoch 15 from offset 0. Previous Leader Epoch was: 14
- kafka_1 | 2017-12-20 09:01:13,551 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,553 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,553 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,553 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,554 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,555 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,556 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,556 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,557 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 1 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,557 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,558 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,559 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds.
- kafka_1 | 2017-12-20 09:01:13,560 - INFO [group-metadata-manager-0:Logging$class@72] - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 1 milliseconds.
- connect_1 | 2017-12-20 09:01:13,888 INFO || Registered loader: sun.misc.Launcher$AppClassLoader@764c12b6 [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,888 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,889 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,890 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,891 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,892 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,892 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,892 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,892 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,892 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,893 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,893 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,893 INFO || Added aliases 'JdbcSinkConnector' and 'JdbcSink' to plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,894 INFO || Added aliases 'JdbcSourceConnector' and 'JdbcSource' to plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,894 INFO || Added aliases 'MongoDbConnector' and 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,894 INFO || Added aliases 'MySqlConnector' and 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,894 INFO || Added aliases 'PostgresConnector' and 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,895 INFO || Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,895 INFO || Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,895 INFO || Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,895 INFO || Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,895 INFO || Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,896 INFO || Added aliases 'SchemaSourceConnector' and 'SchemaSource' to plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,896 INFO || Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,896 INFO || Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,896 INFO || Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,896 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,897 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,897 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,898 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,898 INFO || Added alias 'UnwrapFromEnvelope' to plugin 'io.debezium.transforms.UnwrapFromEnvelope' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,899 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,899 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,899 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
- connect_1 | 2017-12-20 09:01:13,915 INFO || DistributedConfig values:
- connect_1 | access.control.allow.methods =
- connect_1 | access.control.allow.origin =
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | client.id =
- connect_1 | config.storage.replication.factor = 1
- connect_1 | config.storage.topic = my_connect_configs
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | group.id = 1
- connect_1 | heartbeat.interval.ms = 3000
- connect_1 | internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
- connect_1 | internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
- connect_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | offset.flush.interval.ms = 60000
- connect_1 | offset.flush.timeout.ms = 5000
- connect_1 | offset.storage.partitions = 25
- connect_1 | offset.storage.replication.factor = 1
- connect_1 | offset.storage.topic = my_connect_offsets
- connect_1 | plugin.path = [/kafka/connect]
- connect_1 | rebalance.timeout.ms = 60000
- connect_1 | receive.buffer.bytes = 32768
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 40000
- connect_1 | rest.advertised.host.name = 172.19.0.5
- connect_1 | rest.advertised.port = 8083
- connect_1 | rest.host.name = 172.19.0.5
- connect_1 | rest.port = 8083
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | session.timeout.ms = 10000
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | status.storage.partitions = 5
- connect_1 | status.storage.replication.factor = 1
- connect_1 | status.storage.topic = connect-status
- connect_1 | task.shutdown.graceful.timeout.ms = 10000
- connect_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter
- connect_1 | worker.sync.timeout.ms = 3000
- connect_1 | worker.unsync.backoff.ms = 300000
- connect_1 | [org.apache.kafka.connect.runtime.distributed.DistributedConfig]
- connect_1 | 2017-12-20 09:01:14,027 INFO || Logging initialized @5805ms [org.eclipse.jetty.util.log]
- connect_1 | 2017-12-20 09:01:14,233 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,233 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,237 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect]
- connect_1 | 2017-12-20 09:01:14,238 INFO || Starting REST server [org.apache.kafka.connect.runtime.rest.RestServer]
- connect_1 | 2017-12-20 09:01:14,242 INFO || Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:14,243 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:14,243 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
- connect_1 | 2017-12-20 09:01:14,244 INFO || Starting KafkaBasedLog with topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:14,247 INFO || AdminClientConfig values:
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 300000
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 120000
- connect_1 | retries = 5
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,272 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,272 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,273 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,273 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,273 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,274 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,274 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,275 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,275 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,275 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,275 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,276 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,277 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,277 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,278 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,279 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,279 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,279 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,279 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,280 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,280 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,280 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,280 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,281 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,281 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
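Note: the long run of "was supplied but isn't a known config" WARN lines above is expected and harmless. The distributed Connect worker hands its entire worker configuration to every Kafka client it creates (AdminClient, producer, consumer), so each client warns about worker-level keys it does not recognize. As a rough sketch, the worker properties implied by these logs look like the following — only values actually visible in the log are filled in; keys whose values do not appear (e.g. `config.storage.topic`, the converters) are left out rather than guessed:

```properties
# connect-distributed.properties — reconstructed sketch from the log above, not the actual file
bootstrap.servers=kafka:9092
group.id=1                                  # matches "Discovered coordinator ... for group 1"
offset.storage.topic=my_connect_offsets     # created with 25 partitions, RF 1, cleanup.policy=compact
offset.storage.replication.factor=1
status.storage.topic=connect-status         # created with 5 partitions, RF 1, cleanup.policy=compact
status.storage.replication.factor=1
```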
- connect_1 | 2017-12-20 09:01:14,310 INFO || jetty-9.2.15.v20160210 [org.eclipse.jetty.server.Server]
- kafka_1 | 2017-12-20 09:01:14,474 - INFO [kafka-request-handler-5:Logging$class@80] - [Admin Manager on Broker 1]: Error processing create topic request for topic my_connect_offsets with arguments (numPartitions=25, replicationFactor=1, replicasAssignments={}, configs={cleanup.policy=compact})
- kafka_1 | org.apache.kafka.common.errors.TopicExistsException: Topic 'my_connect_offsets' already exists.
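This TopicExistsException is benign: on startup the Connect worker tries to create its internal offsets topic and simply continues when the broker reports it already exists (here, because the stack was restarted against an existing Kafka data volume). If you prefer to avoid the broker-side error log entirely, the topic can be pre-created idempotently — a sketch, assuming a ZooKeeper-based 0.11 deployment reachable at `zookeeper:2181`:

```shell
# Pre-create the Connect offsets topic; --if-not-exists makes this a no-op on restart
kafka-topics.sh --create --if-not-exists \
  --zookeeper zookeeper:2181 \
  --topic my_connect_offsets \
  --partitions 25 --replication-factor 1 \
  --config cleanup.policy=compact
```

The same applies to the `connect-status` TopicExistsException further down in this log.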
- connect_1 | 2017-12-20 09:01:14,510 INFO || ProducerConfig values:
- connect_1 | acks = all
- connect_1 | batch.size = 16384
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | buffer.memory = 33554432
- connect_1 | client.id =
- connect_1 | compression.type = none
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.idempotence = false
- connect_1 | interceptor.classes = null
- connect_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | linger.ms = 0
- connect_1 | max.block.ms = 60000
- connect_1 | max.in.flight.requests.per.connection = 1
- connect_1 | max.request.size = 1048576
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- connect_1 | receive.buffer.bytes = 32768
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 30000
- connect_1 | retries = 2147483647
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | transaction.timeout.ms = 60000
- connect_1 | transactional.id = null
- connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,548 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,549 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,549 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,553 INFO || ConsumerConfig values:
- connect_1 | auto.commit.interval.ms = 5000
- connect_1 | auto.offset.reset = earliest
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | check.crcs = true
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.auto.commit = false
- connect_1 | exclude.internal.topics = true
- connect_1 | fetch.max.bytes = 52428800
- connect_1 | fetch.max.wait.ms = 500
- connect_1 | fetch.min.bytes = 1
- connect_1 | group.id = 1
- connect_1 | heartbeat.interval.ms = 3000
- connect_1 | interceptor.classes = null
- connect_1 | internal.leave.group.on.close = true
- connect_1 | isolation.level = read_uncommitted
- connect_1 | key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | max.partition.fetch.bytes = 1048576
- connect_1 | max.poll.interval.ms = 300000
- connect_1 | max.poll.records = 500
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 305000
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | session.timeout.ms = 10000
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,583 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,595 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,596 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,598 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,598 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,598 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,598 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,599 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,599 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,599 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,599 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,600 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,600 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,600 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,600 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,601 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,601 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,677 INFO || Discovered coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) for group 1. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:14,779 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:14,779 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:14,779 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
- connect_1 | 2017-12-20 09:01:14,780 INFO || Worker started [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:14,781 INFO || Starting KafkaBasedLog with topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:14,782 INFO || AdminClientConfig values:
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 300000
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 120000
- connect_1 | retries = 5
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,792 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,793 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,793 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,794 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,794 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,794 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,794 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,794 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,795 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,795 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,795 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,795 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,795 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,796 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,796 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,796 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,797 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,797 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,797 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,798 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,798 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,798 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,798 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:14,799 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,799 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- kafka_1 | 2017-12-20 09:01:14,899 - INFO [kafka-request-handler-3:Logging$class@80] - [Admin Manager on Broker 1]: Error processing create topic request for topic connect-status with arguments (numPartitions=5, replicationFactor=1, replicasAssignments={}, configs={cleanup.policy=compact})
- kafka_1 | org.apache.kafka.common.errors.TopicExistsException: Topic 'connect-status' already exists.
- connect_1 | 2017-12-20 09:01:14,906 INFO || ProducerConfig values:
- connect_1 | acks = all
- connect_1 | batch.size = 16384
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | buffer.memory = 33554432
- connect_1 | client.id =
- connect_1 | compression.type = none
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.idempotence = false
- connect_1 | interceptor.classes = null
- connect_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- connect_1 | linger.ms = 0
- connect_1 | max.block.ms = 60000
- connect_1 | max.in.flight.requests.per.connection = 1
- connect_1 | max.request.size = 1048576
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- connect_1 | receive.buffer.bytes = 32768
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 30000
- connect_1 | retries = 0
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | transaction.timeout.ms = 60000
- connect_1 | transactional.id = null
- connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,913 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,914 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,914 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,914 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,915 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,915 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,915 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,915 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,915 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,916 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,916 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,916 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,916 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,917 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,917 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,917 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,917 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,918 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,918 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,918 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,918 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,919 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,919 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:14,919 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,919 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,920 INFO || ConsumerConfig values:
- connect_1 | auto.commit.interval.ms = 5000
- connect_1 | auto.offset.reset = earliest
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | check.crcs = true
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.auto.commit = false
- connect_1 | exclude.internal.topics = true
- connect_1 | fetch.max.bytes = 52428800
- connect_1 | fetch.max.wait.ms = 500
- connect_1 | fetch.min.bytes = 1
- connect_1 | group.id = 1
- connect_1 | heartbeat.interval.ms = 3000
- connect_1 | interceptor.classes = null
- connect_1 | internal.leave.group.on.close = true
- connect_1 | isolation.level = read_uncommitted
- connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- connect_1 | max.partition.fetch.bytes = 1048576
- connect_1 | max.poll.interval.ms = 300000
- connect_1 | max.poll.records = 500
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 305000
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | session.timeout.ms = 10000
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,924 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:14,925 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,925 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:14,965 INFO || Discovered coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) for group 1. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,057 INFO || Finished reading KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:15,057 INFO || Started KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:15,057 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
- connect_1 | 2017-12-20 09:01:15,057 INFO || Starting KafkaBasedLog with topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:15,057 INFO || AdminClientConfig values:
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 300000
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 120000
- connect_1 | retries = 5
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,058 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
- connect_1 | 2017-12-20 09:01:15,059 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,059 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | Dec 20, 2017 9:01:15 AM org.glassfish.jersey.internal.Errors logErrors
- connect_1 | WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
- connect_1 | WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
- connect_1 | WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
- connect_1 | WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
- connect_1 |
- connect_1 | 2017-12-20 09:01:15,153 INFO || Started o.e.j.s.ServletContextHandler@5df0e5f0{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
- connect_1 | 2017-12-20 09:01:15,163 INFO || Started ServerConnector@5125f46e{HTTP/1.1}{172.19.0.5:8083} [org.eclipse.jetty.server.ServerConnector]
- connect_1 | 2017-12-20 09:01:15,163 INFO || Started @6942ms [org.eclipse.jetty.server.Server]
- connect_1 | 2017-12-20 09:01:15,164 INFO || REST server listening at http://172.19.0.5:8083/, advertising URL http://172.19.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
- connect_1 | 2017-12-20 09:01:15,164 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
- kafka_1 | 2017-12-20 09:01:15,165 - INFO [kafka-request-handler-3:Logging$class@80] - [Admin Manager on Broker 1]: Error processing create topic request for topic my_connect_configs with arguments (numPartitions=1, replicationFactor=1, replicasAssignments={}, configs={cleanup.policy=compact})
- kafka_1 | org.apache.kafka.common.errors.TopicExistsException: Topic 'my_connect_configs' already exists.
- connect_1 | 2017-12-20 09:01:15,167 INFO || ProducerConfig values:
- connect_1 | acks = all
- connect_1 | batch.size = 16384
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | buffer.memory = 33554432
- connect_1 | client.id =
- connect_1 | compression.type = none
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.idempotence = false
- connect_1 | interceptor.classes = null
- connect_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- connect_1 | linger.ms = 0
- connect_1 | max.block.ms = 60000
- connect_1 | max.in.flight.requests.per.connection = 1
- connect_1 | max.request.size = 1048576
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- connect_1 | receive.buffer.bytes = 32768
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 30000
- connect_1 | retries = 2147483647
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | transaction.timeout.ms = 60000
- connect_1 | transactional.id = null
- connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,175 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,176 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,176 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,177 INFO || ConsumerConfig values:
- connect_1 | auto.commit.interval.ms = 5000
- connect_1 | auto.offset.reset = earliest
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | check.crcs = true
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.auto.commit = false
- connect_1 | exclude.internal.topics = true
- connect_1 | fetch.max.bytes = 52428800
- connect_1 | fetch.max.wait.ms = 500
- connect_1 | fetch.min.bytes = 1
- connect_1 | group.id = 1
- connect_1 | heartbeat.interval.ms = 3000
- connect_1 | interceptor.classes = null
- connect_1 | internal.leave.group.on.close = true
- connect_1 | isolation.level = read_uncommitted
- connect_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- connect_1 | max.partition.fetch.bytes = 1048576
- connect_1 | max.poll.interval.ms = 300000
- connect_1 | max.poll.records = 500
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 305000
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | session.timeout.ms = 10000
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,180 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,181 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,182 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,183 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,201 INFO || Discovered coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) for group 1. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,212 INFO || Removed connector customer-connector due to null configuration. This is usually intentional and does not indicate an issue. [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
- connect_1 | 2017-12-20 09:01:15,213 INFO || Removed connector customer-connector due to null configuration. This is usually intentional and does not indicate an issue. [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
- connect_1 | 2017-12-20 09:01:15,218 INFO || Finished reading KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:15,218 INFO || Started KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog]
- connect_1 | 2017-12-20 09:01:15,218 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
- connect_1 | 2017-12-20 09:01:15,218 INFO || Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,223 INFO || Discovered coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) for group 1. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,227 INFO || (Re-)joining group 1 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- kafka_1 | 2017-12-20 09:01:15,238 - INFO [kafka-request-handler-0:Logging$class@72] - [GroupCoordinator 1]: Preparing to rebalance group 1 with old generation 39 (__consumer_offsets-49)
- kafka_1 | 2017-12-20 09:01:15,244 - INFO [executor-Rebalance:Logging$class@72] - [GroupCoordinator 1]: Stabilized group 1 generation 40 (__consumer_offsets-49)
- kafka_1 | 2017-12-20 09:01:15,255 - INFO [kafka-request-handler-1:Logging$class@72] - [GroupCoordinator 1]: Assignment received from leader for group 1 for generation 40
- kafka_1 | 2017-12-20 09:01:15,269 - INFO [kafka-request-handler-1:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:15, offset:39}, Current: {epoch:14, offset37} for Partition: __consumer_offsets-49. Cache now contains 15 entries.
- connect_1 | 2017-12-20 09:01:15,297 INFO || Successfully joined group 1 with generation 40 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,297 INFO || Joined group and got assignment: Assignment{error=0, leader='connect-1-abac49d6-6e86-49c0-959a-9843d67bf716', leaderUrl='http://172.19.0.5:8083/', offset=16, connectorIds=[jdbc-sink, customer-connector], taskIds=[jdbc-sink-0, customer-connector-0]} [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,298 WARN || Catching up to assignment's config offset. [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,298 INFO || Current config state offset -1 is behind group assignment 16, reading to end of config log [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,711 INFO || Finished reading to end of log and updated config snapshot, new config log offset: 16 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,712 INFO || Starting connectors and tasks using config offset 16 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,713 INFO || Starting connector jdbc-sink [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,713 INFO || Starting task jdbc-sink-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,714 INFO || Creating task jdbc-sink-0 [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,713 INFO || Starting connector customer-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,715 INFO || Starting task customer-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,715 INFO || Creating task customer-connector-0 [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,715 INFO || ConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | transforms = [unwrap]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,715 INFO || ConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,715 INFO || ConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | transforms = [unwrap]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,716 INFO || ConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,719 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | transforms.route.regex = ([^.]+)\.([^.]+)\.([^.]+)
- connect_1 | transforms.route.replacement = $3
- connect_1 | transforms.route.type = class org.apache.kafka.connect.transforms.RegexRouter
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,719 INFO || Creating connector customer-connector of type io.debezium.connector.postgresql.PostgresConnector [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,719 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | transforms.route.regex = ([^.]+)\.([^.]+)\.([^.]+)
- connect_1 | transforms.route.replacement = $3
- connect_1 | transforms.route.type = class org.apache.kafka.connect.transforms.RegexRouter
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
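The `route` transform above is a `RegexRouter` that strips a three-part topic name down to its last segment. A minimal sketch of what that regex/replacement pair does (the input topic `dbserver1.public.customers` is an assumed example, built from the `dbserver1` server name, `public` schema, and `customers` table seen elsewhere in this log; note `$3` is Java regex replacement syntax, written `\3` in Python):

```python
import re

# Same pattern as transforms.route.regex in the logged config.
regex = re.compile(r"([^.]+)\.([^.]+)\.([^.]+)")

# Hypothetical Debezium topic name: <server>.<schema>.<table>.
topic = "dbserver1.public.customers"

# RegexRouter rewrites the whole topic name with the replacement,
# here keeping only the third capture group (the table name).
routed = regex.sub(r"\3", topic)
print(routed)  # -> customers
```

This is why the sink side only ever sees a topic called `customers`, matching the `topics = [customers]` in the SinkConnectorConfig further down.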
- connect_1 | 2017-12-20 09:01:15,721 INFO || Instantiated connector customer-connector with version 0.6.2 of type class io.debezium.connector.postgresql.PostgresConnector [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,732 INFO || TaskConfig values:
- connect_1 | task.class = class io.debezium.connector.postgresql.PostgresConnectorTask
- connect_1 | [org.apache.kafka.connect.runtime.TaskConfig]
- connect_1 | 2017-12-20 09:01:15,732 INFO || Instantiated task customer-connector-0 with version 0.6.2 of type io.debezium.connector.postgresql.PostgresConnectorTask [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,735 INFO || ProducerConfig values:
- connect_1 | acks = all
- connect_1 | batch.size = 16384
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | buffer.memory = 33554432
- connect_1 | client.id =
- connect_1 | compression.type = none
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.idempotence = false
- connect_1 | interceptor.classes = null
- connect_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | linger.ms = 0
- connect_1 | max.block.ms = 9223372036854775807
- connect_1 | max.in.flight.requests.per.connection = 1
- connect_1 | max.request.size = 1048576
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- connect_1 | receive.buffer.bytes = 32768
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 2147483647
- connect_1 | retries = 2147483647
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | transaction.timeout.ms = 60000
- connect_1 | transactional.id = null
- connect_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
- connect_1 | [org.apache.kafka.clients.producer.ProducerConfig]
- connect_1 | 2017-12-20 09:01:15,744 INFO || Finished creating connector customer-connector [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,749 INFO || SourceConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- kafka_1 | 2017-12-20 09:01:15,749 - INFO [kafka-request-handler-3:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:15, offset:114}, Current: {epoch:14, offset108} for Partition: connect-status-3. Cache now contains 12 entries.
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.SourceConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,750 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.debezium.connector.postgresql.PostgresConnector
- connect_1 | key.converter = null
- connect_1 | name = customer-connector
- connect_1 | tasks.max = 1
- connect_1 | transforms = [route]
- connect_1 | transforms.route.regex = ([^.]+)\.([^.]+)\.([^.]+)
- connect_1 | transforms.route.replacement = $3
- connect_1 | transforms.route.type = class org.apache.kafka.connect.transforms.RegexRouter
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,762 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | transforms = [unwrap]
- connect_1 | transforms.unwrap.drop.deletes = true
- connect_1 | transforms.unwrap.drop.tombstones = true
- connect_1 | transforms.unwrap.type = class io.debezium.transforms.UnwrapFromEnvelope
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,763 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | transforms = [unwrap]
- connect_1 | transforms.unwrap.drop.deletes = true
- connect_1 | transforms.unwrap.drop.tombstones = true
- connect_1 | transforms.unwrap.type = class io.debezium.transforms.UnwrapFromEnvelope
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,763 INFO || Creating connector jdbc-sink of type io.confluent.connect.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,764 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,764 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,765 INFO || TaskConfig values:
- connect_1 | task.class = class io.confluent.connect.jdbc.sink.JdbcSinkTask
- connect_1 | [org.apache.kafka.connect.runtime.TaskConfig]
- connect_1 | 2017-12-20 09:01:15,765 INFO || Instantiated task jdbc-sink-0 with version null of type io.confluent.connect.jdbc.sink.JdbcSinkTask [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,766 INFO || Instantiated connector jdbc-sink with version 3.3.0 of type class io.confluent.connect.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,768 INFO || Finished creating connector jdbc-sink [org.apache.kafka.connect.runtime.Worker]
- connect_1 | 2017-12-20 09:01:15,771 INFO || SinkConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | topics = [customers]
- connect_1 | transforms = [unwrap]
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.SinkConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,772 INFO || EnrichedConnectorConfig values:
- connect_1 | connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
- connect_1 | key.converter = null
- connect_1 | name = jdbc-sink
- connect_1 | tasks.max = 1
- connect_1 | topics = [customers]
- connect_1 | transforms = [unwrap]
- connect_1 | transforms.unwrap.drop.deletes = true
- connect_1 | transforms.unwrap.drop.tombstones = true
- connect_1 | transforms.unwrap.type = class io.debezium.transforms.UnwrapFromEnvelope
- connect_1 | value.converter = null
- connect_1 | [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
- connect_1 | 2017-12-20 09:01:15,773 INFO || Setting task configurations for 1 workers. [io.confluent.connect.jdbc.JdbcSinkConnector]
- connect_1 | 2017-12-20 09:01:15,799 INFO || ConsumerConfig values:
- connect_1 | auto.commit.interval.ms = 5000
- connect_1 | auto.offset.reset = earliest
- connect_1 | bootstrap.servers = [kafka:9092]
- connect_1 | check.crcs = true
- connect_1 | client.id =
- connect_1 | connections.max.idle.ms = 540000
- connect_1 | enable.auto.commit = false
- connect_1 | exclude.internal.topics = true
- connect_1 | fetch.max.bytes = 52428800
- connect_1 | fetch.max.wait.ms = 500
- connect_1 | fetch.min.bytes = 1
- connect_1 | group.id = connect-jdbc-sink
- connect_1 | heartbeat.interval.ms = 3000
- connect_1 | interceptor.classes = null
- connect_1 | internal.leave.group.on.close = true
- connect_1 | isolation.level = read_uncommitted
- connect_1 | key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | max.partition.fetch.bytes = 1048576
- connect_1 | max.poll.interval.ms = 300000
- connect_1 | max.poll.records = 500
- connect_1 | metadata.max.age.ms = 300000
- connect_1 | metric.reporters = []
- connect_1 | metrics.num.samples = 2
- connect_1 | metrics.recording.level = INFO
- connect_1 | metrics.sample.window.ms = 30000
- connect_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- connect_1 | receive.buffer.bytes = 65536
- connect_1 | reconnect.backoff.max.ms = 1000
- connect_1 | reconnect.backoff.ms = 50
- connect_1 | request.timeout.ms = 305000
- connect_1 | retry.backoff.ms = 100
- connect_1 | sasl.jaas.config = null
- connect_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- connect_1 | sasl.kerberos.min.time.before.relogin = 60000
- connect_1 | sasl.kerberos.service.name = null
- connect_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- connect_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- connect_1 | sasl.mechanism = GSSAPI
- connect_1 | security.protocol = PLAINTEXT
- connect_1 | send.buffer.bytes = 131072
- connect_1 | session.timeout.ms = 10000
- connect_1 | ssl.cipher.suites = null
- connect_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- connect_1 | ssl.endpoint.identification.algorithm = null
- connect_1 | ssl.key.password = null
- connect_1 | ssl.keymanager.algorithm = SunX509
- connect_1 | ssl.keystore.location = null
- connect_1 | ssl.keystore.password = null
- connect_1 | ssl.keystore.type = JKS
- connect_1 | ssl.protocol = TLS
- connect_1 | ssl.provider = null
- connect_1 | ssl.secure.random.implementation = null
- connect_1 | ssl.trustmanager.algorithm = PKIX
- connect_1 | ssl.truststore.location = null
- connect_1 | ssl.truststore.password = null
- connect_1 | ssl.truststore.type = JKS
- connect_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- connect_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
- connect_1 | 2017-12-20 09:01:15,813 INFO || Kafka version : 0.11.0.1 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,813 INFO || Kafka commitId : c2a0d5f9b1f45bf5 [org.apache.kafka.common.utils.AppInfoParser]
- connect_1 | 2017-12-20 09:01:15,824 INFO || Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
- connect_1 | 2017-12-20 09:01:15,825 INFO || Starting task [io.confluent.connect.jdbc.sink.JdbcSinkTask]
- kafka_1 | 2017-12-20 09:01:15,825 - INFO [kafka-request-handler-7:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:15, offset:35}, Current: {epoch:14, offset33} for Partition: connect-status-2. Cache now contains 12 entries.
- connect_1 | 2017-12-20 09:01:15,844 INFO || JdbcSinkConfig values:
- connect_1 | auto.create = true
- connect_1 | auto.evolve = false
- connect_1 | batch.size = 3000
- connect_1 | connection.password = null
- connect_1 | connection.url = jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw
- connect_1 | connection.user = null
- connect_1 | fields.whitelist = []
- connect_1 | insert.mode = upsert
- connect_1 | max.retries = 10
- connect_1 | pk.fields = [id]
- connect_1 | pk.mode = record_value
- connect_1 | retry.backoff.ms = 3000
- connect_1 | table.name.format = ${topic}
- connect_1 | [io.confluent.connect.jdbc.sink.JdbcSinkConfig]
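The JdbcSinkConfig values above imply a connector registration roughly like the following. This is a sketch, not the payload actually used: every config value is copied from the log dump, but the JSON shape is just the standard Kafka Connect `POST /connectors` body, and the `localhost:8083` endpoint in the comment is an assumption (the worker advertises `http://172.19.0.5:8083/` in this log):

```python
import json

# Reconstructed from the logged JdbcSinkConfig / SinkConnectorConfig values.
jdbc_sink = {
    "name": "jdbc-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "customers",
        "connection.url": "jdbc:postgresql://postgres:5432/inventory"
                          "?user=postgresuser&password=postgrespw",
        "auto.create": "true",
        "insert.mode": "upsert",
        "pk.mode": "record_value",
        "pk.fields": "id",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
        "transforms.unwrap.drop.tombstones": "true",
    },
}

# A body like this would be POSTed to the Connect REST API, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        --data @jdbc-sink.json http://localhost:8083/connectors
print(json.dumps(jdbc_sink, indent=2))
```

With `auto.create = true` and `pk.mode = record_value`, the sink creates the `customers` table itself (keyed on `id`) if it is missing, which is what the `Checking table:customers exists` / `table:customers is present` lines below are verifying.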
- connect_1 | 2017-12-20 09:01:15,856 INFO || Initializing writer using SQL dialect: PostgreSqlDialect [io.confluent.connect.jdbc.sink.JdbcSinkTask]
- connect_1 | 2017-12-20 09:01:15,861 INFO || Sink task WorkerSinkTask{id=jdbc-sink-0} finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
- connect_1 | 2017-12-20 09:01:15,875 INFO || Discovered coordinator 172.19.0.4:9092 (id: 2147483646 rack: null) for group connect-jdbc-sink. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,889 INFO || Revoking previously assigned partitions [] for group connect-jdbc-sink [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
- connect_1 | 2017-12-20 09:01:15,889 INFO || (Re-)joining group connect-jdbc-sink [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- kafka_1 | 2017-12-20 09:01:15,892 - INFO [kafka-request-handler-5:Logging$class@72] - [GroupCoordinator 1]: Preparing to rebalance group connect-jdbc-sink with old generation 36 (__consumer_offsets-16)
- kafka_1 | 2017-12-20 09:01:15,893 - INFO [executor-Rebalance:Logging$class@72] - [GroupCoordinator 1]: Stabilized group connect-jdbc-sink generation 37 (__consumer_offsets-16)
- kafka_1 | 2017-12-20 09:01:15,894 - INFO [kafka-request-handler-2:Logging$class@72] - [GroupCoordinator 1]: Assignment received from leader for group connect-jdbc-sink for generation 37
- kafka_1 | 2017-12-20 09:01:15,895 - INFO [kafka-request-handler-2:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:15, offset:41}, Current: {epoch:14, offset39} for Partition: __consumer_offsets-16. Cache now contains 12 entries.
- connect_1 | 2017-12-20 09:01:15,900 INFO || Successfully joined group connect-jdbc-sink with generation 37 [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
- connect_1 | 2017-12-20 09:01:15,900 INFO || Setting newly assigned partitions [customers-0] for group connect-jdbc-sink [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
- connect_1 | 2017-12-20 09:01:16,000 INFO || Checking table:customers exists for product:PostgreSQL schema:public catalog: [io.confluent.connect.jdbc.sink.DbMetadataQueries]
- connect_1 | 2017-12-20 09:01:16,005 INFO || product:PostgreSQL schema:public catalog:inventory -- table:customers is present [io.confluent.connect.jdbc.sink.DbMetadataQueries]
- connect_1 | 2017-12-20 09:01:16,005 INFO || Querying column metadata for product:PostgreSQL schema:public catalog:inventory table:customers [io.confluent.connect.jdbc.sink.DbMetadataQueries]
- connect_1 | 2017-12-20 09:01:16,415 INFO Postgres|dbserver1|postgres-connector-task user 'moneradbuser' connected to database 'moneradb' on PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit with roles:
- connect_1 | role 'rds_replication' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
- connect_1 | role 'rds_superuser' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
- connect_1 | role 'pg_signal_backend' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
- connect_1 | role 'moneradbuser' [superuser: false, replication: false, inherit: true, create role: true, create db: true, can log in: true] [io.debezium.connector.postgresql.PostgresConnectorTask]
- connect_1 | 2017-12-20 09:01:16,417 INFO Postgres|dbserver1|postgres-connector-task Found previous offset source_info[server='dbserver1', lsn=1/AC000458, txId=2323, useconds=1513760428454000, snapshot=false] [io.debezium.connector.postgresql.PostgresConnectorTask]
- connect_1 | 2017-12-20 09:01:16,417 INFO Postgres|dbserver1|postgres-connector-task Taking a new snapshot as per configuration [io.debezium.connector.postgresql.PostgresConnectorTask]
- connect_1 | 2017-12-20 09:01:16,454 INFO || Source task WorkerSourceTask{id=customer-connector-0} finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
- connect_1 | 2017-12-20 09:01:16,456 INFO Postgres|dbserver1|records-snapshot-producer Step 0: disabling autocommit [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,457 INFO Postgres|dbserver1|records-snapshot-producer Step 1: starting transaction and refreshing the DB schemas for database 'moneradb' and user 'moneradbuser' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,911 INFO Postgres|dbserver1|records-snapshot-producer Step 2: locking each of the database tables, waiting a maximum of '10.0' seconds for each lock [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,966 INFO Postgres|dbserver1|records-snapshot-producer read xlogStart at '1/AC000660' from transaction '2324' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,966 INFO Postgres|dbserver1|records-snapshot-producer Step 3: reading and exporting the contents of each table [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,967 INFO Postgres|dbserver1|records-snapshot-producer exporting data from table 'public.customers' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,983 INFO Postgres|dbserver1|records-snapshot-producer finished exporting '5' records for 'public.customers'; total duration '00:00:00.015' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,983 INFO Postgres|dbserver1|records-snapshot-producer Step 4: committing transaction '2324' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,984 INFO Postgres|dbserver1|records-snapshot-producer Step 5: sending the last snapshot record [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,985 INFO Postgres|dbserver1|records-snapshot-producer Snapshot completed in '00:00:00.534' [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:16,986 INFO Postgres|dbserver1|records-snapshot-producer Snapshot finished, continuing streaming changes from 1/AC000660 [io.debezium.connector.postgresql.RecordsSnapshotProducer]
- connect_1 | 2017-12-20 09:01:17,070 INFO Postgres|dbserver1|records-stream-producer REPLICA IDENTITY for 'public.customers' is 'FULL'; UPDATE AND DELETE events will contain the previous values of all the columns [io.debezium.connector.postgresql.PostgresSchema]
- kafka_1 | 2017-12-20 09:01:17,456 - INFO [kafka-request-handler-0:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:12, offset:51}, Current: {epoch:11, offset46} for Partition: customers-0. Cache now contains 7 entries.
- kafka_1 | 2017-12-20 09:02:15,781 - INFO [kafka-request-handler-5:Logging$class@72] - Updated PartitionLeaderEpoch. New: {epoch:15, offset:9}, Current: {epoch:14, offset8} for Partition: my_connect_offsets-20. Cache now contains 7 entries.
- connect_1 | 2017-12-20 09:02:15,786 INFO || Finished WorkerSourceTask{id=customer-connector-0} commitOffsets successfully in 12 ms [org.apache.kafka.connect.runtime.WorkerSourceTask]
- connect_1 | 2017-12-20 09:02:15,799 INFO || WorkerSinkTask{id=jdbc-sink-0} Committing offsets asynchronously using sequence number 1: {customers-0=OffsetAndMetadata{offset=56, metadata=''}} [org.apache.kafka.connect.runtime.WorkerSinkTask]