aironman

docker-compose demo-kafka-elastic output

May 14th, 2018
:demo-2 aironman$ clear && docker-compose up

WARNING: Some services (demo-kafka-elastic, demo-quartz) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Pulling kibana (seeruk/docker-kibana-sense:4.5)...
4.5: Pulling from seeruk/docker-kibana-sense
357ea8c3d80b: Already exists
9d99c1434c75: Pull complete
aa9e96f4d5f4: Pull complete
393684003c1e: Pull complete
e2578dae99ba: Pull complete
e93da2cb19e9: Pull complete
b11b2a2ce046: Pull complete
136e77e2bc04: Pull complete
c90792d80587: Pull complete
e7d4af8bae7c: Pull complete
Digest: sha256:1c7c6b0a027078c5a50a1a418a2bcf06e1a2d5b3636d62bbf00ca0a93d05d7be
Status: Downloaded newer image for seeruk/docker-kibana-sense:4.5
Starting demo-2_zookeeper_1 ... done
Starting demo-2_kafka_1 ... done
Starting demo-2_demo-quartz_1 ... done
Starting demo-2_demo-kafka-elastic_1 ... done
Recreating demo-2_kibana_1 ... done
Starting demo-2_elastic_1 ... done
Attaching to demo-2_demo-quartz_1, demo-2_elastic_1, demo-2_kibana_1, demo-2_zookeeper_1, demo-2_kafka_1, demo-2_demo-kafka-elastic_1
kafka_1 | Excluding KAFKA_HOME from broker config
kafka_1 | [Configuring] 'port' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties'
zookeeper_1 | ZooKeeper JMX enabled by default
zookeeper_1 | Using config: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
kafka_1 | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
kafka_1 | Excluding KAFKA_VERSION from broker config
kafka_1 | [Configuring] 'listeners' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
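The `[Configuring] '<key>' in server.properties` lines come from the Kafka image's entrypoint templating `server.properties` out of `KAFKA_*` environment variables before starting the broker. A minimal sketch of that replace-or-append step (hypothetical helper, not the image's actual shell script):

```python
def set_property(text: str, key: str, value: str) -> str:
    """Overwrite `key=...` in a .properties payload, appending it if absent.

    Illustrates the '[Configuring] <key>' step the Kafka image logs at
    startup; this is a sketch, not the image's real entrypoint code.
    """
    lines = text.splitlines()
    prefix = key + "="
    for i, line in enumerate(lines):
        # Overwrite an existing entry, uncommenting it if necessary.
        if line.startswith(prefix) or line.startswith("#" + prefix):
            lines[i] = prefix + value
            return "\n".join(lines)
    lines.append(prefix + value)  # key not present yet: append it
    return "\n".join(lines)

props = "broker.id=0\n#listeners=PLAINTEXT://:9092"
props = set_property(props, "broker.id", "1")
props = set_property(props, "advertised.listeners", "PLAINTEXT://kafka:9092")
```

The same idiom covers every key listed above ('port', 'listeners', 'zookeeper.connect', 'log.dirs', ...), which is why each one gets its own `[Configuring]` line.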
elastic_1 | [2018-05-14 16:35:54,669][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elastic_1 | [2018-05-14 16:35:55,177][INFO ][node ] [Proteus] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
elastic_1 | [2018-05-14 16:35:55,177][INFO ][node ] [Proteus] initializing ...
zookeeper_1 | 2018-05-14 16:35:55,415 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
zookeeper_1 | 2018-05-14 16:35:55,432 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1 | 2018-05-14 16:35:55,432 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper_1 | 2018-05-14 16:35:55,435 [myid:] - WARN [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running in standalone mode
zookeeper_1 | 2018-05-14 16:35:55,435 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper_1 | 2018-05-14 16:35:55,462 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
zookeeper_1 | 2018-05-14 16:35:55,463 [myid:] - INFO [main:ZooKeeperServerMain@96] - Starting server
zookeeper_1 | 2018-05-14 16:35:55,471 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper_1 | 2018-05-14 16:35:55,475 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper_1 | 2018-05-14 16:35:55,475 [myid:] - INFO [main:Environment@100] - Server environment:host.name=dfb8ed0eb666
zookeeper_1 | 2018-05-14 16:35:55,476 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65
zookeeper_1 | 2018-05-14 16:35:55,476 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1 | 2018-05-14 16:35:55,477 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
zookeeper_1 | 2018-05-14 16:35:55,478 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.9/bin/../build/classes:/opt/zookeeper-3.4.9/bin/../build/lib/*.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/opt/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.9/bin/../conf:
zookeeper_1 | 2018-05-14 16:35:55,483 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper_1 | 2018-05-14 16:35:55,483 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:sense","info"],"pid":12,"name":"plugin:sense","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:kibana","info"],"pid":12,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
zookeeper_1 | 2018-05-14 16:35:55,487 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA>
zookeeper_1 | 2018-05-14 16:35:55,493 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1 | 2018-05-14 16:35:55,493 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1 | 2018-05-14 16:35:55,494 [myid:] - INFO [main:Environment@100] - Server environment:os.version=4.9.87-linuxkit-aufs
zookeeper_1 | 2018-05-14 16:35:55,494 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root
zookeeper_1 | 2018-05-14 16:35:55,495 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root
zookeeper_1 | 2018-05-14 16:35:55,495 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.9
zookeeper_1 | 2018-05-14 16:35:55,520 [myid:] - INFO [main:ZooKeeperServer@815] - tickTime set to 2000
zookeeper_1 | 2018-05-14 16:35:55,521 [myid:] - INFO [main:ZooKeeperServer@824] - minSessionTimeout set to -1
zookeeper_1 | 2018-05-14 16:35:55,522 [myid:] - INFO [main:ZooKeeperServer@833] - maxSessionTimeout set to -1
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:elasticsearch","info"],"pid":12,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["error","elasticsearch"],"pid":12,"message":"Request error, retrying -- connect ECONNREFUSED 172.21.0.5:9200"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":12,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:markdown_vis","info"],"pid":12,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:elasticsearch","error"],"pid":12,"name":"plugin:elasticsearch","state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elastic:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:metric_vis","info"],"pid":12,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:spyModes","info"],"pid":12,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
zookeeper_1 | 2018-05-14 16:35:55,580 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:statusPage","info"],"pid":12,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:table_vis","info"],"pid":12,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
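The ECONNREFUSED / "No living connections" / red-status lines above are expected on a cold start: Kibana comes up before Elasticsearch has bound port 9200 and simply retries until the connection succeeds. The same wait-until-up loop, sketched with the stdlib only (hypothetical helper, not Kibana's actual plugin code):

```python
import socket
import time

def wait_for_port(host: str, port: int,
                  timeout_s: float = 60.0, interval_s: float = 2.5) -> bool:
    """Poll until a TCP connect to host:port succeeds, the way Kibana's
    elasticsearch plugin keeps retrying http://elastic:9200 at startup."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                return True          # service is accepting connections
        except OSError:
            time.sleep(interval_s)   # ECONNREFUSED and friends: retry
    return False                     # gave up; caller decides what to do
```

In the compose setup this is the difference between a transient red status and a real failure: the status flips back to green as soon as one poll succeeds.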
demo-quartz_1 |
demo-quartz_1 |   .   ____          _            __ _ _
demo-quartz_1 |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
demo-quartz_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
demo-quartz_1 |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
demo-quartz_1 |   '  |____| .__|_| |_|_| |_\__, | / / / /
demo-quartz_1 |  =========|_|==============|___/=/_/_/_/
demo-quartz_1 |  :: Spring Boot ::       (v1.5.12.RELEASE)
demo-quartz_1 |
elastic_1 | [2018-05-14 16:35:56,637][INFO ][plugins ] [Proteus] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
kafka_1 | [2018-05-14 16:35:56,653] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
elastic_1 | [2018-05-14 16:35:56,738][INFO ][env ] [Proteus] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [39.7gb], net total_space [59gb], spins? [possibly], types [ext4]
elastic_1 | [2018-05-14 16:35:56,738][INFO ][env ] [Proteus] heap size [990.7mb], compressed ordinary object pointers [true]
demo-kafka-elastic_1 |
demo-kafka-elastic_1 |   .   ____          _            __ _ _
demo-kafka-elastic_1 |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
demo-kafka-elastic_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
demo-kafka-elastic_1 |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
demo-kafka-elastic_1 |   '  |____| .__|_| |_|_| |_\__, | / / / /
demo-kafka-elastic_1 |  =========|_|==============|___/=/_/_/_/
demo-kafka-elastic_1 |  :: Spring Boot ::       (v1.5.13.RELEASE)
demo-kafka-elastic_1 |
demo-quartz_1 | web - 2018-05-14 16:35:57,274 [main] INFO c.a.demoquartz.DemoQuartzApplication - Starting DemoQuartzApplication v0.0.1-SNAPSHOT on 85b6c5d32806 with PID 1 (/usr/share/aironman/demo-quartz.jar started by root in /)
demo-quartz_1 | web - 2018-05-14 16:35:57,294 [main] INFO c.a.demoquartz.DemoQuartzApplication - No active profile set, falling back to default profiles: default
demo-kafka-elastic_1 | web - 2018-05-14 16:35:57,378 [main] INFO c.a.demo.DemoKafkaElasticApplication - Starting DemoKafkaElasticApplication v0.0.1-SNAPSHOT on 6c9aaac17b42 with PID 1 (/usr/share/aironman/demo-kafka-elastic.jar started by root in /)
demo-kafka-elastic_1 | web - 2018-05-14 16:35:57,383 [main] INFO c.a.demo.DemoKafkaElasticApplication - No active profile set, falling back to default profiles: default
kafka_1 | [2018-05-14 16:35:58,022] INFO starting (kafka.server.KafkaServer)
kafka_1 | [2018-05-14 16:35:58,025] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka_1 | [2018-05-14 16:35:58,056] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:58+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
kafka_1 | [2018-05-14 16:35:58,084] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,084] INFO Client environment:host.name=5409d6899724 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.version=1.8.0_151 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,086] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.1.0.jar:/opt/kafka/bin/../libs/connect-file-1.1.0.jar:/opt/kafka/bin/../libs/connect-json-1.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.1.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.1.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.4.jar:/opt/kafka/bin/../libs/jackson-core-2.9.4.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.4.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-client-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-http-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-io-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-security-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-server-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-util-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-1.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.1.0.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.2.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.12.4.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.7.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.1.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,088] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,088] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,089] INFO Client environment:os.version=4.9.87-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,089] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,090] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,090] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:58+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
kafka_1 | [2018-05-14 16:35:58,093] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@be64738 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-05-14 16:35:58,130] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka_1 | [2018-05-14 16:35:58,141] INFO Opening socket connection to server demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2018-05-14 16:35:58,172] INFO Socket connection established to demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-05-14 16:35:58,173 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.21.0.2:56606
zookeeper_1 | 2018-05-14 16:35:58,191 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.21.0.2:56606
zookeeper_1 | 2018-05-14 16:35:58,203 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.dc
kafka_1 | [2018-05-14 16:35:58,219] INFO Session establishment complete on server demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181, sessionid = 0x1635f8228c90000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-05-14 16:35:58,223 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x1635f8228c90000 with negotiated timeout 6000 for client /172.21.0.2:56606
kafka_1 | [2018-05-14 16:35:58,227] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1 | 2018-05-14 16:35:58,331 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x1 zxid:0xdd txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
zookeeper_1 | 2018-05-14 16:35:58,356 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x2 zxid:0xde txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper_1 | 2018-05-14 16:35:58,362 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x3 zxid:0xdf txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics
zookeeper_1 | 2018-05-14 16:35:58,365 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x4 zxid:0xe0 txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes
zookeeper_1 | 2018-05-14 16:35:58,369 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x5 zxid:0xe1 txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics
zookeeper_1 | 2018-05-14 16:35:58,373 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x6 zxid:0xe2 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid
zookeeper_1 | 2018-05-14 16:35:58,379 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x7 zxid:0xe3 txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification
zookeeper_1 | 2018-05-14 16:35:58,382 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x8 zxid:0xe4 txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block
zookeeper_1 | 2018-05-14 16:35:58,386 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x9 zxid:0xe5 txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification
zookeeper_1 | 2018-05-14 16:35:58,389 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xa zxid:0xe6 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
zookeeper_1 | 2018-05-14 16:35:58,392 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xb zxid:0xe7 txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode = NodeExists for /config/clients
zookeeper_1 | 2018-05-14 16:35:58,394 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xc zxid:0xe8 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users
zookeeper_1 | 2018-05-14 16:35:58,397 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xd zxid:0xe9 txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers
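The run of INFO-level `KeeperErrorCode = NodeExists` entries above is benign: at startup Kafka unconditionally creates its chroot paths (/consumers, /brokers/ids, ...) and swallows the error when a path survives from a previous run, while ZooKeeper logs the rejected create. That create-if-absent idiom, sketched over an in-memory tree standing in for ZooKeeper's API (both `create` and `ensure_path` here are hypothetical stand-ins):

```python
class NodeExistsError(Exception):
    """Stand-in for ZooKeeper's KeeperErrorCode = NodeExists."""

def create(tree: dict, path: str) -> None:
    """Create one znode in a dict-based tree; fail if it already exists."""
    *parents, leaf = path.strip("/").split("/")
    node = tree
    for part in parents:
        node = node.setdefault(part, {})  # parents created implicitly (sketch)
    if leaf in node:
        raise NodeExistsError(path)
    node[leaf] = {}

def ensure_path(tree: dict, path: str) -> None:
    """Create-if-absent: swallow NodeExistsError, as Kafka does at startup."""
    try:
        create(tree, path)
    except NodeExistsError:
        pass  # the server logs it at INFO; the client treats it as success

tree: dict = {}
for p in ["/consumers", "/brokers/ids", "/brokers/topics", "/config/changes"]:
    ensure_path(tree, p)
    ensure_path(tree, p)  # second attempt hits NodeExists and is ignored
```

So one such line per well-known path on a restarted cluster is the normal case, not an error condition.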
kafka_1 | [2018-05-14 16:35:58,922] INFO Cluster ID = rYF83OU0RCan_mdzL10D5w (kafka.server.KafkaServer)
demo-quartz_1 | web - 2018-05-14 16:35:59,058 [background-preinit] INFO o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.3.6.Final
kafka_1 | [2018-05-14 16:35:59,144] INFO KafkaConfig values:
kafka_1 | advertised.host.name = null
kafka_1 | advertised.listeners = PLAINTEXT://kafka:9092
kafka_1 | advertised.port = null
kafka_1 | alter.config.policy.class.name = null
kafka_1 | alter.log.dirs.replication.quota.window.num = 11
kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1 | authorizer.class.name =
kafka_1 | auto.create.topics.enable = true
kafka_1 | auto.leader.rebalance.enable = true
kafka_1 | background.threads = 10
kafka_1 | broker.id = 1
kafka_1 | broker.id.generation.enable = true
kafka_1 | broker.rack = null
kafka_1 | compression.type = producer
kafka_1 | connections.max.idle.ms = 600000
kafka_1 | controlled.shutdown.enable = true
kafka_1 | controlled.shutdown.max.retries = 3
kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
kafka_1 | controller.socket.timeout.ms = 30000
kafka_1 | create.topic.policy.class.name = null
kafka_1 | default.replication.factor = 1
kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
kafka_1 | delegation.token.expiry.time.ms = 86400000
kafka_1 | delegation.token.master.key = null
kafka_1 | delegation.token.max.lifetime.ms = 604800000
kafka_1 | delete.records.purgatory.purge.interval.requests = 1
kafka_1 | delete.topic.enable = true
kafka_1 | fetch.purgatory.purge.interval.requests = 1000
kafka_1 | group.initial.rebalance.delay.ms = 0
kafka_1 | group.max.session.timeout.ms = 300000
kafka_1 | group.min.session.timeout.ms = 6000
kafka_1 | host.name =
kafka_1 | inter.broker.listener.name = null
kafka_1 | inter.broker.protocol.version = 1.1-IV0
kafka_1 | leader.imbalance.check.interval.seconds = 300
kafka_1 | leader.imbalance.per.broker.percentage = 10
kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1 | listeners = PLAINTEXT://0.0.0.0:9092
kafka_1 | log.cleaner.backoff.ms = 15000
kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
kafka_1 | log.cleaner.delete.retention.ms = 86400000
kafka_1 | log.cleaner.enable = true
kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
kafka_1 | log.cleaner.io.buffer.size = 524288
kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
kafka_1 | log.cleaner.min.compaction.lag.ms = 0
kafka_1 | log.cleaner.threads = 1
kafka_1 | log.cleanup.policy = [delete]
kafka_1 | log.dir = /tmp/kafka-logs
kafka_1 | log.dirs = /kafka/kafka-logs-5409d6899724
kafka_1 | log.flush.interval.messages = 9223372036854775807
kafka_1 | log.flush.interval.ms = null
kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1 | log.index.interval.bytes = 4096
kafka_1 | log.index.size.max.bytes = 10485760
kafka_1 | log.message.format.version = 1.1-IV0
kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1 | log.message.timestamp.type = CreateTime
kafka_1 | log.preallocate = false
kafka_1 | log.retention.bytes = -1
kafka_1 | log.retention.check.interval.ms = 300000
kafka_1 | log.retention.hours = 168
kafka_1 | log.retention.minutes = null
kafka_1 | log.retention.ms = null
kafka_1 | log.roll.hours = 168
kafka_1 | log.roll.jitter.hours = 0
kafka_1 | log.roll.jitter.ms = null
kafka_1 | log.roll.ms = null
kafka_1 | log.segment.bytes = 1073741824
kafka_1 | log.segment.delete.delay.ms = 60000
kafka_1 | max.connections.per.ip = 2147483647
kafka_1 | max.connections.per.ip.overrides =
kafka_1 | max.incremental.fetch.session.cache.slots = 1000
kafka_1 | message.max.bytes = 1000012
kafka_1 | metric.reporters = []
kafka_1 | metrics.num.samples = 2
kafka_1 | metrics.recording.level = INFO
kafka_1 | metrics.sample.window.ms = 30000
kafka_1 | min.insync.replicas = 1
kafka_1 | num.io.threads = 8
kafka_1 | num.network.threads = 3
kafka_1 | num.partitions = 1
kafka_1 | num.recovery.threads.per.data.dir = 1
kafka_1 | num.replica.alter.log.dirs.threads = null
kafka_1 | num.replica.fetchers = 1
kafka_1 | offset.metadata.max.bytes = 4096
kafka_1 | offsets.commit.required.acks = -1
kafka_1 | offsets.commit.timeout.ms = 5000
kafka_1 | offsets.load.buffer.size = 5242880
kafka_1 | offsets.retention.check.interval.ms = 600000
kafka_1 | offsets.retention.minutes = 1440
kafka_1 | offsets.topic.compression.codec = 0
kafka_1 | offsets.topic.num.partitions = 50
kafka_1 | offsets.topic.replication.factor = 1
kafka_1 | offsets.topic.segment.bytes = 104857600
kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1 | password.encoder.iterations = 4096
kafka_1 | password.encoder.key.length = 128
kafka_1 | password.encoder.keyfactory.algorithm = null
kafka_1 | password.encoder.old.secret = null
kafka_1 | password.encoder.secret = null
kafka_1 | port = 9092
kafka_1 | principal.builder.class = null
kafka_1 | producer.purgatory.purge.interval.requests = 1000
kafka_1 | queued.max.request.bytes = -1
kafka_1 | queued.max.requests = 500
kafka_1 | quota.consumer.default = 9223372036854775807
kafka_1 | quota.producer.default = 9223372036854775807
kafka_1 | quota.window.num = 11
kafka_1 | quota.window.size.seconds = 1
kafka_1 | replica.fetch.backoff.ms = 1000
kafka_1 | replica.fetch.max.bytes = 1048576
kafka_1 | replica.fetch.min.bytes = 1
kafka_1 | replica.fetch.response.max.bytes = 10485760
kafka_1 | replica.fetch.wait.max.ms = 500
kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1 | replica.lag.time.max.ms = 10000
kafka_1 | replica.socket.receive.buffer.bytes = 65536
kafka_1 | replica.socket.timeout.ms = 30000
kafka_1 | replication.quota.window.num = 11
kafka_1 | replication.quota.window.size.seconds = 1
kafka_1 | request.timeout.ms = 30000
kafka_1 | reserved.broker.max.id = 1000
kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
kafka_1 | sasl.jaas.config = null
kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1 | sasl.kerberos.service.name = null
kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1 | security.inter.broker.protocol = PLAINTEXT
kafka_1 | socket.receive.buffer.bytes = 102400
kafka_1 | socket.request.max.bytes = 104857600
kafka_1 | socket.send.buffer.bytes = 102400
kafka_1 | ssl.cipher.suites = []
kafka_1 | ssl.client.auth = none
kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1 | ssl.endpoint.identification.algorithm = null
kafka_1 | ssl.key.password = null
kafka_1 | ssl.keymanager.algorithm = SunX509
kafka_1 | ssl.keystore.location = null
kafka_1 | ssl.keystore.password = null
kafka_1 | ssl.keystore.type = JKS
kafka_1 | ssl.protocol = TLS
kafka_1 | ssl.provider = null
kafka_1 | ssl.secure.random.implementation = null
kafka_1 | ssl.trustmanager.algorithm = PKIX
kafka_1 | ssl.truststore.location = null
kafka_1 | ssl.truststore.password = null
kafka_1 | ssl.truststore.type = JKS
kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1 | transaction.max.timeout.ms = 900000
kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1 | transaction.state.log.load.buffer.size = 5242880
kafka_1 | transaction.state.log.min.isr = 1
kafka_1 | transaction.state.log.num.partitions = 50
kafka_1 | transaction.state.log.replication.factor = 1
  313. kafka_1 | transaction.state.log.segment.bytes = 104857600
  314. kafka_1 | transactional.id.expiration.ms = 604800000
  315. kafka_1 | unclean.leader.election.enable = false
  316. kafka_1 | zookeeper.connect = zookeeper:2181
  317. kafka_1 | zookeeper.connection.timeout.ms = 6000
  318. kafka_1 | zookeeper.max.in.flight.requests = 10
  319. kafka_1 | zookeeper.session.timeout.ms = 6000
  320. kafka_1 | zookeeper.set.acl = false
  321. kafka_1 | zookeeper.sync.time.ms = 2000
  322. kafka_1 | (kafka.server.KafkaConfig)
kafka_1 | [2018-05-14 16:35:59,171] INFO KafkaConfig values:
kafka_1 | advertised.host.name = null
kafka_1 | advertised.listeners = PLAINTEXT://kafka:9092
kafka_1 | advertised.port = null
kafka_1 | alter.config.policy.class.name = null
kafka_1 | alter.log.dirs.replication.quota.window.num = 11
kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1 | authorizer.class.name =
kafka_1 | auto.create.topics.enable = true
kafka_1 | auto.leader.rebalance.enable = true
kafka_1 | background.threads = 10
kafka_1 | broker.id = 1
kafka_1 | broker.id.generation.enable = true
kafka_1 | broker.rack = null
kafka_1 | compression.type = producer
kafka_1 | connections.max.idle.ms = 600000
kafka_1 | controlled.shutdown.enable = true
kafka_1 | controlled.shutdown.max.retries = 3
kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
kafka_1 | controller.socket.timeout.ms = 30000
kafka_1 | create.topic.policy.class.name = null
kafka_1 | default.replication.factor = 1
kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
kafka_1 | delegation.token.expiry.time.ms = 86400000
kafka_1 | delegation.token.master.key = null
kafka_1 | delegation.token.max.lifetime.ms = 604800000
kafka_1 | delete.records.purgatory.purge.interval.requests = 1
kafka_1 | delete.topic.enable = true
kafka_1 | fetch.purgatory.purge.interval.requests = 1000
kafka_1 | group.initial.rebalance.delay.ms = 0
kafka_1 | group.max.session.timeout.ms = 300000
kafka_1 | group.min.session.timeout.ms = 6000
kafka_1 | host.name =
kafka_1 | inter.broker.listener.name = null
kafka_1 | inter.broker.protocol.version = 1.1-IV0
kafka_1 | leader.imbalance.check.interval.seconds = 300
kafka_1 | leader.imbalance.per.broker.percentage = 10
kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1 | listeners = PLAINTEXT://0.0.0.0:9092
kafka_1 | log.cleaner.backoff.ms = 15000
kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
kafka_1 | log.cleaner.delete.retention.ms = 86400000
kafka_1 | log.cleaner.enable = true
kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
kafka_1 | log.cleaner.io.buffer.size = 524288
kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
kafka_1 | log.cleaner.min.compaction.lag.ms = 0
kafka_1 | log.cleaner.threads = 1
kafka_1 | log.cleanup.policy = [delete]
kafka_1 | log.dir = /tmp/kafka-logs
kafka_1 | log.dirs = /kafka/kafka-logs-5409d6899724
kafka_1 | log.flush.interval.messages = 9223372036854775807
kafka_1 | log.flush.interval.ms = null
kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1 | log.index.interval.bytes = 4096
kafka_1 | log.index.size.max.bytes = 10485760
kafka_1 | log.message.format.version = 1.1-IV0
kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1 | log.message.timestamp.type = CreateTime
kafka_1 | log.preallocate = false
kafka_1 | log.retention.bytes = -1
kafka_1 | log.retention.check.interval.ms = 300000
kafka_1 | log.retention.hours = 168
kafka_1 | log.retention.minutes = null
kafka_1 | log.retention.ms = null
kafka_1 | log.roll.hours = 168
kafka_1 | log.roll.jitter.hours = 0
kafka_1 | log.roll.jitter.ms = null
kafka_1 | log.roll.ms = null
kafka_1 | log.segment.bytes = 1073741824
kafka_1 | log.segment.delete.delay.ms = 60000
kafka_1 | max.connections.per.ip = 2147483647
kafka_1 | max.connections.per.ip.overrides =
kafka_1 | max.incremental.fetch.session.cache.slots = 1000
kafka_1 | message.max.bytes = 1000012
kafka_1 | metric.reporters = []
kafka_1 | metrics.num.samples = 2
kafka_1 | metrics.recording.level = INFO
kafka_1 | metrics.sample.window.ms = 30000
kafka_1 | min.insync.replicas = 1
kafka_1 | num.io.threads = 8
kafka_1 | num.network.threads = 3
kafka_1 | num.partitions = 1
kafka_1 | num.recovery.threads.per.data.dir = 1
kafka_1 | num.replica.alter.log.dirs.threads = null
kafka_1 | num.replica.fetchers = 1
kafka_1 | offset.metadata.max.bytes = 4096
kafka_1 | offsets.commit.required.acks = -1
kafka_1 | offsets.commit.timeout.ms = 5000
kafka_1 | offsets.load.buffer.size = 5242880
kafka_1 | offsets.retention.check.interval.ms = 600000
kafka_1 | offsets.retention.minutes = 1440
kafka_1 | offsets.topic.compression.codec = 0
kafka_1 | offsets.topic.num.partitions = 50
kafka_1 | offsets.topic.replication.factor = 1
kafka_1 | offsets.topic.segment.bytes = 104857600
kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1 | password.encoder.iterations = 4096
kafka_1 | password.encoder.key.length = 128
kafka_1 | password.encoder.keyfactory.algorithm = null
kafka_1 | password.encoder.old.secret = null
kafka_1 | password.encoder.secret = null
kafka_1 | port = 9092
kafka_1 | principal.builder.class = null
kafka_1 | producer.purgatory.purge.interval.requests = 1000
kafka_1 | queued.max.request.bytes = -1
kafka_1 | queued.max.requests = 500
kafka_1 | quota.consumer.default = 9223372036854775807
kafka_1 | quota.producer.default = 9223372036854775807
kafka_1 | quota.window.num = 11
kafka_1 | quota.window.size.seconds = 1
kafka_1 | replica.fetch.backoff.ms = 1000
kafka_1 | replica.fetch.max.bytes = 1048576
kafka_1 | replica.fetch.min.bytes = 1
kafka_1 | replica.fetch.response.max.bytes = 10485760
kafka_1 | replica.fetch.wait.max.ms = 500
kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1 | replica.lag.time.max.ms = 10000
kafka_1 | replica.socket.receive.buffer.bytes = 65536
kafka_1 | replica.socket.timeout.ms = 30000
kafka_1 | replication.quota.window.num = 11
kafka_1 | replication.quota.window.size.seconds = 1
kafka_1 | request.timeout.ms = 30000
kafka_1 | reserved.broker.max.id = 1000
kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
kafka_1 | sasl.jaas.config = null
kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1 | sasl.kerberos.service.name = null
kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1 | security.inter.broker.protocol = PLAINTEXT
kafka_1 | socket.receive.buffer.bytes = 102400
kafka_1 | socket.request.max.bytes = 104857600
kafka_1 | socket.send.buffer.bytes = 102400
kafka_1 | ssl.cipher.suites = []
kafka_1 | ssl.client.auth = none
kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1 | ssl.endpoint.identification.algorithm = null
kafka_1 | ssl.key.password = null
kafka_1 | ssl.keymanager.algorithm = SunX509
kafka_1 | ssl.keystore.location = null
kafka_1 | ssl.keystore.password = null
kafka_1 | ssl.keystore.type = JKS
kafka_1 | ssl.protocol = TLS
kafka_1 | ssl.provider = null
kafka_1 | ssl.secure.random.implementation = null
kafka_1 | ssl.trustmanager.algorithm = PKIX
kafka_1 | ssl.truststore.location = null
kafka_1 | ssl.truststore.password = null
kafka_1 | ssl.truststore.type = JKS
kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1 | transaction.max.timeout.ms = 900000
kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1 | transaction.state.log.load.buffer.size = 5242880
kafka_1 | transaction.state.log.min.isr = 1
kafka_1 | transaction.state.log.num.partitions = 50
kafka_1 | transaction.state.log.replication.factor = 1
kafka_1 | transaction.state.log.segment.bytes = 104857600
kafka_1 | transactional.id.expiration.ms = 604800000
kafka_1 | unclean.leader.election.enable = false
kafka_1 | zookeeper.connect = zookeeper:2181
kafka_1 | zookeeper.connection.timeout.ms = 6000
kafka_1 | zookeeper.max.in.flight.requests = 10
kafka_1 | zookeeper.session.timeout.ms = 6000
kafka_1 | zookeeper.set.acl = false
kafka_1 | zookeeper.sync.time.ms = 2000
kafka_1 | (kafka.server.KafkaConfig)
kafka_1 | [2018-05-14 16:35:59,291] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-05-14 16:35:59,304] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-05-14 16:35:59,306] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-05-14 16:35:59,480] INFO Loading logs. (kafka.log.LogManager)
kafka_1 | [2018-05-14 16:35:59,667] WARN [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-5/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-5/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,764] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,794] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,801] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 202 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,831] WARN [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-36/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-36/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,834] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,839] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,843] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,850] WARN [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:35:59,940] INFO [ProducerStateManager partition=__consumer_offsets-49] Writing producer snapshot at offset 880 (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:35:59,950] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,008] INFO [ProducerStateManager partition=__consumer_offsets-49] Writing producer snapshot at offset 880 (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:36:00,024] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 880 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,038] INFO [ProducerStateManager partition=__consumer_offsets-49] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000880.snapshot' (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:36:00,083] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 880 in 234 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,104] WARN [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-37/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-37/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,106] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,110] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,111] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,149] WARN [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-20/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-20/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,152] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,155] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,158] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,170] WARN [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-31/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-31/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,172] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,187] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,188] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 19 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,204] WARN [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-23/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-23/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,206] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,217] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,218] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,238] WARN [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-9/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-9/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,239] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,241] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,241] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,266] WARN [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-32/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-32/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,268] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,275] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,277] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,306] WARN [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-28/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-28/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,307] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,308] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,309] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,337] WARN [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-17/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-17/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,338] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,346] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,347] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 12 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,354] WARN [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-35/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-35/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,360] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,361] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,363] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,367] WARN [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,391] INFO [ProducerStateManager partition=__consumer_offsets-42] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:36:00,415] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,419] INFO [ProducerStateManager partition=__consumer_offsets-42] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:36:00,420] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,422] INFO [ProducerStateManager partition=__consumer_offsets-42] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
kafka_1 | [2018-05-14 16:36:00,423] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 57 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,427] WARN [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-34/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-34/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,428] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,429] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,430] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,438] WARN [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-21/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-21/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,445] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,446] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,453] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 16 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,464] WARN [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-3/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-3/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,465] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,476] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,477] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 14 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,488] WARN [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-27/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-27/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,493] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,495] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,503] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 16 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,518] WARN [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-19/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-19/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,519] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,521] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,525] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
kafka_1 | [2018-05-14 16:36:00,547] WARN [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-13/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-13/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  579. kafka_1 | [2018-05-14 16:36:00,548] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  580. kafka_1 | [2018-05-14 16:36:00,550] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  581. kafka_1 | [2018-05-14 16:36:00,551] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 23 ms (kafka.log.Log)
  582. kafka_1 | [2018-05-14 16:36:00,554] WARN [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-1/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-1/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  583. kafka_1 | [2018-05-14 16:36:00,556] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  584. kafka_1 | [2018-05-14 16:36:00,558] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  585. kafka_1 | [2018-05-14 16:36:00,558] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
  586. kafka_1 | [2018-05-14 16:36:00,568] WARN [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-26/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-26/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  587. kafka_1 | [2018-05-14 16:36:00,569] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  588. kafka_1 | [2018-05-14 16:36:00,578] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  589. kafka_1 | [2018-05-14 16:36:00,578] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
  590. kafka_1 | [2018-05-14 16:36:00,588] WARN [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-41/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-41/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  591. kafka_1 | [2018-05-14 16:36:00,590] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  592. kafka_1 | [2018-05-14 16:36:00,591] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  593. kafka_1 | [2018-05-14 16:36:00,592] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
  594. kafka_1 | [2018-05-14 16:36:00,595] WARN [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-2/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-2/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  595. kafka_1 | [2018-05-14 16:36:00,597] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  596. kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:00+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
  597. kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:00+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
  598. kafka_1 | [2018-05-14 16:36:00,599] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  599. kafka_1 | [2018-05-14 16:36:00,604] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
  600. kafka_1 | [2018-05-14 16:36:00,608] WARN [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-46/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-46/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  601. kafka_1 | [2018-05-14 16:36:00,609] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  602. kafka_1 | [2018-05-14 16:36:00,610] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  603. kafka_1 | [2018-05-14 16:36:00,612] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  604. kafka_1 | [2018-05-14 16:36:00,629] WARN [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-15/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-15/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  605. kafka_1 | [2018-05-14 16:36:00,630] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  606. kafka_1 | [2018-05-14 16:36:00,632] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  607. kafka_1 | [2018-05-14 16:36:00,633] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
  608. kafka_1 | [2018-05-14 16:36:00,637] WARN [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-6/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-6/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  609. kafka_1 | [2018-05-14 16:36:00,638] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  610. kafka_1 | [2018-05-14 16:36:00,639] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  611. kafka_1 | [2018-05-14 16:36:00,640] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  612. kafka_1 | [2018-05-14 16:36:00,645] WARN [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-18/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-18/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  613. kafka_1 | [2018-05-14 16:36:00,646] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  614. kafka_1 | [2018-05-14 16:36:00,648] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  615. kafka_1 | [2018-05-14 16:36:00,648] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  616. kafka_1 | [2018-05-14 16:36:00,677] WARN [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/greeting-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/greeting-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  617. kafka_1 | [2018-05-14 16:36:00,678] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  618. kafka_1 | [2018-05-14 16:36:00,690] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  619. kafka_1 | [2018-05-14 16:36:00,691] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
  620. kafka_1 | [2018-05-14 16:36:00,696] WARN [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  621. kafka_1 | [2018-05-14 16:36:00,707] INFO [ProducerStateManager partition=aironman-0] Writing producer snapshot at offset 76 (kafka.log.ProducerStateManager)
  622. kafka_1 | [2018-05-14 16:36:00,708] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  623. kafka_1 | [2018-05-14 16:36:00,716] INFO [ProducerStateManager partition=aironman-0] Writing producer snapshot at offset 76 (kafka.log.ProducerStateManager)
  624. kafka_1 | [2018-05-14 16:36:00,718] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 76 with message format version 2 (kafka.log.Log)
  625. kafka_1 | [2018-05-14 16:36:00,719] INFO [ProducerStateManager partition=aironman-0] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000076.snapshot' (kafka.log.ProducerStateManager)
  626. kafka_1 | [2018-05-14 16:36:00,723] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 76 in 28 ms (kafka.log.Log)
  627. kafka_1 | [2018-05-14 16:36:00,729] WARN [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-38/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-38/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  628. kafka_1 | [2018-05-14 16:36:00,745] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  629. kafka_1 | [2018-05-14 16:36:00,747] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  630. kafka_1 | [2018-05-14 16:36:00,747] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 18 ms (kafka.log.Log)
  631. kafka_1 | [2018-05-14 16:36:00,750] WARN [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-39/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-39/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  632. kafka_1 | [2018-05-14 16:36:00,756] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  633. kafka_1 | [2018-05-14 16:36:00,775] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  634. kafka_1 | [2018-05-14 16:36:00,776] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 27 ms (kafka.log.Log)
  635. kafka_1 | [2018-05-14 16:36:00,786] WARN [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-12/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-12/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  636. kafka_1 | [2018-05-14 16:36:00,787] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  637. kafka_1 | [2018-05-14 16:36:00,788] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  638. kafka_1 | [2018-05-14 16:36:00,793] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
  639. kafka_1 | [2018-05-14 16:36:00,817] WARN [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-30/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-30/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  640. kafka_1 | [2018-05-14 16:36:00,818] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  641. kafka_1 | [2018-05-14 16:36:00,845] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  642. kafka_1 | [2018-05-14 16:36:00,855] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 60 ms (kafka.log.Log)
  643. kafka_1 | [2018-05-14 16:36:00,858] WARN [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-14/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-14/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  644. kafka_1 | [2018-05-14 16:36:00,862] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  645. kafka_1 | [2018-05-14 16:36:00,864] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  646. kafka_1 | [2018-05-14 16:36:00,865] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
  647. kafka_1 | [2018-05-14 16:36:00,875] WARN [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-43/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-43/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  648. kafka_1 | [2018-05-14 16:36:00,876] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  649. kafka_1 | [2018-05-14 16:36:00,879] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  650. kafka_1 | [2018-05-14 16:36:00,880] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
  651. kafka_1 | [2018-05-14 16:36:00,893] WARN [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/filtered-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/filtered-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  652. kafka_1 | [2018-05-14 16:36:00,906] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  653. kafka_1 | [2018-05-14 16:36:00,908] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  654. kafka_1 | [2018-05-14 16:36:00,910] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 27 ms (kafka.log.Log)
  655. kafka_1 | [2018-05-14 16:36:00,913] WARN [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-45/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-45/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  656. kafka_1 | [2018-05-14 16:36:00,915] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  657. kafka_1 | [2018-05-14 16:36:00,917] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  658. kafka_1 | [2018-05-14 16:36:00,918] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
  659. kafka_1 | [2018-05-14 16:36:00,922] WARN [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  660. kafka_1 | [2018-05-14 16:36:00,932] INFO [ProducerStateManager partition=__consumer_offsets-40] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  661. kafka_1 | [2018-05-14 16:36:00,933] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  662. kafka_1 | [2018-05-14 16:36:00,939] INFO [ProducerStateManager partition=__consumer_offsets-40] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  663. kafka_1 | [2018-05-14 16:36:00,945] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
  664. kafka_1 | [2018-05-14 16:36:00,956] INFO [ProducerStateManager partition=__consumer_offsets-40] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
  665. kafka_1 | [2018-05-14 16:36:00,956] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 35 ms (kafka.log.Log)
  666. kafka_1 | [2018-05-14 16:36:00,959] WARN [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-22/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-22/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  667. demo-kafka-elastic_1 | web - 2018-05-14 16:36:00,939 [main] INFO org.elasticsearch.plugins - [Neurotap] modules [], plugins [], sites []
  668. kafka_1 | [2018-05-14 16:36:00,995] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  669. kafka_1 | [2018-05-14 16:36:01,000] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  670. kafka_1 | [2018-05-14 16:36:01,009] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 50 ms (kafka.log.Log)
  671. kafka_1 | [2018-05-14 16:36:01,014] WARN [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-48/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-48/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  672. kafka_1 | [2018-05-14 16:36:01,017] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  673. kafka_1 | [2018-05-14 16:36:01,018] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  674. kafka_1 | [2018-05-14 16:36:01,020] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
  675. kafka_1 | [2018-05-14 16:36:01,034] WARN [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  676. kafka_1 | [2018-05-14 16:36:01,048] INFO [ProducerStateManager partition=__consumer_offsets-24] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  677. kafka_1 | [2018-05-14 16:36:01,050] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  678. kafka_1 | [2018-05-14 16:36:01,053] INFO [ProducerStateManager partition=__consumer_offsets-24] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  679. kafka_1 | [2018-05-14 16:36:01,054] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
  680. kafka_1 | [2018-05-14 16:36:01,056] INFO [ProducerStateManager partition=__consumer_offsets-24] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
  681. kafka_1 | [2018-05-14 16:36:01,057] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 24 ms (kafka.log.Log)
  682. kafka_1 | [2018-05-14 16:36:01,060] WARN [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-16/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-16/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  683. kafka_1 | [2018-05-14 16:36:01,061] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  684. kafka_1 | [2018-05-14 16:36:01,063] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  685. kafka_1 | [2018-05-14 16:36:01,065] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
  686. kafka_1 | [2018-05-14 16:36:01,068] WARN [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-8/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-8/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  687. kafka_1 | [2018-05-14 16:36:01,070] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  688. kafka_1 | [2018-05-14 16:36:01,071] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  689. kafka_1 | [2018-05-14 16:36:01,072] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  690. kafka_1 | [2018-05-14 16:36:01,080] WARN [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  691. kafka_1 | [2018-05-14 16:36:01,095] INFO [ProducerStateManager partition=__consumer_offsets-10] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  692. kafka_1 | [2018-05-14 16:36:01,096] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  693. kafka_1 | [2018-05-14 16:36:01,100] INFO [ProducerStateManager partition=__consumer_offsets-10] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
  694. kafka_1 | [2018-05-14 16:36:01,101] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
  695. kafka_1 | [2018-05-14 16:36:01,101] INFO [ProducerStateManager partition=__consumer_offsets-10] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
  696. kafka_1 | [2018-05-14 16:36:01,102] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 23 ms (kafka.log.Log)
  697. kafka_1 | [2018-05-14 16:36:01,106] WARN [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-47/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-47/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  698. kafka_1 | [2018-05-14 16:36:01,107] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  699. kafka_1 | [2018-05-14 16:36:01,108] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  700. kafka_1 | [2018-05-14 16:36:01,109] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
  701. kafka_1 | [2018-05-14 16:36:01,111] WARN [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-25/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-25/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  702. kafka_1 | [2018-05-14 16:36:01,112] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  703. kafka_1 | [2018-05-14 16:36:01,113] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  704. kafka_1 | [2018-05-14 16:36:01,114] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  705. kafka_1 | [2018-05-14 16:36:01,116] WARN [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-7/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-7/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  706. kafka_1 | [2018-05-14 16:36:01,117] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  707. kafka_1 | [2018-05-14 16:36:01,119] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  708. kafka_1 | [2018-05-14 16:36:01,119] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  709. kafka_1 | [2018-05-14 16:36:01,122] WARN [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-29/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-29/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  710. kafka_1 | [2018-05-14 16:36:01,124] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  711. kafka_1 | [2018-05-14 16:36:01,125] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  712. kafka_1 | [2018-05-14 16:36:01,125] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
  713. kafka_1 | [2018-05-14 16:36:01,129] WARN [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-4/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-4/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  714. kafka_1 | [2018-05-14 16:36:01,130] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  715. kafka_1 | [2018-05-14 16:36:01,131] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  716. kafka_1 | [2018-05-14 16:36:01,132] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  717. kafka_1 | [2018-05-14 16:36:01,134] WARN [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-44/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-44/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  718. kafka_1 | [2018-05-14 16:36:01,138] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  719. kafka_1 | [2018-05-14 16:36:01,139] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  720. kafka_1 | [2018-05-14 16:36:01,140] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
  721. kafka_1 | [2018-05-14 16:36:01,168] WARN [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-33/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-33/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  722. kafka_1 | [2018-05-14 16:36:01,170] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  723. kafka_1 | [2018-05-14 16:36:01,175] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  724. kafka_1 | [2018-05-14 16:36:01,186] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 39 ms (kafka.log.Log)
  725. kafka_1 | [2018-05-14 16:36:01,189] WARN [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  726. kafka_1 | [2018-05-14 16:36:01,190] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  727. kafka_1 | [2018-05-14 16:36:01,192] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  728. kafka_1 | [2018-05-14 16:36:01,192] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
  729. kafka_1 | [2018-05-14 16:36:01,196] WARN [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-11/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-11/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
  730. kafka_1 | [2018-05-14 16:36:01,197] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
  731. kafka_1 | [2018-05-14 16:36:01,205] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
  732. kafka_1 | [2018-05-14 16:36:01,215] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 21 ms (kafka.log.Log)
  733. kafka_1 | [2018-05-14 16:36:01,219] INFO Logs loading complete in 1739 ms. (kafka.log.LogManager)
  734. kafka_1 | [2018-05-14 16:36:01,288] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
  735. kafka_1 | [2018-05-14 16:36:01,291] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
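The run of "Found a corrupted index file" warnings above is typical after an unclean shutdown of the kafka container: on startup the broker detects stale index files, rebuilds them, and moves on, which is why each WARN is immediately followed by "Recovering unflushed segment" and a successful "Completed load of log". To see how many partitions were affected in a saved copy of this output, something like the helper below works (the function name and the log filename are illustrative, not part of the demo):

```shell
# count_rebuilt_partitions LOGFILE
# Counts the distinct __consumer_offsets partitions whose index Kafka
# rebuilt, i.e. those mentioned in "Found a corrupted index" WARN lines.
count_rebuilt_partitions() {
  grep 'Found a corrupted index' "$1" \
    | grep -o 'partition=__consumer_offsets-[0-9]*' \
    | sort -u \
    | wc -l
}

# Example (assumed filename for a saved copy of this compose output):
# count_rebuilt_partitions compose.log
```

Here every partition is recovered in a few milliseconds with no data loss reported, so the warnings are noise; stopping the stack with `docker-compose stop` instead of killing the containers usually avoids them on the next start.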
  736. elastic_1 | [2018-05-14 16:36:01,683][INFO ][node ] [Proteus] initialized
  737. elastic_1 | [2018-05-14 16:36:01,684][INFO ][node ] [Proteus] starting ...
  738. kafka_1 | [2018-05-14 16:36:01,908] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
  739. zookeeper_1 | 2018-05-14 16:36:02,000 [myid:] - INFO [SessionTracker:ZooKeeperServer@358] - Expiring session 0x1635f5dbd090000, timeout of 6000ms exceeded
  740. zookeeper_1 | 2018-05-14 16:36:02,001 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1635f5dbd090000
  741. kafka_1 | [2018-05-14 16:36:02,005] INFO [SocketServer brokerId=1] Started 1 acceptor threads (kafka.network.SocketServer)
  742. elastic_1 | [2018-05-14 16:36:02,044][INFO ][transport ] [Proteus] publish_address {172.21.0.5:9300}, bound_addresses {0.0.0.0:9300}
  743. elastic_1 | [2018-05-14 16:36:02,066][INFO ][discovery ] [Proteus] elasticsearch/S1NULhBpRDKOOewsabt2ZQ
  744. kafka_1 | [2018-05-14 16:36:02,125] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  745. kafka_1 | [2018-05-14 16:36:02,155] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  746. kafka_1 | [2018-05-14 16:36:02,156] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  747. kafka_1 | [2018-05-14 16:36:02,254] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
  748. kafka_1 | [2018-05-14 16:36:02,405] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
  749. kafka_1 | [2018-05-14 16:36:02,410] INFO Result of znode creation at /brokers/ids/1 is: OK (kafka.zk.KafkaZkClient)
  750. kafka_1 | [2018-05-14 16:36:02,412] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
  751. kafka_1 | [2018-05-14 16:36:02,534] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  752. kafka_1 | [2018-05-14 16:36:02,549] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  753. kafka_1 | [2018-05-14 16:36:02,550] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
  754. kafka_1 | [2018-05-14 16:36:02,580] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
  755. kafka_1 | [2018-05-14 16:36:02,581] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
  756. kafka_1 | [2018-05-14 16:36:02,577] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
  757. kafka_1 | [2018-05-14 16:36:02,589] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  758. kafka_1 | [2018-05-14 16:36:02,606] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
  759. kafka_1 | [2018-05-14 16:36:02,634] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:3000,blockEndProducerId:3999) by writing to Zk with path version 4 (kafka.coordinator.transaction.ProducerIdManager)
  760. demo-kafka-elastic_1 | web - 2018-05-14 16:36:02,692 [main] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
  761. kafka_1 | [2018-05-14 16:36:02,711] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
  762. kafka_1 | [2018-05-14 16:36:02,742] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
  763. kafka_1 | [2018-05-14 16:36:02,768] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
  764. kafka_1 | [2018-05-14 16:36:03,037] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
  765. kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:03+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
  766. kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:03+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
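The two kibana_1 warnings ("Unable to revive connection: http://elastic:9200/", "No living connections") only mean Elasticsearch was not yet accepting HTTP connections when Kibana first polled it; Compose starts containers in dependency order but does not wait for a service to be ready. Kibana retries on its own, but one way to suppress the startup noise is a healthcheck-gated dependency. This is a sketch, not the demo's actual file, and it assumes Compose file format 2.1+ (the `condition` form of `depends_on` is not honored in v3 swarm deployments) and `curl` inside the elastic image; service names are taken from the log:

```yaml
services:
  elastic:
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200 || exit 1"]
      interval: 10s
      retries: 12
  kibana:
    depends_on:
      elastic:
        condition: service_healthy
```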
  767. kafka_1 | [2018-05-14 16:36:03,192] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
  768. kafka_1 | [2018-05-14 16:36:03,192] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
  769. kafka_1 | [2018-05-14 16:36:03,197] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
  770. demo-kafka-elastic_1 | web - 2018-05-14 16:36:03,337 [main] ERROR o.s.d.e.r.s.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
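This NoNodeAvailableException follows from the earlier warning at 16:36:02,692: the Elasticsearch transport client verifies the cluster name, and since it is configured for `elasticsearch_aironman` while the node at 172.21.0.5:9300 reports a different cluster name (the image default is `elasticsearch`), the node is discarded and the client is left with no nodes. One fix sketch, assuming the Elasticsearch 2.x-style image this log suggests (`[node]`/`[transport]` log format) and the `elastic` service name from this compose run, is to make the container advertise the name the client expects:

```yaml
services:
  elastic:
    command: ["elasticsearch", "-Des.cluster.name=elasticsearch_aironman"]
```

The equivalent client-side fix would be pointing the demo-kafka-elastic application's configured cluster name at the container's actual one instead; either way the two names must agree.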
  771. kafka_1 | [2018-05-14 16:36:03,405] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,greeting-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,filtered-0,__consumer_offsets-15,__consumer_offsets-24,aironman-0,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
  772. zookeeper_1 | 2018-05-14 16:36:03,426 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:delete cxid:0x6e zxid:0xef txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
  773. kafka_1 | [2018-05-14 16:36:03,452] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
  774. kafka_1 | [2018-05-14 16:36:03,516] INFO [Partition __consumer_offsets-0 broker=1] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  775. kafka_1 | [2018-05-14 16:36:03,579] INFO Replica loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Replica)
  776. kafka_1 | [2018-05-14 16:36:03,579] INFO [Partition __consumer_offsets-29 broker=1] __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  777. demo-kafka-elastic_1 | web - 2018-05-14 16:36:03,583 [main] ERROR o.s.d.e.r.s.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
  778. kafka_1 | [2018-05-14 16:36:03,608] INFO Replica loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Replica)
  779. kafka_1 | [2018-05-14 16:36:03,609] INFO [Partition __consumer_offsets-48 broker=1] __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  780. kafka_1 | [2018-05-14 16:36:03,638] INFO Replica loaded for partition __consumer_offsets-10 with initial high watermark 440 (kafka.cluster.Replica)
  781. kafka_1 | [2018-05-14 16:36:03,642] INFO [Partition __consumer_offsets-10 broker=1] __consumer_offsets-10 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  782. kafka_1 | [2018-05-14 16:36:03,660] INFO Replica loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Replica)
  783. kafka_1 | [2018-05-14 16:36:03,666] INFO [Partition __consumer_offsets-45 broker=1] __consumer_offsets-45 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  784. kafka_1 | [2018-05-14 16:36:03,685] INFO Replica loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Replica)
  785. kafka_1 | [2018-05-14 16:36:03,688] INFO [Partition __consumer_offsets-26 broker=1] __consumer_offsets-26 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  786. kafka_1 | [2018-05-14 16:36:03,706] INFO Replica loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Replica)
  787. kafka_1 | [2018-05-14 16:36:03,712] INFO [Partition __consumer_offsets-7 broker=1] __consumer_offsets-7 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  788. kafka_1 | [2018-05-14 16:36:03,773] INFO Replica loaded for partition __consumer_offsets-42 with initial high watermark 440 (kafka.cluster.Replica)
  789. kafka_1 | [2018-05-14 16:36:03,786] INFO [Partition __consumer_offsets-42 broker=1] __consumer_offsets-42 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  790. demo-quartz_1 | web - 2018-05-14 16:36:03,812 [main] INFO o.a.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8080"]
  791. kafka_1 | [2018-05-14 16:36:03,820] INFO Replica loaded for partition greeting-0 with initial high watermark 0 (kafka.cluster.Replica)
  792. kafka_1 | [2018-05-14 16:36:03,830] INFO [Partition greeting-0 broker=1] greeting-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  793. kafka_1 | [2018-05-14 16:36:03,857] INFO Replica loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Replica)
  794. kafka_1 | [2018-05-14 16:36:03,860] INFO [Partition __consumer_offsets-4 broker=1] __consumer_offsets-4 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  795. demo-quartz_1 | web - 2018-05-14 16:36:03,867 [main] INFO o.a.catalina.core.StandardService - Starting service [Tomcat]
  796. demo-quartz_1 | web - 2018-05-14 16:36:03,867 [main] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/8.5.29
  797. kafka_1 | [2018-05-14 16:36:03,870] INFO Replica loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Replica)
  798. kafka_1 | [2018-05-14 16:36:03,871] INFO [Partition __consumer_offsets-23 broker=1] __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  799. kafka_1 | [2018-05-14 16:36:03,876] INFO Replica loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Replica)
  800. kafka_1 | [2018-05-14 16:36:03,877] INFO [Partition __consumer_offsets-1 broker=1] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  801. kafka_1 | [2018-05-14 16:36:03,887] INFO Replica loaded for partition filtered-0 with initial high watermark 0 (kafka.cluster.Replica)
  802. kafka_1 | [2018-05-14 16:36:03,887] INFO [Partition filtered-0 broker=1] filtered-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  803. kafka_1 | [2018-05-14 16:36:03,896] INFO Replica loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Replica)
  804. kafka_1 | [2018-05-14 16:36:03,897] INFO [Partition __consumer_offsets-20 broker=1] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  805. kafka_1 | [2018-05-14 16:36:03,909] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
  806. kafka_1 | [2018-05-14 16:36:03,909] INFO [Partition __consumer_offsets-39 broker=1] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  807. kafka_1 | [2018-05-14 16:36:03,923] INFO Replica loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Replica)
  808. kafka_1 | [2018-05-14 16:36:03,923] INFO [Partition __consumer_offsets-17 broker=1] __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  809. kafka_1 | [2018-05-14 16:36:03,938] INFO Replica loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Replica)
  810. kafka_1 | [2018-05-14 16:36:03,941] INFO [Partition __consumer_offsets-36 broker=1] __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  811. kafka_1 | [2018-05-14 16:36:03,969] INFO Replica loaded for partition aironman-0 with initial high watermark 76 (kafka.cluster.Replica)
  812. kafka_1 | [2018-05-14 16:36:03,970] INFO [Partition aironman-0 broker=1] aironman-0 starts at Leader Epoch 0 from offset 76. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  813. kafka_1 | [2018-05-14 16:36:04,008] INFO Replica loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Replica)
  814. kafka_1 | [2018-05-14 16:36:04,012] INFO [Partition __consumer_offsets-14 broker=1] __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  815. kafka_1 | [2018-05-14 16:36:04,016] INFO Replica loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Replica)
  816. kafka_1 | [2018-05-14 16:36:04,017] INFO [Partition __consumer_offsets-33 broker=1] __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  817. kafka_1 | [2018-05-14 16:36:04,026] INFO Replica loaded for partition __consumer_offsets-49 with initial high watermark 880 (kafka.cluster.Replica)
  818. kafka_1 | [2018-05-14 16:36:04,026] INFO [Partition __consumer_offsets-49 broker=1] __consumer_offsets-49 starts at Leader Epoch 0 from offset 880. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  819. kafka_1 | [2018-05-14 16:36:04,034] INFO Replica loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Replica)
  820. kafka_1 | [2018-05-14 16:36:04,038] INFO [Partition __consumer_offsets-11 broker=1] __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  821. kafka_1 | [2018-05-14 16:36:04,044] INFO Replica loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Replica)
  822. kafka_1 | [2018-05-14 16:36:04,044] INFO [Partition __consumer_offsets-30 broker=1] __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  823. kafka_1 | [2018-05-14 16:36:04,049] INFO Replica loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Replica)
  824. kafka_1 | [2018-05-14 16:36:04,049] INFO [Partition __consumer_offsets-46 broker=1] __consumer_offsets-46 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  825. kafka_1 | [2018-05-14 16:36:04,055] INFO Replica loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Replica)
  826. kafka_1 | [2018-05-14 16:36:04,063] INFO [Partition __consumer_offsets-27 broker=1] __consumer_offsets-27 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  827. kafka_1 | [2018-05-14 16:36:04,068] INFO Replica loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Replica)
  828. kafka_1 | [2018-05-14 16:36:04,068] INFO [Partition __consumer_offsets-8 broker=1] __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  829. kafka_1 | [2018-05-14 16:36:04,074] INFO Replica loaded for partition __consumer_offsets-24 with initial high watermark 440 (kafka.cluster.Replica)
  830. kafka_1 | [2018-05-14 16:36:04,074] INFO [Partition __consumer_offsets-24 broker=1] __consumer_offsets-24 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  831. kafka_1 | [2018-05-14 16:36:04,077] INFO Replica loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Replica)
  832. kafka_1 | [2018-05-14 16:36:04,078] INFO [Partition __consumer_offsets-43 broker=1] __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
  833. kafka_1 | [2018-05-14 16:36:04,086] INFO Replica loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,087] INFO [Partition __consumer_offsets-5 broker=1] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,093] INFO Replica loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,093] INFO [Partition __consumer_offsets-21 broker=1] __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,104] INFO Replica loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,105] INFO [Partition __consumer_offsets-2 broker=1] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,108] INFO Replica loaded for partition __consumer_offsets-40 with initial high watermark 440 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,109] INFO [Partition __consumer_offsets-40 broker=1] __consumer_offsets-40 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,118] INFO Replica loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,118] INFO [Partition __consumer_offsets-37 broker=1] __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,125] INFO Replica loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,125] INFO [Partition __consumer_offsets-18 broker=1] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,132] INFO Replica loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,132] INFO [Partition __consumer_offsets-34 broker=1] __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,146] INFO Replica loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,147] INFO [Partition __consumer_offsets-15 broker=1] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,159] INFO Replica loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,163] INFO [Partition __consumer_offsets-12 broker=1] __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,169] INFO Replica loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,170] INFO [Partition __consumer_offsets-31 broker=1] __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,179] INFO Replica loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,179] INFO [Partition __consumer_offsets-9 broker=1] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,199] INFO Replica loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,200] INFO [Partition __consumer_offsets-47 broker=1] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,204] INFO Replica loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,204] INFO [Partition __consumer_offsets-19 broker=1] __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,218] INFO Replica loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,219] INFO [Partition __consumer_offsets-28 broker=1] __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,222] INFO Replica loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,224] INFO [Partition __consumer_offsets-38 broker=1] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,236] INFO Replica loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,236] INFO [Partition __consumer_offsets-35 broker=1] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
demo-quartz_1 | web - 2018-05-14 16:36:04,225 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
kafka_1 | [2018-05-14 16:36:04,242] INFO Replica loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,243] INFO [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,250] INFO Replica loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,251] INFO [Partition __consumer_offsets-44 broker=1] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,254] INFO Replica loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,254] INFO [Partition __consumer_offsets-25 broker=1] __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,259] INFO Replica loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,260] INFO [Partition __consumer_offsets-16 broker=1] __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,272] INFO Replica loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,272] INFO [Partition __consumer_offsets-22 broker=1] __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,276] INFO Replica loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,276] INFO [Partition __consumer_offsets-41 broker=1] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,279] INFO Replica loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,279] INFO [Partition __consumer_offsets-32 broker=1] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,282] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,283] INFO [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,286] INFO Replica loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Replica)
kafka_1 | [2018-05-14 16:36:04,287] INFO [Partition __consumer_offsets-13 broker=1] __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka_1 | [2018-05-14 16:36:04,314] INFO [ReplicaAlterLogDirsManager on broker 1] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
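The repetitive "Replica loaded" lines above say the same thing for almost every `__consumer_offsets` partition: it starts empty (high watermark 0). The one exception is partition 40, which already holds 440 committed records. When skimming a long `docker-compose` log like this, a throwaway parser helps surface the non-empty partitions; the snippet below is my own illustration, not part of the demo:

```python
import re

# Pattern for the "Replica loaded" lines printed by kafka.cluster.Replica above.
PATTERN = re.compile(
    r"Replica loaded for partition (\S+) with initial high watermark (\d+)"
)

def parse_replica_line(line: str):
    """Return (partition, high_watermark) for a matching log line, else None."""
    m = PATTERN.search(line)
    return (m.group(1), int(m.group(2))) if m else None

sample = (
    "kafka_1 | [2018-05-14 16:36:04,108] INFO Replica loaded for partition "
    "__consumer_offsets-40 with initial high watermark 440 (kafka.cluster.Replica)"
)
print(parse_replica_line(sample))  # ('__consumer_offsets-40', 440)
```

Piping the full compose output through such a filter (keeping only watermark > 0) reduces this whole block to the single interesting line about partition 40.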
kafka_1 | [2018-05-14 16:36:04,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,354] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,354] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,403] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 53 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,493] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 11 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,506] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,507] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,508] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,639] INFO [GroupCoordinator 1]: Loading group metadata for filter with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:04,666] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 158 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,667] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,667] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,712 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = foo
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
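The ConsumerConfig dump above is the effective configuration that kafka-clients prints at startup for consumer group `foo`. Several of the timing values constrain each other; the sketch below copies the values from the dump, while the checks themselves are my own illustration of the usual invariants for clients of this vintage, not something the demo performs:

```python
# Timing values copied from the ConsumerConfig dump above.
cfg = {
    "heartbeat.interval.ms": 3000,
    "session.timeout.ms": 10000,
    "max.poll.interval.ms": 300000,
    "request.timeout.ms": 305000,
}

# Heartbeats must fire well within the session timeout (the usual guidance is
# no more than one third of it), or the broker evicts the consumer from "foo".
assert cfg["heartbeat.interval.ms"] * 3 <= cfg["session.timeout.ms"]

# In kafka-clients of this era, request.timeout.ms was required to exceed
# max.poll.interval.ms, so that a stalled poll loop is detected by the group
# coordinator rather than surfacing as a network timeout.
assert cfg["request.timeout.ms"] > cfg["max.poll.interval.ms"]

print("consumer timing invariants hold")
```

With `enable.auto.commit = true` and `auto.commit.interval.ms = 5000`, this consumer commits its offsets to `__consumer_offsets` every five seconds, which is what the broker later reloads in the "Finished loading offsets" lines.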
kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupCoordinator 1]: Loading group metadata for greeting with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupCoordinator 1]: Loading group metadata for bar with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 59 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,739 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-1
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = foo
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupCoordinator 1]: Loading group metadata for headers with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 19 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,753] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,753] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,754] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,757] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 3 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,758] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,758] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,759] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,759] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,760] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,760] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,761] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,763] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,764] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,765] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-05-14 16:36:04,784] INFO [GroupCoordinator 1]: Loading group metadata for foo with generation 4 (kafka.coordinator.group.GroupCoordinator)
  1089. kafka_1 | [2018-05-14 16:36:04,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 20 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1090. kafka_1 | [2018-05-14 16:36:04,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1091. kafka_1 | [2018-05-14 16:36:04,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1092. kafka_1 | [2018-05-14 16:36:04,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1093. kafka_1 | [2018-05-14 16:36:04,789] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1094. kafka_1 | [2018-05-14 16:36:04,799] INFO [GroupCoordinator 1]: Loading group metadata for bitcoin with generation 4 (kafka.coordinator.group.GroupCoordinator)
  1095. kafka_1 | [2018-05-14 16:36:04,799] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1096. kafka_1 | [2018-05-14 16:36:04,800] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1097. kafka_1 | [2018-05-14 16:36:04,800] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  1098. kafka_1 | [2018-05-14 16:36:04,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,863 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,863 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,884 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [localhost:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id =
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,902 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [localhost:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-2
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id =
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,909 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,910 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,949 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = filter
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,950 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-3
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = filter
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,977 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,977 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,978 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = greeting
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,980 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-4
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = greeting
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,044 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,044 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,048 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = bitcoin
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,061 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-5
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = bitcoin
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
demo-kafka-elastic_1 |
  1539. demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,069 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
  1540. demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,069 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
  1541. demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,070 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
  1542. demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
  1543. demo-kafka-elastic_1 | auto.offset.reset = latest
  1544. demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
  1545. demo-kafka-elastic_1 | check.crcs = true
  1546. demo-kafka-elastic_1 | client.id =
  1547. demo-kafka-elastic_1 | connections.max.idle.ms = 540000
  1548. demo-kafka-elastic_1 | enable.auto.commit = true
  1549. demo-kafka-elastic_1 | exclude.internal.topics = true
  1550. demo-kafka-elastic_1 | fetch.max.bytes = 52428800
  1551. demo-kafka-elastic_1 | fetch.max.wait.ms = 500
  1552. demo-kafka-elastic_1 | fetch.min.bytes = 1
  1553. demo-kafka-elastic_1 | group.id = bar
  1554. demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
  1555. demo-kafka-elastic_1 | interceptor.classes = null
  1556. demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
  1557. demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
  1558. demo-kafka-elastic_1 | max.poll.interval.ms = 300000
  1559. demo-kafka-elastic_1 | max.poll.records = 500
  1560. demo-kafka-elastic_1 | metadata.max.age.ms = 300000
  1561. demo-kafka-elastic_1 | metric.reporters = []
  1562. demo-kafka-elastic_1 | metrics.num.samples = 2
  1563. demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
  1564. demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
  1565. demo-kafka-elastic_1 | receive.buffer.bytes = 65536
  1566. demo-kafka-elastic_1 | reconnect.backoff.ms = 50
  1567. demo-kafka-elastic_1 | request.timeout.ms = 305000
  1568. demo-kafka-elastic_1 | retry.backoff.ms = 100
  1569. demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
  1570. demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
  1571. demo-kafka-elastic_1 | sasl.kerberos.service.name = null
  1572. demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
  1573. demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
  1574. demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
  1575. demo-kafka-elastic_1 | security.protocol = PLAINTEXT
  1576. demo-kafka-elastic_1 | send.buffer.bytes = 131072
  1577. demo-kafka-elastic_1 | session.timeout.ms = 10000
  1578. demo-kafka-elastic_1 | ssl.cipher.suites = null
  1579. demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
  1580. demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
  1581. demo-kafka-elastic_1 | ssl.key.password = null
  1582. demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
  1583. demo-kafka-elastic_1 | ssl.keystore.location = null
  1584. demo-kafka-elastic_1 | ssl.keystore.password = null
  1585. demo-kafka-elastic_1 | ssl.keystore.type = JKS
  1586. demo-kafka-elastic_1 | ssl.protocol = TLS
  1587. demo-kafka-elastic_1 | ssl.provider = null
  1588. demo-kafka-elastic_1 | ssl.secure.random.implementation = null
  1589. demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
  1590. demo-kafka-elastic_1 | ssl.truststore.location = null
  1591. demo-kafka-elastic_1 | ssl.truststore.password = null
  1592. demo-kafka-elastic_1 | ssl.truststore.type = JKS
  1593. demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
  1594. demo-kafka-elastic_1 |
  1595. demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,070 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
  1596. demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
  1597. demo-kafka-elastic_1 | auto.offset.reset = latest
  1598. demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
  1599. demo-kafka-elastic_1 | check.crcs = true
  1600. demo-kafka-elastic_1 | client.id = consumer-6
  1601. demo-kafka-elastic_1 | connections.max.idle.ms = 540000
  1602. demo-kafka-elastic_1 | enable.auto.commit = true
  1603. demo-kafka-elastic_1 | exclude.internal.topics = true
  1604. demo-kafka-elastic_1 | fetch.max.bytes = 52428800
  1605. demo-kafka-elastic_1 | fetch.max.wait.ms = 500
  1606. demo-kafka-elastic_1 | fetch.min.bytes = 1
  1607. demo-kafka-elastic_1 | group.id = bar
  1608. demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
  1609. demo-kafka-elastic_1 | interceptor.classes = null
  1610. demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
  1611. demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
  1612. demo-kafka-elastic_1 | max.poll.interval.ms = 300000
  1613. demo-kafka-elastic_1 | max.poll.records = 500
  1614. demo-kafka-elastic_1 | metadata.max.age.ms = 300000
  1615. demo-kafka-elastic_1 | metric.reporters = []
  1616. demo-kafka-elastic_1 | metrics.num.samples = 2
  1617. demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
  1618. demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
  1619. demo-kafka-elastic_1 | receive.buffer.bytes = 65536
  1620. demo-kafka-elastic_1 | reconnect.backoff.ms = 50
  1621. demo-kafka-elastic_1 | request.timeout.ms = 305000
  1622. demo-kafka-elastic_1 | retry.backoff.ms = 100
  1623. demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
  1624. demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
  1625. demo-kafka-elastic_1 | sasl.kerberos.service.name = null
  1626. demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
  1627. demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
  1628. demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
  1629. demo-kafka-elastic_1 | security.protocol = PLAINTEXT
  1630. demo-kafka-elastic_1 | send.buffer.bytes = 131072
  1631. demo-kafka-elastic_1 | session.timeout.ms = 10000
  1632. demo-kafka-elastic_1 | ssl.cipher.suites = null
  1633. demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
  1634. demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
  1635. demo-kafka-elastic_1 | ssl.key.password = null
  1636. demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
  1637. demo-kafka-elastic_1 | ssl.keystore.location = null
  1638. demo-kafka-elastic_1 | ssl.keystore.password = null
  1639. demo-kafka-elastic_1 | ssl.keystore.type = JKS
  1640. demo-kafka-elastic_1 | ssl.protocol = TLS
  1641. demo-kafka-elastic_1 | ssl.provider = null
  1642. demo-kafka-elastic_1 | ssl.secure.random.implementation = null
  1643. demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
  1644. demo-kafka-elastic_1 | ssl.truststore.location = null
  1645. demo-kafka-elastic_1 | ssl.truststore.password = null
  1646. demo-kafka-elastic_1 | ssl.truststore.type = JKS
  1647. demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
  1648. demo-kafka-elastic_1 |
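With kafka-clients 0.10.x, each KafkaConsumer logs its ConsumerConfig twice: first as parsed from the user's properties (client.id still empty), then again after a default client.id such as consumer-6 has been generated. Comparing the two dumps for group bar above suggests nothing else changes between them; a minimal sketch of that comparison (the dicts are a hand-copied subset of the log lines, not parsed from the log):

```python
# Subset of the two ConsumerConfig dumps for group "bar" (hand-copied from the log above).
first_dump = {
    "bootstrap.servers": "[kafka:9092]",
    "client.id": "",
    "group.id": "bar",
    "value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
}
second_dump = {
    "bootstrap.servers": "[kafka:9092]",
    "client.id": "consumer-6",
    "group.id": "bar",
    "value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
}

# Keys whose values changed between the first and second dump.
changed = {k: (first_dump[k], second_dump[k])
           for k in first_dump if first_dump[k] != second_dump[k]}
print(changed)  # -> {'client.id': ('', 'consumer-6')}
```

So the duplicated dumps are harmless noise, not two different consumers being created.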
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,077 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,078 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,080 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id =
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = headers
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,081 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
demo-kafka-elastic_1 | auto.offset.reset = latest
demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
demo-kafka-elastic_1 | check.crcs = true
demo-kafka-elastic_1 | client.id = consumer-7
demo-kafka-elastic_1 | connections.max.idle.ms = 540000
demo-kafka-elastic_1 | enable.auto.commit = true
demo-kafka-elastic_1 | exclude.internal.topics = true
demo-kafka-elastic_1 | fetch.max.bytes = 52428800
demo-kafka-elastic_1 | fetch.max.wait.ms = 500
demo-kafka-elastic_1 | fetch.min.bytes = 1
demo-kafka-elastic_1 | group.id = headers
demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
demo-kafka-elastic_1 | interceptor.classes = null
demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
demo-kafka-elastic_1 | max.poll.interval.ms = 300000
demo-kafka-elastic_1 | max.poll.records = 500
demo-kafka-elastic_1 | metadata.max.age.ms = 300000
demo-kafka-elastic_1 | metric.reporters = []
demo-kafka-elastic_1 | metrics.num.samples = 2
demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
demo-kafka-elastic_1 | receive.buffer.bytes = 65536
demo-kafka-elastic_1 | reconnect.backoff.ms = 50
demo-kafka-elastic_1 | request.timeout.ms = 305000
demo-kafka-elastic_1 | retry.backoff.ms = 100
demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-kafka-elastic_1 | sasl.kerberos.service.name = null
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
demo-kafka-elastic_1 | security.protocol = PLAINTEXT
demo-kafka-elastic_1 | send.buffer.bytes = 131072
demo-kafka-elastic_1 | session.timeout.ms = 10000
demo-kafka-elastic_1 | ssl.cipher.suites = null
demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
demo-kafka-elastic_1 | ssl.key.password = null
demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
demo-kafka-elastic_1 | ssl.keystore.location = null
demo-kafka-elastic_1 | ssl.keystore.password = null
demo-kafka-elastic_1 | ssl.keystore.type = JKS
demo-kafka-elastic_1 | ssl.protocol = TLS
demo-kafka-elastic_1 | ssl.provider = null
demo-kafka-elastic_1 | ssl.secure.random.implementation = null
demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
demo-kafka-elastic_1 | ssl.truststore.location = null
demo-kafka-elastic_1 | ssl.truststore.password = null
demo-kafka-elastic_1 | ssl.truststore.type = JKS
demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
demo-kafka-elastic_1 |
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,087 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,087 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,114 [main] INFO c.a.demo.DemoKafkaElasticApplication - Started DemoKafkaElasticApplication in 8.854 seconds (JVM running for 10.934)
elastic_1 | [2018-05-14 16:36:05,189][INFO ][cluster.service ] [Proteus] new_master {Proteus}{S1NULhBpRDKOOewsabt2ZQ}{172.21.0.5}{172.21.0.5:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,276 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group foo.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group filter.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group headers.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,274 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group bitcoin.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group greeting.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,300 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group headers
elastic_1 | [2018-05-14 16:36:05,310][INFO ][http ] [Proteus] publish_address {172.21.0.5:9200}, bound_addresses {0.0.0.0:9200}
elastic_1 | [2018-05-14 16:36:05,310][INFO ][node ] [Proteus] started
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,322 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group headers
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,323 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group greeting
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,323 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group greeting
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,272 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group bar.
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,295 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group foo
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,324 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group foo
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,305 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group filter
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,325 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group bar
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,325 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group filter
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group bitcoin
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group bitcoin
demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group bar
kafka_1 | [2018-05-14 16:36:05,356] INFO [GroupCoordinator 1]: Preparing to rebalance group foo with old generation 4 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:05,356] INFO [GroupCoordinator 1]: Preparing to rebalance group headers with old generation 4 (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:05,357] INFO [GroupCoordinator 1]: Preparing to rebalance group filter with old generation 4 (__consumer_offsets-40) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:05,358] INFO [GroupCoordinator 1]: Preparing to rebalance group bar with old generation 4 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:05,359] INFO [GroupCoordinator 1]: Preparing to rebalance group greeting with old generation 4 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-05-14 16:36:05,360] INFO [GroupCoordinator 1]: Preparing to rebalance group bitcoin with old generation 4 (__consumer_offsets-42) (kafka.coordinator.group.GroupCoordinator)
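The GroupCoordinator lines above pin each consumer group to a specific __consumer_offsets partition (foo → 24, headers → 10, filter → 40, bar → 49, greeting → 49, bitcoin → 42). That mapping is deterministic: Kafka's GroupMetadataManager computes Utils.abs(groupId.hashCode) % offsets.topic.num.partitions, with 50 partitions by default. A sketch reproducing it in Python, emulating Java's 32-bit overflowing String.hashCode:

```python
def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode with 32-bit signed overflow semantics."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF  # wrap like a Java int
    return h - 0x1_0000_0000 if h >= 0x8000_0000 else h  # reinterpret as signed

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka's Utils.abs maps Integer.MIN_VALUE to 0 and otherwise uses Math.abs.
    h = java_string_hashcode(group_id)
    return (0 if h == -0x8000_0000 else abs(h)) % num_partitions

for group in ["foo", "headers", "filter", "bar", "greeting", "bitcoin"]:
    print(group, offsets_partition(group))
# -> foo 24, headers 10, filter 40, bar 49, greeting 49, bitcoin 42
```

The computed partitions match the log line by line, which also explains why bar and greeting share partition 49: their hash codes happen to collide modulo 50.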
elastic_1 | [2018-05-14 16:36:05,435][INFO ][gateway ] [Proteus] recovered [2] indices into cluster_state
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:06+00:00","tags":["status","plugin:elasticsearch","error"],"pid":12,"name":"plugin:elasticsearch","state":"red","message":"Status changed from red to red - Elasticsearch is still initializing the kibana index.","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elastic:9200."}
elastic_1 | [2018-05-14 16:36:06,328][INFO ][cluster.routing.allocation] [Proteus] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
demo-quartz_1 | web - 2018-05-14 16:36:06,394 [main] INFO o.h.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [
demo-quartz_1 | name: default
demo-quartz_1 | ...]
demo-quartz_1 | web - 2018-05-14 16:36:06,572 [main] INFO org.hibernate.Version - HHH000412: Hibernate Core {5.0.12.Final}
demo-quartz_1 | web - 2018-05-14 16:36:06,576 [main] INFO org.hibernate.cfg.Environment - HHH000206: hibernate.properties not found
demo-quartz_1 | web - 2018-05-14 16:36:06,580 [main] INFO org.hibernate.cfg.Environment - HHH000021: Bytecode provider name : javassist
demo-quartz_1 | web - 2018-05-14 16:36:06,699 [main] INFO o.h.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
demo-quartz_1 | web - 2018-05-14 16:36:07,218 [main] INFO org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
demo-kafka-elastic_1 | web - 2018-05-14 16:36:07,346 [elasticsearch[Neurotap][generic][T#1]] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
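The WARN from the Elasticsearch TransportClient above is the one real problem in this run: the client is configured for a cluster named elasticsearch_aironman, but the elastic container (node Proteus) is running under a different cluster.name, so every sniffed node is rejected and the client never joins. The fix is to make the two names agree. With Spring Boot-era Spring Data Elasticsearch properties the client side would look roughly like this (a sketch; the property values are illustrative assumptions, not taken from the project):

```yaml
# demo-kafka-elastic application.yml (sketch; cluster-name must match the
# cluster.name the elastic container actually starts with)
spring:
  data:
    elasticsearch:
      cluster-name: elasticsearch   # the Elasticsearch image's default cluster.name
      cluster-nodes: elastic:9300   # transport port, matching the log's 172.21.0.5:9300
```

Alternatively, the compose file could pass `cluster.name=elasticsearch_aironman` to the elastic container so it matches the client.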
demo-quartz_1 | web - 2018-05-14 16:36:08,035 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000227: Running hbm2ddl schema export
demo-quartz_1 | web - 2018-05-14 16:36:08,059 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000230: Schema export complete
demo-quartz_1 | web - 2018-05-14 16:36:08,158 [main] INFO c.a.d.s.SpringQrtzScheduler$$EnhancerBySpringCGLIB$$65e94d39 - Hello world from Spring...
kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:08+00:00","tags":["status","plugin:elasticsearch","info"],"pid":12,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Elasticsearch is still initializing the kibana index."}
demo-quartz_1 | web - 2018-05-14 16:36:08,590 [main] INFO c.a.d.s.SpringQrtzScheduler$$EnhancerBySpringCGLIB$$65e94d39 - Configuring trigger to fire every 10 seconds
demo-quartz_1 | web - 2018-05-14 16:36:08,658 [main] INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
demo-quartz_1 | web - 2018-05-14 16:36:08,664 [main] INFO org.quartz.simpl.SimpleThreadPool - Job execution threads will use class loader of thread: main
demo-quartz_1 | web - 2018-05-14 16:36:08,682 [main] INFO o.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
demo-quartz_1 | web - 2018-05-14 16:36:08,682 [main] INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.2.3 created.
demo-quartz_1 | web - 2018-05-14 16:36:08,684 [main] INFO org.quartz.simpl.RAMJobStore - RAMJobStore initialized.
demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.2.3) 'scheduler' with instanceId 'NON_CLUSTERED'
demo-quartz_1 | Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
demo-quartz_1 | NOT STARTED.
demo-quartz_1 | Currently in standby mode.
demo-quartz_1 | Number of jobs executed: 0
demo-quartz_1 | Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 2 threads.
demo-quartz_1 | Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
demo-quartz_1 |
demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'scheduler' initialized from an externally provided properties instance.
demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.2.3
demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.core.QuartzScheduler - JobFactory set to: com.aironman.demoquartz.config.AutoWiringSpringBeanJobFactory@12f9af83
demo-quartz_1 | web - 2018-05-14 16:36:10,195 [main] INFO org.quartz.core.QuartzScheduler - Scheduler scheduler_$_NON_CLUSTERED started.
demo-quartz_1 | web - 2018-05-14 16:36:10,212 [scheduler_Worker-1] INFO c.a.demoquartz.scheduler.SampleJob - Job ** Qrtz_Job_Detail ** fired @ Mon May 14 16:36:10 UTC 2018
demo-quartz_1 | web - 2018-05-14 16:36:10,212 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - The sample job has begun...
demo-quartz_1 | web - 2018-05-14 16:36:10,244 [main] INFO o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8080"]
demo-quartz_1 | web - 2018-05-14 16:36:10,288 [main] INFO o.a.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
demo-quartz_1 | web - 2018-05-14 16:36:10,364 [main] INFO c.a.demoquartz.DemoQuartzApplication - Started DemoQuartzApplication in 14.59 seconds (JVM running for 17.334)
demo-quartz_1 | web - 2018-05-14 16:36:11,578 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - com.aironman.demoquartz.pojo.BitcoinEuro@480d1ac2[id=bitcoin,
demo-quartz_1 | ,name=Bitcoin,
demo-quartz_1 | ,symbol=BTC,
demo-quartz_1 | ,rank=1,
demo-quartz_1 | ,priceUsd=8825.43,
demo-quartz_1 | ,priceBtc=1.0,_24hVolumeUsd=7445850000.0,
demo-quartz_1 | ,marketCapUsd=150324211097,
demo-quartz_1 | ,availableSupply=17033075.0,
demo-quartz_1 | ,totalSupply=17033075.0,
demo-quartz_1 | ,maxSupply=21000000.0,
demo-quartz_1 | ,percentChange1h=0.57,
demo-quartz_1 | ,percentChange24h=1.79,
demo-quartz_1 | ,percentChange7d=-5.81,
demo-quartz_1 | ,lastUpdated=1526315671,
demo-quartz_1 | ,priceEur=7365.66857628,
demo-quartz_1 | ,_24hVolumeEur=6214276626.6,
demo-quartz_1 | ,marketCapEur=125459985285,
demo-quartz_1 | ,additionalProperties={}]
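The fetched BitcoinEuro payload above is internally consistent: in both currencies, market cap equals price times available supply. A quick sanity check with the exact figures from the log:

```python
# Figures copied from the BitcoinEuro dump in the log above.
price_usd, price_eur = 8825.43, 7365.66857628
available_supply = 17033075.0
market_cap_usd, market_cap_eur = 150324211097, 125459985285

for price, cap in [(price_usd, market_cap_usd), (price_eur, market_cap_eur)]:
    # Allow a tiny relative error for the rounding in the published prices.
    assert abs(price * available_supply - cap) / cap < 1e-6
print("market caps match price * available supply")
```

Both products land within rounding distance of the reported caps, so the scraped values were not mangled on the way through Kafka.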
demo-quartz_1 | web - 2018-05-14 16:36:11,728 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - created entity...
demo-quartz_1 | web - 2018-05-14 16:36:11,729 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - BitcoinEuroEntity [idBCEntity=1, id=bitcoin, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285]
demo-quartz_1 | web - 2018-05-14 16:36:11,760 [scheduler_Worker-1] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values:
demo-quartz_1 | acks = 1
demo-quartz_1 | batch.size = 16384
demo-quartz_1 | block.on.buffer.full = false
demo-quartz_1 | bootstrap.servers = [kafka:9092]
demo-quartz_1 | buffer.memory = 33554432
demo-quartz_1 | client.id =
demo-quartz_1 | compression.type = none
demo-quartz_1 | connections.max.idle.ms = 540000
demo-quartz_1 | interceptor.classes = null
demo-quartz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
demo-quartz_1 | linger.ms = 0
demo-quartz_1 | max.block.ms = 60000
demo-quartz_1 | max.in.flight.requests.per.connection = 5
demo-quartz_1 | max.request.size = 1048576
demo-quartz_1 | metadata.fetch.timeout.ms = 60000
demo-quartz_1 | metadata.max.age.ms = 300000
demo-quartz_1 | metric.reporters = []
demo-quartz_1 | metrics.num.samples = 2
demo-quartz_1 | metrics.sample.window.ms = 30000
demo-quartz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
demo-quartz_1 | receive.buffer.bytes = 32768
demo-quartz_1 | reconnect.backoff.ms = 50
demo-quartz_1 | request.timeout.ms = 30000
demo-quartz_1 | retries = 0
demo-quartz_1 | retry.backoff.ms = 100
demo-quartz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
demo-quartz_1 | sasl.kerberos.min.time.before.relogin = 60000
demo-quartz_1 | sasl.kerberos.service.name = null
demo-quartz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
demo-quartz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
demo-quartz_1 | sasl.mechanism = GSSAPI
demo-quartz_1 | security.protocol = PLAINTEXT
demo-quartz_1 | send.buffer.bytes = 131072
demo-quartz_1 | ssl.cipher.suites = null
demo-quartz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
demo-quartz_1 | ssl.endpoint.identification.algorithm = null
demo-quartz_1 | ssl.key.password = null
demo-quartz_1 | ssl.keymanager.algorithm = SunX509
demo-quartz_1 | ssl.keystore.location = null
demo-quartz_1 | ssl.keystore.password = null
demo-quartz_1 | ssl.keystore.type = JKS
demo-quartz_1 | ssl.protocol = TLS
demo-quartz_1 | ssl.provider = null
demo-quartz_1 | ssl.secure.random.implementation = null
demo-quartz_1 | ssl.trustmanager.algorithm = PKIX
demo-quartz_1 | ssl.truststore.location = null
demo-quartz_1 | ssl.truststore.password = null
demo-quartz_1 | ssl.truststore.type = JKS
demo-quartz_1 | timeout.ms = 30000
demo-quartz_1 | value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
demo-quartz_1 |
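The ProducerConfig dump above shows demo-quartz publishing with a String key serializer, Spring Kafka's JsonSerializer for values, and acks = 1. In a Spring Boot application this typically comes from properties along these lines (a sketch under the assumption the project relies on spring.kafka.* auto-configuration; the actual source may build a ProducerFactory bean instead):

```yaml
# demo-quartz application.yml (sketch matching the logged ProducerConfig)
spring:
  kafka:
    bootstrap-servers: kafka:9092
    producer:
      acks: "1"
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
```

With retries = 0 and acks = 1, a broker hiccup during the 10-second job cycle would silently drop that tick's BitcoinEuro message; raising retries would be the usual hardening step.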
  1900. demo-quartz_1 | web - 2018-05-14 16:36:11,773 [scheduler_Worker-1] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values:
  1901. demo-quartz_1 | acks = 1
  1902. demo-quartz_1 | batch.size = 16384
  1903. demo-quartz_1 | block.on.buffer.full = false
  1904. demo-quartz_1 | bootstrap.servers = [kafka:9092]
  1905. demo-quartz_1 | buffer.memory = 33554432
  1906. demo-quartz_1 | client.id = producer-1
  1907. demo-quartz_1 | compression.type = none
  1908. demo-quartz_1 | connections.max.idle.ms = 540000
  1909. demo-quartz_1 | interceptor.classes = null
  1910. demo-quartz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
  1911. demo-quartz_1 | linger.ms = 0
  1912. demo-quartz_1 | max.block.ms = 60000
  1913. demo-quartz_1 | max.in.flight.requests.per.connection = 5
  1914. demo-quartz_1 | max.request.size = 1048576
  1915. demo-quartz_1 | metadata.fetch.timeout.ms = 60000
  1916. demo-quartz_1 | metadata.max.age.ms = 300000
  1917. demo-quartz_1 | metric.reporters = []
  1918. demo-quartz_1 | metrics.num.samples = 2
  1919. demo-quartz_1 | metrics.sample.window.ms = 30000
  1920. demo-quartz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
  1921. demo-quartz_1 | receive.buffer.bytes = 32768
  1922. demo-quartz_1 | reconnect.backoff.ms = 50
  1923. demo-quartz_1 | request.timeout.ms = 30000
  1924. demo-quartz_1 | retries = 0
  1925. demo-quartz_1 | retry.backoff.ms = 100
  1926. demo-quartz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
  1927. demo-quartz_1 | sasl.kerberos.min.time.before.relogin = 60000
  1928. demo-quartz_1 | sasl.kerberos.service.name = null
  1929. demo-quartz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
  1930. demo-quartz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
  1931. demo-quartz_1 | sasl.mechanism = GSSAPI
  1932. demo-quartz_1 | security.protocol = PLAINTEXT
  1933. demo-quartz_1 | send.buffer.bytes = 131072
  1934. demo-quartz_1 | ssl.cipher.suites = null
  1935. demo-quartz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
  1936. demo-quartz_1 | ssl.endpoint.identification.algorithm = null
  1937. demo-quartz_1 | ssl.key.password = null
  1938. demo-quartz_1 | ssl.keymanager.algorithm = SunX509
  1939. demo-quartz_1 | ssl.keystore.location = null
  1940. demo-quartz_1 | ssl.keystore.password = null
  1941. demo-quartz_1 | ssl.keystore.type = JKS
  1942. demo-quartz_1 | ssl.protocol = TLS
  1943. demo-quartz_1 | ssl.provider = null
  1944. demo-quartz_1 | ssl.secure.random.implementation = null
  1945. demo-quartz_1 | ssl.trustmanager.algorithm = PKIX
  1946. demo-quartz_1 | ssl.truststore.location = null
  1947. demo-quartz_1 | ssl.truststore.password = null
  1948. demo-quartz_1 | ssl.truststore.type = JKS
  1949. demo-quartz_1 | timeout.ms = 30000
  1950. demo-quartz_1 | value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
  1951. demo-quartz_1 |
  1952. demo-quartz_1 | web - 2018-05-14 16:36:11,833 [scheduler_Worker-1] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
  1953. demo-quartz_1 | web - 2018-05-14 16:36:11,833 [scheduler_Worker-1] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
  1954. demo-quartz_1 | web - 2018-05-14 16:36:12,044 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - entity sent to topic...
  1955. demo-kafka-elastic_1 | web - 2018-05-14 16:36:12,349 [elasticsearch[Neurotap][generic][T#1]] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
  1956. kafka_1 | [2018-05-14 16:36:14,672] INFO [GroupCoordinator 1]: Member consumer-4-a0abbbf3-8265-4017-8b72-a38d3c90ee12 in group filter has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1957. kafka_1 | [2018-05-14 16:36:14,682] INFO [GroupCoordinator 1]: Stabilized group filter generation 5 (__consumer_offsets-40) (kafka.coordinator.group.GroupCoordinator)
  1958. kafka_1 | [2018-05-14 16:36:14,698] INFO [GroupCoordinator 1]: Assignment received from leader for group filter for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1959. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,733 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group filter with generation 5
  1960. kafka_1 | [2018-05-14 16:36:14,732] INFO [GroupCoordinator 1]: Member consumer-5-a6e23486-408c-4a17-9579-f69055530183 in group greeting has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1961. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,734 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [filtered-0] for group filter
  1962. kafka_1 | [2018-05-14 16:36:14,739] INFO [GroupCoordinator 1]: Stabilized group greeting generation 5 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
  1963. kafka_1 | [2018-05-14 16:36:14,740] INFO [GroupCoordinator 1]: Member consumer-7-95270c33-64a6-40e3-97db-93d1e7ae1c0a in group bar has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1964. kafka_1 | [2018-05-14 16:36:14,741] INFO [GroupCoordinator 1]: Stabilized group bar generation 5 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
  1965. kafka_1 | [2018-05-14 16:36:14,743] INFO [GroupCoordinator 1]: Assignment received from leader for group greeting for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1966. kafka_1 | [2018-05-14 16:36:14,744] INFO [GroupCoordinator 1]: Assignment received from leader for group bar for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1967. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,748 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group bar with generation 5
  1968. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,748 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group greeting with generation 5
  1969. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,753 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group greeting
  1970. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,754 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group bar
  1971. kafka_1 | [2018-05-14 16:36:14,761] INFO [GroupCoordinator 1]: Member consumer-2-b95ed0df-0d24-4ff7-b97d-f31050a1553b in group headers has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1972. kafka_1 | [2018-05-14 16:36:14,766] INFO [GroupCoordinator 1]: Stabilized group headers generation 5 (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
  1973. kafka_1 | [2018-05-14 16:36:14,768] INFO [GroupCoordinator 1]: Assignment received from leader for group headers for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1974. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,779 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group headers with generation 5
  1975. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,780 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group headers
  1976. kafka_1 | [2018-05-14 16:36:14,788] INFO [GroupCoordinator 1]: Member consumer-6-a9013917-e32b-4c6b-88a6-61f7642f2364 in group foo has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1977. kafka_1 | [2018-05-14 16:36:14,790] INFO [GroupCoordinator 1]: Stabilized group foo generation 5 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
  1978. kafka_1 | [2018-05-14 16:36:14,792] INFO [GroupCoordinator 1]: Assignment received from leader for group foo for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1979. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,799 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group foo with generation 5
  1980. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,799 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group foo
  1981. kafka_1 | [2018-05-14 16:36:14,804] INFO [GroupCoordinator 1]: Member consumer-1-2ca463f5-1e8f-40d9-bc0d-f1ad621d219f in group bitcoin has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
  1982. kafka_1 | [2018-05-14 16:36:14,818] INFO [GroupCoordinator 1]: Stabilized group bitcoin generation 5 (__consumer_offsets-42) (kafka.coordinator.group.GroupCoordinator)
  1983. kafka_1 | [2018-05-14 16:36:14,820] INFO [GroupCoordinator 1]: Assignment received from leader for group bitcoin for generation 5 (kafka.coordinator.group.GroupCoordinator)
  1984. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,826 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group bitcoin with generation 5
  1985. demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,833 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [aironman-0] for group bitcoin
  1986. demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,108 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO c.a.demo.kafka.MessageListener - kafka message: BitcoinEuroKafkaEntity [id=76, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8810.54, priceBtc=1.0, _24hVolumeUsd=7407300000.0, marketCapUsd=150070474073, availableSupply=17033062.0, totalSupply=17033062.0, maxSupply=21000000.0, percentChange1h=0.32, percentChange24h=1.6, percentChange7d=-5.94, lastUpdated=1526313873, priceEur=7353.24144184, _24hVolumeEur=6182102950.8, marketCapEur=125248217380]
  1987. demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,212 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: ConsumerRecord(topic = aironman, partition = 0, offset = 75, CreateTime = 1526314130780, checksum = 1940245780, serialized key size = -1, serialized value size = 427, key = null, value = BitcoinEuroKafkaEntity [id=76, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8810.54, priceBtc=1.0, _24hVolumeUsd=7407300000.0, marketCapUsd=150070474073, availableSupply=17033062.0, totalSupply=17033062.0, maxSupply=21000000.0, percentChange1h=0.32, percentChange24h=1.6, percentChange7d=-5.94, lastUpdated=1526313873, priceEur=7353.24144184, _24hVolumeEur=6182102950.8, marketCapEur=125248217380])
  1988. demo-kafka-elastic_1 | org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.aironman.demo.kafka.MessageListener.bitCoinListener(com.aironman.demo.kafka.BitcoinEuroKafkaEntity)' threw exception; nested exception is NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]]
  1989. demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:188)
  1990. demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:72)
  1991. demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:47)
  1992. demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792)
  1993. demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736)
  1994. demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:568)
  1995. demo-kafka-elastic_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  1996. demo-kafka-elastic_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  1997. demo-kafka-elastic_1 | at java.lang.Thread.run(Thread.java:748)
  1998. demo-kafka-elastic_1 | Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
  1999. demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:326)
  2000. demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:223)
  2001. demo-kafka-elastic_1 | at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
  2002. demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:295)
  2003. demo-kafka-elastic_1 | at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
  2004. demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
  2005. demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
  2006. demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:536)
  2007. demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:147)
  2008. demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  2009. demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  2010. demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  2011. demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
  2012. demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:515)
  2013. demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:500)
  2014. demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:477)
  2015. demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
  2016. demo-kafka-elastic_1 | at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:56)
  2017. demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
  2018. demo-kafka-elastic_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
  2019. demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
  2020. demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57)
  2021. demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
  2022. demo-kafka-elastic_1 | at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
  2023. demo-kafka-elastic_1 | at com.sun.proxy.$Proxy62.save(Unknown Source)
  2024. demo-kafka-elastic_1 | at com.aironman.demo.es.service.BitCoinESServiceImpl.save(BitCoinESServiceImpl.java:20)
  2025. demo-kafka-elastic_1 | at com.aironman.demo.kafka.MessageListener.bitCoinListener(MessageListener.java:88)
  2026. demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  2027. demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  2028. demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  2029. demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
  2030. demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:180)
  2031. demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:112)
  2032. demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48)
  2033. demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:174)
  2034. demo-kafka-elastic_1 | ... 8 common frames omitted
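Note on the stack trace above: the `NoNodeAvailableException` lines up with the earlier WARN from the transport client ("node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring"). The client reaches the `elastic` container on port 9300 but rejects it because the node reports a different cluster name than the one the client was configured to expect, so every `save` then fails with "None of the configured nodes are available". A minimal sketch of the usual fix, aligning the name on both sides, is below; the service name `elastic` and the cluster name `elasticsearch_aironman` come from this log, while the exact property keys and file layout of the demo project are assumptions (these are the standard Spring Boot / Spring Data Elasticsearch property names of that era):

```yaml
# Sketch only, not the project's actual config.
#
# docker-compose.yml: make the 'elastic' node report the cluster name
# that the transport client expects:
#   elastic:
#     environment:
#       - cluster.name=elasticsearch_aironman
#
# application.yml of demo-kafka-elastic (or equivalently change only
# cluster-name here to match whatever the node actually reports):
spring:
  data:
    elasticsearch:
      cluster-name: elasticsearch_aironman   # must equal the node's cluster.name
      cluster-nodes: elastic:9300            # transport port 9300, not HTTP 9200
```

Either side can be changed; what matters is that `cluster-name` on the client and `cluster.name` on the node are identical, since the ES 2.x `TransportClient` drops nodes from other clusters by design.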
demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,213 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO c.a.demo.kafka.MessageListener - kafka message: BitcoinEuroKafkaEntity [id=1, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285]
demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,215 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: ConsumerRecord(topic = aironman, partition = 0, offset = 76, CreateTime = 1526315772029, checksum = 227856670, serialized key size = -1, serialized value size = 427, key = null, value = BitcoinEuroKafkaEntity [id=1, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285])
demo-kafka-elastic_1 | org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.aironman.demo.kafka.MessageListener.bitCoinListener(com.aironman.demo.kafka.BitcoinEuroKafkaEntity)' threw exception; nested exception is NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]]
demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:188)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:72)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:47)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:568)
demo-kafka-elastic_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
demo-kafka-elastic_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
demo-kafka-elastic_1 | at java.lang.Thread.run(Thread.java:748)
demo-kafka-elastic_1 | Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:326)
demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:223)
demo-kafka-elastic_1 | at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:295)
demo-kafka-elastic_1 | at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:536)
demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:147)
demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:515)
demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:500)
demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:477)
demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
demo-kafka-elastic_1 | at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:56)
demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
demo-kafka-elastic_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57)
demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
demo-kafka-elastic_1 | at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
demo-kafka-elastic_1 | at com.sun.proxy.$Proxy62.save(Unknown Source)
demo-kafka-elastic_1 | at com.aironman.demo.es.service.BitCoinESServiceImpl.save(BitCoinESServiceImpl.java:20)
demo-kafka-elastic_1 | at com.aironman.demo.kafka.MessageListener.bitCoinListener(MessageListener.java:88)
demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:180)
demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:112)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48)
demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:174)
demo-kafka-elastic_1 | ... 8 common frames omitted
demo-quartz_1 | web - 2018-05-14 16:36:17,046 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - Sample job has finished...
...