- :demo-2 aironman$ clear && docker-compose up
- WARNING: Some services (demo-kafka-elastic, demo-quartz) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
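Compose emits this warning because the `deploy:` key (replicas, resource limits, restart policies) belongs to the swarm deployment model and is only honored by `docker stack deploy`; a plain `docker-compose up` parses it and then ignores it. A minimal sketch of the kind of service definition that triggers the message, with illustrative values that are not taken from this project's actual compose file:

    # Hypothetical excerpt; only the service name comes from the warning above.
    version: "3"
    services:
      demo-quartz:
        image: aironman/demo-quartz   # hypothetical image name
        deploy:                       # honored only by `docker stack deploy` (swarm)
          replicas: 1
          resources:
            limits:
              memory: 512M

Running the same file against a swarm with `docker stack deploy --compose-file docker-compose.yml demo-2` would apply the deploy section; under plain Compose it is silently skipped, which is all the warning means.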
- Pulling kibana (seeruk/docker-kibana-sense:4.5)...
- 4.5: Pulling from seeruk/docker-kibana-sense
- 357ea8c3d80b: Already exists
- 9d99c1434c75: Pull complete
- aa9e96f4d5f4: Pull complete
- 393684003c1e: Pull complete
- e2578dae99ba: Pull complete
- e93da2cb19e9: Pull complete
- b11b2a2ce046: Pull complete
- 136e77e2bc04: Pull complete
- c90792d80587: Pull complete
- e7d4af8bae7c: Pull complete
- Digest: sha256:1c7c6b0a027078c5a50a1a418a2bcf06e1a2d5b3636d62bbf00ca0a93d05d7be
- Status: Downloaded newer image for seeruk/docker-kibana-sense:4.5
- Starting demo-2_zookeeper_1 ... done
- Starting demo-2_kafka_1 ... done
- Starting demo-2_demo-quartz_1 ... done
- Starting demo-2_demo-kafka-elastic_1 ... done
- Recreating demo-2_kibana_1 ... done
- Starting demo-2_elastic_1 ... done
- Attaching to demo-2_demo-quartz_1, demo-2_elastic_1, demo-2_kibana_1, demo-2_zookeeper_1, demo-2_kafka_1, demo-2_demo-kafka-elastic_1
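At this point the whole stack is attached: ZooKeeper, one Kafka broker, Elasticsearch, Kibana with the Sense plugin, and two Spring Boot applications (demo-quartz and demo-kafka-elastic). A topology sketch reconstructed from the log follows; the Kibana image and port are confirmed by the output itself, while the remaining images are assumptions (the broker's /opt/kafka paths and per-hostname log directory look like the wurstmeister images, but the compose file is not shown here):

    # Reconstructed topology sketch; images marked "assumed"/"hypothetical"
    # are guesses, not confirmed by this log.
    version: "3"
    services:
      zookeeper:
        image: wurstmeister/zookeeper          # assumed; log shows ZooKeeper 3.4.9
      kafka:
        image: wurstmeister/kafka              # assumed; log shows Kafka 1.1.0 (Scala 2.12)
        depends_on:
          - zookeeper
      elastic:
        image: elasticsearch:2.4.0             # version from the elastic_1 startup line
      kibana:
        image: seeruk/docker-kibana-sense:4.5  # confirmed by the image pull above
        ports:
          - "5601:5601"                        # Kibana reports listening on 5601
        depends_on:
          - elastic
      demo-quartz:
        image: aironman/demo-quartz            # hypothetical image name
        depends_on:
          - kafka
      demo-kafka-elastic:
        image: aironman/demo-kafka-elastic     # hypothetical image name
        depends_on:
          - kafka
          - elastic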
- kafka_1 | Excluding KAFKA_HOME from broker config
- kafka_1 | [Configuring] 'port' in '/opt/kafka/config/server.properties'
- kafka_1 | [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties'
- zookeeper_1 | ZooKeeper JMX enabled by default
- zookeeper_1 | Using config: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
- kafka_1 | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
- kafka_1 | Excluding KAFKA_VERSION from broker config
- kafka_1 | [Configuring] 'listeners' in '/opt/kafka/config/server.properties'
- kafka_1 | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
- kafka_1 | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
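The [Configuring] lines come from the image's entrypoint, which maps KAFKA_*-prefixed environment variables onto keys in /opt/kafka/config/server.properties before starting the broker, excluding KAFKA_HOME and KAFKA_VERSION as the two "Excluding" lines note. A sketch of the environment block that would produce these six lines, assuming the wurstmeister-style convention (KAFKA_FOO_BAR becomes foo.bar), with values taken from the KafkaConfig dump further down where possible:

    # Environment-mapping sketch; the convention is assumed, the values are
    # read back from the KafkaConfig dump below.
    kafka:
      environment:
        KAFKA_PORT: 9092                                    # -> port
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092  # -> advertised.listeners
        KAFKA_BROKER_ID: 1                                  # -> broker.id
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092           # -> listeners
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181             # -> zookeeper.connect
        # log.dirs ends up as /kafka/kafka-logs-<container id>, which suggests
        # the entrypoint derives it from the hostname rather than an env var.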
- elastic_1 | [2018-05-14 16:35:54,669][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
- elastic_1 | [2018-05-14 16:35:55,177][INFO ][node ] [Proteus] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
- elastic_1 | [2018-05-14 16:35:55,177][INFO ][node ] [Proteus] initializing ...
- zookeeper_1 | 2018-05-14 16:35:55,415 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
- zookeeper_1 | 2018-05-14 16:35:55,432 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
- zookeeper_1 | 2018-05-14 16:35:55,432 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
- zookeeper_1 | 2018-05-14 16:35:55,435 [myid:] - WARN [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running in standalone mode
- zookeeper_1 | 2018-05-14 16:35:55,435 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
- zookeeper_1 | 2018-05-14 16:35:55,462 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
- zookeeper_1 | 2018-05-14 16:35:55,463 [myid:] - INFO [main:ZooKeeperServerMain@96] - Starting server
- zookeeper_1 | 2018-05-14 16:35:55,471 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
- zookeeper_1 | 2018-05-14 16:35:55,475 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
- zookeeper_1 | 2018-05-14 16:35:55,475 [myid:] - INFO [main:Environment@100] - Server environment:host.name=dfb8ed0eb666
- zookeeper_1 | 2018-05-14 16:35:55,476 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65
- zookeeper_1 | 2018-05-14 16:35:55,476 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
- zookeeper_1 | 2018-05-14 16:35:55,477 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
- zookeeper_1 | 2018-05-14 16:35:55,478 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.9/bin/../build/classes:/opt/zookeeper-3.4.9/bin/../build/lib/*.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/opt/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.9/bin/../conf:
- zookeeper_1 | 2018-05-14 16:35:55,483 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
- zookeeper_1 | 2018-05-14 16:35:55,483 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:sense","info"],"pid":12,"name":"plugin:sense","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:kibana","info"],"pid":12,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- zookeeper_1 | 2018-05-14 16:35:55,487 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA>
- zookeeper_1 | 2018-05-14 16:35:55,493 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
- zookeeper_1 | 2018-05-14 16:35:55,493 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
- zookeeper_1 | 2018-05-14 16:35:55,494 [myid:] - INFO [main:Environment@100] - Server environment:os.version=4.9.87-linuxkit-aufs
- zookeeper_1 | 2018-05-14 16:35:55,494 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root
- zookeeper_1 | 2018-05-14 16:35:55,495 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root
- zookeeper_1 | 2018-05-14 16:35:55,495 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.9
- zookeeper_1 | 2018-05-14 16:35:55,520 [myid:] - INFO [main:ZooKeeperServer@815] - tickTime set to 2000
- zookeeper_1 | 2018-05-14 16:35:55,521 [myid:] - INFO [main:ZooKeeperServer@824] - minSessionTimeout set to -1
- zookeeper_1 | 2018-05-14 16:35:55,522 [myid:] - INFO [main:ZooKeeperServer@833] - maxSessionTimeout set to -1
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:elasticsearch","info"],"pid":12,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["error","elasticsearch"],"pid":12,"message":"Request error, retrying -- connect ECONNREFUSED 172.21.0.5:9200"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":12,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:markdown_vis","info"],"pid":12,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:elasticsearch","error"],"pid":12,"name":"plugin:elasticsearch","state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elastic:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:metric_vis","info"],"pid":12,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:spyModes","info"],"pid":12,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- zookeeper_1 | 2018-05-14 16:35:55,580 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:statusPage","info"],"pid":12,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["status","plugin:table_vis","info"],"pid":12,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:55+00:00","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
- demo-quartz_1 |
- demo-quartz_1 | . ____ _ __ _ _
- demo-quartz_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
- demo-quartz_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
- demo-quartz_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
- demo-quartz_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
- demo-quartz_1 | =========|_|==============|___/=/_/_/_/
- demo-quartz_1 | :: Spring Boot :: (v1.5.12.RELEASE)
- demo-quartz_1 |
- elastic_1 | [2018-05-14 16:35:56,637][INFO ][plugins ] [Proteus] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
- kafka_1 | [2018-05-14 16:35:56,653] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
- elastic_1 | [2018-05-14 16:35:56,738][INFO ][env ] [Proteus] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [39.7gb], net total_space [59gb], spins? [possibly], types [ext4]
- elastic_1 | [2018-05-14 16:35:56,738][INFO ][env ] [Proteus] heap size [990.7mb], compressed ordinary object pointers [true]
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | . ____ _ __ _ _
- demo-kafka-elastic_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
- demo-kafka-elastic_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
- demo-kafka-elastic_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
- demo-kafka-elastic_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
- demo-kafka-elastic_1 | =========|_|==============|___/=/_/_/_/
- demo-kafka-elastic_1 | :: Spring Boot :: (v1.5.13.RELEASE)
- demo-kafka-elastic_1 |
- demo-quartz_1 | web - 2018-05-14 16:35:57,274 [main] INFO c.a.demoquartz.DemoQuartzApplication - Starting DemoQuartzApplication v0.0.1-SNAPSHOT on 85b6c5d32806 with PID 1 (/usr/share/aironman/demo-quartz.jar started by root in /)
- demo-quartz_1 | web - 2018-05-14 16:35:57,294 [main] INFO c.a.demoquartz.DemoQuartzApplication - No active profile set, falling back to default profiles: default
- demo-kafka-elastic_1 | web - 2018-05-14 16:35:57,378 [main] INFO c.a.demo.DemoKafkaElasticApplication - Starting DemoKafkaElasticApplication v0.0.1-SNAPSHOT on 6c9aaac17b42 with PID 1 (/usr/share/aironman/demo-kafka-elastic.jar started by root in /)
- demo-kafka-elastic_1 | web - 2018-05-14 16:35:57,383 [main] INFO c.a.demo.DemoKafkaElasticApplication - No active profile set, falling back to default profiles: default
- kafka_1 | [2018-05-14 16:35:58,022] INFO starting (kafka.server.KafkaServer)
- kafka_1 | [2018-05-14 16:35:58,025] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
- kafka_1 | [2018-05-14 16:35:58,056] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:58+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
- kafka_1 | [2018-05-14 16:35:58,084] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,084] INFO Client environment:host.name=5409d6899724 (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.version=1.8.0_151 (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,085] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,086] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.1.0.jar:/opt/kafka/bin/../libs/connect-file-1.1.0.jar:/opt/kafka/bin/../libs/connect-json-1.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.1.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.1.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.4.jar:/opt/kafka/bin/../libs/jackson-core-2.9.4.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.4.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-client-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-http-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-io-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-security-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-server-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jetty-util-9.2.24.v20180105.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-1.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.1.0.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-1.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.2.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.12.4.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.7.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.1.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,087] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,088] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,088] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,089] INFO Client environment:os.version=4.9.87-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,089] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,090] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,090] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:35:58+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
- kafka_1 | [2018-05-14 16:35:58,093] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@be64738 (org.apache.zookeeper.ZooKeeper)
- kafka_1 | [2018-05-14 16:35:58,130] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
- kafka_1 | [2018-05-14 16:35:58,141] INFO Opening socket connection to server demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
- kafka_1 | [2018-05-14 16:35:58,172] INFO Socket connection established to demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181, initiating session (org.apache.zookeeper.ClientCnxn)
- zookeeper_1 | 2018-05-14 16:35:58,173 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.21.0.2:56606
- zookeeper_1 | 2018-05-14 16:35:58,191 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.21.0.2:56606
- zookeeper_1 | 2018-05-14 16:35:58,203 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.dc
- kafka_1 | [2018-05-14 16:35:58,219] INFO Session establishment complete on server demo-2_zookeeper_1.demo-2_default/172.21.0.4:2181, sessionid = 0x1635f8228c90000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
- zookeeper_1 | 2018-05-14 16:35:58,223 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x1635f8228c90000 with negotiated timeout 6000 for client /172.21.0.2:56606
- kafka_1 | [2018-05-14 16:35:58,227] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
- zookeeper_1 | 2018-05-14 16:35:58,331 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x1 zxid:0xdd txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
- zookeeper_1 | 2018-05-14 16:35:58,356 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x2 zxid:0xde txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
- zookeeper_1 | 2018-05-14 16:35:58,362 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x3 zxid:0xdf txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics
- zookeeper_1 | 2018-05-14 16:35:58,365 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x4 zxid:0xe0 txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes
- zookeeper_1 | 2018-05-14 16:35:58,369 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x5 zxid:0xe1 txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics
- zookeeper_1 | 2018-05-14 16:35:58,373 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x6 zxid:0xe2 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid
- zookeeper_1 | 2018-05-14 16:35:58,379 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x7 zxid:0xe3 txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification
- zookeeper_1 | 2018-05-14 16:35:58,382 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x8 zxid:0xe4 txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block
- zookeeper_1 | 2018-05-14 16:35:58,386 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0x9 zxid:0xe5 txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification
- zookeeper_1 | 2018-05-14 16:35:58,389 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xa zxid:0xe6 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
- zookeeper_1 | 2018-05-14 16:35:58,392 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xb zxid:0xe7 txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode = NodeExists for /config/clients
- zookeeper_1 | 2018-05-14 16:35:58,394 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xc zxid:0xe8 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users
- zookeeper_1 | 2018-05-14 16:35:58,397 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:create cxid:0xd zxid:0xe9 txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers
- kafka_1 | [2018-05-14 16:35:58,922] INFO Cluster ID = rYF83OU0RCan_mdzL10D5w (kafka.server.KafkaServer)
- demo-quartz_1 | web - 2018-05-14 16:35:59,058 [background-preinit] INFO o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.3.6.Final
- kafka_1 | [2018-05-14 16:35:59,144] INFO KafkaConfig values:
- kafka_1 | advertised.host.name = null
- kafka_1 | advertised.listeners = PLAINTEXT://kafka:9092
- kafka_1 | advertised.port = null
- kafka_1 | alter.config.policy.class.name = null
- kafka_1 | alter.log.dirs.replication.quota.window.num = 11
- kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
- kafka_1 | authorizer.class.name =
- kafka_1 | auto.create.topics.enable = true
- kafka_1 | auto.leader.rebalance.enable = true
- kafka_1 | background.threads = 10
- kafka_1 | broker.id = 1
- kafka_1 | broker.id.generation.enable = true
- kafka_1 | broker.rack = null
- kafka_1 | compression.type = producer
- kafka_1 | connections.max.idle.ms = 600000
- kafka_1 | controlled.shutdown.enable = true
- kafka_1 | controlled.shutdown.max.retries = 3
- kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
- kafka_1 | controller.socket.timeout.ms = 30000
- kafka_1 | create.topic.policy.class.name = null
- kafka_1 | default.replication.factor = 1
- kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
- kafka_1 | delegation.token.expiry.time.ms = 86400000
- kafka_1 | delegation.token.master.key = null
- kafka_1 | delegation.token.max.lifetime.ms = 604800000
- kafka_1 | delete.records.purgatory.purge.interval.requests = 1
- kafka_1 | delete.topic.enable = true
- kafka_1 | fetch.purgatory.purge.interval.requests = 1000
- kafka_1 | group.initial.rebalance.delay.ms = 0
- kafka_1 | group.max.session.timeout.ms = 300000
- kafka_1 | group.min.session.timeout.ms = 6000
- kafka_1 | host.name =
- kafka_1 | inter.broker.listener.name = null
- kafka_1 | inter.broker.protocol.version = 1.1-IV0
- kafka_1 | leader.imbalance.check.interval.seconds = 300
- kafka_1 | leader.imbalance.per.broker.percentage = 10
- kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
- kafka_1 | listeners = PLAINTEXT://0.0.0.0:9092
- kafka_1 | log.cleaner.backoff.ms = 15000
- kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
- kafka_1 | log.cleaner.delete.retention.ms = 86400000
- kafka_1 | log.cleaner.enable = true
- kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
- kafka_1 | log.cleaner.io.buffer.size = 524288
- kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
- kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
- kafka_1 | log.cleaner.min.compaction.lag.ms = 0
- kafka_1 | log.cleaner.threads = 1
- kafka_1 | log.cleanup.policy = [delete]
- kafka_1 | log.dir = /tmp/kafka-logs
- kafka_1 | log.dirs = /kafka/kafka-logs-5409d6899724
- kafka_1 | log.flush.interval.messages = 9223372036854775807
- kafka_1 | log.flush.interval.ms = null
- kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
- kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
- kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
- kafka_1 | log.index.interval.bytes = 4096
- kafka_1 | log.index.size.max.bytes = 10485760
- kafka_1 | log.message.format.version = 1.1-IV0
- kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
- kafka_1 | log.message.timestamp.type = CreateTime
- kafka_1 | log.preallocate = false
- kafka_1 | log.retention.bytes = -1
- kafka_1 | log.retention.check.interval.ms = 300000
- kafka_1 | log.retention.hours = 168
- kafka_1 | log.retention.minutes = null
- kafka_1 | log.retention.ms = null
- kafka_1 | log.roll.hours = 168
- kafka_1 | log.roll.jitter.hours = 0
- kafka_1 | log.roll.jitter.ms = null
- kafka_1 | log.roll.ms = null
- kafka_1 | log.segment.bytes = 1073741824
- kafka_1 | log.segment.delete.delay.ms = 60000
- kafka_1 | max.connections.per.ip = 2147483647
- kafka_1 | max.connections.per.ip.overrides =
- kafka_1 | max.incremental.fetch.session.cache.slots = 1000
- kafka_1 | message.max.bytes = 1000012
- kafka_1 | metric.reporters = []
- kafka_1 | metrics.num.samples = 2
- kafka_1 | metrics.recording.level = INFO
- kafka_1 | metrics.sample.window.ms = 30000
- kafka_1 | min.insync.replicas = 1
- kafka_1 | num.io.threads = 8
- kafka_1 | num.network.threads = 3
- kafka_1 | num.partitions = 1
- kafka_1 | num.recovery.threads.per.data.dir = 1
- kafka_1 | num.replica.alter.log.dirs.threads = null
- kafka_1 | num.replica.fetchers = 1
- kafka_1 | offset.metadata.max.bytes = 4096
- kafka_1 | offsets.commit.required.acks = -1
- kafka_1 | offsets.commit.timeout.ms = 5000
- kafka_1 | offsets.load.buffer.size = 5242880
- kafka_1 | offsets.retention.check.interval.ms = 600000
- kafka_1 | offsets.retention.minutes = 1440
- kafka_1 | offsets.topic.compression.codec = 0
- kafka_1 | offsets.topic.num.partitions = 50
- kafka_1 | offsets.topic.replication.factor = 1
- kafka_1 | offsets.topic.segment.bytes = 104857600
- kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
- kafka_1 | password.encoder.iterations = 4096
- kafka_1 | password.encoder.key.length = 128
- kafka_1 | password.encoder.keyfactory.algorithm = null
- kafka_1 | password.encoder.old.secret = null
- kafka_1 | password.encoder.secret = null
- kafka_1 | port = 9092
- kafka_1 | principal.builder.class = null
- kafka_1 | producer.purgatory.purge.interval.requests = 1000
- kafka_1 | queued.max.request.bytes = -1
- kafka_1 | queued.max.requests = 500
- kafka_1 | quota.consumer.default = 9223372036854775807
- kafka_1 | quota.producer.default = 9223372036854775807
- kafka_1 | quota.window.num = 11
- kafka_1 | quota.window.size.seconds = 1
- kafka_1 | replica.fetch.backoff.ms = 1000
- kafka_1 | replica.fetch.max.bytes = 1048576
- kafka_1 | replica.fetch.min.bytes = 1
- kafka_1 | replica.fetch.response.max.bytes = 10485760
- kafka_1 | replica.fetch.wait.max.ms = 500
- kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
- kafka_1 | replica.lag.time.max.ms = 10000
- kafka_1 | replica.socket.receive.buffer.bytes = 65536
- kafka_1 | replica.socket.timeout.ms = 30000
- kafka_1 | replication.quota.window.num = 11
- kafka_1 | replication.quota.window.size.seconds = 1
- kafka_1 | request.timeout.ms = 30000
- kafka_1 | reserved.broker.max.id = 1000
- kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
- kafka_1 | sasl.jaas.config = null
- kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
- kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
- kafka_1 | sasl.kerberos.service.name = null
- kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
- kafka_1 | security.inter.broker.protocol = PLAINTEXT
- kafka_1 | socket.receive.buffer.bytes = 102400
- kafka_1 | socket.request.max.bytes = 104857600
- kafka_1 | socket.send.buffer.bytes = 102400
- kafka_1 | ssl.cipher.suites = []
- kafka_1 | ssl.client.auth = none
- kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- kafka_1 | ssl.endpoint.identification.algorithm = null
- kafka_1 | ssl.key.password = null
- kafka_1 | ssl.keymanager.algorithm = SunX509
- kafka_1 | ssl.keystore.location = null
- kafka_1 | ssl.keystore.password = null
- kafka_1 | ssl.keystore.type = JKS
- kafka_1 | ssl.protocol = TLS
- kafka_1 | ssl.provider = null
- kafka_1 | ssl.secure.random.implementation = null
- kafka_1 | ssl.trustmanager.algorithm = PKIX
- kafka_1 | ssl.truststore.location = null
- kafka_1 | ssl.truststore.password = null
- kafka_1 | ssl.truststore.type = JKS
- kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
- kafka_1 | transaction.max.timeout.ms = 900000
- kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
- kafka_1 | transaction.state.log.load.buffer.size = 5242880
- kafka_1 | transaction.state.log.min.isr = 1
- kafka_1 | transaction.state.log.num.partitions = 50
- kafka_1 | transaction.state.log.replication.factor = 1
- kafka_1 | transaction.state.log.segment.bytes = 104857600
- kafka_1 | transactional.id.expiration.ms = 604800000
- kafka_1 | unclean.leader.election.enable = false
- kafka_1 | zookeeper.connect = zookeeper:2181
- kafka_1 | zookeeper.connection.timeout.ms = 6000
- kafka_1 | zookeeper.max.in.flight.requests = 10
- kafka_1 | zookeeper.session.timeout.ms = 6000
- kafka_1 | zookeeper.set.acl = false
- kafka_1 | zookeeper.sync.time.ms = 2000
- kafka_1 | (kafka.server.KafkaConfig)
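Two values in this dump matter most for the rest of the stack: advertised.listeners = PLAINTEXT://kafka:9092 is the address the broker hands back to clients (resolvable through Compose's service DNS name `kafka`), and zookeeper.connect = zookeeper:2181 points at the companion container. The Spring Boot apps therefore have to bootstrap against kafka:9092, not localhost. A hedged application.yml sketch for the consumer side (the property names are standard Spring Boot / spring-kafka; the group id is an assumption):

    # Hypothetical application.yml for demo-kafka-elastic; only the bootstrap
    # address is taken from the config dump above.
    spring:
      kafka:
        bootstrap-servers: kafka:9092
        consumer:
          group-id: demo-kafka-elastic   # assumed group id
          auto-offset-reset: earliest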
- kafka_1 | [2018-05-14 16:35:59,171] INFO KafkaConfig values:
- kafka_1 | advertised.host.name = null
- kafka_1 | advertised.listeners = PLAINTEXT://kafka:9092
- kafka_1 | advertised.port = null
- kafka_1 | alter.config.policy.class.name = null
- kafka_1 | alter.log.dirs.replication.quota.window.num = 11
- kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
- kafka_1 | authorizer.class.name =
- kafka_1 | auto.create.topics.enable = true
- kafka_1 | auto.leader.rebalance.enable = true
- kafka_1 | background.threads = 10
- kafka_1 | broker.id = 1
- kafka_1 | broker.id.generation.enable = true
- kafka_1 | broker.rack = null
- kafka_1 | compression.type = producer
- kafka_1 | connections.max.idle.ms = 600000
- kafka_1 | controlled.shutdown.enable = true
- kafka_1 | controlled.shutdown.max.retries = 3
- kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
- kafka_1 | controller.socket.timeout.ms = 30000
- kafka_1 | create.topic.policy.class.name = null
- kafka_1 | default.replication.factor = 1
- kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
- kafka_1 | delegation.token.expiry.time.ms = 86400000
- kafka_1 | delegation.token.master.key = null
- kafka_1 | delegation.token.max.lifetime.ms = 604800000
- kafka_1 | delete.records.purgatory.purge.interval.requests = 1
- kafka_1 | delete.topic.enable = true
- kafka_1 | fetch.purgatory.purge.interval.requests = 1000
- kafka_1 | group.initial.rebalance.delay.ms = 0
- kafka_1 | group.max.session.timeout.ms = 300000
- kafka_1 | group.min.session.timeout.ms = 6000
- kafka_1 | host.name =
- kafka_1 | inter.broker.listener.name = null
- kafka_1 | inter.broker.protocol.version = 1.1-IV0
- kafka_1 | leader.imbalance.check.interval.seconds = 300
- kafka_1 | leader.imbalance.per.broker.percentage = 10
- kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
- kafka_1 | listeners = PLAINTEXT://0.0.0.0:9092
- kafka_1 | log.cleaner.backoff.ms = 15000
- kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
- kafka_1 | log.cleaner.delete.retention.ms = 86400000
- kafka_1 | log.cleaner.enable = true
- kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
- kafka_1 | log.cleaner.io.buffer.size = 524288
- kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
- kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
- kafka_1 | log.cleaner.min.compaction.lag.ms = 0
- kafka_1 | log.cleaner.threads = 1
- kafka_1 | log.cleanup.policy = [delete]
- kafka_1 | log.dir = /tmp/kafka-logs
- kafka_1 | log.dirs = /kafka/kafka-logs-5409d6899724
- kafka_1 | log.flush.interval.messages = 9223372036854775807
- kafka_1 | log.flush.interval.ms = null
- kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
- kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
- kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
- kafka_1 | log.index.interval.bytes = 4096
- kafka_1 | log.index.size.max.bytes = 10485760
- kafka_1 | log.message.format.version = 1.1-IV0
- kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
- kafka_1 | log.message.timestamp.type = CreateTime
- kafka_1 | log.preallocate = false
- kafka_1 | log.retention.bytes = -1
- kafka_1 | log.retention.check.interval.ms = 300000
- kafka_1 | log.retention.hours = 168
- kafka_1 | log.retention.minutes = null
- kafka_1 | log.retention.ms = null
- kafka_1 | log.roll.hours = 168
- kafka_1 | log.roll.jitter.hours = 0
- kafka_1 | log.roll.jitter.ms = null
- kafka_1 | log.roll.ms = null
- kafka_1 | log.segment.bytes = 1073741824
- kafka_1 | log.segment.delete.delay.ms = 60000
- kafka_1 | max.connections.per.ip = 2147483647
- kafka_1 | max.connections.per.ip.overrides =
- kafka_1 | max.incremental.fetch.session.cache.slots = 1000
- kafka_1 | message.max.bytes = 1000012
- kafka_1 | metric.reporters = []
- kafka_1 | metrics.num.samples = 2
- kafka_1 | metrics.recording.level = INFO
- kafka_1 | metrics.sample.window.ms = 30000
- kafka_1 | min.insync.replicas = 1
- kafka_1 | num.io.threads = 8
- kafka_1 | num.network.threads = 3
- kafka_1 | num.partitions = 1
- kafka_1 | num.recovery.threads.per.data.dir = 1
- kafka_1 | num.replica.alter.log.dirs.threads = null
- kafka_1 | num.replica.fetchers = 1
- kafka_1 | offset.metadata.max.bytes = 4096
- kafka_1 | offsets.commit.required.acks = -1
- kafka_1 | offsets.commit.timeout.ms = 5000
- kafka_1 | offsets.load.buffer.size = 5242880
- kafka_1 | offsets.retention.check.interval.ms = 600000
- kafka_1 | offsets.retention.minutes = 1440
- kafka_1 | offsets.topic.compression.codec = 0
- kafka_1 | offsets.topic.num.partitions = 50
- kafka_1 | offsets.topic.replication.factor = 1
- kafka_1 | offsets.topic.segment.bytes = 104857600
- kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
- kafka_1 | password.encoder.iterations = 4096
- kafka_1 | password.encoder.key.length = 128
- kafka_1 | password.encoder.keyfactory.algorithm = null
- kafka_1 | password.encoder.old.secret = null
- kafka_1 | password.encoder.secret = null
- kafka_1 | port = 9092
- kafka_1 | principal.builder.class = null
- kafka_1 | producer.purgatory.purge.interval.requests = 1000
- kafka_1 | queued.max.request.bytes = -1
- kafka_1 | queued.max.requests = 500
- kafka_1 | quota.consumer.default = 9223372036854775807
- kafka_1 | quota.producer.default = 9223372036854775807
- kafka_1 | quota.window.num = 11
- kafka_1 | quota.window.size.seconds = 1
- kafka_1 | replica.fetch.backoff.ms = 1000
- kafka_1 | replica.fetch.max.bytes = 1048576
- kafka_1 | replica.fetch.min.bytes = 1
- kafka_1 | replica.fetch.response.max.bytes = 10485760
- kafka_1 | replica.fetch.wait.max.ms = 500
- kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
- kafka_1 | replica.lag.time.max.ms = 10000
- kafka_1 | replica.socket.receive.buffer.bytes = 65536
- kafka_1 | replica.socket.timeout.ms = 30000
- kafka_1 | replication.quota.window.num = 11
- kafka_1 | replication.quota.window.size.seconds = 1
- kafka_1 | request.timeout.ms = 30000
- kafka_1 | reserved.broker.max.id = 1000
- kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
- kafka_1 | sasl.jaas.config = null
- kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
- kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
- kafka_1 | sasl.kerberos.service.name = null
- kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
- kafka_1 | security.inter.broker.protocol = PLAINTEXT
- kafka_1 | socket.receive.buffer.bytes = 102400
- kafka_1 | socket.request.max.bytes = 104857600
- kafka_1 | socket.send.buffer.bytes = 102400
- kafka_1 | ssl.cipher.suites = []
- kafka_1 | ssl.client.auth = none
- kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- kafka_1 | ssl.endpoint.identification.algorithm = null
- kafka_1 | ssl.key.password = null
- kafka_1 | ssl.keymanager.algorithm = SunX509
- kafka_1 | ssl.keystore.location = null
- kafka_1 | ssl.keystore.password = null
- kafka_1 | ssl.keystore.type = JKS
- kafka_1 | ssl.protocol = TLS
- kafka_1 | ssl.provider = null
- kafka_1 | ssl.secure.random.implementation = null
- kafka_1 | ssl.trustmanager.algorithm = PKIX
- kafka_1 | ssl.truststore.location = null
- kafka_1 | ssl.truststore.password = null
- kafka_1 | ssl.truststore.type = JKS
- kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
- kafka_1 | transaction.max.timeout.ms = 900000
- kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
- kafka_1 | transaction.state.log.load.buffer.size = 5242880
- kafka_1 | transaction.state.log.min.isr = 1
- kafka_1 | transaction.state.log.num.partitions = 50
- kafka_1 | transaction.state.log.replication.factor = 1
- kafka_1 | transaction.state.log.segment.bytes = 104857600
- kafka_1 | transactional.id.expiration.ms = 604800000
- kafka_1 | unclean.leader.election.enable = false
- kafka_1 | zookeeper.connect = zookeeper:2181
- kafka_1 | zookeeper.connection.timeout.ms = 6000
- kafka_1 | zookeeper.max.in.flight.requests = 10
- kafka_1 | zookeeper.session.timeout.ms = 6000
- kafka_1 | zookeeper.set.acl = false
- kafka_1 | zookeeper.sync.time.ms = 2000
- kafka_1 | (kafka.server.KafkaConfig)
- kafka_1 | [2018-05-14 16:35:59,291] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
- kafka_1 | [2018-05-14 16:35:59,304] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
- kafka_1 | [2018-05-14 16:35:59,306] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
- kafka_1 | [2018-05-14 16:35:59,480] INFO Loading logs. (kafka.log.LogManager)
- kafka_1 | [2018-05-14 16:35:59,667] WARN [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-5/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-5/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,764] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,794] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,801] INFO [Log partition=__consumer_offsets-5, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 202 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,831] WARN [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-36/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-36/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,834] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,839] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,843] INFO [Log partition=__consumer_offsets-36, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,850] WARN [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:35:59,940] INFO [ProducerStateManager partition=__consumer_offsets-49] Writing producer snapshot at offset 880 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:35:59,950] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,008] INFO [ProducerStateManager partition=__consumer_offsets-49] Writing producer snapshot at offset 880 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,024] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 880 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,038] INFO [ProducerStateManager partition=__consumer_offsets-49] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-49/00000000000000000880.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,083] INFO [Log partition=__consumer_offsets-49, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 880 in 234 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,104] WARN [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-37/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-37/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,106] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,110] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,111] INFO [Log partition=__consumer_offsets-37, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,149] WARN [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-20/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-20/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,152] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,155] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,158] INFO [Log partition=__consumer_offsets-20, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,170] WARN [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-31/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-31/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,172] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,187] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,188] INFO [Log partition=__consumer_offsets-31, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 19 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,204] WARN [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-23/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-23/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,206] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,217] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,218] INFO [Log partition=__consumer_offsets-23, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,238] WARN [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-9/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-9/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,239] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,241] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,241] INFO [Log partition=__consumer_offsets-9, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,266] WARN [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-32/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-32/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,268] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,275] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,277] INFO [Log partition=__consumer_offsets-32, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,306] WARN [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-28/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-28/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,307] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,308] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,309] INFO [Log partition=__consumer_offsets-28, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,337] WARN [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-17/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-17/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,338] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,346] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,347] INFO [Log partition=__consumer_offsets-17, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 12 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,354] WARN [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-35/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-35/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,360] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,361] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,363] INFO [Log partition=__consumer_offsets-35, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,367] WARN [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,391] INFO [ProducerStateManager partition=__consumer_offsets-42] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,415] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,419] INFO [ProducerStateManager partition=__consumer_offsets-42] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,420] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,422] INFO [ProducerStateManager partition=__consumer_offsets-42] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-42/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,423] INFO [Log partition=__consumer_offsets-42, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 57 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,427] WARN [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-34/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-34/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,428] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,429] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,430] INFO [Log partition=__consumer_offsets-34, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,438] WARN [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-21/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-21/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,445] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,446] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,453] INFO [Log partition=__consumer_offsets-21, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 16 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,464] WARN [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-3/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-3/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,465] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,476] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,477] INFO [Log partition=__consumer_offsets-3, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 14 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,488] WARN [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-27/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-27/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,493] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,495] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,503] INFO [Log partition=__consumer_offsets-27, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 16 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,518] WARN [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-19/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-19/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,519] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,521] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,525] INFO [Log partition=__consumer_offsets-19, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,547] WARN [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-13/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-13/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,548] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,550] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,551] INFO [Log partition=__consumer_offsets-13, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 23 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,554] WARN [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-1/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-1/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,556] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,558] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,558] INFO [Log partition=__consumer_offsets-1, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,568] WARN [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-26/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-26/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,569] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,578] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,578] INFO [Log partition=__consumer_offsets-26, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,588] WARN [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-41/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-41/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,590] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,591] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,592] INFO [Log partition=__consumer_offsets-41, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,595] WARN [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-2/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-2/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,597] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:00+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:00+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
- kafka_1 | [2018-05-14 16:36:00,599] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,604] INFO [Log partition=__consumer_offsets-2, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,608] WARN [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-46/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-46/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,609] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,610] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,612] INFO [Log partition=__consumer_offsets-46, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,629] WARN [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-15/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-15/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,630] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,632] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,633] INFO [Log partition=__consumer_offsets-15, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,637] WARN [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-6/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-6/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,638] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,639] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,640] INFO [Log partition=__consumer_offsets-6, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,645] WARN [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-18/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-18/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,646] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,648] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,648] INFO [Log partition=__consumer_offsets-18, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,677] WARN [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/greeting-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/greeting-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,678] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,690] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,691] INFO [Log partition=greeting-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,696] WARN [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,707] INFO [ProducerStateManager partition=aironman-0] Writing producer snapshot at offset 76 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,708] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,716] INFO [ProducerStateManager partition=aironman-0] Writing producer snapshot at offset 76 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,718] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 76 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,719] INFO [ProducerStateManager partition=aironman-0] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/aironman-0/00000000000000000076.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,723] INFO [Log partition=aironman-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 76 in 28 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,729] WARN [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-38/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-38/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,745] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,747] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,747] INFO [Log partition=__consumer_offsets-38, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 18 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,750] WARN [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-39/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-39/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,756] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,775] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,776] INFO [Log partition=__consumer_offsets-39, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 27 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,786] WARN [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-12/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-12/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,787] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,788] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,793] INFO [Log partition=__consumer_offsets-12, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,817] WARN [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-30/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-30/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,818] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,845] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,855] INFO [Log partition=__consumer_offsets-30, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 60 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,858] WARN [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-14/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-14/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,862] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,864] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,865] INFO [Log partition=__consumer_offsets-14, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,875] WARN [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-43/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-43/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,876] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,879] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,880] INFO [Log partition=__consumer_offsets-43, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,893] WARN [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/filtered-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/filtered-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,906] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,908] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,910] INFO [Log partition=filtered-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 27 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,913] WARN [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-45/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-45/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,915] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,917] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,918] INFO [Log partition=__consumer_offsets-45, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,922] WARN [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,932] INFO [ProducerStateManager partition=__consumer_offsets-40] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,933] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,939] INFO [ProducerStateManager partition=__consumer_offsets-40] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,945] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,956] INFO [ProducerStateManager partition=__consumer_offsets-40] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-40/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:00,956] INFO [Log partition=__consumer_offsets-40, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 35 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:00,959] WARN [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-22/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-22/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:00,939 [main] INFO org.elasticsearch.plugins - [Neurotap] modules [], plugins [], sites []
- kafka_1 | [2018-05-14 16:36:00,995] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,000] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,009] INFO [Log partition=__consumer_offsets-22, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 50 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,014] WARN [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-48/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-48/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,017] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,018] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,020] INFO [Log partition=__consumer_offsets-48, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,034] WARN [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,048] INFO [ProducerStateManager partition=__consumer_offsets-24] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,050] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,053] INFO [ProducerStateManager partition=__consumer_offsets-24] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,054] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,056] INFO [ProducerStateManager partition=__consumer_offsets-24] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-24/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,057] INFO [Log partition=__consumer_offsets-24, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 24 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,060] WARN [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-16/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-16/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,061] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,063] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,065] INFO [Log partition=__consumer_offsets-16, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,068] WARN [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-8/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-8/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,070] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,071] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,072] INFO [Log partition=__consumer_offsets-8, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,080] WARN [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,095] INFO [ProducerStateManager partition=__consumer_offsets-10] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,096] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,100] INFO [ProducerStateManager partition=__consumer_offsets-10] Writing producer snapshot at offset 440 (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,101] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 440 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,101] INFO [ProducerStateManager partition=__consumer_offsets-10] Loading producer state from snapshot file '/kafka/kafka-logs-5409d6899724/__consumer_offsets-10/00000000000000000440.snapshot' (kafka.log.ProducerStateManager)
- kafka_1 | [2018-05-14 16:36:01,102] INFO [Log partition=__consumer_offsets-10, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 440 in 23 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,106] WARN [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-47/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-47/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,107] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,108] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,109] INFO [Log partition=__consumer_offsets-47, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,111] WARN [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-25/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-25/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,112] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,113] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,114] INFO [Log partition=__consumer_offsets-25, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,116] WARN [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-7/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-7/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,117] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,119] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,119] INFO [Log partition=__consumer_offsets-7, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,122] WARN [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-29/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-29/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,124] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,125] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,125] INFO [Log partition=__consumer_offsets-29, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,129] WARN [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-4/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-4/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,130] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,131] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,132] INFO [Log partition=__consumer_offsets-4, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,134] WARN [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-44/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-44/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,138] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,139] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,140] INFO [Log partition=__consumer_offsets-44, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,168] WARN [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-33/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-33/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,170] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,175] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,186] INFO [Log partition=__consumer_offsets-33, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 39 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,189] WARN [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-0/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-0/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,190] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,192] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,192] INFO [Log partition=__consumer_offsets-0, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,196] WARN [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Found a corrupted index file corresponding to log file /kafka/kafka-logs-5409d6899724/__consumer_offsets-11/00000000000000000000.log due to Corrupt index found, index file (/kafka/kafka-logs-5409d6899724/__consumer_offsets-11/00000000000000000000.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,197] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Recovering unflushed segment 0 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,205] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,215] INFO [Log partition=__consumer_offsets-11, dir=/kafka/kafka-logs-5409d6899724] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 21 ms (kafka.log.Log)
- kafka_1 | [2018-05-14 16:36:01,219] INFO Logs loading complete in 1739 ms. (kafka.log.LogManager)
- kafka_1 | [2018-05-14 16:36:01,288] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
- kafka_1 | [2018-05-14 16:36:01,291] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
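- "Logs loading complete in 1739 ms" closes the recovery pass; most of that time went into the index rebuilds above. The flusher period of 9223372036854775807 ms is simply Long.MAX_VALUE, i.e. Kafka's default of never fsyncing on a timer and leaving flushing to the OS page cache. If an explicit interval is wanted, images of this style usually map KAFKA_* environment variables onto server.properties keys (an assumption about this image's entrypoint), so KAFKA_LOG_FLUSH_INTERVAL_MS=10000 on the kafka service should become log.flush.interval.ms=10000. The effective value can be checked in the running container (the /opt/kafka path is likewise an assumption about the image layout):
- docker-compose exec kafka grep -i flush /opt/kafka/config/server.properties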
- elastic_1 | [2018-05-14 16:36:01,683][INFO ][node ] [Proteus] initialized
- elastic_1 | [2018-05-14 16:36:01,684][INFO ][node ] [Proteus] starting ...
- kafka_1 | [2018-05-14 16:36:01,908] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
- zookeeper_1 | 2018-05-14 16:36:02,000 [myid:] - INFO [SessionTracker:ZooKeeperServer@358] - Expiring session 0x1635f5dbd090000, timeout of 6000ms exceeded
- zookeeper_1 | 2018-05-14 16:36:02,001 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1635f5dbd090000
- kafka_1 | [2018-05-14 16:36:02,005] INFO [SocketServer brokerId=1] Started 1 acceptor threads (kafka.network.SocketServer)
- elastic_1 | [2018-05-14 16:36:02,044][INFO ][transport ] [Proteus] publish_address {172.21.0.5:9300}, bound_addresses {0.0.0.0:9300}
- elastic_1 | [2018-05-14 16:36:02,066][INFO ][discovery ] [Proteus] elasticsearch/S1NULhBpRDKOOewsabt2ZQ
- kafka_1 | [2018-05-14 16:36:02,125] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,155] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,156] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,254] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
- kafka_1 | [2018-05-14 16:36:02,405] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
- kafka_1 | [2018-05-14 16:36:02,410] INFO Result of znode creation at /brokers/ids/1 is: OK (kafka.zk.KafkaZkClient)
- kafka_1 | [2018-05-14 16:36:02,412] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
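- The broker is now registered in ZooKeeper under an ephemeral znode, and kafka:9092 is the endpoint that clients on the compose network will be handed. The registration can be read back directly (a sketch; the zkCli.sh path is an assumption about the ZooKeeper image layout):
- docker-compose exec zookeeper /opt/zookeeper-3.4.9/bin/zkCli.sh -server localhost:2181 get /brokers/ids/1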
- kafka_1 | [2018-05-14 16:36:02,534] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,549] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,550] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
- kafka_1 | [2018-05-14 16:36:02,580] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:02,581] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:02,577] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
- kafka_1 | [2018-05-14 16:36:02,589] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:02,606] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
- kafka_1 | [2018-05-14 16:36:02,634] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:3000,blockEndProducerId:3999) by writing to Zk with path version 4 (kafka.coordinator.transaction.ProducerIdManager)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:02,692 [main] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
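- This WARN is the root cause of the errors that follow: the demo's transport client does reach the node at 172.21.0.5:9300, but it expects a cluster named elasticsearch_aironman, while the container is running under a different cluster name, so the client drops the node from its list. The name the node actually uses is easy to confirm (again assuming port 9200 is published to the host):
- curl -s http://localhost:9200/   # the banner's "cluster_name" field; the stock image default is "elasticsearch"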
- kafka_1 | [2018-05-14 16:36:02,711] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
- kafka_1 | [2018-05-14 16:36:02,742] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
- kafka_1 | [2018-05-14 16:36:02,768] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
- kafka_1 | [2018-05-14 16:36:03,037] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:03+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"Unable to revive connection: http://elastic:9200/"}
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:03+00:00","tags":["warning","elasticsearch"],"pid":12,"message":"No living connections"}
- kafka_1 | [2018-05-14 16:36:03,192] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
- kafka_1 | [2018-05-14 16:36:03,192] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
- kafka_1 | [2018-05-14 16:36:03,197] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
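- With "[KafkaServer id=1] started" the broker is fully up. A quick sanity check with the CLI tools bundled in the image (the /opt/kafka path is an assumption; the topic names below all appear in this log):
- docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --list
- # expected: __consumer_offsets, aironman, filtered, greeting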
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:03,337 [main] ERROR o.s.d.e.r.s.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
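- The NoNodeAvailableException is the direct consequence of the cluster-name mismatch flagged above: the only discovered node was ignored, so the client has nobody left to talk to. One fix sketch is to start Elasticsearch under the name the client expects; on the 2.x series, settings can be passed as -Des.* flags (equivalent to cluster.name in elasticsearch.yml), for example via a command: override on the elastic service in docker-compose.yml. Shown standalone below; the elasticsearch:2.4.0 image tag is an assumption:
- docker run --rm elasticsearch:2.4.0 -Des.cluster.name=elasticsearch_aironman
- # alternatively, reconfigure the client to use the node's real cluster name instead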
- kafka_1 | [2018-05-14 16:36:03,405] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,greeting-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,filtered-0,__consumer_offsets-15,__consumer_offsets-24,aironman-0,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
- zookeeper_1 | 2018-05-14 16:36:03,426 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1635f8228c90000 type:delete cxid:0x6e zxid:0xef txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
- kafka_1 | [2018-05-14 16:36:03,452] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,516] INFO [Partition __consumer_offsets-0 broker=1] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,579] INFO Replica loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,579] INFO [Partition __consumer_offsets-29 broker=1] __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:03,583 [main] ERROR o.s.d.e.r.s.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
- kafka_1 | [2018-05-14 16:36:03,608] INFO Replica loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,609] INFO [Partition __consumer_offsets-48 broker=1] __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,638] INFO Replica loaded for partition __consumer_offsets-10 with initial high watermark 440 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,642] INFO [Partition __consumer_offsets-10 broker=1] __consumer_offsets-10 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
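- A high watermark of 440 on some __consumer_offsets partitions (10, 24, 40 and 42 in this log) means committed consumer-group offsets from earlier runs survived the restart; partitions starting from offset 0 are simply empty. The stored commits can be inspected with the console consumer plus the offsets formatter that ships with Kafka 1.1 (a sketch; paths and service names as assumed above, and some client versions may also need exclude.internal.topics=false):
- docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic __consumer_offsets --from-beginning --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"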
- kafka_1 | [2018-05-14 16:36:03,660] INFO Replica loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,666] INFO [Partition __consumer_offsets-45 broker=1] __consumer_offsets-45 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,685] INFO Replica loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,688] INFO [Partition __consumer_offsets-26 broker=1] __consumer_offsets-26 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,706] INFO Replica loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,712] INFO [Partition __consumer_offsets-7 broker=1] __consumer_offsets-7 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,773] INFO Replica loaded for partition __consumer_offsets-42 with initial high watermark 440 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,786] INFO [Partition __consumer_offsets-42 broker=1] __consumer_offsets-42 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- demo-quartz_1 | web - 2018-05-14 16:36:03,812 [main] INFO o.a.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8080"]
- kafka_1 | [2018-05-14 16:36:03,820] INFO Replica loaded for partition greeting-0 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,830] INFO [Partition greeting-0 broker=1] greeting-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,857] INFO Replica loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,860] INFO [Partition __consumer_offsets-4 broker=1] __consumer_offsets-4 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- demo-quartz_1 | web - 2018-05-14 16:36:03,867 [main] INFO o.a.catalina.core.StandardService - Starting service [Tomcat]
- demo-quartz_1 | web - 2018-05-14 16:36:03,867 [main] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/8.5.29
- kafka_1 | [2018-05-14 16:36:03,870] INFO Replica loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,871] INFO [Partition __consumer_offsets-23 broker=1] __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,876] INFO Replica loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,877] INFO [Partition __consumer_offsets-1 broker=1] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,887] INFO Replica loaded for partition filtered-0 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,887] INFO [Partition filtered-0 broker=1] filtered-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,896] INFO Replica loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,897] INFO [Partition __consumer_offsets-20 broker=1] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,909] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,909] INFO [Partition __consumer_offsets-39 broker=1] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,923] INFO Replica loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,923] INFO [Partition __consumer_offsets-17 broker=1] __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,938] INFO Replica loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,941] INFO [Partition __consumer_offsets-36 broker=1] __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:03,969] INFO Replica loaded for partition aironman-0 with initial high watermark 76 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:03,970] INFO [Partition aironman-0 broker=1] aironman-0 starts at Leader Epoch 0 from offset 76. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,008] INFO Replica loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,012] INFO [Partition __consumer_offsets-14 broker=1] __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,016] INFO Replica loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,017] INFO [Partition __consumer_offsets-33 broker=1] __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,026] INFO Replica loaded for partition __consumer_offsets-49 with initial high watermark 880 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,026] INFO [Partition __consumer_offsets-49 broker=1] __consumer_offsets-49 starts at Leader Epoch 0 from offset 880. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,034] INFO Replica loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,038] INFO [Partition __consumer_offsets-11 broker=1] __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,044] INFO Replica loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,044] INFO [Partition __consumer_offsets-30 broker=1] __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,049] INFO Replica loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,049] INFO [Partition __consumer_offsets-46 broker=1] __consumer_offsets-46 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,055] INFO Replica loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,063] INFO [Partition __consumer_offsets-27 broker=1] __consumer_offsets-27 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,068] INFO Replica loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,068] INFO [Partition __consumer_offsets-8 broker=1] __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,074] INFO Replica loaded for partition __consumer_offsets-24 with initial high watermark 440 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,074] INFO [Partition __consumer_offsets-24 broker=1] __consumer_offsets-24 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,077] INFO Replica loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,078] INFO [Partition __consumer_offsets-43 broker=1] __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,086] INFO Replica loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,087] INFO [Partition __consumer_offsets-5 broker=1] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,093] INFO Replica loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,093] INFO [Partition __consumer_offsets-21 broker=1] __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,104] INFO Replica loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,105] INFO [Partition __consumer_offsets-2 broker=1] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,108] INFO Replica loaded for partition __consumer_offsets-40 with initial high watermark 440 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,109] INFO [Partition __consumer_offsets-40 broker=1] __consumer_offsets-40 starts at Leader Epoch 0 from offset 440. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,118] INFO Replica loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,118] INFO [Partition __consumer_offsets-37 broker=1] __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,125] INFO Replica loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,125] INFO [Partition __consumer_offsets-18 broker=1] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,132] INFO Replica loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,132] INFO [Partition __consumer_offsets-34 broker=1] __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,146] INFO Replica loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,147] INFO [Partition __consumer_offsets-15 broker=1] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,159] INFO Replica loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,163] INFO [Partition __consumer_offsets-12 broker=1] __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,169] INFO Replica loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,170] INFO [Partition __consumer_offsets-31 broker=1] __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,179] INFO Replica loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,179] INFO [Partition __consumer_offsets-9 broker=1] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,199] INFO Replica loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,200] INFO [Partition __consumer_offsets-47 broker=1] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,204] INFO Replica loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,204] INFO [Partition __consumer_offsets-19 broker=1] __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,218] INFO Replica loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,219] INFO [Partition __consumer_offsets-28 broker=1] __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,222] INFO Replica loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,224] INFO [Partition __consumer_offsets-38 broker=1] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,236] INFO Replica loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,236] INFO [Partition __consumer_offsets-35 broker=1] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- demo-quartz_1 | web - 2018-05-14 16:36:04,225 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
- kafka_1 | [2018-05-14 16:36:04,242] INFO Replica loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,243] INFO [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,250] INFO Replica loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,251] INFO [Partition __consumer_offsets-44 broker=1] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,254] INFO Replica loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,254] INFO [Partition __consumer_offsets-25 broker=1] __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,259] INFO Replica loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,260] INFO [Partition __consumer_offsets-16 broker=1] __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,272] INFO Replica loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,272] INFO [Partition __consumer_offsets-22 broker=1] __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,276] INFO Replica loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,276] INFO [Partition __consumer_offsets-41 broker=1] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,279] INFO Replica loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,279] INFO [Partition __consumer_offsets-32 broker=1] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,282] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,283] INFO [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,286] INFO Replica loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Replica)
- kafka_1 | [2018-05-14 16:36:04,287] INFO [Partition __consumer_offsets-13 broker=1] __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
- kafka_1 | [2018-05-14 16:36:04,314] INFO [ReplicaAlterLogDirsManager on broker 1] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
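- Note: each "Replica loaded ... high watermark N" / "starts at Leader Epoch 0 from offset N" pair above is this single broker resuming leadership of one partition after restart; the high watermark is the last committed offset recovered from disk (76 on aironman-0, 440/880 on a few __consumer_offsets partitions, 0 everywhere else). One way to cross-check such a number from a client, sketched under the assumption of a consumer configured as in the dumps further below (endOffsets exists from kafka-clients 0.10.1 on, and with a single replica the end offset equals the high watermark):
-
-   import java.util.Collections;
-   import java.util.Map;
-   import org.apache.kafka.clients.consumer.KafkaConsumer;
-   import org.apache.kafka.common.TopicPartition;
-
-   public class EndOffsetCheck {
-       // consumer: a KafkaConsumer<String, String> pointed at kafka:9092
-       static long endOffsetOfAironman0(KafkaConsumer<String, String> consumer) {
-           TopicPartition tp = new TopicPartition("aironman", 0);
-           Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singletonList(tp));
-           return end.get(tp); // expected: 76, matching the high watermark logged above
-       }
-   }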
- kafka_1 | [2018-05-14 16:36:04,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,348] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,349] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,350] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,351] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,352] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,353] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,354] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,354] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,355] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,403] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 53 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,493] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,505] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 11 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,506] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,507] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,508] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,639] INFO [GroupCoordinator 1]: Loading group metadata for filter with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,666] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 158 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,667] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,667] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,712 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = foo
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
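- Note: the block above is kafka-clients logging the effective ConsumerConfig as demo-kafka-elastic builds its first consumer: group foo on kafka:9092, String deserializers, auto-commit every 5 s, everything else at 0.10.1 defaults. The near-duplicate dump just below (client.id = consumer-1) is the same consumer logged a second time after the library auto-generates a client.id; the later consumer-2/consumer-3 pairs follow the same pattern. An equivalent stand-alone consumer, as a sketch (the subscribed topic is a guess; the log only shows which topics exist, not which one group foo reads):
-
-   import java.util.Collections;
-   import java.util.Properties;
-   import org.apache.kafka.clients.consumer.ConsumerRecord;
-   import org.apache.kafka.clients.consumer.ConsumerRecords;
-   import org.apache.kafka.clients.consumer.KafkaConsumer;
-
-   public class FooConsumer {
-       public static void main(String[] args) {
-           Properties props = new Properties();
-           props.put("bootstrap.servers", "kafka:9092");   // the compose service name
-           props.put("group.id", "foo");
-           props.put("enable.auto.commit", "true");
-           props.put("auto.commit.interval.ms", "5000");
-           props.put("auto.offset.reset", "latest");
-           props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
-           props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
-           try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
-               consumer.subscribe(Collections.singletonList("aironman")); // hypothetical topic choice
-               ConsumerRecords<String, String> records = consumer.poll(1000);
-               for (ConsumerRecord<String, String> r : records) {
-                   System.out.printf("%s-%d@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value());
-               }
-           }
-       }
-   }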
- kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupCoordinator 1]: Loading group metadata for greeting with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupCoordinator 1]: Loading group metadata for bar with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 59 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,739 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-1
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = foo
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupCoordinator 1]: Loading group metadata for headers with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 19 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,751] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,752] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,753] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,753] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,754] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,757] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 3 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,758] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,758] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,759] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,759] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,760] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,760] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,761] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,763] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,764] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,765] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,784] INFO [GroupCoordinator 1]: Loading group metadata for foo with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 20 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,789] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,799] INFO [GroupCoordinator 1]: Loading group metadata for bitcoin with generation 4 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:04,799] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,800] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,800] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
- kafka_1 | [2018-05-14 16:36:04,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
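- Note: the interleaving above is deterministic, not random. A group's offsets live in partition abs(group.id.hashCode()) % 50 of the 50-partition __consumer_offsets topic, so each "Loading group metadata for X" fires exactly while that partition is read. The log bears this out: foo lands on partition 24, greeting and bar on 49, headers on 10, bitcoin on 42, filter on 40. The arithmetic, as a quick check:
-
-   public class OffsetsPartition {
-       // abs(hashCode) % offsets.topic.num.partitions (default 50); Kafka guards
-       // Integer.MIN_VALUE, which plain Math.abs would leave negative.
-       static int partitionFor(String groupId) {
-           int h = groupId.hashCode();
-           return (h == Integer.MIN_VALUE ? 0 : Math.abs(h)) % 50;
-       }
-
-       public static void main(String[] args) {
-           System.out.println(partitionFor("foo"));      // 24, matching the log
-           System.out.println(partitionFor("greeting")); // 49
-           System.out.println(partitionFor("filter"));   // 40
-       }
-   }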
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,863 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,863 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,884 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [localhost:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id =
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,902 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [localhost:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-2
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id =
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
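- Note: unlike the first consumer, the two dumps above show bootstrap.servers = [localhost:9092] with an empty group.id. Inside the demo-kafka-elastic container, localhost is the container itself, so these consumers can never reach the broker; it looks like a default that was not overridden with the compose service name (the working consumers use kafka:9092). A hedged fix, reading the broker list from the environment so the same jar runs inside and outside Docker:
-
-   import java.util.Properties;
-
-   public class BootstrapFix {
-       static Properties baseProps() {
-           Properties props = new Properties();
-           // KAFKA_BOOTSTRAP_SERVERS is a hypothetical variable to set in docker-compose;
-           // the localhost fallback keeps plain local (non-Docker) runs working.
-           String servers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");
-           props.put("bootstrap.servers", servers);
-           return props;
-       }
-   }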
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,909 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,910 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,949 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = filter
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,950 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-3
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = filter
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,977 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,977 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
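A note on the wall of ConsumerConfig dumps: one is printed for every @KafkaListener container in demo-kafka-elastic, and each consumer group shows up twice, first with the empty client.id taken from the application properties and then again once kafka-clients 0.10.1.1 assigns an automatic consumer-N id, so the repetition that follows is expected chattiness rather than extra consumers. A minimal sketch of the kind of Spring Kafka configuration that would produce these values (bootstrap server kafka:9092, 5 s auto-commit, latest offset reset); the bean and class names are assumptions, not the project's actual source:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.EnableKafka;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    @EnableKafka
    @Configuration
    public class KafkaConsumerConfig {

        // Matches the dumped values: kafka:9092, auto-commit every 5 s, reset to latest.
        @Bean
        public ConsumerFactory<String, String> stringConsumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "filter"); // one factory per group in the log
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5000);
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(stringConsumerFactory());
            return factory;
        }
    }

The greeting and bitcoin groups below use org.springframework.kafka.support.serializer.JsonDeserializer for values instead of StringDeserializer, which is how their listeners later receive BitcoinEuroKafkaEntity objects rather than raw strings.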
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,978 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = greeting
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:04,980 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-4
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = greeting
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,044 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,044 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,048 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = bitcoin
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,061 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-5
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = bitcoin
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,069 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,069 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,070 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = bar
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,070 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-6
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = bar
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,077 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,078 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,080 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id =
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = headers
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,081 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
- demo-kafka-elastic_1 | auto.commit.interval.ms = 5000
- demo-kafka-elastic_1 | auto.offset.reset = latest
- demo-kafka-elastic_1 | bootstrap.servers = [kafka:9092]
- demo-kafka-elastic_1 | check.crcs = true
- demo-kafka-elastic_1 | client.id = consumer-7
- demo-kafka-elastic_1 | connections.max.idle.ms = 540000
- demo-kafka-elastic_1 | enable.auto.commit = true
- demo-kafka-elastic_1 | exclude.internal.topics = true
- demo-kafka-elastic_1 | fetch.max.bytes = 52428800
- demo-kafka-elastic_1 | fetch.max.wait.ms = 500
- demo-kafka-elastic_1 | fetch.min.bytes = 1
- demo-kafka-elastic_1 | group.id = headers
- demo-kafka-elastic_1 | heartbeat.interval.ms = 3000
- demo-kafka-elastic_1 | interceptor.classes = null
- demo-kafka-elastic_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 | max.partition.fetch.bytes = 1048576
- demo-kafka-elastic_1 | max.poll.interval.ms = 300000
- demo-kafka-elastic_1 | max.poll.records = 500
- demo-kafka-elastic_1 | metadata.max.age.ms = 300000
- demo-kafka-elastic_1 | metric.reporters = []
- demo-kafka-elastic_1 | metrics.num.samples = 2
- demo-kafka-elastic_1 | metrics.sample.window.ms = 30000
- demo-kafka-elastic_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- demo-kafka-elastic_1 | receive.buffer.bytes = 65536
- demo-kafka-elastic_1 | reconnect.backoff.ms = 50
- demo-kafka-elastic_1 | request.timeout.ms = 305000
- demo-kafka-elastic_1 | retry.backoff.ms = 100
- demo-kafka-elastic_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-kafka-elastic_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-kafka-elastic_1 | sasl.kerberos.service.name = null
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-kafka-elastic_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-kafka-elastic_1 | sasl.mechanism = GSSAPI
- demo-kafka-elastic_1 | security.protocol = PLAINTEXT
- demo-kafka-elastic_1 | send.buffer.bytes = 131072
- demo-kafka-elastic_1 | session.timeout.ms = 10000
- demo-kafka-elastic_1 | ssl.cipher.suites = null
- demo-kafka-elastic_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-kafka-elastic_1 | ssl.endpoint.identification.algorithm = null
- demo-kafka-elastic_1 | ssl.key.password = null
- demo-kafka-elastic_1 | ssl.keymanager.algorithm = SunX509
- demo-kafka-elastic_1 | ssl.keystore.location = null
- demo-kafka-elastic_1 | ssl.keystore.password = null
- demo-kafka-elastic_1 | ssl.keystore.type = JKS
- demo-kafka-elastic_1 | ssl.protocol = TLS
- demo-kafka-elastic_1 | ssl.provider = null
- demo-kafka-elastic_1 | ssl.secure.random.implementation = null
- demo-kafka-elastic_1 | ssl.trustmanager.algorithm = PKIX
- demo-kafka-elastic_1 | ssl.truststore.location = null
- demo-kafka-elastic_1 | ssl.truststore.password = null
- demo-kafka-elastic_1 | ssl.truststore.type = JKS
- demo-kafka-elastic_1 | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
- demo-kafka-elastic_1 |
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,087 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,087 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,114 [main] INFO c.a.demo.DemoKafkaElasticApplication - Started DemoKafkaElasticApplication in 8.854 seconds (JVM running for 10.934)
- elastic_1 | [2018-05-14 16:36:05,189][INFO ][cluster.service ] [Proteus] new_master {Proteus}{S1NULhBpRDKOOewsabt2ZQ}{172.21.0.5}{172.21.0.5:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,276 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group foo.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group filter.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group headers.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,274 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group bitcoin.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,273 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group greeting.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,300 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group headers
- elastic_1 | [2018-05-14 16:36:05,310][INFO ][http ] [Proteus] publish_address {172.21.0.5:9200}, bound_addresses {0.0.0.0:9200}
- elastic_1 | [2018-05-14 16:36:05,310][INFO ][node ] [Proteus] started
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,322 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group headers
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,323 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group greeting
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,323 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group greeting
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,272 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group bar.
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,295 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group foo
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,324 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group foo
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,305 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group filter
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,325 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group bar
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,325 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group filter
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group bitcoin
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group bitcoin
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:05,326 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group bar
- kafka_1 | [2018-05-14 16:36:05,356] INFO [GroupCoordinator 1]: Preparing to rebalance group foo with old generation 4 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:05,356] INFO [GroupCoordinator 1]: Preparing to rebalance group headers with old generation 4 (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:05,357] INFO [GroupCoordinator 1]: Preparing to rebalance group filter with old generation 4 (__consumer_offsets-40) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:05,358] INFO [GroupCoordinator 1]: Preparing to rebalance group bar with old generation 4 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:05,359] INFO [GroupCoordinator 1]: Preparing to rebalance group greeting with old generation 4 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:05,360] INFO [GroupCoordinator 1]: Preparing to rebalance group bitcoin with old generation 4 (__consumer_offsets-42) (kafka.coordinator.group.GroupCoordinator)
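The block above is the normal consumer-group handshake: each listener container discovers the group coordinator, revokes its (empty) previous assignment, re-joins, and the broker bumps the group generation, here from 4 to 5 because the groups evidently survive from an earlier run of the stack. Applications that need to react to this revoke/assign cycle can register a ConsumerRebalanceListener; a hypothetical sketch, not part of this demo, that would log the same transitions the ConsumerCoordinator lines show:

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical example: mirror the revoke/assign messages that
    // ConsumerCoordinator logs above.
    public class LoggingRebalanceListener implements ConsumerRebalanceListener {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            System.out.println("Revoking previously assigned partitions " + partitions); // [] on a fresh join
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            System.out.println("Setting newly assigned partitions " + partitions); // e.g. [greeting-0]
        }
    }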
- elastic_1 | [2018-05-14 16:36:05,435][INFO ][gateway ] [Proteus] recovered [2] indices into cluster_state
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:06+00:00","tags":["status","plugin:elasticsearch","error"],"pid":12,"name":"plugin:elasticsearch","state":"red","message":"Status changed from red to red - Elasticsearch is still initializing the kibana index.","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elastic:9200."}
- elastic_1 | [2018-05-14 16:36:06,328][INFO ][cluster.routing.allocation] [Proteus] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
- demo-quartz_1 | web - 2018-05-14 16:36:06,394 [main] INFO o.h.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [
- demo-quartz_1 | name: default
- demo-quartz_1 | ...]
- demo-quartz_1 | web - 2018-05-14 16:36:06,572 [main] INFO org.hibernate.Version - HHH000412: Hibernate Core {5.0.12.Final}
- demo-quartz_1 | web - 2018-05-14 16:36:06,576 [main] INFO org.hibernate.cfg.Environment - HHH000206: hibernate.properties not found
- demo-quartz_1 | web - 2018-05-14 16:36:06,580 [main] INFO org.hibernate.cfg.Environment - HHH000021: Bytecode provider name : javassist
- demo-quartz_1 | web - 2018-05-14 16:36:06,699 [main] INFO o.h.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
- demo-quartz_1 | web - 2018-05-14 16:36:07,218 [main] INFO org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:07,346 [elasticsearch[Neurotap][generic][T#1]] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
- demo-quartz_1 | web - 2018-05-14 16:36:08,035 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000227: Running hbm2ddl schema export
- demo-quartz_1 | web - 2018-05-14 16:36:08,059 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000230: Schema export complete
- demo-quartz_1 | web - 2018-05-14 16:36:08,158 [main] INFO c.a.d.s.SpringQrtzScheduler$$EnhancerBySpringCGLIB$$65e94d39 - Hello world from Spring...
- kibana_1 | {"type":"log","@timestamp":"2018-05-14T16:36:08+00:00","tags":["status","plugin:elasticsearch","info"],"pid":12,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Elasticsearch is still initializing the kibana index."}
- demo-quartz_1 | web - 2018-05-14 16:36:08,590 [main] INFO c.a.d.s.SpringQrtzScheduler$$EnhancerBySpringCGLIB$$65e94d39 - Configuring trigger to fire every 10 seconds
- demo-quartz_1 | web - 2018-05-14 16:36:08,658 [main] INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
- demo-quartz_1 | web - 2018-05-14 16:36:08,664 [main] INFO org.quartz.simpl.SimpleThreadPool - Job execution threads will use class loader of thread: main
- demo-quartz_1 | web - 2018-05-14 16:36:08,682 [main] INFO o.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
- demo-quartz_1 | web - 2018-05-14 16:36:08,682 [main] INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.2.3 created.
- demo-quartz_1 | web - 2018-05-14 16:36:08,684 [main] INFO org.quartz.simpl.RAMJobStore - RAMJobStore initialized.
- demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.2.3) 'scheduler' with instanceId 'NON_CLUSTERED'
- demo-quartz_1 | Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
- demo-quartz_1 | NOT STARTED.
- demo-quartz_1 | Currently in standby mode.
- demo-quartz_1 | Number of jobs executed: 0
- demo-quartz_1 | Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 2 threads.
- demo-quartz_1 | Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
- demo-quartz_1 |
- demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'scheduler' initialized from an externally provided properties instance.
- demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.2.3
- demo-quartz_1 | web - 2018-05-14 16:36:08,685 [main] INFO org.quartz.core.QuartzScheduler - JobFactory set to: com.aironman.demoquartz.config.AutoWiringSpringBeanJobFactory@12f9af83
- demo-quartz_1 | web - 2018-05-14 16:36:10,195 [main] INFO org.quartz.core.QuartzScheduler - Scheduler scheduler_$_NON_CLUSTERED started.
- demo-quartz_1 | web - 2018-05-14 16:36:10,212 [scheduler_Worker-1] INFO c.a.demoquartz.scheduler.SampleJob - Job ** Qrtz_Job_Detail ** fired @ Mon May 14 16:36:10 UTC 2018
- demo-quartz_1 | web - 2018-05-14 16:36:10,212 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - The sample job has begun...
- demo-quartz_1 | web - 2018-05-14 16:36:10,244 [main] INFO o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8080"]
- demo-quartz_1 | web - 2018-05-14 16:36:10,288 [main] INFO o.a.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
- demo-quartz_1 | web - 2018-05-14 16:36:10,364 [main] INFO c.a.demoquartz.DemoQuartzApplication - Started DemoQuartzApplication in 14.59 seconds (JVM running for 17.334)
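Per the scheduler banner above, demo-quartz runs Quartz 2.2.3 entirely in memory (RAMJobStore, two worker threads, no clustering), and the trigger configured at 16:36:08 fires every 10 seconds, which matches the first SampleJob execution at 16:36:10. A sketch of how such a trigger is typically built with the Quartz 2.x fluent API, reusing the job identity visible in the log; the surrounding wiring is an assumption:

    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.SimpleScheduleBuilder;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;

    // Sketch: schedule SampleJob (seen firing above as "Qrtz_Job_Detail")
    // every 10 seconds, forever.
    void schedule(Scheduler scheduler) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(SampleJob.class)
                .withIdentity("Qrtz_Job_Detail")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .forJob(job)
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(10)
                        .repeatForever())
                .build();
        scheduler.scheduleJob(job, trigger);
    }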
- demo-quartz_1 | web - 2018-05-14 16:36:11,578 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - com.aironman.demoquartz.pojo.BitcoinEuro@480d1ac2[id=bitcoin,
- demo-quartz_1 | ,name=Bitcoin,
- demo-quartz_1 | ,symbol=BTC,
- demo-quartz_1 | ,rank=1,
- demo-quartz_1 | ,priceUsd=8825.43,
- demo-quartz_1 | ,priceBtc=1.0,_24hVolumeUsd=7445850000.0,
- demo-quartz_1 | ,marketCapUsd=150324211097,
- demo-quartz_1 | ,availableSupply=17033075.0,
- demo-quartz_1 | ,totalSupply=17033075.0,
- demo-quartz_1 | ,maxSupply=21000000.0,
- demo-quartz_1 | ,percentChange1h=0.57,
- demo-quartz_1 | ,percentChange24h=1.79,
- demo-quartz_1 | ,percentChange7d=-5.81,
- demo-quartz_1 | ,lastUpdated=1526315671,
- demo-quartz_1 | ,priceEur=7365.66857628,
- demo-quartz_1 | ,_24hVolumeEur=6214276626.6,
- demo-quartz_1 | ,marketCapEur=125459985285,
- demo-quartz_1 | ,additionalProperties={}]
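The field names in this dump (priceUsd, _24hVolumeUsd, percentChange7d, an additionalProperties map) line up with the JSON keys of a coinmarketcap-style ticker endpoint, so the job most likely polls such an API over HTTP and binds the response with Jackson. A speculative reconstruction of the binding, not taken from the project's source:

    import java.util.HashMap;
    import java.util.Map;
    import com.fasterxml.jackson.annotation.JsonAnySetter;
    import com.fasterxml.jackson.annotation.JsonProperty;

    // Speculative POJO matching the toString() above; only a few fields shown.
    public class BitcoinEuro {
        @JsonProperty("id")             private String id;
        @JsonProperty("name")           private String name;
        @JsonProperty("symbol")         private String symbol;
        @JsonProperty("price_usd")      private Double priceUsd;
        @JsonProperty("24h_volume_usd") private Double _24hVolumeUsd;
        @JsonProperty("price_eur")      private Double priceEur;
        // ...remaining fields omitted...

        // Catch-all for keys not mapped above; would explain additionalProperties={}.
        private Map<String, Object> additionalProperties = new HashMap<>();

        @JsonAnySetter
        public void setAdditionalProperty(String key, Object value) {
            additionalProperties.put(key, value);
        }
    }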
- demo-quartz_1 | web - 2018-05-14 16:36:11,728 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - created entity...
- demo-quartz_1 | web - 2018-05-14 16:36:11,729 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - BitcoinEuroEntity [idBCEntity=1, id=bitcoin, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285]
- demo-quartz_1 | web - 2018-05-14 16:36:11,760 [scheduler_Worker-1] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values:
- demo-quartz_1 | acks = 1
- demo-quartz_1 | batch.size = 16384
- demo-quartz_1 | block.on.buffer.full = false
- demo-quartz_1 | bootstrap.servers = [kafka:9092]
- demo-quartz_1 | buffer.memory = 33554432
- demo-quartz_1 | client.id =
- demo-quartz_1 | compression.type = none
- demo-quartz_1 | connections.max.idle.ms = 540000
- demo-quartz_1 | interceptor.classes = null
- demo-quartz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- demo-quartz_1 | linger.ms = 0
- demo-quartz_1 | max.block.ms = 60000
- demo-quartz_1 | max.in.flight.requests.per.connection = 5
- demo-quartz_1 | max.request.size = 1048576
- demo-quartz_1 | metadata.fetch.timeout.ms = 60000
- demo-quartz_1 | metadata.max.age.ms = 300000
- demo-quartz_1 | metric.reporters = []
- demo-quartz_1 | metrics.num.samples = 2
- demo-quartz_1 | metrics.sample.window.ms = 30000
- demo-quartz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- demo-quartz_1 | receive.buffer.bytes = 32768
- demo-quartz_1 | reconnect.backoff.ms = 50
- demo-quartz_1 | request.timeout.ms = 30000
- demo-quartz_1 | retries = 0
- demo-quartz_1 | retry.backoff.ms = 100
- demo-quartz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-quartz_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-quartz_1 | sasl.kerberos.service.name = null
- demo-quartz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-quartz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-quartz_1 | sasl.mechanism = GSSAPI
- demo-quartz_1 | security.protocol = PLAINTEXT
- demo-quartz_1 | send.buffer.bytes = 131072
- demo-quartz_1 | ssl.cipher.suites = null
- demo-quartz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-quartz_1 | ssl.endpoint.identification.algorithm = null
- demo-quartz_1 | ssl.key.password = null
- demo-quartz_1 | ssl.keymanager.algorithm = SunX509
- demo-quartz_1 | ssl.keystore.location = null
- demo-quartz_1 | ssl.keystore.password = null
- demo-quartz_1 | ssl.keystore.type = JKS
- demo-quartz_1 | ssl.protocol = TLS
- demo-quartz_1 | ssl.provider = null
- demo-quartz_1 | ssl.secure.random.implementation = null
- demo-quartz_1 | ssl.trustmanager.algorithm = PKIX
- demo-quartz_1 | ssl.truststore.location = null
- demo-quartz_1 | ssl.truststore.password = null
- demo-quartz_1 | ssl.truststore.type = JKS
- demo-quartz_1 | timeout.ms = 30000
- demo-quartz_1 | value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
- demo-quartz_1 |
- demo-quartz_1 | web - 2018-05-14 16:36:11,773 [scheduler_Worker-1] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values:
- demo-quartz_1 | acks = 1
- demo-quartz_1 | batch.size = 16384
- demo-quartz_1 | block.on.buffer.full = false
- demo-quartz_1 | bootstrap.servers = [kafka:9092]
- demo-quartz_1 | buffer.memory = 33554432
- demo-quartz_1 | client.id = producer-1
- demo-quartz_1 | compression.type = none
- demo-quartz_1 | connections.max.idle.ms = 540000
- demo-quartz_1 | interceptor.classes = null
- demo-quartz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- demo-quartz_1 | linger.ms = 0
- demo-quartz_1 | max.block.ms = 60000
- demo-quartz_1 | max.in.flight.requests.per.connection = 5
- demo-quartz_1 | max.request.size = 1048576
- demo-quartz_1 | metadata.fetch.timeout.ms = 60000
- demo-quartz_1 | metadata.max.age.ms = 300000
- demo-quartz_1 | metric.reporters = []
- demo-quartz_1 | metrics.num.samples = 2
- demo-quartz_1 | metrics.sample.window.ms = 30000
- demo-quartz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- demo-quartz_1 | receive.buffer.bytes = 32768
- demo-quartz_1 | reconnect.backoff.ms = 50
- demo-quartz_1 | request.timeout.ms = 30000
- demo-quartz_1 | retries = 0
- demo-quartz_1 | retry.backoff.ms = 100
- demo-quartz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
- demo-quartz_1 | sasl.kerberos.min.time.before.relogin = 60000
- demo-quartz_1 | sasl.kerberos.service.name = null
- demo-quartz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
- demo-quartz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
- demo-quartz_1 | sasl.mechanism = GSSAPI
- demo-quartz_1 | security.protocol = PLAINTEXT
- demo-quartz_1 | send.buffer.bytes = 131072
- demo-quartz_1 | ssl.cipher.suites = null
- demo-quartz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- demo-quartz_1 | ssl.endpoint.identification.algorithm = null
- demo-quartz_1 | ssl.key.password = null
- demo-quartz_1 | ssl.keymanager.algorithm = SunX509
- demo-quartz_1 | ssl.keystore.location = null
- demo-quartz_1 | ssl.keystore.password = null
- demo-quartz_1 | ssl.keystore.type = JKS
- demo-quartz_1 | ssl.protocol = TLS
- demo-quartz_1 | ssl.provider = null
- demo-quartz_1 | ssl.secure.random.implementation = null
- demo-quartz_1 | ssl.trustmanager.algorithm = PKIX
- demo-quartz_1 | ssl.truststore.location = null
- demo-quartz_1 | ssl.truststore.password = null
- demo-quartz_1 | ssl.truststore.type = JKS
- demo-quartz_1 | timeout.ms = 30000
- demo-quartz_1 | value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
- demo-quartz_1 |
- demo-quartz_1 | web - 2018-05-14 16:36:11,833 [scheduler_Worker-1] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
- demo-quartz_1 | web - 2018-05-14 16:36:11,833 [scheduler_Worker-1] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
- demo-quartz_1 | web - 2018-05-14 16:36:12,044 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - entity sent to topic...
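Like the consumer dumps, the ProducerConfig block is printed twice, once with the empty client.id from the application properties and once with the auto-assigned producer-1. The values (acks=1, retries=0, StringSerializer keys, Spring's JsonSerializer for values) point to a KafkaTemplate wired roughly as follows; the topic name is inferred from the partition assignment logged below, and the rest is a hedged sketch rather than the demo's actual code:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.serializer.JsonSerializer;

    // Sketch matching the dumped producer values.
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(ProducerConfig.ACKS_CONFIG, "1");
    props.put(ProducerConfig.RETRIES_CONFIG, 0);
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);

    KafkaTemplate<String, BitcoinEuroEntity> template =
            new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    template.send("aironman", entity); // the "entity sent to topic..." step above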
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:12,349 [elasticsearch[Neurotap][generic][T#1]] WARN org.elasticsearch.client.transport - [Neurotap] node {#transport#-1}{elastic}{172.21.0.5:9300} not part of the cluster Cluster [elasticsearch_aironman], ignoring...
- kafka_1 | [2018-05-14 16:36:14,672] INFO [GroupCoordinator 1]: Member consumer-4-a0abbbf3-8265-4017-8b72-a38d3c90ee12 in group filter has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,682] INFO [GroupCoordinator 1]: Stabilized group filter generation 5 (__consumer_offsets-40) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,698] INFO [GroupCoordinator 1]: Assignment received from leader for group filter for generation 5 (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,733 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group filter with generation 5
- kafka_1 | [2018-05-14 16:36:14,732] INFO [GroupCoordinator 1]: Member consumer-5-a6e23486-408c-4a17-9579-f69055530183 in group greeting has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,734 [org.springframework.kafka.KafkaListenerEndpointContainer#3-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [filtered-0] for group filter
- kafka_1 | [2018-05-14 16:36:14,739] INFO [GroupCoordinator 1]: Stabilized group greeting generation 5 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,740] INFO [GroupCoordinator 1]: Member consumer-7-95270c33-64a6-40e3-97db-93d1e7ae1c0a in group bar has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,741] INFO [GroupCoordinator 1]: Stabilized group bar generation 5 (__consumer_offsets-49) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,743] INFO [GroupCoordinator 1]: Assignment received from leader for group greeting for generation 5 (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,744] INFO [GroupCoordinator 1]: Assignment received from leader for group bar for generation 5 (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,748 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group bar with generation 5
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,748 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group greeting with generation 5
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,753 [org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group greeting
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,754 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group bar
- kafka_1 | [2018-05-14 16:36:14,761] INFO [GroupCoordinator 1]: Member consumer-2-b95ed0df-0d24-4ff7-b97d-f31050a1553b in group headers has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,766] INFO [GroupCoordinator 1]: Stabilized group headers generation 5 (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,768] INFO [GroupCoordinator 1]: Assignment received from leader for group headers for generation 5 (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,779 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group headers with generation 5
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,780 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group headers
- kafka_1 | [2018-05-14 16:36:14,788] INFO [GroupCoordinator 1]: Member consumer-6-a9013917-e32b-4c6b-88a6-61f7642f2364 in group foo has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,790] INFO [GroupCoordinator 1]: Stabilized group foo generation 5 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,792] INFO [GroupCoordinator 1]: Assignment received from leader for group foo for generation 5 (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,799 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group foo with generation 5
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,799 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [greeting-0] for group foo
- kafka_1 | [2018-05-14 16:36:14,804] INFO [GroupCoordinator 1]: Member consumer-1-2ca463f5-1e8f-40d9-bc0d-f1ad621d219f in group bitcoin has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,818] INFO [GroupCoordinator 1]: Stabilized group bitcoin generation 5 (__consumer_offsets-42) (kafka.coordinator.group.GroupCoordinator)
- kafka_1 | [2018-05-14 16:36:14,820] INFO [GroupCoordinator 1]: Assignment received from leader for group bitcoin for generation 5 (kafka.coordinator.group.GroupCoordinator)
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,826 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group bitcoin with generation 5
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:14,833 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [aironman-0] for group bitcoin
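The assignments above spell out the topic layout: the groups foo, bar, headers and greeting each independently consume topic greeting (every group receives its own copy of partition greeting-0), while group bitcoin alone consumes topic aironman, fed by the demo-quartz producer. Each group corresponds to one @KafkaListener method; a hypothetical sketch of the bitcoin one, whose real signature appears in the stack trace below:

    // Hypothetical form; MessageListener.bitCoinListener(BitcoinEuroKafkaEntity)
    // is named in the stack trace that follows. The consumer group "bitcoin"
    // would come from the referenced container factory's group.id.
    @KafkaListener(topics = "aironman", containerFactory = "bitcoinKafkaListenerContainerFactory")
    public void bitCoinListener(BitcoinEuroKafkaEntity entity) {
        bitCoinESService.save(entity); // the Elasticsearch save that fails below
    }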
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,108 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO c.a.demo.kafka.MessageListener - kafka message: BitcoinEuroKafkaEntity [id=76, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8810.54, priceBtc=1.0, _24hVolumeUsd=7407300000.0, marketCapUsd=150070474073, availableSupply=17033062.0, totalSupply=17033062.0, maxSupply=21000000.0, percentChange1h=0.32, percentChange24h=1.6, percentChange7d=-5.94, lastUpdated=1526313873, priceEur=7353.24144184, _24hVolumeEur=6182102950.8, marketCapEur=125248217380]
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,212 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: ConsumerRecord(topic = aironman, partition = 0, offset = 75, CreateTime = 1526314130780, checksum = 1940245780, serialized key size = -1, serialized value size = 427, key = null, value = BitcoinEuroKafkaEntity [id=76, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8810.54, priceBtc=1.0, _24hVolumeUsd=7407300000.0, marketCapUsd=150070474073, availableSupply=17033062.0, totalSupply=17033062.0, maxSupply=21000000.0, percentChange1h=0.32, percentChange24h=1.6, percentChange7d=-5.94, lastUpdated=1526313873, priceEur=7353.24144184, _24hVolumeEur=6182102950.8, marketCapEur=125248217380])
- demo-kafka-elastic_1 | org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.aironman.demo.kafka.MessageListener.bitCoinListener(com.aironman.demo.kafka.BitcoinEuroKafkaEntity)' threw exception; nested exception is NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]]
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:188)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:72)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:47)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:568)
- demo-kafka-elastic_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
- demo-kafka-elastic_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
- demo-kafka-elastic_1 | at java.lang.Thread.run(Thread.java:748)
- demo-kafka-elastic_1 | Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]
- demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:326)
- demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:223)
- demo-kafka-elastic_1 | at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
- demo-kafka-elastic_1 | at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:295)
- demo-kafka-elastic_1 | at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
- demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
- demo-kafka-elastic_1 | at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
- demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:536)
- demo-kafka-elastic_1 | at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:147)
- demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
- demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:515)
- demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:500)
- demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:477)
- demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
- demo-kafka-elastic_1 | at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:56)
- demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
- demo-kafka-elastic_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
- demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
- demo-kafka-elastic_1 | at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57)
- demo-kafka-elastic_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
- demo-kafka-elastic_1 | at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
- demo-kafka-elastic_1 | at com.sun.proxy.$Proxy62.save(Unknown Source)
- demo-kafka-elastic_1 | at com.aironman.demo.es.service.BitCoinESServiceImpl.save(BitCoinESServiceImpl.java:20)
- demo-kafka-elastic_1 | at com.aironman.demo.kafka.MessageListener.bitCoinListener(MessageListener.java:88)
- demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- demo-kafka-elastic_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- demo-kafka-elastic_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- demo-kafka-elastic_1 | at java.lang.reflect.Method.invoke(Method.java:498)
- demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:180)
- demo-kafka-elastic_1 | at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:112)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48)
- demo-kafka-elastic_1 | at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:174)
- demo-kafka-elastic_1 | ... 8 common frames omitted
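The nested exception above is the actual failure: the transport client resolved the compose service name elastic to 172.21.0.5 but could not complete a handshake on port 9300. With the 2.x TransportClient in play here, that usually means one of two things: Elasticsearch was still booting when the consumer started (a classic compose startup race, since depends_on only orders container start), or the client's cluster.name does not match the server's. A minimal sketch of a 2.4-era client configuration, assuming the service name seen in this log; the class name and cluster name are illustrative, not taken from the project:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    // Hypothetical factory; "elastic" and the cluster name are assumptions.
    public final class EsClientFactory {

        public static TransportClient build() throws UnknownHostException {
            Settings settings = Settings.settingsBuilder()
                    // Must match cluster.name on the server; a mismatch also
                    // yields NoNodeAvailableException with a reachable socket.
                    .put("cluster.name", "elasticsearch")
                    .build();
            return TransportClient.builder().settings(settings).build()
                    // "elastic" is the compose service name; 9300 is the binary
                    // transport port the client needs, not the 9200 HTTP port.
                    .addTransportAddress(new InetSocketTransportAddress(
                            InetAddress.getByName("elastic"), 9300));
        }

        private EsClientFactory() {}
    }

Pointing the client at 9200 instead of 9300 fails the same way, because the transport handshake never completes over the HTTP port.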
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,213 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] INFO c.a.demo.kafka.MessageListener - kafka message: BitcoinEuroKafkaEntity [id=1, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285]
- demo-kafka-elastic_1 | web - 2018-05-14 16:36:15,215 [org.springframework.kafka.KafkaListenerEndpointContainer#5-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: ConsumerRecord(topic = aironman, partition = 0, offset = 76, CreateTime = 1526315772029, checksum = 227856670, serialized key size = -1, serialized value size = 427, key = null, value = BitcoinEuroKafkaEntity [id=1, name=Bitcoin, symbol=BTC, rank=1, priceUsd=8825.43, priceBtc=1.0, _24hVolumeUsd=7445850000.0, marketCapUsd=150324211097, availableSupply=17033075.0, totalSupply=17033075.0, maxSupply=21000000.0, percentChange1h=0.57, percentChange24h=1.79, percentChange7d=-5.81, lastUpdated=1526315671, priceEur=7365.66857628, _24hVolumeEur=6214276626.6, marketCapEur=125459985285])
- demo-kafka-elastic_1 | org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.aironman.demo.kafka.MessageListener.bitCoinListener(com.aironman.demo.kafka.BitcoinEuroKafkaEntity)' threw exception; nested exception is NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{elastic}{172.21.0.5:9300}]]
- demo-kafka-elastic_1 | ... (stack trace identical to the previous failure omitted)
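Note what the container does with each failure: the record is handed to LoggingErrorHandler, logged, and the consumer moves on, so its offset is committed and the record is effectively dropped rather than redelivered. If losing these startup-window records matters, one option is to retry the save inside the listener until the node becomes reachable. A minimal sketch, assuming the listener thread may block; the helper name, attempt count, and backoff values are invented for illustration:

    import org.elasticsearch.client.transport.NoNodeAvailableException;

    // Hypothetical helper; attempt count and backoff values are arbitrary.
    public final class EsRetry {

        public static void runWithRetry(Runnable saveAction) throws InterruptedException {
            long backoffMs = 1_000L;
            for (int attempt = 1; ; attempt++) {
                try {
                    saveAction.run();
                    return;
                } catch (NoNodeAvailableException e) {
                    if (attempt >= 10) {
                        throw e; // give up; LoggingErrorHandler records it as before
                    }
                    Thread.sleep(backoffMs);
                    backoffMs = Math.min(backoffMs * 2, 30_000L); // capped exponential backoff
                }
            }
        }

        private EsRetry() {}
    }

Inside bitCoinListener the save call would then be wrapped as EsRetry.runWithRetry(() -> service.save(entity)) (field and variable names hypothetical); blocking is tolerable here because each partition is consumed on a single thread anyway.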
- demo-quartz_1 | web - 2018-05-14 16:36:17,046 [scheduler_Worker-1] INFO c.a.d.service.SampleJobService - Sample job has finished...
- ...
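A complementary fix is to gate startup on Elasticsearch actually accepting transport connections, since compose's depends_on does not wait for service readiness. The 2.x TransportClient exposes connectedNodes(), which stays empty until the handshake with at least one node succeeds, so a simple poll before the Kafka listeners start would avoid the whole error burst above. A sketch under those assumptions; the class name, poll interval, and timeout are hypothetical:

    import org.elasticsearch.client.transport.TransportClient;

    // Hypothetical readiness gate; poll interval and timeout are assumptions.
    public final class EsReadiness {

        /** Blocks until the client reports a connected node, or times out. */
        public static void awaitConnected(TransportClient client, long timeoutMs)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (client.connectedNodes().isEmpty()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new IllegalStateException(
                            "Elasticsearch transport on elastic:9300 not reachable in time");
                }
                Thread.sleep(500L);
            }
        }

        private EsReadiness() {}
    }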