- 17/07/27 09:07:51 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 49439@REDACTED_HOST
- 17/07/27 09:07:51 INFO util.SignalUtils: Registered signal handler for TERM
- 17/07/27 09:07:51 INFO util.SignalUtils: Registered signal handler for HUP
- 17/07/27 09:07:51 INFO util.SignalUtils: Registered signal handler for INT
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing view acls to: yarn,REDACTED_USERNAME
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing modify acls to: yarn,REDACTED_USERNAME
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing view acls groups to:
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing modify acls groups to:
- 17/07/27 09:07:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, REDACTED_USERNAME); groups with view permissions: Set(); users with modify permissions: Set(yarn, REDACTED_USERNAME); groups with modify permissions: Set()
- 17/07/27 09:07:52 INFO client.TransportClientFactory: Successfully created connection to /REDACTED_IP:30702 after 92 ms (0 ms spent in bootstraps)
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing view acls to: yarn,REDACTED_USERNAME
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing modify acls to: yarn,REDACTED_USERNAME
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing view acls groups to:
- 17/07/27 09:07:52 INFO spark.SecurityManager: Changing modify acls groups to:
- 17/07/27 09:07:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, REDACTED_USERNAME); groups with view permissions: Set(); users with modify permissions: Set(yarn, REDACTED_USERNAME); groups with modify permissions: Set()
- 17/07/27 09:07:52 INFO client.TransportClientFactory: Successfully created connection to /REDACTED_IP:30702 after 2 ms (0 ms spent in bootstraps)
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data1/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-ebbfd6f0-f722-41f8-972c-492b025ed96c
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data2/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-a0bc5ea0-76d3-4f8c-807b-783d91a09b40
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data3/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-3b4ea5b2-53fc-4992-9219-b317c1d6078d
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data4/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-a0844b71-7c96-42ae-8bca-cdaaabaf6c5d
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data5/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-fbda8422-ab26-4c85-b7d7-2befe6e9ef99
- 17/07/27 09:07:52 INFO storage.DiskBlockManager: Created local directory at /data6/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/blockmgr-176837d5-9716-4a99-89dd-b7f5807fd892
- 17/07/27 09:07:52 INFO memory.MemoryStore: MemoryStore started with capacity 3.0 GB
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@REDACTED_IP:30702
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
- 17/07/27 09:07:53 INFO executor.Executor: Starting executor ID 1 on host REDACTED_HOST
- 17/07/27 09:07:53 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 13424.
- 17/07/27 09:07:53 INFO netty.NettyBlockTransferService: Server created on REDACTED_HOST:13424
- 17/07/27 09:07:53 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
- 17/07/27 09:07:53 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(1, REDACTED_HOST, 13424, None)
- 17/07/27 09:07:53 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(1, REDACTED_HOST, 13424, None)
- 17/07/27 09:07:53 INFO storage.BlockManager: external shuffle service port = 7337
- 17/07/27 09:07:53 INFO storage.BlockManager: Registering executor with local external shuffle service.
- 17/07/27 09:07:53 INFO client.TransportClientFactory: Successfully created connection to REDACTED_HOST/REDACTED_IP:7337 after 2 ms (0 ms spent in bootstraps)
- 17/07/27 09:07:53 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(1, REDACTED_HOST, 13424, None)
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 2
- 17/07/27 09:07:53 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 3
- 17/07/27 09:07:53 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
- 17/07/27 09:07:53 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
- 17/07/27 09:07:53 INFO executor.Executor: Running task 2.0 in stage 0.0 (TID 2)
- 17/07/27 09:07:53 INFO executor.Executor: Running task 3.0 in stage 0.0 (TID 3)
- 17/07/27 09:07:53 INFO executor.Executor: Fetching spark://REDACTED_IP:30702/jars/REDACTED_JAR with timestamp 1501142854454
- 17/07/27 09:07:53 INFO client.TransportClientFactory: Successfully created connection to /REDACTED_IP:30702 after 2 ms (0 ms spent in bootstraps)
- 17/07/27 09:07:53 INFO util.Utils: Fetching spark://REDACTED_IP:30702/jars/REDACTED_JAR to /data1/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/spark-b018e92f-3800-4882-9dd8-b4315f119932/fetchFileTemp3129117840396051325.tmp
- 17/07/27 09:07:53 INFO util.Utils: Copying /data1/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/spark-b018e92f-3800-4882-9dd8-b4315f119932/-3463856231501142854454_cache to /data2/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/container_1500467023260_0062_01_000002/./REDACTED_JAR
- 17/07/27 09:07:54 INFO executor.Executor: Adding file:/data2/yarn/nm/usercache/REDACTED_USERNAME/appcache/application_1500467023260_0062/container_1500467023260_0062_01_000002/./REDACTED_JAR to class loader
- 17/07/27 09:07:54 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0
- 17/07/27 09:07:54 INFO client.TransportClientFactory: Successfully created connection to /REDACTED_IP:21339 after 1 ms (0 ms spent in bootstraps)
- 17/07/27 09:07:54 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 16.7 KB, free 3.0 GB)
- 17/07/27 09:07:54 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 148 ms
- 17/07/27 09:07:54 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 49.3 KB, free 3.0 GB)
- 17/07/27 09:07:54 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id =
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
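The ConsumerConfig block above is what Spark's Kafka source instantiates on each executor: note the generated `group.id` ending in `-executor`, `enable.auto.commit = false`, `auto.offset.reset = none` (Spark tracks offsets itself, in its checkpoint), and `ByteArrayDeserializer` for both key and value. A driver-side query that would produce this executor-side configuration might look like the following sketch; the topic name is an assumption, since the log never names it, and only the bootstrap servers come from the log:

```scala
// Hypothetical driver-side query (Spark 2.x, spark-sql-kafka-0-10 connector).
// Only kafka.bootstrap.servers is taken from the logged ConsumerConfig;
// the subscribed topic is a placeholder.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers",
          "REDACTED_HOST:9092,gbslixaacspa05u.metis.prd:9092")
  .option("subscribe", "some-topic")  // assumed; the log does not name the topic
  .load()

// Keys and values arrive as binary (ByteArrayDeserializer), so they must be
// cast before use.
val lines = stream.selectExpr("CAST(value AS STRING)")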
- 17/07/27 09:07:54 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id = consumer-1
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:54 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
- 17/07/27 09:07:54 INFO utils.AppInfoParser: Kafka commitId : unknown
- 17/07/27 09:07:54 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id =
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:54 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id = consumer-2
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka commitId : unknown
- 17/07/27 09:07:55 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id =
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:55 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id = consumer-3
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka commitId : unknown
- 17/07/27 09:07:55 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id =
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:55 INFO consumer.ConsumerConfig: ConsumerConfig values:
- interceptor.classes = null
- request.timeout.ms = 40000
- check.crcs = true
- ssl.truststore.password = null
- retry.backoff.ms = 100
- ssl.keymanager.algorithm = SunX509
- receive.buffer.bytes = 65536
- ssl.key.password = null
- ssl.cipher.suites = null
- ssl.secure.random.implementation = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.service.name = null
- ssl.provider = null
- session.timeout.ms = 30000
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- max.poll.records = 2147483647
- bootstrap.servers = [REDACTED_HOST:9092, gbslixaacspa05u.metis.prd:9092]
- client.id = consumer-4
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- auto.offset.reset = none
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- max.partition.fetch.bytes = 1048576
- partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
- ssl.endpoint.identification.algorithm = null
- ssl.keystore.location = null
- ssl.truststore.location = null
- exclude.internal.topics = true
- ssl.keystore.password = null
- metrics.sample.window.ms = 30000
- security.protocol = PLAINTEXT
- metadata.max.age.ms = 300000
- auto.commit.interval.ms = 5000
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- group.id = spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor
- enable.auto.commit = false
- metric.reporters = []
- ssl.truststore.type = JKS
- send.buffer.bytes = 131072
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- ssl.keystore.type = JKS
- heartbeat.interval.ms = 3000
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
- 17/07/27 09:07:55 INFO utils.AppInfoParser: Kafka commitId : unknown
- 17/07/27 09:07:55 INFO codegen.CodeGenerator: Code generated in 296.12482 ms
- 17/07/27 09:07:55 INFO codegen.CodeGenerator: Code generated in 63.861574 ms
- 17/07/27 09:07:55 INFO codegen.CodeGenerator: Code generated in 20.656328 ms
- 17/07/27 09:07:55 INFO codegen.CodeGenerator: Code generated in 101.843724 ms
- 17/07/27 09:07:55 INFO codegen.CodeGenerator: Code generated in 21.447822 ms
- 17/07/27 09:07:55 INFO internals.AbstractCoordinator: Discovered coordinator gbslixaacspa05u.metis.prd:9092 (id: 2147483384 rack: null) for group spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor.
- 17/07/27 09:07:55 INFO internals.AbstractCoordinator: Discovered coordinator gbslixaacspa05u.metis.prd:9092 (id: 2147483384 rack: null) for group spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor.
- 17/07/27 09:07:55 INFO internals.AbstractCoordinator: Discovered coordinator gbslixaacspa05u.metis.prd:9092 (id: 2147483384 rack: null) for group spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor.
- 17/07/27 09:07:55 INFO internals.AbstractCoordinator: Discovered coordinator gbslixaacspa05u.metis.prd:9092 (id: 2147483384 rack: null) for group spark-kafka-source-7209feda-3255-4ad4-b6f8-f2f65220d3c3--1396947302-executor.
- 17/07/27 09:07:55 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: timestamp
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: timestamp
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: timestamp
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: double
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: double
- 17/07/27 09:07:56 INFO parser.CatalystSqlParser: Parsing command: string
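The run of CatalystSqlParser lines above shows the column types Spark resolves while building the deserialized row: eight `string`s, three `timestamp`s, and two `double`s, thirteen fields in all. A schema consistent with those parse commands could be declared as below; the field names are invented for illustration, only the types and their counts come from the log:

```scala
import org.apache.spark.sql.types._

// Field names are hypothetical; only the types (8 strings, 3 timestamps,
// 2 doubles) are taken from the CatalystSqlParser lines in the log.
val schema = StructType(Seq(
  StructField("s1", StringType),    StructField("s2", StringType),
  StructField("s3", StringType),    StructField("s4", StringType),
  StructField("s5", StringType),    StructField("s6", StringType),
  StructField("s7", StringType),    StructField("s8", StringType),
  StructField("ts1", TimestampType),
  StructField("ts2", TimestampType),
  StructField("ts3", TimestampType),
  StructField("d1", DoubleType),
  StructField("d2", DoubleType)
))
```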
- 17/07/27 09:07:56 INFO codegen.CodeGenerator: Code generated in 21.216233 ms
- 17/07/27 09:07:56 INFO producer.ProducerConfig: ProducerConfig values:
- interceptor.classes = null
- request.timeout.ms = 30000
- buffer.memory = 33554432
- ssl.keymanager.algorithm = SunX509
- ssl.cipher.suites = null
- ssl.key.password = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- ssl.provider = null
- sasl.kerberos.service.name = null
- max.in.flight.requests.per.connection = 5
- bootstrap.servers = [gbslixaacspa04u:9092, gbslixaacspa05u:9092]
- client.id = etlpipeline
- max.request.size = 1048576
- linger.ms = 0
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.endpoint.identification.algorithm = null
- value.serializer = class org.apache.kafka.common.serialization.StringSerializer
- ssl.keystore.password = null
- key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- max.block.ms = 60000
- send.buffer.bytes = 131072
- partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- retry.backoff.ms = 100
- ssl.truststore.password = null
- batch.size = 16384
- receive.buffer.bytes = 32768
- ssl.secure.random.implementation = null
- sasl.mechanism = GSSAPI
- sasl.kerberos.ticket.renew.window.factor = 0.8
- acks = 1
- metadata.fetch.timeout.ms = 60000
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- ssl.keystore.location = null
- ssl.truststore.location = null
- block.on.buffer.full = false
- metrics.sample.window.ms = 30000
- metadata.max.age.ms = 300000
- security.protocol = PLAINTEXT
- timeout.ms = 30000
- metric.reporters = []
- compression.type = none
- ssl.truststore.type = JKS
- retries = 0
- ssl.keystore.type = JKS
- 17/07/27 09:07:56 INFO producer.ProducerConfig: ProducerConfig values:
- interceptor.classes = null
- request.timeout.ms = 30000
- buffer.memory = 33554432
- ssl.keymanager.algorithm = SunX509
- ssl.cipher.suites = null
- ssl.key.password = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- ssl.provider = null
- sasl.kerberos.service.name = null
- max.in.flight.requests.per.connection = 5
- bootstrap.servers = [gbslixaacspa04u:9092, gbslixaacspa05u:9092]
- client.id = etlpipeline
- max.request.size = 1048576
- linger.ms = 0
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- ssl.endpoint.identification.algorithm = null
- value.serializer = class org.apache.kafka.common.serialization.StringSerializer
- ssl.keystore.password = null
- key.serializer = class org.apache.kafka.common.serialization.StringSerializer
- ssl.protocol = TLS
- sasl.kerberos.min.time.before.relogin = 60000
- connections.max.idle.ms = 540000
- ssl.trustmanager.algorithm = PKIX
- max.block.ms = 60000
- send.buffer.bytes = 131072
- partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
- reconnect.backoff.ms = 50
- metrics.num.samples = 2
- retry.backoff.ms = 100
- ssl.truststore.password = null
- batch.size = 16384
- receive.buffer.bytes = 32768
- ssl.secure.random.implementation = null
- sasl.mechanism = GSSAPI
- sasl.kerberos.ticket.renew.window.factor = 0.8
- acks = 1
- metadata.fetch.timeout.ms = 60000
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- ssl.keystore.location = null
- ssl.truststore.location = null
- block.on.buffer.full = false
- metrics.sample.window.ms = 30000
- metadata.max.age.ms = 300000
- security.protocol = PLAINTEXT
- timeout.ms = 30000
- metric.reporters = []
- compression.type = none
- ssl.truststore.type = JKS
- retries = 0
- ssl.keystore.type = JKS
- 17/07/27 09:07:56 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
- 17/07/27 09:07:56 INFO utils.AppInfoParser: Kafka commitId : unknown
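Unlike the consumers, the ProducerConfig blocks above describe a plain `KafkaProducer` created inside the executor tasks, writing strings (`StringSerializer` for key and value) with `acks = 1`, `retries = 0`, and `client.id = etlpipeline`. A producer matching that logged configuration could be constructed as in the sketch below; the serializers, acks, retries, client.id, and bootstrap servers are from the log, while the topic name and record contents are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Values below mirror the logged ProducerConfig.
val props = new Properties()
props.put("bootstrap.servers", "gbslixaacspa04u:9092,gbslixaacspa05u:9092")
props.put("client.id", "etlpipeline")
props.put("acks", "1")     // leader acknowledgement only, as logged
props.put("retries", "0")  // failed sends are not retried, as logged
props.put("key.serializer",
           "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer",
           "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// Topic, key, and value are placeholders; the log does not name the
// output topic.
producer.send(new ProducerRecord[String, String]("output-topic", "key", "value"))
producer.close()
```

With `retries = 0` and `acks = 1`, a transient broker failure can silently drop records; that is the producer's logged behavior here, not a recommendation.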
- 17/07/27 09:08:29 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 1563 bytes result sent to driver
- 17/07/27 09:08:30 INFO executor.Executor: Finished task 2.0 in stage 0.0 (TID 2). 1476 bytes result sent to driver
- 17/07/27 09:08:32 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2275 bytes result sent to driver
- 17/07/27 09:08:33 INFO executor.Executor: Finished task 3.0 in stage 0.0 (TID 3). 1476 bytes result sent to driver