- core@lga-kubernetes09 ~ $ docker run --network=host --rm -it lga-registry01.pulse.prod:5000/xt3-spark-invoker:ET-2478_newKafkaApi spark-submit --class com.contextweb.xt3.spark.invoker.Xt3SparkInvoker --master yarn-client --executor-memory 3G --num-executors 10 /xt3-spark-invoker.jar
- Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
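The warning above is from Spark 2.x's launcher: the `yarn-client` master URL is deprecated, and the same client-mode submission is expressed as `--master yarn --deploy-mode client`. A sketch of the equivalent invocation, reusing the image tag, class, and resource settings from the command above:

```shell
# Equivalent submission without the deprecated yarn-client master URL.
# Image tag, class name, and resource sizes are copied from the command above.
docker run --network=host --rm -it \
  lga-registry01.pulse.prod:5000/xt3-spark-invoker:ET-2478_newKafkaApi \
  spark-submit \
    --class com.contextweb.xt3.spark.invoker.Xt3SparkInvoker \
    --master yarn \
    --deploy-mode client \
    --executor-memory 3G \
    --num-executors 10 \
    /xt3-spark-invoker.jar
```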
- 17/08/03 06:40:50 INFO support.ClassPathXmlApplicationContext: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@443118b0: startup date [Thu Aug 03 06:40:50 EDT 2017]; root of context hierarchy
- 17/08/03 06:40:50 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [xt3-spark-applicationContext.xml]
- 17/08/03 06:40:50 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [applicationContext-common.xml]
- 17/08/03 06:40:50 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [xt3on-applicationContext-dao.xml]
- 17/08/03 06:40:50 INFO xml.XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [xt3-kafkaConfiguration-applicationContext.xml]
- 17/08/03 06:40:51 INFO invoker.LogEventDecoder: object created, urls = http://lga-avro.pulse.prod/avro-schema-repo/
- 17/08/03 06:40:51 INFO invoker.RtbLogEventDecoder: Object created -- urls http://lga-avro.pulse.prod/avro-schema-repo/
- 17/08/03 06:40:51 INFO utils.ApplicationContextLoader: ApplicationContextLoader was initialized
- 17/08/03 06:40:51 INFO invoker.Xt3SparkInvoker: SparkStreamingXt3Client.start
- 17/08/03 06:40:51 INFO spark.SparkContext: Running Spark version 2.2.0
- 17/08/03 06:40:51 INFO spark.SparkContext: Submitted application: [AD-SERVING] xt3 invoker
- 17/08/03 06:40:51 INFO spark.SecurityManager: Changing view acls to: datascience
- 17/08/03 06:40:51 INFO spark.SecurityManager: Changing modify acls to: datascience
- 17/08/03 06:40:51 INFO spark.SecurityManager: Changing view acls groups to:
- 17/08/03 06:40:51 INFO spark.SecurityManager: Changing modify acls groups to:
- 17/08/03 06:40:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(datascience); groups with view permissions: Set(); users with modify permissions: Set(datascience); groups with modify permissions: Set()
- 17/08/03 06:40:52 INFO util.Utils: Successfully started service 'sparkDriver' on port 42187.
- 17/08/03 06:40:52 INFO spark.SparkEnv: Registering MapOutputTracker
- 17/08/03 06:40:52 INFO spark.SparkEnv: Registering BlockManagerMaster
- 17/08/03 06:40:52 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
- 17/08/03 06:40:52 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
- 17/08/03 06:40:52 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-7ab4958c-6547-409d-85d7-803447eca617
- 17/08/03 06:40:52 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
- 17/08/03 06:40:52 INFO spark.SparkEnv: Registering OutputCommitCoordinator
- 17/08/03 06:40:52 INFO util.log: Logging initialized @3246ms
- 17/08/03 06:40:52 INFO server.Server: jetty-9.3.z-SNAPSHOT
- 17/08/03 06:40:52 INFO server.Server: Started @3321ms
- 17/08/03 06:40:52 INFO server.AbstractConnector: Started ServerConnector@3382cf68{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
- 17/08/03 06:40:52 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@72503b19{/jobs,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4548d254{/jobs/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@208f0007{/jobs/job,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@188598ad{/jobs/job/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7cf78c85{/stages,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3a4ab7f7{/stages/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b7e8044{/stages/stage,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@63cf9de0{/stages/stage/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5befbac1{/stages/pool,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a565afb{/stages/pool/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@949c598{/storage,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6bfaa0a6{/storage/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@314b9e4b{/storage/rdd,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@51dae791{/storage/rdd/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5de5e95{/environment,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@303c55fa{/environment/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7eb200ce{/executors,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c2924d7{/executors/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6587305a{/executors/threadDump,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f81621c{/executors/threadDump/json,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74d6736{/static,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d099323{/,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10947c4e{/api,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b0e9f30{/jobs/job/kill,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3330f3ad{/stages/stage/kill,null,AVAILABLE,@Spark}
- 17/08/03 06:40:52 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.201.2.41:4040
- 17/08/03 06:40:52 INFO spark.SparkContext: Added JAR file:/xt3-spark-invoker.jar at spark://10.201.2.41:42187/jars/xt3-spark-invoker.jar with timestamp 1501756852681
- 17/08/03 06:40:53 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm1269
- 17/08/03 06:40:53 INFO yarn.Client: Requesting a new application from cluster with 101 NodeManagers
- 17/08/03 06:40:53 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (24576 MB per container)
- 17/08/03 06:40:53 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
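The 896 MB figure is consistent with Spark's client-mode defaults: the AM gets `spark.yarn.am.memory` (default 512 MB) plus an overhead of max(384 MB, 10% of the AM memory), and here the 384 MB floor applies. A quick check of the arithmetic, assuming those defaults:

```shell
# AM container request = am_memory + max(384, am_memory / 10).
# 512 MB default AM memory is an assumption; the overhead floor is 384 MB.
am_mem=512
overhead=$(( am_mem / 10 ))
if [ "$overhead" -lt 384 ]; then overhead=384; fi
echo $(( am_mem + overhead ))   # prints 896, matching the log line above
```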
- 17/08/03 06:40:53 INFO yarn.Client: Setting up container launch context for our AM
- 17/08/03 06:40:53 INFO yarn.Client: Setting up the launch environment for our AM container
- 17/08/03 06:40:53 INFO yarn.Client: Preparing resources for our AM container
- 17/08/03 06:40:54 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
- 17/08/03 06:40:57 INFO yarn.Client: Uploading resource file:/tmp/spark-222e9ccc-2cb9-4dea-8a8f-4fc5eaff06c5/__spark_libs__2552298700288894086.zip -> hdfs://nameservice1/user/datascience/.sparkStaging/application_1501722808817_11178/__spark_libs__2552298700288894086.zip
- 17/08/03 06:41:03 INFO yarn.Client: Uploading resource file:/tmp/spark-222e9ccc-2cb9-4dea-8a8f-4fc5eaff06c5/__spark_conf__1347518755096537212.zip -> hdfs://nameservice1/user/datascience/.sparkStaging/application_1501722808817_11178/__spark_conf__.zip
- 17/08/03 06:41:03 INFO spark.SecurityManager: Changing view acls to: datascience
- 17/08/03 06:41:03 INFO spark.SecurityManager: Changing modify acls to: datascience
- 17/08/03 06:41:03 INFO spark.SecurityManager: Changing view acls groups to:
- 17/08/03 06:41:03 INFO spark.SecurityManager: Changing modify acls groups to:
- 17/08/03 06:41:03 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(datascience); groups with view permissions: Set(); users with modify permissions: Set(datascience); groups with modify permissions: Set()
- 17/08/03 06:41:03 INFO yarn.Client: Submitting application application_1501722808817_11178 to ResourceManager
- 17/08/03 06:41:03 INFO impl.YarnClientImpl: Submitted application application_1501722808817_11178
- 17/08/03 06:41:03 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1501722808817_11178 and attemptId None
- 17/08/03 06:41:04 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:04 INFO yarn.Client:
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: N/A
- ApplicationMaster RPC port: -1
- queue: root.default
- start time: 1501756863679
- final status: UNDEFINED
- tracking URL: http://lga-grid109.contextweb.prod:8088/proxy/application_1501722808817_11178/
- user: datascience
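The `start time` in the report is epoch milliseconds. Converted to UTC it lines up with the surrounding log timestamps (06:41:03 EDT is 10:41:03 UTC); a one-liner to check, assuming GNU `date`:

```shell
# Convert the application report's start time (epoch ms) to a UTC timestamp.
date -u -d @$(( 1501756863679 / 1000 )) '+%Y-%m-%d %H:%M:%S UTC'
# prints 2017-08-03 10:41:03 UTC
```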
- 17/08/03 06:41:05 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:06 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:07 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:08 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:09 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:10 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:11 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:12 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:13 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
- 17/08/03 06:41:13 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> lga-grid108.contextweb.prod,lga-grid109.contextweb.prod, PROXY_URI_BASES -> http://lga-grid108.contextweb.prod:8088/proxy/application_1501722808817_11178,http://lga-grid109.contextweb.prod:8088/proxy/application_1501722808817_11178), /proxy/application_1501722808817_11178
- 17/08/03 06:41:13 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
- 17/08/03 06:41:13 INFO yarn.Client: Application report for application_1501722808817_11178 (state: ACCEPTED)
- 17/08/03 06:41:14 INFO yarn.Client: Application report for application_1501722808817_11178 (state: RUNNING)
- 17/08/03 06:41:14 INFO yarn.Client:
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: 10.201.3.102
- ApplicationMaster RPC port: 0
- queue: root.default
- start time: 1501756863679
- final status: UNDEFINED
- tracking URL: http://lga-grid109.contextweb.prod:8088/proxy/application_1501722808817_11178/
- user: datascience
- 17/08/03 06:41:14 INFO cluster.YarnClientSchedulerBackend: Application application_1501722808817_11178 has started running.
- 17/08/03 06:41:14 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36277.
- 17/08/03 06:41:14 INFO netty.NettyBlockTransferService: Server created on 10.201.2.41:36277
- 17/08/03 06:41:14 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
- 17/08/03 06:41:14 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.201.2.41, 36277, None)
- 17/08/03 06:41:14 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.201.2.41:36277 with 366.3 MB RAM, BlockManagerId(driver, 10.201.2.41, 36277, None)
- 17/08/03 06:41:14 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.201.2.41, 36277, None)
- 17/08/03 06:41:14 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.201.2.41, 36277, None)
- 17/08/03 06:41:14 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7884f722{/metrics/json,null,AVAILABLE,@Spark}
- 17/08/03 06:41:20 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.159:40884) with ID 1
- 17/08/03 06:41:20 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid358.contextweb.prod:37498 with 1458.6 MB RAM, BlockManagerId(1, lga-grid358.contextweb.prod, 37498, None)
- 17/08/03 06:41:22 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
- 17/08/03 06:41:22 INFO invoker.Xt3SparkInvoker: Topics are [ams-LogEvent, LogEvent, sjc-LogEvent]
- 17/08/03 06:41:23 WARN kafka010.KafkaUtils: overriding enable.auto.commit to false for executor
- 17/08/03 06:41:23 WARN kafka010.KafkaUtils: overriding auto.offset.reset to none for executor
- 17/08/03 06:41:23 WARN kafka010.KafkaUtils: overriding executor group.id to spark-executor-xt3-spark-invoker
- 17/08/03 06:41:23 WARN kafka010.KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
- 17/08/03 06:41:23 INFO kafka010.DirectKafkaInputDStream: Slide time = 180000 ms
- 17/08/03 06:41:23 INFO kafka010.DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
- 17/08/03 06:41:23 INFO kafka010.DirectKafkaInputDStream: Checkpoint interval = null
- 17/08/03 06:41:23 INFO kafka010.DirectKafkaInputDStream: Remember interval = 180000 ms
- 17/08/03 06:41:23 INFO kafka010.DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream@4de17670
- 17/08/03 06:41:23 INFO dstream.MappedDStream: Slide time = 180000 ms
- 17/08/03 06:41:23 INFO dstream.MappedDStream: Storage level = Serialized 1x Replicated
- 17/08/03 06:41:23 INFO dstream.MappedDStream: Checkpoint interval = null
- 17/08/03 06:41:23 INFO dstream.MappedDStream: Remember interval = 180000 ms
- 17/08/03 06:41:23 INFO dstream.MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@707d3ab1
- 17/08/03 06:41:23 INFO dstream.ForEachDStream: Slide time = 180000 ms
- 17/08/03 06:41:23 INFO dstream.ForEachDStream: Storage level = Serialized 1x Replicated
- 17/08/03 06:41:23 INFO dstream.ForEachDStream: Checkpoint interval = null
- 17/08/03 06:41:23 INFO dstream.ForEachDStream: Remember interval = 180000 ms
- 17/08/03 06:41:23 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@257b9da9
- 17/08/03 06:41:23 INFO consumer.ConsumerConfig: ConsumerConfig values:
- auto.commit.interval.ms = 5000
- auto.offset.reset = latest
- bootstrap.servers = [lga-kafka00.pulse.prod:9092, lga-kafka01.pulse.prod:9092, lga-kafka02.pulse.prod:9092, lga-kafka03.pulse.prod:9092, lga-kafka04.pulse.prod:9092, lga-kafka05.pulse.prod:9092, lga-kafka06.pulse.prod:9092]
- check.crcs = true
- client.id =
- connections.max.idle.ms = 540000
- enable.auto.commit = true
- exclude.internal.topics = true
- fetch.max.bytes = 10485760
- fetch.max.wait.ms = 500
- fetch.min.bytes = 1
- group.id = xt3-spark-invoker
- heartbeat.interval.ms = 3000
- interceptor.classes = null
- key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- max.partition.fetch.bytes = 1048576
- max.poll.interval.ms = 300000
- max.poll.records = 500
- metadata.max.age.ms = 300000
- metric.reporters = []
- metrics.num.samples = 2
- metrics.recording.level = INFO
- metrics.sample.window.ms = 30000
- partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
- receive.buffer.bytes = 65536
- reconnect.backoff.ms = 50
- request.timeout.ms = 305000
- retry.backoff.ms = 100
- sasl.jaas.config = null
- sasl.kerberos.kinit.cmd = /usr/bin/kinit
- sasl.kerberos.min.time.before.relogin = 60000
- sasl.kerberos.service.name = null
- sasl.kerberos.ticket.renew.jitter = 0.05
- sasl.kerberos.ticket.renew.window.factor = 0.8
- sasl.mechanism = GSSAPI
- security.protocol = PLAINTEXT
- send.buffer.bytes = 131072
- session.timeout.ms = 10000
- ssl.cipher.suites = null
- ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
- ssl.endpoint.identification.algorithm = null
- ssl.key.password = null
- ssl.keymanager.algorithm = SunX509
- ssl.keystore.location = null
- ssl.keystore.password = null
- ssl.keystore.type = JKS
- ssl.protocol = TLS
- ssl.provider = null
- ssl.secure.random.implementation = null
- ssl.trustmanager.algorithm = PKIX
- ssl.truststore.location = null
- ssl.truststore.password = null
- ssl.truststore.type = JKS
- value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
- 17/08/03 06:41:23 INFO utils.AppInfoParser: Kafka version : 0.10.2.1
- 17/08/03 06:41:23 INFO utils.AppInfoParser: Kafka commitId : e89bffd6b2eff799
- 17/08/03 06:41:23 INFO internals.AbstractCoordinator: Discovered coordinator lga-kafka01.pulse.prod:9092 (id: 2147483646 rack: null) for group xt3-spark-invoker.
- 17/08/03 06:41:23 INFO internals.ConsumerCoordinator: Revoking previously assigned partitions [] for group xt3-spark-invoker
- 17/08/03 06:41:23 INFO internals.AbstractCoordinator: (Re-)joining group xt3-spark-invoker
- 17/08/03 06:41:23 INFO internals.AbstractCoordinator: Successfully joined group xt3-spark-invoker with generation 5
- 17/08/03 06:41:23 INFO internals.ConsumerCoordinator: Setting newly assigned partitions [ams-LogEvent-52, ams-LogEvent-19, LogEvent-7, LogEvent-24, LogEvent-57, sjc-LogEvent-9, sjc-LogEvent-42, ams-LogEvent-69, ams-LogEvent-36, ams-LogEvent-3, LogEvent-23, LogEvent-40, sjc-LogEvent-59, sjc-LogEvent-26, sjc-LogEvent-24, ams-LogEvent-51, LogEvent-6, ams-LogEvent-18, LogEvent-39, LogEvent-56, sjc-LogEvent-41, ams-LogEvent-68, sjc-LogEvent-8, ams-LogEvent-35, LogEvent-22, ams-LogEvent-2, LogEvent-55, sjc-LogEvent-25, sjc-LogEvent-58, ams-LogEvent-21, sjc-LogEvent-23, LogEvent-5, sjc-LogEvent-56, LogEvent-38, ams-LogEvent-54, ams-LogEvent-5, sjc-LogEvent-7, LogEvent-21, sjc-LogEvent-40, LogEvent-54, ams-LogEvent-71, sjc-LogEvent-57, ams-LogEvent-38, ams-LogEvent-53, LogEvent-4, sjc-LogEvent-55, ams-LogEvent-20, LogEvent-37, sjc-LogEvent-22, ams-LogEvent-37, LogEvent-20, sjc-LogEvent-39, ams-LogEvent-4, LogEvent-53, sjc-LogEvent-6, ams-LogEvent-70, LogEvent-28, sjc-LogEvent-13, sjc-LogEvent-46, ams-LogEvent-56, ams-LogEvent-23, LogEvent-44, sjc-LogEvent-63, sjc-LogEvent-30, ams-LogEvent-40, ams-LogEvent-7, LogEvent-11, sjc-LogEvent-45, sjc-LogEvent-12, ams-LogEvent-55, ams-LogEvent-22, LogEvent-27, sjc-LogEvent-29, sjc-LogEvent-62, ams-LogEvent-39, LogEvent-10, ams-LogEvent-6, LogEvent-43, ams-LogEvent-58, ams-LogEvent-25, sjc-LogEvent-11, sjc-LogEvent-44, LogEvent-26, LogEvent-59, sjc-LogEvent-61, ams-LogEvent-42, ams-LogEvent-9, LogEvent-9, sjc-LogEvent-28, LogEvent-42, ams-LogEvent-57, sjc-LogEvent-43, ams-LogEvent-24, LogEvent-25, LogEvent-58, sjc-LogEvent-10, ams-LogEvent-41, LogEvent-8, sjc-LogEvent-27, ams-LogEvent-8, LogEvent-41, sjc-LogEvent-60, LogEvent-32, sjc-LogEvent-17, sjc-LogEvent-50, ams-LogEvent-60, ams-LogEvent-27, LogEvent-48, sjc-LogEvent-67, sjc-LogEvent-1, sjc-LogEvent-34, ams-LogEvent-44, ams-LogEvent-11, LogEvent-15, sjc-LogEvent-49, sjc-LogEvent-16, ams-LogEvent-59, ams-LogEvent-26, LogEvent-31, sjc-LogEvent-33, sjc-LogEvent-66, 
sjc-LogEvent-0, ams-LogEvent-43, LogEvent-14, ams-LogEvent-10, LogEvent-47, ams-LogEvent-46, ams-LogEvent-29, sjc-LogEvent-15, sjc-LogEvent-48, LogEvent-30, ams-LogEvent-63, sjc-LogEvent-65, ams-LogEvent-30, ams-LogEvent-13, LogEvent-13, sjc-LogEvent-32, LogEvent-46, ams-LogEvent-61, sjc-LogEvent-47, ams-LogEvent-28, LogEvent-29, sjc-LogEvent-14, ams-LogEvent-62, ams-LogEvent-45, LogEvent-12, sjc-LogEvent-31, ams-LogEvent-12, LogEvent-45, sjc-LogEvent-64, LogEvent-36, sjc-LogEvent-21, sjc-LogEvent-54, ams-LogEvent-48, ams-LogEvent-15, LogEvent-3, LogEvent-52, sjc-LogEvent-71, sjc-LogEvent-5, sjc-LogEvent-38, ams-LogEvent-65, ams-LogEvent-32, LogEvent-19, sjc-LogEvent-53, sjc-LogEvent-20, ams-LogEvent-47, LogEvent-2, ams-LogEvent-14, LogEvent-35, sjc-LogEvent-37, sjc-LogEvent-70, ams-LogEvent-64, sjc-LogEvent-4, ams-LogEvent-31, LogEvent-18, LogEvent-51, ams-LogEvent-50, ams-LogEvent-17, sjc-LogEvent-19, LogEvent-1, sjc-LogEvent-52, LogEvent-34, ams-LogEvent-67, sjc-LogEvent-69, ams-LogEvent-34, ams-LogEvent-1, sjc-LogEvent-3, LogEvent-17, sjc-LogEvent-36, LogEvent-50, ams-LogEvent-49, LogEvent-0, sjc-LogEvent-51, ams-LogEvent-16, LogEvent-33, sjc-LogEvent-18, ams-LogEvent-66, ams-LogEvent-33, LogEvent-16, sjc-LogEvent-35, ams-LogEvent-0, LogEvent-49, sjc-LogEvent-68, sjc-LogEvent-2] for group xt3-spark-invoker
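The assignment list above spans three topics; judging from the highest partition indices present, LogEvent has 60 partitions (0–59) while ams-LogEvent and sjc-LogEvent have 72 each (0–71). That total appears to be where the `204 output partitions` (and the 204-task set) reported further down comes from:

```shell
# Partition counts inferred from the max indices in the assignment list above.
echo $(( 60 + 72 + 72 ))   # prints 204
```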
- 17/08/03 06:41:23 INFO util.RecurringTimer: Started timer for JobGenerator at time 1501756920000
- 17/08/03 06:41:23 INFO scheduler.JobGenerator: Started JobGenerator at 1501756920000 ms
- 17/08/03 06:41:23 INFO scheduler.JobScheduler: Started JobScheduler
- 17/08/03 06:41:23 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e7d0db2{/streaming,null,AVAILABLE,@Spark}
- 17/08/03 06:41:23 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6b4fd7d{/streaming/json,null,AVAILABLE,@Spark}
- 17/08/03 06:41:23 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26f480c6{/streaming/batch,null,AVAILABLE,@Spark}
- 17/08/03 06:41:23 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7747cc1b{/streaming/batch/json,null,AVAILABLE,@Spark}
- 17/08/03 06:41:23 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26a9c6df{/static/streaming,null,AVAILABLE,@Spark}
- 17/08/03 06:41:23 INFO streaming.StreamingContext: StreamingContext started
- 17/08/03 06:41:30 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.185:41091) with ID 10
- 17/08/03 06:41:30 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.110:50272) with ID 3
- 17/08/03 06:41:30 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.79:45595) with ID 9
- 17/08/03 06:41:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid285.contextweb.prod:34715 with 1458.6 MB RAM, BlockManagerId(10, lga-grid285.contextweb.prod, 34715, None)
- 17/08/03 06:41:30 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.110:50273) with ID 2
- 17/08/03 06:41:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid260.contextweb.prod:51076 with 1458.6 MB RAM, BlockManagerId(3, lga-grid260.contextweb.prod, 51076, None)
- 17/08/03 06:41:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid279.contextweb.prod:44359 with 1458.6 MB RAM, BlockManagerId(9, lga-grid279.contextweb.prod, 44359, None)
- 17/08/03 06:41:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid260.contextweb.prod:43233 with 1458.6 MB RAM, BlockManagerId(2, lga-grid260.contextweb.prod, 43233, None)
- 17/08/03 06:41:31 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.110:50271) with ID 4
- 17/08/03 06:41:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid260.contextweb.prod:46377 with 1458.6 MB RAM, BlockManagerId(4, lga-grid260.contextweb.prod, 46377, None)
- 17/08/03 06:41:31 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.113:42070) with ID 8
- 17/08/03 06:41:31 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.221:33634) with ID 5
- 17/08/03 06:41:31 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.221:33633) with ID 6
- 17/08/03 06:41:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid263.contextweb.prod:44536 with 1458.6 MB RAM, BlockManagerId(8, lga-grid263.contextweb.prod, 44536, None)
- 17/08/03 06:41:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid271.contextweb.prod:58375 with 1458.6 MB RAM, BlockManagerId(5, lga-grid271.contextweb.prod, 58375, None)
- 17/08/03 06:41:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid271.contextweb.prod:39529 with 1458.6 MB RAM, BlockManagerId(6, lga-grid271.contextweb.prod, 39529, None)
- 17/08/03 06:41:34 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.201.3.190:45114) with ID 7
- 17/08/03 06:41:35 INFO storage.BlockManagerMasterEndpoint: Registering block manager lga-grid290.contextweb.prod:40939 with 1458.6 MB RAM, BlockManagerId(7, lga-grid290.contextweb.prod, 40939, None)
- 17/08/03 06:42:00 INFO scheduler.JobScheduler: Added jobs for time 1501756920000 ms
- 17/08/03 06:42:00 INFO scheduler.JobScheduler: Starting job streaming job 1501756920000 ms.0 from job set of time 1501756920000 ms
- 17/08/03 06:42:00 INFO invoker.LogEventDecoder: object serialization, url = http://lga-avro.pulse.prod/avro-schema-repo/
- 17/08/03 06:42:00 INFO spark.SparkContext: Starting job: sortByKey at ProcessLogEventRddFunction.java:133
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Registering RDD 8 (mapToPair at ProcessLogEventRddFunction.java:70)
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Got job 0 (sortByKey at ProcessLogEventRddFunction.java:133) with 204 output partitions
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (sortByKey at ProcessLogEventRddFunction.java:133)
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[8] at mapToPair at ProcessLogEventRddFunction.java:70), which has no missing parents
- 17/08/03 06:42:00 INFO invoker.LogEventDecoder: object serialization, url = http://lga-avro.pulse.prod/avro-schema-repo/
- 17/08/03 06:42:00 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 13.6 KB, free 366.3 MB)
- 17/08/03 06:42:00 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 7.8 KB, free 366.3 MB)
- 17/08/03 06:42:00 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.201.2.41:36277 (size: 7.8 KB, free: 366.3 MB)
- 17/08/03 06:42:00 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
- 17/08/03 06:42:00 INFO scheduler.DAGScheduler: Submitting 204 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[8] at mapToPair at ProcessLogEventRddFunction.java:70) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
- 17/08/03 06:42:00 INFO cluster.YarnScheduler: Adding task set 0.0 with 204 tasks
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 0, lga-grid279.contextweb.prod, executor 9, partition 7, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 12.0 in stage 0.0 (TID 1, lga-grid358.contextweb.prod, executor 1, partition 12, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 2, lga-grid285.contextweb.prod, executor 10, partition 4, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 3, lga-grid290.contextweb.prod, executor 7, partition 0, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 4, lga-grid260.contextweb.prod, executor 3, partition 8, PROCESS_LOCAL, 4707 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 5, lga-grid260.contextweb.prod, executor 2, partition 2, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 6, lga-grid271.contextweb.prod, executor 5, partition 5, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 26.0 in stage 0.0 (TID 7, lga-grid271.contextweb.prod, executor 6, partition 26, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 17.0 in stage 0.0 (TID 8, lga-grid263.contextweb.prod, executor 8, partition 17, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:00 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 9, lga-grid260.contextweb.prod, executor 4, partition 1, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:42:03 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid279.contextweb.prod:44359 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid285.contextweb.prod:34715 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid290.contextweb.prod:40939 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid263.contextweb.prod:44536 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid358.contextweb.prod:37498 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid260.contextweb.prod:43233 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid260.contextweb.prod:46377 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:06 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid260.contextweb.prod:51076 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:08 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid271.contextweb.prod:39529 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:42:08 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on lga-grid271.contextweb.prod:58375 (size: 7.8 KB, free: 1458.6 MB)
- 17/08/03 06:44:29 INFO scheduler.TaskSetManager: Starting task 25.0 in stage 0.0 (TID 10, lga-grid358.contextweb.prod, executor 1, partition 25, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:44:29 INFO scheduler.TaskSetManager: Finished task 12.0 in stage 0.0 (TID 1) in 149337 ms on lga-grid358.contextweb.prod (executor 1) (1/204)
- 17/08/03 06:45:00 INFO scheduler.JobScheduler: Added jobs for time 1501757100000 ms
- 17/08/03 06:45:44 INFO scheduler.TaskSetManager: Starting task 13.0 in stage 0.0 (TID 11, lga-grid260.contextweb.prod, executor 4, partition 13, PROCESS_LOCAL, 4707 bytes)
- 17/08/03 06:45:44 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 9) in 223428 ms on lga-grid260.contextweb.prod (executor 4) (2/204)
- 17/08/03 06:46:33 INFO scheduler.TaskSetManager: Starting task 20.0 in stage 0.0 (TID 12, lga-grid285.contextweb.prod, executor 10, partition 20, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:46:33 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 2) in 273175 ms on lga-grid285.contextweb.prod (executor 10) (3/204)
- 17/08/03 06:46:40 INFO scheduler.TaskSetManager: Starting task 31.0 in stage 0.0 (TID 13, lga-grid358.contextweb.prod, executor 1, partition 31, PROCESS_LOCAL, 4707 bytes)
- 17/08/03 06:46:40 INFO scheduler.TaskSetManager: Finished task 25.0 in stage 0.0 (TID 10) in 130084 ms on lga-grid358.contextweb.prod (executor 1) (4/204)
- 17/08/03 06:46:51 INFO scheduler.TaskSetManager: Starting task 23.0 in stage 0.0 (TID 14, lga-grid263.contextweb.prod, executor 8, partition 23, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:46:51 INFO scheduler.TaskSetManager: Finished task 17.0 in stage 0.0 (TID 8) in 290802 ms on lga-grid263.contextweb.prod (executor 8) (5/204)
- 17/08/03 06:48:00 INFO scheduler.JobScheduler: Added jobs for time 1501757280000 ms
- 17/08/03 06:48:17 INFO scheduler.TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, lga-grid279.contextweb.prod, executor 9, partition 15, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:48:17 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 0) in 376691 ms on lga-grid279.contextweb.prod (executor 9) (6/204)
- 17/08/03 06:48:40 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 16, lga-grid260.contextweb.prod, executor 2, partition 3, PROCESS_LOCAL, 4711 bytes)
- 17/08/03 06:48:40 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 5) in 399999 ms on lga-grid260.contextweb.prod (executor 2) (7/204)