- 16/10/11 13:20:09 INFO JobScheduler: Finished job streaming job 1476184808000 ms.1 from job set of time 1476184808000 ms
- 16/10/11 13:20:09 INFO JobScheduler: Total delay: 1,178 s for time 1476184808000 ms (execution: 0,122 s)
- 16/10/11 13:20:09 DEBUG JobGenerator: Got event ClearMetadata(1476184808000 ms)
- 16/10/11 13:20:09 DEBUG DStreamGraph: Clearing metadata for time 1476184808000 ms
- 16/10/11 13:20:09 DEBUG ForEachDStream: Clearing references to old RDDs: []
- 16/10/11 13:20:09 DEBUG ForEachDStream: Unpersisting old RDDs:
- 16/10/11 13:20:09 DEBUG ForEachDStream: Cleared 0 RDDs that were older than 1476184806000 ms:
- 16/10/11 13:20:09 DEBUG MappedDStream: Clearing references to old RDDs: [1476184806000 ms -> 597]
- 16/10/11 13:20:09 DEBUG MappedDStream: Unpersisting old RDDs: 597
- 16/10/11 13:20:09 INFO MapPartitionsRDD: Removing RDD 597 from persistence list
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 597
- 16/10/11 13:20:09 INFO BlockManager: Removing RDD 597
- 16/10/11 13:20:09 DEBUG MappedDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Clearing references to old RDDs: [1476184806000 ms -> 596]
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Unpersisting old RDDs: 596
- 16/10/11 13:20:09 INFO KafkaRDD: Removing RDD 596 from persistence list
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 597, response is 0
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 596
- 16/10/11 13:20:09 INFO BlockManager: Removing RDD 596
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
- 16/10/11 13:20:09 DEBUG ForEachDStream: Clearing references to old RDDs: []
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 596, response is 0
- 16/10/11 13:20:09 DEBUG ForEachDStream: Unpersisting old RDDs:
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
- 16/10/11 13:20:09 DEBUG ForEachDStream: Cleared 0 RDDs that were older than 1476184806000 ms:
- 16/10/11 13:20:09 DEBUG MappedDStream: Clearing references to old RDDs: [1476184806000 ms -> 599]
- 16/10/11 13:20:09 DEBUG MappedDStream: Unpersisting old RDDs: 599
- 16/10/11 13:20:09 INFO MapPartitionsRDD: Removing RDD 599 from persistence list
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 599
- 16/10/11 13:20:09 INFO BlockManager: Removing RDD 599
- 16/10/11 13:20:09 DEBUG MappedDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Clearing references to old RDDs: [1476184806000 ms -> 598]
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 599, response is 0
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Unpersisting old RDDs: 598
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
- 16/10/11 13:20:09 INFO KafkaRDD: Removing RDD 598 from persistence list
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 598
- 16/10/11 13:20:09 INFO BlockManager: Removing RDD 598
- 16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
- 16/10/11 13:20:09 DEBUG DStreamGraph: Cleared old metadata for time 1476184808000 ms
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 598, response is 0
- 16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
- 16/10/11 13:20:09 INFO ReceivedBlockTracker: Deleting batches:
- 16/10/11 13:20:09 INFO InputInfoTracker: remove old batch metadata: 1476184804000 ms
- 16/10/11 13:20:10 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184810000
- 16/10/11 13:20:10 DEBUG JobGenerator: Got event GenerateJobs(1476184810000 ms)
- 16/10/11 13:20:10 DEBUG DStreamGraph: Generating jobs for time 1476184810000 ms
- 16/10/11 13:20:10 DEBUG MappedDStream: Time 1476184810000 ms is valid
- 16/10/11 13:20:10 DEBUG DirectKafkaInputDStream: Time 1476184810000 ms is valid
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-11
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-4
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-9
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-0
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-10
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-3
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-15
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-8
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-13
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-2
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-14
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-7
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-5
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-12
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-1
- 16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-6
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-12 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287152 for partition sapxm.adserving.log.ad_request-12
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-11 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288191 for partition sapxm.adserving.log.ad_request-11
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-14 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288142 for partition sapxm.adserving.log.ad_request-14
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-13 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288125 for partition sapxm.adserving.log.ad_request-13
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-15 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287784 for partition sapxm.adserving.log.ad_request-15
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-0 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287137 for partition sapxm.adserving.log.ad_request-0
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-2 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288497 for partition sapxm.adserving.log.ad_request-2
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-1 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.ad_request-1
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-4 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287649 for partition sapxm.adserving.log.ad_request-4
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-3 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287854 for partition sapxm.adserving.log.ad_request-3
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-6 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287684 for partition sapxm.adserving.log.ad_request-6
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-5 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287421 for partition sapxm.adserving.log.ad_request-5
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-8 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287664 for partition sapxm.adserving.log.ad_request-8
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-7 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288177 for partition sapxm.adserving.log.ad_request-7
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-10 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287290 for partition sapxm.adserving.log.ad_request-10
- 16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-9 to latest offset.
- 16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287903 for partition sapxm.adserving.log.ad_request-9
- 16/10/11 13:20:10 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1) +++
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + declared fields: 1
- 16/10/11 13:20:10 DEBUG ClosureCleaner: public static final long com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.serialVersionUID
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:20:10 DEBUG ClosureCleaner: public final java.lang.Object com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.apply(java.lang.Object)
- 16/10/11 13:20:10 DEBUG ClosureCleaner: public final byte[] com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.apply(org.apache.kafka.clients.consumer.ConsumerRecord)
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + outer classes: 0
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + outer objects: 0
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
- 16/10/11 13:20:10 DEBUG ClosureCleaner: + there are no enclosing objects!
- 16/10/11 13:20:10 DEBUG ClosureCleaner: +++ closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1) is now cleaned +++
- 16/10/11 13:20:10 DEBUG MappedDStream: Time 1476184810000 ms is valid
- 16/10/11 13:20:10 DEBUG DirectKafkaInputDStream: Time 1476184810000 ms is valid
- 16/10/11 13:20:10 INFO ConsumerCoordinator: Revoking previously assigned partitions [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14] for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
- 16/10/11 13:20:10 INFO AbstractCoordinator: (Re-)joining group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
- 16/10/11 13:20:10 DEBUG AbstractCoordinator: Sending JoinGroup ({group_id=87a6dfd6-9832-4140-ae19-7f6f583be8ad,session_timeout=60000,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=36 cap=36]}]}) to coordinator 10.1.1.88:9092 (id: 2147483647 rack: null)
- 16/10/11 13:20:12 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184812000
- 16/10/11 13:20:14 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184814000
- 16/10/11 13:20:16 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184816000
- 16/10/11 13:20:18 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184818000
- 16/10/11 13:20:20 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184820000
- 16/10/11 13:20:22 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184822000
- 16/10/11 13:20:24 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184824000
- 16/10/11 13:20:26 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184826000
- 16/10/11 13:20:28 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184828000
- 16/10/11 13:20:30 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184830000
- 16/10/11 13:20:32 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184832000
- 16/10/11 13:20:34 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184834000
- 16/10/11 13:20:36 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184836000
- 16/10/11 13:20:38 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184838000
- 16/10/11 13:20:40 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184840000
- 16/10/11 13:20:42 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184842000
- 16/10/11 13:20:44 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184844000
- 16/10/11 13:20:46 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184846000
- 16/10/11 13:20:48 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184848000
- 16/10/11 13:20:50 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184850000
- 16/10/11 13:20:52 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184852000
- 16/10/11 13:20:54 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184854000
- 16/10/11 13:20:56 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184856000
- 16/10/11 13:20:58 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184858000
- 16/10/11 13:21:00 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184860000
- 16/10/11 13:21:02 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184862000
- 16/10/11 13:21:04 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184864000
- 16/10/11 13:21:06 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184866000
- 16/10/11 13:21:08 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184868000
- 16/10/11 13:21:08 DEBUG AbstractCoordinator: Received successful join group response for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad: {error_code=0,generation_id=3,group_protocol=range,leader_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,members=[{member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=36 cap=36]}]}
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Performing assignment for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad using strategy range with subscriptions {consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd=Subscription(topics=[sapxm.adserving.log.view])}
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Finished assignment for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad: {consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd=Assignment(partitions=[sapxm.adserving.log.view-0, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14, sapxm.adserving.log.view-15])}
- 16/10/11 13:21:08 DEBUG AbstractCoordinator: Sending leader SyncGroup for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad to coordinator 10.1.1.88:9092 (id: 2147483647 rack: null): {group_id=87a6dfd6-9832-4140-ae19-7f6f583be8ad,generation_id=3,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,group_assignment=[{member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=104 cap=104]}]}
- 16/10/11 13:21:08 INFO AbstractCoordinator: Successfully joined group 87a6dfd6-9832-4140-ae19-7f6f583be8ad with generation 3
- 16/10/11 13:21:08 INFO ConsumerCoordinator: Setting newly assigned partitions [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14] for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad fetching committed offsets for partitions: [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14]
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-3
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-4
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-1
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-2
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-0
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-11
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-12
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-9
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-10
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-7
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-8
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-5
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-6
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-15
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-13
- 16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-14
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-3 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.view-3
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-4 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287755 for partition sapxm.adserving.log.view-4
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-1 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288111 for partition sapxm.adserving.log.view-1
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-2 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288627 for partition sapxm.adserving.log.view-2
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-0 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287276 for partition sapxm.adserving.log.view-0
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-11 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288341 for partition sapxm.adserving.log.view-11
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-12 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287280 for partition sapxm.adserving.log.view-12
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-9 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288009 for partition sapxm.adserving.log.view-9
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-10 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287426 for partition sapxm.adserving.log.view-10
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-7 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288297 for partition sapxm.adserving.log.view-7
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-8 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287781 for partition sapxm.adserving.log.view-8
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-5 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287558 for partition sapxm.adserving.log.view-5
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-6 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287822 for partition sapxm.adserving.log.view-6
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-15 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287916 for partition sapxm.adserving.log.view-15
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-13 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288259 for partition sapxm.adserving.log.view-13
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-14 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288271 for partition sapxm.adserving.log.view-14
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-0
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-12
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-5
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-13
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-1
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-9
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-14
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-6
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-10
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-2
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-4
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-7
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-15
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-3
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-11
- 16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-8
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-3 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-4 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-1 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-13 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-10 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-7 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-2 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-14 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-11 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-8 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-5 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-3 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-15 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-0 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-12 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-9 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-6 since it is no longer fetchable
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.view-3
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-4 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287755 for partition sapxm.adserving.log.view-4
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-1 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288113 for partition sapxm.adserving.log.view-1
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-2 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288631 for partition sapxm.adserving.log.view-2
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-0 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287279 for partition sapxm.adserving.log.view-0
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-11 to latest offset.
- 16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288345 for partition sapxm.adserving.log.view-11
- 16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-12 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287284 for partition sapxm.adserving.log.view-12
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-9 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288014 for partition sapxm.adserving.log.view-9
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-10 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287429 for partition sapxm.adserving.log.view-10
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-7 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288301 for partition sapxm.adserving.log.view-7
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-8 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287784 for partition sapxm.adserving.log.view-8
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-5 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287564 for partition sapxm.adserving.log.view-5
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-6 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287826 for partition sapxm.adserving.log.view-6
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-15 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287917 for partition sapxm.adserving.log.view-15
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-13 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288262 for partition sapxm.adserving.log.view-13
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-14 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288272 for partition sapxm.adserving.log.view-14
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.apply(java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final byte[] com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.apply(org.apache.kafka.clients.consumer.ConsumerRecord)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2) is now cleaned +++
- 16/10/11 13:21:09 DEBUG DStreamGraph: Generated 2 jobs for time 1476184810000 ms
- 16/10/11 13:21:09 INFO JobScheduler: Added jobs for time 1476184810000 ms
- 16/10/11 13:21:09 DEBUG JobGenerator: Got event GenerateJobs(1476184812000 ms)
- 16/10/11 13:21:09 DEBUG DStreamGraph: Generating jobs for time 1476184812000 ms
- 16/10/11 13:21:09 DEBUG MappedDStream: Time 1476184812000 ms is valid
- 16/10/11 13:21:09 DEBUG DirectKafkaInputDStream: Time 1476184812000 ms is valid
- 16/10/11 13:21:09 INFO AbstractCoordinator: Marking the coordinator 10.1.1.88:9092 (id: 2147483647 rack: null) dead for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
- 16/10/11 13:21:09 INFO JobScheduler: Starting job streaming job 1476184810000 ms.0 from job set of time 1476184810000 ms
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-11
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-4
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-9
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-0
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-10
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-3
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-15
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-8
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-13
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-2
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-14
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-7
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-5
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-12
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-1
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-6
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-12 to latest offset.
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
- 16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 INFO DAGScheduler: Got job 612 (print at SparkJobMinimal.scala:59) with 1 output partitions
- 16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 612 (print at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 612)
- 16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 612 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59), which has no missing parents
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 612)
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_612 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_612 locally took 0 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_612 without replication took 0 ms
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_612_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
- 16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_612_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
- 16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_612_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_612_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_612_piece0 locally took 1 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_612_piece0 without replication took 1 ms
- 16/10/11 13:21:09 INFO SparkContext: Created broadcast 612 from broadcast at DAGScheduler.scala:1012
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 612 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(0)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 612.0 with 1 tasks
- 16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 612.0: 0
- 16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 612.0: NO_PREF, ANY
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_612, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 612.0 (TID 828, localhost, partition 0, PROCESS_LOCAL, 6951 bytes)
- 16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
- 16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 612.0 (TID 828)
- 16/10/11 13:21:09 DEBUG Executor: Task 828's epoch is 0
- 16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_612
- 16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_612 is StorageLevel(disk, memory, deserialized, 1 replicas)
- 16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.ad_request, partition 11 offsets 288183 -> 288191
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288183 requested 288183
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288184 requested 288184
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288185 requested 288185
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288186 requested 288186
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288187 requested 288187
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288188 requested 288188
- 16/10/11 13:21:09 DEBUG NetworkClient: Initialize connection to node 0 for sending metadata request
- 16/10/11 13:21:09 DEBUG NetworkClient: Initiating connection to node 0 at 10.1.1.88:9092.
- 16/10/11 13:21:09 DEBUG NetworkClient: Sending metadata request {topics=[sapxm.adserving.log.ad_request]} to node 2
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287338 for partition sapxm.adserving.log.ad_request-12
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-11 to latest offset.
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-sent
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-received
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.latency
- 16/10/11 13:21:09 DEBUG NetworkClient: Completed connection to node 0
- 16/10/11 13:21:09 DEBUG Metadata: Updated cluster metadata version 3 to Cluster(nodes = [10.1.1.250:9092 (id: 1 rack: null), 10.1.1.88:9092 (id: 0 rack: null), 10.1.1.83:9092 (id: 2 rack: null)], partitions = [Partition(topic = sapxm.adserving.log.ad_request, partition = 12, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 11, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 14, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 13, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 15, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 0, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 2, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 1, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 4, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 3, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 6, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 5, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 8, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 7, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 10, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 9, leader = 0, replicas = [0,1,2,], isr = [0,1,2,]])
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288393 for partition sapxm.adserving.log.ad_request-11
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-14 to latest offset.
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288322 for partition sapxm.adserving.log.ad_request-14
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-13 to latest offset.
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Polled [sapxm.adserving.log.ad_request-11] 205
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288189 requested 288189
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288190 requested 288190
- 16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 612.0 (TID 828). 1833 bytes result sent to driver
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_612, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 612.0 (TID 828) in 91 ms on localhost (1/1)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 612.0, whose tasks have all completed, from pool
- 16/10/11 13:21:09 INFO DAGScheduler: ResultStage 612 (print at SparkJobMinimal.scala:59) finished in 0,091 s
- 16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 612, remaining stages = 0
- 16/10/11 13:21:09 INFO DAGScheduler: Job 612 finished: print at SparkJobMinimal.scala:59, took 0,093241 s
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
- 16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:59
- 16/10/11 13:21:09 INFO DAGScheduler: Got job 613 (print at SparkJobMinimal.scala:59) with 1 output partitions
- 16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 613 (print at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 613)
- 16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 613 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59), which has no missing parents
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 613)
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_613 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_613 locally took 0 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_613 without replication took 0 ms
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_613_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
- 16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_613_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
- 16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_613_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_613_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_613_piece0 locally took 0 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_613_piece0 without replication took 0 ms
- 16/10/11 13:21:09 INFO SparkContext: Created broadcast 613 from broadcast at DAGScheduler.scala:1012
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 613 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
- 16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(1)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 613.0 with 1 tasks
- 16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 613.0: 0
- 16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 613.0: NO_PREF, ANY
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_613, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 613.0 (TID 829, localhost, partition 1, PROCESS_LOCAL, 6951 bytes)
- 16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
- 16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 613.0 (TID 829)
- 16/10/11 13:21:09 DEBUG Executor: Task 829's epoch is 0
- 16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_613
- 16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_613 is StorageLevel(disk, memory, deserialized, 1 replicas)
- 16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.ad_request, partition 4 offsets 287643 -> 287649
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287643 requested 287643
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287644 requested 287644
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287645 requested 287645
- 16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 613.0 (TID 829). 1081 bytes result sent to driver
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_613, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 613.0 (TID 829) in 3 ms on localhost (1/1)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 613.0, whose tasks have all completed, from pool
- 16/10/11 13:21:09 INFO DAGScheduler: ResultStage 613 (print at SparkJobMinimal.scala:59) finished in 0,003 s
- 16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 613, remaining stages = 0
- 16/10/11 13:21:09 INFO DAGScheduler: Job 613 finished: print at SparkJobMinimal.scala:59, took 0,005053 s
- 16/10/11 13:21:09 INFO JobScheduler: Finished job streaming job 1476184810000 ms.0 from job set of time 1476184810000 ms
- 16/10/11 13:21:09 INFO JobScheduler: Starting job streaming job 1476184810000 ms.1 from job set of time 1476184810000 ms
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
- -------------------------------------------
- Time: 1476184810000 ms
- -------------------------------------------
- [B@45027835
- [B@3a71b22f
- [B@2585b0af
- [B@7e8fa0c9
- [B@281abcfd
- [B@1fcd931b
- [B@17c55885
- [B@144f3b5c
- [B@694f907c
- [B@1a699305
- ...
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
- 16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
- 16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
- 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
- 16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:64
- 16/10/11 13:21:09 INFO DAGScheduler: Got job 614 (print at SparkJobMinimal.scala:64) with 1 output partitions
- 16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 614 (print at SparkJobMinimal.scala:64)
- 16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 614)
- 16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 614 (MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64), which has no missing parents
- 16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 614)
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_614 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_614 locally took 0 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_614 without replication took 0 ms
- 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_614_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
- 16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_614_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
- 16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_614_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_614_piece0
- 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_614_piece0 locally took 0 ms
- 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_614_piece0 without replication took 0 ms
- 16/10/11 13:21:09 INFO SparkContext: Created broadcast 614 from broadcast at DAGScheduler.scala:1012
- 16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 614 (MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
- 16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(0)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 614.0 with 1 tasks
- 16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 614.0: 0
- 16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 614.0: NO_PREF, ANY
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_614, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 614.0 (TID 830, localhost, partition 0, PROCESS_LOCAL, 6945 bytes)
- 16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
- 16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 614.0 (TID 830)
- 16/10/11 13:21:09 DEBUG Executor: Task 830's epoch is 0
- 16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_614
- 16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_614 is StorageLevel(disk, memory, deserialized, 1 replicas)
- 16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.view, partition 0 offsets 287068 -> 287279
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287067 requested 287068
- 16/10/11 13:21:09 INFO CachedKafkaConsumer: Initial fetch for spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 287068
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Seeking to sapxm.adserving.log.view-0 287068
- 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to offset 287068 for partition sapxm.adserving.log.view-0
- 16/10/11 13:21:09 DEBUG NetworkClient: Initialize connection to node 0 for sending metadata request
- 16/10/11 13:21:09 DEBUG NetworkClient: Initiating connection to node 0 at 10.1.1.88:9092.
- 16/10/11 13:21:09 DEBUG Fetcher: Discarding fetch response for partition sapxm.adserving.log.view-0 since its offset 287072 does not match the expected offset 287068
- 16/10/11 13:21:09 DEBUG NetworkClient: Sending metadata request {topics=[sapxm.adserving.log.view]} to node 2
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288315 for partition sapxm.adserving.log.ad_request-13
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-15 to latest offset.
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-sent
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-received
- 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.latency
- 16/10/11 13:21:09 DEBUG NetworkClient: Completed connection to node 0
- 16/10/11 13:21:09 DEBUG Metadata: Updated cluster metadata version 3 to Cluster(nodes = [10.1.1.83:9092 (id: 2 rack: null), 10.1.1.250:9092 (id: 1 rack: null), 10.1.1.88:9092 (id: 0 rack: null)], partitions = [Partition(topic = sapxm.adserving.log.view, partition = 3, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 4, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 1, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 2, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 0, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 11, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 12, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 9, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 10, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 7, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 8, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 5, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 6, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 15, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 13, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 14, leader = 1, replicas = [0,1,2,], isr = [1,0,2,]])
- 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287970 for partition sapxm.adserving.log.ad_request-15
- 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-0 to latest offset.
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Polled [sapxm.adserving.log.view-0] 211
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287069 requested 287069
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287070 requested 287070
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287071 requested 287071
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287072 requested 287072
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287073 requested 287073
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287074 requested 287074
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287075 requested 287075
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287076 requested 287076
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287077 requested 287077
- 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287078 requested 287078
- 16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 614.0 (TID 830). 958 bytes result sent to driver
- 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_614, runningTasks: 0
- 16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 614.0 (TID 830) in 60 ms on localhost (1/1)
- 16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 614.0, whose tasks have all completed, from pool
- 16/10/11 13:21:09 INFO DAGScheduler: ResultStage 614 (print at SparkJobMinimal.scala:64) finished in 0,061 s
- 16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 614, remaining stages = 0
- 16/10/11 13:21:09 INFO DAGScheduler: Job 614 finished: print at SparkJobMinimal.scala:64, took 0,063755 s
- 16/10/11 13:21:09 INFO JobScheduler: Finished job streaming job 1476184810000 ms.1 from job set of time 1476184810000 ms
- 16/10/11 13:21:09 INFO JobScheduler: Total delay: 59,448 s for time 1476184810000 ms (execution: 0,174 s)
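Editorial aside: the final line is the key symptom in this log. Spark Streaming's "Total delay" spans from a batch's scheduled time to its completion and breaks down as scheduling delay plus processing time, so a total of 59,448 s against only 0,174 s of execution means the batch spent roughly 59,3 s queued behind earlier batches; the job is falling progressively behind its batch interval. A small sketch of that arithmetic (values taken from the line above, locale commas read as decimal points):

```java
public class StreamingDelay {
    public static void main(String[] args) {
        // "Total delay" = scheduling delay + processing (execution) time.
        long totalDelayMs = 59448;  // 59,448 s reported by JobScheduler
        long executionMs  = 174;    // 0,174 s of actual processing

        // The remainder is time the batch sat waiting in the job queue.
        long schedulingDelayMs = totalDelayMs - executionMs;
        System.out.println("scheduling delay: " + schedulingDelayMs + " ms"); // 59274 ms
    }
}
```

When scheduling delay grows batch over batch like this, the usual remedies are a longer batch interval, more parallelism, or enabling backpressure (`spark.streaming.backpressure.enabled`).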