16/10/11 13:20:09 INFO JobScheduler: Finished job streaming job 1476184808000 ms.1 from job set of time 1476184808000 ms
16/10/11 13:20:09 INFO JobScheduler: Total delay: 1,178 s for time 1476184808000 ms (execution: 0,122 s)
16/10/11 13:20:09 DEBUG JobGenerator: Got event ClearMetadata(1476184808000 ms)
16/10/11 13:20:09 DEBUG DStreamGraph: Clearing metadata for time 1476184808000 ms
16/10/11 13:20:09 DEBUG ForEachDStream: Clearing references to old RDDs: []
16/10/11 13:20:09 DEBUG ForEachDStream: Unpersisting old RDDs:
16/10/11 13:20:09 DEBUG ForEachDStream: Cleared 0 RDDs that were older than 1476184806000 ms:
16/10/11 13:20:09 DEBUG MappedDStream: Clearing references to old RDDs: [1476184806000 ms -> 597]
16/10/11 13:20:09 DEBUG MappedDStream: Unpersisting old RDDs: 597
16/10/11 13:20:09 INFO MapPartitionsRDD: Removing RDD 597 from persistence list
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 597
16/10/11 13:20:09 INFO BlockManager: Removing RDD 597
16/10/11 13:20:09 DEBUG MappedDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Clearing references to old RDDs: [1476184806000 ms -> 596]
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Unpersisting old RDDs: 596
16/10/11 13:20:09 INFO KafkaRDD: Removing RDD 596 from persistence list
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 597, response is 0
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 596
16/10/11 13:20:09 INFO BlockManager: Removing RDD 596
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
16/10/11 13:20:09 DEBUG ForEachDStream: Clearing references to old RDDs: []
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 596, response is 0
16/10/11 13:20:09 DEBUG ForEachDStream: Unpersisting old RDDs:
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
16/10/11 13:20:09 DEBUG ForEachDStream: Cleared 0 RDDs that were older than 1476184806000 ms:
16/10/11 13:20:09 DEBUG MappedDStream: Clearing references to old RDDs: [1476184806000 ms -> 599]
16/10/11 13:20:09 DEBUG MappedDStream: Unpersisting old RDDs: 599
16/10/11 13:20:09 INFO MapPartitionsRDD: Removing RDD 599 from persistence list
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 599
16/10/11 13:20:09 INFO BlockManager: Removing RDD 599
16/10/11 13:20:09 DEBUG MappedDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Clearing references to old RDDs: [1476184806000 ms -> 598]
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 599, response is 0
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Unpersisting old RDDs: 598
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
16/10/11 13:20:09 INFO KafkaRDD: Removing RDD 598 from persistence list
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: removing RDD 598
16/10/11 13:20:09 INFO BlockManager: Removing RDD 598
16/10/11 13:20:09 DEBUG DirectKafkaInputDStream: Cleared 1 RDDs that were older than 1476184806000 ms: 1476184806000 ms
16/10/11 13:20:09 DEBUG DStreamGraph: Cleared old metadata for time 1476184808000 ms
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Done removing RDD 598, response is 0
16/10/11 13:20:09 DEBUG BlockManagerSlaveEndpoint: Sent response: 0 to 10.94.121.6:55034
16/10/11 13:20:09 INFO ReceivedBlockTracker: Deleting batches:
16/10/11 13:20:09 INFO InputInfoTracker: remove old batch metadata: 1476184804000 ms
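
Note: the block above is Spark Streaming's automatic housekeeping after a finished batch: the DStream graph drops references to generated RDDs older than the remember duration and unpersists them. A minimal sketch of the assumed driver setup, consistent with the 2000 ms RecurringTimer callbacks later in this log (a 2 s batch interval); this is a reconstruction, not the original SparkJobMinimal source:

// Sketch only: a StreamingContext with a 2 s batch, matching the timer ticks below.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("SparkJobMinimal")
// "spark.streaming.unpersist" defaults to true and drives the
// "Unpersisting old RDDs" / "Removing RDD ... from persistence list" lines above.
val ssc = new StreamingContext(conf, Seconds(2))
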
16/10/11 13:20:10 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184810000
16/10/11 13:20:10 DEBUG JobGenerator: Got event GenerateJobs(1476184810000 ms)
16/10/11 13:20:10 DEBUG DStreamGraph: Generating jobs for time 1476184810000 ms
16/10/11 13:20:10 DEBUG MappedDStream: Time 1476184810000 ms is valid
16/10/11 13:20:10 DEBUG DirectKafkaInputDStream: Time 1476184810000 ms is valid
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-11
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-4
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-9
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-0
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-10
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-3
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-15
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-8
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-13
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-2
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-14
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-7
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-5
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-12
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-1
16/10/11 13:20:10 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-6
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-12 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287152 for partition sapxm.adserving.log.ad_request-12
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-11 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288191 for partition sapxm.adserving.log.ad_request-11
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-14 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288142 for partition sapxm.adserving.log.ad_request-14
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-13 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288125 for partition sapxm.adserving.log.ad_request-13
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-15 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287784 for partition sapxm.adserving.log.ad_request-15
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-0 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287137 for partition sapxm.adserving.log.ad_request-0
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-2 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288497 for partition sapxm.adserving.log.ad_request-2
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-1 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.ad_request-1
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-4 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287649 for partition sapxm.adserving.log.ad_request-4
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-3 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287854 for partition sapxm.adserving.log.ad_request-3
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-6 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287684 for partition sapxm.adserving.log.ad_request-6
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-5 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287421 for partition sapxm.adserving.log.ad_request-5
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-8 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287664 for partition sapxm.adserving.log.ad_request-8
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-7 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 288177 for partition sapxm.adserving.log.ad_request-7
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-10 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287290 for partition sapxm.adserving.log.ad_request-10
16/10/11 13:20:10 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-9 to latest offset.
16/10/11 13:20:10 DEBUG Fetcher: Fetched offset 287903 for partition sapxm.adserving.log.ad_request-9
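
Note: the "Seeking to end of partition" / "Resetting offset ... to latest offset." pairs above are the driver-side consumer resolving the current end offset of every ad_request partition so the next batch's offset ranges can be sized. A hedged sketch of the equivalent raw-consumer calls (names here are illustrative, not from the original job):

import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// "Seeking to end of partition ..." corresponds to seekToEnd; the lazy seek is
// resolved by position(), which produces "Resetting offset ... to latest offset."
// followed by "Fetched offset N for partition ...".
def latestOffsets(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                  parts: Seq[TopicPartition]): Map[TopicPartition, Long] = {
  consumer.seekToEnd(parts.asJava)
  parts.map(tp => tp -> consumer.position(tp)).toMap
}
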
16/10/11 13:20:10 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1) +++
16/10/11 13:20:10 DEBUG ClosureCleaner: + declared fields: 1
16/10/11 13:20:10 DEBUG ClosureCleaner: public static final long com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.serialVersionUID
16/10/11 13:20:10 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:20:10 DEBUG ClosureCleaner: public final java.lang.Object com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.apply(java.lang.Object)
16/10/11 13:20:10 DEBUG ClosureCleaner: public final byte[] com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1.apply(org.apache.kafka.clients.consumer.ConsumerRecord)
16/10/11 13:20:10 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:20:10 DEBUG ClosureCleaner: + outer classes: 0
16/10/11 13:20:10 DEBUG ClosureCleaner: + outer objects: 0
16/10/11 13:20:10 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:20:10 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
16/10/11 13:20:10 DEBUG ClosureCleaner: + there are no enclosing objects!
16/10/11 13:20:10 DEBUG ClosureCleaner: +++ closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$1) is now cleaned +++
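
Note: the closure just cleaned has signature apply(ConsumerRecord): byte[], i.e. a map from a Kafka record to its value. A plausible reconstruction of the stream it belongs to; the topic, group id, and broker address are taken from this log, everything else is assumed:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "10.1.1.88:9092", // one of the three brokers in the metadata line below
  "key.deserializer"   -> classOf[ByteArrayDeserializer],
  "value.deserializer" -> classOf[ByteArrayDeserializer],
  "group.id"           -> "87a6dfd6-9832-4140-ae19-7f6f583be8ad",
  "auto.offset.reset"  -> "latest" // matches "Resetting offset ... to latest offset."
)

val requests = KafkaUtils
  .createDirectStream[Array[Byte], Array[Byte]](
    ssc, PreferConsistent,
    Subscribe[Array[Byte], Array[Byte]](Seq("sapxm.adserving.log.ad_request"), kafkaParams))
  .map((r: ConsumerRecord[Array[Byte], Array[Byte]]) => r.value) // the cleaned <function1>
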
16/10/11 13:20:10 DEBUG MappedDStream: Time 1476184810000 ms is valid
16/10/11 13:20:10 DEBUG DirectKafkaInputDStream: Time 1476184810000 ms is valid
16/10/11 13:20:10 INFO ConsumerCoordinator: Revoking previously assigned partitions [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14] for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
16/10/11 13:20:10 INFO AbstractCoordinator: (Re-)joining group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
16/10/11 13:20:10 DEBUG AbstractCoordinator: Sending JoinGroup ({group_id=87a6dfd6-9832-4140-ae19-7f6f583be8ad,session_timeout=60000,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=36 cap=36]}]}) to coordinator 10.1.1.88:9092 (id: 2147483647 rack: null)
16/10/11 13:20:12 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184812000
16/10/11 13:20:14 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184814000
16/10/11 13:20:16 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184816000
16/10/11 13:20:18 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184818000
16/10/11 13:20:20 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184820000
16/10/11 13:20:22 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184822000
16/10/11 13:20:24 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184824000
16/10/11 13:20:26 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184826000
16/10/11 13:20:28 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184828000
16/10/11 13:20:30 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184830000
16/10/11 13:20:32 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184832000
16/10/11 13:20:34 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184834000
16/10/11 13:20:36 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184836000
16/10/11 13:20:38 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184838000
16/10/11 13:20:40 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184840000
16/10/11 13:20:42 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184842000
16/10/11 13:20:44 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184844000
16/10/11 13:20:46 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184846000
16/10/11 13:20:48 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184848000
16/10/11 13:20:50 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184850000
16/10/11 13:20:52 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184852000
16/10/11 13:20:54 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184854000
16/10/11 13:20:56 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184856000
16/10/11 13:20:58 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184858000
16/10/11 13:21:00 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184860000
16/10/11 13:21:02 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184862000
16/10/11 13:21:04 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184864000
16/10/11 13:21:06 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184866000
16/10/11 13:21:08 DEBUG RecurringTimer: Callback for JobGenerator called at time 1476184868000
16/10/11 13:21:08 DEBUG AbstractCoordinator: Received successful join group response for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad: {error_code=0,generation_id=3,group_protocol=range,leader_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,members=[{member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=36 cap=36]}]}
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Performing assignment for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad using strategy range with subscriptions {consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd=Subscription(topics=[sapxm.adserving.log.view])}
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Finished assignment for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad: {consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd=Assignment(partitions=[sapxm.adserving.log.view-0, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14, sapxm.adserving.log.view-15])}
16/10/11 13:21:08 DEBUG AbstractCoordinator: Sending leader SyncGroup for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad to coordinator 10.1.1.88:9092 (id: 2147483647 rack: null): {group_id=87a6dfd6-9832-4140-ae19-7f6f583be8ad,generation_id=3,member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,group_assignment=[{member_id=consumer-2-68b7904f-1711-4d55-97f6-0003a06f3bcd,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=104 cap=104]}]}
16/10/11 13:21:08 INFO AbstractCoordinator: Successfully joined group 87a6dfd6-9832-4140-ae19-7f6f583be8ad with generation 3
16/10/11 13:21:08 INFO ConsumerCoordinator: Setting newly assigned partitions [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14] for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
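
Note: the revoke / rejoin / assign sequence above is a standard consumer-group rebalance, and it stalled the job generator: the JoinGroup sent at 13:20:10 was only answered at 13:21:08, during which the log shows nothing but timer callbacks. A hedged illustration of the two transitions ConsumerCoordinator logged, via a rebalance listener (illustrative only, not part of the original job):

import java.util.{Collection => JCollection}
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener
import org.apache.kafka.common.TopicPartition

val rebalanceLogger = new ConsumerRebalanceListener {
  override def onPartitionsRevoked(parts: JCollection[TopicPartition]): Unit =
    println(s"revoked: $parts")   // "Revoking previously assigned partitions ..."
  override def onPartitionsAssigned(parts: JCollection[TopicPartition]): Unit =
    println(s"assigned: $parts")  // "Setting newly assigned partitions ..."
}
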
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad fetching committed offsets for partitions: [sapxm.adserving.log.view-3, sapxm.adserving.log.view-4, sapxm.adserving.log.view-1, sapxm.adserving.log.view-2, sapxm.adserving.log.view-0, sapxm.adserving.log.view-11, sapxm.adserving.log.view-12, sapxm.adserving.log.view-9, sapxm.adserving.log.view-10, sapxm.adserving.log.view-7, sapxm.adserving.log.view-8, sapxm.adserving.log.view-5, sapxm.adserving.log.view-6, sapxm.adserving.log.view-15, sapxm.adserving.log.view-13, sapxm.adserving.log.view-14]
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-3
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-4
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-1
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-2
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-0
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-11
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-12
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-9
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-10
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-7
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-8
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-5
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-6
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-15
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-13
16/10/11 13:21:08 DEBUG ConsumerCoordinator: Group 87a6dfd6-9832-4140-ae19-7f6f583be8ad has no committed offset for partition sapxm.adserving.log.view-14
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-3 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.view-3
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-4 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287755 for partition sapxm.adserving.log.view-4
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-1 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288111 for partition sapxm.adserving.log.view-1
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-2 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288627 for partition sapxm.adserving.log.view-2
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-0 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287276 for partition sapxm.adserving.log.view-0
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-11 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288341 for partition sapxm.adserving.log.view-11
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-12 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287280 for partition sapxm.adserving.log.view-12
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-9 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288009 for partition sapxm.adserving.log.view-9
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-10 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287426 for partition sapxm.adserving.log.view-10
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-7 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288297 for partition sapxm.adserving.log.view-7
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-8 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287781 for partition sapxm.adserving.log.view-8
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-5 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287558 for partition sapxm.adserving.log.view-5
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-6 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287822 for partition sapxm.adserving.log.view-6
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-15 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287916 for partition sapxm.adserving.log.view-15
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-13 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288259 for partition sapxm.adserving.log.view-13
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-14 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288271 for partition sapxm.adserving.log.view-14
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-0
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-12
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-5
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-13
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-1
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-9
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-14
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-6
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-10
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-2
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-4
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-7
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-15
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-3
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-11
16/10/11 13:21:08 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.view-8
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-3 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-4 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-1 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-13 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-10 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-7 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-2 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-14 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-11 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-8 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-5 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-3 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-15 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-0 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-12 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-9 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Ignoring fetched records for partition sapxm.adserving.log.view-6 since it is no longer fetchable
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287982 for partition sapxm.adserving.log.view-3
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-4 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287755 for partition sapxm.adserving.log.view-4
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-1 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288113 for partition sapxm.adserving.log.view-1
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-2 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288631 for partition sapxm.adserving.log.view-2
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-0 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 287279 for partition sapxm.adserving.log.view-0
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-11 to latest offset.
16/10/11 13:21:08 DEBUG Fetcher: Fetched offset 288345 for partition sapxm.adserving.log.view-11
16/10/11 13:21:08 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-12 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287284 for partition sapxm.adserving.log.view-12
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-9 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288014 for partition sapxm.adserving.log.view-9
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-10 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287429 for partition sapxm.adserving.log.view-10
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-7 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288301 for partition sapxm.adserving.log.view-7
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-8 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287784 for partition sapxm.adserving.log.view-8
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-5 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287564 for partition sapxm.adserving.log.view-5
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-6 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287826 for partition sapxm.adserving.log.view-6
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-15 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287917 for partition sapxm.adserving.log.view-15
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-13 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288262 for partition sapxm.adserving.log.view-13
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.view-14 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288272 for partition sapxm.adserving.log.view-14
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 1
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.apply(java.lang.Object)
16/10/11 13:21:09 DEBUG ClosureCleaner: public final byte[] com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2.apply(org.apache.kafka.clients.consumer.ConsumerRecord)
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (com.sap.xm.SparkJobMinimal$$anonfun$createStreamingContext$1$2) is now cleaned +++
16/10/11 13:21:09 DEBUG DStreamGraph: Generated 2 jobs for time 1476184810000 ms
16/10/11 13:21:09 INFO JobScheduler: Added jobs for time 1476184810000 ms
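
Note: "Generated 2 jobs" matches two output operations, one per topic stream; given the job descriptions that follow ("print at SparkJobMinimal.scala:59"), both are presumably print() calls. An assumed sketch completing the driver program from the earlier reconstruction (the name `views` and the overall shape are assumptions):

// A second stream over sapxm.adserving.log.view built the same way as `requests`,
// then one output op per stream, hence "Generated 2 jobs" per batch.
val views = KafkaUtils
  .createDirectStream[Array[Byte], Array[Byte]](
    ssc, PreferConsistent,
    Subscribe[Array[Byte], Array[Byte]](Seq("sapxm.adserving.log.view"), kafkaParams))
  .map(_.value)

requests.print()
views.print()
ssc.start()
ssc.awaitTermination()
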
16/10/11 13:21:09 DEBUG JobGenerator: Got event GenerateJobs(1476184812000 ms)
16/10/11 13:21:09 DEBUG DStreamGraph: Generating jobs for time 1476184812000 ms
16/10/11 13:21:09 DEBUG MappedDStream: Time 1476184812000 ms is valid
16/10/11 13:21:09 DEBUG DirectKafkaInputDStream: Time 1476184812000 ms is valid
16/10/11 13:21:09 INFO AbstractCoordinator: Marking the coordinator 10.1.1.88:9092 (id: 2147483647 rack: null) dead for group 87a6dfd6-9832-4140-ae19-7f6f583be8ad
16/10/11 13:21:09 INFO JobScheduler: Starting job streaming job 1476184810000 ms.0 from job set of time 1476184810000 ms
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-11
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-4
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-9
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-0
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-10
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-3
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-15
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-8
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-13
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-2
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-14
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-7
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-5
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-12
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-1
16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to end of partition sapxm.adserving.log.ad_request-6
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-12 to latest offset.
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:59
16/10/11 13:21:09 INFO DAGScheduler: Got job 612 (print at SparkJobMinimal.scala:59) with 1 output partitions
16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 612 (print at SparkJobMinimal.scala:59)
16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 612)
16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 612 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59), which has no missing parents
16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 612)
16/10/11 13:21:09 INFO MemoryStore: Block broadcast_612 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_612 locally took 0 ms
16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_612 without replication took 0 ms
16/10/11 13:21:09 INFO MemoryStore: Block broadcast_612_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_612_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_612_piece0
16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_612_piece0
16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_612_piece0 locally took 1 ms
16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_612_piece0 without replication took 1 ms
16/10/11 13:21:09 INFO SparkContext: Created broadcast 612 from broadcast at DAGScheduler.scala:1012
16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 612 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(0)
16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 612.0 with 1 tasks
16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 612.0: 0
16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 612.0: NO_PREF, ANY
16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_612, runningTasks: 0
16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 612.0 (TID 828, localhost, partition 0, PROCESS_LOCAL, 6951 bytes)
16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 612.0 (TID 828)
16/10/11 13:21:09 DEBUG Executor: Task 828's epoch is 0
16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_612
16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_612 is StorageLevel(disk, memory, deserialized, 1 replicas)
16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.ad_request, partition 11 offsets 288183 -> 288191
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288183 requested 288183
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288184 requested 288184
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288185 requested 288185
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288186 requested 288186
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288187 requested 288187
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288188 requested 288188
16/10/11 13:21:09 DEBUG NetworkClient: Initialize connection to node 0 for sending metadata request
16/10/11 13:21:09 DEBUG NetworkClient: Initiating connection to node 0 at 10.1.1.88:9092.
16/10/11 13:21:09 DEBUG NetworkClient: Sending metadata request {topics=[sapxm.adserving.log.ad_request]} to node 2
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287338 for partition sapxm.adserving.log.ad_request-12
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-11 to latest offset.
16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-sent
16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-received
16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.latency
16/10/11 13:21:09 DEBUG NetworkClient: Completed connection to node 0
16/10/11 13:21:09 DEBUG Metadata: Updated cluster metadata version 3 to Cluster(nodes = [10.1.1.250:9092 (id: 1 rack: null), 10.1.1.88:9092 (id: 0 rack: null), 10.1.1.83:9092 (id: 2 rack: null)], partitions = [Partition(topic = sapxm.adserving.log.ad_request, partition = 12, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 11, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 14, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 13, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 15, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 0, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 2, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 1, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 4, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 3, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 6, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 5, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 8, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 7, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 10, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.ad_request, partition = 9, leader = 0, replicas = [0,1,2,], isr = [0,1,2,]])
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288393 for partition sapxm.adserving.log.ad_request-11
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-14 to latest offset.
16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288322 for partition sapxm.adserving.log.ad_request-14
16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-13 to latest offset.
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Polled [sapxm.adserving.log.ad_request-11] 205
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288189 requested 288189
16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 11 nextOffset 288190 requested 288190
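
Note: "Computing topic sapxm.adserving.log.ad_request, partition 11 offsets 288183 -> 288191" is the executor iterating one OffsetRange of the batch's KafkaRDD through its cached consumer. Those ranges can be inspected on the driver with the standard kafka010 API; `rawRequests` below is a stand-in for the untransformed direct stream (offset ranges are only available before any map):

import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

// Illustrative; must be called on the stream returned by createDirectStream.
rawRequests.foreachRDD { rdd =>
  val ranges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach(r => println(s"${r.topic} ${r.partition} ${r.fromOffset} -> ${r.untilOffset}"))
}
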
16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 612.0 (TID 828). 1833 bytes result sent to driver
16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_612, runningTasks: 0
16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 612.0 (TID 828) in 91 ms on localhost (1/1)
16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 612.0, whose tasks have all completed, from pool
16/10/11 13:21:09 INFO DAGScheduler: ResultStage 612 (print at SparkJobMinimal.scala:59) finished in 0,091 s
16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 612, remaining stages = 0
16/10/11 13:21:09 INFO DAGScheduler: Job 612 finished: print at SparkJobMinimal.scala:59, took 0,093241 s
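
Note: jobs 612 and 613 are both named "print at SparkJobMinimal.scala:59" because DStream.print() runs RDD.take() on the driver, which is where the RDD$$anonfun$take$1 closures above come from; take() scans partition 0 first and launches follow-up jobs over later partitions (job 613's "New pending partitions: Set(1)" below) until it has enough rows. A simplified paraphrase of what print() does, not Spark's exact source:

stream.foreachRDD { rdd =>
  val first11 = rdd.take(11)          // the take() jobs seen here; 11 = 10 rows + 1 to detect "..."
  first11.take(10).foreach(println)
  if (first11.length > 10) println("...")
}
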
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:59
16/10/11 13:21:09 INFO DAGScheduler: Got job 613 (print at SparkJobMinimal.scala:59) with 1 output partitions
16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 613 (print at SparkJobMinimal.scala:59)
16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 613)
16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 613 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59), which has no missing parents
16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 613)
16/10/11 13:21:09 INFO MemoryStore: Block broadcast_613 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_613 locally took 0 ms
16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_613 without replication took 0 ms
16/10/11 13:21:09 INFO MemoryStore: Block broadcast_613_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_613_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_613_piece0
16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_613_piece0
16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_613_piece0 locally took 0 ms
16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_613_piece0 without replication took 0 ms
16/10/11 13:21:09 INFO SparkContext: Created broadcast 613 from broadcast at DAGScheduler.scala:1012
16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 613 (MapPartitionsRDD[605] at map at SparkJobMinimal.scala:59)
16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(1)
16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 613.0 with 1 tasks
16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 613.0: 0
16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 613.0: NO_PREF, ANY
16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_613, runningTasks: 0
16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 613.0 (TID 829, localhost, partition 1, PROCESS_LOCAL, 6951 bytes)
16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
  506. 16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 613.0 (TID 829)
  507. 16/10/11 13:21:09 DEBUG Executor: Task 829's epoch is 0
  508. 16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_613
  509. 16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_613 is StorageLevel(disk, memory, deserialized, 1 replicas)
  510. 16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.ad_request, partition 4 offsets 287643 -> 287649
  511. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287643 requested 287643
  512. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287644 requested 287644
  513. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.ad_request 4 nextOffset 287645 requested 287645
  514. 16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 613.0 (TID 829). 1081 bytes result sent to driver
  515. 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_613, runningTasks: 0
  516. 16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 613.0 (TID 829) in 3 ms on localhost (1/1)
  517. 16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 613.0, whose tasks have all completed, from pool
  518. 16/10/11 13:21:09 INFO DAGScheduler: ResultStage 613 (print at SparkJobMinimal.scala:59) finished in 0,003 s
  519. 16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 613, remaining stages = 0
  520. 16/10/11 13:21:09 INFO DAGScheduler: Job 613 finished: print at SparkJobMinimal.scala:59, took 0,005053 s
  521. 16/10/11 13:21:09 INFO JobScheduler: Finished job streaming job 1476184810000 ms.0 from job set of time 1476184810000 ms
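
The ClosureCleaner traffic above is what DStream.print() generates on every batch: print() defaults to showing 10 elements, so it calls RDD.take(11) on the batch's RDD, and take() cleans and ships the take$1 / runJob$5 closures before submitting a small one-task job (job 613 here). Below is a minimal sketch of the kind of driver program that produces a log like this; the object and value names and the group.id are assumptions, while the broker addresses, the topic names, and the 2 s batch interval are taken from the log itself.

    import org.apache.kafka.common.serialization.ByteArrayDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    object SparkJobMinimalSketch {
      def main(args: Array[String]): Unit = {
        // 2 s batches: consecutive batch times in the log (…808000 ms, …810000 ms) are 2000 ms apart.
        val conf = new SparkConf().setAppName("SparkJobMinimal").setMaster("local[*]")
        val ssc  = new StreamingContext(conf, Seconds(2))

        val kafkaParams = Map[String, Object](
          "bootstrap.servers"  -> "10.1.1.88:9092,10.1.1.250:9092,10.1.1.83:9092", // brokers as listed in the cluster metadata line below
          "key.deserializer"   -> classOf[ByteArrayDeserializer],
          "value.deserializer" -> classOf[ByteArrayDeserializer],
          "group.id"           -> "spark-job-minimal" // assumption; the log only shows the derived spark-executor-<uuid> group
        )

        // Two direct streams, matching the two per-batch output jobs (ms.0 and ms.1) in this log.
        val requests = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](
          ssc, PreferConsistent,
          Subscribe[Array[Byte], Array[Byte]](Seq("sapxm.adserving.log.ad_request"), kafkaParams))
        val views = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](
          ssc, PreferConsistent,
          Subscribe[Array[Byte], Array[Byte]](Seq("sapxm.adserving.log.view"), kafkaParams))

        requests.map(_.value).print() // print() = rdd.take(11) per batch -> the take$1 closures and job 613 above
        views.map(_.value).print()    // -> job 614 (print at SparkJobMinimal.scala:64 in the original)

        ssc.start()
        ssc.awaitTermination()
      }
    }
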
  522. 16/10/11 13:21:09 INFO JobScheduler: Starting job streaming job 1476184810000 ms.1 from job set of time 1476184810000 ms
  523. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) +++
  524. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
  525. 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.serialVersionUID
  526. 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD$$anonfun$take$1 org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.$outer
  527. 16/10/11 13:21:09 DEBUG ClosureCleaner: private final int org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.left$1
  528. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
  529. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(java.lang.Object)
  530. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(scala.collection.Iterator)
  531. 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
  532. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 2
  533. 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1
  534. 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
  535. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 2
  536. 16/10/11 13:21:09 DEBUG ClosureCleaner: <function0>
  537. 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64
  538. 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
  539. 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
  540. 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
  541. 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
  542. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
  543. 16/10/11 13:21:09 DEBUG ClosureCleaner: + cloning the object <function0> of class org.apache.spark.rdd.RDD$$anonfun$take$1
  544. 16/10/11 13:21:09 DEBUG ClosureCleaner: + cleaning cloned closure <function0> recursively (org.apache.spark.rdd.RDD$$anonfun$take$1)
  545. -------------------------------------------
  546. Time: 1476184810000 ms
  547. -------------------------------------------
  548. [B@45027835
  549. [B@3a71b22f
  550. [B@2585b0af
  551. [B@7e8fa0c9
  552. [B@281abcfd
  553. [B@1fcd931b
  554. [B@17c55885
  555. [B@144f3b5c
  556. [B@694f907c
  557. [B@1a699305
  558. ...
  559.  
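Each [B@... line in the batch output above is the JVM's default Array[Byte].toString (the type tag "[B" plus an identity hash code): the stream carries raw Kafka byte-array values, so print() shows object references rather than payloads. A one-line fix, reusing the views stream from the sketch above and assuming the payloads are UTF-8 text (the encoding is an assumption; the log does not reveal it):

    import java.nio.charset.StandardCharsets.UTF_8
    // Decode each value before printing so the batch output shows payloads, not [B@<hash>.
    views.map(record => new String(record.value, UTF_8)).print()
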
  560. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) +++
  561. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 3
  562. 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.rdd.RDD$$anonfun$take$1.serialVersionUID
  563. 16/10/11 13:21:09 DEBUG ClosureCleaner: private final org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.$outer
  564. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final int org.apache.spark.rdd.RDD$$anonfun$take$1.num$2
  565. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
  566. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.rdd.RDD$$anonfun$take$1.apply()
  567. 16/10/11 13:21:09 DEBUG ClosureCleaner: public org.apache.spark.rdd.RDD org.apache.spark.rdd.RDD$$anonfun$take$1.org$apache$spark$rdd$RDD$$anonfun$$$outer()
  568. 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 2
  569. 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29
  570. 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$apply$48
  571. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 1
  572. 16/10/11 13:21:09 DEBUG ClosureCleaner: org.apache.spark.rdd.RDD
  573. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 1
  574. 16/10/11 13:21:09 DEBUG ClosureCleaner: MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64
  575. 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 2
  576. 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD,Set(org$apache$spark$rdd$RDD$$evidence$1))
  577. 16/10/11 13:21:09 DEBUG ClosureCleaner: (class org.apache.spark.rdd.RDD$$anonfun$take$1,Set($outer))
  578. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outermost object is not a closure or REPL line object, so do not clone it: (class org.apache.spark.rdd.RDD,MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
  579. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function0> (org.apache.spark.rdd.RDD$$anonfun$take$1) is now cleaned +++
  580. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function1> (org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29) is now cleaned +++
  581. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
  582. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared fields: 2
  583. 16/10/11 13:21:09 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
  584. 16/10/11 13:21:09 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
  585. 16/10/11 13:21:09 DEBUG ClosureCleaner: + declared methods: 2
  586. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
  587. 16/10/11 13:21:09 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
  588. 16/10/11 13:21:09 DEBUG ClosureCleaner: + inner classes: 0
  589. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer classes: 0
  590. 16/10/11 13:21:09 DEBUG ClosureCleaner: + outer objects: 0
  591. 16/10/11 13:21:09 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
  592. 16/10/11 13:21:09 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
  593. 16/10/11 13:21:09 DEBUG ClosureCleaner: + there are no enclosing objects!
  594. 16/10/11 13:21:09 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
  595. 16/10/11 13:21:09 INFO SparkContext: Starting job: print at SparkJobMinimal.scala:64
  596. 16/10/11 13:21:09 INFO DAGScheduler: Got job 614 (print at SparkJobMinimal.scala:64) with 1 output partitions
  597. 16/10/11 13:21:09 INFO DAGScheduler: Final stage: ResultStage 614 (print at SparkJobMinimal.scala:64)
  598. 16/10/11 13:21:09 INFO DAGScheduler: Parents of final stage: List()
  599. 16/10/11 13:21:09 INFO DAGScheduler: Missing parents: List()
  600. 16/10/11 13:21:09 DEBUG DAGScheduler: submitStage(ResultStage 614)
  601. 16/10/11 13:21:09 DEBUG DAGScheduler: missing: List()
  602. 16/10/11 13:21:09 INFO DAGScheduler: Submitting ResultStage 614 (MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64), which has no missing parents
  603. 16/10/11 13:21:09 DEBUG DAGScheduler: submitMissingTasks(ResultStage 614)
  604. 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_614 stored as values in memory (estimated size 3.5 KB, free 2004.4 MB)
  605. 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_614 locally took 0 ms
  606. 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_614 without replication took 0 ms
  607. 16/10/11 13:21:09 INFO MemoryStore: Block broadcast_614_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2004.4 MB)
  608. 16/10/11 13:21:09 INFO BlockManagerInfo: Added broadcast_614_piece0 in memory on 10.94.121.6:55035 (size: 2.2 KB, free: 2004.5 MB)
  609. 16/10/11 13:21:09 DEBUG BlockManagerMaster: Updated info of block broadcast_614_piece0
  610. 16/10/11 13:21:09 DEBUG BlockManager: Told master about block broadcast_614_piece0
  611. 16/10/11 13:21:09 DEBUG BlockManager: Put block broadcast_614_piece0 locally took 0 ms
  612. 16/10/11 13:21:09 DEBUG BlockManager: Putting block broadcast_614_piece0 without replication took 0 ms
  613. 16/10/11 13:21:09 INFO SparkContext: Created broadcast 614 from broadcast at DAGScheduler.scala:1012
  614. 16/10/11 13:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 614 (MapPartitionsRDD[607] at map at SparkJobMinimal.scala:64)
  615. 16/10/11 13:21:09 DEBUG DAGScheduler: New pending partitions: Set(0)
  616. 16/10/11 13:21:09 INFO TaskSchedulerImpl: Adding task set 614.0 with 1 tasks
  617. 16/10/11 13:21:09 DEBUG TaskSetManager: Epoch for TaskSet 614.0: 0
  618. 16/10/11 13:21:09 DEBUG TaskSetManager: Valid locality levels for TaskSet 614.0: NO_PREF, ANY
  619. 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_614, runningTasks: 0
  620. 16/10/11 13:21:09 INFO TaskSetManager: Starting task 0.0 in stage 614.0 (TID 830, localhost, partition 0, PROCESS_LOCAL, 6945 bytes)
  621. 16/10/11 13:21:09 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
  622. 16/10/11 13:21:09 INFO Executor: Running task 0.0 in stage 614.0 (TID 830)
  623. 16/10/11 13:21:09 DEBUG Executor: Task 830's epoch is 0
  624. 16/10/11 13:21:09 DEBUG BlockManager: Getting local block broadcast_614
  625. 16/10/11 13:21:09 DEBUG BlockManager: Level for block broadcast_614 is StorageLevel(disk, memory, deserialized, 1 replicas)
  626. 16/10/11 13:21:09 INFO KafkaRDD: Computing topic sapxm.adserving.log.view, partition 0 offsets 287068 -> 287279
  627. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287067 requested 287068
  628. 16/10/11 13:21:09 INFO CachedKafkaConsumer: Initial fetch for spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 287068
  629. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Seeking to sapxm.adserving.log.view-0 287068
  630. 16/10/11 13:21:09 DEBUG KafkaConsumer: Seeking to offset 287068 for partition sapxm.adserving.log.view-0
  631. 16/10/11 13:21:09 DEBUG NetworkClient: Initialize connection to node 0 for sending metadata request
  632. 16/10/11 13:21:09 DEBUG NetworkClient: Initiating connection to node 0 at 10.1.1.88:9092.
  633. 16/10/11 13:21:09 DEBUG Fetcher: Discarding fetch response for partition sapxm.adserving.log.view-0 since its offset 287072 does not match the expected offset 287068
  634. 16/10/11 13:21:09 DEBUG NetworkClient: Sending metadata request {topics=[sapxm.adserving.log.view]} to node 2
  635. 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 288315 for partition sapxm.adserving.log.ad_request-13
  636. 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-15 to latest offset.
  637. 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-sent
  638. 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.bytes-received
  639. 16/10/11 13:21:09 DEBUG Metrics: Added sensor with name node-0.latency
  640. 16/10/11 13:21:09 DEBUG NetworkClient: Completed connection to node 0
  641. 16/10/11 13:21:09 DEBUG Metadata: Updated cluster metadata version 3 to Cluster(nodes = [10.1.1.83:9092 (id: 2 rack: null), 10.1.1.250:9092 (id: 1 rack: null), 10.1.1.88:9092 (id: 0 rack: null)], partitions = [Partition(topic = sapxm.adserving.log.view, partition = 3, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 4, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 1, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 2, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 0, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 11, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 12, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 9, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 10, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 7, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 8, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 5, leader = 1, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 6, leader = 2, replicas = [0,1,2,], isr = [1,0,2,], Partition(topic = sapxm.adserving.log.view, partition = 15, leader = 2, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 13, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = sapxm.adserving.log.view, partition = 14, leader = 1, replicas = [0,1,2,], isr = [1,0,2,]])
  642. 16/10/11 13:21:09 DEBUG Fetcher: Fetched offset 287970 for partition sapxm.adserving.log.ad_request-15
  643. 16/10/11 13:21:09 DEBUG Fetcher: Resetting offset for partition sapxm.adserving.log.ad_request-0 to latest offset.
  644. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Polled [sapxm.adserving.log.view-0] 211
  645. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287069 requested 287069
  646. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287070 requested 287070
  647. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287071 requested 287071
  648. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287072 requested 287072
  649. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287073 requested 287073
  650. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287074 requested 287074
  651. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287075 requested 287075
  652. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287076 requested 287076
  653. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287077 requested 287077
  654. 16/10/11 13:21:09 DEBUG CachedKafkaConsumer: Get spark-executor-87a6dfd6-9832-4140-ae19-7f6f583be8ad sapxm.adserving.log.view 0 nextOffset 287078 requested 287078
  655. 16/10/11 13:21:09 INFO Executor: Finished task 0.0 in stage 614.0 (TID 830). 958 bytes result sent to driver
  656. 16/10/11 13:21:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_614, runningTasks: 0
  657. 16/10/11 13:21:09 INFO TaskSetManager: Finished task 0.0 in stage 614.0 (TID 830) in 60 ms on localhost (1/1)
  658. 16/10/11 13:21:09 INFO TaskSchedulerImpl: Removed TaskSet 614.0, whose tasks have all completed, from pool
  659. 16/10/11 13:21:09 INFO DAGScheduler: ResultStage 614 (print at SparkJobMinimal.scala:64) finished in 0,061 s
  660. 16/10/11 13:21:09 DEBUG DAGScheduler: After removal of stage 614, remaining stages = 0
  661. 16/10/11 13:21:09 INFO DAGScheduler: Job 614 finished: print at SparkJobMinimal.scala:64, took 0,063755 s
  662. 16/10/11 13:21:09 INFO JobScheduler: Finished job streaming job 1476184810000 ms.1 from job set of time 1476184810000 ms
  663. 16/10/11 13:21:09 INFO JobScheduler: Total delay: 59,448 s for time 1476184810000 ms (execution: 0,174 s)
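
The closing figure is the notable one: the batch executed in 0,174 s, yet its total delay was 59,448 s, so nearly all of that is scheduling delay, and with 2 s batches the job is roughly 30 batches behind. Two standard Spark 2.x knobs for a direct Kafka stream in this situation, shown as a hedged sketch (the rate value is illustrative, not tuned for this workload):

    import org.apache.spark.SparkConf

    val tunedConf = new SparkConf()
      // Let Spark adapt the per-batch ingest rate to what the job can actually process.
      .set("spark.streaming.backpressure.enabled", "true")
      // Hard cap on records per second per Kafka partition for direct streams.
      .set("spark.streaming.kafka.maxRatePerPartition", "1000")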