Untitled
a guest
Mar 14th, 2016
log4j:WARN No appenders could be found for logger (com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/03/14 13:13:59 INFO SparkContext: Running Spark version 1.6.1
16/03/14 13:14:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/14 13:14:00 INFO SecurityManager: Changing view acls to: snakanda
16/03/14 13:14:00 INFO SecurityManager: Changing modify acls to: snakanda
16/03/14 13:14:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(snakanda); users with modify permissions: Set(snakanda)
16/03/14 13:14:00 INFO Utils: Successfully started service 'sparkDriver' on port 50003.
16/03/14 13:14:00 INFO Slf4jLogger: Slf4jLogger started
16/03/14 13:14:00 INFO Remoting: Starting remoting
16/03/14 13:14:01 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@156.56.179.144:50004]
16/03/14 13:14:01 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 50004.
16/03/14 13:14:01 INFO SparkEnv: Registering MapOutputTracker
16/03/14 13:14:01 INFO SparkEnv: Registering BlockManagerMaster
16/03/14 13:14:01 INFO DiskBlockManager: Created local directory at /private/var/folders/r8/3x9692kj7679zdbxtwly35qr0000gn/T/blockmgr-018f8f99-82e1-4ad5-ade5-6145affb43ce
16/03/14 13:14:01 INFO MemoryStore: MemoryStore started with capacity 1140.4 MB
16/03/14 13:14:01 INFO SparkEnv: Registering OutputCommitCoordinator
16/03/14 13:14:01 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/03/14 13:14:01 INFO SparkUI: Started SparkUI at http://156.56.179.144:4040
16/03/14 13:14:01 INFO Executor: Starting executor ID driver on host localhost
16/03/14 13:14:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50005.
16/03/14 13:14:01 INFO NettyBlockTransferService: Server created on 50005
16/03/14 13:14:01 INFO BlockManagerMaster: Trying to register BlockManager
16/03/14 13:14:01 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50005 with 1140.4 MB RAM, BlockManagerId(driver, localhost, 50005)
16/03/14 13:14:01 INFO BlockManagerMaster: Registered BlockManager
16/03/14 13:14:01 INFO SparkPipelineRunner: Evaluating TextIO.Read
16/03/14 13:14:01 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 119.9 KB, free 119.9 KB)
16/03/14 13:14:02 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.8 KB, free 132.7 KB)
16/03/14 13:14:02 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50005 (size: 12.8 KB, free: 1140.4 MB)
16/03/14 13:14:02 INFO SparkContext: Created broadcast 0 from textFile at TransformTranslator.java:393
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating AnonymousParDo
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating Window.Into()
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating ParDo(ReifyTimestampAndWindows)
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating GroupByKey.GroupByKeyOnly
16/03/14 13:14:02 INFO FileInputFormat: Total input paths to process : 1
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating AnonymousParDo
16/03/14 13:14:02 INFO SparkPipelineRunner: Evaluating ParDo(GroupAlsoByWindowsViaIterators)
16/03/14 13:14:02 INFO SparkContext: Starting job: count at EvaluationContext.java:201
16/03/14 13:14:02 INFO DAGScheduler: Registering RDD 5 (mapToPair at TransformTranslator.java:132)
16/03/14 13:14:02 INFO DAGScheduler: Got job 0 (count at EvaluationContext.java:201) with 2 output partitions
16/03/14 13:14:02 INFO DAGScheduler: Final stage: ResultStage 1 (count at EvaluationContext.java:201)
16/03/14 13:14:02 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/03/14 13:14:02 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/03/14 13:14:02 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[5] at mapToPair at TransformTranslator.java:132), which has no missing parents
16/03/14 13:14:02 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 224.8 KB, free 357.5 KB)
16/03/14 13:14:02 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 118.5 KB, free 476.0 KB)
16/03/14 13:14:02 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:50005 (size: 118.5 KB, free: 1140.2 MB)
16/03/14 13:14:02 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/03/14 13:14:02 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[5] at mapToPair at TransformTranslator.java:132)
16/03/14 13:14:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/03/14 13:14:02 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2148 bytes)
16/03/14 13:14:02 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/03/14 13:14:02 INFO HadoopRDD: Input split: file:/Users/supun/Work/stock-analysis/inputFiles/2004.csv:0+33554432
16/03/14 13:14:02 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/03/14 13:14:02 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/03/14 13:14:02 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/03/14 13:14:02 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/03/14 13:14:02 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/03/14 13:14:02 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassCastException: com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow
    at com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow$IntervalWindowCoder.encode(IntervalWindow.java:171)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:113)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:59)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:599)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:540)
    at com.cloudera.dataflow.spark.CoderHelpers.toByteArray(CoderHelpers.java:48)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:134)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:131)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/03/14 13:14:02 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2148 bytes)
16/03/14 13:14:02 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/03/14 13:14:02 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow
    at com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow$IntervalWindowCoder.encode(IntervalWindow.java:171)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:113)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:59)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:599)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:540)
    at com.cloudera.dataflow.spark.CoderHelpers.toByteArray(CoderHelpers.java:48)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:134)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:131)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

16/03/14 13:14:02 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
16/03/14 13:14:02 INFO HadoopRDD: Input split: file:/Users/supun/Work/stock-analysis/inputFiles/2004.csv:33554432+27825150
16/03/14 13:14:02 INFO TaskSchedulerImpl: Cancelling stage 0
16/03/14 13:14:02 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1)
16/03/14 13:14:02 INFO TaskSchedulerImpl: Stage 0 was cancelled
16/03/14 13:14:02 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at TransformTranslator.java:132) failed in 0.179 s
16/03/14 13:14:02 INFO DAGScheduler: Job 0 failed: count at EvaluationContext.java:201, took 0.275541 s
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow
    at com.cloudera.dataflow.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:121)
    at edu.indiana.soic.ts.streaming.dataflow.StockAnalysisPipeline2.main(StockAnalysisPipeline2.java:120)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.ClassCastException: com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow
    at com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow$IntervalWindowCoder.encode(IntervalWindow.java:171)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:113)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:59)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:599)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:540)
    at com.cloudera.dataflow.spark.CoderHelpers.toByteArray(CoderHelpers.java:48)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:134)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:131)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/03/14 13:14:02 INFO SparkContext: Invoking stop() from shutdown hook
16/03/14 13:14:02 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.ClassCastException: com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow
    at com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow$IntervalWindowCoder.encode(IntervalWindow.java:171)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:113)
    at com.google.cloud.dataflow.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:59)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:599)
    at com.google.cloud.dataflow.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:540)
    at com.cloudera.dataflow.spark.CoderHelpers.toByteArray(CoderHelpers.java:48)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:134)
    at com.cloudera.dataflow.spark.CoderHelpers$3.call(CoderHelpers.java:131)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/03/14 13:14:02 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on executor localhost: java.lang.ClassCastException (com.google.cloud.dataflow.sdk.transforms.windowing.GlobalWindow cannot be cast to com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow) [duplicate 1]
16/03/14 13:14:02 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/03/14 13:14:02 INFO SparkUI: Stopped Spark web UI at http://156.56.179.144:4040
16/03/14 13:14:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/03/14 13:14:02 INFO MemoryStore: MemoryStore cleared
16/03/14 13:14:02 INFO BlockManager: BlockManager stopped
16/03/14 13:14:02 INFO BlockManagerMaster: BlockManagerMaster stopped
16/03/14 13:14:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/03/14 13:14:02 INFO SparkContext: Successfully stopped SparkContext
16/03/14 13:14:02 INFO ShutdownHookManager: Shutdown hook called
16/03/14 13:14:02 INFO ShutdownHookManager: Deleting directory /private/var/folders/r8/3x9692kj7679zdbxtwly35qr0000gn/T/spark-208493ac-43af-40ac-ae37-978bf1450f8a
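Note on the failure: every stack trace above bottoms out in IntervalWindow$IntervalWindowCoder.encode, which is handed elements whose window is still the GlobalWindow. In other words, the coder chosen for the GroupByKey assumes interval windows while the values flowing through were never actually assigned to one. The sketch below is a minimal, self-contained analogue of that mismatch; the classes are hypothetical stand-ins, not the real Dataflow SDK types, and only illustrate why a coder pinned to one window subtype throws ClassCastException on another.

```java
// Hypothetical stand-ins for the SDK's window types, for illustration only.
interface BoundedWindow {}

final class GlobalWindow implements BoundedWindow {}

final class IntervalWindow implements BoundedWindow {
    final long start, end;
    IntervalWindow(long start, long end) { this.start = start; this.end = end; }
}

// A coder that assumes every window it sees is an IntervalWindow, mirroring
// the downcast inside IntervalWindowCoder.encode in the trace above.
final class IntervalWindowCoder {
    String encode(BoundedWindow w) {
        // Throws ClassCastException when w is actually a GlobalWindow.
        IntervalWindow iw = (IntervalWindow) w;
        return iw.start + "-" + iw.end;
    }
}

public class WindowCoderMismatch {
    public static void main(String[] args) {
        IntervalWindowCoder coder = new IntervalWindowCoder();
        System.out.println(coder.encode(new IntervalWindow(0, 60)));  // encodes fine
        try {
            // Element left in the global window reaches the interval-window coder.
            coder.encode(new GlobalWindow());
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the log above");
        }
    }
}
```

In the pipeline logged here, this suggests the elements reaching GroupByKey were still in the global window even though the runner evaluated Window.Into(); a reasonable first check is whether the Window.into(...) transform actually precedes the GroupByKey in StockAnalysisPipeline2 and whether the spark-dataflow runner version in use honors that windowing.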