14/10/25 16:35:36 ERROR Executor: Exception in task 3.0 in stage 4.0 (TID 27)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 352, in func
    return f(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1585, in _mergeCombiners
    merger.mergeCombiners(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/shuffle.py", line 282, in mergeCombiners
    d[k] = comb(d[k], v) if k in d else v
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 10, in bfs_reduce
    if value1[1] < value2[1]:
TypeError: 'int' object has no attribute '__getitem__'

        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
        at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:265)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
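
The root cause is the TypeError raised from bfs_reduce at SlidingBfsSpark.py line 10. Under Python 2, indexing an int raises "'int' object has no attribute '__getitem__'", so at least one of value1/value2 must be a plain int at the moment value1[1] is evaluated; the reducer assumes tuple-shaped values that some records do not have. A minimal sketch of the failure mode and one possible fix, assuming (hypothetically) that the RDD holds (board, level) pairs whose values are plain integer BFS levels:

def bfs_reduce(value1, value2):
    # Fails when a value is an int: ints cannot be indexed, which under
    # Python 2 raises "TypeError: 'int' object has no attribute '__getitem__'".
    if value1[1] < value2[1]:
        return value1
    return value2

def bfs_reduce_fixed(value1, value2):
    # If the values really are plain int levels, compare them directly and
    # keep the smaller (earliest) BFS level.
    return min(value1, value2)

Note that the error surfaces in mergeCombiners rather than in the map phase: the reducer only runs when two values for the same key are combined during the shuffle, so a value-type mismatch can stay hidden until a key actually collides.
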
14/10/25 16:35:36 INFO TaskSetManager: Finished task 6.0 in stage 4.0 (TID 30) in 41 ms on localhost (2/8)
14/10/25 16:35:36 INFO Executor: Finished task 7.0 in stage 4.0 (TID 31). 1014 bytes result sent to driver
14/10/25 16:35:36 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 24) in 60 ms on localhost (3/8)
14/10/25 16:35:36 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 26) in 65 ms on localhost (4/8)
14/10/25 16:35:36 INFO TaskSetManager: Finished task 4.0 in stage 4.0 (TID 28) in 67 ms on localhost (5/8)
14/10/25 16:35:36 INFO TaskSetManager: Finished task 5.0 in stage 4.0 (TID 29) in 73 ms on localhost (6/8)
14/10/25 16:35:36 INFO TaskSetManager: Finished task 7.0 in stage 4.0 (TID 31) in 76 ms on localhost (7/8)
14/10/25 16:35:36 WARN TaskSetManager: Lost task 3.0 in stage 4.0 (TID 27, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 352, in func
    return f(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1585, in _mergeCombiners
    merger.mergeCombiners(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/shuffle.py", line 282, in mergeCombiners
    d[k] = comb(d[k], v) if k in d else v
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 10, in bfs_reduce
    if value1[1] < value2[1]:
TypeError: 'int' object has no attribute '__getitem__'

        org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
        org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
        org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:265)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
14/10/25 16:35:36 ERROR TaskSetManager: Task 3 in stage 4.0 failed 1 times; aborting job
14/10/25 16:35:36 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
14/10/25 16:35:36 INFO TaskSchedulerImpl: Cancelling stage 4
14/10/25 16:35:36 INFO DAGScheduler: Failed to run collect at /home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py:53
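
The WARN block above repeats the executor traceback because Spark logs the failure twice: once from the executor (ERROR) and again when the TaskSetManager marks the task as lost. "failed 1 times; aborting job" reflects that local mode allows only a single attempt per task by default. Retries can be requested through a local[N, maxFailures] master URL, though a deterministic TypeError will fail on every attempt, so this is no substitute for fixing the reducer. A sketch, with the context-creation call being an assumed illustration rather than the project's actual code:

from pyspark import SparkContext

# "local[4,2]" runs 4 worker threads and allows 2 attempts per task before
# the stage is aborted. Useful for transient faults, useless for a bug that
# fails deterministically on every attempt.
sc = SparkContext("local[4,2]", "SlidingBfsSpark")
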
Traceback (most recent call last):
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 95, in <module>
    main()
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 88, in main
    solve_sliding_puzzle(args.master, writer, args.height, args.width)
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 53, in solve_sliding_puzzle
    print(gameBoard.collect())
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 723, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/home/ff/cs61c/spark-1.1.0/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/ff/cs61c/spark-1.1.0/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o133.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 4.0 failed 1 times, most recent failure: Lost task 3.0 in stage 4.0 (TID 27, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1990, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 352, in func
    return f(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/rdd.py", line 1585, in _mergeCombiners
    merger.mergeCombiners(iterator)
  File "/home/ff/cs61c/spark-1.1.0/python/pyspark/shuffle.py", line 282, in mergeCombiners
    d[k] = comb(d[k], v) if k in d else v
  File "/home/cc/cs61c/fa14/class/cs61c-ef/proj2/SlidingBfsSpark.py", line 10, in bfs_reduce
    if value1[1] < value2[1]:
TypeError: 'int' object has no attribute '__getitem__'

        org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
        org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
        org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:265)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Command exited with non-zero status 1
7.28user 0.46system 0:05.20elapsed 148%CPU (0avgtext+0avgdata 917168maxresident)k
0inputs+264outputs (0major+96509minor)pagefaults 0swaps
gmake: *** [run-small] Error 1
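
The driver traceback points at the collect() call on SlidingBfsSpark.py line 53 even though the bug lives in bfs_reduce on line 10: RDD transformations are lazy, so the reducer only executes once an action forces the pipeline, and the executor-side exception then reaches the driver as a Py4JJavaError. A small self-contained reproduction of the same shape of failure (Spark 1.1, Python 2; all names hypothetical):

from pyspark import SparkContext

sc = SparkContext("local", "repro")

# Mixed value types under one key: a plain int level and a (parent, level) tuple.
rdd = sc.parallelize([("board_a", 0), ("board_a", ("board_b", 1))])

# Lazy: defining the reduction raises nothing, even though v1[1] will fail
# whenever v1 is the int 0.
merged = rdd.reduceByKey(lambda v1, v2: v1 if v1[1] < v2[1] else v2)

# The action triggers the shuffle, the reducer runs on a worker, and the
# TypeError only now propagates back to the driver.
print(merged.collect())
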