INFO 30-11 22:05:41,548 - Starting job: show at <console>:37
INFO 30-11 22:05:41,548 - Got job 3 (show at <console>:37) with 1 output partitions
INFO 30-11 22:05:41,548 - Final stage: ResultStage 4 (show at <console>:37)
INFO 30-11 22:05:41,548 - Parents of final stage: List()
INFO 30-11 22:05:41,548 - Missing parents: List()
INFO 30-11 22:05:41,549 - Submitting ResultStage 4 (MapPartitionsRDD[21] at show at <console>:37), which has no missing parents
INFO 30-11 22:05:41,552 - Block broadcast_6 stored as values in memory (estimated size 13.2 KB, free 25.6 KB)
INFO 30-11 22:05:41,553 - Block broadcast_6_piece0 stored as bytes in memory (estimated size 6.7 KB, free 32.2 KB)
INFO 30-11 22:05:41,553 - Added broadcast_6_piece0 in memory on localhost:53272 (size: 6.7 KB, free: 511.1 MB)
INFO 30-11 22:05:41,554 - Created broadcast 6 from broadcast at DAGScheduler.scala:1006
INFO 30-11 22:05:41,554 - Submitting 1 missing tasks from ResultStage 4 (MapPartitionsRDD[21] at show at <console>:37)
INFO 30-11 22:05:41,554 - Adding task set 4.0 with 1 tasks
INFO 30-11 22:05:41,555 - Starting task 0.0 in stage 4.0 (TID 6, localhost, partition 0,PROCESS_LOCAL, 2546 bytes)
INFO 30-11 22:05:41,555 - Running task 0.0 in stage 4.0 (TID 6)
INFO 30-11 22:05:41,584 - [Executor task launch worker-0][partitionID:table;queryID:10233263048484647] Query will be executed on table: test_table
ERROR 30-11 22:05:41,594 - Exception in task 0.0 in stage 4.0 (TID 6)
java.lang.InterruptedException:
    at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:83)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:171)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
WARN 30-11 22:05:41,607 - Lost task 0.0 in stage 4.0 (TID 6, localhost): java.lang.InterruptedException:
    at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:83)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:171)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

ERROR 30-11 22:05:41,608 - Task 0 in stage 4.0 failed 1 times; aborting job
INFO 30-11 22:05:41,609 - Removed TaskSet 4.0, whose tasks have all completed, from pool
INFO 30-11 22:05:41,612 - Cancelling stage 4
INFO 30-11 22:05:41,614 - ResultStage 4 (show at <console>:37) failed in 0.059 s
INFO 30-11 22:05:41,615 - Job 3 failed: show at <console>:37, took 0.066920 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 6, localhost): java.lang.InterruptedException:
    at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:83)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:171)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:50)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:52)
    at $iwC$$iwC$$iwC.<init>(<console>:54)
    at $iwC$$iwC.<init>(<console>:56)
    at $iwC.<init>(<console>:58)
    at <init>(<console>:60)
    at .<init>(<console>:64)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.InterruptedException:
    at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:83)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:171)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
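
For context: the driver frames show this trace was produced by a DataFrame show() typed into a Spark 1.x shell (show at <console>:37) against the CarbonData table test_table, with the task dying in CarbonRecordReader.initialize before the InterruptedException reached the DAGScheduler. Below is a minimal Scala sketch of the kind of spark-shell session that exercises this path; only the table name and the show() call come from the log, while the store path and the exact SQL text are assumptions.

    // Sketch only: assumes a spark-shell with the CarbonData assembly on the
    // classpath and test_table already created and loaded; `sc` is the
    // SparkContext the shell provides.
    import org.apache.spark.sql.CarbonContext

    // "/tmp/carbondata/store" is a hypothetical store location, not from the log.
    val cc = new CarbonContext(sc, "/tmp/carbondata/store")

    // Scan test_table; show() collects the first rows, which launches the
    // CarbonScanRDD task that fails above in CarbonRecordReader.initialize.
    cc.sql("SELECT * FROM test_table").show()

Since the stage was aborted after a single attempt ("Task 0 in stage 4.0 failed 1 times"), the InterruptedException raised at CarbonRecordReader.java:83 is the root cause to investigate; everything after it is the driver cancelling stage 4 and unwinding the show() call.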