Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange rangepartitioning(merchant_id#8 ASC,200), None
+- ConvertToSafe
   +- TungstenAggregate(key=[city#5,category#4,merchant_id#8,timestamp#12], functions=[], output=[city#5,timestamp#12,merchant_id#8])
      +- TungstenExchange hashpartitioning(city#5,category#4,merchant_id#8,timestamp#12,200), None
         +- TungstenAggregate(key=[city#5,category#4,merchant_id#8,timestamp#12], functions=[], output=[city#5,category#4,merchant_id#8,timestamp#12])
            +- Project [city#5,category#4,merchant_id#8,timestamp#12]
               +- Filter NOT (merchant_id#8 = )
                  +- Scan ExistingRDD[entity_id#0,is_OMS_jingpin#1,wlt_ico#2,subcategory#3,category#4,city#5,is_OMS_ding#6,is_famousCompany#7,merchant_id#8,pageno#9,url#10,position#11,timestamp#12,hour#13]

    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
    at org.apache.spark.sql.execution.Exchange.doExecute(Exchange.scala:247)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.ConvertToUnsafe.doExecute(rowFormatConverters.scala:38)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Sort.doExecute(Sort.scala:64)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:46)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.apply(TungstenAggregate.scala:86)
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.apply(TungstenAggregate.scala:80)
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
    ... 54 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 8, ip-172-31-44-106.us-west-2.compute.internal): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 5.5 GB of 5.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
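
The last "Caused by" is the real failure: YARN killed the executor container because heap plus off-heap usage hit the 5.5 GB container limit, and the message itself suggests raising spark.yarn.executor.memoryOverhead. Below is a minimal sketch of one way to apply that suggestion, assuming a Spark 1.6-era job on YARN (the operators in the plan, TungstenAggregate and ConvertToSafe, are from that generation). The property name is quoted from the message; the app name, 5g heap, and 1024 MB overhead are illustrative assumptions, not values from the paste.

// Sketch only: raises the YARN memory overhead the error message suggests.
// spark.yarn.executor.memoryOverhead is taken from the message above;
// the concrete sizes and app name are assumptions for illustration.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object MemoryOverheadExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("merchant-agg")                        // hypothetical app name
      .set("spark.executor.memory", "5g")                // assumed executor heap
      .set("spark.yarn.executor.memoryOverhead", "1024") // MB of off-heap headroom for the YARN container
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // ... run the aggregation/sort job that previously exceeded the container limit ...
    sc.stop()
  }
}

The same setting can be passed without touching code, e.g. spark-submit --conf spark.yarn.executor.memoryOverhead=1024 ... (value in MB); in Spark 1.6 the default overhead is only about 10% of executor memory, which Tungsten's off-heap allocations can exceed.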