aironman

update amazon spark-streaming

May 25th, 2016
16/05/25 11:30:30 INFO CheckpointWriter: Checkpoint for time 1464168630000 ms saved to file 'file:/Users/aironman/my-recommendation-spark-engine/checkpoint/checkpoint-1464168630000', took 5928 bytes and 8 ms
16/05/25 11:30:30 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 1041 bytes result sent to driver
16/05/25 11:30:30 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 4 ms on localhost (1/1)
16/05/25 11:30:30 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
16/05/25 11:30:30 INFO DAGScheduler: ResultStage 2 (runJob at KafkaRDD.scala:98) finished in 0,004 s
16/05/25 11:30:30 INFO DAGScheduler: Job 2 finished: runJob at KafkaRDD.scala:98, took 0,008740 s
<------>
someMessages is [Lscala.Tuple2;@2641d687
(null,{"userId":"someUserId","productId":"0981531679","rating":6.0})
<------>
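
A side note on the two debug lines above: "[Lscala.Tuple2;@2641d687" is not the message payload, it is the JVM's default toString for an Array, which Scala string interpolation falls back to. The second line is what one element actually looks like: a null Kafka key plus the JSON rating. A minimal sketch of the difference, reusing the someMessages name and payload from the paste (the surrounding object is assumed):

<------>
object PrintMessagesSketch {
  def main(args: Array[String]): Unit = {
    // Payload copied from the paste; the Kafka key arrived as null.
    val someMessages: Array[(String, String)] = Array(
      (null, """{"userId":"someUserId","productId":"0981531679","rating":6.0}""")
    )
    // Scala Arrays are raw Java arrays, so interpolation prints the identity hash.
    println(s"someMessages is $someMessages")                  // [Lscala.Tuple2;@...
    // mkString (or foreach(println)) shows the actual contents.
    println(s"someMessages is ${someMessages.mkString(", ")}") // (null,{"userId":...})
  }
}
<------>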
<---POSSIBLE SOLUTION--->
16/05/25 11:30:30 INFO JobScheduler: Finished job streaming job 1464168630000 ms.0 from job set of time 1464168630000 ms
16/05/25 11:30:30 INFO KafkaRDD: Removing RDD 105 from persistence list
16/05/25 11:30:30 INFO JobScheduler: Total delay: 0,020 s for time 1464168630000 ms (execution: 0,012 s)
16/05/25 11:30:30 ERROR JobScheduler: Error running job streaming job 1464168630000 ms.0
java.lang.IllegalStateException: Adding new inputs, transformations, and output operations after starting a context is not supported
    at org.apache.spark.streaming.dstream.DStream.validateAtInit(DStream.scala:222)
    at org.apache.spark.streaming.dstream.DStream.<init>(DStream.scala:64)
    at org.apache.spark.streaming.dstream.MappedDStream.<init>(MappedDStream.scala:25)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:558)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:558)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
    at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:260)
    at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:557)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:125)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:114)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/05/25 11:30:30 INFO BlockManager: Removing RDD 105
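
What the stack trace is saying: at AmazonKafkaConnectorWithMongo.scala:125, a new DStream transformation (DStream.map, DStream.scala:557) is being built inside the foreachRDD closure registered at line 114. foreachRDD bodies run on the driver after StreamingContext.start(), and the DStream graph is frozen once the context starts, hence the IllegalStateException on every batch. The fix is to declare all DStream transformations before start() and only use RDD operations inside foreachRDD. Below is a minimal sketch of the anti-pattern and the fix, assuming a Kafka direct stream with placeholder broker and topic names that are not in the paste:

<---POSSIBLE SOLUTION (sketch)--->
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object AmazonKafkaConnectorSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("amazon-kafka-connector").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("checkpoint") // matches the CheckpointWriter lines above

    // Assumed broker/topic names, for illustration only.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("amazonRatingsTopic"))

    // WRONG: building a new DStream inside foreachRDD. The closure runs after
    // ssc.start(), when the graph is frozen, so messages.map(...) here throws
    // the IllegalStateException seen in the log.
    // messages.foreachRDD { rdd =>
    //   val ratings = messages.map(_._2) // DStream op at runtime -> exception
    // }

    // RIGHT: declare every DStream transformation before start(), then work
    // only on the RDD handed to foreachRDD.
    val ratings = messages.map(_._2) // drop the null Kafka key, keep the JSON
    ratings.foreachRDD { rdd =>
      rdd.foreach(println) // e.g. parse the rating and write to MongoDB here
    }

    ssc.start() // the transformation graph is fixed from this point on
    ssc.awaitTermination()
  }
}
<------>

With the map moved out of the closure, each batch replays the same fixed graph and the job set for each interval should complete without the JobScheduler error.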