16/05/24 16:55:44 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/05/24 16:55:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (KafkaRDD[74] at createDirectStream at AmazonKafkaConnectorWithMongo.scala:100)
16/05/24 16:55:44 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/05/24 16:55:44 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,ANY, 2024 bytes)
16/05/24 16:55:44 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/05/24 16:55:44 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/05/24 16:55:44 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/05/24 16:55:44 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
16/05/24 16:55:44 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/05/24 16:55:44 INFO TaskSchedulerImpl: Cancelling stage 0
16/05/24 16:55:44 INFO DAGScheduler: ResultStage 0 (runJob at KafkaRDD.scala:98) failed in 0,020 s
16/05/24 16:55:44 INFO DAGScheduler: Job 0 failed: runJob at KafkaRDD.scala:98, took 0,031171 s
16/05/24 16:55:44 INFO JobScheduler: Finished job streaming job 1464101744000 ms.0 from job set of time 1464101744000 ms
16/05/24 16:55:44 INFO JobScheduler: Total delay: 0,051 s for time 1464101744000 ms (execution: 0,042 s)
16/05/24 16:55:44 INFO KafkaRDD: Removing RDD 73 from persistence list
16/05/24 16:55:44 INFO BlockManager: Removing RDD 73
16/05/24 16:55:44 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
16/05/24 16:55:44 INFO InputInfoTracker: remove old batch metadata: 1464101740000 ms
16/05/24 16:55:44 ERROR JobScheduler: Error running job streaming job 1464101744000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.streaming.kafka.KafkaRDD.take(KafkaRDD.scala:98)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:117)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:113)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    ... 3 more
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.streaming.kafka.KafkaRDD.take(KafkaRDD.scala:98)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:117)
    at example.spark.AmazonKafkaConnector$$anonfun$main$1.apply(AmazonKafkaConnectorWithMongo.scala:113)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to [B
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    ... 3 more
16/05/24 16:55:44 INFO StreamingContext: Invoking stop(stopGracefully=false) from shutdown hook
16/05/24 16:55:44 INFO JobGenerator: Stopping JobGenerator immediately
16/05/24 16:55:44 INFO RecurringTimer: Stopped timer for JobGenerator after time 1464101744000
16/05/24 16:55:44 INFO JobGenerator: Stopped JobGenerator
16/05/24 16:55:44 INFO JobScheduler: Stopped JobScheduler
16/05/24 16:55:44 INFO StreamingContext: StreamingContext stopped successfully
16/05/24 16:55:44 INFO SparkContext: Invoking stop() from shutdown hook
16/05/24 16:55:44 INFO SparkUI: Stopped Spark web UI at http://localhost:4041
16/05/24 16:55:44 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/24 16:55:44 INFO MemoryStore: MemoryStore cleared
16/05/24 16:55:44 INFO BlockManager: BlockManager stopped
16/05/24 16:55:44 INFO BlockManagerMaster: BlockManagerMaster stopped
16/05/24 16:55:44 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/24 16:55:44 INFO SparkContext: Successfully stopped SparkContext
16/05/24 16:55:44 INFO SparkContext: Invoking stop() from shutdown hook
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/05/24 16:55:44 INFO SparkUI: Stopped Spark web UI at http://192.168.1.35:4040
16/05/24 16:55:44 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/24 16:55:44 INFO MemoryStore: MemoryStore cleared
16/05/24 16:55:44 INFO BlockManager: BlockManager stopped
16/05/24 16:55:44 INFO BlockManagerMaster: BlockManagerMaster stopped
16/05/24 16:55:44 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/24 16:55:44 INFO SparkContext: Successfully stopped SparkContext
16/05/24 16:55:44 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/05/24 16:55:44 INFO ShutdownHookManager: Shutdown hook called
16/05/24 16:55:44 INFO ShutdownHookManager: Deleting directory /private/var/folders/gn/pzkybyfd2g5bpyh47q0pp5nc0000gn/T/spark-cf3acb3b-c056-4672-9396-596c7469b32a
16/05/24 16:55:44 INFO ShutdownHookManager: Deleting directory /private/var/folders/gn/pzkybyfd2g5bpyh47q0pp5nc0000gn/T/spark-cf3acb3b-c056-4672-9396-596c7469b32a/httpd-e3bb8815-8a12-46fd-baff-64d19edd1187
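Editor's note: the failure here is the ClassCastException "org.apache.spark.util.SerializableConfiguration cannot be cast to [B", thrown while an executor deserializes the task for KafkaRDD.take. A common trigger for this exact symptom is running two SparkContexts in the same JVM (possible only with spark.driver.allowMultipleContexts=true): broadcast variables are numbered per context, so a task can fetch "broadcast 0" belonging to the other context and find a SerializableConfiguration where it expected the serialized task bytes ([B). This log is consistent with that reading: two Spark web UIs are stopped (ports 4040 and 4041) and SparkContext is stopped twice. Below is a minimal sketch of the usual fix, not the original AmazonKafkaConnectorWithMongo.scala source; the object name and package come from the stack trace, while the 4-second batch interval is inferred from the batch timestamps in the log, and the Kafka/Mongo logic is elided.

package example.spark

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object AmazonKafkaConnector {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("AmazonKafkaConnectorWithMongo")
      .setMaster("local[*]")

    // Create exactly ONE SparkContext for the whole driver program...
    val sc = new SparkContext(conf)

    // ...and build the StreamingContext on top of it. Calling
    // new StreamingContext(conf, ...) while another SparkContext is alive
    // starts a second context in the same JVM, and broadcast IDs from the
    // two contexts collide, producing the ClassCastException seen above.
    val ssc = new StreamingContext(sc, Seconds(4))

    // createDirectStream / foreachRDD / Mongo writes from the original job
    // go here, all using sc and ssc rather than constructing new contexts.

    ssc.start()
    ssc.awaitTermination()
  }
}

If a second context really is being created somewhere (for example by a helper that builds its own SQLContext or SparkContext), removing spark.driver.allowMultipleContexts and letting Spark fail fast at startup makes the duplicate construction easy to find.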