vagrant@kmaster:~/spark-2.4.5-bin-hadoop2.7$ bin/spark-submit \
> --master k8s://https://<ip>:<port> \
> --deploy-mode client \
> --name spark-pi \
> --class org.apache.spark.examples.SparkPi \
> --conf spark.executor.instances=1 \
> --conf spark.kubernetes.namespace=spark-native \
> --conf spark.kubernetes.container.image=kmaster:5000/spark:latest \
> --conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
> --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
> local:///home/vagrant/spark-2.4.5-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.5.jar 100
20/04/27 15:41:30 WARN Utils: Your hostname, kmaster resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0)
20/04/27 15:41:30 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/04/27 15:41:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/27 15:41:31 INFO SparkContext: Running Spark version 2.4.5
20/04/27 15:41:31 INFO SparkContext: Submitted application: Spark Pi
20/04/27 15:41:31 INFO SecurityManager: Changing view acls to: vagrant
20/04/27 15:41:31 INFO SecurityManager: Changing modify acls to: vagrant
20/04/27 15:41:31 INFO SecurityManager: Changing view acls groups to:
20/04/27 15:41:31 INFO SecurityManager: Changing modify acls groups to:
20/04/27 15:41:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vagrant); groups with view permissions: Set(); users with modify permissions: Set(vagrant); groups with modify permissions: Set()
20/04/27 15:41:32 INFO Utils: Successfully started service 'sparkDriver' on port 37837.
20/04/27 15:41:32 INFO SparkEnv: Registering MapOutputTracker
20/04/27 15:41:32 INFO SparkEnv: Registering BlockManagerMaster
20/04/27 15:41:32 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/27 15:41:32 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/27 15:41:32 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-f11ebf68-8ac9-4423-a559-47bc5412fea4
20/04/27 15:41:32 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/04/27 15:41:32 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/27 15:41:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/27 15:41:32 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
20/04/27 15:41:32 INFO SparkContext: Added JAR local:///home/vagrant/spark-2.4.5-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.5.jar at file:/home/vagrant/spark-2.4.5-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.5.jar with timestamp 1588027292920
20/04/27 15:41:34 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:41:35 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38133.
20/04/27 15:41:35 INFO NettyBlockTransferService: Server created on 10.0.2.15:38133
20/04/27 15:41:35 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/27 15:41:35 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 38133, None)
20/04/27 15:41:35 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:38133 with 366.3 MB RAM, BlockManagerId(driver, 10.0.2.15, 38133, None)
20/04/27 15:41:35 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 38133, None)
20/04/27 15:41:35 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 38133, None)
20/04/27 15:41:40 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:41:41 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
20/04/27 15:41:41 INFO BlockManagerMaster: Removal of executor 1 requested
20/04/27 15:41:41 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 1
20/04/27 15:41:46 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:41:46 INFO BlockManagerMaster: Removal of executor 2 requested
20/04/27 15:41:46 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 2
20/04/27 15:41:46 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
20/04/27 15:41:53 INFO BlockManagerMaster: Removal of executor 3 requested
20/04/27 15:41:53 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
20/04/27 15:41:53 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 3
20/04/27 15:41:53 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:41:59 INFO BlockManagerMaster: Removal of executor 4 requested
20/04/27 15:41:59 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 4
20/04/27 15:41:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
20/04/27 15:41:59 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:04 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
20/04/27 15:42:04 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
20/04/27 15:42:04 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 100 output partitions
20/04/27 15:42:04 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
20/04/27 15:42:04 INFO DAGScheduler: Parents of final stage: List()
20/04/27 15:42:04 INFO DAGScheduler: Missing parents: List()
20/04/27 15:42:04 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
20/04/27 15:42:04 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 366.3 MB)
20/04/27 15:42:04 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1381.0 B, free 366.3 MB)
20/04/27 15:42:04 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:38133 (size: 1381.0 B, free: 366.3 MB)
20/04/27 15:42:04 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1163
20/04/27 15:42:04 INFO DAGScheduler: Submitting 100 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
20/04/27 15:42:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 100 tasks
20/04/27 15:42:05 INFO BlockManagerMaster: Removal of executor 5 requested
20/04/27 15:42:05 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 5
20/04/27 15:42:05 INFO BlockManagerMasterEndpoint: Trying to remove executor 5 from BlockManagerMaster.
20/04/27 15:42:05 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:13 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:13 INFO BlockManagerMaster: Removal of executor 6 requested
20/04/27 15:42:13 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 6
20/04/27 15:42:13 INFO BlockManagerMasterEndpoint: Trying to remove executor 6 from BlockManagerMaster.
20/04/27 15:42:19 INFO BlockManagerMaster: Removal of executor 7 requested
20/04/27 15:42:19 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 7
20/04/27 15:42:19 INFO BlockManagerMasterEndpoint: Trying to remove executor 7 from BlockManagerMaster.
20/04/27 15:42:19 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:20 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/04/27 15:42:26 INFO BlockManagerMaster: Removal of executor 8 requested
20/04/27 15:42:26 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 8
20/04/27 15:42:26 INFO BlockManagerMasterEndpoint: Trying to remove executor 8 from BlockManagerMaster.
20/04/27 15:42:27 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:33 INFO BlockManagerMaster: Removal of executor 9 requested
20/04/27 15:42:33 INFO BlockManagerMasterEndpoint: Trying to remove executor 9 from BlockManagerMaster.
20/04/27 15:42:33 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 9
20/04/27 15:42:33 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/04/27 15:42:39 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:39 INFO BlockManagerMasterEndpoint: Trying to remove executor 10 from BlockManagerMaster.
20/04/27 15:42:39 INFO BlockManagerMaster: Removal of executor 10 requested
20/04/27 15:42:39 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 10
20/04/27 15:42:45 INFO BlockManagerMaster: Removal of executor 11 requested
20/04/27 15:42:45 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 11
20/04/27 15:42:45 INFO BlockManagerMasterEndpoint: Trying to remove executor 11 from BlockManagerMaster.
20/04/27 15:42:45 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:50 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/04/27 15:42:52 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:42:52 INFO BlockManagerMaster: Removal of executor 12 requested
20/04/27 15:42:52 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 12
20/04/27 15:42:52 INFO BlockManagerMasterEndpoint: Trying to remove executor 12 from BlockManagerMaster.
20/04/27 15:42:58 INFO BlockManagerMaster: Removal of executor 13 requested
20/04/27 15:42:58 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 13
20/04/27 15:42:58 INFO BlockManagerMasterEndpoint: Trying to remove executor 13 from BlockManagerMaster.
20/04/27 15:42:58 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:43:05 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/04/27 15:43:05 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:43:05 INFO BlockManagerMaster: Removal of executor 14 requested
20/04/27 15:43:05 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 14
20/04/27 15:43:05 INFO BlockManagerMasterEndpoint: Trying to remove executor 14 from BlockManagerMaster.
20/04/27 15:43:11 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
20/04/27 15:43:11 INFO BlockManagerMaster: Removal of executor 15 requested
20/04/27 15:43:11 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 15
20/04/27 15:43:11 INFO BlockManagerMasterEndpoint: Trying to remove executor 15 from BlockManagerMaster.
^C20/04/27 15:43:14 INFO SparkContext: Invoking stop() from shutdown hook
20/04/27 15:43:14 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
20/04/27 15:43:14 INFO DAGScheduler: Job 0 failed: reduce at SparkPi.scala:38, took 70.120058 s
Exception in thread "main" org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:933)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:931)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:931)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2130)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2043)
	at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
	at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:575)
	at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
	at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1080)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.reduce(RDD.scala:1062)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:38)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/04/27 15:43:14 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) failed in 70.002 s due to Stage cancelled because SparkContext was shut down
20/04/27 15:43:14 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
20/04/27 15:43:14 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
20/04/27 15:43:14 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
20/04/27 15:43:14 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/04/27 15:43:14 INFO MemoryStore: MemoryStore cleared
20/04/27 15:43:14 INFO BlockManager: BlockManager stopped
20/04/27 15:43:14 INFO BlockManagerMaster: BlockManagerMaster stopped
20/04/27 15:43:14 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/04/27 15:43:14 INFO SparkContext: Successfully stopped SparkContext
20/04/27 15:43:14 INFO ShutdownHookManager: Shutdown hook called
20/04/27 15:43:14 INFO ShutdownHookManager: Deleting directory /tmp/spark-acbb97fe-bbf5-4b65-98d0-a66de228676b
20/04/27 15:43:14 INFO ShutdownHookManager: Deleting directory /tmp/spark-8a83c8a6-1162-4c31-a2f6-178b5835a476
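What the log shows: the driver requests an executor pod every few seconds, each one disappears before it ever registers ("Asked to remove non-existent executor N"), so no task runs and the job is eventually interrupted with Ctrl-C. One plausible cause in this setup is client-mode networking: the driver bound to 10.0.2.15 (the NAT address of the Vagrant VM, per the hostname warning at the top), an address executor pods inside the cluster typically cannot reach, so the executor JVMs die while trying to connect back. A diagnostic sketch, assuming the namespace from the command above (spark-native); the pod name pattern is illustrative and the driver address/port values are placeholders to adapt to your cluster:

```shell
# Watch the executor pods; rapid create/Error/delete cycles mean they
# crash on startup rather than failing to schedule.
kubectl get pods -n spark-native -w

# Read the log of a crashed executor pod (use the name from the output
# above); --previous shows the terminated container's output.
kubectl logs -n spark-native <executor-pod-name> --previous

# In client mode, the executors must be able to reach the driver.
# Either give the driver an address routable from the pods, e.g.:
bin/spark-submit \
  --master k8s://https://<ip>:<port> \
  --deploy-mode client \
  --conf spark.driver.host=<address-reachable-from-pods> \
  --conf spark.driver.port=35000 \
  ... # remaining options as in the original command

# ...or sidestep the problem entirely by using --deploy-mode cluster,
# where the driver itself runs as a pod inside the cluster network.
```

The `spark.driver.host`/`spark.driver.port` properties are standard Spark configuration; the Spark-on-Kubernetes documentation for 2.4.x specifically warns that client-mode executors need a routable path back to the driver (commonly provided via a headless service when the driver runs in-cluster).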