Spark container log

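ApplicationMaster (driver) container log from a run of the SparkPi example on Spark 1.4.1, submitted to a single-node YARN cluster in yarn-cluster mode on 30 July 2015. The cluster runs on host bigdatavm (192.168.33.10, apparently a Vagrant VM), the submitting user is tomaszg, and the application starts two single-core executors, runs one two-task job, and finishes with status SUCCEEDED.
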
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/30 07:44:53 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/07/30 07:44:54 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1438241505028_0001_000001
15/07/30 07:44:55 INFO SecurityManager: Changing view acls to: vagrant,tomaszg
15/07/30 07:44:55 INFO SecurityManager: Changing modify acls to: vagrant,tomaszg
15/07/30 07:44:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vagrant, tomaszg); users with modify permissions: Set(vagrant, tomaszg)
15/07/30 07:44:56 INFO ApplicationMaster: Starting the user application in a separate Thread
15/07/30 07:44:56 INFO ApplicationMaster: Waiting for spark context initialization
15/07/30 07:44:56 INFO ApplicationMaster: Waiting for spark context initialization ...
15/07/30 07:44:56 INFO SparkContext: Running Spark version 1.4.1
15/07/30 07:44:56 INFO SecurityManager: Changing view acls to: vagrant,tomaszg
15/07/30 07:44:56 INFO SecurityManager: Changing modify acls to: vagrant,tomaszg
15/07/30 07:44:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vagrant, tomaszg); users with modify permissions: Set(vagrant, tomaszg)
15/07/30 07:44:57 INFO Slf4jLogger: Slf4jLogger started
15/07/30 07:44:57 INFO Remoting: Starting remoting
15/07/30 07:44:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.33.10:52362]
15/07/30 07:44:58 INFO Utils: Successfully started service 'sparkDriver' on port 52362.
15/07/30 07:44:58 INFO SparkEnv: Registering MapOutputTracker
15/07/30 07:44:58 INFO SparkEnv: Registering BlockManagerMaster
15/07/30 07:44:58 INFO DiskBlockManager: Created local directory at /tmp/hadoop-vagrant/nm-local-dir/usercache/tomaszg/appcache/application_1438241505028_0001/blockmgr-cb5664f6-b5ba-477f-a037-54cbf7264c14
15/07/30 07:44:58 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
15/07/30 07:44:58 INFO HttpFileServer: HTTP File server directory is /tmp/hadoop-vagrant/nm-local-dir/usercache/tomaszg/appcache/application_1438241505028_0001/httpd-019af6e1-b398-4db7-bd9a-ba6cbf3cfab2
15/07/30 07:44:58 INFO HttpServer: Starting HTTP Server
15/07/30 07:44:58 INFO Utils: Successfully started service 'HTTP file server' on port 48026.
15/07/30 07:44:58 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/30 07:44:58 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/07/30 07:44:58 INFO Utils: Successfully started service 'SparkUI' on port 55847.
15/07/30 07:44:58 INFO SparkUI: Started SparkUI at http://192.168.33.10:55847
15/07/30 07:44:58 INFO YarnClusterScheduler: Created YarnClusterScheduler
15/07/30 07:44:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33269.
15/07/30 07:44:59 INFO NettyBlockTransferService: Server created on 33269
15/07/30 07:44:59 INFO BlockManagerMaster: Trying to register BlockManager
15/07/30 07:44:59 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.33.10:33269 with 267.3 MB RAM, BlockManagerId(driver, 192.168.33.10, 33269)
15/07/30 07:44:59 INFO BlockManagerMaster: Registered BlockManager
15/07/30 07:44:59 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#1935548891])
15/07/30 07:44:59 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
15/07/30 07:44:59 INFO YarnRMClient: Registering the ApplicationMaster
15/07/30 07:44:59 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/07/30 07:44:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/07/30 07:44:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/07/30 07:44:59 INFO ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/07/30 07:45:04 INFO AMRMClientImpl: Received new token for : bigdatavm:58620
15/07/30 07:45:04 INFO YarnAllocator: Launching container container_1438241505028_0001_01_000002 for on host bigdatavm
15/07/30 07:45:04 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@192.168.33.10:52362/user/CoarseGrainedScheduler, executorHostname: bigdatavm
15/07/30 07:45:04 INFO YarnAllocator: Launching container container_1438241505028_0001_01_000003 for on host bigdatavm
15/07/30 07:45:04 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@192.168.33.10:52362/user/CoarseGrainedScheduler, executorHostname: bigdatavm
15/07/30 07:45:04 INFO ExecutorRunnable: Starting Executor Container
15/07/30 07:45:04 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/07/30 07:45:04 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/07/30 07:45:04 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
15/07/30 07:45:04 INFO ExecutorRunnable: Starting Executor Container
15/07/30 07:45:04 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/07/30 07:45:04 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/07/30 07:45:04 INFO ExecutorRunnable: Preparing Local resources
15/07/30 07:45:04 INFO ExecutorRunnable: Preparing Local resources
15/07/30 07:45:05 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "bigdatavm" port: 9000 file: "/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-examples-1.4.1-hadoop2.6.0.jar" } size: 105738089 timestamp: 1438242286737 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "bigdatavm" port: 9000 file: "/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-assembly-1.4.1-hadoop2.6.0.jar" } size: 162976273 timestamp: 1438242284037 type: FILE visibility: PRIVATE)
15/07/30 07:45:05 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "bigdatavm" port: 9000 file: "/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-examples-1.4.1-hadoop2.6.0.jar" } size: 105738089 timestamp: 1438242286737 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "bigdatavm" port: 9000 file: "/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-assembly-1.4.1-hadoop2.6.0.jar" } size: 162976273 timestamp: 1438242284037 type: FILE visibility: PRIVATE)
15/07/30 07:45:05 INFO ExecutorRunnable: Setting up executor with environment: Map(CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*, SPARK_LOG_URL_STDERR -> http://bigdatavm:8042/node/containerlogs/container_1438241505028_0001_01_000003/tomaszg/stderr?start=0, SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1438241505028_0001, SPARK_YARN_CACHE_FILES_FILE_SIZES -> 162976273,105738089, SPARK_USER -> tomaszg, SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE, SPARK_YARN_MODE -> true, SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1438242284037,1438242286737, SPARK_LOG_URL_STDOUT -> http://bigdatavm:8042/node/containerlogs/container_1438241505028_0001_01_000003/tomaszg/stdout?start=0, SPARK_YARN_CACHE_FILES -> hdfs://bigdatavm:9000/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-assembly-1.4.1-hadoop2.6.0.jar#__spark__.jar,hdfs://bigdatavm:9000/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-examples-1.4.1-hadoop2.6.0.jar#__app__.jar)
15/07/30 07:45:05 INFO ExecutorRunnable: Setting up executor with commands: List({{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m, -Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=52362', '-Dspark.ui.port=0', -Dspark.yarn.app.container.log.dir=<LOG_DIR>, org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url, akka.tcp://sparkDriver@192.168.33.10:52362/user/CoarseGrainedScheduler, --executor-id, 2, --hostname, bigdatavm, --cores, 1, --app-id, application_1438241505028_0001, --user-class-path, file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
15/07/30 07:45:05 INFO ContainerManagementProtocolProxy: Opening proxy : bigdatavm:58620
15/07/30 07:45:05 INFO ExecutorRunnable: Setting up executor with environment: Map(CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*, SPARK_LOG_URL_STDERR -> http://bigdatavm:8042/node/containerlogs/container_1438241505028_0001_01_000002/tomaszg/stderr?start=0, SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1438241505028_0001, SPARK_YARN_CACHE_FILES_FILE_SIZES -> 162976273,105738089, SPARK_USER -> tomaszg, SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE, SPARK_YARN_MODE -> true, SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1438242284037,1438242286737, SPARK_LOG_URL_STDOUT -> http://bigdatavm:8042/node/containerlogs/container_1438241505028_0001_01_000002/tomaszg/stdout?start=0, SPARK_YARN_CACHE_FILES -> hdfs://bigdatavm:9000/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-assembly-1.4.1-hadoop2.6.0.jar#__spark__.jar,hdfs://bigdatavm:9000/user/tomaszg/.sparkStaging/application_1438241505028_0001/spark-examples-1.4.1-hadoop2.6.0.jar#__app__.jar)
15/07/30 07:45:05 INFO ExecutorRunnable: Setting up executor with commands: List({{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m, -Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=52362', '-Dspark.ui.port=0', -Dspark.yarn.app.container.log.dir=<LOG_DIR>, org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url, akka.tcp://sparkDriver@192.168.33.10:52362/user/CoarseGrainedScheduler, --executor-id, 1, --hostname, bigdatavm, --cores, 1, --app-id, application_1438241505028_0001, --user-class-path, file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
15/07/30 07:45:05 INFO ContainerManagementProtocolProxy: Opening proxy : bigdatavm:58620
15/07/30 07:45:14 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. bigdatavm:46937
15/07/30 07:45:15 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@bigdatavm:42032/user/Executor#491554388]) with ID 1
15/07/30 07:45:15 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. bigdatavm:46529
15/07/30 07:45:16 INFO BlockManagerMasterEndpoint: Registering block manager bigdatavm:41172 with 534.5 MB RAM, BlockManagerId(1, bigdatavm, 41172)
15/07/30 07:45:16 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@bigdatavm:60352/user/Executor#-152283894]) with ID 2
15/07/30 07:45:16 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
15/07/30 07:45:16 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
15/07/30 07:45:17 INFO BlockManagerMasterEndpoint: Registering block manager bigdatavm:40659 with 534.5 MB RAM, BlockManagerId(2, bigdatavm, 40659)
15/07/30 07:45:17 INFO SparkContext: Starting job: reduce at SparkPi.scala:35
15/07/30 07:45:17 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 2 output partitions (allowLocal=false)
15/07/30 07:45:17 INFO DAGScheduler: Final stage: ResultStage 0(reduce at SparkPi.scala:35)
15/07/30 07:45:17 INFO DAGScheduler: Parents of final stage: List()
15/07/30 07:45:17 INFO DAGScheduler: Missing parents: List()
15/07/30 07:45:17 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/07/30 07:45:17 INFO MemoryStore: ensureFreeSpace(1888) called with curMem=0, maxMem=280248975
15/07/30 07:45:17 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1888.0 B, free 267.3 MB)
15/07/30 07:45:17 INFO MemoryStore: ensureFreeSpace(1202) called with curMem=1888, maxMem=280248975
15/07/30 07:45:17 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 267.3 MB)
15/07/30 07:45:17 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.33.10:33269 (size: 1202.0 B, free: 267.3 MB)
15/07/30 07:45:17 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
15/07/30 07:45:17 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31)
15/07/30 07:45:17 INFO YarnClusterScheduler: Adding task set 0.0 with 2 tasks
15/07/30 07:45:17 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, bigdatavm, PROCESS_LOCAL, 1369 bytes)
15/07/30 07:45:17 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, bigdatavm, PROCESS_LOCAL, 1369 bytes)
15/07/30 07:45:18 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on bigdatavm:40659 (size: 1202.0 B, free: 534.5 MB)
15/07/30 07:45:18 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on bigdatavm:41172 (size: 1202.0 B, free: 534.5 MB)
15/07/30 07:45:18 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1094 ms on bigdatavm (1/2)
15/07/30 07:45:18 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:35) finished in 1.100 s
15/07/30 07:45:18 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1072 ms on bigdatavm (2/2)
15/07/30 07:45:18 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:35, took 1.419507 s
15/07/30 07:45:18 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/07/30 07:45:18 INFO SparkUI: Stopped Spark web UI at http://192.168.33.10:55847
15/07/30 07:45:18 INFO DAGScheduler: Stopping DAGScheduler
15/07/30 07:45:18 INFO YarnClusterSchedulerBackend: Shutting down all executors
15/07/30 07:45:18 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
15/07/30 07:45:18 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/07/30 07:45:18 INFO MemoryStore: MemoryStore cleared
15/07/30 07:45:18 INFO BlockManager: BlockManager stopped
15/07/30 07:45:18 INFO BlockManagerMaster: BlockManagerMaster stopped
15/07/30 07:45:18 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/07/30 07:45:18 INFO SparkContext: Successfully stopped SparkContext
15/07/30 07:45:19 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/07/30 07:45:19 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/07/30 07:45:19 INFO ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/07/30 07:45:19 INFO ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/07/30 07:45:19 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
15/07/30 07:45:19 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/07/30 07:45:19 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1438241505028_0001
15/07/30 07:45:19 INFO Utils: Shutdown hook called
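
The job recorded above is Spark's bundled SparkPi example: the stage references SparkPi.scala:31 (the map) and SparkPi.scala:35 (the reduce), and the staged __app__.jar is spark-examples-1.4.1-hadoop2.6.0.jar. SparkPi estimates pi by Monte Carlo sampling: the map step throws random points into the unit square and the reduce step counts how many land inside the unit circle. The Scala sketch below is a simplified reconstruction of that computation, not the exact source shipped in the examples jar; its two partitions (slices) correspond to the two tasks of ResultStage 0 in the log, and the 1408 MB container requests are the 1024 MB executor heap visible in the launch command (-Xms1024m/-Xmx1024m) plus YARN's 384 MB memory overhead.

import scala.math.random
import org.apache.spark.{SparkConf, SparkContext}

// Monte Carlo estimate of pi, modelled on the SparkPi example
// (simplified sketch, not the shipped source).
object SparkPiSketch {
  def main(args: Array[String]): Unit = {
    val spark  = new SparkContext(new SparkConf().setAppName("Spark Pi"))
    val slices = if (args.length > 0) args(0).toInt else 2  // 2 partitions -> the 2 tasks in the log
    val n      = 100000 * slices                            // number of random points to draw

    // map: draw points in [-1, 1] x [-1, 1] and flag those inside the unit circle
    val count = spark.parallelize(1 to n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)                                         // reduce: total hits across partitions

    println(s"Pi is roughly ${4.0 * count / n}")            // hits / n approximates pi / 4
    spark.stop()
  }
}

A submission consistent with this log would look roughly like: spark-submit --master yarn-cluster --class org.apache.spark.examples.SparkPi --num-executors 2 --executor-cores 1 --executor-memory 1g spark-examples-1.4.1-hadoop2.6.0.jar 2. The actual command is not part of the log, so the exact flags are an assumption.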