Spark LogFile

  1. 16/04/08 12:23:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  2. 16/04/08 12:23:32 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
  3. 16/04/08 12:23:33 INFO Client: Requesting a new application from cluster with 1 NodeManagers
  4. 16/04/08 12:23:33 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
  5. 16/04/08 12:23:33 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
  6. 16/04/08 12:23:33 INFO Client: Setting up container launch context for our AM
  7. 16/04/08 12:23:33 INFO Client: Setting up the launch environment for our AM container
  8. 16/04/08 12:23:34 INFO Client: Preparing resources for our AM container
  9. 16/04/08 12:23:36 INFO Client: Uploading resource file:/usr/local/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar
  10. 16/04/08 12:23:44 INFO Client: Uploading resource file:/usr/local/sparkapps/WordCount/target/scala-2.10/scalawordcount_2.10-1.0.jar -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar
  11. 16/04/08 12:23:45 INFO Client: Uploading resource file:/tmp/spark-46d2564e-43c2-4833-a682-91ff617f65e5/__spark_conf__2355479738370329692.zip -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/__spark_conf__2355479738370329692.zip
  12. 16/04/08 12:23:45 INFO SecurityManager: Changing view acls to: hduser
  13. 16/04/08 12:23:45 INFO SecurityManager: Changing modify acls to: hduser
  14. 16/04/08 12:23:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
  15. 16/04/08 12:23:46 INFO Client: Submitting application 3 to ResourceManager
  16. 16/04/08 12:23:46 INFO YarnClientImpl: Submitted application application_1460107053907_0003
  17. 16/04/08 12:23:47 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  18. 16/04/08 12:23:47 INFO Client:
  19. client token: N/A
  20. diagnostics: N/A
  21. ApplicationMaster host: N/A
  22. ApplicationMaster RPC port: -1
  23. queue: default
  24. start time: 1460111026395
  25. final status: UNDEFINED
  26. tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
  27. user: hduser
  28. 16/04/08 12:23:48 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  29. 16/04/08 12:23:49 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  30. 16/04/08 12:23:50 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  31. 16/04/08 12:23:51 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  32. 16/04/08 12:23:52 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  33. 16/04/08 12:23:53 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
  34. 16/04/08 12:23:53 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  35. 16/04/08 12:23:54 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  36. 16/04/08 12:23:55 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1460107053907_0003_000001
  37. 16/04/08 12:23:55 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  38. 16/04/08 12:23:56 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  39. 16/04/08 12:23:57 INFO SecurityManager: Changing view acls to: hduser
  40. 16/04/08 12:23:57 INFO SecurityManager: Changing modify acls to: hduser
  41. 16/04/08 12:23:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
  42. 16/04/08 12:23:57 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  43. 16/04/08 12:23:58 INFO ApplicationMaster: Starting the user application in a separate Thread
  44. 16/04/08 12:23:58 INFO ApplicationMaster: Waiting for spark context initialization
  45. 16/04/08 12:23:58 INFO ApplicationMaster: Waiting for spark context initialization ...
  46. 16/04/08 12:23:58 INFO SparkContext: Running Spark version 1.6.1
  47. 16/04/08 12:23:58 WARN Utils: Your hostname, debian resolves to a loopback address: 127.0.0.1; using 192.168.1.55 instead (on interface eth0)
  48. 16/04/08 12:23:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
  49. 16/04/08 12:23:58 INFO SecurityManager: Changing view acls to: hduser
  50. 16/04/08 12:23:58 INFO SecurityManager: Changing modify acls to: hduser
  51. 16/04/08 12:23:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
  52. 16/04/08 12:23:58 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  53. 16/04/08 12:23:59 INFO Utils: Successfully started service 'sparkDriver' on port 49937.
  54. 16/04/08 12:23:59 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  55. 16/04/08 12:24:00 INFO Slf4jLogger: Slf4jLogger started
  56. 16/04/08 12:24:00 INFO Remoting: Starting remoting
  57. 16/04/08 12:24:00 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  58. 16/04/08 12:24:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.55:39971]
  59. 16/04/08 12:24:01 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39971.
  60. 16/04/08 12:24:01 INFO SparkEnv: Registering MapOutputTracker
  61. 16/04/08 12:24:01 INFO SparkEnv: Registering BlockManagerMaster
  62. 16/04/08 12:24:01 INFO DiskBlockManager: Created local directory at /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/blockmgr-c5dff622-033d-4462-b0ca-74291d4b631f
  63. 16/04/08 12:24:01 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
  64. 16/04/08 12:24:01 INFO SparkEnv: Registering OutputCommitCoordinator
  65. 16/04/08 12:24:02 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  66. 16/04/08 12:24:02 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
  67. 16/04/08 12:24:03 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  68. 16/04/08 12:24:03 INFO Utils: Successfully started service 'SparkUI' on port 45602.
  69. 16/04/08 12:24:03 INFO SparkUI: Started SparkUI at http://192.168.1.55:45602
  70. 16/04/08 12:24:03 INFO YarnClusterScheduler: Created YarnClusterScheduler
  71. 16/04/08 12:24:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58739.
  72. 16/04/08 12:24:03 INFO NettyBlockTransferService: Server created on 58739
  73. 16/04/08 12:24:03 INFO BlockManagerMaster: Trying to register BlockManager
  74. 16/04/08 12:24:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:58739 with 517.4 MB RAM, BlockManagerId(driver, 192.168.1.55, 58739)
  75. 16/04/08 12:24:03 INFO BlockManagerMaster: Registered BlockManager
  76. 16/04/08 12:24:04 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  77. 16/04/08 12:24:04 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@192.168.1.55:49937)
  78. 16/04/08 12:24:04 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
  79. 16/04/08 12:24:04 INFO YarnRMClient: Registering the ApplicationMaster
  80. 16/04/08 12:24:05 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  81. 16/04/08 12:24:05 INFO Client:
  82. client token: N/A
  83. diagnostics: N/A
  84. ApplicationMaster host: 192.168.1.55
  85. ApplicationMaster RPC port: 0
  86. queue: default
  87. start time: 1460111026395
  88. final status: UNDEFINED
  89. tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
  90. user: hduser
  91. 16/04/08 12:24:05 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
  92. 16/04/08 12:24:05 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
  93. 16/04/08 12:24:05 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
  94. 16/04/08 12:24:05 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
  95. 16/04/08 12:24:06 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  96. 16/04/08 12:24:06 INFO AMRMClientImpl: Received new token for : localhost:45967
  97. 16/04/08 12:24:06 INFO YarnAllocator: Launching container container_1460107053907_0003_01_000002 for on host localhost
  98. 16/04/08 12:24:06 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.1.55:49937, executorHostname: localhost
  99. 16/04/08 12:24:06 INFO ExecutorRunnable: Starting Executor Container
  100. 16/04/08 12:24:06 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
  101. 16/04/08 12:24:06 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
  102. 16/04/08 12:24:06 INFO ExecutorRunnable: Setting up ContainerLaunchContext
  103. 16/04/08 12:24:06 INFO ExecutorRunnable: Preparing Local resources
  104. 16/04/08 12:24:07 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  105. 16/04/08 12:24:07 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
  106. 16/04/08 12:24:07 INFO ExecutorRunnable:
  107. ===============================================================================
  108. YARN executor launch context:
  109. env:
  110. CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  111. SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000002/hduser/stderr?start=-4096
  112. SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
  113. SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
  114. SPARK_USER -> hduser
  115. SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
  116. SPARK_YARN_MODE -> true
  117. SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
  118. SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000002/hduser/stdout?start=-4096
  119. SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  120.  
  121. command:
  122. {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49937' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.1.55:49937 --executor-id 1 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
  123. ===============================================================================
  124.  
  125. 16/04/08 12:24:07 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
  126. 16/04/08 12:24:08 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  127. 16/04/08 12:24:08 INFO YarnAllocator: Launching container container_1460107053907_0003_01_000003 for on host localhost
  128. 16/04/08 12:24:08 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.1.55:49937, executorHostname: localhost
  129. 16/04/08 12:24:08 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
  130. 16/04/08 12:24:08 INFO ExecutorRunnable: Starting Executor Container
  131. 16/04/08 12:24:08 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
  132. 16/04/08 12:24:08 INFO ExecutorRunnable: Setting up ContainerLaunchContext
  133. 16/04/08 12:24:08 INFO ExecutorRunnable: Preparing Local resources
  134. 16/04/08 12:24:08 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
  135. 16/04/08 12:24:08 INFO ExecutorRunnable:
  136. ===============================================================================
  137. YARN executor launch context:
  138. env:
  139. CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  140. SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000003/hduser/stderr?start=-4096
  141. SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
  142. SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
  143. SPARK_USER -> hduser
  144. SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
  145. SPARK_YARN_MODE -> true
  146. SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
  147. SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000003/hduser/stdout?start=-4096
  148. SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  149.  
  150. command:
  151. {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49937' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.1.55:49937 --executor-id 2 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
  152. ===============================================================================
  153.  
  154. 16/04/08 12:24:08 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
  155. 16/04/08 12:24:09 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  156. 16/04/08 12:24:10 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  157. 16/04/08 12:24:11 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  158. 16/04/08 12:24:11 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
  159. 16/04/08 12:24:12 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  160. 16/04/08 12:24:13 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  161. 16/04/08 12:24:14 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  162. 16/04/08 12:24:15 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  163. 16/04/08 12:24:16 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  164. 16/04/08 12:24:17 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  165. 16/04/08 12:24:18 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  166. 16/04/08 12:24:19 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  167. 16/04/08 12:24:20 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  168. 16/04/08 12:24:21 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  169. 16/04/08 12:24:22 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  170. 16/04/08 12:24:23 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  171. 16/04/08 12:24:24 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  172. 16/04/08 12:24:25 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  173. 16/04/08 12:24:26 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  174. 16/04/08 12:24:27 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  175. 16/04/08 12:24:28 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  176. 16/04/08 12:24:29 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  177. 16/04/08 12:24:30 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  178. 16/04/08 12:24:31 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  179. 16/04/08 12:24:33 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  180. 16/04/08 12:24:33 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
  181. 16/04/08 12:24:33 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
  182. 16/04/08 12:24:34 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  183. 16/04/08 12:24:35 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  184. 16/04/08 12:24:36 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  185. 16/04/08 12:24:36 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (192.168.1.55:42124) with ID 1
  186. 16/04/08 12:24:37 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  187. 16/04/08 12:24:37 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:45035 with 517.4 MB RAM, BlockManagerId(1, 192.168.1.55, 45035)
  188. 16/04/08 12:24:38 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  189. 16/04/08 12:24:38 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (192.168.1.55:42125) with ID 2
  190. 16/04/08 12:24:39 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  191. 16/04/08 12:24:39 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:38849 with 517.4 MB RAM, BlockManagerId(2, 192.168.1.55, 38849)
  192. 16/04/08 12:24:40 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  193. 16/04/08 12:24:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 88.5 KB, free 88.5 KB)
  194. 16/04/08 12:24:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.6 KB, free 108.1 KB)
  195. 16/04/08 12:24:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.55:58739 (size: 19.6 KB, free: 517.4 MB)
  196. 16/04/08 12:24:40 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:10
  197. 16/04/08 12:24:41 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  198. 16/04/08 12:24:41 ERROR ApplicationMaster: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
  199. org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
  200. at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
  201. at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
  202. at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
  203. at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
  204. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  205. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  206. at scala.Option.getOrElse(Option.scala:120)
  207. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  208. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  209. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  210. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  211. at scala.Option.getOrElse(Option.scala:120)
  212. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  213. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  214. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  215. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  216. at scala.Option.getOrElse(Option.scala:120)
  217. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  218. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  219. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  220. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  221. at scala.Option.getOrElse(Option.scala:120)
  222. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  223. at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
  224. at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
  225. at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
  226. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
  227. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
  228. at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
  229. at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
  230. at com.mydomain.spark.wordcount.ScalaWordCount$.main(WordCount.scala:11)
  231. at com.mydomain.spark.wordcount.ScalaWordCount.main(WordCount.scala)
  232. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  233. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  234. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  235. at java.lang.reflect.Method.invoke(Method.java:497)
  236. at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
  237. 16/04/08 12:24:41 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
  238. 16/04/08 12:24:41 INFO SparkContext: Invoking stop() from shutdown hook
  239. 16/04/08 12:24:41 INFO SparkUI: Stopped Spark web UI at http://192.168.1.55:45602
  240. 16/04/08 12:24:41 INFO YarnClusterSchedulerBackend: Shutting down all executors
  241. 16/04/08 12:24:41 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
  242. 16/04/08 12:24:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
  243. 16/04/08 12:24:41 INFO MemoryStore: MemoryStore cleared
  244. 16/04/08 12:24:41 INFO BlockManager: BlockManager stopped
  245. 16/04/08 12:24:41 INFO BlockManagerMaster: BlockManagerMaster stopped
  246. 16/04/08 12:24:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
  247. 16/04/08 12:24:42 INFO SparkContext: Successfully stopped SparkContext
  248. 16/04/08 12:24:42 INFO ShutdownHookManager: Shutdown hook called
  249. 16/04/08 12:24:42 INFO ShutdownHookManager: Deleting directory /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/spark-942f340a-86bf-424b-8f05-05bfa401bbae
  250. 16/04/08 12:24:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
  251. 16/04/08 12:24:42 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  252. 16/04/08 12:24:43 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  253. 16/04/08 12:24:44 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  254. 16/04/08 12:24:44 INFO Client:
  255. client token: N/A
  256. diagnostics: N/A
  257. ApplicationMaster host: N/A
  258. ApplicationMaster RPC port: -1
  259. queue: default
  260. start time: 1460111026395
  261. final status: UNDEFINED
  262. tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
  263. user: hduser
  264. 16/04/08 12:24:45 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  265. 16/04/08 12:24:46 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  266. 16/04/08 12:24:46 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
  267. 16/04/08 12:24:47 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  268. 16/04/08 12:24:48 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  269. 16/04/08 12:24:49 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1460107053907_0003_000002
  270. 16/04/08 12:24:49 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  271. 16/04/08 12:24:50 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  272. 16/04/08 12:24:51 INFO SecurityManager: Changing view acls to: hduser
  273. 16/04/08 12:24:51 INFO SecurityManager: Changing modify acls to: hduser
  274. 16/04/08 12:24:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
  275. 16/04/08 12:24:51 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  276. 16/04/08 12:24:52 INFO ApplicationMaster: Starting the user application in a separate Thread
  277. 16/04/08 12:24:52 INFO ApplicationMaster: Waiting for spark context initialization
  278. 16/04/08 12:24:52 INFO ApplicationMaster: Waiting for spark context initialization ...
  279. 16/04/08 12:24:52 INFO SparkContext: Running Spark version 1.6.1
  280. 16/04/08 12:24:52 WARN Utils: Your hostname, debian resolves to a loopback address: 127.0.0.1; using 192.168.1.55 instead (on interface eth0)
  281. 16/04/08 12:24:52 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
  282. 16/04/08 12:24:52 INFO SecurityManager: Changing view acls to: hduser
  283. 16/04/08 12:24:52 INFO SecurityManager: Changing modify acls to: hduser
  284. 16/04/08 12:24:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
  285. 16/04/08 12:24:52 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  286. 16/04/08 12:24:52 INFO Utils: Successfully started service 'sparkDriver' on port 49388.
  287. 16/04/08 12:24:53 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  288. 16/04/08 12:24:54 INFO Slf4jLogger: Slf4jLogger started
  289. 16/04/08 12:24:54 INFO Remoting: Starting remoting
  290. 16/04/08 12:24:54 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  291. 16/04/08 12:24:55 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.55:57307]
  292. 16/04/08 12:24:55 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57307.
  293. 16/04/08 12:24:55 INFO SparkEnv: Registering MapOutputTracker
  294. 16/04/08 12:24:55 INFO SparkEnv: Registering BlockManagerMaster
  295. 16/04/08 12:24:55 INFO DiskBlockManager: Created local directory at /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/blockmgr-f6372c9e-6fea-48ad-a2fe-6e660f4859ac
  296. 16/04/08 12:24:55 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  297. 16/04/08 12:24:55 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
  298. 16/04/08 12:24:55 INFO SparkEnv: Registering OutputCommitCoordinator
  299. 16/04/08 12:24:56 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
  300. 16/04/08 12:24:56 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  301. 16/04/08 12:24:57 INFO Utils: Successfully started service 'SparkUI' on port 37440.
  302. 16/04/08 12:24:57 INFO SparkUI: Started SparkUI at http://192.168.1.55:37440
  303. 16/04/08 12:24:57 INFO YarnClusterScheduler: Created YarnClusterScheduler
  304. 16/04/08 12:24:57 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  305. 16/04/08 12:24:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35746.
  306. 16/04/08 12:24:57 INFO NettyBlockTransferService: Server created on 35746
  307. 16/04/08 12:24:57 INFO BlockManagerMaster: Trying to register BlockManager
  308. 16/04/08 12:24:57 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:35746 with 517.4 MB RAM, BlockManagerId(driver, 192.168.1.55, 35746)
  309. 16/04/08 12:24:57 INFO BlockManagerMaster: Registered BlockManager
  310. 16/04/08 12:24:58 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
  311. 16/04/08 12:24:58 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@192.168.1.55:49388)
  312. 16/04/08 12:24:58 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
  313. 16/04/08 12:24:59 INFO YarnRMClient: Registering the ApplicationMaster
  314. 16/04/08 12:24:59 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
  315. 16/04/08 12:24:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
  316. 16/04/08 12:24:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
  317. 16/04/08 12:24:59 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  318. 16/04/08 12:24:59 INFO Client:
  319. client token: N/A
  320. diagnostics: N/A
  321. ApplicationMaster host: 192.168.1.55
  322. ApplicationMaster RPC port: 0
  323. queue: default
  324. start time: 1460111026395
  325. final status: UNDEFINED
  326. tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
  327. user: hduser
  328. 16/04/08 12:24:59 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
  329. 16/04/08 12:25:00 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  330. 16/04/08 12:25:01 INFO AMRMClientImpl: Received new token for : localhost:45967
  331. 16/04/08 12:25:01 INFO YarnAllocator: Launching container container_1460107053907_0003_02_000002 for on host localhost
  332. 16/04/08 12:25:01 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.1.55:49388, executorHostname: localhost
  333. 16/04/08 12:25:01 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
  334. 16/04/08 12:25:01 INFO ExecutorRunnable: Starting Executor Container
  335. 16/04/08 12:25:01 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
  336. 16/04/08 12:25:01 INFO ExecutorRunnable: Setting up ContainerLaunchContext
  337. 16/04/08 12:25:01 INFO ExecutorRunnable: Preparing Local resources
  338. 16/04/08 12:25:01 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  339. 16/04/08 12:25:02 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
  340. 16/04/08 12:25:02 INFO ExecutorRunnable:
  341. ===============================================================================
  342. YARN executor launch context:
  343. env:
  344. CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  345. SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000002/hduser/stderr?start=-4096
  346. SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
  347. SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
  348. SPARK_USER -> hduser
  349. SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
  350. SPARK_YARN_MODE -> true
  351. SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
  352. SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000002/hduser/stdout?start=-4096
  353. SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  354.  
  355. command:
  356. {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49388' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.1.55:49388 --executor-id 1 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
  357. ===============================================================================
  358.  
  359. 16/04/08 12:25:02 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
  360. 16/04/08 12:25:02 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  361. 16/04/08 12:25:03 INFO YarnAllocator: Launching container container_1460107053907_0003_02_000003 for on host localhost
  362. 16/04/08 12:25:03 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.1.55:49388, executorHostname: localhost
  363. 16/04/08 12:25:03 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
  364. 16/04/08 12:25:03 INFO ExecutorRunnable: Starting Executor Container
  365. 16/04/08 12:25:03 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
  366. 16/04/08 12:25:03 INFO ExecutorRunnable: Setting up ContainerLaunchContext
  367. 16/04/08 12:25:03 INFO ExecutorRunnable: Preparing Local resources
  368. 16/04/08 12:25:03 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
  369. 16/04/08 12:25:03 INFO ExecutorRunnable:
  370. ===============================================================================
  371. YARN executor launch context:
  372. env:
  373. CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
  374. SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000003/hduser/stderr?start=-4096
  375. SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
  376. SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
  377. SPARK_USER -> hduser
  378. SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
  379. SPARK_YARN_MODE -> true
  380. SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
  381. SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000003/hduser/stdout?start=-4096
  382. SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  383.  
  384. command:
  385. {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49388' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.1.55:49388 --executor-id 2 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
  386. ===============================================================================
  387.  
  388. 16/04/08 12:25:03 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
  389. 16/04/08 12:25:03 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  390. 16/04/08 12:25:05 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  391. 16/04/08 12:25:06 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  392. 16/04/08 12:25:06 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
  393. 16/04/08 12:25:07 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  394. 16/04/08 12:25:08 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  395. 16/04/08 12:25:09 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  396. 16/04/08 12:25:10 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  397. 16/04/08 12:25:11 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  398. 16/04/08 12:25:12 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  399. 16/04/08 12:25:13 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  400. 16/04/08 12:25:14 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  401. 16/04/08 12:25:15 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  402. 16/04/08 12:25:16 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  403. 16/04/08 12:25:17 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  404. 16/04/08 12:25:18 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  405. 16/04/08 12:25:19 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  406. 16/04/08 12:25:20 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  407. 16/04/08 12:25:21 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  408. 16/04/08 12:25:22 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  409. 16/04/08 12:25:23 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  410. 16/04/08 12:25:24 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  411. 16/04/08 12:25:26 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  412. 16/04/08 12:25:27 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  413. 16/04/08 12:25:27 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
  414. 16/04/08 12:25:27 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
  415. 16/04/08 12:25:28 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  416. 16/04/08 12:25:29 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  417. 16/04/08 12:25:30 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  418. 16/04/08 12:25:31 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  419. 16/04/08 12:25:32 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  420. 16/04/08 12:25:33 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  421. 16/04/08 12:25:33 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 88.5 KB, free 88.5 KB)
  422. 16/04/08 12:25:34 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  423. 16/04/08 12:25:35 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.6 KB, free 108.1 KB)
  424. 16/04/08 12:25:35 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.55:35746 (size: 19.6 KB, free: 517.4 MB)
  425. 16/04/08 12:25:35 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:10
  426. 16/04/08 12:25:35 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  427. 16/04/08 12:25:36 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  428. 16/04/08 12:25:37 ERROR ApplicationMaster: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
  429. org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
  430. at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
  431. at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
  432. at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
  433. at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
  434. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  435. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  436. at scala.Option.getOrElse(Option.scala:120)
  437. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  438. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  439. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  440. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  441. at scala.Option.getOrElse(Option.scala:120)
  442. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  443. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  444. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  445. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  446. at scala.Option.getOrElse(Option.scala:120)
  447. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  448. at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  449. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
  450. at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
  451. at scala.Option.getOrElse(Option.scala:120)
  452. at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
  453. at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
  454. at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
  455. at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
  456. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
  457. at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
  458. at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
  459. at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
  460. at com.mydomain.spark.wordcount.ScalaWordCount$.main(WordCount.scala:11)
  461. at com.mydomain.spark.wordcount.ScalaWordCount.main(WordCount.scala)
  462. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  463. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  464. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  465. at java.lang.reflect.Method.invoke(Method.java:497)
  466. at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
  467. 16/04/08 12:25:37 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
  468. 16/04/08 12:25:37 INFO SparkContext: Invoking stop() from shutdown hook
  469. 16/04/08 12:25:37 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
  470. 16/04/08 12:25:37 INFO SparkUI: Stopped Spark web UI at http://192.168.1.55:37440
  471. 16/04/08 12:25:38 INFO YarnClusterSchedulerBackend: Shutting down all executors
  472. 16/04/08 12:25:38 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
  473. 16/04/08 12:25:38 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
  474. 16/04/08 12:25:38 INFO MemoryStore: MemoryStore cleared
  475. 16/04/08 12:25:38 INFO BlockManager: BlockManager stopped
  476. 16/04/08 12:25:38 INFO BlockManagerMaster: BlockManagerMaster stopped
  477. 16/04/08 12:25:38 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
  478. 16/04/08 12:25:38 INFO SparkContext: Successfully stopped SparkContext
  479. 16/04/08 12:25:38 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
  480. 16/04/08 12:25:38 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
  481. 16/04/08 12:25:38 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
  482. 16/04/08 12:25:38 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
  483. 16/04/08 12:25:38 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1460107053907_0003
  484. 16/04/08 12:25:38 INFO Client: Application report for application_1460107053907_0003 (state: FINISHED)
  485. 16/04/08 12:25:38 INFO Client:
  486. client token: N/A
  487. diagnostics: N/A
  488. ApplicationMaster host: 192.168.1.55
  489. ApplicationMaster RPC port: 0
  490. queue: default
  491. start time: 1460111026395
  492. final status: FAILED
  493. tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
  494. user: hduser
  495. 16/04/08 12:25:38 INFO Client: Deleting staging directory .sparkStaging/application_1460107053907_0003
  496. 16/04/08 12:25:39 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
  497. 16/04/08 12:25:39 INFO ShutdownHookManager: Shutdown hook called
  498. 16/04/08 12:25:39 INFO ShutdownHookManager: Deleting directory /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/spark-abb370f1-4475-44bb-8c7f-3e68e0bb422e
  499. 16/04/08 12:25:39 INFO ShutdownHookManager: Shutdown hook called
  500. 16/04/08 12:25:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-46d2564e-43c2-4833-a682-91ff617f65e5
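Note: both ApplicationMaster attempts in this log fail for the same reason. The user class reads hdfs://localhost:9000/home/hduser/inputfile.txt, and that path does not exist in HDFS; the local file under /home/hduser was apparently never uploaded. The stack trace only names the user class (com.mydomain.spark.wordcount.ScalaWordCount, WordCount.scala lines 10-11) and its source is not part of this paste, so the following is only a minimal sketch of what such a word-count job typically looks like, with the package name and paths taken from the log or assumed:

    // Hypothetical reconstruction of the user class named in the stack trace;
    // the real WordCount.scala is not included in this paste.
    package com.mydomain.spark.wordcount

    import org.apache.spark.{SparkConf, SparkContext}

    object ScalaWordCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ScalaWordCount"))
        // textFile at WordCount.scala:10 -- a path without a scheme is resolved
        // against fs.defaultFS (hdfs://localhost:9000 here), not the local
        // filesystem, which is why the job looks for the file in HDFS.
        val lines = sc.textFile("/home/hduser/inputfile.txt")
        // reduceByKey at WordCount.scala:11 -- computing its partitions triggers
        // the input listing, which is where InvalidInputException is thrown.
        val counts = lines
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.saveAsTextFile("/home/hduser/wordcount-output") // assumed output path
        sc.stop()
      }
    }

If that is roughly what the job does, the usual fix is to copy the input into HDFS before submitting (for example, hdfs dfs -mkdir -p /home/hduser followed by hdfs dfs -put /home/hduser/inputfile.txt /home/hduser/), or to point textFile at a path that already exists in HDFS.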