16/04/08 12:23:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/08 12:23:32 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/04/08 12:23:33 INFO Client: Requesting a new application from cluster with 1 NodeManagers
16/04/08 12:23:33 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/04/08 12:23:33 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
16/04/08 12:23:33 INFO Client: Setting up container launch context for our AM
16/04/08 12:23:33 INFO Client: Setting up the launch environment for our AM container
16/04/08 12:23:34 INFO Client: Preparing resources for our AM container
16/04/08 12:23:36 INFO Client: Uploading resource file:/usr/local/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar
16/04/08 12:23:44 INFO Client: Uploading resource file:/usr/local/sparkapps/WordCount/target/scala-2.10/scalawordcount_2.10-1.0.jar -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar
16/04/08 12:23:45 INFO Client: Uploading resource file:/tmp/spark-46d2564e-43c2-4833-a682-91ff617f65e5/__spark_conf__2355479738370329692.zip -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/__spark_conf__2355479738370329692.zip
16/04/08 12:23:45 INFO SecurityManager: Changing view acls to: hduser
16/04/08 12:23:45 INFO SecurityManager: Changing modify acls to: hduser
16/04/08 12:23:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
16/04/08 12:23:46 INFO Client: Submitting application 3 to ResourceManager
16/04/08 12:23:46 INFO YarnClientImpl: Submitted application application_1460107053907_0003
16/04/08 12:23:47 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:47 INFO Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1460111026395
     final status: UNDEFINED
     tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
     user: hduser
16/04/08 12:23:48 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:49 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:50 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:51 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:52 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:53 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
16/04/08 12:23:53 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:54 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:55 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1460107053907_0003_000001
16/04/08 12:23:55 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:56 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:57 INFO SecurityManager: Changing view acls to: hduser
16/04/08 12:23:57 INFO SecurityManager: Changing modify acls to: hduser
16/04/08 12:23:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
16/04/08 12:23:57 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:58 INFO ApplicationMaster: Starting the user application in a separate Thread
16/04/08 12:23:58 INFO ApplicationMaster: Waiting for spark context initialization
16/04/08 12:23:58 INFO ApplicationMaster: Waiting for spark context initialization ...
16/04/08 12:23:58 INFO SparkContext: Running Spark version 1.6.1
16/04/08 12:23:58 WARN Utils: Your hostname, debian resolves to a loopback address: 127.0.0.1; using 192.168.1.55 instead (on interface eth0)
16/04/08 12:23:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
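[Editor's note: the two warnings above mean the machine's hostname (debian) resolves to 127.0.0.1 in /etc/hosts, so Spark falls back to the interface address it detects. If that fallback is not the address you want, one way to pin it, assuming 192.168.1.55 (the address reported in the log) is the intended interface, is a spark-env.sh config fragment:]

```shell
# conf/spark-env.sh -- pin Spark's bind address to a specific interface.
# 192.168.1.55 is taken from the log above; substitute your own address.
export SPARK_LOCAL_IP=192.168.1.55
```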
16/04/08 12:23:58 INFO SecurityManager: Changing view acls to: hduser
16/04/08 12:23:58 INFO SecurityManager: Changing modify acls to: hduser
16/04/08 12:23:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
16/04/08 12:23:58 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:23:59 INFO Utils: Successfully started service 'sparkDriver' on port 49937.
16/04/08 12:23:59 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:00 INFO Slf4jLogger: Slf4jLogger started
16/04/08 12:24:00 INFO Remoting: Starting remoting
16/04/08 12:24:00 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:39971]
16/04/08 12:24:01 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39971.
16/04/08 12:24:01 INFO SparkEnv: Registering MapOutputTracker
16/04/08 12:24:01 INFO SparkEnv: Registering BlockManagerMaster
16/04/08 12:24:01 INFO DiskBlockManager: Created local directory at /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/blockmgr-c5dff622-033d-4462-b0ca-74291d4b631f
16/04/08 12:24:01 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
16/04/08 12:24:01 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/08 12:24:02 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:02 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/04/08 12:24:03 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:03 INFO Utils: Successfully started service 'SparkUI' on port 45602.
16/04/08 12:24:03 INFO SparkUI: Started SparkUI at http://192.168.1.55:45602
16/04/08 12:24:03 INFO YarnClusterScheduler: Created YarnClusterScheduler
16/04/08 12:24:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58739.
16/04/08 12:24:03 INFO NettyBlockTransferService: Server created on 58739
16/04/08 12:24:03 INFO BlockManagerMaster: Trying to register BlockManager
16/04/08 12:24:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:58739 with 517.4 MB RAM, BlockManagerId(driver, 192.168.1.55, 58739)
16/04/08 12:24:03 INFO BlockManagerMaster: Registered BlockManager
16/04/08 12:24:04 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:04 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://[email protected]:49937)
16/04/08 12:24:04 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
16/04/08 12:24:04 INFO YarnRMClient: Registering the ApplicationMaster
16/04/08 12:24:05 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:05 INFO Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.1.55
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1460111026395
     final status: UNDEFINED
     tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
     user: hduser
16/04/08 12:24:05 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/04/08 12:24:05 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/04/08 12:24:05 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/04/08 12:24:05 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
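[Editor's note: the 1408 MB figure in the allocation messages above is the 1024 MB AM/executor heap plus the YARN memory overhead, which in Spark 1.6 defaults to max(384 MB, 10% of the requested memory). A minimal sketch of that arithmetic, assuming the 1.6.x defaults:]

```python
# Reproduce the container sizes reported in the log.
# Spark 1.6 default: overhead = max(384 MB, 10% of the requested heap).
MIN_OVERHEAD_MB = 384
OVERHEAD_FRACTION = 0.10

def container_size_mb(requested_mb: int) -> int:
    """Total YARN container size for a given Spark heap request, in MB."""
    overhead = max(MIN_OVERHEAD_MB, int(OVERHEAD_FRACTION * requested_mb))
    return requested_mb + overhead

# 1024 MB heap -> 1024 + max(384, 102) = 1408 MB, matching the log.
print(container_size_mb(1024))
```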
16/04/08 12:24:06 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:06 INFO AMRMClientImpl: Received new token for : localhost:45967
16/04/08 12:24:06 INFO YarnAllocator: Launching container container_1460107053907_0003_01_000002 for on host localhost
16/04/08 12:24:06 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:49937, executorHostname: localhost
16/04/08 12:24:06 INFO ExecutorRunnable: Starting Executor Container
16/04/08 12:24:06 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/04/08 12:24:06 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/04/08 12:24:06 INFO ExecutorRunnable: Setting up ContainerLaunchContext
16/04/08 12:24:06 INFO ExecutorRunnable: Preparing Local resources
16/04/08 12:24:07 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:07 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
16/04/08 12:24:07 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000002/hduser/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
    SPARK_USER -> hduser
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
    SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000002/hduser/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49937' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:49937 --executor-id 1 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
16/04/08 12:24:07 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
16/04/08 12:24:08 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:08 INFO YarnAllocator: Launching container container_1460107053907_0003_01_000003 for on host localhost
16/04/08 12:24:08 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:49937, executorHostname: localhost
16/04/08 12:24:08 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/04/08 12:24:08 INFO ExecutorRunnable: Starting Executor Container
16/04/08 12:24:08 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/04/08 12:24:08 INFO ExecutorRunnable: Setting up ContainerLaunchContext
16/04/08 12:24:08 INFO ExecutorRunnable: Preparing Local resources
16/04/08 12:24:08 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
16/04/08 12:24:08 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000003/hduser/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
    SPARK_USER -> hduser
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
    SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_01_000003/hduser/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49937' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:49937 --executor-id 2 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
16/04/08 12:24:08 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
16/04/08 12:24:09 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:10 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:11 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:11 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
16/04/08 12:24:12 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:13 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:14 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:15 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:16 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:17 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:18 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:19 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:20 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:21 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:22 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:23 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:24 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:25 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:26 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:27 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:28 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:29 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:30 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:31 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:33 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:33 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/04/08 12:24:33 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/04/08 12:24:34 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:35 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:36 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:36 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (192.168.1.55:42124) with ID 1
16/04/08 12:24:37 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:37 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:45035 with 517.4 MB RAM, BlockManagerId(1, 192.168.1.55, 45035)
16/04/08 12:24:38 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:38 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (192.168.1.55:42125) with ID 2
16/04/08 12:24:39 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:39 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:38849 with 517.4 MB RAM, BlockManagerId(2, 192.168.1.55, 38849)
16/04/08 12:24:40 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 88.5 KB, free 88.5 KB)
16/04/08 12:24:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.6 KB, free 108.1 KB)
16/04/08 12:24:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.55:58739 (size: 19.6 KB, free: 517.4 MB)
16/04/08 12:24:40 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:10
16/04/08 12:24:41 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:41 ERROR ApplicationMaster: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
    at com.mydomain.spark.wordcount.ScalaWordCount$.main(WordCount.scala:11)
    at com.mydomain.spark.wordcount.ScalaWordCount.main(WordCount.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
16/04/08 12:24:41 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
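[Editor's note: the failure above is the root cause of everything that follows. The job passed /home/hduser/inputfile.txt to textFile (WordCount.scala:10), and because no scheme was given, Spark resolved it against the default filesystem, hdfs://localhost:9000, where no such file exists. Assuming the input actually sits at that path on the local disk, one fix is to copy it into HDFS before resubmitting; the paths below mirror the ones in the log and should be adjusted to your setup.]

```shell
# Create the target directory in HDFS and upload the local input file.
hdfs dfs -mkdir -p /home/hduser
hdfs dfs -put /home/hduser/inputfile.txt /home/hduser/inputfile.txt
# Verify the job can now see it:
hdfs dfs -ls hdfs://localhost:9000/home/hduser/inputfile.txt
```

Alternatively the application could read the path with an explicit file:// URI, but in yarn-cluster mode that only works if every node can see the same local path.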
16/04/08 12:24:41 INFO SparkContext: Invoking stop() from shutdown hook
16/04/08 12:24:41 INFO SparkUI: Stopped Spark web UI at http://192.168.1.55:45602
16/04/08 12:24:41 INFO YarnClusterSchedulerBackend: Shutting down all executors
16/04/08 12:24:41 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
16/04/08 12:24:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/08 12:24:41 INFO MemoryStore: MemoryStore cleared
16/04/08 12:24:41 INFO BlockManager: BlockManager stopped
16/04/08 12:24:41 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/08 12:24:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/08 12:24:42 INFO SparkContext: Successfully stopped SparkContext
16/04/08 12:24:42 INFO ShutdownHookManager: Shutdown hook called
16/04/08 12:24:42 INFO ShutdownHookManager: Deleting directory /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/spark-942f340a-86bf-424b-8f05-05bfa401bbae
16/04/08 12:24:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/04/08 12:24:42 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:43 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:44 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:44 INFO Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1460111026395
     final status: UNDEFINED
     tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
     user: hduser
16/04/08 12:24:45 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:46 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:46 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
16/04/08 12:24:47 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:48 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:49 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1460107053907_0003_000002
16/04/08 12:24:49 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:50 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:51 INFO SecurityManager: Changing view acls to: hduser
16/04/08 12:24:51 INFO SecurityManager: Changing modify acls to: hduser
16/04/08 12:24:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
16/04/08 12:24:51 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:52 INFO ApplicationMaster: Starting the user application in a separate Thread
16/04/08 12:24:52 INFO ApplicationMaster: Waiting for spark context initialization
16/04/08 12:24:52 INFO ApplicationMaster: Waiting for spark context initialization ...
16/04/08 12:24:52 INFO SparkContext: Running Spark version 1.6.1
16/04/08 12:24:52 WARN Utils: Your hostname, debian resolves to a loopback address: 127.0.0.1; using 192.168.1.55 instead (on interface eth0)
16/04/08 12:24:52 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/04/08 12:24:52 INFO SecurityManager: Changing view acls to: hduser
16/04/08 12:24:52 INFO SecurityManager: Changing modify acls to: hduser
16/04/08 12:24:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
16/04/08 12:24:52 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:52 INFO Utils: Successfully started service 'sparkDriver' on port 49388.
16/04/08 12:24:53 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:54 INFO Slf4jLogger: Slf4jLogger started
16/04/08 12:24:54 INFO Remoting: Starting remoting
16/04/08 12:24:54 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:55 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:57307]
16/04/08 12:24:55 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57307.
16/04/08 12:24:55 INFO SparkEnv: Registering MapOutputTracker
16/04/08 12:24:55 INFO SparkEnv: Registering BlockManagerMaster
16/04/08 12:24:55 INFO DiskBlockManager: Created local directory at /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/blockmgr-f6372c9e-6fea-48ad-a2fe-6e660f4859ac
16/04/08 12:24:55 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:55 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
16/04/08 12:24:55 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/08 12:24:56 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/04/08 12:24:56 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:57 INFO Utils: Successfully started service 'SparkUI' on port 37440.
16/04/08 12:24:57 INFO SparkUI: Started SparkUI at http://192.168.1.55:37440
16/04/08 12:24:57 INFO YarnClusterScheduler: Created YarnClusterScheduler
16/04/08 12:24:57 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35746.
16/04/08 12:24:57 INFO NettyBlockTransferService: Server created on 35746
16/04/08 12:24:57 INFO BlockManagerMaster: Trying to register BlockManager
16/04/08 12:24:57 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.55:35746 with 517.4 MB RAM, BlockManagerId(driver, 192.168.1.55, 35746)
16/04/08 12:24:57 INFO BlockManagerMaster: Registered BlockManager
16/04/08 12:24:58 INFO Client: Application report for application_1460107053907_0003 (state: ACCEPTED)
16/04/08 12:24:58 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://[email protected]:49388)
16/04/08 12:24:58 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
16/04/08 12:24:59 INFO YarnRMClient: Registering the ApplicationMaster
16/04/08 12:24:59 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/04/08 12:24:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/04/08 12:24:59 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/04/08 12:24:59 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:24:59 INFO Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.1.55
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1460111026395
     final status: UNDEFINED
     tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
     user: hduser
16/04/08 12:24:59 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/04/08 12:25:00 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:25:01 INFO AMRMClientImpl: Received new token for : localhost:45967
16/04/08 12:25:01 INFO YarnAllocator: Launching container container_1460107053907_0003_02_000002 for on host localhost
16/04/08 12:25:01 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:49388, executorHostname: localhost
16/04/08 12:25:01 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/04/08 12:25:01 INFO ExecutorRunnable: Starting Executor Container
16/04/08 12:25:01 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/04/08 12:25:01 INFO ExecutorRunnable: Setting up ContainerLaunchContext
16/04/08 12:25:01 INFO ExecutorRunnable: Preparing Local resources
16/04/08 12:25:01 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
16/04/08 12:25:02 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
16/04/08 12:25:02 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000002/hduser/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
    SPARK_USER -> hduser
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
    SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000002/hduser/stdout?start=-4096
- SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
- command:
- {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49388' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:49388 --executor-id 1 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
- ===============================================================================
- 16/04/08 12:25:02 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
- 16/04/08 12:25:02 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:03 INFO YarnAllocator: Launching container container_1460107053907_0003_02_000003 for on host localhost
- 16/04/08 12:25:03 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:49388, executorHostname: localhost
- 16/04/08 12:25:03 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
- 16/04/08 12:25:03 INFO ExecutorRunnable: Starting Executor Container
- 16/04/08 12:25:03 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
- 16/04/08 12:25:03 INFO ExecutorRunnable: Setting up ContainerLaunchContext
- 16/04/08 12:25:03 INFO ExecutorRunnable: Preparing Local resources
- 16/04/08 12:25:03 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar" } size: 5382 timestamp: 1460111024985 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "localhost" port: 9000 file: "/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1460111024731 type: FILE visibility: PRIVATE)
- 16/04/08 12:25:03 INFO ExecutorRunnable:
- ===============================================================================
- YARN executor launch context:
- env:
- CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
- SPARK_LOG_URL_STDERR -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000003/hduser/stderr?start=-4096
- SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1460107053907_0003
- SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038,5382
- SPARK_USER -> hduser
- SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
- SPARK_YARN_MODE -> true
- SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1460111024731,1460111024985
- SPARK_LOG_URL_STDOUT -> http://localhost:8042/node/containerlogs/container_1460107053907_0003_02_000003/hduser/stdout?start=-4096
- SPARK_YARN_CACHE_FILES -> hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar,hdfs://localhost:9000/user/hduser/.sparkStaging/application_1460107053907_0003/scalawordcount_2.10-1.0.jar#__app__.jar
- command:
- {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=49388' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:49388 --executor-id 2 --hostname localhost --cores 1 --app-id application_1460107053907_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
- ===============================================================================
- 16/04/08 12:25:03 INFO ContainerManagementProtocolProxy: Opening proxy : localhost:45967
- 16/04/08 12:25:03 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:05 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:06 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:06 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
- 16/04/08 12:25:07 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:08 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:09 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:10 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:11 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:12 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:13 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:14 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:15 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:16 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:17 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:18 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:19 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:20 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:21 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:22 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:23 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:24 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:26 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:27 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:27 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
- 16/04/08 12:25:27 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
- 16/04/08 12:25:28 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:29 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:30 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:31 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:32 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:33 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:33 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 88.5 KB, free 88.5 KB)
- 16/04/08 12:25:34 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:35 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.6 KB, free 108.1 KB)
- 16/04/08 12:25:35 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.55:35746 (size: 19.6 KB, free: 517.4 MB)
- 16/04/08 12:25:35 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:10
- 16/04/08 12:25:35 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:36 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:37 ERROR ApplicationMaster: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
- org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt
- at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
- at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
- at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
- at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
- at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
- at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
- at scala.Option.getOrElse(Option.scala:120)
- at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
- at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
- at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
- at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
- at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
- at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
- at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
- at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
- at com.mydomain.spark.wordcount.ScalaWordCount$.main(WordCount.scala:11)
- at com.mydomain.spark.wordcount.ScalaWordCount.main(WordCount.scala)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:497)
- at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
- 16/04/08 12:25:37 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
- 16/04/08 12:25:37 INFO SparkContext: Invoking stop() from shutdown hook
- 16/04/08 12:25:37 INFO Client: Application report for application_1460107053907_0003 (state: RUNNING)
- 16/04/08 12:25:37 INFO SparkUI: Stopped Spark web UI at http://192.168.1.55:37440
- 16/04/08 12:25:38 INFO YarnClusterSchedulerBackend: Shutting down all executors
- 16/04/08 12:25:38 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
- 16/04/08 12:25:38 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
- 16/04/08 12:25:38 INFO MemoryStore: MemoryStore cleared
- 16/04/08 12:25:38 INFO BlockManager: BlockManager stopped
- 16/04/08 12:25:38 INFO BlockManagerMaster: BlockManagerMaster stopped
- 16/04/08 12:25:38 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
- 16/04/08 12:25:38 INFO SparkContext: Successfully stopped SparkContext
- 16/04/08 12:25:38 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
- 16/04/08 12:25:38 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/inputfile.txt)
- 16/04/08 12:25:38 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
- 16/04/08 12:25:38 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
- 16/04/08 12:25:38 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1460107053907_0003
- 16/04/08 12:25:38 INFO Client: Application report for application_1460107053907_0003 (state: FINISHED)
- 16/04/08 12:25:38 INFO Client:
- client token: N/A
- diagnostics: N/A
- ApplicationMaster host: 192.168.1.55
- ApplicationMaster RPC port: 0
- queue: default
- start time: 1460111026395
- final status: FAILED
- tracking URL: http://localhost:8088/proxy/application_1460107053907_0003/
- user: hduser
- 16/04/08 12:25:38 INFO Client: Deleting staging directory .sparkStaging/application_1460107053907_0003
- 16/04/08 12:25:39 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
- 16/04/08 12:25:39 INFO ShutdownHookManager: Shutdown hook called
- 16/04/08 12:25:39 INFO ShutdownHookManager: Deleting directory /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1460107053907_0003/spark-abb370f1-4475-44bb-8c7f-3e68e0bb422e
- 16/04/08 12:25:39 INFO ShutdownHookManager: Shutdown hook called
- 16/04/08 12:25:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-46d2564e-43c2-4833-a682-91ff617f65e5
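The failure in this log is the `InvalidInputException` at 12:25:37: the job passed `/home/hduser/inputfile.txt` to `textFile`, and because the default filesystem is `hdfs://localhost:9000`, Spark looked for that path in HDFS, not on the local disk, and found nothing. A minimal sketch of a fix is below — it stages the local file into HDFS at the path the job expects, then re-submits. The jar path and main class are taken from the log above; the local file location and the exact `spark-submit` flags are assumptions, not verified against this setup.

```shell
# Create the directory the job expects inside HDFS (not the local /home).
hdfs dfs -mkdir -p hdfs://localhost:9000/home/hduser

# Upload the local input file to that HDFS path (local path assumed).
hdfs dfs -put /home/hduser/inputfile.txt hdfs://localhost:9000/home/hduser/inputfile.txt

# Confirm the file is now visible to the cluster.
hdfs dfs -ls hdfs://localhost:9000/home/hduser

# Re-submit the job (class and jar from the log; flags assumed for
# Spark 1.6 in YARN cluster mode).
spark-submit \
  --class com.mydomain.spark.wordcount.ScalaWordCount \
  --master yarn \
  --deploy-mode cluster \
  /usr/local/sparkapps/WordCount/target/scala-2.10/scalawordcount_2.10-1.0.jar
```

Alternatively, the application could reference the file with an explicit local URI (`file:///home/hduser/inputfile.txt`), but in cluster mode that only works if the file exists on every executor node, so staging it in HDFS is usually the safer choice.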