[root@ip-172-31-10-160 lda]$ ~/bin/sbt run
[info] Loading project definition from /root/lda/lda/project/project
[info] Loading project definition from /root/lda/lda/project
[info] Set current project to org/lda (in build file:/root/lda/lda/)
[info] Running Main
14/05/04 01:55:44 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/05/04 01:55:44 INFO Remoting: Starting remoting
14/05/04 01:55:44 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@ip-172-31-10-160.ec2.internal:51789]
14/05/04 01:55:44 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@ip-172-31-10-160.ec2.internal:51789]
14/05/04 01:55:44 INFO spark.SparkEnv: Registering BlockManagerMaster
14/05/04 01:55:44 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140504015544-5752
14/05/04 01:55:44 INFO storage.MemoryStore: MemoryStore started with capacity 819.3 MB.
14/05/04 01:55:44 INFO network.ConnectionManager: Bound socket to port 53610 with id = ConnectionManagerId(ip-172-31-10-160.ec2.internal,53610)
14/05/04 01:55:44 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/05/04 01:55:44 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager ip-172-31-10-160.ec2.internal:53610 with 819.3 MB RAM
14/05/04 01:55:44 INFO storage.BlockManagerMaster: Registered BlockManager
14/05/04 01:55:44 INFO spark.HttpServer: Starting HTTP Server
14/05/04 01:55:44 INFO server.Server: jetty-7.6.8.v20121106
14/05/04 01:55:44 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:33786
14/05/04 01:55:44 INFO broadcast.HttpBroadcast: Broadcast server started at http://172.31.10.160:33786
14/05/04 01:55:44 INFO spark.SparkEnv: Registering MapOutputTracker
14/05/04 01:55:44 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1d1a3ac0-4d69-4c06-bf10-d218437d4240
14/05/04 01:55:44 INFO spark.HttpServer: Starting HTTP Server
14/05/04 01:55:44 INFO server.Server: jetty-7.6.8.v20121106
14/05/04 01:55:44 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:54526
14/05/04 01:55:45 INFO server.Server: jetty-7.6.8.v20121106
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage/rdd,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/stage,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/pool,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/environment,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/executors,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/metrics/json,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/static,null}
14/05/04 01:55:45 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/,null}
14/05/04 01:55:45 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
14/05/04 01:55:45 INFO ui.SparkUI: Started Spark Web UI at http://ip-172-31-10-160.ec2.internal:4040
14/05/04 01:55:45 INFO spark.SparkContext: Added JAR target/scala-2.10/org-lda_2.10-1.0.jar at http://172.31.10.160:54526/jars/org-lda_2.10-1.0.jar with timestamp 1399168545528
14/05/04 01:55:45 INFO client.AppClient$ClientActor: Connecting to master spark://ec2-54-86-18-95.compute-1.amazonaws.com:7077...
14/05/04 01:55:46 INFO storage.MemoryStore: ensureFreeSpace(32856) called with curMem=0, maxMem=859098316
14/05/04 01:55:46 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.1 KB, free 819.3 MB)
14/05/04 01:55:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/04 01:55:46 WARN snappy.LoadSnappy: Snappy native library not loaded
14/05/04 01:55:46 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/04 01:55:46 INFO spark.SparkContext: Starting job: count at LatentDirichletAllocation.scala:38
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Got job 0 (count at LatentDirichletAllocation.scala:38) with 1 output partitions (allowLocal=false)
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Final stage: Stage 0 (count at LatentDirichletAllocation.scala:38)
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Missing parents: List()
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at textFile at LatentDirichletAllocation.scala:37), which has no missing parents
14/05/04 01:55:46 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[1] at textFile at LatentDirichletAllocation.scala:37)
14/05/04 01:55:46 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/05/04 01:56:01 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/05/04 01:56:05 INFO client.AppClient$ClientActor: Connecting to master spark://ec2-54-86-18-95.compute-1.amazonaws.com:7077...
14/05/04 01:56:16 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/05/04 01:56:25 INFO client.AppClient$ClientActor: Connecting to master spark://ec2-54-86-18-95.compute-1.amazonaws.com:7077...
14/05/04 01:56:31 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/05/04 01:56:45 ERROR client.AppClient$ClientActor: All masters are unresponsive! Giving up.
14/05/04 01:56:45 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster looks dead, giving up.
14/05/04 01:56:45 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/05/04 01:56:45 INFO scheduler.DAGScheduler: Failed to run count at LatentDirichletAllocation.scala:38
[error] (run-main-0) org.apache.spark.SparkException: Job aborted: Spark cluster looks down
org.apache.spark.SparkException: Job aborted: Spark cluster looks down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[trace] Stack trace suppressed: run last compile:run for the full output.
14/05/04 01:56:45 INFO network.ConnectionManager: Selector thread was interrupted!
java.lang.RuntimeException: Nonzero exit code: 1
	at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 64 s, completed May 4, 2014 1:56:46 AM
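
What the log shows: the driver itself came up cleanly (remoting, BlockManager, and the web UI on port 4040 all started), but every "Connecting to master spark://ec2-54-86-18-95.compute-1.amazonaws.com:7077..." attempt went unanswered and no executors ever registered. That is why the scheduler kept warning "Initial job has not accepted any resources" before giving up with "All masters are unresponsive!" and aborting the count at LatentDirichletAllocation.scala:38. Typical causes: the standalone master is not actually running at that address, the master URL is wrong, or an EC2 security group is blocking port 7077 (and the driver's Akka port, 51789 here, which the master and workers must be able to reach back on).

The application source is not in the paste; a minimal sketch of what the driver setup presumably looks like, assuming the Spark 0.9.x-era API. The master URL and jar path are taken from the log; the input path is hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    object Main extends App {
      val conf = new SparkConf()
        // Master URL as it appears in the log; confirm a master is actually
        // running there and that port 7077 is reachable from this driver host.
        .setMaster("spark://ec2-54-86-18-95.compute-1.amazonaws.com:7077")
        .setAppName("lda")
        // Ships the application jar to executors; matches the
        // "Added JAR target/scala-2.10/org-lda_2.10-1.0.jar" log line.
        .setJars(Seq("target/scala-2.10/org-lda_2.10-1.0.jar"))

      val sc = new SparkContext(conf)
      // Mirrors the textFile/count at LatentDirichletAllocation.scala:37-38.
      val corpus = sc.textFile("s3n://some-bucket/corpus.txt") // hypothetical input path
      println(corpus.count())
      sc.stop()
    }

Before rerunning, check the standalone master's web UI (port 8080 by default) to confirm the master is alive and has registered workers with free memory. If it is, the remaining suspect is network access between this driver (ip-172-31-10-160) and the cluster, e.g. security group rules on 7077 and the driver's ephemeral ports.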