- 17/03/27 16:01:57 INFO storage.BlockManagerMaster: Registered BlockManager
- 17/03/27 16:01:57 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
- 17/03/27 16:01:58 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 231.0 KB, free 231.0 KB)
- 17/03/27 16:01:58 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.0 KB, free 251.0 KB)
- 17/03/27 16:01:58 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.1.0.12:41285 (size: 20.0 KB, free: 511.5 MB)
- 17/03/27 16:01:58 INFO spark.SparkContext: Created broadcast 0 from textFile at SparkWordCount.scala:30
- 17/03/27 16:01:59 INFO Configuration.deprecation: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
- 17/03/27 16:01:59 WARN httpclient.HttpMethodDirector: Unable to respond to any of these challenges: {keystone=Keystone uri="http://192.168.51.71:5000"}
- 17/03/27 16:02:02 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (spark-2c-2g-spark-small-datanode-0.novalocal:51857) with ID 0
- 17/03/27 16:02:02 INFO storage.BlockManagerMasterEndpoint: Registering block manager spark-2c-2g-spark-small-datanode-0.novalocal:56279 with 511.5 MB RAM, BlockManagerId(0, spark-2c-2g-spark-small-datanode-0.novalocal, 56279)
- Exception in thread "main" java.net.SocketTimeoutException: POST http://192.168.51.71:5000/v2.0/tokens/ failed on exception: java.net.SocketTimeoutException: Read timed out; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
- at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
- at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
- at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
- at org.apache.hadoop.fs.swift.http.ExceptionDiags.wrapWithMessage(ExceptionDiags.java:90)
- at org.apache.hadoop.fs.swift.http.ExceptionDiags.wrapException(ExceptionDiags.java:76)
- at org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1560)
- at org.apache.hadoop.fs.swift.http.SwiftRestClient.authenticate(SwiftRestClient.java:1170)
- at org.apache.hadoop.fs.swift.http.SwiftRestClient.authIfNeeded(SwiftRestClient.java:1469)
- at org.apache.hadoop.fs.swift.http.SwiftRestClient.preRemoteCommand(SwiftRestClient.java:1485)
- at org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:1105)
- at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.stat(SwiftNativeFileSystemStore.java:293)
- at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:224)
- at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:182)
- at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:173)
- at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)