[root@sm178 hadoop]# yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000000 s3a://tpc/karan/1G-terasort-input
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
19/03/15 09:23:52 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
19/03/15 09:23:52 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/03/15 09:23:52 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
19/03/15 09:23:53 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key
19/03/15 09:23:53 INFO client.RMProxy: Connecting to ResourceManager at sm178.ch.intel.com/192.168.177.178:8032
19/03/15 09:23:53 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1552650691937_0005
19/03/15 09:24:53 WARN hdfs.DataStreamer: Exception for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144587
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:448)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
19/03/15 09:24:53 WARN hdfs.DataStreamer: Error Recovery for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144587 in pipeline [DatanodeInfoWithStorage[192.168.177.183:9866,DS-197f2d21-b750-4fc4-8bdc-c7f8c1c6011f,DISK], DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK], DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]: datanode 0(DatanodeInfoWithStorage[192.168.177.183:9866,DS-197f2d21-b750-4fc4-8bdc-c7f8c1c6011f,DISK]) is bad.
19/03/15 09:25:53 WARN hdfs.DataStreamer: Exception for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144589
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:448)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
19/03/15 09:25:53 WARN hdfs.DataStreamer: Error Recovery for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144589 in pipeline [DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK], DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]: datanode 0(DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK]) is bad.
19/03/15 09:25:53 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]], original=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
19/03/15 09:25:53 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1552650691937_0005
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]], original=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system metrics system...
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system stopped.
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
[root@sm178 hadoop]#
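What the log shows: while uploading the job resources to the HDFS staging area, two of the three datanodes in the write pipeline (192.168.177.183 and 192.168.177.179) failed in succession with EOF errors, leaving only one good node. Under the DEFAULT replacement policy the HDFS client then aborts rather than continue on a single surviving datanode, which is the IOException that kills the job submission. The root cause is the datanode failures themselves, and their logs should be checked first; but on a small cluster a common client-side workaround, as the exception message itself suggests, is to relax the replacement policy. A sketch of the relevant hdfs-site.xml settings on the submitting host (values here are illustrative, not taken from this cluster):

```xml
<!-- Client-side pipeline-recovery settings (hdfs-site.xml). -->
<!-- NEVER tells the client not to request a replacement datanode and to
     keep writing to whatever pipeline nodes remain. This masks node
     failures, so it is only reasonable on very small clusters. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

<!-- Alternative: keep the DEFAULT policy but fall back gracefully when no
     replacement datanode can be found, instead of throwing. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```

Either setting alone is typically enough to let a 3-node cluster survive transient datanode drops during a write; neither fixes whatever made two datanodes return EOF within the same minute.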