[root@sm178 hadoop]# yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000000 s3a://tpc/karan/1G-terasort-input
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
19/03/15 09:23:52 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
19/03/15 09:23:52 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/03/15 09:23:52 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
19/03/15 09:23:53 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key
19/03/15 09:23:53 INFO client.RMProxy: Connecting to ResourceManager at sm178.ch.intel.com/192.168.177.178:8032
19/03/15 09:23:53 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1552650691937_0005

19/03/15 09:24:53 WARN hdfs.DataStreamer: Exception for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144587
java.io.EOFException: Unexpected EOF while trying to read response from server
        at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:448)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
        at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
19/03/15 09:24:53 WARN hdfs.DataStreamer: Error Recovery for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144587 in pipeline [DatanodeInfoWithStorage[192.168.177.183:9866,DS-197f2d21-b750-4fc4-8bdc-c7f8c1c6011f,DISK], DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK], DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]: datanode 0(DatanodeInfoWithStorage[192.168.177.183:9866,DS-197f2d21-b750-4fc4-8bdc-c7f8c1c6011f,DISK]) is bad.
19/03/15 09:25:53 WARN hdfs.DataStreamer: Exception for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144589
java.io.EOFException: Unexpected EOF while trying to read response from server
        at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:448)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
        at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
19/03/15 09:25:53 WARN hdfs.DataStreamer: Error Recovery for BP-925763216-10.2.28.205-1547509841320:blk_1073885377_144589 in pipeline [DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK], DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]: datanode 0(DatanodeInfoWithStorage[192.168.177.179:9866,DS-9876982b-6138-48d6-889b-983a2974170f,DISK]) is bad.
19/03/15 09:25:53 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]], original=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
19/03/15 09:25:53 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1552650691937_0005
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]], original=[DatanodeInfoWithStorage[192.168.177.180:9866,DS-13d39c39-0198-461f-8f06-c8e1afec2a7a,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system metrics system...
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system stopped.
19/03/15 09:25:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
[root@sm178 hadoop]#
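The failure above is not an S3A problem: teragen's output goes to s3a://tpc/karan/1G-terasort-input (10,000,000 rows at 100 bytes each, roughly 1 GB), but the job's staging directory /user/root/.staging lives on HDFS, and that is where the write pipeline breaks. Two of the three datanodes in the pipeline (192.168.177.183, then 192.168.177.179) drop their connections with EOFException, and under the DEFAULT dfs.client.block.write.replace-datanode-on-failure.policy the client aborts once no replacement datanode is available, exactly as the IOException says. On a cluster with three or fewer datanodes a commonly suggested client-side workaround is to relax that policy. A minimal sketch, assuming the same invocation and that teragen accepts generic -D options via ToolRunner (it does in stock Hadoop); note this only masks the symptom, so the repeated pipeline EOFs are still worth investigating on the datanodes themselves:

[root@sm178 hadoop]# yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen \
    -Ddfs.client.block.write.replace-datanode-on-failure.policy=NEVER \
    10000000 s3a://tpc/karan/1G-terasort-input

Alternatively, keep the DEFAULT policy and set dfs.client.block.write.replace-datanode-on-failure.best-effort=true, which lets the client continue writing with whatever pipeline remains instead of failing the stream when no replacement datanode can be found.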