Hadoop cluster logs

  1. NameNode startup log:
  2. /************************************************************
  3. STARTUP_MSG: Starting NameNode
  4. STARTUP_MSG: host = NCOIASI1/127.0.1.1
  5. STARTUP_MSG: args = []
  6. STARTUP_MSG: version = 1.0.3
  7. STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
  8. ************************************************************/
  9. 2012-05-25 10:58:44,259 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  10. 2012-05-25 10:58:44,294 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  11. 2012-05-25 10:58:44,295 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
  12. 2012-05-25 10:58:44,295 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
  13. 2012-05-25 10:58:44,406 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  14. 2012-05-25 10:58:44,409 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  15. 2012-05-25 10:58:44,412 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  16. 2012-05-25 10:58:44,412 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
  17. 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
  18. 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 2.27625 MB
  19. 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^18 = 262144 entries
  20. 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: recommended=262144, actual=262144
  21. 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=dmurvihill
  22. 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
  23. 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
  24. 2012-05-25 10:58:44,443 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
  25. 2012-05-25 10:58:44,443 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  26. 2012-05-25 10:58:44,456 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
  27. 2012-05-25 10:58:44,468 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
  28. 2012-05-25 10:58:44,475 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 26
  29. 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
  30. 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 loaded in 0 seconds.
  31. 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-dmurvihill/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
  32. 2012-05-25 10:58:44,483 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
  33. 2012-05-25 10:58:44,890 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
  34. 2012-05-25 10:58:45,288 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
  35. 2012-05-25 10:58:45,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 853 msecs
  36. 2012-05-25 10:58:45,293 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
  37. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  38. 2012-05-25 10:58:45,298 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
  39. 2012-05-25 10:58:45,301 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
  40. 2012-05-25 10:58:45,315 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
  41. 2012-05-25 10:58:45,317 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
  42. 2012-05-25 10:58:45,317 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
  43. 2012-05-25 10:58:45,319 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
  44. 2012-05-25 10:58:50,364 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  45. 2012-05-25 10:58:50,398 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
  46. 2012-05-25 10:58:50,405 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
  47. 2012-05-25 10:58:50,409 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
  48. 2012-05-25 10:58:50,410 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
  49. 2012-05-25 10:58:50,410 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
  50. 2012-05-25 10:58:50,410 INFO org.mortbay.log: jetty-6.1.26
  51. 2012-05-25 10:58:50,563 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
  52. 2012-05-25 10:58:50,563 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
  53. 2012-05-25 10:58:50,564 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  54. 2012-05-25 10:58:50,564 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
  55. 2012-05-25 10:58:50,565 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
  56. 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
  57. 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
  58. 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
  59. 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
  60. 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
  61. 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
  62. 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
  63. 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
  64. 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
  65. 2012-05-25 10:58:53,976 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  66. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  67. 2012-05-25 10:58:53,977 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53040: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  68. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  69. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  70. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  71. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  72. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  73. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  74. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  75. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  76. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  77. at java.lang.reflect.Method.invoke(Method.java:616)
  78. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  79. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  80. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  81. at java.security.AccessController.doPrivileged(Native Method)
  82. at javax.security.auth.Subject.doAs(Subject.java:416)
  83. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  84. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  85. 2012-05-25 10:59:00,969 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-921708818-127.0.1.1-50010-1337891288341
  86. 2012-05-25 10:59:00,973 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
  87. 2012-05-25 10:59:00,988 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
  88. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
  89. 2012-05-25 10:59:00,989 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 10, processing time: 4 msecs
  90. 2012-05-25 10:59:03,988 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  91. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
  92. 2012-05-25 10:59:03,988 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53044: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  93. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
  94. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  95. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
  96. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  97. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  98. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  99. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  100. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  101. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  102. at java.lang.reflect.Method.invoke(Method.java:616)
  103. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  104. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  105. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  106. at java.security.AccessController.doPrivileged(Native Method)
  107. at javax.security.auth.Subject.doAs(Subject.java:416)
  108. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  109. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  110. 2012-05-25 10:59:13,997 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  111. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
  112. 2012-05-25 10:59:13,997 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53045: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  113. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
  114. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  115. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
  116. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  117. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  118. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  119. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  120. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  121. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  122. at java.lang.reflect.Method.invoke(Method.java:616)
  123. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  124. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  125. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  126. at java.security.AccessController.doPrivileged(Native Method)
  127. at javax.security.auth.Subject.doAs(Subject.java:416)
  128. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  129. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  130. 2012-05-25 10:59:20,993 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
  131. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 9 seconds.
  132. 2012-05-25 10:59:24,007 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  133. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
  134. 2012-05-25 10:59:24,007 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53047: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  135. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
  136. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  137. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
  138. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  139. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  140. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  141. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  142. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  143. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  144. at java.lang.reflect.Method.invoke(Method.java:616)
  145. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  146. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  147. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  148. at java.security.AccessController.doPrivileged(Native Method)
  149. at javax.security.auth.Subject.doAs(Subject.java:416)
  150. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  151. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  152. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 10
  153. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
  154. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 10
  155. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
  156. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 14 msec
  157. 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 46 secs.
  158. 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF.
  159. 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 1 datanodes
  160. 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 10 blocks
  161. 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
  162. 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
  163. 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
  164. 2012-05-25 10:59:34,015 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_156230688239219853 is added to invalidSet of 127.0.0.1:50010
  165. 2012-05-25 10:59:34,196 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info. blk_8304339447159938952_1022
  166. 2012-05-25 10:59:34,240 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_8304339447159938952_1022 size 4
  167. 2012-05-25 10:59:34,242 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info from client DFSClient_-278025243
  168. 2012-05-25 10:59:34,242 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info is closed by DFSClient_-278025243
  169. 2012-05-25 10:59:36,303 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 127.0.0.1:50010 to delete blk_156230688239219853_1011
  170. 2012-05-25 11:03:52,775 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
  171. 2012-05-25 11:03:52,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 7 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 6 SyncTimes(ms): 98
  172. 2012-05-25 11:03:53,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
  173. 2012-05-25 11:03:53,834 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 50
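
The SafeModeException entries above are expected during startup: the NameNode stays in safe mode until the reported-block ratio reaches the 0.9990 threshold (dfs.safemode.threshold.pct) and a roughly 30-second extension (dfs.safemode.extension) has elapsed, which is why the delete of /tmp/hadoop-dmurvihill/mapred/system keeps failing until safe mode is turned off at 10:59:31. A minimal way to watch or override this from the shell, assuming the hadoop client is on the PATH and fs.default.name points at this NameNode (hdfs://localhost:9000):

    # report whether the NameNode is currently in safe mode
    hadoop dfsadmin -safemode get
    # block until safe mode is exited automatically
    hadoop dfsadmin -safemode wait
    # force the NameNode out of safe mode (only once the block report is known to be complete)
    hadoop dfsadmin -safemode leave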
  174.  
  175.  
  176. JobTracker startup log:
  177. /************************************************************
  178. STARTUP_MSG: Starting JobTracker
  179. STARTUP_MSG: host = NCOIASI1/127.0.1.1
  180. STARTUP_MSG: args = []
  181. STARTUP_MSG: version = 1.0.3
  182. STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
  183. ************************************************************/
  184. 2012-05-25 10:58:48,427 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  185. 2012-05-25 10:58:48,462 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  186. 2012-05-25 10:58:48,463 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
  187. 2012-05-25 10:58:48,463 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
  188. 2012-05-25 10:58:48,499 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
  189. 2012-05-25 10:58:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  190. 2012-05-25 10:58:48,574 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  191. 2012-05-25 10:58:48,574 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
  192. 2012-05-25 10:58:48,575 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
  193. 2012-05-25 10:58:48,575 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
  194. 2012-05-25 10:58:48,575 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
  195. 2012-05-25 10:58:48,576 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
  196. 2012-05-25 10:58:48,581 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as dmurvihill
  197. 2012-05-25 10:58:48,597 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
  198. 2012-05-25 10:58:48,598 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.
  199. 2012-05-25 10:58:48,598 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.
  200. 2012-05-25 10:58:53,634 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  201. 2012-05-25 10:58:53,669 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
  202. 2012-05-25 10:58:53,695 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
  203. 2012-05-25 10:58:53,696 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
  204. 2012-05-25 10:58:53,696 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
  205. 2012-05-25 10:58:53,696 INFO org.mortbay.log: jetty-6.1.26
  206. 2012-05-25 10:58:53,720 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50030_job____yn7qmk, using /tmp/Jetty_0_0_0_0_50030_job____yn7qmk_5750362317391306683
  207. 2012-05-25 10:58:53,882 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
  208. 2012-05-25 10:58:53,886 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  209. 2012-05-25 10:58:53,886 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
  210. 2012-05-25 10:58:53,887 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
  211. 2012-05-25 10:58:53,887 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
  212. 2012-05-25 10:58:53,975 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
  213. 2012-05-25 10:58:53,980 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
  214. org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  215. The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  216. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  217. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  218. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  219. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  220. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  221. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  222. at java.lang.reflect.Method.invoke(Method.java:616)
  223. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  224. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  225. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  226. at java.security.AccessController.doPrivileged(Native Method)
  227. at javax.security.auth.Subject.doAs(Subject.java:416)
  228. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  229. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  230.  
  231. at org.apache.hadoop.ipc.Client.call(Client.java:1070)
  232. at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  233. at $Proxy5.delete(Unknown Source)
  234. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  235. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  236. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  237. at java.lang.reflect.Method.invoke(Method.java:616)
  238. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  239. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  240. at $Proxy5.delete(Unknown Source)
  241. at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
  242. at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
  243. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
  244. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
  245. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
  246. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
  247. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
  248. at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
  249. 2012-05-25 10:59:03,987 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
  250. 2012-05-25 10:59:03,990 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
  251. org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  252. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
  253. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  254. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  255. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  256. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  257. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  258. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  259. at java.lang.reflect.Method.invoke(Method.java:616)
  260. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  261. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  262. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  263. at java.security.AccessController.doPrivileged(Native Method)
  264. at javax.security.auth.Subject.doAs(Subject.java:416)
  265. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  266. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  267.  
  268. at org.apache.hadoop.ipc.Client.call(Client.java:1070)
  269. at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  270. at $Proxy5.delete(Unknown Source)
  271. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  272. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  273. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  274. at java.lang.reflect.Method.invoke(Method.java:616)
  275. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  276. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  277. at $Proxy5.delete(Unknown Source)
  278. at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
  279. at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
  280. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
  281. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
  282. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
  283. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
  284. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
  285. at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
  286. 2012-05-25 10:59:13,996 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
  287. 2012-05-25 10:59:13,999 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
  288. org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  289. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
  290. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  291. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  292. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  293. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  294. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  295. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  296. at java.lang.reflect.Method.invoke(Method.java:616)
  297. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  298. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  299. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  300. at java.security.AccessController.doPrivileged(Native Method)
  301. at javax.security.auth.Subject.doAs(Subject.java:416)
  302. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  303. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  304.  
  305. at org.apache.hadoop.ipc.Client.call(Client.java:1070)
  306. at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  307. at $Proxy5.delete(Unknown Source)
  308. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  309. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  310. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  311. at java.lang.reflect.Method.invoke(Method.java:616)
  312. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  313. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  314. at $Proxy5.delete(Unknown Source)
  315. at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
  316. at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
  317. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
  318. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
  319. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
  320. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
  321. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
  322. at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
  323. 2012-05-25 10:59:24,006 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
  324. 2012-05-25 10:59:24,008 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
  325. org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
  326. The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
  327. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
  328. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
  329. at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
  330. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  331. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  332. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  333. at java.lang.reflect.Method.invoke(Method.java:616)
  334. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  335. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  336. at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  337. at java.security.AccessController.doPrivileged(Native Method)
  338. at javax.security.auth.Subject.doAs(Subject.java:416)
  339. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
  340. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
  341.  
  342. at org.apache.hadoop.ipc.Client.call(Client.java:1070)
  343. at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  344. at $Proxy5.delete(Unknown Source)
  345. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  346. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  347. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  348. at java.lang.reflect.Method.invoke(Method.java:616)
  349. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  350. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  351. at $Proxy5.delete(Unknown Source)
  352. at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
  353. at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
  354. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
  355. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
  356. at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
  357. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
  358. at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
  359. at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
  360. 2012-05-25 10:59:34,014 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
  361. 2012-05-25 10:59:34,126 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
  362. 2012-05-25 10:59:34,131 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
  363. 2012-05-25 10:59:34,131 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
  364. 2012-05-25 10:59:34,135 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
  365. 2012-05-25 10:59:34,251 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
  366. 2012-05-25 10:59:34,272 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
  367. 2012-05-25 10:59:34,273 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
  368. 2012-05-25 10:59:34,273 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
  369. 2012-05-25 10:59:34,273 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
  370. 2012-05-25 10:59:34,273 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  371. 2012-05-25 10:59:34,274 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
  372. 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
  373. 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
  374. 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
  375. 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
  376. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
  377. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
  378. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
  379. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
  380. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
  381. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
  382. 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
  383. 2012-05-25 10:59:37,570 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/NCOIASI1
  384. 2012-05-25 10:59:37,572 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_NCOIASI1:localhost/127.0.0.1:34035 to host NCOIASI1
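
The repeated "problem cleaning system directory" messages are the same safe-mode condition seen from the JobTracker's side; it retries roughly every 10 seconds until the NameNode leaves safe mode at 10:59:31, after which startup completes and the TaskTracker is registered. The system directory sits under /tmp because hadoop.tmp.dir was left at its default of /tmp/hadoop-${user.name}; a sketch of a core-site.xml override that keeps the HDFS metadata and the mapred system directory off /tmp (the path /var/lib/hadoop/tmp is only an example):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/var/lib/hadoop/tmp</value>
    </property>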
  385.  
  386. SecondaryNameNode logs:
  387. /************************************************************
  388. STARTUP_MSG: Starting SecondaryNameNode
  389. STARTUP_MSG: host = NCOIASI1/127.0.1.1
  390. STARTUP_MSG: args = []
  391. STARTUP_MSG: version = 1.0.3
  392. STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
  393. ************************************************************/
  394. 2012-05-25 10:58:52,536 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Starting web server as: dmurvihill
  395. 2012-05-25 10:58:52,566 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  396. 2012-05-25 10:58:52,600 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
  397. 2012-05-25 10:58:52,602 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50090
  398. 2012-05-25 10:58:52,603 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
  399. 2012-05-25 10:58:52,603 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50090
  400. 2012-05-25 10:58:52,603 INFO org.mortbay.log: jetty-6.1.26
  401. 2012-05-25 10:58:52,768 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50090
  402. 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Web server init done
  403. 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary Web-server up at: 0.0.0.0:50090
  404. 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary image servlet up at: 0.0.0.0:50090
  405. 2012-05-25 10:58:52,768 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)
  406. 2012-05-25 10:58:52,768 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :67108864 bytes (65536 KB)
  407. 2012-05-25 11:03:52,953 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 3273 bytes.
  408. 2012-05-25 11:03:52,954 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 571 bytes.
  409. 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
  410. 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 2.27625 MB
  411. 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^18 = 262144 entries
  412. 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: recommended=262144, actual=262144
  413. 2012-05-25 11:03:52,960 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=dmurvihill
  414. 2012-05-25 11:03:52,961 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
  415. 2012-05-25 11:03:52,961 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
  416. 2012-05-25 11:03:52,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
  417. 2012-05-25 11:03:52,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  418. 2012-05-25 11:03:52,967 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
  419. 2012-05-25 11:03:52,975 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 26
  420. 2012-05-25 11:03:52,980 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
  421. 2012-05-25 11:03:52,983 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-dmurvihill/dfs/namesecondary/current/edits of size 571 edits # 7 loaded in 0 seconds.
  422. 2012-05-25 11:03:52,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
  423. 2012-05-25 11:03:53,056 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
  424. 2012-05-25 11:03:53,448 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
  425. 2012-05-25 11:03:53,766 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL 0.0.0.0:50070putimage=1&port=50090&machine=0.0.0.0&token=-32:2078098923:0:1337969032000:1337968724482
  426. 2012-05-25 11:03:54,205 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 3273
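
The checkpoint at 11:03:53 merges the downloaded fsimage (3273 bytes) with the downloaded edits (571 bytes) and posts the new image back to the NameNode on port 50070. The two WARN lines above simply echo the checkpoint settings; the corresponding core-site.xml properties in 1.0.3, shown here with their default values:

    <property>
      <name>fs.checkpoint.period</name>
      <value>3600</value>       <!-- seconds between checkpoints -->
    </property>
    <property>
      <name>fs.checkpoint.size</name>
      <value>67108864</value>   <!-- edits-size trigger in bytes (64 MB) -->
    </property>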
  427.  
  428.  
  429. DataNode logs from master:
  430. /************************************************************
  431. STARTUP_MSG: Starting DataNode
  432. STARTUP_MSG: host = NCOIASI1/127.0.1.1
  433. STARTUP_MSG: args = []
  434. STARTUP_MSG: version = 1.0.3
  435. STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
  436. ************************************************************/
  437. 2012-05-25 10:58:45,742 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  438. 2012-05-25 10:58:45,777 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  439. 2012-05-25 10:58:45,778 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
  440. 2012-05-25 10:58:45,778 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
  441. 2012-05-25 10:58:45,867 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  442. 2012-05-25 10:58:45,869 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  443. 2012-05-25 10:58:50,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
  444. 2012-05-25 10:58:50,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
  445. 2012-05-25 10:58:50,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
  446. 2012-05-25 10:58:55,717 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  447. 2012-05-25 10:58:55,753 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
  448. 2012-05-25 10:58:55,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
  449. 2012-05-25 10:58:55,760 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
  450. 2012-05-25 10:58:55,761 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
  451. 2012-05-25 10:58:55,761 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
  452. 2012-05-25 10:58:55,761 INFO org.mortbay.log: jetty-6.1.26
  453. 2012-05-25 10:58:55,935 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
  454. 2012-05-25 10:58:55,939 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  455. 2012-05-25 10:58:55,939 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
  456. 2012-05-25 10:59:00,957 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
  457. 2012-05-25 10:59:00,960 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
  458. 2012-05-25 10:59:00,961 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
  459. 2012-05-25 10:59:00,963 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(NCOIASI1:50010, storageID=DS-921708818-127.0.1.1-50010-1337891288341, infoPort=50075, ipcPort=50020)
  460. 2012-05-25 10:59:00,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous block report scan
  461. 2012-05-25 10:59:00,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-921708818-127.0.1.1-50010-1337891288341, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-dmurvihill/dfs/data/current'}
  462. 2012-05-25 10:59:00,976 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  463. 2012-05-25 10:59:00,976 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 2ms
  464. 2012-05-25 10:59:00,976 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
  465. 2012-05-25 10:59:00,978 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
  466. 2012-05-25 10:59:00,978 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
  467. 2012-05-25 10:59:00,979 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
  468. 2012-05-25 10:59:00,979 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
  469. 2012-05-25 10:59:00,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 1 ms
  470. 2012-05-25 10:59:00,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 10 blocks took 1 msec to generate and 8 msecs for RPC and NN processing
  471. 2012-05-25 10:59:00,992 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
  472. 2012-05-25 10:59:00,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 1 ms
  473. 2012-05-25 10:59:00,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms
  474. 2012-05-25 10:59:34,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_8304339447159938952_1022 src: /127.0.0.1:37311 dest: /127.0.0.1:50010
  475. 2012-05-25 10:59:34,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:37311, dest: /127.0.0.1:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_-278025243, offset: 0, srvID: DS-921708818-127.0.1.1-50010-1337891288341, blockid: blk_8304339447159938952_1022, duration: 1878489
  476. 2012-05-25 10:59:34,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_8304339447159938952_1022 terminating
  477. 2012-05-25 10:59:36,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_156230688239219853_1011 file /tmp/hadoop-dmurvihill/dfs/data/current/blk_156230688239219853 for deletion
  478. 2012-05-25 10:59:36,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_156230688239219853_1011 at file /tmp/hadoop-dmurvihill/dfs/data/current/blk_156230688239219853
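
The block report at 10:59:00 (10 blocks, 8 msecs of RPC and NameNode processing) is what pushes the reported-block ratio to 1.0000 in the NameNode log above, and the deletion of blk_156230688239219853 at 10:59:36 is the invalidation the NameNode asked for in its own log ("BLOCK* ask 127.0.0.1:50010 to delete ..."). Two commands that summarize the same state from the client side, assuming the hadoop client is configured for hdfs://localhost:9000:

    # capacity, DFS usage and last-contact time per DataNode
    hadoop dfsadmin -report
    # walk the namespace and report missing, corrupt or under-replicated blocks
    hadoop fsck /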
  479.  
  480.  
  481. TaskTracker logs from the master:
  482. 2012-05-25 10:58:49,777 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
  483. /************************************************************
  484. STARTUP_MSG: Starting TaskTracker
  485. STARTUP_MSG: host = NCOIASI1/127.0.1.1
  486. STARTUP_MSG: args = []
  487. STARTUP_MSG: version = 1.0.3
  488. STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
  489. ************************************************************/
  490. 2012-05-25 10:58:49,885 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  491. 2012-05-25 10:58:49,919 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  492. 2012-05-25 10:58:49,920 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
  493. 2012-05-25 10:58:49,920 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
  494. 2012-05-25 10:58:49,973 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  495. 2012-05-25 10:58:49,975 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  496. 2012-05-25 10:58:55,087 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  497. 2012-05-25 10:58:55,121 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
  498. 2012-05-25 10:58:55,136 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
  499. 2012-05-25 10:58:55,139 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as dmurvihill
  500. 2012-05-25 10:58:55,139 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-dmurvihill/mapred/local
  501. 2012-05-25 10:58:55,154 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  502. 2012-05-25 10:58:55,159 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  503. 2012-05-25 10:58:55,160 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source TaskTrackerMetrics registered.
  504. 2012-05-25 10:58:55,174 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
  505. 2012-05-25 10:58:55,176 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort34035 registered.
  506. 2012-05-25 10:58:55,176 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort34035 registered.
  507. 2012-05-25 10:58:55,179 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  508. 2012-05-25 10:58:55,179 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 34035: starting
  509. 2012-05-25 10:58:55,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 34035: starting
  510. 2012-05-25 10:58:55,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 34035: starting
  511. 2012-05-25 10:58:55,181 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 34035: starting
  512. 2012-05-25 10:58:55,181 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 34035: starting
  513. 2012-05-25 10:58:55,181 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:34035
  514. 2012-05-25 10:58:55,181 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_NCOIASI1:localhost/127.0.0.1:34035
  515. 2012-05-25 10:59:34,285 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_NCOIASI1:localhost/127.0.0.1:34035
  516. 2012-05-25 10:59:34,297 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
  517. 2012-05-25 10:59:34,307 INFO org.apache.hadoop.mapred.TaskTracker: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7c3afb99
  518. 2012-05-25 10:59:34,310 WARN org.apache.hadoop.mapred.TaskTracker: TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is disabled.
  519. 2012-05-25 10:59:34,315 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
  520. 2012-05-25 10:59:34,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ShuffleServerMetrics registered.
  521. 2012-05-25 10:59:34,327 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
  522. 2012-05-25 10:59:34,328 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
  523. 2012-05-25 10:59:34,328 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
  524. 2012-05-25 10:59:34,328 INFO org.mortbay.log: jetty-6.1.26
  525. 2012-05-25 10:59:34,487 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
  526. 2012-05-25 10:59:34,488 INFO org.apache.hadoop.mapred.TaskTracker: FILE_CACHE_SIZE for mapOutputServlet set to : 2000
  527. 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205231213_0002 for user-log deletion with retainTimeStamp:1338055174313
  528. 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205241528_0002 for user-log deletion with retainTimeStamp:1338055174313
  529. 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205241528_0002 for user-log deletion with retainTimeStamp:1338055174314
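
The WARN lines here are benign for a single-node setup: the native-hadoop library is optional (built-in Java classes are used instead), and TaskMemoryManager stays disabled because no per-task memory limits are configured, so totalMemoryAllottedForTasks is -1. Once the JobTracker finishes starting, the tracker registers as tracker_NCOIASI1 on port 34035, matching the "Adding tracker" line in the JobTracker log above. A sketch of the mapred-site.xml knobs usually tuned next on a node like this (the values shown are the 1.0.3 defaults, not recommendations):

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>           <!-- concurrent map slots on this TaskTracker -->
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>           <!-- concurrent reduce slots -->
    </property>
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx200m</value>    <!-- heap for each child task JVM -->
    </property>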