- NameNode startup log:
- /************************************************************
- STARTUP_MSG: Starting NameNode
- STARTUP_MSG: host = NCOIASI1/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.3
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
- ************************************************************/
- 2012-05-25 10:58:44,259 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
- 2012-05-25 10:58:44,294 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
- 2012-05-25 10:58:44,295 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
- 2012-05-25 10:58:44,295 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
- 2012-05-25 10:58:44,406 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
- 2012-05-25 10:58:44,409 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
- 2012-05-25 10:58:44,412 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
- 2012-05-25 10:58:44,412 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
- 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
- 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 2.27625 MB
- 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^18 = 262144 entries
- 2012-05-25 10:58:44,429 INFO org.apache.hadoop.hdfs.util.GSet: recommended=262144, actual=262144
- 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=dmurvihill
- 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
- 2012-05-25 10:58:44,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
- 2012-05-25 10:58:44,443 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
- 2012-05-25 10:58:44,443 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
- 2012-05-25 10:58:44,456 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
- 2012-05-25 10:58:44,468 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
- 2012-05-25 10:58:44,475 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 26
- 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
- 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 loaded in 0 seconds.
- 2012-05-25 10:58:44,482 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-dmurvihill/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
- 2012-05-25 10:58:44,483 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
- 2012-05-25 10:58:44,890 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
- 2012-05-25 10:58:45,288 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
- 2012-05-25 10:58:45,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 853 msecs
- 2012-05-25 10:58:45,293 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
- The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
- 2012-05-25 10:58:45,298 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
- 2012-05-25 10:58:45,301 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
- 2012-05-25 10:58:45,315 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
- 2012-05-25 10:58:45,317 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
- 2012-05-25 10:58:45,317 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
- 2012-05-25 10:58:45,319 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
- 2012-05-25 10:58:50,364 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
- 2012-05-25 10:58:50,398 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
- 2012-05-25 10:58:50,405 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
- 2012-05-25 10:58:50,409 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
- 2012-05-25 10:58:50,410 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
- 2012-05-25 10:58:50,410 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
- 2012-05-25 10:58:50,410 INFO org.mortbay.log: jetty-6.1.26
- 2012-05-25 10:58:50,563 INFO org.mortbay.log: Started [email protected]:50070
- 2012-05-25 10:58:50,563 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
- 2012-05-25 10:58:50,564 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
- 2012-05-25 10:58:50,564 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
- 2012-05-25 10:58:50,565 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
- 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
- 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
- 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
- 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
- 2012-05-25 10:58:50,566 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
- 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
- 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
- 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
- 2012-05-25 10:58:50,567 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
- 2012-05-25 10:58:53,976 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
- 2012-05-25 10:58:53,977 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53040: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
- org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- 2012-05-25 10:59:00,969 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-921708818-127.0.1.1-50010-1337891288341
- 2012-05-25 10:59:00,973 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
- 2012-05-25 10:59:00,988 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
- 2012-05-25 10:59:00,989 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 10, processing time: 4 msecs
- 2012-05-25 10:59:03,988 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
- 2012-05-25 10:59:03,988 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53044: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
- org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- 2012-05-25 10:59:13,997 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
- 2012-05-25 10:59:13,997 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53045: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
- org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- 2012-05-25 10:59:20,993 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 9 seconds.
- 2012-05-25 10:59:24,007 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dmurvihill cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
- 2012-05-25 10:59:24,007 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call delete(/tmp/hadoop-dmurvihill/mapred/system, true) from 127.0.0.1:53047: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
- org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 10
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 10
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 14 msec
- 2012-05-25 10:59:31,009 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 46 secs.
- 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF.
- 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 1 datanodes
- 2012-05-25 10:59:31,010 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 10 blocks
- 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
- 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
- 2012-05-25 10:59:33,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
- 2012-05-25 10:59:34,015 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_156230688239219853 is added to invalidSet of 127.0.0.1:50010
- 2012-05-25 10:59:34,196 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info. blk_8304339447159938952_1022
- 2012-05-25 10:59:34,240 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_8304339447159938952_1022 size 4
- 2012-05-25 10:59:34,242 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info from client DFSClient_-278025243
- 2012-05-25 10:59:34,242 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-dmurvihill/mapred/system/jobtracker.info is closed by DFSClient_-278025243
- 2012-05-25 10:59:36,303 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 127.0.0.1:50010 to delete blk_156230688239219853_1011
- 2012-05-25 11:03:52,775 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
- 2012-05-25 11:03:52,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 7 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 6 SyncTimes(ms): 98
- 2012-05-25 11:03:53,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
- 2012-05-25 11:03:53,834 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 50
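Note: the safe-mode messages above are expected at startup. The NameNode begins with 0.0000 of blocks reported, the DataNode's report at 10:59:00 pushes the ratio to 1.0000 (threshold 0.9990, the dfs.safemode.threshold.pct default of 0.999), and the roughly 30-second wait before "Safe mode is OFF" at 10:59:31 is the safe-mode extension (dfs.safemode.extension defaults to 30000 ms in Hadoop 1.x). Below is a minimal sketch, not from this paste, of how the same safe-mode flag can be polled from client code; it assumes Hadoop 1.x on the classpath and the hdfs://localhost:9000 address seen in this log, and the class name SafeModeCheck plus the 5-second poll interval are just illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction;

public class SafeModeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000"); // matches "Namenode up at: localhost/127.0.0.1:9000"
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // SAFEMODE_GET only queries the flag; it does not enter or leave safe mode.
            while (dfs.setSafeMode(SafeModeAction.SAFEMODE_GET)) {
                System.out.println("NameNode is still in safe mode, waiting 5s...");
                Thread.sleep(5000);
            }
            System.out.println("Safe mode is OFF.");
        }
        fs.close();
    }
}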
- JobTracker startup log:
- /************************************************************
- STARTUP_MSG: Starting JobTracker
- STARTUP_MSG: host = NCOIASI1/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.3
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
- ************************************************************/
- 2012-05-25 10:58:48,427 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
- 2012-05-25 10:58:48,462 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
- 2012-05-25 10:58:48,463 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
- 2012-05-25 10:58:48,463 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
- 2012-05-25 10:58:48,499 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
- 2012-05-25 10:58:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
- 2012-05-25 10:58:48,574 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
- 2012-05-25 10:58:48,574 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
- 2012-05-25 10:58:48,575 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
- 2012-05-25 10:58:48,575 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
- 2012-05-25 10:58:48,575 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
- 2012-05-25 10:58:48,576 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
- 2012-05-25 10:58:48,581 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as dmurvihill
- 2012-05-25 10:58:48,597 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
- 2012-05-25 10:58:48,598 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.
- 2012-05-25 10:58:48,598 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.
- 2012-05-25 10:58:53,634 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
- 2012-05-25 10:58:53,669 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
- 2012-05-25 10:58:53,695 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
- 2012-05-25 10:58:53,696 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
- 2012-05-25 10:58:53,696 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
- 2012-05-25 10:58:53,696 INFO org.mortbay.log: jetty-6.1.26
- 2012-05-25 10:58:53,720 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50030_job____yn7qmk, using /tmp/Jetty_0_0_0_0_50030_job____yn7qmk_5750362317391306683
- 2012-05-25 10:58:53,882 INFO org.mortbay.log: Started [email protected]:50030
- 2012-05-25 10:58:53,886 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
- 2012-05-25 10:58:53,886 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
- 2012-05-25 10:58:53,887 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
- 2012-05-25 10:58:53,887 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
- 2012-05-25 10:58:53,975 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
- 2012-05-25 10:58:53,980 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
- org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- at org.apache.hadoop.ipc.Client.call(Client.java:1070)
- at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
- at $Proxy5.delete(Unknown Source)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
- at $Proxy5.delete(Unknown Source)
- at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
- at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
- at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
- 2012-05-25 10:59:03,987 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
- 2012-05-25 10:59:03,990 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
- org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 26 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- at org.apache.hadoop.ipc.Client.call(Client.java:1070)
- at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
- at $Proxy5.delete(Unknown Source)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
- at $Proxy5.delete(Unknown Source)
- at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
- at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
- at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
- 2012-05-25 10:59:13,996 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
- 2012-05-25 10:59:13,999 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
- org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 16 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- at org.apache.hadoop.ipc.Client.call(Client.java:1070)
- at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
- at $Proxy5.delete(Unknown Source)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
- at $Proxy5.delete(Unknown Source)
- at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
- at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
- at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
- 2012-05-25 10:59:24,006 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
- 2012-05-25 10:59:24,008 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/tmp/hadoop-dmurvihill/mapred/system
- org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-dmurvihill/mapred/system. Name node is in safe mode.
- The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 6 seconds.
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1994)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1974)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:416)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
- at org.apache.hadoop.ipc.Client.call(Client.java:1070)
- at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
- at $Proxy5.delete(Unknown Source)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:616)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
- at $Proxy5.delete(Unknown Source)
- at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:828)
- at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:234)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2410)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
- at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
- at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
- at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
- 2012-05-25 10:59:34,014 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
- 2012-05-25 10:59:34,126 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
- 2012-05-25 10:59:34,131 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
- 2012-05-25 10:59:34,131 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
- 2012-05-25 10:59:34,135 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
- 2012-05-25 10:59:34,251 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
- 2012-05-25 10:59:34,272 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
- 2012-05-25 10:59:34,273 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
- 2012-05-25 10:59:34,273 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
- 2012-05-25 10:59:34,273 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
- 2012-05-25 10:59:34,273 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
- 2012-05-25 10:59:34,274 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
- 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
- 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
- 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
- 2012-05-25 10:59:34,275 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
- 2012-05-25 10:59:34,276 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
- 2012-05-25 10:59:37,570 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/NCOIASI1
- 2012-05-25 10:59:37,572 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_NCOIASI1:localhost/127.0.0.1:34035 to host NCOIASI1
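Note: every "problem cleaning system directory" stack trace above is the same startup call, delete(/tmp/hadoop-dmurvihill/mapred/system, true), which the JobTracker retries roughly every 10 seconds until the NameNode leaves safe mode at 10:59:31; the cleanup at 10:59:34 then succeeds and the tracker goes to RUNNING. The /tmp location comes from hadoop.tmp.dir defaulting to /tmp/hadoop-${user.name} in Hadoop 1.x, with mapred.system.dir resolving underneath it. A small sketch, assuming the *-site.xml files are on the classpath, that prints the resolved values (the class name PrintMapredDirs is illustrative, not part of Hadoop):

import org.apache.hadoop.mapred.JobConf;

public class PrintMapredDirs {
    public static void main(String[] args) {
        // JobConf pulls in mapred-default.xml / mapred-site.xml on top of core-site.xml.
        JobConf conf = new JobConf();
        System.out.println("hadoop.tmp.dir    = " + conf.get("hadoop.tmp.dir"));    // /tmp/hadoop-dmurvihill in this paste
        System.out.println("mapred.system.dir = " + conf.get("mapred.system.dir")); // .../mapred/system, the dir being deleted above
    }
}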
- SecondaryNameNode logs:
- /************************************************************
- STARTUP_MSG: Starting SecondaryNameNode
- STARTUP_MSG: host = NCOIASI1/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.3
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
- ************************************************************/
- 2012-05-25 10:58:52,536 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Starting web server as: dmurvihill
- 2012-05-25 10:58:52,566 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
- 2012-05-25 10:58:52,600 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
- 2012-05-25 10:58:52,602 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50090
- 2012-05-25 10:58:52,603 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
- 2012-05-25 10:58:52,603 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50090
- 2012-05-25 10:58:52,603 INFO org.mortbay.log: jetty-6.1.26
- 2012-05-25 10:58:52,768 INFO org.mortbay.log: Started [email protected]:50090
- 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Web server init done
- 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary Web-server up at: 0.0.0.0:50090
- 2012-05-25 10:58:52,768 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary image servlet up at: 0.0.0.0:50090
- 2012-05-25 10:58:52,768 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)
- 2012-05-25 10:58:52,768 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :67108864 bytes (65536 KB)
- 2012-05-25 11:03:52,953 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 3273 bytes.
- 2012-05-25 11:03:52,954 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 571 bytes.
- 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
- 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 2.27625 MB
- 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^18 = 262144 entries
- 2012-05-25 11:03:52,956 INFO org.apache.hadoop.hdfs.util.GSet: recommended=262144, actual=262144
- 2012-05-25 11:03:52,960 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=dmurvihill
- 2012-05-25 11:03:52,961 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
- 2012-05-25 11:03:52,961 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
- 2012-05-25 11:03:52,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
- 2012-05-25 11:03:52,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
- 2012-05-25 11:03:52,967 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
- 2012-05-25 11:03:52,975 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 26
- 2012-05-25 11:03:52,980 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
- 2012-05-25 11:03:52,983 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-dmurvihill/dfs/namesecondary/current/edits of size 571 edits # 7 loaded in 0 seconds.
- 2012-05-25 11:03:52,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
- 2012-05-25 11:03:53,056 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
- 2012-05-25 11:03:53,448 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3273 saved in 0 seconds.
- 2012-05-25 11:03:53,766 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL 0.0.0.0:50070putimage=1&port=50090&machine=0.0.0.0&token=-32:2078098923:0:1337969032000:1337968724482
- 2012-05-25 11:03:54,205 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 3273
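Note: the "Checkpoint Period :3600 secs" and "Log Size Trigger :67108864 bytes" lines correspond to the Hadoop 1.x properties fs.checkpoint.period and fs.checkpoint.size, and the checkpoint at 11:03 lines up with the "Roll Edit Log" / "Roll FSImage" entries in the NameNode log above. A sketch, not from this paste, that reads those two settings; the fallback defaults used here are the stock 1.x values, which match what was logged (class name CheckpointSettings is illustrative):

import org.apache.hadoop.conf.Configuration;

public class CheckpointSettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        long periodSecs = conf.getLong("fs.checkpoint.period", 3600L);    // seconds between checkpoints
        long sizeBytes  = conf.getLong("fs.checkpoint.size", 67108864L);  // edits size that also triggers a checkpoint
        System.out.println("fs.checkpoint.period = " + periodSecs + " secs");
        System.out.println("fs.checkpoint.size   = " + sizeBytes + " bytes");
    }
}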
- DataNode logs from the master:
- /************************************************************
- STARTUP_MSG: Starting DataNode
- STARTUP_MSG: host = NCOIASI1/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.3
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
- ************************************************************/
- 2012-05-25 10:58:45,742 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
- 2012-05-25 10:58:45,777 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
- 2012-05-25 10:58:45,778 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
- 2012-05-25 10:58:45,778 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
- 2012-05-25 10:58:45,867 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
- 2012-05-25 10:58:45,869 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
- 2012-05-25 10:58:50,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
- 2012-05-25 10:58:50,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
- 2012-05-25 10:58:50,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
- 2012-05-25 10:58:55,717 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
- 2012-05-25 10:58:55,753 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
- 2012-05-25 10:58:55,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
- 2012-05-25 10:58:55,760 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
- 2012-05-25 10:58:55,761 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
- 2012-05-25 10:58:55,761 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
- 2012-05-25 10:58:55,761 INFO org.mortbay.log: jetty-6.1.26
- 2012-05-25 10:58:55,935 INFO org.mortbay.log: Started [email protected]:50075
- 2012-05-25 10:58:55,939 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
- 2012-05-25 10:58:55,939 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
- 2012-05-25 10:59:00,957 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
- 2012-05-25 10:59:00,960 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
- 2012-05-25 10:59:00,961 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
- 2012-05-25 10:59:00,963 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(NCOIASI1:50010, storageID=DS-921708818-127.0.1.1-50010-1337891288341, infoPort=50075, ipcPort=50020)
- 2012-05-25 10:59:00,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous block report scan
- 2012-05-25 10:59:00,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-921708818-127.0.1.1-50010-1337891288341, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-dmurvihill/dfs/data/current'}
- 2012-05-25 10:59:00,976 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
- 2012-05-25 10:59:00,976 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 2ms
- 2012-05-25 10:59:00,976 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
- 2012-05-25 10:59:00,978 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
- 2012-05-25 10:59:00,978 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
- 2012-05-25 10:59:00,979 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
- 2012-05-25 10:59:00,979 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
- 2012-05-25 10:59:00,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 1 ms
- 2012-05-25 10:59:00,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 10 blocks took 1 msec to generate and 8 msecs for RPC and NN processing
- 2012-05-25 10:59:00,992 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
- 2012-05-25 10:59:00,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 1 ms
- 2012-05-25 10:59:00,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms
- 2012-05-25 10:59:34,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_8304339447159938952_1022 src: /127.0.0.1:37311 dest: /127.0.0.1:50010
- 2012-05-25 10:59:34,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:37311, dest: /127.0.0.1:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_-278025243, offset: 0, srvID: DS-921708818-127.0.1.1-50010-1337891288341, blockid: blk_8304339447159938952_1022, duration: 1878489
- 2012-05-25 10:59:34,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_8304339447159938952_1022 terminating
- 2012-05-25 10:59:36,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_156230688239219853_1011 file /tmp/hadoop-dmurvihill/dfs/data/current/blk_156230688239219853 for deletion
- 2012-05-25 10:59:36,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_156230688239219853_1011 at file /tmp/hadoop-dmurvihill/dfs/data/current/blk_156230688239219853
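Note: the 4-byte write at 10:59:34 (blk_8304339447159938952_1022) is the JobTracker storing jobtracker.info in the system directory once safe mode is off; the block id matches the NameSystem.allocateBlock line in the NameNode log. A sketch, assuming the same hdfs://localhost:9000 NameNode, that stats that file from a client (class name StatJobTrackerInfo is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatJobTrackerInfo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000"); // same NameNode as in these logs
        FileSystem fs = FileSystem.get(conf);
        Path info = new Path("/tmp/hadoop-dmurvihill/mapred/system/jobtracker.info");
        FileStatus st = fs.getFileStatus(info);
        // The clienttrace line above reports "bytes: 4" for this file's single block.
        System.out.println(info + ": length=" + st.getLen() + " bytes, replication=" + st.getReplication());
        fs.close();
    }
}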
- TaskTracker logs from the master:
- 2012-05-25 10:58:49,777 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
- /************************************************************
- STARTUP_MSG: Starting TaskTracker
- STARTUP_MSG: host = NCOIASI1/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.3
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:37:40 UTC 2012
- ************************************************************/
- 2012-05-25 10:58:49,885 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
- 2012-05-25 10:58:49,919 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
- 2012-05-25 10:58:49,920 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
- 2012-05-25 10:58:49,920 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
- 2012-05-25 10:58:49,973 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
- 2012-05-25 10:58:49,975 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
- 2012-05-25 10:58:55,087 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
- 2012-05-25 10:58:55,121 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
- 2012-05-25 10:58:55,136 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
- 2012-05-25 10:58:55,139 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as dmurvihill
- 2012-05-25 10:58:55,139 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-dmurvihill/mapred/local
- 2012-05-25 10:58:55,154 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- 2012-05-25 10:58:55,159 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
- 2012-05-25 10:58:55,160 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source TaskTrackerMetrics registered.
- 2012-05-25 10:58:55,174 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
- 2012-05-25 10:58:55,176 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort34035 registered.
- 2012-05-25 10:58:55,176 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort34035 registered.
- 2012-05-25 10:58:55,179 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
- 2012-05-25 10:58:55,179 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 34035: starting
- 2012-05-25 10:58:55,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 34035: starting
- 2012-05-25 10:58:55,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 34035: starting
- 2012-05-25 10:58:55,181 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 34035: starting
- 2012-05-25 10:58:55,181 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 34035: starting
- 2012-05-25 10:58:55,181 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:34035
- 2012-05-25 10:58:55,181 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_NCOIASI1:localhost/127.0.0.1:34035
- 2012-05-25 10:59:34,285 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_NCOIASI1:localhost/127.0.0.1:34035
- 2012-05-25 10:59:34,297 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
- 2012-05-25 10:59:34,307 INFO org.apache.hadoop.mapred.TaskTracker: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7c3afb99
- 2012-05-25 10:59:34,310 WARN org.apache.hadoop.mapred.TaskTracker: TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is disabled.
- 2012-05-25 10:59:34,315 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
- 2012-05-25 10:59:34,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ShuffleServerMetrics registered.
- 2012-05-25 10:59:34,327 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
- 2012-05-25 10:59:34,328 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
- 2012-05-25 10:59:34,328 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
- 2012-05-25 10:59:34,328 INFO org.mortbay.log: jetty-6.1.26
- 2012-05-25 10:59:34,487 INFO org.mortbay.log: Started [email protected]:50060
- 2012-05-25 10:59:34,488 INFO org.apache.hadoop.mapred.TaskTracker: FILE_CACHE_SIZE for mapOutputServlet set to : 2000
- 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205231213_0002 for user-log deletion with retainTimeStamp:1338055174313
- 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205241528_0002 for user-log deletion with retainTimeStamp:1338055174313
- 2012-05-25 10:59:34,493 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201205241528_0002 for user-log deletion with retainTimeStamp:1338055174314
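Note: the TaskTracker registers with the JobTracker on port 9001 and declares /tmp/hadoop-dmurvihill/mapred/local as its only good mapred.local.dir, again under the hadoop.tmp.dir default; TaskMemoryManager is disabled simply because totalMemoryAllottedForTasks is -1, i.e. no per-task memory limits are configured. A sketch, assuming mapred-site.xml is on the classpath, that prints the two settings the tracker depends on (class name TaskTrackerSettings is illustrative):

import org.apache.hadoop.mapred.JobConf;

public class TaskTrackerSettings {
    public static void main(String[] args) {
        // JobConf loads mapred-site.xml from the classpath, the same config the TaskTracker reads.
        JobConf conf = new JobConf();
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker")); // localhost:9001 in this setup
        System.out.println("mapred.local.dir   = " + conf.get("mapred.local.dir"));   // /tmp/hadoop-dmurvihill/mapred/local here
    }
}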