2017-02-12 16:24:58,249 INFO namenode.NameNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = node0/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.5.3.0-37
STARTUP_MSG:   classpath = ...
STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r 9828acfdec41a121f0121f556b09e2d112259e92; compiled by 'jenkins' on 2016-11-29T18:37Z
STARTUP_MSG:   java = 1.8.0_77
************************************************************/
2017-02-12 16:24:58,268 INFO namenode.NameNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2017-02-12 16:24:58,271 INFO namenode.NameNode (NameNode.java:createNameNode(1600)) - createNameNode []
2017-02-12 16:24:58,709 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2017-02-12 16:24:58,868 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2017-02-12 16:24:58,868 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2017-02-12 16:24:58,878 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://node0:8020
2017-02-12 16:24:58,881 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use node0:8020 to access this namenode/service.
2017-02-12 16:24:59,149 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2017-02-12 16:24:59,160 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1780)) - Starting Web-server for hdfs at: http://node0:50070
2017-02-12 16:24:59,237 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-02-12 16:24:59,251 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(293)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-02-12 16:24:59,259 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2017-02-12 16:24:59,267 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-02-12 16:24:59,274 INFO http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-02-12 16:24:59,276 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-02-12 16:24:59,277 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-02-12 16:24:59,277 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
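(If CORS is actually wanted on the NameNode web endpoints, the property named in the warning above is normally set in core-site.xml. A minimal sketch — the allowed-origins value is an illustrative assumption, not something this cluster has configured:)

```xml
<!-- core-site.xml: enable the CORS filter the warning above refers to -->
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>true</value>
</property>
<!-- illustrative assumption: tighten the origin list for a real cluster -->
<property>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>*</value>
</property>
```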
2017-02-12 16:24:59,429 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(93)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-02-12 16:24:59,430 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-02-12 16:24:59,454 INFO http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 50070
2017-02-12 16:24:59,454 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-02-12 16:24:59,734 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@node0:50070
2017-02-12 16:24:59,774 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-02-12 16:24:59,774 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-02-12 16:24:59,774 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(656)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-02-12 16:24:59,776 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(661)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-02-12 16:24:59,780 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-02-12 16:24:59,784 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
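(The repeated Util warnings above are cleared by writing the directory as a file:// URI in hdfs-site.xml, and listing more than one comma-separated directory addresses the single-storage-directory warnings. A minimal sketch — the second path is an illustrative assumption, not part of this cluster's layout:)

```xml
<!-- hdfs-site.xml: URI form instead of the bare path /hadoop/hdfs/namenode -->
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- the second directory is an assumed example; redundant dirs are comma-separated -->
  <value>file:///hadoop/hdfs/namenode,file:///mnt/backup/hdfs/namenode</value>
</property>
```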
2017-02-12 16:24:59,788 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(210)) - set restore failed storage to true
2017-02-12 16:24:59,827 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(725)) - No KeyProvider found.
2017-02-12 16:24:59,827 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(731)) - Enabling async auditlog
2017-02-12 16:24:59,830 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(735)) - fsLock is fair:false
2017-02-12 16:24:59,851 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(90)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2017-02-12 16:24:59,867 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(242)) - dfs.block.invalidate.limit=1000
2017-02-12 16:24:59,867 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(248)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-02-12 16:24:59,869 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2017-02-12 16:24:59,870 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2017 Feb 12 17:24:59
2017-02-12 16:24:59,872 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2017-02-12 16:24:59,872 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-02-12 16:24:59,874 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 1011.3 MB = 20.2 MB
2017-02-12 16:24:59,874 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^21 = 2097152 entries
2017-02-12 16:24:59,879 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(401)) - dfs.block.access.token.enable=true
2017-02-12 16:24:59,879 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(422)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(387)) - defaultReplication = 3
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(388)) - maxReplication = 50
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(389)) - minReplication = 1
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(390)) - maxReplicationStreams = 2
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(391)) - replicationRecheckInterval = 3000
2017-02-12 16:25:00,094 INFO blockmanagement.BlockManager (BlockManager.java:<init>(392)) - encryptDataTransfer = false
2017-02-12 16:25:00,095 INFO blockmanagement.BlockManager (BlockManager.java:<init>(393)) - maxNumBlocksToLog = 1000
2017-02-12 16:25:00,103 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(755)) - fsOwner = hdfs (auth:SIMPLE)
2017-02-12 16:25:00,103 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(756)) - supergroup = hdfs
2017-02-12 16:25:00,103 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(757)) - isPermissionEnabled = true
2017-02-12 16:25:00,103 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(768)) - HA Enabled: false
2017-02-12 16:25:00,111 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(805)) - Append Enabled: true
2017-02-12 16:25:00,156 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map INodeMap
2017-02-12 16:25:00,156 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-02-12 16:25:00,160 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 1011.3 MB = 10.1 MB
2017-02-12 16:25:00,160 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^20 = 1048576 entries
2017-02-12 16:25:00,161 INFO namenode.FSDirectory (FSDirectory.java:<init>(250)) - ACLs enabled? false
2017-02-12 16:25:00,161 INFO namenode.FSDirectory (FSDirectory.java:<init>(254)) - XAttrs enabled? true
2017-02-12 16:25:00,162 INFO namenode.FSDirectory (FSDirectory.java:<init>(262)) - Maximum size of an xattr: 16384
2017-02-12 16:25:00,162 INFO namenode.NameNode (FSDirectory.java:<init>(315)) - Caching file names occuring more than 10 times
2017-02-12 16:25:00,174 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map cachedBlocks
2017-02-12 16:25:00,175 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-02-12 16:25:00,178 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 1011.3 MB = 2.5 MB
2017-02-12 16:25:00,178 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^18 = 262144 entries
2017-02-12 16:25:00,180 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5354)) - dfs.namenode.safemode.threshold-pct = 1.0
2017-02-12 16:25:00,180 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5355)) - dfs.namenode.safemode.min.datanodes = 0
2017-02-12 16:25:00,180 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5356)) - dfs.namenode.safemode.extension = 30000
2017-02-12 16:25:00,182 INFO metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-02-12 16:25:00,182 INFO metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2017-02-12 16:25:00,182 INFO metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-02-12 16:25:00,187 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(915)) - Retry cache on namenode is enabled
2017-02-12 16:25:00,188 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(923)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-02-12 16:25:00,189 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map NameNodeRetryCache
2017-02-12 16:25:00,190 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-02-12 16:25:00,190 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2017-02-12 16:25:00,190 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^15 = 32768 entries
2017-02-12 16:25:00,211 INFO common.Storage (Storage.java:tryLock(774)) - Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 10822@node0
2017-02-12 16:25:00,314 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(388)) - Recovering unfinalized segments in /hadoop/hdfs/namenode/current
2017-02-12 16:25:00,366 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/namenode/current/edits_inprogress_0000000000000000010 -> /hadoop/hdfs/namenode/current/edits_0000000000000000010-0000000000000000010
2017-02-12 16:25:00,374 INFO namenode.FSImage (FSImage.java:loadFSImageFile(736)) - Planning to load image: FSImageFile(file=/hadoop/hdfs/namenode/current/fsimage_0000000000000000009, cpktTxId=0000000000000000009)
2017-02-12 16:25:00,433 INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(255)) - Loading 4 INodes.
2017-02-12 16:25:00,481 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(184)) - Loaded FSImage in 0 seconds.
2017-02-12 16:25:00,486 INFO namenode.FSImage (FSImage.java:loadFSImage(976)) - Loaded image for txid 9 from /hadoop/hdfs/namenode/current/fsimage_0000000000000000009
2017-02-12 16:25:00,486 INFO namenode.FSImage (FSImage.java:loadEdits(840)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@7651218e expecting start txid #10
2017-02-12 16:25:00,487 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000010-0000000000000000010
2017-02-12 16:25:00,488 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000010-0000000000000000010' to transaction ID 10
2017-02-12 16:25:00,491 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000010-0000000000000000010 of size 1048576 edits # 1 loaded in 0 seconds
2017-02-12 16:25:00,498 INFO namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1022)) - Need to save fs image? true (staleImage=true, haEnabled=false, isRollingUpgrade=false)
2017-02-12 16:25:00,498 INFO namenode.FSImage (FSImage.java:saveNamespace(1096)) - Save namespace ...
2017-02-12 16:25:00,503 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(413)) - Saving image file /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000010 using no compression
2017-02-12 16:25:00,650 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(416)) - Image file /hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000010 of size 550 bytes saved in 0 seconds.
2017-02-12 16:25:00,682 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 2 images with txid >= 9
2017-02-12 16:25:00,682 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:purgeImage(225)) - Purging old image FSImageFile(file=/hadoop/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2017-02-12 16:25:00,706 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1237)) - Starting log segment at 11
2017-02-12 16:25:00,894 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2017-02-12 16:25:00,894 INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(697)) - Finished loading FSImage in 697 msecs
2017-02-12 16:25:01,248 INFO namenode.NameNode (NameNodeRpcServer.java:<init>(422)) - RPC server is binding to node0:8020
2017-02-12 16:25:01,262 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-02-12 16:25:01,283 INFO ipc.Server (Server.java:run(811)) - Starting Socket Reader #1 for port 8020
2017-02-12 16:25:01,327 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(6289)) - Registered FSNamesystemState MBean
2017-02-12 16:25:01,329 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-02-12 16:25:01,344 INFO namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(138)) - Number of blocks under construction: 0
2017-02-12 16:25:01,345 INFO namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(1212)) - initializing replication queues
2017-02-12 16:25:01,348 INFO hdfs.StateChange (FSNamesystem.java:leave(5441)) - STATE* Leaving safe mode after 1 secs
2017-02-12 16:25:01,348 INFO hdfs.StateChange (FSNamesystem.java:leave(5453)) - STATE* Network topology has 0 racks and 0 datanodes
2017-02-12 16:25:01,348 INFO hdfs.StateChange (FSNamesystem.java:leave(5456)) - STATE* UnderReplicatedBlocks has 0 blocks
2017-02-12 16:25:01,354 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(451)) - Number of failed storage changes from 0 to 0
2017-02-12 16:25:01,364 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(2915)) - Total number of blocks = 0
2017-02-12 16:25:01,364 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(2916)) - Number of invalid blocks = 0
2017-02-12 16:25:01,364 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(2917)) - Number of under-replicated blocks = 0
2017-02-12 16:25:01,364 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(2918)) - Number of over-replicated blocks = 0
2017-02-12 16:25:01,364 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(2920)) - Number of blocks being written = 0
2017-02-12 16:25:01,364 INFO hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(2921)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 16 msec
2017-02-12 16:25:01,367 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:01,416 INFO ipc.Server (Server.java:run(1045)) - IPC Server Responder: starting
2017-02-12 16:25:01,417 INFO ipc.Server (Server.java:run(881)) - IPC Server listener on 8020: starting
2017-02-12 16:25:01,429 INFO namenode.NameNode (NameNode.java:startCommonServices(876)) - NameNode RPC up at: node0/127.0.1.1:8020
2017-02-12 16:25:01,430 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1130)) - Starting services required for active state
2017-02-12 16:25:01,436 INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(161)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-02-12 16:25:03,040 INFO ipc.Server (Server.java:logException(2401)) - IPC Server handler 0 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.sendHeartbeat from 127.0.0.1:42972 Call#687 Retry#0
org.apache.hadoop.ipc.RetriableException: NameNode still not started
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2057)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendHeartbeat(NameNodeRpcServer.java:1414)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.sendHeartbeat(DatanodeProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29064)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
2017-02-12 16:25:03,080 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(224)) - The configured checkpoint interval is 0 minutes. Using an interval of 360 minutes that is used for deletion instead
2017-02-12 16:25:04,204 INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(915)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=5fd8a6b1-e4c9-420d-94a0-e80f3ff21e01, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-032ec285-ae65-4c80-9f81-dd0ad44b24af;nsid=1539198905;c=0) storage 5fd8a6b1-e4c9-420d-94a0-e80f3ff21e01
2017-02-12 16:25:04,211 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(451)) - Number of failed storage changes from 0 to 0
2017-02-12 16:25:04,238 INFO net.NetworkTopology (NetworkTopology.java:add(426)) - Adding a new node: /default-rack/127.0.0.1:50010
2017-02-12 16:25:04,238 INFO blockmanagement.BlockReportLeaseManager (BlockReportLeaseManager.java:registerNode(205)) - Registered DN 5fd8a6b1-e4c9-420d-94a0-e80f3ff21e01 (127.0.0.1:50010).
2017-02-12 16:25:04,265 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(451)) - Number of failed storage changes from 0 to 0
2017-02-12 16:25:04,265 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(868)) - Adding new storage ID DS-d7dfb072-6922-4c87-8b11-4cfb567d1591 for DN 127.0.0.1:50010
2017-02-12 16:25:04,368 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:07,368 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:10,369 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:13,369 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:16,371 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:19,372 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:22,372 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:25,373 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:28,373 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:31,374 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:34,374 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:37,374 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:40,375 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:43,375 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:46,376 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:49,376 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:52,377 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:55,378 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:25:58,378 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:01,379 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:04,379 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:07,380 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:10,380 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:13,381 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:16,382 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:19,382 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:22,383 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:25,384 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:28,384 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:31,385 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:34,386 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:37,387 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:40,388 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:43,390 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:46,293 INFO BlockStateChange (BlockManager.java:processReport(1998)) - BLOCK* processReport: from storage DS-d7dfb072-6922-4c87-8b11-4cfb567d1591 node DatanodeRegistration(127.0.0.1:50010, datanodeUuid=5fd8a6b1-e4c9-420d-94a0-e80f3ff21e01, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-032ec285-ae65-4c80-9f81-dd0ad44b24af;nsid=1539198905;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2017-02-12 16:26:46,391 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.
2017-02-12 16:26:49,392 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1588)) - BLOCK* neededReplications = 0, pendingReplications = 0.