- Master status for slave2,60020,1397552649456 as of Thu Jul 10 16:28:39 CST 2014
- Version Info:
- ===========================================================
- HBase 0.96.2-hadoop2
- Subversion git://ruralhunter-ubt/home/ruralhunter/dev/hbase -r a1609de241144811f1a552381e11051f8c53d000
- Compiled by ruralhunter on Fri Apr 4 09:56:23 CST 2014
- Hadoop 2.2.0
- Subversion Unknown -r Unknown
- Compiled by ruralhunter on 2013-12-13T02:53Z
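A dump like this is produced by the region server's debug-dump servlet (RSDumpServlet appears in the stack traces further down). A minimal sketch for fetching it over HTTP; the host and port here are assumptions for illustration (60030 was the 0.96-era default region server info port, configurable via `hbase.regionserver.info.port`):

```python
# Hypothetical helper for retrieving a region server debug dump.
# The /dump path is served by RSDumpServlet on the info port; the
# default host/port below are assumptions -- adjust for your cluster.
from urllib.request import urlopen

def dump_url(host="slave2", info_port=60030):
    """Build the URL of the region server debug-dump servlet."""
    return f"http://{host}:{info_port}/dump"

def fetch_rs_dump(host="slave2", info_port=60030, timeout=10):
    """Fetch the raw dump text and return it as a string."""
    with urlopen(dump_url(host, info_port), timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")
```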
- Tasks:
- ===========================================================
- Task: RpcServer.handler=0,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=1,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=2,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=3,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=4,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=5,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=6,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=7,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=8,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=9,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=10,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=11,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=12,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=13,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=14,port=60020
- Status: RUNNING:Servicing call from 123.183.217.195:53157: Get
- Running for 7428259s
- Task: RpcServer.handler=15,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=16,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=17,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=18,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=19,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=20,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=21,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=22,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=23,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=24,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=25,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=26,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=27,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=28,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: RpcServer.handler=29,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=0,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=1,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=2,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=3,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=4,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=5,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=6,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=7,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=8,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Priority.RpcServer.handler=9,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Replication.RpcServer.handler=0,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Replication.RpcServer.handler=1,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
- Task: Replication.RpcServer.handler=2,port=60020
- Status: WAITING:Waiting for a call
- Running for 7428259s
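The 30 default handlers, 10 priority handlers, and 3 replication handlers are each listed individually above, and all but one are idle. A minimal sketch (assuming the `Status: STATE:description` line layout shown) that tallies them by state, which makes the single RUNNING handler easy to spot:

```python
import re
from collections import Counter

def task_state_counts(dump_text):
    """Tally tasks in the dump's Tasks section by handler state.

    Matches lines like 'Status: WAITING:Waiting for a call' or
    'Status: RUNNING:Servicing call from ...'. The 'Status for
    executor:' and 'State:' lines elsewhere in the dump do not match.
    """
    return Counter(re.findall(r"Status: ([A-Z_]+):", dump_text))
```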
- Executors:
- ===========================================================
- Status for executor: Executor-3-RS_CLOSE_REGION-slave2:60020
- =======================================
- 0 events queued, 0 running
- Status for executor: Executor-2-RS_OPEN_META-slave2:60020
- =======================================
- 0 events queued, 0 running
- Status for executor: Executor-5-RS_LOG_REPLAY_OPS-slave2:60020
- =======================================
- 0 events queued, 0 running
- Status for executor: Executor-4-RS_CLOSE_META-slave2:60020
- =======================================
- 0 events queued, 0 running
- Status for executor: Executor-1-RS_OPEN_REGION-slave2:60020
- =======================================
- 0 events queued, 0 running
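The Executors section above pairs each executor name with a queued/running count. A sketch that collects those pairs into a dict, assuming the exact line layout shown (it also tolerates the leading `- ` bullets present in this paste):

```python
import re

def executor_backlog(dump_text):
    """Map each executor name to its (queued, running) counts.

    Parses 'Status for executor: NAME' followed later by
    'N events queued, M running'; leading '- ' bullets are stripped.
    """
    result = {}
    name = None
    for raw in dump_text.splitlines():
        line = raw.lstrip("- ").strip()
        if line.startswith("Status for executor: "):
            name = line[len("Status for executor: "):]
        elif name is not None:
            m = re.match(r"(\d+) events queued, (\d+) running", line)
            if m:
                result[name] = (int(m.group(1)), int(m.group(2)))
                name = None
    return result
```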
- Stacks:
- ===========================================================
- Process Thread Dump:
- 101 active threads
- Thread 428291 (524695464@qtp-1922766448-9):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1
- Stack:
- java.lang.Object.wait(Native Method)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- Thread 428290 (501028910@qtp-1922766448-8):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1
- Stack:
- java.lang.Object.wait(Native Method)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- Thread 428289 (610366995@qtp-1922766448-7):
- State: RUNNABLE
- Blocked count: 378
- Waited count: 378
- Stack:
- sun.management.ThreadImpl.getThreadInfo0(Native Method)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:165)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:141)
- org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:165)
- org.apache.hadoop.hbase.regionserver.RSDumpServlet.doGet(RSDumpServlet.java:81)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
- org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
- org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
- org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1081)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
- org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
- org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
- org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
- org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
- org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
- org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
- Thread 428237 (ResponseProcessor for block BP-1698879265-127.0.1.1-1386142874396:blk_1074217274_476453):
- State: RUNNABLE
- Blocked count: 3
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
- org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
- java.io.FilterInputStream.read(FilterInputStream.java:83)
- java.io.FilterInputStream.read(FilterInputStream.java:83)
- org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1490)
- org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
- org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
- Thread 428236 (DataStreamer for file /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404979553888 block BP-1698879265-127.0.1.1-1386142874396:blk_1074217274_476453):
- State: TIMED_WAITING
- Blocked count: 80521
- Waited count: 80520
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:491)
- Thread 11736 (regionserver60020-largeCompactions-1397752186457):
- State: WAITING
- Blocked count: 9123
- Waited count: 8140
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@298a3176
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:248)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 331 (ReplicationExecutor-0):
- State: WAITING
- Blocked count: 1
- Waited count: 3
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@28681870
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 286 (RS_CLOSE_REGION-slave2:60020-2):
- State: WAITING
- Blocked count: 207
- Waited count: 341
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12790522
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 285 (RS_CLOSE_REGION-slave2:60020-1):
- State: WAITING
- Blocked count: 282
- Waited count: 478
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12790522
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 283 (RS_CLOSE_REGION-slave2:60020-0):
- State: WAITING
- Blocked count: 280
- Waited count: 461
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12790522
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 281 (1268931815@qtp-1922766448-5):
- State: TIMED_WAITING
- Blocked count: 135
- Waited count: 123934
- Stack:
- java.lang.Object.wait(Native Method)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- Thread 172 (IPC Parameter Sending Thread #1):
- State: TIMED_WAITING
- Blocked count: 1313
- Waited count: 1797065
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:453)
- java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:352)
- java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:903)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 119 (regionserver60020-smallCompactions-1397552664633):
- State: WAITING
- Blocked count: 909191
- Waited count: 1593306
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1d7daa45
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:248)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 109 (RS_OPEN_REGION-slave2:60020-2):
- State: WAITING
- Blocked count: 161
- Waited count: 203
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@656ffdea
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 108 (RS_OPEN_REGION-slave2:60020-1):
- State: WAITING
- Blocked count: 165
- Waited count: 210
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@656ffdea
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 107 (RS_OPEN_REGION-slave2:60020-0):
- State: WAITING
- Blocked count: 188
- Waited count: 244
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@656ffdea
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 106 (FileInputStreamCache Cleaner):
- State: TIMED_WAITING
- Blocked count: 83
- Waited count: 107413244
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2081)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:193)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:688)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:681)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 105 (org.apache.hadoop.hdfs.PeerCache@378feca1):
- State: TIMED_WAITING
- Blocked count: 2
- Waited count: 61901
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:252)
- org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:39)
- org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:135)
- java.lang.Thread.run(Thread.java:701)
- Thread 104 (SplitLogWorker-slave2,60020,1397552649456):
- State: TIMED_WAITING
- Blocked count: 2
- Waited count: 1485491
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:265)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:209)
- java.lang.Thread.run(Thread.java:701)
- Thread 103 (Replication.RpcServer.handler=2,port=60020):
- State: WAITING
- Blocked count: 0
- Waited count: 1
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1f360e8
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 102 (Replication.RpcServer.handler=1,port=60020):
- State: WAITING
- Blocked count: 0
- Waited count: 1
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1f360e8
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 101 (Replication.RpcServer.handler=0,port=60020):
- State: WAITING
- Blocked count: 0
- Waited count: 1
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1f360e8
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 100 (Priority.RpcServer.handler=9,port=60020):
- State: WAITING
- Blocked count: 1
- Waited count: 6
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 99 (Priority.RpcServer.handler=8,port=60020):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 98 (Priority.RpcServer.handler=7,port=60020):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 97 (Priority.RpcServer.handler=6,port=60020):
- State: WAITING
- Blocked count: 4
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 96 (Priority.RpcServer.handler=5,port=60020):
- State: WAITING
- Blocked count: 4
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 95 (Priority.RpcServer.handler=4,port=60020):
- State: WAITING
- Blocked count: 5
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 94 (Priority.RpcServer.handler=3,port=60020):
- State: WAITING
- Blocked count: 10
- Waited count: 19
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 93 (Priority.RpcServer.handler=2,port=60020):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 92 (Priority.RpcServer.handler=1,port=60020):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 91 (Priority.RpcServer.handler=0,port=60020):
- State: WAITING
- Blocked count: 3
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4f65d04f
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 90 (RpcServer.handler=29,port=60020):
- State: WAITING
- Blocked count: 84271043
- Waited count: 273126718
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 89 (RpcServer.handler=28,port=60020):
- State: RUNNABLE
- Blocked count: 84236070
- Waited count: 273097329
- Stack:
- org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.blockSeek(HFileReaderV2.java:872)
- org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:765)
- org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:481)
- org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:522)
- org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:245)
- org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:166)
- org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:361)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:336)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:293)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:258)
- org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:632)
- org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:495)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:129)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3647)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3727)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3592)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3574)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3557)
- org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4541)
- org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4515)
- Thread 88 (RpcServer.handler=27,port=60020):
- State: WAITING
- Blocked count: 84236717
- Waited count: 273099074
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 87 (RpcServer.handler=26,port=60020):
- State: WAITING
- Blocked count: 84199789
- Waited count: 273069353
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 86 (RpcServer.handler=25,port=60020):
- State: WAITING
- Blocked count: 84288895
- Waited count: 273150255
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 85 (RpcServer.handler=24,port=60020):
- State: WAITING
- Blocked count: 84283314
- Waited count: 273142245
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 84 (RpcServer.handler=23,port=60020):
- State: WAITING
- Blocked count: 84251640
- Waited count: 273117292
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 83 (RpcServer.handler=22,port=60020):
- State: WAITING
- Blocked count: 84274996
- Waited count: 273140156
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 82 (RpcServer.handler=21,port=60020):
- State: WAITING
- Blocked count: 84253561
- Waited count: 273121889
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 81 (RpcServer.handler=20,port=60020):
- State: WAITING
- Blocked count: 84268315
- Waited count: 273134493
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 80 (RpcServer.handler=19,port=60020):
- State: WAITING
- Blocked count: 84242346
- Waited count: 273110972
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 79 (RpcServer.handler=18,port=60020):
- State: WAITING
- Blocked count: 84278948
- Waited count: 273146128
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 78 (RpcServer.handler=17,port=60020):
- State: WAITING
- Blocked count: 84252249
- Waited count: 273123444
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 77 (RpcServer.handler=16,port=60020):
- State: WAITING
- Blocked count: 84251496
- Waited count: 273118194
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 76 (RpcServer.handler=15,port=60020):
- State: WAITING
- Blocked count: 84242909
- Waited count: 273105339
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 75 (RpcServer.handler=14,port=60020):
- State: WAITING
- Blocked count: 84249449
- Waited count: 273115795
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 74 (RpcServer.handler=13,port=60020):
- State: WAITING
- Blocked count: 84280051
- Waited count: 273150479
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 73 (RpcServer.handler=12,port=60020):
- State: WAITING
- Blocked count: 84267742
- Waited count: 273132588
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 72 (RpcServer.handler=11,port=60020):
- State: WAITING
- Blocked count: 84285131
- Waited count: 273151637
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 71 (RpcServer.handler=10,port=60020):
- State: WAITING
- Blocked count: 84287850
- Waited count: 273151757
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 70 (RpcServer.handler=9,port=60020):
- State: WAITING
- Blocked count: 84247807
- Waited count: 273114644
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 69 (RpcServer.handler=8,port=60020):
- State: WAITING
- Blocked count: 84286830
- Waited count: 273148550
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 68 (RpcServer.handler=7,port=60020):
- State: WAITING
- Blocked count: 84281548
- Waited count: 273147117
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 67 (RpcServer.handler=6,port=60020):
- State: WAITING
- Blocked count: 84247421
- Waited count: 273112739
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 66 (RpcServer.handler=5,port=60020):
- State: WAITING
- Blocked count: 84227433
- Waited count: 273100732
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 65 (RpcServer.handler=4,port=60020):
- State: WAITING
- Blocked count: 84215546
- Waited count: 273084005
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 64 (RpcServer.handler=3,port=60020):
- State: WAITING
- Blocked count: 84195448
- Waited count: 273064032
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 63 (RpcServer.handler=2,port=60020):
- State: WAITING
- Blocked count: 84205929
- Waited count: 273071865
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 62 (RpcServer.handler=1,port=60020):
- State: WAITING
- Blocked count: 84219629
- Waited count: 273085824
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 61 (RpcServer.handler=0,port=60020):
- State: WAITING
- Blocked count: 84276907
- Waited count: 273141187
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@b4f554c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1858)
- Thread 12 (RpcServer.listener,port=60020):
- State: BLOCKED
- Blocked count: 123264191
- Waited count: 0
- Blocked on org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader@77f87716
- Blocked by 14 (RpcServer.reader=1,port=60020)
- Stack:
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.registerChannel(RpcServer.java:598)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener.doAccept(RpcServer.java:755)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener.run(RpcServer.java:673)
- Thread 24 (RpcServer.responder):
- State: RUNNABLE
- Blocked count: 18731
- Waited count: 14442
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.apache.hadoop.hbase.ipc.RpcServer$Responder.doRunLoop(RpcServer.java:857)
- org.apache.hadoop.hbase.ipc.RpcServer$Responder.run(RpcServer.java:840)
- Thread 60 (slave2:60020Replication Statistics #0):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 24847
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2081)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:193)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:688)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:681)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 58 (regionserver60020-EventThread):
- State: WAITING
- Blocked count: 0
- Waited count: 3
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6c6dcb8c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
- Thread 57 (regionserver60020-SendThread(slave2:2181)):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
- Thread 42 (regionserver60020.leaseChecker):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 742775
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:95)
- java.lang.Thread.run(Thread.java:701)
- Thread 41 (regionserver60020.periodicFlusher):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 742848
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
- org.apache.hadoop.hbase.Chore.run(Chore.java:88)
- java.lang.Thread.run(Thread.java:701)
- Thread 40 (regionserver60020.compactionChecker):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 743270
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
- org.apache.hadoop.hbase.Chore.run(Chore.java:88)
- java.lang.Thread.run(Thread.java:701)
- Thread 54 (MemStoreFlusher.0):
- State: TIMED_WAITING
- Blocked count: 1057882
- Waited count: 2605436
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2081)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:230)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:68)
- org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:233)
- java.lang.Thread.run(Thread.java:701)
- Thread 48 (regionserver60020.logRoller):
- State: TIMED_WAITING
- Blocked count: 37990
- Waited count: 800163
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:78)
- java.lang.Thread.run(Thread.java:701)
- Thread 53 (HBase-Metrics2-1):
- State: TIMED_WAITING
- Blocked count: 506
- Waited count: 2141254
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2081)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:193)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:688)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:681)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 49 (regionserver60020.logSyncer):
- State: TIMED_WAITING
- Blocked count: 1247358
- Waited count: 8645724
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.regionserver.wal.FSHLog$LogSyncer.run(FSHLog.java:985)
- java.lang.Thread.run(Thread.java:701)
- Thread 51 (LeaseRenewer:hadoop@master:9000):
- State: TIMED_WAITING
- Blocked count: 246624
- Waited count: 7916334
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:438)
- org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
- org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
- java.lang.Thread.run(Thread.java:701)
- Thread 45 (IPC Client (1506367663) connection to master/192.168.2.86:60000 from hadoop):
- State: TIMED_WAITING
- Blocked count: 2476819
- Waited count: 2472881
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:675)
- org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:721)
- Thread 44 (JvmPauseMonitor):
- State: TIMED_WAITING
- Blocked count: 10
- Waited count: 14839303
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:159)
- java.lang.Thread.run(Thread.java:701)
- Thread 39 (regionserver60020-EventThread):
- State: WAITING
- Blocked count: 0
- Waited count: 3
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@79d25421
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
- Thread 38 (regionserver60020-SendThread(slave3:2181)):
- State: RUNNABLE
- Blocked count: 1
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
- Thread 34 (regionserver60020-EventThread):
- State: WAITING
- Blocked count: 17
- Waited count: 25
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@562a5d9e
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
- Thread 33 (regionserver60020-SendThread(slave3:2181)):
- State: RUNNABLE
- Blocked count: 20
- Waited count: 17
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
- Thread 32 (regionserver60020):
- State: TIMED_WAITING
- Blocked count: 2472967
- Waited count: 4946173
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:92)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:56)
- org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:851)
- java.lang.Thread.run(Thread.java:701)
- Thread 31 (Timer-0):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 247605
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:531)
- java.util.TimerThread.run(Timer.java:484)
- Thread 30 (1621366179@qtp-1922766448-1 - Acceptor0 SelectChannelConnector@0.0.0.0:60030):
- State: RUNNABLE
- Blocked count: 12
- Waited count: 4
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
- org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
- org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
- org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
- Thread 27 (LruStats #0):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 24847
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2081)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:193)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:688)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:681)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1069)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1131)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 25 (main.LruBlockCache.EvictionThread):
- State: WAITING
- Blocked count: 1625925
- Waited count: 1625875
- Waiting on org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread@4ff66cbc
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:502)
- org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:666)
- java.lang.Thread.run(Thread.java:701)
- Thread 23 (Timer for 'HBase' metrics system):
- State: TIMED_WAITING
- Blocked count: 1
- Waited count: 742829
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:531)
- java.util.TimerThread.run(Timer.java:484)
- Thread 22 (RpcServer.reader=9,port=60020):
- State: RUNNABLE
- Blocked count: 12512800
- Waited count: 12830539
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 21 (RpcServer.reader=8,port=60020):
- State: RUNNABLE
- Blocked count: 12517530
- Waited count: 12857056
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 20 (RpcServer.reader=7,port=60020):
- State: RUNNABLE
- Blocked count: 12510673
- Waited count: 12904831
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 19 (RpcServer.reader=6,port=60020):
- State: RUNNABLE
- Blocked count: 12507371
- Waited count: 12851928
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 18 (RpcServer.reader=5,port=60020):
- State: RUNNABLE
- Blocked count: 12502241
- Waited count: 12898142
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 17 (RpcServer.reader=4,port=60020):
- State: RUNNABLE
- Blocked count: 12496015
- Waited count: 12906015
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 16 (RpcServer.reader=3,port=60020):
- State: RUNNABLE
- Blocked count: 12500159
- Waited count: 12832412
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 15 (RpcServer.reader=2,port=60020):
- State: RUNNABLE
- Blocked count: 12510789
- Waited count: 12920782
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 14 (RpcServer.reader=1,port=60020):
- State: RUNNABLE
- Blocked count: 12510492
- Waited count: 12826560
- Stack:
- sun.nio.ch.FileDispatcher.read0(Native Method)
- sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- sun.nio.ch.IOUtil.read(IOUtil.java:224)
- sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- org.apache.hadoop.hbase.ipc.RpcServer.channelIO(RpcServer.java:2438)
- org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1498)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 13 (RpcServer.reader=0,port=60020):
- State: RUNNABLE
- Blocked count: 12503741
- Waited count: 12841613
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:83)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:557)
- org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- java.lang.Thread.run(Thread.java:701)
- Thread 5 (Signal Dispatcher):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- Thread 3 (Finalizer):
- State: WAITING
- Blocked count: 1383051
- Waited count: 317356
- Waiting on java.lang.ref.ReferenceQueue$Lock@64e265d0
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:133)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:149)
- java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
- Thread 2 (Reference Handler):
- State: WAITING
- Blocked count: 13402069
- Waited count: 13265130
- Waiting on java.lang.ref.Reference$Lock@4b8a6e6e
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:502)
- java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- Thread 1 (main):
- State: WAITING
- Blocked count: 44
- Waited count: 3
- Waiting on java.lang.Thread@2f9e8a44
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Thread.join(Thread.java:1225)
- java.lang.Thread.join(Thread.java:1278)
- org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
- org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
- org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
- org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
- org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2340)
- RS Configuration:
- ===========================================================
- <?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
- <property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value><source>hdfs-default.xml</source></property>
- <property><name>io.storefile.bloom.block.size</name><value>131072</value><source>hbase-default.xml</source></property>
- <property><name>yarn.ipc.rpc.class</name><value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.job.maxtaskfailures.per.tracker</name><value>3</value><source>mapred-default.xml</source></property>
- <property><name>hbase.rest.threads.min</name><value>2</value><source>hbase-default.xml</source></property>
- <property><name>ha.health-monitor.connect-retry-interval.ms</name><value>1000</value><source>core-default.xml</source></property>
- <property><name>hbase.rs.cacheblocksonwrite</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.markreset.buffer.percent</name><value>0.0</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.data.dir</name><value>file://${hadoop.tmp.dir}/dfs/data</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobhistory.max-age-ms</name><value>604800000</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.ubertask.enable</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.delegation.token.renew-interval</name><value>86400000</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.log-aggregation.compression-type</name><value>none</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.replication.considerLoad</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.complete.cancel.delegation.tokens</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.datestring.cache.size</name><value>200000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.status.multicast.address.ip</name><value>226.1.1.3</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.retrycache.heap.percent</name><value>0.03f</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.scheduler.address</name><value>${yarn.resourcemanager.hostname}:8030</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.logging.level</name><value>info</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.file-block-storage-locations.num-threads</name><value>10</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.datanode.balance.bandwidthPerSec</name><value>1048576</value><source>hdfs-default.xml</source></property>
- <property><name>io.mapfile.bloom.error.rate</name><value>0.005</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
- <property><name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name><value>${dfs.web.authentication.kerberos.principal}</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.delete.debug-delay-sec</name><value>0</value><source>yarn-default.xml</source></property>
- <property><name>yarn.scheduler.maximum-allocation-vcores</name><value>32</value><source>yarn-default.xml</source></property>
- <property><name>dfs.image.transfer.bandwidthPerSec</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.zookeeper.quorum</name><value>master,slave1,slave2,slave3</value><source>hbase-site.xml</source></property>
- <property><name>hfile.block.bloom.cacheonwrite</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.hdfs-servers</name><value>${fs.defaultFS}</value><source>yarn-default.xml</source></property>
- <property><name>hbase.zookeeper.property.syncLimit</name><value>5</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.fs-limits.min-block-size</name><value>1048576</value><source>hdfs-default.xml</source></property>
- <property><name>ftp.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.directoryscan.threads</name><value>1</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.edits.noeditlogchannelflush</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>s3native.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
- <property><name>hbase.rest.filter.classes</name><value>org.apache.hadoop.hbase.rest.filter.GzipFilter</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.tasks.sleeptimebeforesigkill</name><value>5000</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.http.authentication.type</name><value>simple</value><source>core-default.xml</source></property>
- <property><name>mapreduce.local.clientfactory.class.name</name><value>org.apache.hadoop.mapred.LocalClientFactory</value><source>mapred-default.xml</source></property>
- <property><name>ipc.client.connection.maxidletime</name><value>10000</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.safemode.threshold-pct</name><value>0.999f</value><source>hdfs-default.xml</source></property>
- <property><name>hfile.block.cache.size</name><value>0.4</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.num.checkpoints.retained</name><value>2</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.hregion.memstore.mslab.enabled</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.ubertask.maxmaps</name><value>9</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.stale.datanode.interval</name><value>30000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.tasktracker.http.address</name><value>0.0.0.0:50060</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.ifile.readahead.bytes</name><value>4194304</value><source>mapred-default.xml</source></property>
- <property><name>s3.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
- <property><name>hbase.master.port</name><value>60000</value><source>hbase-default.xml</source></property>
- <property><name>dfs.block.access.token.lifetime</name><value>600</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.resource.cpu-vcores</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>hbase.regionserver.checksum.verify</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.num.extra.edits.retained</name><value>1000000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.input.buffer.percent</name><value>0.70</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.http.staticuser.user</name><value>dr.who</value><source>core-default.xml</source></property>
- <property><name>mapreduce.reduce.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.security.group.mapping.ldap.search.filter.user</name><value>(&(objectClass=user)(sAMAccountName={0}))</value><source>core-default.xml</source></property>
- <property><name>mapreduce.map.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.cleaner.interval-ms</name><value>86400000</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.drop.cache.behind.reads</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.permissions.superusergroup</name><value>supergroup</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.server.versionfile.writeattempts</name><value>3</value><source>hbase-default.xml</source></property>
- <property><name>yarn.application.classpath</name><value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value><source>yarn-default.xml</source></property>
- <property><name>hbase.zookeeper.useMulti</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>fs.s3n.block.size</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>hbase.zookeeper.leaderport</name><value>3888</value><source>hbase-default.xml</source></property>
- <property><name>hbase.master.info.port</name><value>60010</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.fs-limits.max-blocks-per-file</name><value>1048576</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.vmem-check-enabled</name><value>true</value><source>yarn-default.xml</source></property>
- <property><name>hadoop.security.authentication</name><value>simple</value><source>core-default.xml</source></property>
- <property><name>mapreduce.reduce.cpu.vcores</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>net.topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value><source>core-default.xml</source></property>
- <property><name>fs.s3.sleepTimeSeconds</name><value>10</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
- <property><name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.jobtracker.heartbeats.in.second</name><value>100</value><source>mapred-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms</name><value>1000</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name><value>/hadoop-yarn</value><source>yarn-default.xml</source></property>
- <property><name>s3.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.ssl.require.client.cert</name><value>false</value><source>core-default.xml</source></property>
- <property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value><source>hdfs-default.xml</source></property>
- <property><name>fs.default.name</name><value>hdfs://master:9000/hbase</value><source>programatically</source></property>
- <property><name>dfs.ha.automatic-failover.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.cluster.distributed</name><value>true</value><source>hbase-site.xml</source></property>
- <property><name>hbase.rootdir</name><value>hdfs://master:9000/hbase</value><source>programatically</source></property>
- <property><name>s3native.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.invalidate.work.pct.per.iteration</name><value>0.32f</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.block.write.replace-datanode-on-failure.policy</name><value>DEFAULT</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.client.submit.file.replication</name><value>10</value><source>mapred-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.job.committer.commit-window</name><value>10000</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name><value>250</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nodemanager.env-whitelist</name><value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.secondary.http-address</name><value>0.0.0.0:50090</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.map.speculative</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.speculative.slowtaskthreshold</name><value>1.0</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.task.tmp.dir</name><value>./tmp</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.linux-container-executor.cgroups.mount</name><value>false</value><source>yarn-default.xml</source></property>
- <property><name>hbase.regionserver.msginterval</name><value>3000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.http.threads</name><value>40</value><source>mapred-default.xml</source></property>
- <property><name>hbase.auth.token.max.lifetime</name><value>604800000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobhistory.http.policy</name><value>HTTP_ONLY</value><source>mapred-default.xml</source></property>
- <property><name>hbase.ipc.client.fallback-to-simple-auth-allowed</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>hbase.rest.threads.max</name><value>100</value><source>hbase-default.xml</source></property>
- <property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value><source>core-default.xml</source></property>
- <property><name>hbase.snapshot.enabled</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>hbase.dynamic.jars.dir</name><value>${hbase.rootdir}/lib</value><source>hbase-default.xml</source></property>
- <property><name>hbase.defaults.for.version</name><value>0.96.2-hadoop2</value><source>hbase-default.xml</source></property>
- <property><name>io.native.lib.available</name><value>true</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobhistory.done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done</value><source>mapred-default.xml</source></property>
- <property><name>hbase.regions.slop</name><value>0.2</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.avoid.write.stale.datanode</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.txns</name><value>1000000</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.ssl.hostname.verifier</name><value>DEFAULT</value><source>core-default.xml</source></property>
- <property><name>mapreduce.task.timeout</name><value>600000</value><source>mapred-default.xml</source></property>
- <property><name>zookeeper.znode.rootserver</name><value>root-region-server</value><source>hbase-default.xml</source></property>
- <property><name>hbase.client.max.perserver.tasks</name><value>5</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.disk-health-checker.interval-ms</name><value>120000</value><source>yarn-default.xml</source></property>
- <property><name>hadoop.security.groups.cache.secs</name><value>300</value><source>core-default.xml</source></property>
- <property><name>mapreduce.input.fileinputformat.split.minsize</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.sync.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>zookeeper.session.timeout</name><value>90000</value><source>hbase-default.xml</source></property>
- <property><name>ipc.server.tcpnodelay</name><value>false</value><source>core-default.xml</source></property>
- <property><name>mapreduce.shuffle.port</name><value>13562</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.rpc.protection</name><value>authentication</value><source>core-default.xml</source></property>
- <property><name>dfs.client.https.keystore.resource</name><value>ssl-client.xml</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.retiredjobs.cache.size</name><value>1000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.balancer.period</name><value>300000</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.resourcemanager.connect.retry_interval.secs</name><value>30</value><source>yarn-default.xml</source></property>
- <property><name>ipc.client.tcpnodelay</name><value>false</value><source>core-default.xml</source></property>
- <property><name>fs.s3.maxRetries</name><value>4</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.drop.cache.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.ha.tail-edits.period</name><value>60</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.address</name><value>local</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.http.authentication.kerberos.principal</name><value>HTTP/_HOST@LOCALHOST</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.webapp.address</name><value>${yarn.resourcemanager.hostname}:8088</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.task.profile.reduces</name><value>0-2</value><source>mapred-default.xml</source></property>
- <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value><source>yarn-default.xml</source></property>
- <property><name>hbase.hstore.blockingWaitTime</name><value>90000</value><source>hbase-default.xml</source></property>
- <property><name>hbase.client.pause</name><value>100</value><source>hbase-default.xml</source></property>
- <property><name>hbase.client.write.buffer</name><value>2097152</value><source>hbase-default.xml</source></property>
- <property><name>dfs.bytes-per-checksum</name><value>512</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.end-notification.max.retry.interval</name><value>5000</value><source>mapred-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx1024m</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.process-kill-wait.ms</name><value>2000</value><source>yarn-default.xml</source></property>
- <property><name>hbase.rpc.timeout</name><value>60000</value><source>hbase-default.xml</source></property>
- <property><name>hbase.metrics.exposeOperationTimes</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.safemode.min.datanodes</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.thrift.maxWorkerThreads</name><value>1000</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.write.stale.datanode.ratio</name><value>0.5f</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.jetty.logs.serve.aliases</name><value>true</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.global.memstore.upperLimit</name><value>0.4</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.dns.nameserver</name><value>default</value><source>mapred-default.xml</source></property>
- <property><name>hbase.master.catalog.timeout</name><value>600000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.task.merge.progress.records</name><value>10000</value><source>mapred-default.xml</source></property>
- <property><name>dfs.webhdfs.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.ssl.client.conf</name><value>ssl-client.xml</value><source>core-default.xml</source></property>
- <property><name>mapreduce.job.counters.max</name><value>120</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.localizer.fetch.thread-count</name><value>4</value><source>yarn-default.xml</source></property>
- <property><name>io.mapfile.bloom.size</name><value>1048576</value><source>core-default.xml</source></property>
- <property><name>yarn.nodemanager.localizer.client.thread-count</name><value>5</value><source>yarn-default.xml</source></property>
- <property><name>fs.automatic.close</name><value>true</value><source>core-default.xml</source></property>
- <property><name>mapreduce.task.profile</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.shuffle.ssl.file.buffer.size</name><value>65536</value><source>mapred-default.xml</source></property>
- <property><name>hbase.hstore.bytes.per.checksum</name><value>16384</value><source>hbase-default.xml</source></property>
- <property><name>yarn.ipc.serializer.type</name><value>protocolbuffers</value><source>yarn-default.xml</source></property>
- <property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name><value>0.75f</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.backup.address</name><value>0.0.0.0:50100</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.https.need-auth</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
- <property><name>dfs.client.write.exclude.nodes.cache.expiry.interval.millis</name><value>600000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.restart.recover</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.map.skip.proc.count.autoincr</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.security.instrumentation.requires.admin</name><value>false</value><source>core-default.xml</source></property>
- <property><name>io.compression.codec.bzip2.library</name><value>system-native</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.name.dir.restore</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.client.retries.number</name><value>350</value><source>programatically</source></property>
- <property><name>hadoop.ssl.keystores.factory.class</name><value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value><source>core-default.xml</source></property>
- <property><name>hbase.status.multicast.address.port</name><value>60100</value><source>hbase-default.xml</source></property>
- <property><name>fs.ftp.host</name><value>0.0.0.0</value><source>core-default.xml</source></property>
- <property><name>hbase.hstore.checksum.algorithm</name><value>CRC32</value><source>hbase-default.xml</source></property>
- <property><name>s3.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>s3native.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobtracker.taskscheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.dns.nameserver</name><value>default</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.resource.memory-mb</name><value>8192</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.task.userlog.limit.kb</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.reduce.speculative</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.container-monitor.interval-ms</name><value>3000</value><source>yarn-default.xml</source></property>
- <property><name>dfs.replication.max</name><value>512</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.replication</name><value>2</value><source>hbase-site.xml</source></property>
- <property><name>dfs.client.socket-timeout</name><value>10000</value><source>hbase-site.xml</source></property>
- <property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value><source>yarn-default.xml</source></property>
- <property><name>hbase.server.thread.wakefrequency</name><value>10000</value><source>hbase-default.xml</source></property>
- <property><name>hbase.lease.recovery.timeout</name><value>900000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.memory.limit.percent</name><value>0.25</value><source>mapred-default.xml</source></property>
- <property><name>file.replication</name><value>1</value><source>core-default.xml</source></property>
- <property><name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name><value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value><source>mapred-default.xml</source></property>
- <property><name>hfile.format.version</name><value>2</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.jvm.numtasks</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.am.max-attempts</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.fuse.timer.period</name><value>5</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.reduces</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>hbase.thrift.minWorkerThreads</name><value>16</value><source>hbase-default.xml</source></property>
- <property><name>hbase.zookeeper.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.job.task.listener.thread-count</name><value>30</value><source>mapred-default.xml</source></property>
- <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.job.speculative.slownodethreshold</name><value>1.0</value><source>mapred-default.xml</source></property>
- <property><name>s3native.replication</name><value>3</value><source>core-default.xml</source></property>
- <property><name>mapreduce.tasktracker.reduce.tasks.maximum</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>hbase.snapshot.restore.failsafe.name</name><value>hbase-failsafe-{snapshot.name}-{restore.timestamp}</value><source>hbase-default.xml</source></property>
- <property><name>fs.permissions.umask-mode</name><value>022</value><source>core-default.xml</source></property>
- <property><name>mapreduce.cluster.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.client.output.filter</name><value>FAILED</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.pmem-check-enabled</name><value>true</value><source>yarn-default.xml</source></property>
- <property><name>dfs.client.failover.connection.retries.on.timeouts</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.instrumentation</name><value>org.apache.hadoop.mapred.JobTrackerMetricsInst</value><source>mapred-default.xml</source></property>
- <property><name>ftp.replication</name><value>3</value><source>core-default.xml</source></property>
- <property><name>hbase.hstore.blockingStoreFiles</name><value>10</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.security.group.mapping.ldap.search.attr.member</name><value>member</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.hlog.reader.impl</name><value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.replication.work.multiplier.per.iteration</name><value>2</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.resource-tracker.address</name><value>${yarn.resourcemanager.hostname}:8031</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.tasktracker.outofband.heartbeat</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>hbase.master.info.bindAddress</name><value>0.0.0.0</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.edits.dir</name><value>${dfs.namenode.name.dir}</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.scheduler.monitor.enable</name><value>false</value><source>yarn-default.xml</source></property>
- <property><name>fs.trash.checkpoint.interval</name><value>0</value><source>core-default.xml</source></property>
- <property><name>s3.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
- <property><name>file.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
- <property><name>mapreduce.tasktracker.healthchecker.script.timeout</name><value>600000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.status.listener.class</name><value>org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.fs-limits.max-directory-items</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.tasktracker.taskcontroller</name><value>org.apache.hadoop.mapred.DefaultTaskController</value><source>mapred-default.xml</source></property>
- <property><name>nfs3.server.port</name><value>2049</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.dir</name><value>file://${hadoop.tmp.dir}/dfs/namesecondary</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/tmp/logs</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.retry-delay.max.ms</name><value>60000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.regionserver.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>io.map.index.interval</name><value>128</value><source>core-default.xml</source></property>
- <property><name>dfs.client.block.write.replace-datanode-on-failure.enable</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.replication.interval</name><value>3</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.admin.user.env</name><value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native</value><source>mapred-default.xml</source></property>
- <property><name>hbase.rest.port</name><value>8080</value><source>hbase-default.xml</source></property>
- <property><name>hbase.regionserver.handler.count</name><value>30</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.ssl.server.conf</name><value>ssl-server.xml</value><source>core-default.xml</source></property>
- <property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value><source>core-default.xml</source></property>
- <property><name>yarn.app.mapreduce.client.max-retries</name><value>3</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.address</name><value>${yarn.nodemanager.hostname}:0</value><source>yarn-default.xml</source></property>
- <property><name>dfs.datanode.max.transfer.threads</name><value>4096</value><source>hdfs-default.xml</source></property>
- <property><name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name><value>5000</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50020</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.delayed.delegation-token.removal-interval-ms</name><value>30000</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.backup.http-address</name><value>0.0.0.0:50105</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.tasktracker.report.address</name><value>127.0.0.1:0</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.period</name><value>3600</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.bulkload.retries.number</name><value>0</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.task.attempt.id</name><value>hb_rs_slave2,60020,1397552649456</value><source>because mapred.task.id is deprecated</source></property>
- <property><name>hbase.hregion.max.filesize</name><value>10737418240</value><source>hbase-default.xml</source></property>
- <property><name>hbase.master.loadbalancer.class</name><value>org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.speculative.speculativecap</name><value>0.1</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.support.allow.format</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.max-retries</name><value>3</value><source>hdfs-default.xml</source></property>
- <property><name>zookeeper.znode.acl.parent</name><value>acl</value><source>hbase-default.xml</source></property>
- <property><name>hbase.status.publisher.class</name><value>org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher</value><source>hbase-default.xml</source></property>
- <property><name>hbase.tmp.dir</name><value>${java.io.tmpdir}/hbase-${user.name}</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.committer.setup.cleanup.needed</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.end-notification.retry.attempts</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.map.output.compress</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>hbase.client.localityCheck.threadPoolSize</name><value>2</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobhistory.cleaner.enable</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>io.seqfile.local.dir</name><value>${hadoop.tmp.dir}/io/local</value><source>core-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.read.timeout</name><value>180000</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.queuename</name><value>default</value><source>mapred-default.xml</source></property>
- <property><name>ipc.client.connect.max.retries</name><value>10</value><source>core-default.xml</source></property>
- <property><name>io.seqfile.lazydecompress</name><value>true</value><source>core-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.staging-dir</name><value>/tmp/hadoop-yarn/staging</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.linux-container-executor.resources-handler.class</name><value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value><source>yarn-default.xml</source></property>
- <property><name>io.file.buffer.size</name><value>4096</value><source>core-default.xml</source></property>
- <property><name>ha.zookeeper.parent-znode</name><value>/hadoop-ha</value><source>core-default.xml</source></property>
- <property><name>mapreduce.tasktracker.indexcache.mb</name><value>10</value><source>mapred-default.xml</source></property>
- <property><name>tfile.io.chunk.size</name><value>1048576</value><source>core-default.xml</source></property>
- <property><name>yarn.acl.enable</name><value>true</value><source>yarn-default.xml</source></property>
- <property><name>rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB</name><value>org.apache.hadoop.ipc.ProtobufRpcEngine</value><source>programmatically</source></property>
- <property><name>hadoop.security.group.mapping.ldap.directory.search.timeout</name><value>10000</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.regionSplitLimit</name><value>2147483647</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.resourcemanager.connect.wait.secs</name><value>900</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.job.token.tracking.ids.enabled</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>hbase.thrift.maxQueuedRequests</name><value>1000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>mapred-default.xml</source></property>
- <property><name>s3.replication</name><value>3</value><source>core-default.xml</source></property>
- <property><name>tfile.fs.input.buffer.size</name><value>262144</value><source>core-default.xml</source></property>
- <property><name>ha.failover-controller.graceful-fence.connection.retries</name><value>1</value><source>core-default.xml</source></property>
- <property><name>net.topology.script.number.args</name><value>100</value><source>core-default.xml</source></property>
- <property><name>hfile.block.index.cacheonwrite</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.ssl.enabled</name><value>false</value><source>core-default.xml</source></property>
- <property><name>hbase.config.read.zookeeper.config</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>dfs.client.read.shortcircuit.buffer.size</name><value>131072</value><source>hbase-site.xml</source></property>
- <property><name>yarn.nodemanager.log.retain-seconds</name><value>10800</value><source>yarn-default.xml</source></property>
- <property><name>yarn.resourcemanager.admin.address</name><value>${yarn.resourcemanager.hostname}:8033</value><source>yarn-default.xml</source></property>
- <property><name>yarn.resourcemanager.recovery.enabled</name><value>false</value><source>yarn-default.xml</source></property>
- <property><name>fs.AbstractFileSystem.viewfs.impl</name><value>org.apache.hadoop.fs.viewfs.ViewFs</value><source>core-default.xml</source></property>
- <property><name>mapreduce.tasktracker.dns.interface</name><value>default</value><source>mapred-default.xml</source></property>
- <property><name>hbase.offheapcache.percentage</name><value>0</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobtracker.handler.count</name><value>10</value><source>mapred-default.xml</source></property>
- <property><name>dfs.blockreport.initialDelay</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>fs.AbstractFileSystem.hdfs.impl</name><value>org.apache.hadoop.fs.Hdfs</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.retrycache.expirytime.millis</name><value>600000</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.failover.sleep.max.millis</name><value>15000</value><source>hdfs-default.xml</source></property>
- <property><name>mapred.task.id</name><value>hb_rs_slave2,60020,1397552649456</value><source>programmatically</source></property>
- <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.max-completed-applications</name><value>10000</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nodemanager.log-dirs</name><value>${yarn.log.dir}/userlogs</value><source>yarn-default.xml</source></property>
- <property><name>dfs.client.failover.sleep.base.millis</name><value>500</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.rest.readonly</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>dfs.default.chunk.view.size</name><value>32768</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.read.shortcircuit</name><value>true</value><source>hbase-site.xml</source></property>
- <property><name>hbase.rpc.server.engine</name><value>org.apache.hadoop.hbase.ipc.ProtobufRpcServerEngine</value><source>hbase-default.xml</source></property>
- <property><name>ftp.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>mapreduce.job.acl-modify-job</name><value> </value><source>mapred-default.xml</source></property>
- <property><name>zookeeper.znode.parent</name><value>/hbase</value><source>hbase-default.xml</source></property>
- <property><name>fs.defaultFS</name><value>hdfs://master:9000/hbase</value><source>because fs.default.name is deprecated</source></property>
- <property><name>hbase.rpc.shortoperation.timeout</name><value>10000</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.http.filter.initializers</name><value>org.apache.hadoop.http.lib.StaticUserWebFilter</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.connect.max-wait.ms</name><value>900000</value><source>yarn-default.xml</source></property>
- <property><name>hadoop.security.group.mapping.ldap.ssl</name><value>false</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.max.extra.edits.segments.retained</name><value>10000</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.https-address</name><value>0.0.0.0:50470</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.admin.client.thread-count</name><value>1</value><source>yarn-default.xml</source></property>
- <property><name>ipc.client.kill.max</name><value>10</value><source>core-default.xml</source></property>
- <property><name>hadoop.security.group.mapping.ldap.search.filter.group</name><value>(objectClass=group)</value><source>core-default.xml</source></property>
- <property><name>fs.AbstractFileSystem.file.impl</name><value>org.apache.hadoop.fs.local.LocalFs</value><source>core-default.xml</source></property>
- <property><name>hadoop.http.authentication.kerberos.keytab</name><value>${user.home}/hadoop.keytab</value><source>core-default.xml</source></property>
- <property><name>mapreduce.job.map.output.collector.class</name><value>org.apache.hadoop.mapred.MapTask$MapOutputBuffer</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.security.uid.cache.secs</name><value>14400</value><source>core-default.xml</source></property>
- <property><name>mapreduce.map.cpu.vcores</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>yarn.log-aggregation.retain-check-interval-seconds</name><value>-1</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.map.log.level</name><value>INFO</value><source>mapred-default.xml</source></property>
- <property><name>mapred.child.java.opts</name><value>-Xmx200m</value><source>mapred-default.xml</source></property>
- <property><name>hfile.index.block.max.size</name><value>131072</value><source>hbase-default.xml</source></property>
- <property><name>hbase.client.scanner.timeout.period</name><value>60000</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.local-cache.max-files-per-directory</name><value>8192</value><source>yarn-default.xml</source></property>
- <property><name>dfs.https.server.keystore.resource</name><value>ssl-server.xml</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.taskcache.levels</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.handler.count</name><value>10</value><source>hdfs-default.xml</source></property>
- <property><name>s3native.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.client.completion.pollinterval</name><value>5000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.hstore.compactionThreshold</name><value>3</value><source>hbase-default.xml</source></property>
- <property><name>dfs.stream-buffer-size</name><value>4096</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.delegation.key.update-interval</name><value>86400000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.job.maps</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>hbase.master.logcleaner.ttl</name><value>600000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.acl-view-job</name><value> </value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.enable.retrycache</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>30000</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.decommission.interval</name><value>30</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.hregion.majorcompaction</name><value>604800000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.shuffle.max.connections</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>yarn.log-aggregation-enable</name><value>false</value><source>yarn-default.xml</source></property>
- <property><name>dfs.client-write-packet-size</name><value>65536</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.expire.trackers.interval</name><value>600000</value><source>mapred-default.xml</source></property>
- <property><name>dfs.client.block.write.retries</name><value>3</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.task.io.sort.factor</name><value>10</value><source>mapred-default.xml</source></property>
- <property><name>ha.health-monitor.sleep-after-disconnect.ms</name><value>1000</value><source>core-default.xml</source></property>
- <property><name>hbase.hregion.memstore.flush.size</name><value>134217728</value><source>hbase-default.xml</source></property>
- <property><name>ha.zookeeper.session-timeout.ms</name><value>5000</value><source>core-default.xml</source></property>
- <property><name>dfs.support.append</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>io.skip.checksum.errors</name><value>false</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.optionalcacheflushinterval</name><value>3600000</value><source>hbase-default.xml</source></property>
- <property><name>hbase.ipc.client.tcpnodelay</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.scheduler.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.safemode.extension</name><value>30000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobhistory.move.thread-count</name><value>3</value><source>mapred-default.xml</source></property>
- <property><name>ipc.client.idlethreshold</name><value>4000</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.port</name><value>60020</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.accesstime.precision</name><value>3600000</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.regionserver.logroll.errors.tolerated</name><value>2</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobhistory.keytab</name><value>/etc/security/keytab/jhs.service.keytab</value><source>mapred-default.xml</source></property>
- <property><name>hbase.hstore.compaction.max</name><value>10</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
- <property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value><source>yarn-default.xml</source></property>
- <property><name>yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs</name><value>86400</value><source>yarn-default.xml</source></property>
- <property><name>dfs.datanode.hdfs-blocks-metadata.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.datanode.directoryscan.interval</name><value>21600</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.zookeeper.property.initLimit</name><value>10</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.scheduler.monitor.policies</name><value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value><source>yarn-default.xml</source></property>
- <property><name>ipc.server.listen.queue.size</name><value>128</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobtracker.persist.jobstatus.dir</name><value>/jobtracker/jobsInfo</value><source>mapred-default.xml</source></property>
- <property><name>dfs.domain.socket.path</name><value>/var/run/hadoop/dn_socket</value><source>hbase-site.xml</source></property>
- <property><name>yarn.client.nodemanager-client-async.thread-pool-max-size</name><value>500</value><source>yarn-default.xml</source></property>
- <property><name>hadoop.security.group.mapping</name><value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.name.dir</name><value>file://${hadoop.tmp.dir}/dfs/name</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.coprocessor.abortonerror</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>yarn.am.liveness-monitor.expiry-interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nm.liveness-monitor.expiry-interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
- <property><name>hbase.hregion.preclose.flush.size</name><value>5242880</value><source>hbase-default.xml</source></property>
- <property><name>hbase.hstore.compaction.kv.max</name><value>10</value><source>hbase-default.xml</source></property>
- <property><name>ftp.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.max.objects</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.master.hfilecleaner.plugins</name><value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner</value><source>hbase-default.xml</source></property>
- <property><name>hbase.metrics.showTableName</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.edits.journal-plugin.qjournal</name><value>org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.tasktracker.healthchecker.interval</name><value>60000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.lease.recovery.dfs.timeout</name><value>23000</value><source>hbase-site.xml</source></property>
- <property><name>yarn.resourcemanager.address</name><value>${yarn.resourcemanager.hostname}:8032</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.task.skip.start.attempts</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>fail.fast.expired.active.master</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.edits.dir</name><value>${dfs.namenode.checkpoint.dir}</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.hdfs.configuration.version</name><value>1</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.map.skip.maxrecords</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name><value>10737418240</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.jobtracker.system.dir</name><value>${hadoop.tmp.dir}/mapred/system</value><source>mapred-default.xml</source></property>
- <property><name>hbase.zookeeper.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>hbase.master.dns.nameserver</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.shuffle.ssl.enabled</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>2.1</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nodemanager.container-manager.thread-count</name><value>20</value><source>yarn-default.xml</source></property>
- <property><name>dfs.encrypt.data.transfer</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.block.access.key.update.interval</name><value>600</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.tmp.dir</name><value>/tmp/hadoop-${user.name}</value><source>core-default.xml</source></property>
- <property><name>dfs.namenode.audit.loggers</name><value>default</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.localizer.cache.target-size-mb</name><value>10240</value><source>yarn-default.xml</source></property>
- <property><name>yarn.http.policy</name><value>HTTP_ONLY</value><source>yarn-default.xml</source></property>
- <property><name>hbase.regionserver.logroll.period</name><value>3600000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobtracker.persist.jobstatus.hours</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>tfile.fs.output.buffer.size</name><value>262144</value><source>core-default.xml</source></property>
- <property><name>hbase.hregion.memstore.block.multiplier</name><value>2</value><source>hbase-default.xml</source></property>
- <property><name>dfs.namenode.checkpoint.check.period</name><value>60</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.datanode.dns.interface</name><value>default</value><source>hdfs-default.xml</source></property>
- <property><name>fs.ftp.host.port</name><value>21</value><source>core-default.xml</source></property>
- <property><name>mapreduce.task.io.sort.mb</name><value>100</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.security.group.mapping.ldap.search.attr.group.name</name><value>cn</value><source>core-default.xml</source></property>
- <property><name>mapreduce.output.fileoutputformat.compress.type</name><value>RECORD</value><source>mapred-default.xml</source></property>
- <property><name>dfs.namenode.avoid.read.stale.datanode</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.storescanner.parallel.seek.enable</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.skip.proc.count.autoincr</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>hbase.dfs.client.read.shortcircuit.buffer.size</name><value>131072</value><source>hbase-default.xml</source></property>
- <property><name>dfs.client.file-block-storage-locations.timeout</name><value>60</value><source>hdfs-default.xml</source></property>
- <property><name>file.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
- <property><name>mapreduce.job.userlog.retain.hours</name><value>24</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.http.address</name><value>0.0.0.0:50075</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.image.compress</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>ha.health-monitor.check-interval.ms</name><value>1000</value><source>core-default.xml</source></property>
- <property><name>dfs.permissions.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.thrift.htablepool.size.max</name><value>1000</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.resource-tracker.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
- <property><name>dfs.image.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.datanode.address</name><value>0.0.0.0:50010</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.block.access.token.enable</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.reduce.input.buffer.percent</name><value>0.0</value><source>mapred-default.xml</source></property>
- <property><name>hbase.client.scanner.caching</name><value>100</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.local.dir.minspacestart</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>dfs.blockreport.intervalMsec</name><value>21600000</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.snapshot.restore.take.failsafe.snapshot</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>ha.health-monitor.rpc-timeout.ms</name><value>45000</value><source>core-default.xml</source></property>
- <property><name>dfs.client.failover.connection.retries</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.namenode.kerberos.internal.spnego.principal</name><value>${dfs.web.authentication.kerberos.principal}</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.policy.file</name><value>hbase-policy.xml</value><source>hbase-default.xml</source></property>
- <property><name>yarn.scheduler.maximum-allocation-mb</name><value>8192</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.task.files.preserve.failedtasks</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>hbase.status.published</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.delete.thread-count</name><value>4</value><source>yarn-default.xml</source></property>
- <property><name>dfs.https.enable</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.output.fileoutputformat.compress.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value><source>mapred-default.xml</source></property>
- <property><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.classloader.system.classes</name><value>java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.classloader</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobtracker.tasktracker.maxblacklists</name><value>4</value><source>mapred-default.xml</source></property>
- <property><name>io.seqfile.compress.blocksize</name><value>1000000</value><source>core-default.xml</source></property>
- <property><name>dfs.blocksize</name><value>134217728</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.task.profile.maps</name><value>0-2</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobtracker.staging.root.dir</name><value>${hadoop.tmp.dir}/mapred/staging</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.jobtracker.http.address</name><value>0.0.0.0:50030</value><source>mapred-default.xml</source></property>
- <property><name>hbase.regionserver.info.bindAddress</name><value>0.0.0.0</value><source>hbase-default.xml</source></property>
- <property><name>fs.client.resolve.remote.symlinks</name><value>true</value><source>core-default.xml</source></property>
- <property><name>hbase.master.logcleaner.plugins</name><value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner</value><source>hbase-default.xml</source></property>
- <property><name>hbase.data.umask.enable</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>hbase.master.dns.interface</name><value>default</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.local.dir.minspacekill</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name><value>0.25</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.tasktracker.taskmemorymanager.monitoringinterval</name><value>5000</value><source>mapred-default.xml</source></property>
- <property><name>hbase.local.dir</name><value>${hbase.tmp.dir}/local/</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.job.end-notification.retry.interval</name><value>1000</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.loadedjobs.cache.size</name><value>5</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.local-dirs</name><value>${hadoop.tmp.dir}/nm-local-dir</value><source>yarn-default.xml</source></property>
- <property><name>hbase.table.lock.enable</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>hbase.storescanner.parallel.seek.threads</name><value>10</value><source>hbase-default.xml</source></property>
- <property><name>hbase.online.schema.update.enable</name><value>true</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobhistory.address</name><value>0.0.0.0:10020</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobtracker.persist.jobstatus.active</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>file.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.readahead.bytes</name><value>4193404</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.zookeeper.property.dataDir</name><value>/data/hddata/zookeeper</value><source>hbase-site.xml</source></property>
- <property><name>dfs.namenode.http-address</name><value>0.0.0.0:50070</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.work.around.non.threadsafe.getpwuid</name><value>false</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.hostname</name><value>0.0.0.0</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.fs-limits.max-component-length</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>ha.failover-controller.cli-check.rpc-timeout.ms</name><value>20000</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.info.port.auto</name><value>false</value><source>hbase-default.xml</source></property>
- <property><name>hbase.auth.key.update.interval</name><value>86400000</value><source>hbase-default.xml</source></property>
- <property><name>ftp.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.parallelcopies</name><value>5</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.principal</name><value>jhs/_HOST@REALM.TLD</value><source>mapred-default.xml</source></property>
- <property><name>hadoop.http.authentication.simple.anonymous.allowed</name><value>true</value><source>core-default.xml</source></property>
- <property><name>yarn.log-aggregation.retain-seconds</name><value>-1</value><source>yarn-default.xml</source></property>
- <property><name>hbase.zookeeper.property.maxClientCnxns</name><value>5000</value><source>hbase-site.xml</source></property>
- <property><name>mapreduce.client.genericoptionsparser.used</name><value>true</value><source>programatically</source></property>
- <property><name>mapreduce.job.ubertask.maxreduces</name><value>1</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.health-checker.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
- <property><name>hbase.server.compactchecker.interval.multiplier</name><value>1000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobtracker.jobhistory.task.numberprogresssplits</name><value>12</value><source>mapred-default.xml</source></property>
- <property><name>yarn.client.max-nodemanagers-proxies</name><value>500</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nodemanager.webapp.address</name><value>${yarn.nodemanager.hostname}:8042</value><source>yarn-default.xml</source></property>
- <property><name>yarn.app.mapreduce.client-am.ipc.max-retries</name><value>3</value><source>mapred-default.xml</source></property>
- <property><name>ha.failover-controller.new-active.rpc-timeout.ms</name><value>60000</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobhistory.client.thread-count</name><value>10</value><source>mapred-default.xml</source></property>
- <property><name>fs.trash.interval</name><value>0</value><source>core-default.xml</source></property>
- <property><name>hbase.client.max.perregion.tasks</name><value>1</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.skip.maxgroups</name><value>0</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.health-checker.script.timeout-ms</name><value>1200000</value><source>yarn-default.xml</source></property>
- <property><name>dfs.datanode.du.reserved</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.client.app-submission.poll-interval</name><value>1000</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.client.progressmonitor.pollinterval</name><value>1000</value><source>mapred-default.xml</source></property>
- <property><name>yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs</name><value>86400</value><source>yarn-default.xml</source></property>
- <property><name>yarn.nodemanager.hostname</name><value>0.0.0.0</value><source>yarn-default.xml</source></property>
- <property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value><source>yarn-default.xml</source></property>
- <property><name>dfs.ha.log-roll.period</name><value>120</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.http.authentication.signature.secret.file</name><value>${user.home}/hadoop-http-auth-signature-secret</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobhistory.move.interval-ms</name><value>180000</value><source>mapred-default.xml</source></property>
- <property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value><source>yarn-default.xml</source></property>
- <property><name>hadoop.security.authorization</name><value>false</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.https.address</name><value>0.0.0.0:50475</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.localizer.address</name><value>${yarn.nodemanager.hostname}:8040</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.replication.min</name><value>1</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.common.configuration.version</name><value>0.23.0</value><source>core-default.xml</source></property>
- <property><name>mapreduce.ifile.readahead</name><value>true</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.joblist.cache.size</name><value>20000</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.end-notification.max.attempts</name><value>5</value><source>mapred-default.xml</source></property>
- <property><name>dfs.image.transfer.timeout</name><value>600000</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.regionserver.optionallogflushinterval</name><value>1000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.connect.timeout</name><value>180000</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.webapp.address</name><value>0.0.0.0:19888</value><source>mapred-default.xml</source></property>
- <property><name>dfs.datanode.failed.volumes.tolerated</name><value>0</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.datanode.data.dir.perm</name><value>700</value><source>hdfs-default.xml</source></property>
- <property><name>hadoop.http.authentication.token.validity</name><value>36000</value><source>core-default.xml</source></property>
- <property><name>ipc.client.connect.max.retries.on.timeouts</name><value>45</value><source>core-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.job.committer.cancel-timeout</name><value>60000</value><source>mapred-default.xml</source></property>
- <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value><source>core-default.xml</source></property>
- <property><name>hbase.data.umask</name><value>000</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.reduce.log.level</name><value>INFO</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.reduce.shuffle.merge.percent</name><value>0.66</value><source>mapred-default.xml</source></property>
- <property><name>ipc.client.fallback-to-simple-auth-allowed</name><value>false</value><source>core-default.xml</source></property>
- <property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value><source>core-default.xml</source></property>
- <property><name>fs.s3.block.size</name><value>67108864</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.hlog.writer.impl</name><value>org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.kerberos.kinit.command</name><value>kinit</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.global.memstore.lowerLimit</name><value>0.38</value><source>hbase-default.xml</source></property>
- <property><name>yarn.resourcemanager.fs.state-store.uri</name><value>${hadoop.tmp.dir}/yarn/system/rmstore</value><source>yarn-default.xml</source></property>
- <property><name>hbase.regionserver.region.split.policy</name><value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy</value><source>hbase-default.xml</source></property>
- <property><name>yarn.admin.acl</name><value>*</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.delegation.token.max-lifetime</name><value>604800000</value><source>hdfs-default.xml</source></property>
- <property><name>mapreduce.reduce.merge.inmem.threshold</name><value>1000</value><source>mapred-default.xml</source></property>
- <property><name>net.topology.impl</name><value>org.apache.hadoop.net.NetworkTopology</value><source>core-default.xml</source></property>
- <property><name>dfs.datanode.use.datanode.hostname</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.heartbeat.interval</name><value>3</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value><source>yarn-default.xml</source></property>
- <property><name>io.map.index.skip</name><value>0</value><source>core-default.xml</source></property>
- <property><name>yarn.resourcemanager.webapp.https.address</name><value>${yarn.resourcemanager.hostname}:8090</value><source>yarn-default.xml</source></property>
- <property><name>dfs.namenode.handler.count</name><value>10</value><source>hdfs-default.xml</source></property>
- <property><name>yarn.nodemanager.admin-env</name><value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value><source>yarn-default.xml</source></property>
- <property><name>hbase.client.max.total.tasks</name><value>100</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.jobtracker.jobhistory.block.size</name><value>3145728</value><source>mapred-default.xml</source></property>
- <property><name>hbase.zookeeper.peerport</name><value>2888</value><source>hbase-default.xml</source></property>
- <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.cluster.acls.enabled</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>hbase.regionserver.info.port</name><value>60030</value><source>hbase-default.xml</source></property>
- <property><name>hbase.hregion.majorcompaction.jitter</name><value>0.50</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.map.tasks.maximum</name><value>2</value><source>mapred-default.xml</source></property>
- <property><name>ipc.client.connect.timeout</name><value>20000</value><source>core-default.xml</source></property>
- <property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value><source>yarn-default.xml</source></property>
- <property><name>fs.df.interval</name><value>60000</value><source>core-default.xml</source></property>
- <property><name>hadoop.util.hash.type</name><value>murmur</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobhistory.minicluster.fixed.ports</name><value>false</value><source>mapred-default.xml</source></property>
- <property><name>nfs3.mountd.port</name><value>4242</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobtracker.jobhistory.lru.cache.size</name><value>5</value><source>mapred-default.xml</source></property>
- <property><name>dfs.client.failover.max.attempts</name><value>15</value><source>hdfs-default.xml</source></property>
- <property><name>dfs.client.use.datanode.hostname</name><value>false</value><source>hdfs-default.xml</source></property>
- <property><name>ha.zookeeper.acl</name><value>world:anyone:rwcda</value><source>core-default.xml</source></property>
- <property><name>mapreduce.jobtracker.maxtasks.perjob</name><value>-1</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.map.sort.spill.percent</name><value>0.80</value><source>mapred-default.xml</source></property>
- <property><name>file.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
- <property><name>hbase.regionserver.catalog.timeout</name><value>600000</value><source>hbase-default.xml</source></property>
- <property><name>hbase.security.authentication</name><value>simple</value><source>hbase-default.xml</source></property>
- <property><name>hadoop.fuse.connection.timeout</name><value>300</value><source>hdfs-default.xml</source></property>
- <property><name>hbase.client.keyvalue.maxsize</name><value>10485760</value><source>hbase-default.xml</source></property>
- <property><name>mapreduce.tasktracker.instrumentation</name><value>org.apache.hadoop.mapred.TaskTrackerMetricsInst</value><source>mapred-default.xml</source></property>
- <property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value><source>core-default.xml</source></property>
- <property><name>yarn.app.mapreduce.am.resource.mb</name><value>1536</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.framework.name</name><value>local</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.job.reduce.slowstart.completedmaps</name><value>0.05</value><source>mapred-default.xml</source></property>
- <property><name>yarn.resourcemanager.client.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
- <property><name>mapreduce.cluster.temp.dir</name><value>${hadoop.tmp.dir}/mapred/temp</value><source>mapred-default.xml</source></property>
- <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value><source>mapred-default.xml</source></property>
- <property><name>hbase.defaults.for.version.skip</name><value>false</value><source>hbase-default.xml</source></property>
- </configuration>
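- The listing above is Hadoop's effective-configuration dump: one `<property>` element per line carrying the key, the resolved value, and the file (or "programatically") that supplied it. A minimal sketch for pulling specific keys out of such a dump, assuming exactly this one-element-per-line layout (the helper name is ours, not Hadoop's):

```python
import re

# One <property> element per line, as in the effective-configuration dump above.
PROP_RE = re.compile(
    r"<property><name>(?P<name>[^<]+)</name>"
    r"<value>(?P<value>[^<]*)</value>"
    r"<source>(?P<source>[^<]+)</source></property>"
)

def parse_config_dump(lines):
    """Return {name: (value, source)} for every line matching the dump layout."""
    props = {}
    for line in lines:
        m = PROP_RE.search(line)
        if m:
            props[m.group("name")] = (m.group("value"), m.group("source"))
    return props

sample = [
    "<property><name>dfs.blocksize</name><value>134217728</value>"
    "<source>hdfs-default.xml</source></property>",
    "<property><name>hbase.zookeeper.property.maxClientCnxns</name>"
    "<value>5000</value><source>hbase-site.xml</source></property>",
]
props = parse_config_dump(sample)
# Values come back as strings; cast to int/bool as needed.
print(props["dfs.blocksize"])  # ('134217728', 'hdfs-default.xml')
```

- Filtering on the source field is a quick way to separate site overrides (e.g. hbase-site.xml) from defaults.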
- Logs
- ===========================================================
- +++++++++++++++++++++++++++++++
- /opt/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.log
- +++++++++++++++++++++++++++++++
- 2014-07-10 14:00:15,814 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/.tmp, totalSize=29.8 M
- 2014-07-10 14:00:17,398 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. into f1d0e314777a468ca8abf6a7abbd7747(size=29.7 M), total size for store is 2.3 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 14:00:17,398 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541., storeName=cf1, fileCount=3, fileSize=29.8 M, priority=6, time=47229488459725281; duration=1sec
- 2014-07-10 14:05:03,294 WARN [RpcServer.handler=20,port=60020] ipc.RpcServer: RpcServer.respondercallId: 304 service: ClientService methodName: Multi size: 324 connection: 64.78.170.82:38536: output error
- 2014-07-10 14:05:03,294 WARN [RpcServer.handler=20,port=60020] ipc.RpcServer: RpcServer.handler=20,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:05:53,687 INFO [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404968753563 with entries=129301, filesize=53.7 M; new WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404972353650
- 2014-07-10 14:05:53,688 INFO [regionserver60020.logRoller] wal.FSHLog: moving old hlog file /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404965153418 whose highest sequenceid is 1456814580 to /hbase/oldWALs/slave2%2C60020%2C1397552649456.1404965153418
- 2014-07-10 14:13:49,614 WARN [RpcServer.reader=7,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 14:16:12,261 WARN [RpcServer.handler=2,port=60020] ipc.RpcServer: RpcServer.respondercallId: 24752 service: ClientService methodName: Multi size: 235 connection: 64.78.165.82:42668: output error
- 2014-07-10 14:16:12,261 WARN [RpcServer.handler=2,port=60020] ipc.RpcServer: RpcServer.handler=2,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:20:31,159 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 19632
- 2014-07-10 14:20:41,158 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 18094
- 2014-07-10 14:20:50,816 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1456973245, memsize=1.1 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/.tmp/84ac91b188e9476eaab228aa0597ced7
- 2014-07-10 14:20:50,830 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/cf1/84ac91b188e9476eaab228aa0597ced7, entries=5208, sequenceid=1456973245, filesize=167.5 K
- 2014-07-10 14:20:50,830 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.1 M/1142688, currentsize=0/0 for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. in 38ms, sequenceid=1456973245, compaction requested=true
- 2014-07-10 14:20:50,831 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf.
- 2014-07-10 14:20:50,831 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/.tmp, totalSize=24.7 M
- 2014-07-10 14:20:52,233 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. into 506678d76dc449cd8e32b58f77b0b99e(size=24.7 M), total size for store is 1.1 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 14:20:52,233 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf., storeName=cf1, fileCount=3, fileSize=24.7 M, priority=6, time=47230723476667924; duration=1sec
- 2014-07-10 14:25:02,254 WARN [RpcServer.reader=6,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 14:26:09,108 WARN [RpcServer.handler=25,port=60020] ipc.RpcServer: RpcServer.respondercallId: 5979 service: ClientService methodName: Get size: 160 connection: 64.78.164.178:37232: output error
- 2014-07-10 14:26:09,108 WARN [RpcServer.handler=25,port=60020] ipc.RpcServer: RpcServer.handler=25,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:26:39,502 WARN [RpcServer.handler=2,port=60020] ipc.RpcServer: RpcServer.respondercallId: 124569 service: ClientService methodName: Get size: 161 connection: 64.78.164.242:35056: output error
- 2014-07-10 14:26:39,502 WARN [RpcServer.handler=2,port=60020] ipc.RpcServer: RpcServer.handler=2,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:28:45,346 WARN [RpcServer.handler=4,port=60020] ipc.RpcServer: RpcServer.respondercallId: 939 service: ClientService methodName: Get size: 133 connection: 64.78.170.82:42675: output error
- 2014-07-10 14:28:45,347 WARN [RpcServer.handler=4,port=60020] ipc.RpcServer: RpcServer.handler=4,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:28:50,529 WARN [RpcServer.handler=0,port=60020] ipc.RpcServer: RpcServer.respondercallId: 68838 service: ClientService methodName: Multi size: 292 connection: 64.78.164.242:35156: output error
- 2014-07-10 14:28:50,530 WARN [RpcServer.handler=0,port=60020] ipc.RpcServer: RpcServer.handler=0,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:30:57,518 WARN [RpcServer.handler=29,port=60020] ipc.RpcServer: RpcServer.respondercallId: 68898 service: ClientService methodName: Get size: 134 connection: 64.78.164.242:35237: output error
- 2014-07-10 14:30:57,518 WARN [RpcServer.handler=29,port=60020] ipc.RpcServer: RpcServer.handler=29,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 14:31:36,163 WARN [RpcServer.reader=7,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 14:32:11,163 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. after a delay of 17487
- 2014-07-10 14:32:21,163 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. after a delay of 7057
- 2014-07-10 14:32:21,163 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. after a delay of 17006
- 2014-07-10 14:32:28,251 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1456997231, memsize=832.1 K, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/c73728f0c060569c44ade8188c6ea84f/.tmp/5b0252f92bee483997a39c4b706c7b07
- 2014-07-10 14:32:28,267 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/c73728f0c060569c44ade8188c6ea84f/cf1/5b0252f92bee483997a39c4b706c7b07, entries=3842, sequenceid=1456997231, filesize=164.9 K
- 2014-07-10 14:32:28,267 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~841.3 K/861480, currentsize=0/0 for region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. in 47ms, sequenceid=1456997231, compaction requested=true
- 2014-07-10 14:32:28,677 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1456997242, memsize=1.2 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/f7635647bcae9291971843131c4add0b/.tmp/cc40baa3c12a4d12b2b93eab74ef3c9a
- 2014-07-10 14:32:28,699 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/f7635647bcae9291971843131c4add0b/cf1/cc40baa3c12a4d12b2b93eab74ef3c9a, entries=5682, sequenceid=1456997242, filesize=174.9 K
- 2014-07-10 14:32:28,699 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.2 M/1240888, currentsize=0/0 for region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. in 47ms, sequenceid=1456997242, compaction requested=true
- 2014-07-10 14:32:28,700 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b.
- 2014-07-10 14:32:28,700 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/f7635647bcae9291971843131c4add0b/.tmp, totalSize=8.1 M
- 2014-07-10 14:32:29,640 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. into f642c62ed6354fecb6d203da9b3ba4ea(size=8.1 M), total size for store is 1.1 G. This selection was in queue for 0sec, and took 0sec to execute.
- 2014-07-10 14:32:29,641 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,,1390820890834.f7635647bcae9291971843131c4add0b., storeName=cf1, fileCount=3, fileSize=8.1 M, priority=6, time=47231421345869722; duration=0sec
- 2014-07-10 14:39:59,592 WARN [RpcServer.reader=9,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 14:47:31,172 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. after a delay of 18474
- 2014-07-10 14:47:41,172 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. after a delay of 18086
- 2014-07-10 14:47:49,675 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457030873, memsize=420.1 K, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/ded050ffeacd83613a0e3db7baaee4f0/.tmp/5538855e045d495bac7a9ebee3aa1a06
- 2014-07-10 14:47:49,690 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/ded050ffeacd83613a0e3db7baaee4f0/cf1/5538855e045d495bac7a9ebee3aa1a06, entries=1930, sequenceid=1457030873, filesize=84.1 K
- 2014-07-10 14:47:49,690 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~426.1 K/436368, currentsize=0/0 for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. in 44ms, sequenceid=1457030873, compaction requested=true
- 2014-07-10 14:47:49,691 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0.
- 2014-07-10 14:47:49,691 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. into tmpdir=hdfs://master:9000/hbase/data/default/url_guid/ded050ffeacd83613a0e3db7baaee4f0/.tmp, totalSize=12.8 M
- 2014-07-10 14:47:50,371 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. into 4c9b3e43f8584149aa8ba98dc06cf012(size=12.8 M), total size for store is 586.1 M. This selection was in queue for 0sec, and took 0sec to execute.
- 2014-07-10 14:47:50,371 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0., storeName=cf1, fileCount=3, fileSize=12.8 M, priority=6, time=47232342336828226; duration=0sec
- 2014-07-10 14:51:31,208 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. after a delay of 3513
- 2014-07-10 14:51:35,116 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457035499, memsize=5.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp/277227473a134a3580503f4ffaf04ab8
- 2014-07-10 14:51:35,747 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457035499, memsize=13.9 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp/188f08e23b8a4b279fb742173328c249
- 2014-07-10 14:51:35,883 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/detail/277227473a134a3580503f4ffaf04ab8, entries=4551, sequenceid=1457035499, filesize=2.9 M
- 2014-07-10 14:51:35,907 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/list/188f08e23b8a4b279fb742173328c249, entries=67837, sequenceid=1457035499, filesize=2.2 M
- 2014-07-10 14:51:35,907 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~21.2 M/22181968, currentsize=832/832 for region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. in 1186ms, sequenceid=1457035499, compaction requested=true
- 2014-07-10 14:51:35,907 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671.
- 2014-07-10 14:51:35,908 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. into tmpdir=hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp, totalSize=120.0 M
- 2014-07-10 14:51:44,062 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 956beae66c464c06bbd6e49fbe9b4efd
- 2014-07-10 14:51:44,068 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 956beae66c464c06bbd6e49fbe9b4efd
- 2014-07-10 14:51:44,241 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. into 956beae66c464c06bbd6e49fbe9b4efd(size=118.1 M), total size for store is 4.9 G. This selection was in queue for 0sec, and took 8sec to execute.
- 2014-07-10 14:51:44,241 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671., storeName=list, fileCount=3, fileSize=120.0 M, priority=6, time=47232568553543670; duration=8sec
- 2014-07-10 14:53:21,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. after a delay of 4104
- 2014-07-10 14:53:25,373 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457037179, memsize=2.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp/0eb4be043cc045cf886117015289b96a
- 2014-07-10 14:53:25,450 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457037179, memsize=6.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp/de2b30770a2e4d728ffc57f9d02b0ce8
- 2014-07-10 14:53:25,476 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/detail/0eb4be043cc045cf886117015289b96a, entries=2139, sequenceid=1457037179, filesize=1.4 M
- 2014-07-10 14:53:25,489 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/list/de2b30770a2e4d728ffc57f9d02b0ce8, entries=32485, sequenceid=1457037179, filesize=1.1 M
- 2014-07-10 14:53:25,489 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~10.5 M/11009008, currentsize=0/0 for region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. in 176ms, sequenceid=1457037179, compaction requested=true
- 2014-07-10 14:53:25,489 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f.
- 2014-07-10 14:53:25,490 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. into tmpdir=hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp, totalSize=18.0 M
- 2014-07-10 14:53:28,857 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. into 73e2a422a7b249359092b7eb6c5934d8(size=18.0 M), total size for store is 4.9 G. This selection was in queue for 0sec, and took 3sec to execute.
- 2014-07-10 14:53:28,857 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f., storeName=detail, fileCount=3, fileSize=18.0 M, priority=6, time=47232678135742435; duration=3sec
- 2014-07-10 14:53:28,858 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f.
- 2014-07-10 14:53:28,858 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. into tmpdir=hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp, totalSize=16.6 M
- 2014-07-10 14:53:29,680 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b42824dc53fd4c7db0ec4246dbea32ce
- 2014-07-10 14:53:29,695 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b42824dc53fd4c7db0ec4246dbea32ce
- 2014-07-10 14:53:30,045 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. into b42824dc53fd4c7db0ec4246dbea32ce(size=15.7 M), total size for store is 2.4 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 14:53:30,045 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f., storeName=list, fileCount=3, fileSize=16.6 M, priority=5, time=47232681503822099; duration=1sec
- 2014-07-10 14:54:11,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 21254
- 2014-07-10 14:54:21,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 22226
- 2014-07-10 14:54:31,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 18198
- 2014-07-10 14:54:32,493 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457038009, memsize=1.5 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/b4275622def9b40e84be8b79ff61dfef/.tmp/6d1d57bf74344adcaba41f358df5936b
- 2014-07-10 14:54:32,506 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/b4275622def9b40e84be8b79ff61dfef/cf1/6d1d57bf74344adcaba41f358df5936b, entries=6956, sequenceid=1457038009, filesize=298.0 K
- 2014-07-10 14:54:32,506 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.5 M/1563496, currentsize=0/0 for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. in 42ms, sequenceid=1457038009, compaction requested=true
- 2014-07-10 14:55:01,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. after a delay of 15730
- 2014-07-10 14:55:01,210 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. after a delay of 16017
- 2014-07-10 14:55:11,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. after a delay of 3006
- 2014-07-10 14:55:11,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. after a delay of 21604
- 2014-07-10 14:55:17,422 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457038746, memsize=5.8 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp/6e0fd3afbe4f4b07b57cfd687d637e32
- 2014-07-10 14:55:17,959 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457038746, memsize=13.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp/f718c39a6e634694a98eefa485bba674
- 2014-07-10 14:55:17,975 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/detail/6e0fd3afbe4f4b07b57cfd687d637e32, entries=4340, sequenceid=1457038746, filesize=3.1 M
- 2014-07-10 14:55:18,015 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/list/f718c39a6e634694a98eefa485bba674, entries=64667, sequenceid=1457038746, filesize=2.1 M
- 2014-07-10 14:55:18,016 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~21.1 M/22115280, currentsize=832/832 for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. in 1077ms, sequenceid=1457038746, compaction requested=true
- 2014-07-10 14:55:18,063 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457038759, memsize=2.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp/46b58807a66846b4848ab78e3c2939e1
- 2014-07-10 14:55:18,148 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457038759, memsize=6.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp/f49aaf41d5164e32b0be88972b17ff34
- 2014-07-10 14:55:18,161 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/detail/46b58807a66846b4848ab78e3c2939e1, entries=2188, sequenceid=1457038759, filesize=1.5 M
- 2014-07-10 14:55:18,174 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/list/f49aaf41d5164e32b0be88972b17ff34, entries=32813, sequenceid=1457038759, filesize=1.1 M
- 2014-07-10 14:55:18,174 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~10.5 M/11033216, currentsize=0/0 for region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. in 158ms, sequenceid=1457038759, compaction requested=true
- 2014-07-10 14:55:18,174 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7.
- 2014-07-10 14:55:18,175 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. into tmpdir=hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp, totalSize=108.8 M
- 2014-07-10 14:55:21,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. after a delay of 10239
- 2014-07-10 14:55:21,447 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd7b118c391b4e68a6a31ef14c0b691e
- 2014-07-10 14:55:21,455 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd7b118c391b4e68a6a31ef14c0b691e
- 2014-07-10 14:55:21,484 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. into dd7b118c391b4e68a6a31ef14c0b691e(size=108.7 M), total size for store is 4.9 G. This selection was in queue for 0sec, and took 3sec to execute.
- 2014-07-10 14:55:21,484 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7., storeName=detail, fileCount=3, fileSize=108.8 M, priority=6, time=47232790820690538; duration=3sec
- 2014-07-10 14:55:31,209 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. after a delay of 4605
- 2014-07-10 14:55:31,501 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457039043, memsize=2.6 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp/29881c87979a401d868caa5eb3580ccd
- 2014-07-10 14:55:31,603 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457039043, memsize=6.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp/fe96a073631449cc88b82cdeadd63ad9
- 2014-07-10 14:55:31,617 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/detail/29881c87979a401d868caa5eb3580ccd, entries=2194, sequenceid=1457039043, filesize=1.4 M
- 2014-07-10 14:55:31,802 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/list/fe96a073631449cc88b82cdeadd63ad9, entries=32411, sequenceid=1457039043, filesize=1.1 M
- 2014-07-10 14:55:31,802 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~10.3 M/10763624, currentsize=0/0 for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. in 354ms, sequenceid=1457039043, compaction requested=true
- 2014-07-10 14:55:31,802 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72.
- 2014-07-10 14:55:31,802 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. into tmpdir=hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp, totalSize=106.7 M
- 2014-07-10 14:55:35,297 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 13ec7f7abec84657945550cca54f4aaf
- 2014-07-10 14:55:35,303 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 13ec7f7abec84657945550cca54f4aaf
- 2014-07-10 14:55:35,330 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. into 13ec7f7abec84657945550cca54f4aaf(size=106.7 M), total size for store is 4.9 G. This selection was in queue for 0sec, and took 3sec to execute.
- 2014-07-10 14:55:35,330 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72., storeName=detail, fileCount=3, fileSize=106.7 M, priority=6, time=47232804448439173; duration=3sec
- 2014-07-10 14:55:35,330 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72.
- 2014-07-10 14:55:35,330 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. into tmpdir=hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp, totalSize=85.7 M
- 2014-07-10 14:55:39,952 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e05576b4d1941079839f13085d8ae75
- 2014-07-10 14:55:39,960 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e05576b4d1941079839f13085d8ae75
- 2014-07-10 14:55:39,990 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. into 3e05576b4d1941079839f13085d8ae75(size=84.8 M), total size for store is 2.4 G. This selection was in queue for 0sec, and took 4sec to execute.
- 2014-07-10 14:55:39,990 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72., storeName=list, fileCount=3, fileSize=85.7 M, priority=6, time=47232807976380731; duration=4sec
- 2014-07-10 14:56:01,210 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. after a delay of 19287
- 2014-07-10 14:56:11,210 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. after a delay of 10573
- 2014-07-10 14:56:20,525 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457040268, memsize=1.4 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp/936b11800f71406e82273022ac390647
- 2014-07-10 14:56:20,581 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457040268, memsize=3.2 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp/8146003d2e834f4fae1c092d8941ff50
- 2014-07-10 14:56:20,593 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/detail/936b11800f71406e82273022ac390647, entries=1018, sequenceid=1457040268, filesize=779.4 K
- 2014-07-10 14:56:20,602 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/list/8146003d2e834f4fae1c092d8941ff50, entries=15519, sequenceid=1457040268, filesize=523.0 K
- 2014-07-10 14:56:20,602 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~5.0 M/5223016, currentsize=208/208 for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. in 105ms, sequenceid=1457040268, compaction requested=true
- 2014-07-10 14:58:31,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. after a delay of 14892
- 2014-07-10 14:58:41,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. after a delay of 7076
- 2014-07-10 14:58:46,140 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457044469, memsize=1.8 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/c6fdec019ff1dba3f2137ff724a191cb/.tmp/695e3b7487194a86a8d30b95a97df27c
- 2014-07-10 14:58:46,165 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/c6fdec019ff1dba3f2137ff724a191cb/cf1/695e3b7487194a86a8d30b95a97df27c, entries=8715, sequenceid=1457044469, filesize=285.8 K
- 2014-07-10 14:58:46,165 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.8 M/1917744, currentsize=0/0 for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. in 60ms, sequenceid=1457044469, compaction requested=true
- 2014-07-10 14:59:21,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. after a delay of 10247
- 2014-07-10 14:59:31,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. after a delay of 4161
- 2014-07-10 14:59:31,506 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457046250, memsize=1.4 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/1af7ec7e41ac505904377d5ea9971c07/.tmp/2fcdde1c390946029a1686bcb6c344fd
- 2014-07-10 14:59:31,544 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/1af7ec7e41ac505904377d5ea9971c07/cf1/2fcdde1c390946029a1686bcb6c344fd, entries=6670, sequenceid=1457046250, filesize=282.2 K
- 2014-07-10 14:59:31,544 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.4 M/1496928, currentsize=0/0 for region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. in 85ms, sequenceid=1457046250, compaction requested=true
- 2014-07-10 14:59:31,545 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07.
- 2014-07-10 14:59:31,545 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. into tmpdir=hdfs://master:9000/hbase/data/default/url_guid/1af7ec7e41ac505904377d5ea9971c07/.tmp, totalSize=27.3 M
- 2014-07-10 14:59:33,031 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. into fa5fc00df27b4d92885916061d6e1bb1(size=27.3 M), total size for store is 2.3 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 14:59:33,031 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07., storeName=cf1, fileCount=3, fileSize=27.3 M, priority=6, time=47233044190876405; duration=1sec
- 2014-07-10 15:00:21,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. after a delay of 17356
- 2014-07-10 15:00:31,211 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. after a delay of 10501
- 2014-07-10 15:00:38,597 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457048992, memsize=1.8 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/.tmp/aa1095700a53401c90141f5a613e855d
- 2014-07-10 15:00:38,608 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/cf1/aa1095700a53401c90141f5a613e855d, entries=8618, sequenceid=1457048992, filesize=285.7 K
- 2014-07-10 15:00:38,608 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.8 M/1903304, currentsize=0/0 for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. in 40ms, sequenceid=1457048992, compaction requested=true
- 2014-07-10 15:00:52,728 WARN [RpcServer.handler=6,port=60020] ipc.RpcServer: RpcServer.respondercallId: 126802 service: ClientService methodName: Get size: 161 connection: 64.78.164.242:38933: output error
- 2014-07-10 15:00:52,729 WARN [RpcServer.handler=6,port=60020] ipc.RpcServer: RpcServer.handler=6,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 15:00:54,914 WARN [RpcServer.handler=22,port=60020] ipc.RpcServer: RpcServer.respondercallId: 72451 service: ClientService methodName: Get size: 134 connection: 64.78.164.242:38936: output error
- 2014-07-10 15:00:54,915 WARN [RpcServer.handler=22,port=60020] ipc.RpcServer: RpcServer.handler=22,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 15:05:53,777 INFO [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404972353650 with entries=115617, filesize=51.3 M; new WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404975953759
- 2014-07-10 15:05:53,778 INFO [regionserver60020.logRoller] wal.FSHLog: moving old hlog file /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404968753563 whose highest sequenceid is 1456943895 to /hbase/oldWALs/slave2%2C60020%2C1397552649456.1404968753563
- 2014-07-10 15:06:02,255 WARN [RpcServer.reader=7,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 15:20:51,243 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 14348
- 2014-07-10 15:21:01,242 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 14018
- 2014-07-10 15:21:05,618 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457092548, memsize=960.9 K, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/.tmp/d51155ab6ccc483f86b8b3122d021692
- 2014-07-10 15:21:05,637 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/cf1/d51155ab6ccc483f86b8b3122d021692, entries=4550, sequenceid=1457092548, filesize=151.1 K
- 2014-07-10 15:21:05,637 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~980.1 K/1003672, currentsize=0/0 for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. in 46ms, sequenceid=1457092548, compaction requested=true
- 2014-07-10 15:32:31,253 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. after a delay of 16240
- 2014-07-10 15:32:31,253 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. after a delay of 3757
- 2014-07-10 15:32:35,035 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457119385, memsize=1.1 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/f7635647bcae9291971843131c4add0b/.tmp/4ba07b292e334613ab8f8a55fe58ea8f
- 2014-07-10 15:32:35,048 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/f7635647bcae9291971843131c4add0b/cf1/4ba07b292e334613ab8f8a55fe58ea8f, entries=5433, sequenceid=1457119385, filesize=166.9 K
- 2014-07-10 15:32:35,048 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.1 M/1188280, currentsize=0/0 for region visited_article,,1390820890834.f7635647bcae9291971843131c4add0b. in 38ms, sequenceid=1457119385, compaction requested=true
- 2014-07-10 15:32:41,253 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. after a delay of 19731
- 2014-07-10 15:32:47,519 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457119983, memsize=757.1 K, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/c73728f0c060569c44ade8188c6ea84f/.tmp/5f50cae3bcb0439fb4603fa0ce351489
- 2014-07-10 15:32:47,534 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/c73728f0c060569c44ade8188c6ea84f/cf1/5f50cae3bcb0439fb4603fa0ce351489, entries=3490, sequenceid=1457119983, filesize=150.3 K
- 2014-07-10 15:32:47,534 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~765.8 K/784216, currentsize=0/0 for region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. in 41ms, sequenceid=1457119983, compaction requested=true
- 2014-07-10 15:32:47,534 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f.
- 2014-07-10 15:32:47,535 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. into tmpdir=hdfs://master:9000/hbase/data/default/url_guid/c73728f0c060569c44ade8188c6ea84f/.tmp, totalSize=36.4 M
- 2014-07-10 15:32:49,371 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f. into eab6662a2488456a8067be4041b459e3(size=36.4 M), total size for store is 1.1 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 15:32:49,371 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=url_guid,60208b,1387273200413.c73728f0c060569c44ade8188c6ea84f., storeName=cf1, fileCount=3, fileSize=36.4 M, priority=6, time=47235040180666727; duration=1sec
- 2014-07-10 15:42:04,976 WARN [RpcServer.reader=8,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 15:47:51,269 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. after a delay of 18236
- 2014-07-10 15:48:01,269 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. after a delay of 11890
- 2014-07-10 15:48:09,531 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457155785, memsize=402.3 K, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/ded050ffeacd83613a0e3db7baaee4f0/.tmp/8a7cd3e192ce4e18aa6d21e50aa498b9
- 2014-07-10 15:48:09,549 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/ded050ffeacd83613a0e3db7baaee4f0/cf1/8a7cd3e192ce4e18aa6d21e50aa498b9, entries=1866, sequenceid=1457155785, filesize=80.0 K
- 2014-07-10 15:48:09,550 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~405.5 K/415280, currentsize=0/0 for region url_guid,781e82be88ce94de7bb2dee9f3436e97,1392692827380.ded050ffeacd83613a0e3db7baaee4f0. in 44ms, sequenceid=1457155785, compaction requested=true
- 2014-07-10 15:51:34,707 WARN [RpcServer.reader=1,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 15:51:41,279 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. after a delay of 14737
- 2014-07-10 15:51:51,279 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. after a delay of 9302
- 2014-07-10 15:51:56,503 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457165032, memsize=5.5 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp/182769ef00154654b0780f0404adb5dd
- 2014-07-10 15:51:56,516 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 182769ef00154654b0780f0404adb5dd
- 2014-07-10 15:51:56,662 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457165032, memsize=16.4 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp/159247950370406ea25ed8afe14014f7
- 2014-07-10 15:51:56,690 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 159247950370406ea25ed8afe14014f7
- 2014-07-10 15:51:56,696 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 182769ef00154654b0780f0404adb5dd
- 2014-07-10 15:51:56,697 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/detail/182769ef00154654b0780f0404adb5dd, entries=5453, sequenceid=1457165032, filesize=3.0 M
- 2014-07-10 15:51:56,704 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 159247950370406ea25ed8afe14014f7
- 2014-07-10 15:51:56,704 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/list/159247950370406ea25ed8afe14014f7, entries=79594, sequenceid=1457165032, filesize=2.5 M
- 2014-07-10 15:51:56,704 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~24.0 M/25116824, currentsize=2.2 K/2248 for region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. in 686ms, sequenceid=1457165032, compaction requested=true
- 2014-07-10 15:51:56,704 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671.
- 2014-07-10 15:51:56,705 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. into tmpdir=hdfs://master:9000/hbase/data/default/article/96d761959641d2ccc69c224c06b52671/.tmp, totalSize=106.9 M
- 2014-07-10 15:51:59,859 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 80d487f5655647dc9e2d0ef776044968
- 2014-07-10 15:51:59,865 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 80d487f5655647dc9e2d0ef776044968
- 2014-07-10 15:51:59,900 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671. into 80d487f5655647dc9e2d0ef776044968(size=106.9 M), total size for store is 9.8 G. This selection was in queue for 0sec, and took 3sec to execute.
- 2014-07-10 15:51:59,900 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,804d25f09ba3603572620c7c42869e60,1388454093837.96d761959641d2ccc69c224c06b52671., storeName=detail, fileCount=3, fileSize=106.9 M, priority=5, time=47236189350542636; duration=3sec
- 2014-07-10 15:53:31,280 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. after a delay of 7346
- 2014-07-10 15:53:38,718 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457169121, memsize=2.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp/ba363d6dc36c4ddeb7fc922290de10ab
- 2014-07-10 15:53:38,728 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba363d6dc36c4ddeb7fc922290de10ab
- 2014-07-10 15:53:38,826 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457169121, memsize=8.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/.tmp/7e9c2a389ce04e6684cfbae82817b392
- 2014-07-10 15:53:38,832 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e9c2a389ce04e6684cfbae82817b392
- 2014-07-10 15:53:38,841 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba363d6dc36c4ddeb7fc922290de10ab
- 2014-07-10 15:53:38,841 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/detail/ba363d6dc36c4ddeb7fc922290de10ab, entries=2722, sequenceid=1457169121, filesize=1.5 M
- 2014-07-10 15:53:38,849 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e9c2a389ce04e6684cfbae82817b392
- 2014-07-10 15:53:38,849 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/f6b3920b4bb1f63832b3a0529c639e6f/list/7e9c2a389ce04e6684cfbae82817b392, entries=40422, sequenceid=1457169121, filesize=1.3 M
- 2014-07-10 15:53:38,849 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~12.2 M/12811664, currentsize=624/624 for region article,,1392692828476.f6b3920b4bb1f63832b3a0529c639e6f. in 223ms, sequenceid=1457169121, compaction requested=true
- 2014-07-10 15:54:41,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 21215
- 2014-07-10 15:54:51,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 7800
- 2014-07-10 15:55:01,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. after a delay of 6192
- 2014-07-10 15:55:02,556 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457172299, memsize=1.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/b4275622def9b40e84be8b79ff61dfef/.tmp/8bb9afeb81b04a23943a7e5b7d6d6cfa
- 2014-07-10 15:55:02,570 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/b4275622def9b40e84be8b79ff61dfef/cf1/8bb9afeb81b04a23943a7e5b7d6d6cfa, entries=7902, sequenceid=1457172299, filesize=334.2 K
- 2014-07-10 15:55:02,570 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.7 M/1756528, currentsize=0/0 for region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. in 74ms, sequenceid=1457172299, compaction requested=true
- 2014-07-10 15:55:02,570 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef.
- 2014-07-10 15:55:02,570 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. into tmpdir=hdfs://master:9000/hbase/data/default/url_guid/b4275622def9b40e84be8b79ff61dfef/.tmp, totalSize=16.0 M
- 2014-07-10 15:55:03,309 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef. into 566a5db923774655be2a597fca5e5ed9(size=16.0 M), total size for store is 2.3 G. This selection was in queue for 0sec, and took 0sec to execute.
- 2014-07-10 15:55:03,309 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=url_guid,e00998f7542fadc8e0b9c1f260780d16,1386609914459.b4275622def9b40e84be8b79ff61dfef., storeName=cf1, fileCount=3, fileSize=16.0 M, priority=6, time=47236375216512376; duration=0sec
- 2014-07-10 15:55:21,282 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. after a delay of 19539
- 2014-07-10 15:55:21,282 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. after a delay of 5632
- 2014-07-10 15:55:27,369 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457173222, memsize=2.6 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp/8e6fd6f951db41389d93d41bfb0883d6
- 2014-07-10 15:55:27,375 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e6fd6f951db41389d93d41bfb0883d6
- 2014-07-10 15:55:27,482 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457173222, memsize=8.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp/a3f3759828574900a9f71851696abdf6
- 2014-07-10 15:55:27,490 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3f3759828574900a9f71851696abdf6
- 2014-07-10 15:55:27,497 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8e6fd6f951db41389d93d41bfb0883d6
- 2014-07-10 15:55:27,497 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/detail/8e6fd6f951db41389d93d41bfb0883d6, entries=2701, sequenceid=1457173222, filesize=1.5 M
- 2014-07-10 15:55:27,503 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3f3759828574900a9f71851696abdf6
- 2014-07-10 15:55:27,503 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/list/a3f3759828574900a9f71851696abdf6, entries=40275, sequenceid=1457173222, filesize=1.3 M
- 2014-07-10 15:55:27,503 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~12.0 M/12566568, currentsize=208/208 for region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. in 589ms, sequenceid=1457173222, compaction requested=true
- 2014-07-10 15:55:27,504 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7.
- 2014-07-10 15:55:27,504 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. into tmpdir=hdfs://master:9000/hbase/data/default/article/b3d7c510a7bbe6e2c0acb162f9ff68c7/.tmp, totalSize=60.2 M
- 2014-07-10 15:55:30,884 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d2ad08a15f2141d09c26c1dcadf24ba6
- 2014-07-10 15:55:30,890 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d2ad08a15f2141d09c26c1dcadf24ba6
- 2014-07-10 15:55:30,918 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7. into d2ad08a15f2141d09c26c1dcadf24ba6(size=59.3 M), total size for store is 2.4 G. This selection was in queue for 0sec, and took 3sec to execute.
- 2014-07-10 15:55:30,918 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,3841557549dce31b0c552c81f0b4ad04,1395916557016.b3d7c510a7bbe6e2c0acb162f9ff68c7., storeName=list, fileCount=3, fileSize=60.2 M, priority=6, time=47236400149832129; duration=3sec
- 2014-07-10 15:55:31,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. after a delay of 14645
- 2014-07-10 15:55:40,894 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457173764, memsize=5.2 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp/656f372f76ff4e6aaff6e5b0db2a4c71
- 2014-07-10 15:55:40,899 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 656f372f76ff4e6aaff6e5b0db2a4c71
- 2014-07-10 15:55:41,052 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457173764, memsize=16.5 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp/e8d3191731d64e9bbd964a9f907cda9a
- 2014-07-10 15:55:41,057 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8d3191731d64e9bbd964a9f907cda9a
- 2014-07-10 15:55:41,092 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 656f372f76ff4e6aaff6e5b0db2a4c71
- 2014-07-10 15:55:41,092 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/detail/656f372f76ff4e6aaff6e5b0db2a4c71, entries=5439, sequenceid=1457173764, filesize=2.8 M
- 2014-07-10 15:55:41,100 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8d3191731d64e9bbd964a9f907cda9a
- 2014-07-10 15:55:41,100 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/list/e8d3191731d64e9bbd964a9f907cda9a, entries=80127, sequenceid=1457173764, filesize=2.6 M
- 2014-07-10 15:55:41,100 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~23.9 M/25052640, currentsize=616/616 for region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. in 278ms, sequenceid=1457173764, compaction requested=true
- 2014-07-10 15:55:41,100 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b.
- 2014-07-10 15:55:41,100 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. into tmpdir=hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp, totalSize=44.8 M
- 2014-07-10 15:55:41,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. after a delay of 10137
- 2014-07-10 15:55:42,305 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82b457446c6b40ce90621dcc021c2825
- 2014-07-10 15:55:42,312 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82b457446c6b40ce90621dcc021c2825
- 2014-07-10 15:55:42,354 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. into 82b457446c6b40ce90621dcc021c2825(size=44.7 M), total size for store is 9.8 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 15:55:42,354 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b., storeName=detail, fileCount=3, fileSize=44.8 M, priority=6, time=47236413746632964; duration=1sec
- 2014-07-10 15:55:42,354 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b.
- 2014-07-10 15:55:42,354 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. into tmpdir=hdfs://master:9000/hbase/data/default/article/55f82857c8408b1af3d671dacfce0b9b/.tmp, totalSize=106.3 M
- 2014-07-10 15:55:47,954 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1617cdbfaf5e4e79b1dbfe3ed7f9e3cf
- 2014-07-10 15:55:47,960 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1617cdbfaf5e4e79b1dbfe3ed7f9e3cf
- 2014-07-10 15:55:47,989 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b. into 1617cdbfaf5e4e79b1dbfe3ed7f9e3cf(size=104.4 M), total size for store is 4.8 G. This selection was in queue for 0sec, and took 5sec to execute.
- 2014-07-10 15:55:47,989 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,dffe0300a6722922f2c4c93b352c3c54,1394100183332.55f82857c8408b1af3d671dacfce0b9b., storeName=list, fileCount=3, fileSize=106.3 M, priority=6, time=47236415000487291; duration=5sec
- 2014-07-10 15:55:51,282 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. after a delay of 20524
- 2014-07-10 15:55:51,473 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457174217, memsize=2.6 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp/14cde3aab9ed4529a1ce3e0f6957974e
- 2014-07-10 15:55:51,508 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 14cde3aab9ed4529a1ce3e0f6957974e
- 2014-07-10 15:55:51,589 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457174217, memsize=8.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/.tmp/e78f57fd6fbd4c929b48cc51b7ba08e6
- 2014-07-10 15:55:51,593 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e78f57fd6fbd4c929b48cc51b7ba08e6
- 2014-07-10 15:55:51,600 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 14cde3aab9ed4529a1ce3e0f6957974e
- 2014-07-10 15:55:51,600 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/detail/14cde3aab9ed4529a1ce3e0f6957974e, entries=2765, sequenceid=1457174217, filesize=1.4 M
- 2014-07-10 15:55:51,607 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e78f57fd6fbd4c929b48cc51b7ba08e6
- 2014-07-10 15:55:51,608 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/9dedc8916c06fdaf2e3406106daa6d72/list/e78f57fd6fbd4c929b48cc51b7ba08e6, entries=40382, sequenceid=1457174217, filesize=1.3 M
- 2014-07-10 15:55:51,608 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~12.0 M/12534800, currentsize=0/0 for region article,303b80a67cbf4b522043c1c9b8bc061f,1395916557016.9dedc8916c06fdaf2e3406106daa6d72. in 190ms, sequenceid=1457174217, compaction requested=true
- 2014-07-10 15:56:21,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. after a delay of 21059
- 2014-07-10 15:56:31,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. after a delay of 17617
- 2014-07-10 15:56:41,281 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. after a delay of 17187
- 2014-07-10 15:56:42,370 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457176199, memsize=1.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp/76907471ecfe467da14efe19691269fe
- 2014-07-10 15:56:42,376 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76907471ecfe467da14efe19691269fe
- 2014-07-10 15:56:42,442 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457176199, memsize=4.2 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp/18f1dd75767a4fbfb5304e90bf7f22f0
- 2014-07-10 15:56:42,447 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 18f1dd75767a4fbfb5304e90bf7f22f0
- 2014-07-10 15:56:42,457 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76907471ecfe467da14efe19691269fe
- 2014-07-10 15:56:42,457 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/detail/76907471ecfe467da14efe19691269fe, entries=1396, sequenceid=1457176199, filesize=786.7 K
- 2014-07-10 15:56:42,465 INFO [MemStoreFlusher.0] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 18f1dd75767a4fbfb5304e90bf7f22f0
- 2014-07-10 15:56:42,466 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/list/18f1dd75767a4fbfb5304e90bf7f22f0, entries=20390, sequenceid=1457176199, filesize=672.9 K
- 2014-07-10 15:56:42,466 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~6.1 M/6414424, currentsize=416/416 for region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. in 126ms, sequenceid=1457176199, compaction requested=true
- 2014-07-10 15:56:42,466 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on detail in region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76.
- 2014-07-10 15:56:42,466 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in detail of article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. into tmpdir=hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp, totalSize=22.8 M
- 2014-07-10 15:56:43,279 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 551df2177b2b467c85cfa06f155ffd22
- 2014-07-10 15:56:43,292 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 551df2177b2b467c85cfa06f155ffd22
- 2014-07-10 15:56:43,336 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in detail of article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. into 551df2177b2b467c85cfa06f155ffd22(size=22.8 M), total size for store is 2.4 G. This selection was in queue for 0sec, and took 0sec to execute.
- 2014-07-10 15:56:43,336 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76., storeName=detail, fileCount=3, fileSize=22.8 M, priority=6, time=47236475112373346; duration=0sec
- 2014-07-10 15:56:43,336 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on list in region article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76.
- 2014-07-10 15:56:43,337 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in list of article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. into tmpdir=hdfs://master:9000/hbase/data/default/article/21f50e59624cda1b9cb00645f23a5c76/.tmp, totalSize=26.8 M
- 2014-07-10 15:56:44,726 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aebf489c984b41ea90543368d3c857c7
- 2014-07-10 15:56:44,759 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aebf489c984b41ea90543368d3c857c7
- 2014-07-10 15:56:44,787 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in list of article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76. into aebf489c984b41ea90543368d3c857c7(size=26.3 M), total size for store is 1.2 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 15:56:44,788 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=article,684401e8f32d9e37972147a5a70e0de0,1397098375738.21f50e59624cda1b9cb00645f23a5c76., storeName=list, fileCount=3, fileSize=26.8 M, priority=6, time=47236475982722733; duration=1sec
- 2014-07-10 15:57:58,960 WARN [RpcServer.reader=4,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 15:58:51,282 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. after a delay of 18967
- 2014-07-10 15:59:01,282 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. after a delay of 9675
- 2014-07-10 15:59:10,296 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457181674, memsize=2.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/c6fdec019ff1dba3f2137ff724a191cb/.tmp/66691e269b7840b6827b9c3a99462697
- 2014-07-10 15:59:10,308 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/c6fdec019ff1dba3f2137ff724a191cb/cf1/66691e269b7840b6827b9c3a99462697, entries=11367, sequenceid=1457181674, filesize=367.8 K
- 2014-07-10 15:59:10,308 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~2.4 M/2505536, currentsize=0/0 for region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. in 59ms, sequenceid=1457181674, compaction requested=true
- 2014-07-10 15:59:10,309 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb.
- 2014-07-10 15:59:10,309 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/c6fdec019ff1dba3f2137ff724a191cb/.tmp, totalSize=30.7 M
- 2014-07-10 15:59:11,974 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb. into 5e755b4870f64ec9b4e27368eb8c00ab(size=30.7 M), total size for store is 2.2 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 15:59:11,974 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,40484e9bddd7954f44c35bdf134e4dee\x00\x1D\x17\xE6,1389119155036.c6fdec019ff1dba3f2137ff724a191cb., storeName=cf1, fileCount=3, fileSize=30.7 M, priority=6, time=47236622954697274; duration=1sec
- 2014-07-10 15:59:41,287 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. after a delay of 3239
- 2014-07-10 15:59:44,559 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457183158, memsize=1.7 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/url_guid/1af7ec7e41ac505904377d5ea9971c07/.tmp/ca06ce769ae24ad1909935ba2e6583bc
- 2014-07-10 15:59:44,571 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/url_guid/1af7ec7e41ac505904377d5ea9971c07/cf1/ca06ce769ae24ad1909935ba2e6583bc, entries=7846, sequenceid=1457183158, filesize=329.3 K
- 2014-07-10 15:59:44,571 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.7 M/1745464, currentsize=0/0 for region url_guid,a017c0ff6d49bef5fbdb34806fa09ee1,1386682300480.1af7ec7e41ac505904377d5ea9971c07. in 45ms, sequenceid=1457183158, compaction requested=true
- 2014-07-10 16:00:41,287 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. after a delay of 14287
- 2014-07-10 16:00:51,287 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. after a delay of 21648
- 2014-07-10 16:00:55,655 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457186081, memsize=2.3 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/.tmp/8dea5d8039fa400ebd2aca5b3850ab69
- 2014-07-10 16:00:55,669 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/cf1/8dea5d8039fa400ebd2aca5b3850ab69, entries=11123, sequenceid=1457186081, filesize=366.9 K
- 2014-07-10 16:00:55,670 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~2.3 M/2458784, currentsize=1.1 K/1096 for region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. in 94ms, sequenceid=1457186081, compaction requested=true
- 2014-07-10 16:00:55,671 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541.
- 2014-07-10 16:00:55,671 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/e5472d54066f93e8591fa51fe4f27541/.tmp, totalSize=30.4 M
- 2014-07-10 16:00:57,720 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541. into 83e456282f644383b81a4e6256ab377d(size=30.3 M), total size for store is 2.3 G. This selection was in queue for 0sec, and took 2sec to execute.
- 2014-07-10 16:00:57,720 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,204644f25ca678a2fe64b28ade4f1b0c\x00\x0A\x1C\xDE,1388226171288.e5472d54066f93e8591fa51fe4f27541., storeName=cf1, fileCount=3, fileSize=30.4 M, priority=6, time=47236728316608787; duration=2sec
- 2014-07-10 16:04:12,263 WARN [RpcServer.reader=5,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 16:05:53,921 INFO [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404975953759 with entries=137754, filesize=59.4 M; new WAL /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404979553888
- 2014-07-10 16:05:53,922 INFO [regionserver60020.logRoller] wal.FSHLog: moving old hlog file /hbase/WALs/slave2,60020,1397552649456/slave2%2C60020%2C1397552649456.1404972353650 whose highest sequenceid is 1457059526 to /hbase/oldWALs/slave2%2C60020%2C1397552649456.1404972353650
- 2014-07-10 16:21:21,343 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 10416
- 2014-07-10 16:21:31,344 INFO [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. after a delay of 9385
- 2014-07-10 16:21:31,804 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1457232652, memsize=1.2 M, hasBloomFilter=true, into tmp file hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/.tmp/b0935301a46142bc93512235695b8922
- 2014-07-10 16:21:31,820 INFO [MemStoreFlusher.0] regionserver.HStore: Added hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/cf1/b0935301a46142bc93512235695b8922, entries=6009, sequenceid=1457232652, filesize=196.1 K
- 2014-07-10 16:21:31,820 INFO [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.3 M/1319152, currentsize=0/0 for region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. in 60ms, sequenceid=1457232652, compaction requested=true
- 2014-07-10 16:21:31,821 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HRegion: Starting compaction on cf1 in region visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf.
- 2014-07-10 16:21:31,821 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Starting compaction of 3 file(s) in cf1 of visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. into tmpdir=hdfs://master:9000/hbase/data/default/visited_article/b3a19a2f85e170275c4f141a18a08abf/.tmp, totalSize=25.0 M
- 2014-07-10 16:21:33,691 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.HStore: Completed compaction of 3 file(s) in cf1 of visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf. into 33de67666f654214813c4358093795f1(size=25.0 M), total size for store is 1.1 G. This selection was in queue for 0sec, and took 1sec to execute.
- 2014-07-10 16:21:33,691 INFO [regionserver60020-smallCompactions-1397552664633] regionserver.CompactSplitThread: Completed compaction: Request = regionName=visited_article,600fcc228b36625c1904ba4f11e6207a\x00\x17o\xD8,1395813592646.b3a19a2f85e170275c4f141a18a08abf., storeName=cf1, fileCount=3, fileSize=25.0 M, priority=6, time=47237964467320480; duration=1sec
- 2014-07-10 16:22:45,550 ERROR [RpcServer.handler=6,port=60020] ipc.RpcServer: Unexpected throwable object
- java.lang.NullPointerException
- 2014-07-10 16:24:43,604 ERROR [RpcServer.handler=11,port=60020] ipc.RpcServer: Unexpected throwable object
- java.lang.NullPointerException
- 2014-07-10 16:26:40,410 WARN [RpcServer.reader=2,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 16:26:43,723 WARN [RpcServer.reader=5,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 16:26:43,731 WARN [RpcServer.handler=0,port=60020] ipc.RpcServer: RpcServer.respondercallId: 740303 service: ClientService methodName: Get size: 161 connection: 115.238.188.210:58524: output error
- 2014-07-10 16:26:43,731 WARN [RpcServer.handler=0,port=60020] ipc.RpcServer: RpcServer.handler=0,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
- 2014-07-10 16:27:35,778 WARN [RpcServer.reader=6,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
- java.io.IOException: Connection reset by peer
- at sun.nio.ch.FileDispatcher.read0(Native Method)
- at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
- at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
- at sun.nio.ch.IOUtil.read(IOUtil.java:224)
- at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
- at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2404)
- at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1425)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:780)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:568)
- at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:543)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
- at java.lang.Thread.run(Thread.java:701)
- 2014-07-10 16:28:40,062 INFO [610366995@qtp-1922766448-7] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
- RS Queue:
- ===========================================================
- Compaction/Split Queue summary: compaction_queue=(0:0), split_queue=0, merge_queue=0
- Compaction/Split Queue dump:
- LargeCompation Queue:
- SmallCompation Queue:
- Split Queue:
- Region Merge Queue:
- Flush Queue summary: flush_queue=0
- Flush Queue Queue dump:
- Flush Queue: