Untitled

pasted by a guest, Apr 22nd, 2014
2014-04-21 20:31:27,173 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: moving old hlog file /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398079948947 whose highest sequenceid is 10116829 to /hbase/.oldlogs/app-hbase-2%2C60020%2C1392084195006.1398079948947
2014-04-21 20:31:27,177 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: moving old hlog file /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398080204321 whose highest sequenceid is 10424921 to /hbase/.oldlogs/app-hbase-2%2C60020%2C1392084195006.1398080204321
2014-04-21 20:31:27,179 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: moving old hlog file /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398080382942 whose highest sequenceid is 10730232 to /hbase/.oldlogs/app-hbase-2%2C60020%2C1392084195006.1398080382942
2014-04-21 20:31:27,182 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: moving old hlog file /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398080566769 whose highest sequenceid is 16754626 to /hbase/.oldlogs/app-hbase-2%2C60020%2C1392084195006.1398080566769
2014-04-21 20:32:28,614 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=1.6 GB, free=1.12 GB, max=2.72 GB, blocks=25639, accesses=48736031, hits=47290142, hitRatio=97.03%, , cachingAccesses=47268022, cachingHits=47155834, cachingHitsRatio=99.76%, , evictions=0, evicted=85470, evictedPerRun=Infinity
2014-04-21 20:33:03,646 DEBUG org.apache.hadoop.hbase.regionserver.LogRoller: HLog roll requested
2014-04-21 20:33:03,646 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:33:03,646 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:33:03,649 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
2014-04-21 20:33:03,649 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://192.168.10.48:8020/hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083583646, syncFs=true, hflush=false, compression=false
2014-04-21 20:33:03,649 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: cleanupCurrentWriter waiting for transactions to get synced total 18513141 synced till here 18513130
2014-04-21 20:33:03,660 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083487152, entries=275189, filesize=63756148. for /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083583646
2014-04-21 20:33:34,169 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010.
2014-04-21 20:33:34,178 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Attempting to transition node 728c510f3a60b25ec6085803205b6010 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-04-21 20:33:34,183 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Successfully transitioned node 728c510f3a60b25ec6085803205b6010 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-04-21 20:33:34,183 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Opening region: {NAME => 'vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010.', STARTKEY => '', ENDKEY => '99999wgaxlm.cn.roowei.com', ENCODED => 728c510f3a60b25ec6085803205b6010,}
2014-04-21 20:33:34,184 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2014-04-21 20:33:34,184 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Instantiated vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010.
2014-04-21 20:33:34,203 INFO org.apache.hadoop.hbase.regionserver.Store: time to purge deletes set to 0ms in store cf
2014-04-21 20:33:34,205 INFO org.apache.hadoop.hbase.regionserver.Store: hbase.hstore.compaction.min = 3
2014-04-21 20:33:34,234 DEBUG org.apache.hadoop.hbase.regionserver.Store: loaded hdfs://192.168.10.48:8020/hbase/vc2.host_stat/728c510f3a60b25ec6085803205b6010/cf/fb5206f16a40488d97cd809a436eef3a, isReference=false, isBulkLoadResult=false, seqid=20631663, majorCompaction=false
2014-04-21 20:33:34,251 DEBUG org.apache.hadoop.hbase.regionserver.Store: loaded hdfs://192.168.10.48:8020/hbase/vc2.host_stat/728c510f3a60b25ec6085803205b6010/cf/fda9f2d8334e4993a66b8530654162eb, isReference=false, isBulkLoadResult=false, seqid=20553171, majorCompaction=true
2014-04-21 20:33:34,255 INFO org.apache.hadoop.hbase.regionserver.HRegion: Onlined vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010.; next sequenceid=20631664
2014-04-21 20:33:34,255 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Attempting to transition node 728c510f3a60b25ec6085803205b6010 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2014-04-21 20:33:34,261 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Successfully transitioned node 728c510f3a60b25ec6085803205b6010 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2014-04-21 20:33:34,262 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Post open deploy tasks for region=vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010., daughter=false
2014-04-21 20:33:34,269 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Updated row vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010. with server=app-hbase-2,60020,1392084195006
2014-04-21 20:33:34,269 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Done with post open deploy task for region=vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010., daughter=false
2014-04-21 20:33:34,270 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Attempting to transition node 728c510f3a60b25ec6085803205b6010 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-04-21 20:33:34,276 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x1441eaf4e9d0003-0x1441eaf4e9d0003-0x1441eaf4e9d0003 Successfully transitioned node 728c510f3a60b25ec6085803205b6010 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-04-21 20:33:34,276 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: region transitioned to opened in zookeeper: {NAME => 'vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010.', STARTKEY => '', ENDKEY => '99999wgaxlm.cn.roowei.com', ENCODED => 728c510f3a60b25ec6085803205b6010,}, server: app-hbase-2,60020,1392084195006
2014-04-21 20:33:34,276 DEBUG org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opened vc2.host_stat,,1398083567973.728c510f3a60b25ec6085803205b6010. on server:app-hbase-2,60020,1392084195006
2014-04-21 20:33:59,292 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_5571989386432628923_202164
java.io.IOException: Bad response 1 for block blk_5571989386432628923_202164 from datanode 192.168.10.45:50010
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2977)

2014-04-21 20:33:59,293 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_5571989386432628923_202164 bad datanode[2] 192.168.10.45:50010
2014-04-21 20:33:59,293 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_5571989386432628923_202164 in pipeline 192.168.10.46:50010, 192.168.10.49:50010, 192.168.10.45:50010: bad datanode 192.168.10.45:50010
2014-04-21 20:33:59,328 WARN org.apache.hadoop.hbase.regionserver.wal.HLog: HDFS pipeline error detected. Found 2 replicas but expecting no less than 3 replicas. Requesting close of hlog.
2014-04-21 20:33:59,329 DEBUG org.apache.hadoop.hbase.regionserver.LogRoller: HLog roll requested
2014-04-21 20:33:59,329 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:33:59,329 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:33:59,353 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
2014-04-21 20:33:59,353 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://192.168.10.48:8020/hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083639329, syncFs=true, hflush=false, compression=false
2014-04-21 20:33:59,353 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: cleanupCurrentWriter waiting for transactions to get synced total 18633042 synced till here 18633039
2014-04-21 20:33:59,357 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083583646, entries=119901, filesize=26019745. for /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083639329
2014-04-21 20:34:25,737 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -5309054457271248587 lease expired on region vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab.
2014-04-21 20:34:25,852 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -8256804226731677412 lease expired on region vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea.
2014-04-21 20:35:36,455 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f.
2014-04-21 20:35:36,455 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f., current region memstore size 128.0m
2014-04-21 20:35:36,460 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f., commencing wait for mvcc, flushsize=134220120
2014-04-21 20:35:36,460 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
2014-04-21 20:35:36,548 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/76834e957c9640ec85bff549cb8345e4 with permission=rwxrwxrwx
2014-04-21 20:35:36,549 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:35:36,549 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:35:36,551 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
2014-04-21 20:35:36,551 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/76834e957c9640ec85bff549cb8345e4: CompoundBloomFilterWriter
2014-04-21 20:35:37,398 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/76834e957c9640ec85bff549cb8345e4)
2014-04-21 20:35:37,398 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=24625882, memsize=128.0m, into tmp file hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/76834e957c9640ec85bff549cb8345e4
2014-04-21 20:35:37,409 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/76834e957c9640ec85bff549cb8345e4 to hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4
2014-04-21 20:35:37,419 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4, entries=656900, sequenceid=24625882, filesize=41.1m
2014-04-21 20:35:37,426 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134220120, currentsize=168.5k/172584 for region vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f. in 971ms, sequenceid=24625882, compaction requested=true
2014-04-21 20:35:37,426 DEBUG org.apache.hadoop.hbase.regionserver.Store: d75f6efce1eccc4d90883435ba503a1f - cf: Initiating minorcompaction
2014-04-21 20:35:37,426 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f., storeName=cf, fileCount=3, fileSize=123.4m (41.2m, 41.1m, 41.1m), priority=3, time=6074715933632445; Because: regionserver60020.cacheFlusher; compaction_queue=(0:0), split_queue=0
2014-04-21 20:35:37,427 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on cf in region vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f.
2014-04-21 20:35:37,427 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3 file(s) in cf of vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f. into tmpdir=hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp, seqid=24625882, totalSize=123.4m
2014-04-21 20:35:37,427 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6, keycount=656347, bloomtype=NONE, size=41.2m, encoding=NONE
2014-04-21 20:35:37,427 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a, keycount=657203, bloomtype=NONE, size=41.1m, encoding=NONE
2014-04-21 20:35:37,427 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4, keycount=656900, bloomtype=NONE, size=41.1m, encoding=NONE
2014-04-21 20:35:37,431 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/d1512c4397a84e6284d1a398f4902562 with permission=rwxrwxrwx
2014-04-21 20:35:37,432 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:35:37,432 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:35:37,434 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
2014-04-21 20:35:37,434 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/d1512c4397a84e6284d1a398f4902562: CompoundBloomFilterWriter
2014-04-21 20:35:42,252 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/d1512c4397a84e6284d1a398f4902562)
2014-04-21 20:35:42,263 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/.tmp/d1512c4397a84e6284d1a398f4902562 to hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/d1512c4397a84e6284d1a398f4902562
2014-04-21 20:35:42,279 DEBUG org.apache.hadoop.hbase.regionserver.Store: Removing store files after compaction...
2014-04-21 20:35:42,281 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving compacted store files.
2014-04-21 20:35:42,281 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Starting to archive files:[class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4]
2014-04-21 20:35:42,281 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: moving files to the archive directory: hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf
2014-04-21 20:35:42,282 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6
2014-04-21 20:35:42,283 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6, free to archive original file.
2014-04-21 20:35:42,288 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/419a6748ab5646c3a310b2735f6ea3c6
2014-04-21 20:35:42,288 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a
2014-04-21 20:35:42,289 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a, free to archive original file.
2014-04-21 20:35:42,296 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/f7fccbb33ace4c51b9ab980974a0455a
2014-04-21 20:35:42,296 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4
2014-04-21 20:35:42,296 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4, free to archive original file.
2014-04-21 20:35:42,303 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.out_link/d75f6efce1eccc4d90883435ba503a1f/cf/76834e957c9640ec85bff549cb8345e4
2014-04-21 20:35:42,303 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 3 file(s) in cf of vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f. into d1512c4397a84e6284d1a398f4902562, size=121.5m; total size for store is 283.3m
2014-04-21 20:35:42,303 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=vc2.out_link,\x9B\xF8\xB2\xD8\xE8B\x1B\x06\xB8'\xE8OM\xC9\xDBSKTqG\x0A\xFBi\x03\xF1!&\xED\xB7s\x9C\x89,1398081154562.d75f6efce1eccc4d90883435ba503a1f., storeName=cf, fileCount=3, fileSize=123.4m, priority=3, time=6074715933632445; duration=4sec
2014-04-21 20:35:42,303 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(0:0), split_queue=0
2014-04-21 20:35:45,267 DEBUG org.apache.hadoop.hbase.regionserver.LogRoller: HLog roll requested
2014-04-21 20:35:45,268 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:35:45,269 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:35:45,271 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
2014-04-21 20:35:45,271 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://192.168.10.48:8020/hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083745268, syncFs=true, hflush=false, compression=false
2014-04-21 20:35:45,271 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: cleanupCurrentWriter waiting for transactions to get synced total 18923194 synced till here 18923176
2014-04-21 20:35:45,279 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083639329, entries=290152, filesize=63756779. for /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083745268
2014-04-21 20:37:20,472 DEBUG org.apache.hadoop.hbase.regionserver.LogRoller: HLog roll requested
2014-04-21 20:37:20,473 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:37:20,473 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:37:20,477 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
2014-04-21 20:37:20,477 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://192.168.10.48:8020/hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083840473, syncFs=true, hflush=false, compression=false
2014-04-21 20:37:20,477 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: cleanupCurrentWriter waiting for transactions to get synced total 19215812 synced till here 19215795
2014-04-21 20:37:20,486 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083745268, entries=292618, filesize=63758845. for /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083840473
2014-04-21 20:37:28,643 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=1.58 GB, free=1.14 GB, max=2.72 GB, blocks=25267, accesses=52680882, hits=51233387, hitRatio=97.25%, , cachingAccesses=51201545, cachingHits=51088929, cachingHitsRatio=99.78%, , evictions=0, evicted=86270, evictedPerRun=Infinity
2014-04-21 20:38:36,449 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea.
2014-04-21 20:38:36,449 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea., current region memstore size 128.0m
2014-04-21 20:38:36,454 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea., commencing wait for mvcc, flushsize=134223856
2014-04-21 20:38:36,454 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
2014-04-21 20:38:36,572 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/de66388a8d114b7782d8a724b07e8466 with permission=rwxrwxrwx
2014-04-21 20:38:36,574 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:38:36,574 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:38:36,576 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
2014-04-21 20:38:36,576 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/de66388a8d114b7782d8a724b07e8466: CompoundBloomFilterWriter
2014-04-21 20:38:37,362 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/de66388a8d114b7782d8a724b07e8466)
2014-04-21 20:38:37,363 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=25177187, memsize=128.0m, into tmp file hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/de66388a8d114b7782d8a724b07e8466
2014-04-21 20:38:37,373 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/de66388a8d114b7782d8a724b07e8466 to hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466
2014-04-21 20:38:37,392 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466, entries=703994, sequenceid=25177187, filesize=35.2m
2014-04-21 20:38:37,398 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134223856, currentsize=185.2k/189664 for region vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea. in 949ms, sequenceid=25177187, compaction requested=true
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.Store: a3e447607044de831ab6e1fcb955dbea - cf: Initiating minorcompaction
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea., storeName=cf, fileCount=4, fileSize=327.8m (153.7m, 103.6m, 35.2m, 35.2m), priority=3, time=6074895906020401; Because: regionserver60020.cacheFlusher; compaction_queue=(0:1), split_queue=0
2014-04-21 20:38:37,399 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on cf in region vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea.
2014-04-21 20:38:37,399 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in cf of vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea. into tmpdir=hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp, seqid=25177187, totalSize=327.8m
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07, keycount=3129618, bloomtype=NONE, size=153.7m, encoding=NONE, earliestPutTs=1398075826952
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa, keycount=2111830, bloomtype=NONE, size=103.6m, encoding=NONE, earliestPutTs=1398075826952
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23, keycount=704186, bloomtype=NONE, size=35.2m, encoding=NONE, earliestPutTs=1398075826952
2014-04-21 20:38:37,399 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466, keycount=703994, bloomtype=NONE, size=35.2m, encoding=NONE, earliestPutTs=1398075826952
2014-04-21 20:38:37,403 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/fb6f283a60e148feb3c7862e396befeb with permission=rwxrwxrwx
2014-04-21 20:38:37,404 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:38:37,404 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:38:37,406 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
2014-04-21 20:38:37,406 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/fb6f283a60e148feb3c7862e396befeb: CompoundBloomFilterWriter
2014-04-21 20:38:40,421 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab.
2014-04-21 20:38:40,421 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab., current region memstore size 128.0m
2014-04-21 20:38:40,421 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab., commencing wait for mvcc, flushsize=134221352
2014-04-21 20:38:40,422 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
2014-04-21 20:38:40,603 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/9c1adc7bad424fcb90dfbc31221f81e6 with permission=rwxrwxrwx
2014-04-21 20:38:40,605 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
2014-04-21 20:38:40,605 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
2014-04-21 20:38:40,607 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
2014-04-21 20:38:40,607 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/9c1adc7bad424fcb90dfbc31221f81e6: CompoundBloomFilterWriter
2014-04-21 20:38:41,831 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/9c1adc7bad424fcb90dfbc31221f81e6)
2014-04-21 20:38:41,831 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=25185892, memsize=128.0m, into tmp file hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/9c1adc7bad424fcb90dfbc31221f81e6
2014-04-21 20:38:41,842 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/9c1adc7bad424fcb90dfbc31221f81e6 to hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6
  146. 2014-04-21 20:38:41,854 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6, entries=703983, sequenceid=25185892, filesize=35.2m
  147. 2014-04-21 20:38:41,859 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134221352, currentsize=252.3k/258336 for region vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab. in 1437ms, sequenceid=25185892, compaction requested=true
  148. 2014-04-21 20:38:41,859 DEBUG org.apache.hadoop.hbase.regionserver.Store: e5856d387a8bfcc206ecd72be10861ab - cf: Initiating minorcompaction
  149. 2014-04-21 20:38:41,859 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab., storeName=cf, fileCount=4, fileSize=327.8m (153.8m, 103.5m, 35.2m, 35.2m), priority=3, time=6074900366167026; Because: regionserver60020.cacheFlusher; compaction_queue=(0:1), split_queue=0
  150. 2014-04-21 20:38:48,656 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/fb6f283a60e148feb3c7862e396befeb)
  151. 2014-04-21 20:38:48,673 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/.tmp/fb6f283a60e148feb3c7862e396befeb to hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/fb6f283a60e148feb3c7862e396befeb
  152. 2014-04-21 20:38:48,692 DEBUG org.apache.hadoop.hbase.regionserver.Store: Removing store files after compaction...
  153. 2014-04-21 20:38:48,696 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving compacted store files.
  154. 2014-04-21 20:38:48,696 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Starting to archive files:[class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466]
  155. 2014-04-21 20:38:48,696 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: moving files to the archive directory: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf
  156. 2014-04-21 20:38:48,699 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07
  157. 2014-04-21 20:38:48,702 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07, free to archive original file.
  158. 2014-04-21 20:38:48,711 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f0e22fa9854c4e58bfbadabc30da3a07
  159. 2014-04-21 20:38:48,711 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa
  160. 2014-04-21 20:38:48,716 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa, free to archive original file.
  161. 2014-04-21 20:38:48,729 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/a92aecfa10f94dcd9ce3902a03bda4aa
  162. 2014-04-21 20:38:48,729 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23
  163. 2014-04-21 20:38:48,731 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23, free to archive original file.
  164. 2014-04-21 20:38:48,742 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/f41bddde18e44bf19f86e0f192653b23
  165. 2014-04-21 20:38:48,742 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466
  166. 2014-04-21 20:38:48,748 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466, free to archive original file.
  167. 2014-04-21 20:38:48,763 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/a3e447607044de831ab6e1fcb955dbea/cf/de66388a8d114b7782d8a724b07e8466
  168. 2014-04-21 20:38:48,763 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 4 file(s) in cf of vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea. into fb6f283a60e148feb3c7862e396befeb, size=325.8m; total size for store is 325.8m
  169. 2014-04-21 20:38:48,763 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=vc2.url_db,\xBF\xB5\xB7\xC6Za\x0Fj1\xC6l\x05m~\xC0\x1E,1398080480004.a3e447607044de831ab6e1fcb955dbea., storeName=cf, fileCount=4, fileSize=327.8m, priority=3, time=6074895906020401; duration=11sec
  170. 2014-04-21 20:38:48,763 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(0:1), split_queue=0
  171. 2014-04-21 20:38:48,764 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on cf in region vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab.
  172. 2014-04-21 20:38:48,764 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in cf of vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab. into tmpdir=hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp, seqid=25185892, totalSize=327.8m
  173. 2014-04-21 20:38:48,764 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2, keycount=3130700, bloomtype=NONE, size=153.8m, encoding=NONE, earliestPutTs=1398075827040
  174. 2014-04-21 20:38:48,764 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8, keycount=2110258, bloomtype=NONE, size=103.5m, encoding=NONE, earliestPutTs=1398075827040
  175. 2014-04-21 20:38:48,764 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976, keycount=704641, bloomtype=NONE, size=35.2m, encoding=NONE, earliestPutTs=1398075827040
  176. 2014-04-21 20:38:48,764 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6, keycount=703983, bloomtype=NONE, size=35.2m, encoding=NONE, earliestPutTs=1398075827040
  177. 2014-04-21 20:38:48,771 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/198bdedfe5074703999aa67d5b5056ac with permission=rwxrwxrwx
  178. 2014-04-21 20:38:48,776 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
  179. 2014-04-21 20:38:48,776 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  180. 2014-04-21 20:38:48,785 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
  181. 2014-04-21 20:38:48,785 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/198bdedfe5074703999aa67d5b5056ac: CompoundBloomFilterWriter
  182. 2014-04-21 20:38:59,810 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/198bdedfe5074703999aa67d5b5056ac)
  183. 2014-04-21 20:38:59,825 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/.tmp/198bdedfe5074703999aa67d5b5056ac to hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/198bdedfe5074703999aa67d5b5056ac
  184. 2014-04-21 20:38:59,838 DEBUG org.apache.hadoop.hbase.regionserver.Store: Removing store files after compaction...
  185. 2014-04-21 20:38:59,839 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving compacted store files.
  186. 2014-04-21 20:38:59,839 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Starting to archive files:[class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6]
  187. 2014-04-21 20:38:59,839 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: moving files to the archive directory: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf
  188. 2014-04-21 20:38:59,840 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2
  189. 2014-04-21 20:38:59,841 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2, free to archive original file.
  190. 2014-04-21 20:38:59,848 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/1734dbc8da654ea7a5ca79b89e8e3fa2
  191. 2014-04-21 20:38:59,848 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8
  192. 2014-04-21 20:38:59,848 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8, free to archive original file.
  193. 2014-04-21 20:38:59,855 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/66ba4b396d3b488b87b53ac27e9aa4d8
  194. 2014-04-21 20:38:59,855 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976
  195. 2014-04-21 20:38:59,856 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976, free to archive original file.
  196. 2014-04-21 20:38:59,860 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/d20b9619eac948bb99bcafd9fbf74976
  197. 2014-04-21 20:38:59,860 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6
  198. 2014-04-21 20:38:59,861 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6, free to archive original file.
  199. 2014-04-21 20:38:59,866 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.url_db/e5856d387a8bfcc206ecd72be10861ab/cf/9c1adc7bad424fcb90dfbc31221f81e6
  200. 2014-04-21 20:38:59,866 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 4 file(s) in cf of vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab. into 198bdedfe5074703999aa67d5b5056ac, size=325.8m; total size for store is 325.8m
  201. 2014-04-21 20:38:59,867 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=vc2.url_db,\xCF\xC1\xE0\xF6\xD6\xD2H\x99\xCF\xD9\xCE\x9B\x81?\x1E\xC4,1398080480004.e5856d387a8bfcc206ecd72be10861ab., storeName=cf, fileCount=4, fileSize=327.8m, priority=3, time=6074900366167026; duration=11sec
  202. 2014-04-21 20:38:59,867 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(0:0), split_queue=0
  203. 2014-04-21 20:39:05,698 DEBUG org.apache.hadoop.hbase.regionserver.LogRoller: HLog roll requested
  204. 2014-04-21 20:39:05,701 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
  205. 2014-04-21 20:39:05,701 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  206. 2014-04-21 20:39:05,709 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
  207. 2014-04-21 20:39:05,709 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://192.168.10.48:8020/hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083945701, syncFs=true, hflush=false, compression=false
  208. 2014-04-21 20:39:05,709 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: cleanupCurrentWriter waiting for transactions to get synced total 19502768 synced till here 19502749
  209. 2014-04-21 20:39:08,515 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083840473, entries=286956, filesize=63757179. for /hbase/.logs/app-hbase-2,60020,1392084195006/app-hbase-2%2C60020%2C1392084195006.1398083945701
  210. 2014-04-21 20:40:07,828 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2.
  211. 2014-04-21 20:40:07,828 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2., current region memstore size 128.0m
  212. 2014-04-21 20:40:07,833 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2., commencing wait for mvcc, flushsize=134218664
  213. 2014-04-21 20:40:07,833 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
  214. 2014-04-21 20:40:08,000 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/3af004e4f656429fb4830f103eedf125 with permission=rwxrwxrwx
  215. 2014-04-21 20:40:08,001 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
  216. 2014-04-21 20:40:08,001 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  217. 2014-04-21 20:40:08,004 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
  218. 2014-04-21 20:40:08,004 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/3af004e4f656429fb4830f103eedf125: CompoundBloomFilterWriter
  219. 2014-04-21 20:40:09,168 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/3af004e4f656429fb4830f103eedf125)
  220. 2014-04-21 20:40:09,168 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=25384080, memsize=128.0m, into tmp file hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/3af004e4f656429fb4830f103eedf125
  221. 2014-04-21 20:40:09,176 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/3af004e4f656429fb4830f103eedf125 to hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125
  222. 2014-04-21 20:40:09,186 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125, entries=632615, sequenceid=25384080, filesize=44.5m
  223. 2014-04-21 20:40:09,193 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134218664, currentsize=235.5k/241136 for region vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2. in 1365ms, sequenceid=25384080, compaction requested=true
  224. 2014-04-21 20:40:09,193 DEBUG org.apache.hadoop.hbase.regionserver.Store: 6b8ebe88d4f2ede4926201969b768aa2 - cf: Initiating minorcompaction
  225. 2014-04-21 20:40:09,194 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2., storeName=cf, fileCount=4, fileSize=426.4m (202.8m, 134.1m, 45.0m, 44.5m), priority=3, time=6074987700692506; Because: regionserver60020.cacheFlusher; compaction_queue=(0:0), split_queue=0
  226. 2014-04-21 20:40:09,194 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on cf in region vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2.
  227. 2014-04-21 20:40:09,194 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in cf of vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2. into tmpdir=hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp, seqid=25384080, totalSize=426.4m
  228. 2014-04-21 20:40:09,194 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b, keycount=2747931, bloomtype=NONE, size=202.8m, encoding=NONE, earliestPutTs=1398076077369
  229. 2014-04-21 20:40:09,194 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88, keycount=1881910, bloomtype=NONE, size=134.1m, encoding=NONE, earliestPutTs=1398076077369
  230. 2014-04-21 20:40:09,194 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0, keycount=629761, bloomtype=NONE, size=45.0m, encoding=NONE, earliestPutTs=1398076077369
  231. 2014-04-21 20:40:09,194 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: Compacting hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125, keycount=632615, bloomtype=NONE, size=44.5m, encoding=NONE, earliestPutTs=1398076077369
  232. 2014-04-21 20:40:09,400 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/085a884f8ff445fd97289742abf63907 with permission=rwxrwxrwx
  233. 2014-04-21 20:40:09,403 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
  234. 2014-04-21 20:40:09,403 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  235. 2014-04-21 20:40:09,406 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
  236. 2014-04-21 20:40:09,406 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/085a884f8ff445fd97289742abf63907: CompoundBloomFilterWriter
  237. 2014-04-21 20:40:24,542 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/085a884f8ff445fd97289742abf63907)
  238. 2014-04-21 20:40:24,557 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/.tmp/085a884f8ff445fd97289742abf63907 to hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/085a884f8ff445fd97289742abf63907
  239. 2014-04-21 20:40:24,575 DEBUG org.apache.hadoop.hbase.regionserver.Store: Removing store files after compaction...
  240. 2014-04-21 20:40:24,576 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving compacted store files.
  241. 2014-04-21 20:40:24,576 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Starting to archive files:[class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125]
  242. 2014-04-21 20:40:24,576 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: moving files to the archive directory: hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf
  243. 2014-04-21 20:40:24,577 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b
  244. 2014-04-21 20:40:24,578 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b, free to archive original file.
  245. 2014-04-21 20:40:24,587 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/da917c0f09aa451cb3be998e2f5a596b
  246. 2014-04-21 20:40:24,587 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88
  247. 2014-04-21 20:40:24,589 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88, free to archive original file.
  248. 2014-04-21 20:40:24,610 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/e2f88eed81514590b36b47508dd74b88
  249. 2014-04-21 20:40:24,610 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0
  250. 2014-04-21 20:40:24,612 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0, free to archive original file.
  251. 2014-04-21 20:40:24,624 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/bb10a773345143dd95546efd04778df0
  252. 2014-04-21 20:40:24,624 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving:class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125
  253. 2014-04-21 20:40:24,626 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: No existing file in archive for:hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125, free to archive original file.
  254. 2014-04-21 20:40:24,631 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://192.168.10.48:8020/hbase/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125, to: hdfs://192.168.10.48:8020/hbase/.archive/vc2.in_link/6b8ebe88d4f2ede4926201969b768aa2/cf/3af004e4f656429fb4830f103eedf125
  255. 2014-04-21 20:40:24,631 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 4 file(s) in cf of vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2. into 085a884f8ff445fd97289742abf63907, size=425.1m; total size for store is 425.1m
  256. 2014-04-21 20:40:24,631 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=vc2.in_link,?u\xEEc\x8A\x1F\xCD\xB57Y\x9Fz=\xD86\xE08\xF9\xA2\x1C\xD0\xD1\xD8\xDA(\xD8\xFB\x0AJ\x83\x81L,1398077955817.6b8ebe88d4f2ede4926201969b768aa2., storeName=cf, fileCount=4, fileSize=426.4m, priority=3, time=6074987700692506; duration=15sec
  257. 2014-04-21 20:40:24,631 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(0:0), split_queue=0
  258. 2014-04-21 20:40:58,931 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on vc2.in_link,,1398077955817.5ec97a83f8dbb32a7f288f32c574647e.
  259. 2014-04-21 20:40:58,933 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for vc2.in_link,,1398077955817.5ec97a83f8dbb32a7f288f32c574647e., current region memstore size 128.0m
  260. 2014-04-21 20:40:58,939 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting vc2.in_link,,1398077955817.5ec97a83f8dbb32a7f288f32c574647e., commencing wait for mvcc, flushsize=134219008
  261. 2014-04-21 20:40:58,939 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
  262. 2014-04-21 20:40:59,278 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://192.168.10.48:8020/hbase/vc2.in_link/5ec97a83f8dbb32a7f288f32c574647e/.tmp/eda54f49097a497ca09faf058375e5b5 with permission=rwxrwxrwx