- 2012-06-20 12:59:20,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=3.28 GB, free=731.24 MB, max=3.99 GB, blocks=3411, accesses=115579643, hits=115465840, hitRatio=99.90%, cachingAccesses=115475699, cachingHits=115459392, cachingHitsRatio=99.98%, evictions=13, evicted=12896, evictedPerRun=992.0
- 2012-06-20 12:59:26,414 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.23 MB of total=3.39 GB
- 2012-06-20 12:59:26,427 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.87 MB, total=2.99 GB, single=212.23 MB, multi=3.15 GB, memory=0 KB
- 2012-06-20 12:59:46,440 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.62 MB of total=3.4 GB
- 2012-06-20 12:59:46,453 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.18 MB, total=2.99 GB, single=597.47 MB, multi=2.78 GB, memory=0 KB
- 2012-06-20 13:00:48,565 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.04 MB of total=3.39 GB
- 2012-06-20 13:00:48,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.35 MB, total=3 GB, single=981.93 MB, multi=2.4 GB, memory=0 KB
- 2012-06-20 13:02:22,130 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 546034961125846643 lease expired on region BR,\x00\x00\x00\x00\x00\x00QW\x00\x00\x03\xD8\xE8;\x00\x00,1338361531663.74632f5b207e7ab3120f5c853a2d63cb.
- 2012-06-20 13:02:22,193 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 157435759045097394 lease expired on region FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce.
- 2012-06-20 13:02:22,315 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -8096617724791789890 lease expired on region SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4.
- 2012-06-20 13:02:57,387 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -8480175801990346216 lease expired on region sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186.
- 2012-06-20 13:03:03,013 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -4052308468565282138 lease expired on region sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186.
- 2012-06-20 13:03:35,836 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.28 MB of total=3.4 GB
- 2012-06-20 13:03:35,849 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.07 MB, total=2.99 GB, single=1.13 GB, multi=2.23 GB, memory=0 KB
- 2012-06-20 13:04:20,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=3.09 GB, free=926.13 MB, max=3.99 GB, blocks=3209, accesses=115842668, hits=115727409, hitRatio=99.90%, cachingAccesses=115738724, cachingHits=115720961, cachingHitsRatio=99.98%, evictions=17, evicted=14554, evictedPerRun=856.11767578125
- 2012-06-20 13:05:21,904 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1848519192ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1855426618ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1845696795ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1835455897ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1867358915ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1866194012ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1839922900ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 748283099ms
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 748283099ms
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00\\x88\x00\x00\x04#\xA9\x0Ap\x00,1338376122427.4aec16388216c879d4ffb62d23978f96., storeName=kcf, fileCount=2, fileSize=343.2m (343.2m, 7.9k), priority=5, time=10786950161250067; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(0:0), split_queue=0
- 2012-06-20 13:05:21,905 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region ussc6_iu,\x00\x00\x00\x00\x00\x00\\x88\x00\x00\x04#\xA9\x0Ap\x00,1338376122427.4aec16388216c879d4ffb62d23978f96.
- 2012-06-20 13:05:21,905 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1877177911ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1829071406ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1848329931ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1866139484ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,906 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00\\x88\x00\x00\x04#\xA9\x0Ap\x00,1338376122427.4aec16388216c879d4ffb62d23978f96. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp, seqid=14752537, totalSize=343.2m
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768320414ms
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/kcf/d1e1cc452f7544a39da159e81cfc86bf, keycount=23288, bloomtype=ROW, size=343.2m, encoding=NONE
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768320414ms
- 2012-06-20 13:05:21,906 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/kcf/d4242d8286244ad2959e6e1a434c8b26, keycount=185, bloomtype=ROW, size=7.9k, encoding=NONE
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=DK,\x00\x00\x00\x00\x00\x00[\xBB\x00\x00\x03\xD0\xF6\x0B\x0C\x00,1338395491594.a9d86b777245bdd9e9bd0a08404c3ec5., storeName=kcf, fileCount=2, fileSize=55.3m (55.3m, 16.0k), priority=5, time=10786950162874580; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(0:0), split_queue=0
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1839175604ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1860442818ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887404424ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region DK,\x00\x00\x00\x00\x00\x00[\xBB\x00\x00\x03\xD0\xF6\x0B\x0C\x00,1338395491594.a9d86b777245bdd9e9bd0a08404c3ec5.
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1874803152ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1830460795ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of DK,\x00\x00\x00\x00\x00\x00[\xBB\x00\x00\x03\xD0\xF6\x0B\x0C\x00,1338395491594.a9d86b777245bdd9e9bd0a08404c3ec5. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp, seqid=14779361, totalSize=55.3m
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1861783955ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,907 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/kcf/8be9ec34f65149c3949c5cb09ae88b56, keycount=109073, bloomtype=ROW, size=55.3m, encoding=NONE
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886529061ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/kcf/aee88175c0524c9dbcac3194bdc741f5, keycount=31, bloomtype=ROW, size=16.0k, encoding=NONE
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1859902342ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 762987681ms
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 762987681ms
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=DE,\x00\x00\x00\x00\x00\x00.c\x00\x00\x04.8~-\x80,1338356013711.8738fde6c046ef3f6993904c704c391f., storeName=kcf, fileCount=2, fileSize=62.2m (62.2m, 40.0k), priority=5, time=10786950164257623; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(0:1), split_queue=0
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1844519298ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1844930317ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1855687924ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 569997637ms
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 569997637ms
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4., storeName=kcf, fileCount=2, fileSize=217.5m (217.4m, 142.6k), priority=5, time=10786950164593599; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(0:2), split_queue=0
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887489753ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1877568130ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 569997624ms
- 2012-06-20 13:05:21,908 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 569997624ms
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=US,\x00\x00\x00\x00\x00\x00^1\x00\x00\x047\x12lM\x80,1338372225027.d3d055bd22363ff022588ebae3835700., storeName=kcf, fileCount=2, fileSize=538.7m (535.1m, 3.6m), priority=5, time=10786950164837528; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(1:2), split_queue=0
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887707773ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1840396950ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883763440ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886180644ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883428394ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1868056952ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1874115969ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887885039ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1885386628ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768285106ms
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768285106ms
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce., storeName=kcf, fileCount=2, fileSize=62.9m (62.9m, 35.8k), priority=5, time=10786950165412670; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(1:3), split_queue=0
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1855763147ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1826006381ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1885824081ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1831431361ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1885877599ms
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1885877599ms
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00\x11\xD1\x00\x00\x04\x1D\x9A\xB9U\x80,1338336819431.d2847e560e66a55bd34bcb71b943fb23., storeName=kcf, fileCount=2, fileSize=65.3m (65.3m, 1.1k), priority=5, time=10786950165763777; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(1:4), split_queue=0
- 2012-06-20 13:05:21,909 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339998134ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339998135ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=NL,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04\x13\xCE\xF7@\x00,1339014483555.85edb0d95ddcbe25619d645923595dce., storeName=kcf, fileCount=2, fileSize=164.9m (164.8m, 118.2k), priority=5, time=10786950165932093; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(1:5), split_queue=0
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887235048ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1842942861ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329997901ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329997901ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=NL,\x00\x00\x00\x00\x00\x00*l\x00\x00\x04\x0D!7\xF0\x00,1338363945423.60ffb13e5a656154264dd09af42d7f34., storeName=kcf, fileCount=2, fileSize=62.1m (62.1m, 21.2k), priority=5, time=10786950166199693; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(1:6), split_queue=0
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339998157ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339998157ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=FR,\x00\x00\x00\x00\x00\x00xl\x00\x00\x045\xC3\xAE\xF1\x80,1338917904608.63c201fb3b7b041205fefc5a6090f484., storeName=kcf, fileCount=2, fileSize=270.3m (270.1m, 273.0k), priority=5, time=10786950166376469; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(2:6), split_queue=0
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1833510579ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1867911303ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1816919564ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1847912656ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1840789870ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1872516341ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768259263ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 768259263ms
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=europe4_iu,,1338336160462.35251e7bc44b7f893547ba6d69022da2., storeName=kcf, fileCount=2, fileSize=59.3m (59.3m, 1011.0), priority=5, time=10786950166785836; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(2:7), split_queue=0
- 2012-06-20 13:05:21,910 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1720030419ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1720030420ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=All,,1338334850467.46d9b296358ede178c3bfa3dffaf22b0., storeName=kcf, fileCount=2, fileSize=4.4m (4.4m, 1.5k), priority=5, time=10786950166948125; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(2:8), split_queue=0
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1884642311ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1841300953ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887995767ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329991612ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329991612ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=AT,,1338392191015.03a78f94457c8094142fd0b3245ce29c., storeName=kcf, fileCount=2, fileSize=65.3m (65.2m, 52.1k), priority=5, time=10786950167262082; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(2:9), split_queue=0
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1869774972ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1846362268ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1885787373ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1859027589ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 895548618ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 895548618ms
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=europe4_iu,\x00\x00\x00\x00\x00\x006\xA9\x00\x00\x04-\xE0\xF2\x11\x80,1338360607321.70dd58ed425461ec6909a72a47ce3dc0., storeName=kcf, fileCount=2, fileSize=345.5m (345.5m, 13.3k), priority=5, time=10786950167594925; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(3:9), split_queue=0
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1868563523ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886269546ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1852266775ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1870049825ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,911 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1876199111ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1875779150ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1885904741ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339943183ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339943183ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=IT,\x00\x00\x00\x00\x00\x00d$\x00\x00\x03\xECk\\xA9\x80,1338399084772.7204e2535056af80cb7496d75feb3c68., storeName=kcf, fileCount=2, fileSize=430.1m (429.8m, 277.9k), priority=5, time=10786950168086466; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(4:9), split_queue=0
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1872048836ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1878870155ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 576603012ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 576603012ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=DK,\x00\x00\x00\x00\x00\x00g\xB6\x00\x00\x03\xD1\x81\x16\xC0\x00,1338414715821.1a85e42ba41c54a407384e2fbfffd761., storeName=kcf, fileCount=2, fileSize=58.4m (58.3m, 38.1k), priority=5, time=10786950168337643; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(4:10), split_queue=0
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1857661268ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886301158ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1821266674ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 707163677ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 707163677ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00zj\x00\x00\x04\x17?_\xC5\x80,1338407904463.c0b5c9b18e299d0d5028ebc6dcc655e9., storeName=kcf, fileCount=2, fileSize=347.3m (347.3m, 7.2k), priority=5, time=10786950168624090; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(5:10), split_queue=0
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1885464480ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1861602621ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 587420100ms
- 2012-06-20 13:05:21,912 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 587420100ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=NO,\x00\x00\x00\x00\x00\x00c\x0E\x00\x00\x03\xE1`PL\x00,1338409231790.3737265bc491e39a3b125b63615479cd., storeName=kcf, fileCount=2, fileSize=62.5m (62.5m, 30.7k), priority=5, time=10786950168875387; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(5:11), split_queue=0
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1845661404ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329985330ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329985330ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=DE,,1338345079921.fd87805a0ef1867e7e724b94eef47bd2., storeName=kcf, fileCount=2, fileSize=68.4m (68.3m, 78.3k), priority=5, time=10786950169100628; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(5:12), split_queue=0
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1870748441ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1870748441ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00+\x97\x00\x00\x047\xB2\x11q\x80,1338351822635.a1612e6cbd7a709dfb1ab53182e45bbe., storeName=kcf, fileCount=2, fileSize=342.2m (342.0m, 235.7k), priority=5, time=10786950169297168; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(6:12), split_queue=0
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1879732457ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883045522ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1882616010ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1840362983ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1852191889ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1863644533ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1863584828ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1880782271ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 1880782271ms
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00#\xA4\x00\x00\x03\xEC\xCD5}\x80,1338341828356.59b8f77b8dcf72e780a957c74550ea5c., storeName=kcf, fileCount=2, fileSize=373.5m (373.5m, 1011.0), priority=5, time=10786950169786863; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(7:12), split_queue=0
- 2012-06-20 13:05:21,913 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883528364ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1870872735ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 687469373ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 687469373ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00v\xC0\x00\x00\x03\xEFu3\xDC\x00,1338392572889.2a478f868a2c2cf9e355efd9c8e278d4., storeName=kcf, fileCount=2, fileSize=346.0m (345.9m, 19.0k), priority=5, time=10786950170066403; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(8:12), split_queue=0
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1866717215ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1866639330ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1869452838ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1872683110ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1881647925ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1109529268ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339966210ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339966210ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=NO,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04!\xAA\xFD\xED\x80,1338409231790.74c54a2380c6bfcc7b73b7a65468372c., storeName=kcf, fileCount=2, fileSize=96.9m (96.7m, 109.8k), priority=5, time=10786950170506427; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(8:13), split_queue=0
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883081993ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1876689746ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 762874639ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 762874639ms
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=FR,\x00\x00\x00\x00\x00\x00.\x8A\x00\x00\x03\xE0\xFEwx\x00,1338355599011.e07f7444aff4dc98e4aebeabe85aae9b., storeName=kcf, fileCount=2, fileSize=63.5m (63.5m, 36.6k), priority=5, time=10786950170766211; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(8:14), split_queue=0
- 2012-06-20 13:05:21,914 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339806176ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339806177ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction requested: regionName=CH,\x00\x00\x00\x00\x00\x00p&\x00\x00\x048\:<\x00,1339401258130.343f77c46d55a9f0ada62b8043454903., storeName=kcf, fileCount=2, fileSize=260.8m (260.6m, 246.1k), priority=5, time=10786950170944779; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(9:14), split_queue=0
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1873296147ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1875032144ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339956325ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 339956325ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=BE,\x00\x00\x00\x00\x00\x00p\x07\x00\x00\x04\x0B\xD2z\x94\x00,1338402695197.2011d1a1ef22daceebbbf18fca0ce337., storeName=kcf, fileCount=2, fileSize=105.9m (105.8m, 110.1k), priority=5, time=10786950171188567; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(9:15), split_queue=0
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1883494970ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329979419ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329979419ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=CH,,1338350142110.159944a3482132c82229fc55ee014195., storeName=kcf, fileCount=2, fileSize=66.1m (66.1m, 57.3k), priority=5, time=10786950171424042; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(9:16), split_queue=0
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1842731582ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329973124ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329973124ms
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=IT,\x00\x00\x00\x00\x00\x00+\xEC\x00\x00\x04\x15\x94&\xE0\x00,1338356247503.37233ae8b105114d795cc2a33104a86b., storeName=kcf, fileCount=2, fileSize=63.0m (63.0m, 23.5k), priority=5, time=10786950171640061; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(9:17), split_queue=0
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886906974ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1881601597ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1865475868ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,915 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1880245216ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1824031716ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1879182165ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1881958624ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1830557878ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1842489573ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1876315923ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329967310ms
- 2012-06-20 13:05:21,916 DEBUG org.apache.hadoop.hbase.regionserver.Store: Major compaction triggered on store kcf; time since last major compaction 329967310ms
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Small Compaction requested: regionName=SE,,1338350797178.9b22c362ed151972378864cf8a0cb9b9., storeName=kcf, fileCount=2, fileSize=60.1m (60.0m, 52.8k), priority=5, time=10786950172925898; Because: regionserver60020.compactionChecker requests major compaction; use default priority; compaction_queue=(9:18), split_queue=0
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1884277192ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1882062880ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1838956722ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1844105781ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1865533233ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1886128728ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1848759905ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1887775263ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1870923447ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1825393048ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1868407801ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,917 DEBUG org.apache.hadoop.hbase.regionserver.Store: Skipping major compaction of kcf because one (major) compacted file only and oldestTime 1838718634ms is < ttl=9223372036854775807
- 2012-06-20 13:05:21,944 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp/6d3c545ff821498b9d9988b45c946c25with permission:rwxrwxrwx
- 2012-06-20 13:05:21,956 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:05:21,956 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp/6d3c545ff821498b9d9988b45c946c25: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:05:21,956 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp/6d3c545ff821498b9d9988b45c946c25: CompoundBloomFilterWriter
- 2012-06-20 13:05:21,956 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp/f032cedf29224e389739a290e7b79faawith permission:rwxrwxrwx
- 2012-06-20 13:05:21,963 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:05:21,963 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp/f032cedf29224e389739a290e7b79faa: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:05:21,963 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp/f032cedf29224e389739a290e7b79faa: CompoundBloomFilterWriter
- 2012-06-20 13:05:28,013 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp/6d3c545ff821498b9d9988b45c946c25)
- 2012-06-20 13:05:28,020 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 6d3c545ff821498b9d9988b45c946c25
- 2012-06-20 13:05:28,020 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/.tmp/6d3c545ff821498b9d9988b45c946c25 to hdfs://hmaster101.mentacapital.local:8020/hbase/DK/a9d86b777245bdd9e9bd0a08404c3ec5/kcf/6d3c545ff821498b9d9988b45c946c25
- 2012-06-20 13:05:28,029 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 6d3c545ff821498b9d9988b45c946c25
- 2012-06-20 13:05:28,045 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of DK,\x00\x00\x00\x00\x00\x00[\xBB\x00\x00\x03\xD0\xF6\x0B\x0C\x00,1338395491594.a9d86b777245bdd9e9bd0a08404c3ec5. into 6d3c545ff821498b9d9988b45c946c25, size=55.3m; total size for store is 55.3m
- 2012-06-20 13:05:28,045 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=DK,\x00\x00\x00\x00\x00\x00[\xBB\x00\x00\x03\xD0\xF6\x0B\x0C\x00,1338395491594.a9d86b777245bdd9e9bd0a08404c3ec5., storeName=kcf, fileCount=2, fileSize=55.3m, priority=5, time=10786950162874580; duration=6sec
- 2012-06-20 13:05:28,045 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(9:18), split_queue=0
- 2012-06-20 13:05:28,045 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region DE,\x00\x00\x00\x00\x00\x00.c\x00\x00\x04.8~-\x80,1338356013711.8738fde6c046ef3f6993904c704c391f.
- 2012-06-20 13:05:28,046 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of DE,\x00\x00\x00\x00\x00\x00.c\x00\x00\x04.8~-\x80,1338356013711.8738fde6c046ef3f6993904c704c391f. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp, seqid=14779356, totalSize=62.2m
- 2012-06-20 13:05:28,046 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/kcf/7caade0577a34b0ba98387295549e14c, keycount=35115, bloomtype=ROW, size=62.2m, encoding=NONE
- 2012-06-20 13:05:28,046 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/kcf/d61331dbafde422dac74ce6272e8daad, keycount=17, bloomtype=ROW, size=40.0k, encoding=NONE
- 2012-06-20 13:05:28,096 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp/b227803b7fb24740bdf18207f1b76388with permission:rwxrwxrwx
- 2012-06-20 13:05:28,101 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:05:28,101 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp/b227803b7fb24740bdf18207f1b76388: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:05:28,101 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp/b227803b7fb24740bdf18207f1b76388: CompoundBloomFilterWriter
- 2012-06-20 13:05:35,984 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:05:36,228 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp/b227803b7fb24740bdf18207f1b76388)
- 2012-06-20 13:05:36,233 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for b227803b7fb24740bdf18207f1b76388
- 2012-06-20 13:05:36,233 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/.tmp/b227803b7fb24740bdf18207f1b76388 to hdfs://hmaster101.mentacapital.local:8020/hbase/DE/8738fde6c046ef3f6993904c704c391f/kcf/b227803b7fb24740bdf18207f1b76388
- 2012-06-20 13:05:36,244 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for b227803b7fb24740bdf18207f1b76388
- 2012-06-20 13:05:36,263 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of DE,\x00\x00\x00\x00\x00\x00.c\x00\x00\x04.8~-\x80,1338356013711.8738fde6c046ef3f6993904c704c391f. into b227803b7fb24740bdf18207f1b76388, size=62.2m; total size for store is 62.2m
- 2012-06-20 13:05:36,264 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=DE,\x00\x00\x00\x00\x00\x00.c\x00\x00\x04.8~-\x80,1338356013711.8738fde6c046ef3f6993904c704c391f., storeName=kcf, fileCount=2, fileSize=62.2m, priority=5, time=10786950164257623; duration=8sec
- 2012-06-20 13:05:36,264 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(9:17), split_queue=0
- 2012-06-20 13:05:36,264 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4.
- 2012-06-20 13:05:36,264 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp, seqid=14779343, totalSize=217.5m
- 2012-06-20 13:05:36,264 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/kcf/af5429c36b3545cfb4d3af6219f328ef, keycount=192647, bloomtype=ROW, size=217.4m, encoding=NONE
- 2012-06-20 13:05:36,264 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/kcf/6c16409c38244a548af61d070a75b018, keycount=434, bloomtype=ROW, size=142.6k, encoding=NONE
- 2012-06-20 13:05:36,278 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp/13c03c541dec459282bc6c026db10419with permission:rwxrwxrwx
- 2012-06-20 13:05:36,284 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:05:36,284 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp/13c03c541dec459282bc6c026db10419: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:05:36,284 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp/13c03c541dec459282bc6c026db10419: CompoundBloomFilterWriter
- 2012-06-20 13:06:09,423 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:06:09,438 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp/f032cedf29224e389739a290e7b79faa)
- 2012-06-20 13:06:09,445 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for f032cedf29224e389739a290e7b79faa
- 2012-06-20 13:06:09,445 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/.tmp/f032cedf29224e389739a290e7b79faa to hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/4aec16388216c879d4ffb62d23978f96/kcf/f032cedf29224e389739a290e7b79faa
- 2012-06-20 13:06:09,455 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for f032cedf29224e389739a290e7b79faa
- 2012-06-20 13:06:09,477 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00\\x88\x00\x00\x04#\xA9\x0Ap\x00,1338376122427.4aec16388216c879d4ffb62d23978f96. into f032cedf29224e389739a290e7b79faa, size=343.2m; total size for store is 343.2m
- 2012-06-20 13:06:09,477 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00\\x88\x00\x00\x04#\xA9\x0Ap\x00,1338376122427.4aec16388216c879d4ffb62d23978f96., storeName=kcf, fileCount=2, fileSize=343.2m, priority=5, time=10786950161250067; duration=47sec
- 2012-06-20 13:06:09,478 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(9:16), split_queue=0
- 2012-06-20 13:06:09,478 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region US,\x00\x00\x00\x00\x00\x00^1\x00\x00\x047\x12lM\x80,1338372225027.d3d055bd22363ff022588ebae3835700.
- 2012-06-20 13:06:09,478 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of US,\x00\x00\x00\x00\x00\x00^1\x00\x00\x047\x12lM\x80,1338372225027.d3d055bd22363ff022588ebae3835700. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp, seqid=14752538, totalSize=538.7m
- 2012-06-20 13:06:09,478 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/kcf/bfe94fdd0fcb48c39ee5c7b2cf4e58a4, keycount=31215, bloomtype=ROW, size=535.1m, encoding=NONE
- 2012-06-20 13:06:09,478 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/kcf/a5a8202e448240f6893d2ceb06797dbf, keycount=558, bloomtype=ROW, size=3.6m, encoding=NONE
- 2012-06-20 13:06:09,506 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp/13c052dfd6a54d72b7dc28c6d167c2a2with permission:rwxrwxrwx
- 2012-06-20 13:06:09,512 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:09,512 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp/13c052dfd6a54d72b7dc28c6d167c2a2: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:09,512 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp/13c052dfd6a54d72b7dc28c6d167c2a2: CompoundBloomFilterWriter
- 2012-06-20 13:06:10,187 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp/13c03c541dec459282bc6c026db10419)
- 2012-06-20 13:06:10,192 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 13c03c541dec459282bc6c026db10419
- 2012-06-20 13:06:10,192 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/.tmp/13c03c541dec459282bc6c026db10419 to hdfs://hmaster101.mentacapital.local:8020/hbase/SE/762980fe45e702e0aad23a3d6db950e4/kcf/13c03c541dec459282bc6c026db10419
- 2012-06-20 13:06:10,202 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 13c03c541dec459282bc6c026db10419
- 2012-06-20 13:06:10,221 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4. into 13c03c541dec459282bc6c026db10419, size=217.5m; total size for store is 217.5m
- 2012-06-20 13:06:10,221 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=SE,\x00\x00\x00\x00\x00\x00T\xFC\x00\x00\x03\xDB\xF1\xA4U\x80,1339012964193.762980fe45e702e0aad23a3d6db950e4., storeName=kcf, fileCount=2, fileSize=217.5m, priority=5, time=10786950164593599; duration=33sec
- 2012-06-20 13:06:10,222 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:16), split_queue=0
- 2012-06-20 13:06:10,222 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce.
- 2012-06-20 13:06:10,222 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp, seqid=14779311, totalSize=62.9m
- 2012-06-20 13:06:10,222 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/kcf/d719db63fc3846a6b807747869137333, keycount=128730, bloomtype=ROW, size=62.9m, encoding=NONE
- 2012-06-20 13:06:10,222 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/kcf/0de8cd796de54cf8a9f260472bbcd67e, keycount=73, bloomtype=ROW, size=35.8k, encoding=NONE
- 2012-06-20 13:06:10,241 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp/115df23361b641078c6adb3e191f2758 with permission:rwxrwxrwx
- 2012-06-20 13:06:10,247 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:10,247 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp/115df23361b641078c6adb3e191f2758: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:10,247 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp/115df23361b641078c6adb3e191f2758: CompoundBloomFilterWriter
- 2012-06-20 13:06:17,365 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #1 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:06:17,559 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp/115df23361b641078c6adb3e191f2758)
- 2012-06-20 13:06:17,576 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 115df23361b641078c6adb3e191f2758
- 2012-06-20 13:06:17,577 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/.tmp/115df23361b641078c6adb3e191f2758 to hdfs://hmaster101.mentacapital.local:8020/hbase/FI/0ecb248bdc78f46798e1454e5a432cce/kcf/115df23361b641078c6adb3e191f2758
- 2012-06-20 13:06:17,587 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 115df23361b641078c6adb3e191f2758
- 2012-06-20 13:06:17,611 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce. into 115df23361b641078c6adb3e191f2758, size=62.9m; total size for store is 62.9m
- 2012-06-20 13:06:17,611 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=FI,\x00\x00\x00\x00\x00\x000\xEF\x00\x00\x04$\xE2\xF7m\x80,1338381398401.0ecb248bdc78f46798e1454e5a432cce., storeName=kcf, fileCount=2, fileSize=62.9m, priority=5, time=10786950165412670; duration=7sec
- 2012-06-20 13:06:17,611 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:15), split_queue=0
- 2012-06-20 13:06:17,611 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region europe4_iu,\x00\x00\x00\x00\x00\x00\x11\xD1\x00\x00\x04\x1D\x9A\xB9U\x80,1338336819431.d2847e560e66a55bd34bcb71b943fb23.
- 2012-06-20 13:06:17,612 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00\x11\xD1\x00\x00\x04\x1D\x9A\xB9U\x80,1338336819431.d2847e560e66a55bd34bcb71b943fb23. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp, seqid=14779358, totalSize=65.3m
- 2012-06-20 13:06:17,612 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/kcf/988075d2dad74f24b2ad661e32aa1e47, keycount=11896, bloomtype=ROW, size=65.3m, encoding=NONE
- 2012-06-20 13:06:17,612 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/kcf/3eaa22b6799547cd81f59a47a64acef7, keycount=3, bloomtype=ROW, size=1.1k, encoding=NONE
- 2012-06-20 13:06:17,646 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp/7b3932e8cea74c84abb104a4d74a9bef with permission:rwxrwxrwx
- 2012-06-20 13:06:17,653 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:17,653 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp/7b3932e8cea74c84abb104a4d74a9bef: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:17,653 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp/7b3932e8cea74c84abb104a4d74a9bef: CompoundBloomFilterWriter
- 2012-06-20 13:06:29,706 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [13663 max keys, 16384 bytes]
- 2012-06-20 13:06:29,843 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp/7b3932e8cea74c84abb104a4d74a9bef)
- 2012-06-20 13:06:29,848 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 7b3932e8cea74c84abb104a4d74a9bef
- 2012-06-20 13:06:29,848 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/.tmp/7b3932e8cea74c84abb104a4d74a9bef to hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/d2847e560e66a55bd34bcb71b943fb23/kcf/7b3932e8cea74c84abb104a4d74a9bef
- 2012-06-20 13:06:29,857 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 7b3932e8cea74c84abb104a4d74a9bef
- 2012-06-20 13:06:29,873 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00\x11\xD1\x00\x00\x04\x1D\x9A\xB9U\x80,1338336819431.d2847e560e66a55bd34bcb71b943fb23. into 7b3932e8cea74c84abb104a4d74a9bef, size=65.3m; total size for store is 65.3m
- 2012-06-20 13:06:29,873 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00\x11\xD1\x00\x00\x04\x1D\x9A\xB9U\x80,1338336819431.d2847e560e66a55bd34bcb71b943fb23., storeName=kcf, fileCount=2, fileSize=65.3m, priority=5, time=10786950165763777; duration=12sec
- 2012-06-20 13:06:29,874 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:14), split_queue=0
- 2012-06-20 13:06:29,874 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region NL,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04\x13\xCE\xF7@\x00,1339014483555.85edb0d95ddcbe25619d645923595dce.
- 2012-06-20 13:06:29,874 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of NL,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04\x13\xCE\xF7@\x00,1339014483555.85edb0d95ddcbe25619d645923595dce. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp, seqid=14779360, totalSize=164.9m
- 2012-06-20 13:06:29,874 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/kcf/240f6338df074d6998eff62a83e8d34a, keycount=165938, bloomtype=ROW, size=164.8m, encoding=NONE
- 2012-06-20 13:06:29,874 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/kcf/f3f5499fad6b43478bcecf27b7dc8ec4, keycount=245, bloomtype=ROW, size=118.2k, encoding=NONE
- 2012-06-20 13:06:29,915 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp/72f805ee7cec46b5a3d45472c74fa22e with permission:rwxrwxrwx
- 2012-06-20 13:06:29,921 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:29,921 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp/72f805ee7cec46b5a3d45472c74fa22e: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:29,921 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp/72f805ee7cec46b5a3d45472c74fa22e: CompoundBloomFilterWriter
- 2012-06-20 13:06:50,761 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp/72f805ee7cec46b5a3d45472c74fa22e)
- 2012-06-20 13:06:50,767 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 72f805ee7cec46b5a3d45472c74fa22e
- 2012-06-20 13:06:50,767 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/.tmp/72f805ee7cec46b5a3d45472c74fa22e to hdfs://hmaster101.mentacapital.local:8020/hbase/NL/85edb0d95ddcbe25619d645923595dce/kcf/72f805ee7cec46b5a3d45472c74fa22e
- 2012-06-20 13:06:50,777 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 72f805ee7cec46b5a3d45472c74fa22e
- 2012-06-20 13:06:50,808 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of NL,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04\x13\xCE\xF7@\x00,1339014483555.85edb0d95ddcbe25619d645923595dce. into 72f805ee7cec46b5a3d45472c74fa22e, size=164.9m; total size for store is 164.9m
- 2012-06-20 13:06:50,809 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=NL,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04\x13\xCE\xF7@\x00,1339014483555.85edb0d95ddcbe25619d645923595dce., storeName=kcf, fileCount=2, fileSize=164.9m, priority=5, time=10786950165932093; duration=20sec
- 2012-06-20 13:06:50,809 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:13), split_queue=0
- 2012-06-20 13:06:50,809 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region NL,\x00\x00\x00\x00\x00\x00*l\x00\x00\x04\x0D!7\xF0\x00,1338363945423.60ffb13e5a656154264dd09af42d7f34.
- 2012-06-20 13:06:50,809 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of NL,\x00\x00\x00\x00\x00\x00*l\x00\x00\x04\x0D!7\xF0\x00,1338363945423.60ffb13e5a656154264dd09af42d7f34. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp, seqid=14779354, totalSize=62.1m
- 2012-06-20 13:06:50,809 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/kcf/51a3c4ec28874fd8a519d8c6c56dbad0, keycount=71020, bloomtype=ROW, size=62.1m, encoding=NONE
- 2012-06-20 13:06:50,809 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/kcf/de271c18e9d543c6a0c2299c54a5edd2, keycount=30, bloomtype=ROW, size=21.2k, encoding=NONE
- 2012-06-20 13:06:50,943 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp/93546ab9c8704b60a1569a5f0b374ac2 with permission:rwxrwxrwx
- 2012-06-20 13:06:50,952 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:50,952 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp/93546ab9c8704b60a1569a5f0b374ac2: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:50,952 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp/93546ab9c8704b60a1569a5f0b374ac2: CompoundBloomFilterWriter
- 2012-06-20 13:06:57,791 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp/93546ab9c8704b60a1569a5f0b374ac2)
- 2012-06-20 13:06:57,796 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 93546ab9c8704b60a1569a5f0b374ac2
- 2012-06-20 13:06:57,796 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/.tmp/93546ab9c8704b60a1569a5f0b374ac2 to hdfs://hmaster101.mentacapital.local:8020/hbase/NL/60ffb13e5a656154264dd09af42d7f34/kcf/93546ab9c8704b60a1569a5f0b374ac2
- 2012-06-20 13:06:57,806 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 93546ab9c8704b60a1569a5f0b374ac2
- 2012-06-20 13:06:57,822 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of NL,\x00\x00\x00\x00\x00\x00*l\x00\x00\x04\x0D!7\xF0\x00,1338363945423.60ffb13e5a656154264dd09af42d7f34. into 93546ab9c8704b60a1569a5f0b374ac2, size=62.1m; total size for store is 62.1m
- 2012-06-20 13:06:57,823 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=NL,\x00\x00\x00\x00\x00\x00*l\x00\x00\x04\x0D!7\xF0\x00,1338363945423.60ffb13e5a656154264dd09af42d7f34., storeName=kcf, fileCount=2, fileSize=62.1m, priority=5, time=10786950166199693; duration=7sec
- 2012-06-20 13:06:57,823 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:12), split_queue=0
- 2012-06-20 13:06:57,823 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region europe4_iu,,1338336160462.35251e7bc44b7f893547ba6d69022da2.
- 2012-06-20 13:06:57,824 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of europe4_iu,,1338336160462.35251e7bc44b7f893547ba6d69022da2. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp, seqid=14779333, totalSize=59.3m
- 2012-06-20 13:06:57,824 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/kcf/884ba20cd01b4f3e93e3e136876da420, keycount=8618, bloomtype=ROW, size=59.3m, encoding=NONE
- 2012-06-20 13:06:57,824 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/kcf/b50702f75c994a259f3ad1f0f2ec76fd, keycount=1, bloomtype=ROW, size=1011.0, encoding=NONE
- 2012-06-20 13:06:57,868 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp/5af6fb81f54b417fb2e2fedf0504882e with permission:rwxrwxrwx
- 2012-06-20 13:06:57,875 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:06:57,875 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp/5af6fb81f54b417fb2e2fedf0504882e: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:06:57,875 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp/5af6fb81f54b417fb2e2fedf0504882e: CompoundBloomFilterWriter
- 2012-06-20 13:07:09,474 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [13663 max keys, 16384 bytes]
- 2012-06-20 13:07:09,595 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp/5af6fb81f54b417fb2e2fedf0504882e)
- 2012-06-20 13:07:09,600 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 5af6fb81f54b417fb2e2fedf0504882e
- 2012-06-20 13:07:09,600 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/.tmp/5af6fb81f54b417fb2e2fedf0504882e to hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/35251e7bc44b7f893547ba6d69022da2/kcf/5af6fb81f54b417fb2e2fedf0504882e
- 2012-06-20 13:07:09,612 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 5af6fb81f54b417fb2e2fedf0504882e
- 2012-06-20 13:07:09,630 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of europe4_iu,,1338336160462.35251e7bc44b7f893547ba6d69022da2. into 5af6fb81f54b417fb2e2fedf0504882e, size=59.3m; total size for store is 59.3m
- 2012-06-20 13:07:09,630 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=europe4_iu,,1338336160462.35251e7bc44b7f893547ba6d69022da2., storeName=kcf, fileCount=2, fileSize=59.3m, priority=5, time=10786950166785836; duration=11sec
- 2012-06-20 13:07:09,630 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:11), split_queue=0
- 2012-06-20 13:07:09,630 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region All,,1338334850467.46d9b296358ede178c3bfa3dffaf22b0.
- 2012-06-20 13:07:09,631 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of All,,1338334850467.46d9b296358ede178c3bfa3dffaf22b0. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp, seqid=14732401, totalSize=4.4m
- 2012-06-20 13:07:09,631 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/kcf/3fc0a63598fb47d9add895b633d9f297, keycount=67202, bloomtype=ROW, size=4.4m, encoding=NONE
- 2012-06-20 13:07:09,631 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/kcf/e9c171fe90484513ae8aebc83fb52238, keycount=22, bloomtype=ROW, size=1.5k, encoding=NONE
- 2012-06-20 13:07:09,669 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp/17db3ef69d2d445f809116ffd6d9ca44 with permission:rwxrwxrwx
- 2012-06-20 13:07:09,674 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:09,674 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp/17db3ef69d2d445f809116ffd6d9ca44: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:09,674 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp/17db3ef69d2d445f809116ffd6d9ca44: CompoundBloomFilterWriter
- 2012-06-20 13:07:10,267 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp/17db3ef69d2d445f809116ffd6d9ca44)
- 2012-06-20 13:07:10,272 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 17db3ef69d2d445f809116ffd6d9ca44
- 2012-06-20 13:07:10,272 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/.tmp/17db3ef69d2d445f809116ffd6d9ca44 to hdfs://hmaster101.mentacapital.local:8020/hbase/All/46d9b296358ede178c3bfa3dffaf22b0/kcf/17db3ef69d2d445f809116ffd6d9ca44
- 2012-06-20 13:07:10,283 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 17db3ef69d2d445f809116ffd6d9ca44
- 2012-06-20 13:07:10,303 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of All,,1338334850467.46d9b296358ede178c3bfa3dffaf22b0. into 17db3ef69d2d445f809116ffd6d9ca44, size=4.4m; total size for store is 4.4m
- 2012-06-20 13:07:10,303 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=All,,1338334850467.46d9b296358ede178c3bfa3dffaf22b0., storeName=kcf, fileCount=2, fileSize=4.4m, priority=5, time=10786950166948125; duration=0sec
- 2012-06-20 13:07:10,303 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:10), split_queue=0
- 2012-06-20 13:07:10,303 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region AT,,1338392191015.03a78f94457c8094142fd0b3245ce29c.
- 2012-06-20 13:07:10,303 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of AT,,1338392191015.03a78f94457c8094142fd0b3245ce29c. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp, seqid=14779307, totalSize=65.3m
- 2012-06-20 13:07:10,303 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/kcf/49d54caec2d142228354161aeda98f0f, keycount=206453, bloomtype=ROW, size=65.2m, encoding=NONE
- 2012-06-20 13:07:10,303 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/kcf/1eaf9962f2cd408db7caf3610ec88528, keycount=129, bloomtype=ROW, size=52.1k, encoding=NONE
- 2012-06-20 13:07:10,320 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp/643d0d0d6e9e4ee8be4fe09ba9685c4e with permission:rwxrwxrwx
- 2012-06-20 13:07:10,327 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:10,327 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp/643d0d0d6e9e4ee8be4fe09ba9685c4e: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:10,327 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp/643d0d0d6e9e4ee8be4fe09ba9685c4e: CompoundBloomFilterWriter
- 2012-06-20 13:07:17,006 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:07:17,022 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp/13c052dfd6a54d72b7dc28c6d167c2a2)
- 2012-06-20 13:07:17,029 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 13c052dfd6a54d72b7dc28c6d167c2a2
- 2012-06-20 13:07:17,029 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/.tmp/13c052dfd6a54d72b7dc28c6d167c2a2 to hdfs://hmaster101.mentacapital.local:8020/hbase/US/d3d055bd22363ff022588ebae3835700/kcf/13c052dfd6a54d72b7dc28c6d167c2a2
- 2012-06-20 13:07:17,039 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 13c052dfd6a54d72b7dc28c6d167c2a2
- 2012-06-20 13:07:17,058 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of US,\x00\x00\x00\x00\x00\x00^1\x00\x00\x047\x12lM\x80,1338372225027.d3d055bd22363ff022588ebae3835700. into 13c052dfd6a54d72b7dc28c6d167c2a2, size=538.9m; total size for store is 538.9m
- 2012-06-20 13:07:17,058 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=US,\x00\x00\x00\x00\x00\x00^1\x00\x00\x047\x12lM\x80,1338372225027.d3d055bd22363ff022588ebae3835700., storeName=kcf, fileCount=2, fileSize=538.7m, priority=5, time=10786950164837528; duration=1mins, 7sec
- 2012-06-20 13:07:17,058 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(8:9), split_queue=0
- 2012-06-20 13:07:17,058 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region FR,\x00\x00\x00\x00\x00\x00xl\x00\x00\x045\xC3\xAE\xF1\x80,1338917904608.63c201fb3b7b041205fefc5a6090f484.
- 2012-06-20 13:07:17,058 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of FR,\x00\x00\x00\x00\x00\x00xl\x00\x00\x045\xC3\xAE\xF1\x80,1338917904608.63c201fb3b7b041205fefc5a6090f484. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp, seqid=14779353, totalSize=270.3m
- 2012-06-20 13:07:17,058 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/kcf/1d7b59d48263473aa9bf0bc3db959a32, keycount=134481, bloomtype=ROW, size=270.1m, encoding=NONE
- 2012-06-20 13:07:17,058 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/kcf/69253870d98f4d0f9fe135a5940acc28, keycount=128, bloomtype=ROW, size=273.0k, encoding=NONE
- 2012-06-20 13:07:17,085 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp/0ab972b243374f6295c617a11e627c34 with permission:rwxrwxrwx
- 2012-06-20 13:07:17,090 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:17,090 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp/0ab972b243374f6295c617a11e627c34: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:17,090 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp/0ab972b243374f6295c617a11e627c34: CompoundBloomFilterWriter
- 2012-06-20 13:07:17,132 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp/643d0d0d6e9e4ee8be4fe09ba9685c4e)
- 2012-06-20 13:07:17,138 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 643d0d0d6e9e4ee8be4fe09ba9685c4e
- 2012-06-20 13:07:17,138 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/.tmp/643d0d0d6e9e4ee8be4fe09ba9685c4e to hdfs://hmaster101.mentacapital.local:8020/hbase/AT/03a78f94457c8094142fd0b3245ce29c/kcf/643d0d0d6e9e4ee8be4fe09ba9685c4e
- 2012-06-20 13:07:17,146 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 643d0d0d6e9e4ee8be4fe09ba9685c4e
- 2012-06-20 13:07:17,166 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of AT,,1338392191015.03a78f94457c8094142fd0b3245ce29c. into 643d0d0d6e9e4ee8be4fe09ba9685c4e, size=65.3m; total size for store is 65.3m
- 2012-06-20 13:07:17,166 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=AT,,1338392191015.03a78f94457c8094142fd0b3245ce29c., storeName=kcf, fileCount=2, fileSize=65.3m, priority=5, time=10786950167262082; duration=6sec
- 2012-06-20 13:07:17,166 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(7:9), split_queue=0
- 2012-06-20 13:07:17,166 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region DK,\x00\x00\x00\x00\x00\x00g\xB6\x00\x00\x03\xD1\x81\x16\xC0\x00,1338414715821.1a85e42ba41c54a407384e2fbfffd761.
- 2012-06-20 13:07:17,167 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of DK,\x00\x00\x00\x00\x00\x00g\xB6\x00\x00\x03\xD1\x81\x16\xC0\x00,1338414715821.1a85e42ba41c54a407384e2fbfffd761. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp, seqid=14779332, totalSize=58.4m
- 2012-06-20 13:07:17,167 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/kcf/ce9b484dd10d4719869e5c0fb19cde8d, keycount=105987, bloomtype=ROW, size=58.3m, encoding=NONE
- 2012-06-20 13:07:17,167 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/kcf/4a36d09169da4978befb8e89754fbce6, keycount=95, bloomtype=ROW, size=38.1k, encoding=NONE
- 2012-06-20 13:07:17,202 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp/ff10d2d6e8b04db8ab5f77e8e80e7c4c with permission:rwxrwxrwx
- 2012-06-20 13:07:17,208 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:17,208 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp/ff10d2d6e8b04db8ab5f77e8e80e7c4c: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:17,208 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp/ff10d2d6e8b04db8ab5f77e8e80e7c4c: CompoundBloomFilterWriter
- 2012-06-20 13:07:23,048 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp/ff10d2d6e8b04db8ab5f77e8e80e7c4c)
- 2012-06-20 13:07:23,055 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for ff10d2d6e8b04db8ab5f77e8e80e7c4c
- 2012-06-20 13:07:23,055 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/.tmp/ff10d2d6e8b04db8ab5f77e8e80e7c4c to hdfs://hmaster101.mentacapital.local:8020/hbase/DK/1a85e42ba41c54a407384e2fbfffd761/kcf/ff10d2d6e8b04db8ab5f77e8e80e7c4c
- 2012-06-20 13:07:23,064 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for ff10d2d6e8b04db8ab5f77e8e80e7c4c
- 2012-06-20 13:07:23,085 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of DK,\x00\x00\x00\x00\x00\x00g\xB6\x00\x00\x03\xD1\x81\x16\xC0\x00,1338414715821.1a85e42ba41c54a407384e2fbfffd761. into ff10d2d6e8b04db8ab5f77e8e80e7c4c, size=58.3m; total size for store is 58.3m
- 2012-06-20 13:07:23,085 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=DK,\x00\x00\x00\x00\x00\x00g\xB6\x00\x00\x03\xD1\x81\x16\xC0\x00,1338414715821.1a85e42ba41c54a407384e2fbfffd761., storeName=kcf, fileCount=2, fileSize=58.4m, priority=5, time=10786950168337643; duration=5sec
- 2012-06-20 13:07:23,086 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(7:8), split_queue=0
- 2012-06-20 13:07:23,086 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region NO,\x00\x00\x00\x00\x00\x00c\x0E\x00\x00\x03\xE1`PL\x00,1338409231790.3737265bc491e39a3b125b63615479cd.
- 2012-06-20 13:07:23,086 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of NO,\x00\x00\x00\x00\x00\x00c\x0E\x00\x00\x03\xE1`PL\x00,1338409231790.3737265bc491e39a3b125b63615479cd. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp, seqid=14779325, totalSize=62.5m
- 2012-06-20 13:07:23,086 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/kcf/9c89d3ae387a4a8cb59e5bd392cf6fef, keycount=111568, bloomtype=ROW, size=62.5m, encoding=NONE
- 2012-06-20 13:07:23,086 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/kcf/c2a8dc782ce147319f348c6236d07469, keycount=120, bloomtype=ROW, size=30.7k, encoding=NONE
- 2012-06-20 13:07:23,114 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp/35ea2272ea754c378ea487f0a2a3d41e with permission:rwxrwxrwx
- 2012-06-20 13:07:23,119 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:23,120 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp/35ea2272ea754c378ea487f0a2a3d41e: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:23,120 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp/35ea2272ea754c378ea487f0a2a3d41e: CompoundBloomFilterWriter
- 2012-06-20 13:07:34,648 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #1 from [109306 max keys, 131072 bytes] to [3415 max keys, 4096 bytes]
- 2012-06-20 13:07:34,803 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp/35ea2272ea754c378ea487f0a2a3d41e)
- 2012-06-20 13:07:34,808 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 35ea2272ea754c378ea487f0a2a3d41e
- 2012-06-20 13:07:34,808 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/.tmp/35ea2272ea754c378ea487f0a2a3d41e to hdfs://hmaster101.mentacapital.local:8020/hbase/NO/3737265bc491e39a3b125b63615479cd/kcf/35ea2272ea754c378ea487f0a2a3d41e
- 2012-06-20 13:07:34,818 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 35ea2272ea754c378ea487f0a2a3d41e
- 2012-06-20 13:07:34,838 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of NO,\x00\x00\x00\x00\x00\x00c\x0E\x00\x00\x03\xE1`PL\x00,1338409231790.3737265bc491e39a3b125b63615479cd. into 35ea2272ea754c378ea487f0a2a3d41e, size=62.5m; total size for store is 62.5m
- 2012-06-20 13:07:34,838 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=NO,\x00\x00\x00\x00\x00\x00c\x0E\x00\x00\x03\xE1`PL\x00,1338409231790.3737265bc491e39a3b125b63615479cd., storeName=kcf, fileCount=2, fileSize=62.5m, priority=5, time=10786950168875387; duration=11sec
- 2012-06-20 13:07:34,838 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(7:7), split_queue=0
- 2012-06-20 13:07:34,839 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region DE,,1338345079921.fd87805a0ef1867e7e724b94eef47bd2.
- 2012-06-20 13:07:34,839 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of DE,,1338345079921.fd87805a0ef1867e7e724b94eef47bd2. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp, seqid=14779355, totalSize=68.4m
- 2012-06-20 13:07:34,839 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/kcf/b6d37ee64f644079ab1cb44283ddb1b1, keycount=33030, bloomtype=ROW, size=68.3m, encoding=NONE
- 2012-06-20 13:07:34,839 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/kcf/7636df20b10544fd9fa4ecddf8867add, keycount=69, bloomtype=ROW, size=78.3k, encoding=NONE
- 2012-06-20 13:07:34,883 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp/533ee3ce613a472a8947d030243803f3 with permission:rwxrwxrwx
- 2012-06-20 13:07:34,889 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:34,889 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp/533ee3ce613a472a8947d030243803f3: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:34,889 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp/533ee3ce613a472a8947d030243803f3: CompoundBloomFilterWriter
- 2012-06-20 13:07:35,878 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
- 2012-06-20 13:07:41,489 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:07:41,506 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp/533ee3ce613a472a8947d030243803f3)
- 2012-06-20 13:07:41,511 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 533ee3ce613a472a8947d030243803f3
- 2012-06-20 13:07:41,511 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/.tmp/533ee3ce613a472a8947d030243803f3 to hdfs://hmaster101.mentacapital.local:8020/hbase/DE/fd87805a0ef1867e7e724b94eef47bd2/kcf/533ee3ce613a472a8947d030243803f3
- 2012-06-20 13:07:41,521 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 533ee3ce613a472a8947d030243803f3
- 2012-06-20 13:07:41,537 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of DE,,1338345079921.fd87805a0ef1867e7e724b94eef47bd2. into 533ee3ce613a472a8947d030243803f3, size=68.4m; total size for store is 68.4m
- 2012-06-20 13:07:41,537 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=DE,,1338345079921.fd87805a0ef1867e7e724b94eef47bd2., storeName=kcf, fileCount=2, fileSize=68.4m, priority=5, time=10786950169100628; duration=6sec
- 2012-06-20 13:07:41,537 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(7:6), split_queue=0
- 2012-06-20 13:07:41,537 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region NO,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04!\xAA\xFD\xED\x80,1338409231790.74c54a2380c6bfcc7b73b7a65468372c.
- 2012-06-20 13:07:41,537 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of NO,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04!\xAA\xFD\xED\x80,1338409231790.74c54a2380c6bfcc7b73b7a65468372c. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp, seqid=14779351, totalSize=96.9m
- 2012-06-20 13:07:41,537 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/kcf/584532937325425aa952109954431f4c, keycount=165070, bloomtype=ROW, size=96.7m, encoding=NONE
- 2012-06-20 13:07:41,537 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/kcf/14906f34769842d7ac27d6dc6653c242, keycount=245, bloomtype=ROW, size=109.8k, encoding=NONE
- 2012-06-20 13:07:41,555 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp/1a62d29a4dbc47aaaea743819cd01a45 with permission:rwxrwxrwx
- 2012-06-20 13:07:41,561 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:41,561 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp/1a62d29a4dbc47aaaea743819cd01a45: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:41,561 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp/1a62d29a4dbc47aaaea743819cd01a45: CompoundBloomFilterWriter
- 2012-06-20 13:07:49,129 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #1 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:07:49,336 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp/0ab972b243374f6295c617a11e627c34)
- 2012-06-20 13:07:49,342 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 0ab972b243374f6295c617a11e627c34
- 2012-06-20 13:07:49,343 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/.tmp/0ab972b243374f6295c617a11e627c34 to hdfs://hmaster101.mentacapital.local:8020/hbase/FR/63c201fb3b7b041205fefc5a6090f484/kcf/0ab972b243374f6295c617a11e627c34
- 2012-06-20 13:07:49,353 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 0ab972b243374f6295c617a11e627c34
- 2012-06-20 13:07:49,372 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of FR,\x00\x00\x00\x00\x00\x00xl\x00\x00\x045\xC3\xAE\xF1\x80,1338917904608.63c201fb3b7b041205fefc5a6090f484. into 0ab972b243374f6295c617a11e627c34, size=270.3m; total size for store is 270.3m
- 2012-06-20 13:07:49,372 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=FR,\x00\x00\x00\x00\x00\x00xl\x00\x00\x045\xC3\xAE\xF1\x80,1338917904608.63c201fb3b7b041205fefc5a6090f484., storeName=kcf, fileCount=2, fileSize=270.3m, priority=5, time=10786950166376469; duration=32sec
- 2012-06-20 13:07:49,372 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(7:5), split_queue=0
- 2012-06-20 13:07:49,372 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region europe4_iu,\x00\x00\x00\x00\x00\x006\xA9\x00\x00\x04-\xE0\xF2\x11\x80,1338360607321.70dd58ed425461ec6909a72a47ce3dc0.
- 2012-06-20 13:07:49,373 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x006\xA9\x00\x00\x04-\xE0\xF2\x11\x80,1338360607321.70dd58ed425461ec6909a72a47ce3dc0. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp, seqid=14782087, totalSize=345.5m
- 2012-06-20 13:07:49,374 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/kcf/e903a93aa05648bba2fd434dea958d55, keycount=33182, bloomtype=ROW, size=345.5m, encoding=NONE
- 2012-06-20 13:07:49,374 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/kcf/b71478a9942949cdbeb2008c3aed860e, keycount=1, bloomtype=ROW, size=13.3k, encoding=NONE
- 2012-06-20 13:07:49,400 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp/6fa3f4d966574c08ab29d57c0bf0f816 with permission:rwxrwxrwx
- 2012-06-20 13:07:49,404 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:49,404 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp/6fa3f4d966574c08ab29d57c0bf0f816: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:49,404 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp/6fa3f4d966574c08ab29d57c0bf0f816: CompoundBloomFilterWriter
- 2012-06-20 13:07:51,831 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp/1a62d29a4dbc47aaaea743819cd01a45)
- 2012-06-20 13:07:51,837 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 1a62d29a4dbc47aaaea743819cd01a45
- 2012-06-20 13:07:51,837 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/.tmp/1a62d29a4dbc47aaaea743819cd01a45 to hdfs://hmaster101.mentacapital.local:8020/hbase/NO/74c54a2380c6bfcc7b73b7a65468372c/kcf/1a62d29a4dbc47aaaea743819cd01a45
- 2012-06-20 13:07:51,846 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 1a62d29a4dbc47aaaea743819cd01a45
- 2012-06-20 13:07:51,864 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of NO,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04!\xAA\xFD\xED\x80,1338409231790.74c54a2380c6bfcc7b73b7a65468372c. into 1a62d29a4dbc47aaaea743819cd01a45, size=96.8m; total size for store is 96.8m
- 2012-06-20 13:07:51,864 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=NO,\x00\x00\x00\x00\x00\x00v`\x00\x00\x04!\xAA\xFD\xED\x80,1338409231790.74c54a2380c6bfcc7b73b7a65468372c., storeName=kcf, fileCount=2, fileSize=96.9m, priority=5, time=10786950170506427; duration=10sec
- 2012-06-20 13:07:51,864 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(6:5), split_queue=0
- 2012-06-20 13:07:51,864 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region FR,\x00\x00\x00\x00\x00\x00.\x8A\x00\x00\x03\xE0\xFEwx\x00,1338355599011.e07f7444aff4dc98e4aebeabe85aae9b.
- 2012-06-20 13:07:51,866 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of FR,\x00\x00\x00\x00\x00\x00.\x8A\x00\x00\x03\xE0\xFEwx\x00,1338355599011.e07f7444aff4dc98e4aebeabe85aae9b. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp, seqid=14779359, totalSize=63.5m
- 2012-06-20 13:07:51,866 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/kcf/caa923918ca648ce804a5d884f4713dd, keycount=34067, bloomtype=ROW, size=63.5m, encoding=NONE
- 2012-06-20 13:07:51,866 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/kcf/cf694fa1fa9b4d6bacb12876b075ec6c, keycount=16, bloomtype=ROW, size=36.6k, encoding=NONE
- 2012-06-20 13:07:51,897 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp/4a20a63aef3d404eb8e712cb145e3fe0 with permission:rwxrwxrwx
- 2012-06-20 13:07:51,904 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:07:51,904 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp/4a20a63aef3d404eb8e712cb145e3fe0: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:07:51,904 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp/4a20a63aef3d404eb8e712cb145e3fe0: CompoundBloomFilterWriter
- 2012-06-20 13:08:01,413 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:08:01,612 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp/4a20a63aef3d404eb8e712cb145e3fe0)
- 2012-06-20 13:08:01,617 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 4a20a63aef3d404eb8e712cb145e3fe0
- 2012-06-20 13:08:01,617 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/.tmp/4a20a63aef3d404eb8e712cb145e3fe0 to hdfs://hmaster101.mentacapital.local:8020/hbase/FR/e07f7444aff4dc98e4aebeabe85aae9b/kcf/4a20a63aef3d404eb8e712cb145e3fe0
- 2012-06-20 13:08:01,627 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 4a20a63aef3d404eb8e712cb145e3fe0
- 2012-06-20 13:08:01,647 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of FR,\x00\x00\x00\x00\x00\x00.\x8A\x00\x00\x03\xE0\xFEwx\x00,1338355599011.e07f7444aff4dc98e4aebeabe85aae9b. into 4a20a63aef3d404eb8e712cb145e3fe0, size=63.5m; total size for store is 63.5m
- 2012-06-20 13:08:01,647 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=FR,\x00\x00\x00\x00\x00\x00.\x8A\x00\x00\x03\xE0\xFEwx\x00,1338355599011.e07f7444aff4dc98e4aebeabe85aae9b., storeName=kcf, fileCount=2, fileSize=63.5m, priority=5, time=10786950170766211; duration=9sec
- 2012-06-20 13:08:01,647 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(6:4), split_queue=0
- 2012-06-20 13:08:01,647 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region BE,\x00\x00\x00\x00\x00\x00p\x07\x00\x00\x04\x0B\xD2z\x94\x00,1338402695197.2011d1a1ef22daceebbbf18fca0ce337.
- 2012-06-20 13:08:01,647 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of BE,\x00\x00\x00\x00\x00\x00p\x07\x00\x00\x04\x0B\xD2z\x94\x00,1338402695197.2011d1a1ef22daceebbbf18fca0ce337. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp, seqid=14779319, totalSize=105.9m
- 2012-06-20 13:08:01,648 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/kcf/dd06e8c810cb403681798179f8cbef26, keycount=209496, bloomtype=ROW, size=105.8m, encoding=NONE
- 2012-06-20 13:08:01,648 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/kcf/49cf5c85c01a4aaf98cf29d678c34eb5, keycount=175, bloomtype=ROW, size=110.1k, encoding=NONE
- 2012-06-20 13:08:01,680 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp/b56d2cde91424e878bd881e2e389ecae with permission:rwxrwxrwx
- 2012-06-20 13:08:01,685 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:08:01,685 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp/b56d2cde91424e878bd881e2e389ecae: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:08:01,685 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp/b56d2cde91424e878bd881e2e389ecae: CompoundBloomFilterWriter
- 2012-06-20 13:08:14,072 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp/b56d2cde91424e878bd881e2e389ecae)
- 2012-06-20 13:08:14,078 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for b56d2cde91424e878bd881e2e389ecae
- 2012-06-20 13:08:14,078 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/.tmp/b56d2cde91424e878bd881e2e389ecae to hdfs://hmaster101.mentacapital.local:8020/hbase/BE/2011d1a1ef22daceebbbf18fca0ce337/kcf/b56d2cde91424e878bd881e2e389ecae
- 2012-06-20 13:08:14,087 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for b56d2cde91424e878bd881e2e389ecae
- 2012-06-20 13:08:14,103 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of BE,\x00\x00\x00\x00\x00\x00p\x07\x00\x00\x04\x0B\xD2z\x94\x00,1338402695197.2011d1a1ef22daceebbbf18fca0ce337. into b56d2cde91424e878bd881e2e389ecae, size=105.9m; total size for store is 105.9m
- 2012-06-20 13:08:14,103 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=BE,\x00\x00\x00\x00\x00\x00p\x07\x00\x00\x04\x0B\xD2z\x94\x00,1338402695197.2011d1a1ef22daceebbbf18fca0ce337., storeName=kcf, fileCount=2, fileSize=105.9m, priority=5, time=10786950171188567; duration=12sec
- 2012-06-20 13:08:14,103 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(6:3), split_queue=0
- 2012-06-20 13:08:14,103 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region CH,,1338350142110.159944a3482132c82229fc55ee014195.
- 2012-06-20 13:08:14,103 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of CH,,1338350142110.159944a3482132c82229fc55ee014195. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp, seqid=14779315, totalSize=66.1m
- 2012-06-20 13:08:14,103 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/kcf/e1f913963de241aca17d7200eb01f6e0, keycount=53977, bloomtype=ROW, size=66.1m, encoding=NONE
- 2012-06-20 13:08:14,103 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/kcf/53cf6071abfd4e1e86ee8fa708d4cd49, keycount=77, bloomtype=ROW, size=57.3k, encoding=NONE
- 2012-06-20 13:08:14,142 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp/f4a4d10d0d4a430e9fdd9f75d7e9e3d8 with permission:rwxrwxrwx
- 2012-06-20 13:08:14,146 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:08:14,146 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp/f4a4d10d0d4a430e9fdd9f75d7e9e3d8: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:08:14,146 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp/f4a4d10d0d4a430e9fdd9f75d7e9e3d8: CompoundBloomFilterWriter
- 2012-06-20 13:08:20,689 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:08:20,702 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp/f4a4d10d0d4a430e9fdd9f75d7e9e3d8)
- 2012-06-20 13:08:20,708 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for f4a4d10d0d4a430e9fdd9f75d7e9e3d8
- 2012-06-20 13:08:20,708 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/.tmp/f4a4d10d0d4a430e9fdd9f75d7e9e3d8 to hdfs://hmaster101.mentacapital.local:8020/hbase/CH/159944a3482132c82229fc55ee014195/kcf/f4a4d10d0d4a430e9fdd9f75d7e9e3d8
- 2012-06-20 13:08:20,718 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for f4a4d10d0d4a430e9fdd9f75d7e9e3d8
- 2012-06-20 13:08:20,738 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of CH,,1338350142110.159944a3482132c82229fc55ee014195. into f4a4d10d0d4a430e9fdd9f75d7e9e3d8, size=66.1m; total size for store is 66.1m
- 2012-06-20 13:08:20,738 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=CH,,1338350142110.159944a3482132c82229fc55ee014195., storeName=kcf, fileCount=2, fileSize=66.1m, priority=5, time=10786950171424042; duration=6sec
- 2012-06-20 13:08:20,738 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(6:2), split_queue=0
- 2012-06-20 13:08:20,738 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region IT,\x00\x00\x00\x00\x00\x00+\xEC\x00\x00\x04\x15\x94&\xE0\x00,1338356247503.37233ae8b105114d795cc2a33104a86b.
- 2012-06-20 13:08:20,738 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of IT,\x00\x00\x00\x00\x00\x00+\xEC\x00\x00\x04\x15\x94&\xE0\x00,1338356247503.37233ae8b105114d795cc2a33104a86b. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp, seqid=14779338, totalSize=63.0m
- 2012-06-20 13:08:20,738 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/kcf/ac15f29b5b3f47ec99116f13c537d98f, keycount=44122, bloomtype=ROW, size=63.0m, encoding=NONE
- 2012-06-20 13:08:20,738 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/kcf/a8bd3ea1c94f4cf2878ceab681c218b2, keycount=19, bloomtype=ROW, size=23.5k, encoding=NONE
- 2012-06-20 13:08:20,774 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp/99f64207e21f496f863d41a6323b9208 with permission: rwxrwxrwx
- 2012-06-20 13:08:20,780 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:08:20,780 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp/99f64207e21f496f863d41a6323b9208: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:08:20,780 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp/99f64207e21f496f863d41a6323b9208: CompoundBloomFilterWriter
- 2012-06-20 13:08:24,949 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:08:25,047 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp/6fa3f4d966574c08ab29d57c0bf0f816)
- 2012-06-20 13:08:25,053 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 6fa3f4d966574c08ab29d57c0bf0f816
- 2012-06-20 13:08:25,054 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/.tmp/6fa3f4d966574c08ab29d57c0bf0f816 to hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/70dd58ed425461ec6909a72a47ce3dc0/kcf/6fa3f4d966574c08ab29d57c0bf0f816
- 2012-06-20 13:08:25,064 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 6fa3f4d966574c08ab29d57c0bf0f816
- 2012-06-20 13:08:25,083 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x006\xA9\x00\x00\x04-\xE0\xF2\x11\x80,1338360607321.70dd58ed425461ec6909a72a47ce3dc0. into 6fa3f4d966574c08ab29d57c0bf0f816, size=345.5m; total size for store is 345.5m
- 2012-06-20 13:08:25,083 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=europe4_iu,\x00\x00\x00\x00\x00\x006\xA9\x00\x00\x04-\xE0\xF2\x11\x80,1338360607321.70dd58ed425461ec6909a72a47ce3dc0., storeName=kcf, fileCount=2, fileSize=345.5m, priority=5, time=10786950167594925; duration=35sec
- 2012-06-20 13:08:25,083 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(6:1), split_queue=0
- 2012-06-20 13:08:25,083 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region IT,\x00\x00\x00\x00\x00\x00d$\x00\x00\x03\xECk\\xA9\x80,1338399084772.7204e2535056af80cb7496d75feb3c68.
- 2012-06-20 13:08:25,083 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of IT,\x00\x00\x00\x00\x00\x00d$\x00\x00\x03\xECk\\xA9\x80,1338399084772.7204e2535056af80cb7496d75feb3c68. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp, seqid=14779352, totalSize=430.1m
- 2012-06-20 13:08:25,083 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/kcf/759f89010c794c4fba6a0c0c84412e98, keycount=277693, bloomtype=ROW, size=429.8m, encoding=NONE
- 2012-06-20 13:08:25,083 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/kcf/3beb663a5a7c43fdbb3344afd6fc5cc2, keycount=316, bloomtype=ROW, size=277.9k, encoding=NONE
- 2012-06-20 13:08:25,109 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp/16931189811e49449bb7e742f0bd4314 with permission: rwxrwxrwx
- 2012-06-20 13:08:25,116 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:08:25,116 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp/16931189811e49449bb7e742f0bd4314: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:08:25,116 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp/16931189811e49449bb7e742f0bd4314: CompoundBloomFilterWriter
- 2012-06-20 13:08:27,182 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:08:27,198 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp/99f64207e21f496f863d41a6323b9208)
- 2012-06-20 13:08:27,204 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 99f64207e21f496f863d41a6323b9208
- 2012-06-20 13:08:27,204 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/.tmp/99f64207e21f496f863d41a6323b9208 to hdfs://hmaster101.mentacapital.local:8020/hbase/IT/37233ae8b105114d795cc2a33104a86b/kcf/99f64207e21f496f863d41a6323b9208
- 2012-06-20 13:08:27,213 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 99f64207e21f496f863d41a6323b9208
- 2012-06-20 13:08:27,230 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of IT,\x00\x00\x00\x00\x00\x00+\xEC\x00\x00\x04\x15\x94&\xE0\x00,1338356247503.37233ae8b105114d795cc2a33104a86b. into 99f64207e21f496f863d41a6323b9208, size=63.0m; total size for store is 63.0m
- 2012-06-20 13:08:27,230 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=IT,\x00\x00\x00\x00\x00\x00+\xEC\x00\x00\x04\x15\x94&\xE0\x00,1338356247503.37233ae8b105114d795cc2a33104a86b., storeName=kcf, fileCount=2, fileSize=63.0m, priority=5, time=10786950171640061; duration=6sec
- 2012-06-20 13:08:27,230 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(5:1), split_queue=0
- 2012-06-20 13:08:27,230 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region SE,,1338350797178.9b22c362ed151972378864cf8a0cb9b9.
- 2012-06-20 13:08:27,231 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of SE,,1338350797178.9b22c362ed151972378864cf8a0cb9b9. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp, seqid=14779357, totalSize=60.1m
- 2012-06-20 13:08:27,231 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/kcf/39e3366c48e747d6852aaa9b73c5df12, keycount=47028, bloomtype=ROW, size=60.0m, encoding=NONE
- 2012-06-20 13:08:27,231 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/kcf/eee57c4745b3445bb350603a9f8e17eb, keycount=74, bloomtype=ROW, size=52.8k, encoding=NONE
- 2012-06-20 13:08:27,256 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp/99ddbf8c1c2a4c98873c6ffea890b716 with permission: rwxrwxrwx
- 2012-06-20 13:08:27,262 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:08:27,262 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp/99ddbf8c1c2a4c98873c6ffea890b716: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:08:27,262 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp/99ddbf8c1c2a4c98873c6ffea890b716: CompoundBloomFilterWriter
- 2012-06-20 13:08:33,251 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:08:33,266 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp/99ddbf8c1c2a4c98873c6ffea890b716)
- 2012-06-20 13:08:33,290 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 99ddbf8c1c2a4c98873c6ffea890b716
- 2012-06-20 13:08:33,290 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/.tmp/99ddbf8c1c2a4c98873c6ffea890b716 to hdfs://hmaster101.mentacapital.local:8020/hbase/SE/9b22c362ed151972378864cf8a0cb9b9/kcf/99ddbf8c1c2a4c98873c6ffea890b716
- 2012-06-20 13:08:33,297 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 99ddbf8c1c2a4c98873c6ffea890b716
- 2012-06-20 13:08:33,314 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of SE,,1338350797178.9b22c362ed151972378864cf8a0cb9b9. into 99ddbf8c1c2a4c98873c6ffea890b716, size=60.1m; total size for store is 60.1m
- 2012-06-20 13:08:33,314 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=SE,,1338350797178.9b22c362ed151972378864cf8a0cb9b9., storeName=kcf, fileCount=2, fileSize=60.1m, priority=5, time=10786950172925898; duration=6sec
- 2012-06-20 13:08:33,314 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(5:0), split_queue=0
- 2012-06-20 13:09:05,332 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp/16931189811e49449bb7e742f0bd4314)
- 2012-06-20 13:09:05,338 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 16931189811e49449bb7e742f0bd4314
- 2012-06-20 13:09:05,338 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/.tmp/16931189811e49449bb7e742f0bd4314 to hdfs://hmaster101.mentacapital.local:8020/hbase/IT/7204e2535056af80cb7496d75feb3c68/kcf/16931189811e49449bb7e742f0bd4314
- 2012-06-20 13:09:05,350 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 16931189811e49449bb7e742f0bd4314
- 2012-06-20 13:09:05,368 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of IT,\x00\x00\x00\x00\x00\x00d$\x00\x00\x03\xECk\\xA9\x80,1338399084772.7204e2535056af80cb7496d75feb3c68. into 16931189811e49449bb7e742f0bd4314, size=430.2m; total size for store is 430.2m
- 2012-06-20 13:09:05,369 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=IT,\x00\x00\x00\x00\x00\x00d$\x00\x00\x03\xECk\\xA9\x80,1338399084772.7204e2535056af80cb7496d75feb3c68., storeName=kcf, fileCount=2, fileSize=430.1m, priority=5, time=10786950168086466; duration=40sec
- 2012-06-20 13:09:05,369 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(5:0), split_queue=0
- 2012-06-20 13:09:05,369 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region ussc6_iu,\x00\x00\x00\x00\x00\x00zj\x00\x00\x04\x17?_\xC5\x80,1338407904463.c0b5c9b18e299d0d5028ebc6dcc655e9.
- 2012-06-20 13:09:05,369 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00zj\x00\x00\x04\x17?_\xC5\x80,1338407904463.c0b5c9b18e299d0d5028ebc6dcc655e9. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp, seqid=14748421, totalSize=347.3m
- 2012-06-20 13:09:05,369 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/kcf/1676fa3312d74223b732c83dfcc22e6b, keycount=44673, bloomtype=ROW, size=347.3m, encoding=NONE
- 2012-06-20 13:09:05,369 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/kcf/c0c60099035e497a98578dbaf052fc3f, keycount=185, bloomtype=ROW, size=7.2k, encoding=NONE
- 2012-06-20 13:09:05,438 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp/29024b4745d64fdab4105da2effdd214 with permission: rwxrwxrwx
- 2012-06-20 13:09:05,445 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:09:05,445 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp/29024b4745d64fdab4105da2effdd214: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:09:05,445 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp/29024b4745d64fdab4105da2effdd214: CompoundBloomFilterWriter
- 2012-06-20 13:09:20,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=3.08 GB, free=938.83 MB, max=3.99 GB, blocks=3162, accesses=116218760, hits=116090607, hitRatio=99.88%, cachingAccesses=116101815, cachingHits=116083708, cachingHitsRatio=99.98%, evictions=17, evicted=14945, evictedPerRun=879.11767578125
- 2012-06-20 13:09:38,735 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:09:38,798 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp/29024b4745d64fdab4105da2effdd214)
- 2012-06-20 13:09:38,805 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 29024b4745d64fdab4105da2effdd214
- 2012-06-20 13:09:38,805 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/.tmp/29024b4745d64fdab4105da2effdd214 to hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/c0b5c9b18e299d0d5028ebc6dcc655e9/kcf/29024b4745d64fdab4105da2effdd214
- 2012-06-20 13:09:38,815 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 29024b4745d64fdab4105da2effdd214
- 2012-06-20 13:09:38,834 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00zj\x00\x00\x04\x17?_\xC5\x80,1338407904463.c0b5c9b18e299d0d5028ebc6dcc655e9. into 29024b4745d64fdab4105da2effdd214, size=347.3m; total size for store is 347.3m
- 2012-06-20 13:09:38,834 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00zj\x00\x00\x04\x17?_\xC5\x80,1338407904463.c0b5c9b18e299d0d5028ebc6dcc655e9., storeName=kcf, fileCount=2, fileSize=347.3m, priority=5, time=10786950168624090; duration=33sec
- 2012-06-20 13:09:38,834 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(4:0), split_queue=0
- 2012-06-20 13:09:38,834 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region europe4_iu,\x00\x00\x00\x00\x00\x00+\x97\x00\x00\x047\xB2\x11q\x80,1338351822635.a1612e6cbd7a709dfb1ab53182e45bbe.
- 2012-06-20 13:09:38,834 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00+\x97\x00\x00\x047\xB2\x11q\x80,1338351822635.a1612e6cbd7a709dfb1ab53182e45bbe. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp, seqid=14780727, totalSize=342.2m
- 2012-06-20 13:09:38,835 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/kcf/0722f78603cd42e2966650590df91fe3, keycount=27872, bloomtype=ROW, size=342.0m, encoding=NONE
- 2012-06-20 13:09:38,835 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/kcf/da9d4bea39ce445cb0217d806a3b4cff, keycount=17, bloomtype=ROW, size=235.7k, encoding=NONE
- 2012-06-20 13:09:38,871 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp/fd2cdffbaae243e59bd22624ce7fb1bd with permission: rwxrwxrwx
- 2012-06-20 13:09:38,879 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:09:38,879 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp/fd2cdffbaae243e59bd22624ce7fb1bd: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:09:38,879 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp/fd2cdffbaae243e59bd22624ce7fb1bd: CompoundBloomFilterWriter
- 2012-06-20 13:10:05,755 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [54653 max keys, 65536 bytes]
- 2012-06-20 13:10:05,770 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp/fd2cdffbaae243e59bd22624ce7fb1bd)
- 2012-06-20 13:10:05,777 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for fd2cdffbaae243e59bd22624ce7fb1bd
- 2012-06-20 13:10:05,777 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/.tmp/fd2cdffbaae243e59bd22624ce7fb1bd to hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/a1612e6cbd7a709dfb1ab53182e45bbe/kcf/fd2cdffbaae243e59bd22624ce7fb1bd
- 2012-06-20 13:10:05,786 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for fd2cdffbaae243e59bd22624ce7fb1bd
- 2012-06-20 13:10:05,805 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00+\x97\x00\x00\x047\xB2\x11q\x80,1338351822635.a1612e6cbd7a709dfb1ab53182e45bbe. into fd2cdffbaae243e59bd22624ce7fb1bd, size=342.2m; total size for store is 342.2m
- 2012-06-20 13:10:05,805 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00+\x97\x00\x00\x047\xB2\x11q\x80,1338351822635.a1612e6cbd7a709dfb1ab53182e45bbe., storeName=kcf, fileCount=2, fileSize=342.2m, priority=5, time=10786950169297168; duration=26sec
- 2012-06-20 13:10:05,805 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(3:0), split_queue=0
- 2012-06-20 13:10:05,805 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region europe4_iu,\x00\x00\x00\x00\x00\x00#\xA4\x00\x00\x03\xEC\xCD5}\x80,1338341828356.59b8f77b8dcf72e780a957c74550ea5c.
- 2012-06-20 13:10:05,806 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00#\xA4\x00\x00\x03\xEC\xCD5}\x80,1338341828356.59b8f77b8dcf72e780a957c74550ea5c. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp, seqid=14782085, totalSize=373.5m
- 2012-06-20 13:10:05,806 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/kcf/54957f04d65b49a880b630dc99dc2d64, keycount=27272, bloomtype=ROW, size=373.5m, encoding=NONE
- 2012-06-20 13:10:05,806 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/kcf/21a403402b9f49efbf99f960c330a4f7, keycount=1, bloomtype=ROW, size=1011.0, encoding=NONE
- 2012-06-20 13:10:05,824 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp/10c36b9e2b7b45938d8ac3f1304ba9f0 with permission: rwxrwxrwx
- 2012-06-20 13:10:05,829 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:10:05,829 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp/10c36b9e2b7b45938d8ac3f1304ba9f0: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:10:05,829 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp/10c36b9e2b7b45938d8ac3f1304ba9f0: CompoundBloomFilterWriter
- 2012-06-20 13:10:38,954 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:10:39,073 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp/10c36b9e2b7b45938d8ac3f1304ba9f0)
- 2012-06-20 13:10:39,080 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 10c36b9e2b7b45938d8ac3f1304ba9f0
- 2012-06-20 13:10:39,080 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/.tmp/10c36b9e2b7b45938d8ac3f1304ba9f0 to hdfs://hmaster101.mentacapital.local:8020/hbase/europe4_iu/59b8f77b8dcf72e780a957c74550ea5c/kcf/10c36b9e2b7b45938d8ac3f1304ba9f0
- 2012-06-20 13:10:39,093 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 10c36b9e2b7b45938d8ac3f1304ba9f0
- 2012-06-20 13:10:39,112 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of europe4_iu,\x00\x00\x00\x00\x00\x00#\xA4\x00\x00\x03\xEC\xCD5}\x80,1338341828356.59b8f77b8dcf72e780a957c74550ea5c. into 10c36b9e2b7b45938d8ac3f1304ba9f0, size=373.5m; total size for store is 373.5m
- 2012-06-20 13:10:39,113 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=europe4_iu,\x00\x00\x00\x00\x00\x00#\xA4\x00\x00\x03\xEC\xCD5}\x80,1338341828356.59b8f77b8dcf72e780a957c74550ea5c., storeName=kcf, fileCount=2, fileSize=373.5m, priority=5, time=10786950169786863; duration=33sec
- 2012-06-20 13:10:39,113 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(2:0), split_queue=0
- 2012-06-20 13:10:39,113 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region ussc6_iu,\x00\x00\x00\x00\x00\x00v\xC0\x00\x00\x03\xEFu3\xDC\x00,1338392572889.2a478f868a2c2cf9e355efd9c8e278d4.
- 2012-06-20 13:10:39,113 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00v\xC0\x00\x00\x03\xEFu3\xDC\x00,1338392572889.2a478f868a2c2cf9e355efd9c8e278d4. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp, seqid=14732399, totalSize=346.0m
- 2012-06-20 13:10:39,113 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/kcf/ab67026f3e714cc1a0efe16e5b90c0fd, keycount=20113, bloomtype=ROW, size=345.9m, encoding=NONE
- 2012-06-20 13:10:39,113 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/kcf/db717525974d4e538dd98e7577069c8f, keycount=1, bloomtype=ROW, size=19.0k, encoding=NONE
- 2012-06-20 13:10:39,138 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file: hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp/93500b418694400daa4eeb85122bca90 with permission: rwxrwxrwx
- 2012-06-20 13:10:39,144 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:10:39,144 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp/93500b418694400daa4eeb85122bca90: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:10:39,144 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp/93500b418694400daa4eeb85122bca90: CompoundBloomFilterWriter
- 2012-06-20 13:11:08,541 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:11:08,621 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp/93500b418694400daa4eeb85122bca90)
- 2012-06-20 13:11:08,627 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 93500b418694400daa4eeb85122bca90
- 2012-06-20 13:11:08,627 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/.tmp/93500b418694400daa4eeb85122bca90 to hdfs://hmaster101.mentacapital.local:8020/hbase/ussc6_iu/2a478f868a2c2cf9e355efd9c8e278d4/kcf/93500b418694400daa4eeb85122bca90
- 2012-06-20 13:11:08,638 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 93500b418694400daa4eeb85122bca90
- 2012-06-20 13:11:08,658 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of ussc6_iu,\x00\x00\x00\x00\x00\x00v\xC0\x00\x00\x03\xEFu3\xDC\x00,1338392572889.2a478f868a2c2cf9e355efd9c8e278d4. into 93500b418694400daa4eeb85122bca90, size=345.9m; total size for store is 345.9m
- 2012-06-20 13:11:08,658 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=ussc6_iu,\x00\x00\x00\x00\x00\x00v\xC0\x00\x00\x03\xEFu3\xDC\x00,1338392572889.2a478f868a2c2cf9e355efd9c8e278d4., storeName=kcf, fileCount=2, fileSize=346.0m, priority=5, time=10786950170066403; duration=29sec
- 2012-06-20 13:11:08,658 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(1:0), split_queue=0
- 2012-06-20 13:11:08,658 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on kcf in region CH,\x00\x00\x00\x00\x00\x00p&\x00\x00\x048\:<\x00,1339401258130.343f77c46d55a9f0ada62b8043454903.
- 2012-06-20 13:11:08,658 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2 file(s) in kcf of CH,\x00\x00\x00\x00\x00\x00p&\x00\x00\x048\:<\x00,1339401258130.343f77c46d55a9f0ada62b8043454903. into tmpdir=hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp, seqid=14779330, totalSize=260.8m
- 2012-06-20 13:11:08,658 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/kcf/64a4929555ba47f7843e8b7b80c510fb, keycount=207463, bloomtype=ROW, size=260.6m, encoding=NONE
- 2012-06-20 13:11:08,658 DEBUG org.apache.hadoop.hbase.regionserver.Store: Compacting hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/kcf/9cb62a6bd8ee43ad964905012de17876, keycount=309, bloomtype=ROW, size=246.1k, encoding=NONE
- 2012-06-20 13:11:08,676 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp/1c3a2becedf14dc7bf801119a8908f49 with permission:rwxrwxrwx
- 2012-06-20 13:11:08,682 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:11:08,682 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp/1c3a2becedf14dc7bf801119a8908f49: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:11:08,682 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp/1c3a2becedf14dc7bf801119a8908f49: CompoundBloomFilterWriter
- 2012-06-20 13:11:34,215 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and NO DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp/1c3a2becedf14dc7bf801119a8908f49)
- 2012-06-20 13:11:34,221 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 1c3a2becedf14dc7bf801119a8908f49
- 2012-06-20 13:11:34,221 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/.tmp/1c3a2becedf14dc7bf801119a8908f49 to hdfs://hmaster101.mentacapital.local:8020/hbase/CH/343f77c46d55a9f0ada62b8043454903/kcf/1c3a2becedf14dc7bf801119a8908f49
- 2012-06-20 13:11:34,233 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 1c3a2becedf14dc7bf801119a8908f49
- 2012-06-20 13:11:34,251 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 2 file(s) in kcf of CH,\x00\x00\x00\x00\x00\x00p&\x00\x00\x048\:<\x00,1339401258130.343f77c46d55a9f0ada62b8043454903. into 1c3a2becedf14dc7bf801119a8908f49, size=260.8m; total size for store is 260.8m
- 2012-06-20 13:11:34,251 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=CH,\x00\x00\x00\x00\x00\x00p&\x00\x00\x048\:<\x00,1339401258130.343f77c46d55a9f0ada62b8043454903., storeName=kcf, fileCount=2, fileSize=260.8m, priority=5, time=10786950170944779; duration=25sec
- 2012-06-20 13:11:34,251 DEBUG org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: CompactSplitThread status: compaction_queue=(0:0), split_queue=0
- 2012-06-20 13:13:55,572 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.4 MB of total=3.4 GB
- 2012-06-20 13:13:55,583 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.26 MB, total=2.99 GB, single=1.96 GB, multi=1.4 GB, memory=0 KB
- 2012-06-20 13:14:20,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=3.38 GB, free=625.01 MB, max=3.99 GB, blocks=3472, accesses=116589935, hits=116455978, hitRatio=99.88%, cachingAccesses=116468003, cachingHits=116448738, cachingHitsRatio=99.98%, evictions=18, evicted=15794, evictedPerRun=877.4444580078125
- 2012-06-20 13:14:21,096 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.4 MB of total=3.4 GB
- 2012-06-20 13:14:21,743 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.27 MB, total=2.99 GB, single=1.96 GB, multi=1.41 GB, memory=0 KB
- 2012-06-20 13:14:49,191 INFO org.apache.hadoop.http.HttpServer: Process Thread Dump: jsp requested
- 71 active threads
- Thread 22659 (IPC Client (47) connection to hmaster101.mentacapital.local/10.10.10.156:8020 from hbase):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:680)
- org.apache.hadoop.ipc.Client$Connection.run(Client.java:723)
- Thread 22507 (ResponseProcessor for block blk_-8182319801061868969_56068):
- State: RUNNABLE
- Blocked count: 6
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
- org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
- java.io.DataInputStream.readFully(DataInputStream.java:178)
- java.io.DataInputStream.readLong(DataInputStream.java:399)
- org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
- org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2956)
- Thread 22506 (DataStreamer for file /hbase/.logs/hb102.mentacapital.local.,60020,1339642721519/hb102.mentacapital.local.%2C60020%2C1339642721519.1340222236089 block blk_-8182319801061868969_56068):
- State: TIMED_WAITING
- Blocked count: 113830
- Waited count: 114787
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2805)
- Thread 2245 (regionserver60020-splits-1339669238456):
- State: WAITING
- Blocked count: 213
- Waited count: 361
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3447f368
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 1269 (regionserver60020-largeCompactions-1339652721904):
- State: WAITING
- Blocked count: 365050
- Waited count: 2837
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@60db6407
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:220)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 775 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-2):
- State: WAITING
- Blocked count: 120
- Waited count: 217
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 771 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-1):
- State: WAITING
- Blocked count: 111
- Waited count: 193
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 769 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-0):
- State: WAITING
- Blocked count: 149
- Waited count: 238
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 515 (regionserver60020-smallCompactions-1339643016086):
- State: WAITING
- Blocked count: 198894
- Waited count: 7841
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@67b72ada
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:220)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 237 (sendParams-1):
- State: TIMED_WAITING
- Blocked count: 13
- Waited count: 35187
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
- java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
- java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 93 (PostOpenDeployTasks:e6fe2ea5a0893550c80f5cf0849f70f6-EventThread):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@62363bce
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:493)
- Thread 92 (PostOpenDeployTasks:e6fe2ea5a0893550c80f5cf0849f70f6-SendThread(hmaster101.mentacapital.local.:2181)):
- State: RUNNABLE
- Blocked count: 2
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:274)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
- Thread 83 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-2):
- State: WAITING
- Blocked count: 1092
- Waited count: 1625
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 82 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-1):
- State: WAITING
- Blocked count: 1035
- Waited count: 1526
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 81 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-0):
- State: WAITING
- Blocked count: 1219
- Waited count: 1804
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 69 (SplitLogWorker-hb102.mentacapital.local.,60020,1339642721519):
- State: WAITING
- Blocked count: 4
- Waited count: 3
- Waiting on java.lang.Object@41e80761
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:205)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
- java.lang.Thread.run(Thread.java:619)
- Thread 68 (PRI IPC Server handler 9 on 60020):
- State: WAITING
- Blocked count: 37
- Waited count: 133
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 67 (PRI IPC Server handler 8 on 60020):
- State: WAITING
- Blocked count: 43
- Waited count: 142
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 66 (PRI IPC Server handler 7 on 60020):
- State: WAITING
- Blocked count: 40
- Waited count: 140
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 65 (PRI IPC Server handler 6 on 60020):
- State: WAITING
- Blocked count: 45
- Waited count: 145
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 64 (PRI IPC Server handler 5 on 60020):
- State: WAITING
- Blocked count: 42
- Waited count: 138
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 63 (PRI IPC Server handler 4 on 60020):
- State: WAITING
- Blocked count: 49
- Waited count: 147
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 62 (PRI IPC Server handler 3 on 60020):
- State: WAITING
- Blocked count: 40
- Waited count: 137
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 61 (PRI IPC Server handler 2 on 60020):
- State: WAITING
- Blocked count: 54
- Waited count: 146
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 60 (PRI IPC Server handler 1 on 60020):
- State: WAITING
- Blocked count: 59
- Waited count: 161
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 59 (PRI IPC Server handler 0 on 60020):
- State: WAITING
- Blocked count: 55
- Waited count: 151
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 58 (IPC Server handler 9 on 60020):
- State: WAITING
- Blocked count: 373003
- Waited count: 12182121
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 57 (IPC Server handler 8 on 60020):
- State: WAITING
- Blocked count: 405973
- Waited count: 12211111
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 56 (IPC Server handler 7 on 60020):
- State: WAITING
- Blocked count: 391448
- Waited count: 12198823
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 55 (IPC Server handler 6 on 60020):
- State: WAITING
- Blocked count: 394391
- Waited count: 12203167
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 54 (IPC Server handler 5 on 60020):
- State: WAITING
- Blocked count: 415979
- Waited count: 12224279
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 53 (IPC Server handler 4 on 60020):
- State: WAITING
- Blocked count: 375708
- Waited count: 12182734
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 52 (IPC Server handler 3 on 60020):
- State: WAITING
- Blocked count: 379575
- Waited count: 12189521
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 51 (IPC Server handler 2 on 60020):
- State: WAITING
- Blocked count: 391904
- Waited count: 12200830
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 50 (IPC Server handler 1 on 60020):
- State: RUNNABLE
- Blocked count: 384389
- Waited count: 12192804
- Stack:
- org.apache.hadoop.hbase.regionserver.StoreFileScanner.realSeekDone(StoreFileScanner.java:340)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:331)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:291)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
- org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
- org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:127)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3354)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3310)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3327)
- org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2393)
- sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
- sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
- java.lang.reflect.Method.invoke(Method.java:597)
- org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
- Thread 49 (IPC Server handler 0 on 60020):
- State: WAITING
- Blocked count: 425811
- Waited count: 12235857
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 10 (IPC Server listener on 60020):
- State: RUNNABLE
- Blocked count: 16032
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener.run(HBaseServer.java:610)
- Thread 22 (IPC Server Responder):
- State: RUNNABLE
- Blocked count: 4085
- Waited count: 3867
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRunLoop(HBaseServer.java:799)
- org.apache.hadoop.hbase.ipc.HBaseServer$Responder.run(HBaseServer.java:782)
- Thread 48 (Timer-0):
- State: TIMED_WAITING
- Blocked count: 1
- Waited count: 19353
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 47 (819761253@qtp-96507428-1 - Acceptor0 SelectChannelConnector@0.0.0.0:60030):
- State: RUNNABLE
- Blocked count: 1
- Waited count: 1
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
- org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
- org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
- org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
- Thread 46 (1844438133@qtp-96507428-0):
- State: RUNNABLE
- Blocked count: 2
- Waited count: 9678
- Stack:
- sun.management.ThreadImpl.getThreadInfo0(Native Method)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:147)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:123)
- org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:149)
- org.apache.hadoop.util.ReflectionUtils.logThreadInfo(ReflectionUtils.java:203)
- org.apache.hadoop.http.HttpServer$StackServlet.doGet(HttpServer.java:699)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
- org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
- org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
- org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
- org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
- org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
- org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
- org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
- org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
- org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
- org.mortbay.jetty.Server.handle(Server.java:326)
- Thread 36 (regionserver60020.leaseChecker):
- State: TIMED_WAITING
- Blocked count: 5
- Waited count: 80399779
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:209)
- org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:83)
- java.lang.Thread.run(Thread.java:619)
- Thread 35 (regionserver60020.compactionChecker):
- State: TIMED_WAITING
- Blocked count: 98
- Waited count: 59
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
- org.apache.hadoop.hbase.Chore.run(Chore.java:75)
- java.lang.Thread.run(Thread.java:619)
- Thread 32 (regionserver60020.cacheFlusher):
- State: TIMED_WAITING
- Blocked count: 44269
- Waited count: 72571
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:201)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:39)
- org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:218)
- java.lang.Thread.run(Thread.java:619)
- Thread 38 (regionserver60020.logRoller):
- State: TIMED_WAITING
- Blocked count: 1434
- Waited count: 60152
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:76)
- java.lang.Thread.run(Thread.java:619)
- Thread 44 (Timer thread for monitoring jvm):
- State: TIMED_WAITING
- Blocked count: 4
- Waited count: 58057
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 43 (Timer thread for monitoring hbase):
- State: TIMED_WAITING
- Blocked count: 3
- Waited count: 58057
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 42 (regionserver60020.decayingSampleTick.1):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1161011
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 41 (regionserver60020.logSyncer):
- State: TIMED_WAITING
- Blocked count: 9422
- Waited count: 589010
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1193)
- java.lang.Thread.run(Thread.java:619)
- Thread 40 (LeaseChecker):
- State: TIMED_WAITING
- Blocked count: 19339
- Waited count: 618802
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1302)
- java.lang.Thread.run(Thread.java:619)
- Thread 37 (IPC Client (47) connection to hmaster101.mentacapital.local./10.10.10.156:60000 from hbase):
- State: TIMED_WAITING
- Blocked count: 192946
- Waited count: 192946
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.ipc.HBaseClient$Connection.waitForWork(HBaseClient.java:459)
- org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:504)
- Thread 34 (DestroyJavaVM):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- Thread 29 (regionserver60020-EventThread):
- State: WAITING
- Blocked count: 10
- Waited count: 16
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6a69ed4a
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:493)
- Thread 28 (regionserver60020-SendThread(hmaster101.mentacapital.local.:2181)):
- State: RUNNABLE
- Blocked count: 489
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:274)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
- Thread 26 (regionserver60020):
- State: TIMED_WAITING
- Blocked count: 193060
- Waited count: 385915
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:55)
- org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:714)
- java.lang.Thread.run(Thread.java:619)
- Thread 25 (LRU Statistics #0):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1943
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 23 (main.LruBlockCache.EvictionThread):
- State: WAITING
- Blocked count: 19
- Waited count: 20
- Waiting on org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread@f8600d6
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:594)
- java.lang.Thread.run(Thread.java:619)
- Thread 21 (Timer thread for monitoring rpc):
- State: TIMED_WAITING
- Blocked count: 1
- Waited count: 58058
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 20 (IPC Reader 9 on port 60020):
- State: RUNNABLE
- Blocked count: 2439
- Waited count: 1819
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 19 (IPC Reader 8 on port 60020):
- State: RUNNABLE
- Blocked count: 1690
- Waited count: 1734
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 18 (IPC Reader 7 on port 60020):
- State: RUNNABLE
- Blocked count: 1780
- Waited count: 1954
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 17 (IPC Reader 6 on port 60020):
- State: RUNNABLE
- Blocked count: 1844
- Waited count: 1746
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 16 (IPC Reader 5 on port 60020):
- State: RUNNABLE
- Blocked count: 2578
- Waited count: 1829
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 15 (IPC Reader 4 on port 60020):
- State: RUNNABLE
- Blocked count: 2424
- Waited count: 1788
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 14 (IPC Reader 3 on port 60020):
- State: RUNNABLE
- Blocked count: 2548
- Waited count: 1733
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 13 (IPC Reader 2 on port 60020):
- State: RUNNABLE
- Blocked count: 1851
- Waited count: 1657
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 12 (IPC Reader 1 on port 60020):
- State: RUNNABLE
- Blocked count: 1973
- Waited count: 1747
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 11 (IPC Reader 0 on port 60020):
- State: RUNNABLE
- Blocked count: 2848
- Waited count: 1765
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 5 (Signal Dispatcher):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- Thread 3 (Finalizer):
- State: WAITING
- Blocked count: 6993
- Waited count: 6962
- Waiting on java.lang.ref.ReferenceQueue$Lock@e4600c0
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
- java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
- Thread 2 (Reference Handler):
- State: WAITING
- Blocked count: 11057
- Waited count: 11030
- Waiting on java.lang.ref.Reference$Lock@15db4492
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
- 2012-06-20 13:15:25,000 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.6 MB of total=3.4 GB
- 2012-06-20 13:15:25,011 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.26 MB, total=2.99 GB, single=1.93 GB, multi=1.43 GB, memory=0 KB
- 2012-06-20 13:15:43,246 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.21 MB of total=3.39 GB
- 2012-06-20 13:15:43,257 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.26 MB, total=3 GB, single=1.9 GB, multi=1.46 GB, memory=0 KB
- 2012-06-20 13:16:01,658 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.2 MB of total=3.39 GB
- 2012-06-20 13:16:01,669 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.26 MB, total=3 GB, single=1.87 GB, multi=1.49 GB, memory=0 KB
- 2012-06-20 13:16:08,437 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA\x10\x00\x00\x03\xD5\x1F\xD8\x81\x80,1340220387952.10dfadbc61d2388e6aa34646de5cd947.
- 2012-06-20 13:16:08,438 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA\x10\x00\x00\x03\xD5\x1F\xD8\x81\x80,1340220387952.10dfadbc61d2388e6aa34646de5cd947., current region memstore size 128.0m
- 2012-06-20 13:16:08,439 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA\x10\x00\x00\x03\xD5\x1F\xD8\x81\x80,1340220387952.10dfadbc61d2388e6aa34646de5cd947., commencing wait for mvcc, flushsize=134218240
- 2012-06-20 13:16:08,439 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
- 2012-06-20 13:16:08,493 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41with permission:rwxrwxrwx
- 2012-06-20 13:16:08,505 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:16:08,505 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:16:08,505 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41: CompoundBloomFilterWriter
- 2012-06-20 13:16:09,095 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:16:09,096 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:16:09,105 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41)
- 2012-06-20 13:16:09,105 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=14909989, memsize=56.8m, into tmp file hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,111 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,111 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,111 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/.tmp/5d24b13f33274673be6e204377a54b41 to hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/kcf/5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,121 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,121 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5d24b13f33274673be6e204377a54b41
- 2012-06-20 13:16:09,121 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/10dfadbc61d2388e6aa34646de5cd947/kcf/5d24b13f33274673be6e204377a54b41, entries=257698, sequenceid=14909989, filesize=4.5m
- 2012-06-20 13:16:09,122 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134218240, currentsize=57.4k/58752 for region sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA\x10\x00\x00\x03\xD5\x1F\xD8\x81\x80,1340220387952.10dfadbc61d2388e6aa34646de5cd947. in 684ms, sequenceid=14909989, compaction requested=false
- 2012-06-20 13:16:18,510 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.2 MB of total=3.39 GB
- 2012-06-20 13:16:18,520 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.29 MB, total=3 GB, single=1.84 GB, multi=1.52 GB, memory=0 KB
- 2012-06-20 13:16:34,230 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.77 MB of total=3.4 GB
- 2012-06-20 13:16:34,240 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=410.6 MB, total=2.99 GB, single=1.84 GB, multi=1.53 GB, memory=0 KB
- 2012-06-20 13:16:36,966 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Flush requested on sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186.
- 2012-06-20 13:16:36,966 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush for sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186., current region memstore size 128.0m
- 2012-06-20 13:16:36,967 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186., commencing wait for mvcc, flushsize=134218376
- 2012-06-20 13:16:36,967 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting, commencing flushing stores
- 2012-06-20 13:16:37,020 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file:hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991awith permission:rwxrwxrwx
- 2012-06-20 13:16:37,028 DEBUG org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
- 2012-06-20 13:16:37,028 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991a: ROW, CompoundBloomFilterWriter
- 2012-06-20 13:16:37,028 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991a: CompoundBloomFilterWriter
- 2012-06-20 13:16:37,551 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:16:37,551 DEBUG org.apache.hadoop.hbase.util.CompoundBloomFilterWriter: Compacted Bloom chunk #0 from [109306 max keys, 131072 bytes] to [27326 max keys, 32768 bytes]
- 2012-06-20 13:16:37,567 INFO org.apache.hadoop.hbase.regionserver.StoreFile: General Bloom and DeleteFamily was added to HFile (hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991a)
- 2012-06-20 13:16:37,567 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=14928236, memsize=16.8m, into tmp file hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,573 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,573 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,573 DEBUG org.apache.hadoop.hbase.regionserver.Store: Renaming flushed file at hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/.tmp/019bf799b2904dc185bf15478c9a991a to hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/kcf/019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,584 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded ROW (CompoundBloomFilter) metadata for 019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,584 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 019bf799b2904dc185bf15478c9a991a
- 2012-06-20 13:16:37,584 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://hmaster101.mentacapital.local:8020/hbase/sdb1_iu/0923ea7d83b7fd4bc17e2b62fb2b0186/kcf/019bf799b2904dc185bf15478c9a991a, entries=77620, sequenceid=14928236, filesize=1.3m
- 2012-06-20 13:16:37,585 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.0m/134218376, currentsize=4.9k/4968 for region sdb1_iu,\x00\x00\x00\x00\x00\x00\xAA/\x00\x00\x03\xCF\x11\xBEU\x80,1340220387952.0923ea7d83b7fd4bc17e2b62fb2b0186. in 619ms, sequenceid=14928236, compaction requested=false
- 2012-06-20 13:18:43,465 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction started; Attempting to free 409.16 MB of total=3.39 GB
- 2012-06-20 13:18:43,475 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction completed; freed=409.53 MB, total=2.99 GB, single=1.83 GB, multi=1.53 GB, memory=0 KB
- 2012-06-20 13:19:15,113 INFO org.apache.hadoop.http.HttpServer: Process Thread Dump: jsp requested
- 70 active threads
- Thread 22507 (ResponseProcessor for block blk_-8182319801061868969_56068):
- State: RUNNABLE
- Blocked count: 21
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
- org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
- org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
- java.io.DataInputStream.readFully(DataInputStream.java:178)
- java.io.DataInputStream.readLong(DataInputStream.java:399)
- org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
- org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2956)
- Thread 22506 (DataStreamer for file /hbase/.logs/hb102.mentacapital.local.,60020,1339642721519/hb102.mentacapital.local.%2C60020%2C1339642721519.1340222236089 block blk_-8182319801061868969_56068):
- State: TIMED_WAITING
- Blocked count: 209329
- Waited count: 210486
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2805)
- Thread 2245 (regionserver60020-splits-1339669238456):
- State: WAITING
- Blocked count: 213
- Waited count: 361
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3447f368
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 1269 (regionserver60020-largeCompactions-1339652721904):
- State: WAITING
- Blocked count: 365050
- Waited count: 2837
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@60db6407
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:220)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 775 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-2):
- State: WAITING
- Blocked count: 120
- Waited count: 217
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 771 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-1):
- State: WAITING
- Blocked count: 111
- Waited count: 193
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 769 (RS_CLOSE_REGION-hb102.mentacapital.local.,60020,1339642721519-0):
- State: WAITING
- Blocked count: 149
- Waited count: 238
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3f3a8990
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 515 (regionserver60020-smallCompactions-1339643016086):
- State: WAITING
- Blocked count: 198894
- Waited count: 7841
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@67b72ada
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:220)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 237 (sendParams-1):
- State: TIMED_WAITING
- Blocked count: 13
- Waited count: 35231
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
- java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
- java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 93 (PostOpenDeployTasks:e6fe2ea5a0893550c80f5cf0849f70f6-EventThread):
- State: WAITING
- Blocked count: 2
- Waited count: 7
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@62363bce
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:493)
- Thread 92 (PostOpenDeployTasks:e6fe2ea5a0893550c80f5cf0849f70f6-SendThread(hmaster101.mentacapital.local.:2181)):
- State: RUNNABLE
- Blocked count: 2
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:274)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
- Thread 83 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-2):
- State: WAITING
- Blocked count: 1092
- Waited count: 1625
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 82 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-1):
- State: WAITING
- Blocked count: 1035
- Waited count: 1526
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 81 (RS_OPEN_REGION-hb102.mentacapital.local.,60020,1339642721519-0):
- State: WAITING
- Blocked count: 1219
- Waited count: 1804
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@569083c1
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 69 (SplitLogWorker-hb102.mentacapital.local.,60020,1339642721519):
- State: WAITING
- Blocked count: 4
- Waited count: 3
- Waiting on java.lang.Object@41e80761
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:205)
- org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
- java.lang.Thread.run(Thread.java:619)
- Thread 68 (PRI IPC Server handler 9 on 60020):
- State: WAITING
- Blocked count: 37
- Waited count: 133
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 67 (PRI IPC Server handler 8 on 60020):
- State: WAITING
- Blocked count: 43
- Waited count: 143
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 66 (PRI IPC Server handler 7 on 60020):
- State: WAITING
- Blocked count: 40
- Waited count: 141
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 65 (PRI IPC Server handler 6 on 60020):
- State: WAITING
- Blocked count: 45
- Waited count: 145
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 64 (PRI IPC Server handler 5 on 60020):
- State: WAITING
- Blocked count: 42
- Waited count: 138
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 63 (PRI IPC Server handler 4 on 60020):
- State: WAITING
- Blocked count: 49
- Waited count: 147
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 62 (PRI IPC Server handler 3 on 60020):
- State: WAITING
- Blocked count: 40
- Waited count: 137
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 61 (PRI IPC Server handler 2 on 60020):
- State: WAITING
- Blocked count: 54
- Waited count: 146
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 60 (PRI IPC Server handler 1 on 60020):
- State: WAITING
- Blocked count: 59
- Waited count: 161
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 59 (PRI IPC Server handler 0 on 60020):
- State: WAITING
- Blocked count: 55
- Waited count: 151
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@683c9314
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 58 (IPC Server handler 9 on 60020):
- State: WAITING
- Blocked count: 378151
- Waited count: 12203755
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 57 (IPC Server handler 8 on 60020):
- State: WAITING
- Blocked count: 411121
- Waited count: 12232401
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 56 (IPC Server handler 7 on 60020):
- State: WAITING
- Blocked count: 400027
- Waited count: 12223680
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 55 (IPC Server handler 6 on 60020):
- State: WAITING
- Blocked count: 408113
- Waited count: 12232551
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 54 (IPC Server handler 5 on 60020):
- State: WAITING
- Blocked count: 422848
- Waited count: 12247436
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 53 (IPC Server handler 4 on 60020):
- State: WAITING
- Blocked count: 385997
- Waited count: 12209213
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 52 (IPC Server handler 3 on 60020):
- State: WAITING
- Blocked count: 386436
- Waited count: 12212697
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 51 (IPC Server handler 2 on 60020):
- State: WAITING
- Blocked count: 407331
- Waited count: 12232428
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 50 (IPC Server handler 1 on 60020):
- State: RUNNABLE
- Blocked count: 398102
- Waited count: 12222670
- Stack:
- org.apache.hadoop.hbase.regionserver.StoreFileScanner.realSeekDone(StoreFileScanner.java:340)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:331)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:291)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
- org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
- org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
- org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:127)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3354)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3310)
- org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3327)
- org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2393)
- sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
- sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
- java.lang.reflect.Method.invoke(Method.java:597)
- org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
- Thread 49 (IPC Server handler 0 on 60020):
- State: WAITING
- Blocked count: 436100
- Waited count: 12262013
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac7057c
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1348)
- Thread 10 (IPC Server listener on 60020):
- State: RUNNABLE
- Blocked count: 16044
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener.run(HBaseServer.java:610)
- Thread 22 (IPC Server Responder):
- State: RUNNABLE
- Blocked count: 4086
- Waited count: 3868
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRunLoop(HBaseServer.java:799)
- org.apache.hadoop.hbase.ipc.HBaseServer$Responder.run(HBaseServer.java:782)
- Thread 48 (Timer-0):
- State: TIMED_WAITING
- Blocked count: 1
- Waited count: 19362
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 47 (819761253@qtp-96507428-1 - Acceptor0 SelectChannelConnector@0.0.0.0:60030):
- State: RUNNABLE
- Blocked count: 1
- Waited count: 1
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
- org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
- org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
- org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
- org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
- Thread 46 (1844438133@qtp-96507428-0):
- State: RUNNABLE
- Blocked count: 8
- Waited count: 9687
- Stack:
- sun.management.ThreadImpl.getThreadInfo0(Native Method)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:147)
- sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:123)
- org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:149)
- org.apache.hadoop.util.ReflectionUtils.logThreadInfo(ReflectionUtils.java:203)
- org.apache.hadoop.http.HttpServer$StackServlet.doGet(HttpServer.java:699)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
- javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
- org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
- org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
- org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
- org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
- org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
- org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
- org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
- org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
- org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
- org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
- org.mortbay.jetty.Server.handle(Server.java:326)
- Thread 36 (regionserver60020.leaseChecker):
- State: TIMED_WAITING
- Blocked count: 5
- Waited count: 80536739
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:209)
- org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:83)
- java.lang.Thread.run(Thread.java:619)
- Thread 35 (regionserver60020.compactionChecker):
- State: TIMED_WAITING
- Blocked count: 98
- Waited count: 59
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
- org.apache.hadoop.hbase.Chore.run(Chore.java:75)
- java.lang.Thread.run(Thread.java:619)
- Thread 32 (regionserver60020.cacheFlusher):
- State: TIMED_WAITING
- Blocked count: 44410
- Waited count: 72680
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:201)
- java.util.concurrent.DelayQueue.poll(DelayQueue.java:39)
- org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:218)
- java.lang.Thread.run(Thread.java:619)
- Thread 38 (regionserver60020.logRoller):
- State: TIMED_WAITING
- Blocked count: 1434
- Waited count: 60178
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:76)
- java.lang.Thread.run(Thread.java:619)
- Thread 44 (Timer thread for monitoring jvm):
- State: TIMED_WAITING
- Blocked count: 4
- Waited count: 58084
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 43 (Timer thread for monitoring hbase):
- State: TIMED_WAITING
- Blocked count: 3
- Waited count: 58084
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 42 (regionserver60020.decayingSampleTick.1):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1161543
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 41 (regionserver60020.logSyncer):
- State: TIMED_WAITING
- Blocked count: 9458
- Waited count: 589309
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1193)
- java.lang.Thread.run(Thread.java:619)
- Thread 40 (LeaseChecker):
- State: TIMED_WAITING
- Blocked count: 19347
- Waited count: 619084
- Stack:
- java.lang.Thread.sleep(Native Method)
- org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1302)
- java.lang.Thread.run(Thread.java:619)
- Thread 37 (IPC Client (47) connection to hmaster101.mentacapital.local./10.10.10.156:60000 from hbase):
- State: TIMED_WAITING
- Blocked count: 193034
- Waited count: 193034
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.ipc.HBaseClient$Connection.waitForWork(HBaseClient.java:459)
- org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:504)
- Thread 34 (DestroyJavaVM):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- Thread 29 (regionserver60020-EventThread):
- State: WAITING
- Blocked count: 10
- Waited count: 16
- Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6a69ed4a
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
- java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
- org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:493)
- Thread 28 (regionserver60020-SendThread(hmaster101.mentacapital.local.:2181)):
- State: RUNNABLE
- Blocked count: 489
- Waited count: 0
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:274)
- org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
- Thread 26 (regionserver60020):
- State: TIMED_WAITING
- Blocked count: 193148
- Waited count: 386091
- Stack:
- java.lang.Object.wait(Native Method)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
- org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:55)
- org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:714)
- java.lang.Thread.run(Thread.java:619)
- Thread 25 (LRU Statistics #0):
- State: TIMED_WAITING
- Blocked count: 0
- Waited count: 1943
- Stack:
- sun.misc.Unsafe.park(Native Method)
- java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
- java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
- java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
- java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
- java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
- java.lang.Thread.run(Thread.java:619)
- Thread 23 (main.LruBlockCache.EvictionThread):
- State: WAITING
- Blocked count: 25
- Waited count: 26
- Waiting on org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread@f8600d6
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:594)
- java.lang.Thread.run(Thread.java:619)
- Thread 21 (Timer thread for monitoring rpc):
- State: TIMED_WAITING
- Blocked count: 1
- Waited count: 58085
- Stack:
- java.lang.Object.wait(Native Method)
- java.util.TimerThread.mainLoop(Timer.java:509)
- java.util.TimerThread.run(Timer.java:462)
- Thread 20 (IPC Reader 9 on port 60020):
- State: RUNNABLE
- Blocked count: 2440
- Waited count: 1820
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 19 (IPC Reader 8 on port 60020):
- State: RUNNABLE
- Blocked count: 1691
- Waited count: 1735
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 18 (IPC Reader 7 on port 60020):
- State: RUNNABLE
- Blocked count: 1781
- Waited count: 1956
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 17 (IPC Reader 6 on port 60020):
- State: RUNNABLE
- Blocked count: 1845
- Waited count: 1747
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 16 (IPC Reader 5 on port 60020):
- State: RUNNABLE
- Blocked count: 2579
- Waited count: 1830
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 15 (IPC Reader 4 on port 60020):
- State: RUNNABLE
- Blocked count: 2425
- Waited count: 1789
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 14 (IPC Reader 3 on port 60020):
- State: RUNNABLE
- Blocked count: 2549
- Waited count: 1734
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 13 (IPC Reader 2 on port 60020):
- State: RUNNABLE
- Blocked count: 1853
- Waited count: 1659
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 12 (IPC Reader 1 on port 60020):
- State: RUNNABLE
- Blocked count: 1975
- Waited count: 1749
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 11 (IPC Reader 0 on port 60020):
- State: RUNNABLE
- Blocked count: 2849
- Waited count: 1766
- Stack:
- sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
- sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
- sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
- sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
- sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:502)
- org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
- java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
- java.lang.Thread.run(Thread.java:619)
- Thread 5 (Signal Dispatcher):
- State: RUNNABLE
- Blocked count: 0
- Waited count: 0
- Stack:
- Thread 3 (Finalizer):
- State: WAITING
- Blocked count: 7036
- Waited count: 7005
- Waiting on java.lang.ref.ReferenceQueue$Lock@e4600c0
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
- java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
- java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
- Thread 2 (Reference Handler):
- State: WAITING
- Blocked count: 11111
- Waited count: 11081
- Waiting on java.lang.ref.Reference$Lock@15db4492
- Stack:
- java.lang.Object.wait(Native Method)
- java.lang.Object.wait(Object.java:485)
- java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
- 2012-06-20 13:19:20,578 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=3.08 GB, free=939.72 MB, max=3.99 GB, blocks=3151, accesses=117048136, hits=116911623, hitRatio=99.88%, cachingAccesses=116926191, cachingHits=116904369, cachingHitsRatio=99.98%, evictions=25, evicted=18671, evictedPerRun=746.8400268554688