<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- HBase site configuration, pasted Oct 30th, 2014 -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>...</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/disk0/zk/data</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master52.hadoop.prod.kontagent.com:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>10737418240</value> <!-- 10 GB -->
  </property>
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>900000</value> <!-- 15 minutes -->
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>900000</value> <!-- 15 minutes -->
  </property>

  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value> <!-- 2 minutes -->
  </property>
  <property>
    <name>hbase.zookeeper.property.tickTime</name>
    <value>2000</value> <!-- matches zoo.cfg -->
  </property>

  <property>
    <name>hbase.snapshot.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.hregion.majorcompaction</name>
    <value>0</value> <!-- disable time-based major compactions -->
  </property>
  <!--
    Added in response to "RegionServerTooBusy" exceptions we were seeing on
    tesseract. The regionservers could not flush because regions had too many
    storefiles (this setting), so the memstores filled up and the regions
    became unwritable. Raising this limit relaxes that requirement slightly,
    allowing us to keep writing to the affected regions. The longer-term fix
    is to do a better job of preventing region hotspotting, so that no single
    region takes too high a write load.
  -->
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>20</value>
  </property>
  <!--
    Increased so that we create larger HFiles and flush less often, improving
    write performance.

    The memstore is flushed to disk when its size exceeds this number of
    bytes. The value is checked by a thread that runs every
    hbase.server.thread.wakefrequency.
  -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>536870912</value> <!-- 512 MB -->
  </property>
</configuration>