Prerequisites:

>> Kerberized cluster
>> Hive Interactive Server (HiveServer2 Interactive / LLAP) enabled in Hive
>> Get the following details from Hive for Spark:

spark.hadoop.hive.llap.daemon.service.hosts @llap0
spark.sql.hive.hiveserver2.jdbc.url jdbc:hive2://c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
spark.datasource.hive.warehouse.metastoreUri thrift://c420-node3.squadron-labs.com:9083
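
A quick sanity check is to confirm that the LLAP application (named after the @llap0 service hosts value above) is actually running in YARN; a minimal check, assuming the yarn CLI is on the path:

yarn application -list | grep -i llap
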
Basic testing:

1) Create a table employee in Hive and load some data, e.g.:

Create table
----------------

CREATE TABLE IF NOT EXISTS employee ( eid int, name String, salary String, destination String)
COMMENT 'Employee details'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
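
One way to run this DDL is through Beeline against HiveServer2 Interactive, using the JDBC URL from the prerequisites (create_employee.sql is just an example file name holding the statement above; adjust hosts and principal for your cluster):

beeline -u "jdbc:hive2://c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive;principal=hive/_HOST@HWX.COM" -f create_employee.sql
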
Load data: contents of data.txt, to be placed in HDFS
---------------
1201,Gopal,45000,Technical manager
1202,Manisha,45000,Proof reader
1203,Masthanvali,40000,Technical writer
1204,Kiran,40000,Hr Admin
1205,Kranthi,30000,Op Admin
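
Assuming data.txt is in the current local directory, copy it to the HDFS path that the LOAD statement below expects:

hdfs dfs -put data.txt /tmp/data.txt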

LOAD DATA INPATH '/tmp/data.txt' OVERWRITE INTO TABLE employee;

2) kinit as the spark user and run:

spark-shell --master yarn --conf "spark.security.credentials.hiveserver2.enabled=false" --conf "spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive;principal=hive/_HOST@HWX.COM" --conf "spark.datasource.hive.warehouse.metastoreUri=thrift://c420-node3.squadron-labs.com:9083" --conf "spark.datasource.hive.warehouse.load.staging.dir=/tmp/" --conf "spark.hadoop.hive.llap.daemon.service.hosts=@llap0" --conf "spark.hadoop.hive.zookeeper.quorum=c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181" --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.1.0-187.jar
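
The kinit above might look like the following; the keytab path and principal are assumptions based on common HDP defaults, so substitute your own:

kinit -kt /etc/security/keytabs/spark.headless.keytab spark@HWX.COM
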
Note: spark.security.credentials.hiveserver2.enabled should be set to false for YARN client deploy mode and true (the default) for YARN cluster deploy mode. This setting is required on a Kerberized cluster.

3) Run the following code in the Scala shell to view the table data:

import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.execute("show tables").show
hive.executeQuery("select * from employee").show
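
executeQuery returns an ordinary Spark DataFrame, so the result can be post-processed with the usual DataFrame API; a small sketch building on the session created above:

val df = hive.executeQuery("select * from employee")
println(df.count())                    // 5 for the sample data loaded above
df.select("name", "destination").show()
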
4) To apply the common properties by default, add the following settings to the Spark2 custom configuration in Ambari:

spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive;principal=hive/_HOST@HWX.COM
spark.datasource.hive.warehouse.metastoreUri=thrift://c420-node3.squadron-labs.com:9083
spark.datasource.hive.warehouse.load.staging.dir=/tmp/
spark.hadoop.hive.llap.daemon.service.hosts=@llap0
spark.hadoop.hive.zookeeper.quorum=c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181

5) spark-shell --master yarn --conf "spark.security.credentials.hiveserver2.enabled=false" --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.1.0-187.jar
Note: The common properties are now read from the Spark default properties, so only the per-job overrides need to be passed on the command line.

6) Run the following code in the Scala shell to view the Hive table data:

import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.execute("show tables").show
hive.executeQuery("select * from employee").show

7) To integrate HWC with Livy2:

a) Add the following property to Custom livy2-conf:
livy.file.local-dir-whitelist=/usr/hdp/current/hive_warehouse_connector/

b) Add hive-site.xml to /usr/hdp/current/spark2-client/conf on all cluster nodes.

c) Log in to Zeppelin and add the following to the livy2 interpreter settings (the credentials properties are set to true here, in line with the YARN cluster deploy mode note under step 2):

livy.spark.hadoop.hive.llap.daemon.service.hosts @llap0
livy.spark.security.credentials.hiveserver2.enabled true
livy.spark.sql.hive.hiveserver2.jdbc.url jdbc:hive2://c420-node2.squadron-labs.com:2181,c420-node3.squadron-labs.com:2181,c420-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
livy.spark.sql.hive.hiveserver2.jdbc.url.principal hive/_HOST@HWX.COM
livy.spark.yarn.security.credentials.hiveserver2.enabled true
livy.spark.jars file:///usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.1.0-187.jar

d) Restart the livy2 interpreter.

e) In the first paragraph, add:
%livy2
import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()

f) In the second paragraph, add:
%livy2
hive.executeQuery("select * from employee").show
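
Both paragraphs run in the same Livy session, so the hive value built in the first paragraph is available to the second; with the sample data loaded earlier, the second paragraph should print the five employee rows.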