- # Sqoop
- Sqoop integrates SQL databases and Hadoop. It handles bulk data transfer, hiding the underlying MapReduce jobs from the user, and manages the importing and exporting of the data.
- ### Import data from (My)SQL to HDFS
- ```
- sqoop import --connect jdbc:mysql://localhost/movielens --driver com.mysql.jdbc.Driver --table table_name
- ```
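- To check that the import landed in HDFS, you can list the target directory. A minimal sketch; the path /user/maria_dev/table_name is an assumption based on Sqoop's default of writing under the connecting user's HDFS home directory.
- ```
- # assumed path: Sqoop writes under the current user's HDFS home by default
- hadoop fs -ls /user/maria_dev/table_name
- hadoop fs -cat /user/maria_dev/table_name/part-m-00000 | head
- ```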
- ### Import data from (My)SQL to Hive
- ```
- sqoop import --connect jdbc:mysql://localhost/movielens --driver com.mysql.jdbc.Driver --table table_name --hive-import
- ```
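- After a `--hive-import` you can confirm the table exists and spot-check a few rows from the command line. A minimal sketch; table_name is whatever table you imported.
- ```
- hive -e "SHOW TABLES; SELECT * FROM table_name LIMIT 10;"
- ```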
- ### Export data from Hive to SQL
- * First you need to create a table in MySQL to receive the data.
- * Connect to MySQL on the maria_dev server using
- ```
- mysql -u root -p
- ```
- ```
- CREATE TABLE table_name (column_name type_name);
- ```
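- As a concrete sketch, assuming we are exporting movie data with an ID and a title, the receiving table could look like the following. The database name `filename` matches the placeholder used in the export command below; the column names and types are illustrative, not from the original notes.
- ```
- CREATE DATABASE IF NOT EXISTS filename;
- USE filename;
- -- assumed columns; match them to the fields in your Hive data
- CREATE TABLE exported_movies (
-     id INT,
-     title VARCHAR(255)
- );
- ```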
- * Exit MySQL and, back in the maria_dev shell, export the data from Hive to SQL
- ```
- sqoop export --connect jdbc:mysql://localhost/filename -m 1 --driver com.mysql.jdbc.Driver --table exported_movies --export-dir /apps/hive/warehouse/data_file --input-fields-terminated-by '\0001'
- ```
- * --connect points at the 'filename' database
- * -m 1 means one mapper, since we are running locally
- * --driver specifies the JDBC driver class
- * --table puts the data into the exported_movies table
- * --export-dir takes the data from the data_file directory in the Hive warehouse
- * --input-fields-terminated-by '\0001' tells Sqoop how that data is delimited (Hive's default field separator)
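- Once the export finishes, you can verify that the rows arrived by querying MySQL again. A minimal sketch, assuming the exported_movies table created above:
- ```
- mysql -u root -p -e "SELECT COUNT(*) FROM filename.exported_movies;"
- ```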