
Sqoop Data Migration


Installing Sqoop assumes that Java and Hadoop are already installed and working.
1. Download and extract
Download the latest release from http://ftp.wayne.edu/apache/sqoop/1.4.6/
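
A minimal sketch of this step (the exact tarball name and the install path are assumptions, chosen to match the mirror's 1.4.6 listing and the /usr/local/src/sqoop-1.4.6 path used later in this post):

$ wget http://ftp.wayne.edu/apache/sqoop/1.4.6/sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
$ tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /usr/local/src/
$ mv /usr/local/src/sqoop-1.4.6.bin__hadoop-2.0.4-alpha /usr/local/src/sqoop-1.4.6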


2. Modify the configuration files
$ cd $SQOOP_HOME/conf
$ mv sqoop-env-template.sh sqoop-env.sh
Open sqoop-env.sh and edit the following lines:
export HADOOP_COMMON_HOME=/home/hadoop/apps/hadoop-2.6.1/ 
export HADOOP_MAPRED_HOME=/home/hadoop/apps/hadoop-2.6.1/
export HIVE_HOME=/home/hadoop/apps/hive-1.2.1

3. Add the MySQL JDBC driver jar
cp ~/app/hive/lib/mysql-connector-java-5.1.28.jar $SQOOP_HOME/lib/

4. Verify the installation

$ cd $SQOOP_HOME/bin

$ sqoop-version

Expected output:

15/12/17 14:52:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6

Sqoop 1.4.6 git commit id 5b34accaca7de251fc91161733f906af2eddbe83

Compiled by abe on Fri Aug 1 11:19:26 PDT 2015

At this point, the Sqoop installation is complete.

Common commands:

Available commands:
codegen    Generate code to interact with database records
create-hive-table    Import a table definition into Hive
eval    Evaluate a SQL statement and display the results
export    Export an HDFS directory to a database table
help    List available commands
import    Import a table from a database to HDFS
import-all-tables    Import tables from a database to HDFS
import-mainframe    Import datasets from a mainframe server to HDFS
job    Work with saved jobs
list-databases    List available databases on a server
list-tables    List available tables in a database
merge    Merge results of incremental imports
metastore    Run a standalone Sqoop metastore
version    Display version information
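
Each tool has its own options; running sqoop help <tool-name> prints the tool-specific usage. For example:

$ sqoop help import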

Test example:

Import table data from a MySQL database into HDFS:

mysql> select * from tmp;               
+------+------+
| id   | name |
+------+------+
|    1 | f    |
|    2 | a    |
|    3 | b    |
|    4 | c    |
+------+------+
4 rows in set (0.00 sec)
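
To reproduce this test table, something like the following works (the column types are assumptions inferred from the output above):

mysql> create table tmp (id int, name varchar(10));
mysql> insert into tmp values (1,'f'),(2,'a'),(3,'b'),(4,'c');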

On the Sqoop machine, test whether MySQL is reachable:
[root@master sqoop-1.4.6]# mysql -h slave1 -uroot -p123456
ERROR 1045 (28000): Access denied for user 'root'@'master' (using password: YES)
Remote access needs to be granted:
mysql> show grants;
+----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for root@localhost                                                                                                              |
+----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> grant all privileges on *.* to 'root'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
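
After the grant, the remote login from the Sqoop machine should succeed. A quick check (assuming the tmp table created above lives in the test database):

[root@master sqoop-1.4.6]# mysql -h slave1 -uroot -p123456 -e "select * from test.tmp"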

#####################Example##############

[root@master sqoop-1.4.6]# pwd
/usr/local/src/sqoop-1.4.6

[root@master sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://slave1:3306/test --username root --password 123456 --table tmp --m 1

...

...

18/01/18 02:20:47 INFO mapreduce.ImportJobBase: Transferred 16 bytes in 146.6658 seconds (0.1091 bytes/sec)
18/01/18 02:20:47 INFO mapreduce.ImportJobBase: Retrieved 4 records.

Success!

Note: Sqoop runs MapReduce jobs, so every MapReduce node in the cluster must be able to resolve the MySQL host (i.e., every node can ping the MySQL machine).
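
If name resolution fails, the usual fix is to add the MySQL host to /etc/hosts on every node (the IP address below is a placeholder; substitute your own):

# run on each MapReduce node; 192.168.10.11 is a placeholder IP
echo "192.168.10.11 slave1" >> /etc/hosts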

When no target path is specified, the data lands under the default HDFS location /user/<username>/<table>:

[root@master sqoop-1.4.6]# hadoop fs -ls /user/root/tmp/
Found 2 items
-rw-r--r-- 3 root supergroup 0 2018-01-18 02:20 /user/root/tmp/_SUCCESS
-rw-r--r-- 3 root supergroup 16 2018-01-18 02:20 /user/root/tmp/part-m-00000
[root@master sqoop-1.4.6]# hadoop fs -cat /user/root/tmp/part-m-00000
1,f
2,a
3,b
4,c
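
To control where the data lands instead of using the default, pass --target-dir to set the HDFS output directory, optionally with --delete-target-dir to remove it first if it already exists (the output path below is an arbitrary example):

[root@master sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://slave1:3306/test --username root --password 123456 --table tmp --target-dir /user/root/tmp_out --delete-target-dir -m 1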
