Pitfalls hit in a Sqoop export case
Posted by 阿新 · 2019-03-27
Case background:
The goal is to export data from HDFS into a table in MySQL.
The virtual-machine cluster consists of five nodes, centos1 through centos5.
Problem 1:
On centos1, export the HDFS data into MySQL running on centos1:
sqoop export \
--connect jdbc:mysql://centos1:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid/ \
--fields-terminated-by ','
This failed with:
Error executing statement: java.sql.SQLException: Access denied for user 'root'@'centos1' (using password: YES)
So I changed the URL to localhost:
sqoop export \
--connect jdbc:mysql://localhost:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid/ \
--fields-terminated-by ','
This time the error was:
Error: java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'test.order_uid' doesn't exist
    at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.close(AsyncSqlRecordWriter.java:205)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:670)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at ...
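In hindsight, the likely cause is that localhost in the JDBC URL is resolved by each map task on whichever node it happens to be scheduled on, so tasks running off centos1 looked for test.order_uid in a MySQL that does not have it, or is not running at all. A quick sanity check from another node (a sketch, assuming the mysql client is installed there):

# run on e.g. centos3: this is the MySQL a map task on that node would actually hit
mysql -h localhost -u root -proot -e "SHOW TABLES IN test;"
# likely outcome: connection refused, or an unknown database/table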
Problem 2:
On centos3, export the HDFS data into MySQL on centos1:
sqoop export \
--connect jdbc:mysql://centos1:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid/ \
--fields-terminated-by ','
This failed with:
19/03/27 17:47:41 ERROR mapreduce.ExportJobBase: Export job failed!
19/03/27 17:47:41 ERROR tool.ExportTool: Error during export: Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
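Note that "Export job failed!" on the console is generic; the real exception is in the logs of the failed map task. One way to pull them, assuming YARN log aggregation is enabled (the application id appears in the job's console output or the ResourceManager UI):

yarn logs -applicationId <application_id>

The JobHistory web UI is another route to the same per-task logs.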
I found two suggested fixes online:
1. Point --export-dir at a concrete file on HDFS instead of the directory:
sqoop export \
--connect jdbc:mysql://centos1:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid/t1.dat \
--fields-terminated-by ','
Still the same error!
2. Change the character set of the table in MySQL.
I changed the character set of the order_uid table in centos1's MySQL to utf-8 (the statement is sketched after the log below) and re-ran. This time some of the data did land in the table, but the job still failed:
Job failed as tasks failed. failedMaps:1 failedReduces:0
19/03/27 17:54:08 INFO mapreduce.Job: Counters: 33
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=290362
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1130
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=8
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters
        Failed map tasks=1
        Killed map tasks=1
        Launched map tasks=4
        Data-local map tasks=1
        Rack-local map tasks=3
        Total time spent by all maps in occupied slots (ms)=300866
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=300866
        Total vcore-milliseconds taken by all map tasks=300866
        Total megabyte-milliseconds taken by all map tasks=308086784
    Map-Reduce Framework
        Map input records=5
        Map output records=5
        Input split bytes=282
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=664
        CPU time spent (ms)=2460
        Physical memory (bytes) snapshot=315748352
        Virtual memory (bytes) snapshot=4170031104
        Total committed heap usage (bytes)=146800640
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
19/03/27 17:54:08 INFO mapreduce.ExportJobBase: Transferred 1.1035 KB in 223.4219 seconds (5.0577 bytes/sec)
19/03/27 17:54:08 INFO mapreduce.ExportJobBase: Exported 5 records.
19/03/27 17:54:08 ERROR mapreduce.ExportJobBase: Export job failed!
19/03/27 17:54:08 ERROR tool.ExportTool: Error during export: Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
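For reference, the character-set change was done with something along these lines (a sketch; I did not note down the exact statement, and the collation is left at the server default):

ALTER TABLE order_uid CONVERT TO CHARACTER SET utf8;

The encoding can also be pinned on the JDBC side by appending ?useUnicode=true&characterEncoding=UTF-8 to the --connect URL.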
With the number of map tasks forced to 1:
sqoop export \
--connect jdbc:mysql://centos1:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid \
--fields-terminated-by ',' \
--m 1
This run succeeded!
Sqoop defaults to 4 map tasks, so apparently a single map can succeed where several maps fail. To narrow things down I tried again with 2 maps:
sqoop export \
--connect jdbc:mysql://centos1:3306/test \
--username root \
--password root \
--table order_uid \
--export-dir /user/hive/warehouse/test.db/order_uid \
--fields-terminated-by ',' \
--m 2
The result:
19/03/27 19:17:22 INFO mapreduce.Job: Counters: 32
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=145181
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=705
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters
        Failed map tasks=1
        Launched map tasks=2
        Data-local map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=88960
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=88960
        Total vcore-milliseconds taken by all map tasks=88960
        Total megabyte-milliseconds taken by all map tasks=91095040
    Map-Reduce Framework
        Map input records=5
        Map output records=5
        Input split bytes=141
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=278
        CPU time spent (ms)=1200
        Physical memory (bytes) snapshot=159162368
        Virtual memory (bytes) snapshot=2087399424
        Total committed heap usage (bytes)=77070336
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
19/03/27 19:17:22 INFO mapreduce.ExportJobBase: Transferred 705 bytes in 99.4048 seconds (7.0922 bytes/sec)
19/03/27 19:17:22 INFO mapreduce.ExportJobBase: Exported 5 records.
19/03/27 19:17:22 ERROR mapreduce.ExportJobBase: Export job failed!
19/03/27 19:17:22 ERROR tool.ExportTool: Error during export: Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Note that it did export all 5 records, even though the job is still marked failed!
The JobHistory web UI (by default on port 19888 of the history server) shows that of the two map tasks, one succeeded and one failed.
Clicking the task name reveals where each one ran:
The successful task ran on centos4; the failed one ran on centos1. That is Problem 1 all over again: centos1 cannot access the MySQL running on centos1 itself!
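This is easy to reproduce outside Sqoop with the plain mysql client (assuming it is installed on both nodes):

# on centos3: succeeds, because the connection matches 'root'@'%'
mysql -h centos1 -u root -proot -e "SELECT 1;"

# on centos1: fails with the same "Access denied for user 'root'@'centos1'"
mysql -h centos1 -u root -proot -e "SELECT 1;"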
In the end a friend told me to add, on centos1, an explicit grant for connections coming from the host centos1:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'centos1' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;
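A quick way to confirm the new entry took effect (standard statement):

SHOW GRANTS FOR 'root'@'centos1';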
After re-running, both Problem 1 and Problem 2 were happily solved!
Earlier, the only grant I had run on centos1's MySQL was:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;
This opened remote access from the other nodes, but evidently not from centos1 itself.
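A plausible explanation (my assumption, not verified at the time) is MySQL's host-matching order: rows in mysql.user with a literal hostname sort ahead of '%', so a more specific leftover account for centos1, such as one of the default anonymous users, can shadow 'root'@'%' for connections originating from that host. A quick way to check, with a hypothetical cleanup if that turns out to be the cause:

-- list the account rows; look for empty user names with host 'centos1' or 'localhost'
SELECT user, host FROM mysql.user ORDER BY host, user;

-- hypothetical cleanup: only if an anonymous ''@'centos1' row actually exists
DROP USER ''@'centos1';
FLUSH PRIVILEGES;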