
Summary of Common Hadoop Problems

1:
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
Answer:
The job needs to open many files while processing data. The system default limit is 1024 (visible with ulimit -a), which is enough for ordinary use but far too low for Hadoop jobs.
Fix: modify two files.
/etc/security/limits.conf
vi /etc/security/limits.conf
Add:
* soft nofile 102400
* hard nofile 409600
$ cd /etc/pam.d/
$ sudo vi login
Add: session required /lib/security/pam_limits.so
A correction to this first answer:
The error is raised during the shuffle, when the reduce side fails to fetch the output of completed map tasks more times than the allowed limit (5 by default). Many things can trigger it, such as broken network connections, connection timeouts, poor bandwidth, or blocked ports. With a healthy network inside the cluster this error normally does not occur.
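A quick check sketch, assuming the Hadoop daemons run as a dedicated user: log out, log back in as that user, and confirm the new limit is in effect before restarting the cluster.
$ ulimit -n
102400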
2:
Too many fetch-failures
Answer:
This usually means connectivity between the nodes is incomplete.
1. Check /etc/hosts
   Make sure the local IP is mapped to the server's hostname.
   Make sure it contains the IP and hostname of every server in the cluster.
2. Check .ssh/authorized_keys
   Make sure it contains the public key of every server, including the node itself.
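A minimal /etc/hosts sketch; the addresses and hostnames below are made up and must be replaced with your own, kept identical on every node:
192.168.1.10   master
192.168.1.11   slave1
192.168.1.12   slave2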
3:
Processing is extremely slow: map finishes quickly, but reduce is very slow and reduce=0% keeps reappearing.
Answer:
Apply the checks from problem 2, then
modify conf/hadoop-env.sh: export HADOOP_HEAPSIZE=4000
4:
The datanode can be started, but it cannot be accessed and cannot be shut down cleanly.
When re-formatting a new distributed file system, you need to delete the local path configured as dfs.name.dir on the NameNode (where the NameNode persistently stores the namespace and the transaction log), and also delete the dfs.data.dir directories on every DataNode (where the DataNodes store block data). With the configuration used here, that means deleting /home/hadoop/NameData on the NameNode and /home/hadoop/DataNode1 and /home/hadoop/DataNode2 on the DataNodes. The reason is that when Hadoop formats a new distributed file system, the stored namespace is stamped with the version of its creation time (see the VERSION file under /home/hadoop/NameData/current, which records this version information). So before re-formatting, it is best to delete the NameData directory first, and you must delete each DataNode's dfs.data.dir, so that the version information recorded by the namenode and the datanodes matches.
Note: deleting files is dangerous. Never delete anything you are not sure about, and back everything up before deleting!
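A rough sketch of the whole sequence, using the example paths from above (substitute your own dfs.name.dir / dfs.data.dir values and take backups first):
bin/stop-all.sh
rm -rf /home/hadoop/NameData                            # on the NameNode
rm -rf /home/hadoop/DataNode1 /home/hadoop/DataNode2    # on each DataNode
bin/hadoop namenode -format
bin/start-all.sh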
5:
java.io.IOException: Could not obtain block: blk_194219614024901469_1100 file=/user/hive/warehouse/src_20090724_log/src_20090724_log
This is usually because a node has dropped out and is no longer connected to the cluster.
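To see which blocks and replicas are affected, the standard HDFS file system checker can be run against the path in the error (the flags shown are the usual fsck options; adjust the path to your case):
bin/hadoop fsck /user/hive/warehouse/src_20090724_log -files -blocks -locations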
6:
java.lang.OutOfMemoryError: Java heap space
This error clearly means the JVM is running out of memory. Increase the JVM heap size on all of the datanodes, e.g.:
java -Xms1024m -Xmx4096m
As a rule of thumb, the maximum JVM heap should be about half of the total physical memory. Our machines have 8 GB of RAM, so we set 4096m; even so, this may not be the optimal value.
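Two common places to apply such a setting, as a sketch (the values are only illustrative and should be sized to your own machines):
# conf/hadoop-env.sh - heap size (in MB) for the Hadoop daemons themselves
export HADOOP_HEAPSIZE=4000
# conf/hadoop-site.xml (or mapred-site.xml) - heap for each map/reduce task JVM
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>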

7:
Namenode in safe mode
Solution:
bin/hadoop dfsadmin -safemode leave
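You can also check the current state first; get is a standard -safemode sub-option alongside enter, leave, and wait:
bin/hadoop dfsadmin -safemode get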
8:
java.net.NoRouteToHostException: No route to host
Solution:
sudo /etc/init.d/iptables stop
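On RHEL/CentOS-style systems you can also keep the firewall from coming back after a reboot (or, better, open only the ports Hadoop needs instead of disabling it entirely):
sudo chkconfig iptables off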
9:
After changing the namenode, SELECT statements run in hive still point to the old namenode address.
The reason: when you create a table, hive actually stores the location of the table (e.g.
hdfs://ip:port/user/root/...) in the SDS and DBS tables in the metastore. So when you bring up a new cluster, the master has a new IP, but hive's metastore is still pointing to the locations within the old
cluster. You could modify the metastore to update it with the new IP every time you bring up a cluster, but the easier and simpler solution is to just use an elastic IP for the master.
So every old namenode address that appears in the metastore has to be replaced with the current namenode address.
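If you do rewrite the metastore directly, a sketch of the idea for a MySQL-backed metastore (the table and column names come from the standard metastore schema and the hdfs:// URIs are placeholders; verify both against your installation and back up the metastore database first):
UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://old-namenode:9000', 'hdfs://new-namenode:9000');
UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://old-namenode:9000', 'hdfs://new-namenode:9000');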

10:
Your DataNode is started and you can create directories with bin/hadoop dfs -mkdir, but you get an error message when you try to put files into the HDFS (e.g., when you run a command like bin/hadoop dfs -put).
Solution:
Go to the HDFS info web page (open your web browser and go to http://namenode:dfs_info_port where namenode is the hostname of your NameNode and dfs_info_port is the port you chose for dfs.info.port; if you followed the QuickStart on your personal computer then this URL will be http://localhost:50070). Once at that page, click on the number where it tells you how many DataNodes you have to look at a list of the DataNodes in your cluster.
If it says you have used 100% of your space, then you need to free up room on local disk(s) of the DataNode(s).
If you are on Windows then this number will not be accurate (there is some kind of bug either in Cygwin’s df.exe or in Windows). Just free up some more space and you should be okay. On one Windows machine we tried the disk had 1GB free but Hadoop reported that it was 100% full. Then we freed up another 1GB and then it said that the disk was 99.15% full and started writing data into the HDFS again. We encountered this bug on Windows XP SP2.
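The same capacity information can also be pulled from the command line; dfsadmin -report is a standard command and prints configured, used, and remaining capacity per DataNode:
bin/hadoop dfsadmin -report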
11:
Your DataNodes won't start, and you see something like this in logs/datanode:
Incompatible namespaceIDs in /tmp/hadoop-ross/dfs/data
Cause:
Your Hadoop namespaceID became corrupted. Unfortunately, the easiest thing to do is to reformat the HDFS.
Solution:
You need to do something like this:
bin/stop-all.sh
rm -Rf /tmp/hadoop-your-username/*
bin/hadoop namenode -format
12:
You can run Hadoop jobs written in Java (like the grep example), but your HadoopStreaming jobs (such as the Python example that fetches web page titles) won't work.
Cause:
You might have given only a relative path to the mapper and reducer programs. The tutorial originally just specified relative paths, but absolute paths are required if you are running in a real cluster.
Solution:
Use absolute paths like this from the tutorial:
bin/hadoop jar contrib/hadoop-0.15.2-streaming.jar \
  -mapper  $HOME/proj/hadoop/multifetch.py \
  -reducer $HOME/proj/hadoop/reducer.py \
  -input   urls/* \
  -output  titles

13:
ERROR metadata.Hive (Hive.java:getPartitions(499)) - javax.jdo.JDODataStoreException: Required table missing : "PARTITIONS" in Catalog "" Schema "". JPOX requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "org.jpox.autoCreateTables"
Cause:
org.jpox.fixedDatastore was set to true in hive-default.xml.
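A sketch of the relevant properties in hive-default.xml / hive-site.xml; the property names come straight from the error message and the cause above, but check them against your Hive version:
<property>
  <name>org.jpox.fixedDatastore</name>
  <value>false</value>
</property>
<property>
  <name>org.jpox.autoCreateTables</name>
  <value>true</value>
</property>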
14:
After switching from IP addresses to hostnames, the datanodes fail to come up.
Solution:
Delete the temp files and restart the hadoop cluster.
This happens because, after repeated deployments, the temp files are no longer consistent with the namenode.

15:
Chinese text is parsed correctly from the URL, but what hadoop prints is still garbled. We used to think hadoop simply did not support Chinese; after reading the source code we found that hadoop merely does not support writing Chinese out in GBK.
The code below is from TextOutputFormat.class. Hadoop's default outputs all derive from FileOutputFormat, which has two subclasses: one for binary stream output, and one for text output, TextOutputFormat.
public class TextOutputFormat<K, V> extends FileOutputFormat<K, V> {
  protected static class LineRecordWriter<K, V>
      implements RecordWriter<K, V> {
    private static final String utf8 = "UTF-8"; // hard-coded to utf-8 here
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }

    protected DataOutputStream out;
    private final byte[] keyValueSeparator;

    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }

    private void writeObject(Object o) throws IOException {
      if (o instanceof Text) {
        Text to = (Text) o;
        out.write(to.getBytes(), 0, to.getLength()); // this also needs to change
      } else {
        out.write(o.toString().getBytes(utf8));
      }
    }

    // ... rest of the class unchanged ...
  }
}
As you can see, hadoop's default output is hard-coded to utf-8, so as long as the Chinese text was decoded correctly, setting the Linux client's character encoding to utf-8 is enough to see the Chinese, because hadoop wrote it out as utf-8.
However, most databases define their fields in GBK. What if you want hadoop to write its Chinese output in GBK so that it is compatible with the database?
We can define a new class:
public class GbkOutputFormat<K, V> extends FileOutputFormat<K, V> {
  protected static class LineRecordWriter<K, V>
      implements RecordWriter<K, V> {
    // just switch the encoding to gbk
    private static final String gbk = "gbk";
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }

    protected DataOutputStream out;
    private final byte[] keyValueSeparator;

    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }

    private void writeObject(Object o) throws IOException {
      // Always go through toString() so the bytes are encoded as gbk,
      // instead of writing Text's internal utf-8 bytes directly.
      out.write(o.toString().getBytes(gbk));
    }

    // ... rest of the class unchanged ...
  }
}
Then add conf1.setOutputFormat(GbkOutputFormat.class) to your mapreduce job configuration,
and the Chinese output will be written in GBK.
16:
While running a mapreduce example that normally works fine, the following error was thrown:
java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
Investigation showed the cause was too many open files on the linux machines. ulimit -n shows that the default open-file limit is 1024; edit /etc/security/limits.conf and raise the nofile limit for the hadoop user (e.g. add a line like "hadoop soft nofile 65535"),
then rerun the program (ideally after changing every datanode), which resolves the problem.
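The lines to add, as a sketch (this assumes the daemons run as a user named hadoop; use the actual user name on your nodes):
hadoop soft nofile 65535
hadoop hard nofile 65535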
17:
After running for a while, hadoop can no longer be stopped with stop-all.sh; it reports
no tasktracker to stop, no datanode to stop
The reason is that when stopping, hadoop relies on the pid files of the mapred and dfs processes recorded on each node. By default these pid files are kept under /tmp, and linux periodically (typically every month, or roughly every 7 days) cleans old files out of that directory. Once files such as hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid have been deleted, the scripts can no longer find the corresponding processes on the datanodes.
Setting export HADOOP_PID_DIR in the configuration file solves this.
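For example, in conf/hadoop-env.sh (the directory below is only an illustration; use any path that is writable by the hadoop user and is not cleaned automatically):
export HADOOP_PID_DIR=/var/hadoop/pids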

18:
Incompatible namespaceIDs in /usr/local/hadoop/dfs/data: namenode namespaceID = 405233244966; datanode namespaceID = 33333244
Cause:
Every time hadoop namenode -format is run, a new namespaceID is generated for the NameNode, but the DataNode data under hadoop.tmp.dir still keeps the previous namespaceID. Because the namespaceIDs no longer match, the DataNode cannot start. So before each hadoop namenode -format, first delete the hadoop.tmp.dir directory and the DataNode will then start successfully. Note that this means deleting the local directory that hadoop.tmp.dir points to, not an HDFS directory.
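If you would rather not wipe the data, a commonly used alternative is to make the datanode's ID match the namenode's by hand (the <...> paths below stand for your own dfs.name.dir / dfs.data.dir settings; stop the cluster first):
cat <dfs.name.dir>/current/VERSION      # on the NameNode: note the namespaceID value
vi <dfs.data.dir>/current/VERSION       # on each DataNode: change namespaceID to that value, then restart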
19:
After bin/hadoop is started, jps reports the following error:
Exception in thread "main" java.lang.NullPointerException
at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)
at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)
at sun.tools.jps.Jps.main(Jps.java:45)
Cause:
The /tmp directory under the system root was deleted. Recreating /tmp fixes it.
An "unable to create log directory /tmp/..." error from
bin/hive can also have this cause.
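A minimal fix sketch (1777 is the conventional sticky-bit permission for /tmp):
sudo mkdir /tmp
sudo chmod 1777 /tmp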

// The above is compiled from the experience shared by many predecessors online.