
Hive (1.2.2) runtime errors (updated occasionally)

(1) Prerequisites for starting Hive:

the Java environment is set up

Hadoop is running

MySQL is running

(2) Missing Hive Execution Jar: /usr/usr/hive-1.2.2/lib/hive-exec-*.jar

Searching Baidu or Google directly for "Missing Hive Execution Jar ..." turns up nothing useful. After a long hunt, this thread gave me the hint: https://stackoverflow.com/questions/25033471/hivecommand-not-found-in-ubuntu

Among Hive's configuration files there is also a ./bin/hive-config.sh file, and the environment variables should be configured there (I had in fact misconfigured this file earlier and then forgot about it):

JAVA_HOME and PATH

HADOOP_HOME and PATH

HIVE_HOME and PATH

Set each of the above according to where the software is actually installed.
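The additions to ./bin/hive-config.sh look roughly like this (a sketch: the JDK path is a placeholder, and the Hadoop/Hive paths are taken from elsewhere in this post, so adjust all three to your own layout):

```shell
# Appended to $HIVE_HOME/bin/hive-config.sh -- all paths are examples, adjust to your install.
export JAVA_HOME=/usr/java/jdk1.8.0          # placeholder JDK location
export HADOOP_HOME=/usr/hadoop-2.7.3         # matches the Hadoop path seen in section (5)
export HIVE_HOME=/usr/hive-1.2.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin
```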

(3)

Logging initialized using configuration in jar:file:/usr/hive-1.2.2/lib/hive-common-1.2.2.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 8 needs additional 20 blocks to reach the threshold 0.9990 of total blocks.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 8 needs additional 20 blocks to reach the threshold 0.9990 of total blocks.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.

Solution: take the NameNode out of safe mode.

From the Hadoop installation directory, run:

./bin/hdfs dfsadmin -safemode leave

About the /tmp/hive path: Hadoop's default scratch location is also /tmp, so Hive would share the same scratch path with Hadoop. It is better to change it. In the Hive installation directory, open ./conf/hive-site.xml; mine is changed like this:

<property>
    <name>hive.exec.scratchdir</name>
    <!-- <value>/tmp/hive</value> -->
    <value>/usr/hive-1.2.2/iotmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
  </property>
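If the new scratch directory does not yet exist on HDFS, it may need to be created with the write-all (733) permission the description mentions (a sketch, reusing the /usr/hive-1.2.2/iotmp/hive value configured above; run from the Hadoop installation directory):

```shell
# Create the relocated Hive scratch directory on HDFS and open it up
# with the write-all (733) permission described in the property above.
./bin/hdfs dfs -mkdir -p /usr/hive-1.2.2/iotmp/hive
./bin/hdfs dfs -chmod 733 /usr/hive-1.2.2/iotmp/hive
```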

(4) While writing HQL, the statement

alter table t_hive add columns(d int);

produced this error:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. For direct MetaStore DB connections, we don't support retries at the client level.


This turned out to be a permissions problem; change the following in hive-site.xml:

<property>
    <name>hive.insert.into.multilevel.dirs</name>
    <!-- <value>false</value> -->
    <value>true</value>
    <description>
      Where to insert into multilevel directories like
      "insert directory '/HIVEFT25686/chinna/' from table"
    </description>
  </property>

If that does not help, check the logs. If the log contains an error like this:

com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1665)
	at com.mysql.jdbc.Connection.execSQL(Connection.java:3170)
	at com.mysql.jdbc.Connection.execSQL(Connection.java:3099)
	at com.mysql.jdbc.Statement.executeQuery(Statement.java:1138)
	at com.mysql.jdbc.Connection.getTransactionIsolation(Connection.java:3731)

this indicates that the mysql-connector-java-5.**.tar.gz version does not match. That was exactly my case; uploading a suitable version fixed it.

If the mysql-connector-java-5.** version error above does not appear and things still fail, try running this command:

$HIVE_HOME/bin/schematool -dbType mysql -initSchema
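To see which connector version Hive is actually loading, it can help to list the driver jar under $HIVE_HOME/lib (a sketch; the jar-name pattern is an assumption based on the usual mysql-connector-java naming):

```shell
# List the MySQL JDBC driver jar(s) Hive picks up from its lib directory;
# the version in the file name is what the metastore connects with.
ls "$HIVE_HOME"/lib/mysql-connector-java-*.jar
```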


(5)

hive> select count(*) from user_log;
Query ID = root_20170815203054_4969fe90-c7e8-442e-ad9e-6796b9330b8d
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1499242411373_0006, Tracking URL = http://master:8088/proxy/application_1499242411373_0006/
Kill Command = /usr/hadoop-2.7.3/bin/hadoop job  -kill job_1499242411373_0006


The MapReduce job stays stuck at the "Starting Job" stage. First check the Tracking URL; if that does not reveal the cause,

a likely reason is that the system is short of memory. To fix it, either add memory to the machine, kill other unimportant processes,

or lower hive.map.aggr.hash.percentmemory in hive-site.xml (its minimum value is 0.25).
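Following the earlier hive-site.xml examples, lowering that value looks like this (a sketch; 0.25 is the minimum mentioned above, and the description wording is an assumption based on the stock hive-site template):

```xml
<property>
    <name>hive.map.aggr.hash.percentmemory</name>
    <value>0.25</value>
    <description>Portion of total memory to be used by map-side group aggregation hash table.</description>
  </property>
```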