Hive Installation and Configuration
Hive only needs to be installed on a single node.
1. Upload the tar package
2. Extract it
[[email protected] ~]$ tar -zxvf apache-hive-1.2.1-bin.tar.gz -C apps/
3. Install MySQL as the metastore database, replacing the default Derby database
mysql -uroot -p
1. Set the root password to root
2. Delete the anonymous users
3. Allow the user to connect remotely
#(Run the statement below. *.* means all tables in all databases; % means connections from any IP or host are allowed)
mysql> grant all privileges on *.* to 'root'@'%' identified by 'root';
mysql> flush privileges;
4. Configure Hive
(a) Configure the environment variables HIVE_HOME and HADOOP_HOME
Configure HIVE_HOME in /etc/profile:
export HIVE_HOME=/home/hadoop/apps/hive-1.2.1
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HIVE_HOME/bin
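After sourcing /etc/profile, it is worth confirming that the `hive` launcher actually resolves on PATH. The sketch below simulates that check with a scratch directory and a stub script (the /tmp paths and the stub are hypothetical, purely to illustrate PATH resolution; on the real machine you would simply run `hive --version`):

```shell
# Hypothetical scratch layout standing in for the real install,
# so PATH resolution can be demonstrated without a live cluster.
HIVE_HOME=/tmp/hive-demo/hive-1.2.1
mkdir -p "$HIVE_HOME/bin"
printf '#!/bin/sh\necho hive-stub\n' > "$HIVE_HOME/bin/hive"
chmod +x "$HIVE_HOME/bin/hive"

export PATH="$PATH:$HIVE_HOME/bin"

# Shows which executable the shell will pick up for `hive`.
command -v hive
```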
Configure HADOOP_HOME in conf/hive-env.sh:
[[email protected] ~]$ cd apps/hive-1.2.1/conf/
[[email protected] conf]$ cp hive-env.sh.template hive-env.sh
[[email protected] conf]$ vi hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4/
(b) Configure the metastore connection: vi hive-site.xml
Add the following:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
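The same file can also be generated from the shell with a heredoc instead of hand-editing. A sketch (it writes to a /tmp directory for illustration; on a real install you would point CONF_DIR at $HIVE_HOME/conf, and the descriptions are omitted for brevity):

```shell
# Sketch: generate hive-site.xml with a heredoc.
# CONF_DIR is a scratch path; use $HIVE_HOME/conf on a real install.
CONF_DIR=/tmp/hive-demo-conf
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/hive-site.xml" <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
EOF

# Quick sanity check: all four connection properties were written.
grep -c '<property>' "$CONF_DIR/hive-site.xml"
```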
5. After installing Hive and MySQL, copy the MySQL connector jar into the $HIVE_HOME/lib directory
[[email protected] ~]$ mv mysql-connector-java-5.1.28.jar apps/hive-1.2.1/lib/
6. To fix the Jline version mismatch, copy jline-2.12.jar from Hive's lib directory into Hadoop, replacing the old version
[[email protected] ~]$ rm ~/apps/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar
[[email protected] ~]$ cd apps/hive-1.2.1/lib/
[[email protected] lib]$ cp jline-2.12.jar ~/apps/hadoop-2.6.4/share/hadoop/yarn/lib/
Start Hive
[[email protected] ~]$ apps/hive-1.2.1/bin/hive
----------------------------------------------------------------------------------------------------
7. Create tables (tables are internal/managed by default)
create table trade_detail(id bigint, account string, income double, expenses double, time string)
row format delimited fields terminated by '\t';

Create a partitioned table:
create table td_part(id bigint, account string, income double, expenses double, time string)
partitioned by (logdate string)
row format delimited fields terminated by '\t';

Create an external table:
create external table td_ext(id bigint, account string, income double, expenses double, time string)
row format delimited fields terminated by '\t'
location '/td_ext';
8. Creating a partitioned table
Difference between regular and partitioned tables: when large volumes of data keep being added, create a partitioned table.
create table book (id bigint, name string)
partitioned by (pubdate string)
row format delimited fields terminated by '\t';
Load data into a partitioned table:
load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
load data local inpath '/root/data.am' into table beauties partition (nation="USA");
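The load statements above assume the input file matches the table's '\t' field delimiter. A sketch of producing such a file for the book table (the sample rows are made up):

```shell
# Sketch: build a tab-delimited file matching book(id bigint, name string).
# The two rows are made-up sample data.
printf '1\tHadoop in Action\n2\tProgramming Hive\n' > /tmp/book.txt

# Verify every line splits into exactly 2 tab-separated fields.
awk -F'\t' '{print NF}' /tmp/book.txt
```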
select nation, avg(size) as avg_size from beauties group by nation order by avg_size;
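The payoff of partitioning is partition pruning: when the partition column appears in the WHERE clause, Hive reads only the matching partition directories instead of scanning the whole table. A HiveQL sketch against the book table created above (runnable only inside a Hive session):

```sql
-- Only the pubdate=2010-08-22 partition directory is scanned,
-- because pubdate is a partition column of book.
select id, name
from book
where pubdate = '2010-08-22';
```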