Installing Hive 1.2.1
阿新 • Posted: 2018-12-15
Hive only needs to be installed on a single node.

1. Upload the tar package.

2. Extract it and rename the directory:

```shell
tar -zxvf hive-1.2.1.tar.gz -C /usr/local
mv /usr/local/hive-1.2.1 /usr/local/hive
```

3. Install the MySQL database (switch to the root user first). There is no restriction on where MySQL is installed, as long as that node can reach the Hadoop cluster. The MySQL steps below are for reference only; different MySQL versions have their own installation procedures.

```shell
rpm -qa | grep mysql
rpm -e mysql-libs-5.1.66-2.el6_3.i686 --nodeps
rpm -ivh MySQL-server-5.1.73-1.glibc23.i386.rpm
rpm -ivh MySQL-client-5.1.73-1.glibc23.i386.rpm
```

Set the MySQL password:

```shell
/usr/bin/mysql_secure_installation
```

(Note: delete the anonymous users and allow remote connections.)

Log in to MySQL:

```shell
mysql -u root -p
```

4. Configure Hive.

(a) Configure the HADOOP_HOME environment variable:

```shell
vi conf/hive-env.sh
```

and fill in HADOOP_HOME there.

(b) Configure the metastore database connection:

```shell
vi conf/hive-site.xml
```

Add the following content:

```xml
<configuration>
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://192.168.31.11:3306/hive?createDatabaseIfNotExist=true</value>
		<description>JDBC connect string for a JDBC metastore</description>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
		<description>Driver class name for a JDBC metastore</description>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>root</value>
		<description>username to use against metastore database</description>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>123456</value>
		<description>password to use against metastore database</description>
	</property>
</configuration>
```

5. Once Hive and MySQL are both installed, copy the MySQL JDBC connector jar into $HIVE_HOME/lib.

If you run into a permissions problem, grant privileges in MySQL (run this on the machine where MySQL is installed):

```shell
mysql -uroot -p
```

Then execute the statements below. `*.*` means all tables in all databases; `%` means any client IP address or host may connect:

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```
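When repeating this setup on several machines, the hive-site.xml shown above can also be generated from a small script instead of edited by hand. This is only a sketch: the metastore host, database name, and credentials are the example values from this walkthrough (substitute your own), and the file is written to the current directory so you can inspect it before copying it to $HIVE_HOME/conf.

```shell
#!/bin/sh
# Sketch: emit hive-site.xml with the metastore JDBC settings shown above.
# The four values below are the example values from this walkthrough --
# adjust them to your environment before use.
METASTORE_HOST="192.168.31.11"
METASTORE_DB="hive"
METASTORE_USER="root"
METASTORE_PASS="123456"

# Write the config to the current directory (not $HIVE_HOME/conf) so it
# can be reviewed first. The heredoc is unquoted, so ${...} expands.
cat > hive-site.xml <<EOF
<configuration>
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://${METASTORE_HOST}:3306/${METASTORE_DB}?createDatabaseIfNotExist=true</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>${METASTORE_USER}</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>${METASTORE_PASS}</value>
	</property>
</configuration>
EOF

echo "wrote hive-site.xml"
```

After reviewing the output, copy it into place with `cp hive-site.xml $HIVE_HOME/conf/`.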
6. Fix the jline jar version mismatch: copy jline-2.12.jar from Hive's lib directory to replace Hadoop's older copy at /home/hadoop/app/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar.

Start Hive:

```shell
bin/hive
```

If you don't want to have to enter hive/bin every time to start Hive, you can add it to the environment variables. Edit /etc/profile:

```shell
vim /etc/profile
```

Append the following at the end of the file:

```shell
#hive
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
```

Make it take effect immediately:

```shell
source /etc/profile
```

----------------------------------------------------------------------------------------------------

7. Creating tables (internal, i.e. managed, by default):

```sql
create table trade_detail(id bigint, account string, income double, expenses double, time string)
	row format delimited fields terminated by '\t';
```

Create a partitioned table:

```sql
create table td_part(id bigint, account string, income double, expenses double, time string)
	partitioned by (logdate string)
	row format delimited fields terminated by '\t';
```

Create an external table:

```sql
create external table td_ext(id bigint, account string, income double, expenses double, time string)
	row format delimited fields terminated by '\t'
	location '/td_ext';
```

8. Creating a partitioned table.

The difference between a plain table and a partitioned table: when large amounts of data keep being added, create a partitioned table.

```sql
create table book (id bigint, name string)
	partitioned by (pubdate string)
	row format delimited fields terminated by '\t';
```

Load data into a partition:

```sql
load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
load data local inpath '/root/data.am' into table beauties partition (nation="USA");
select nation, avg(size) from beauties group by nation order by avg(size);
```
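The `load data local inpath './book.txt' ...` statement above expects a local file whose fields are separated by literal tabs, matching `row format delimited fields terminated by '\t'` in the table definition. A quick way to produce a small test file -- the rows here are made-up sample data, not from the original post:

```shell
#!/bin/sh
# Sketch: build a tab-delimited input file for the `book` table above
# (schema: id bigint, name string). The \t in printf emits a real tab,
# which is what the table's field delimiter requires.
printf '1\tHive in Practice\n2\tHadoop Basics\n3\tSQL on Big Data\n' > book.txt

# Show the result; each line is "<id><TAB><name>".
cat book.txt
```

If the delimiter in the file does not match the table's delimiter, Hive will not error on load, but queries will return NULL columns, so it is worth checking the file with `cat -A` (GNU coreutils) before loading.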