Pseudo-Distributed Installation of CDH Hadoop 2.6
1. Basic Environment Configuration
Hostname | IP Address | Roles | Hadoop User |
---|---|---|---|
localhost | 192.168.30.139 | NameNode, ResourceManager, SecondaryNameNode, DataNode, NodeManager | hadoop |
1.1. Disable the Firewall and SELinux
1.1.1. Disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld
1.1.2. Disable SELinux
$ setenforce 0
$ sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux
Note: the commands above must be run as root.
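As an optional sanity check, the following standard commands should confirm both services are off (expected output noted in the comments):
$ systemctl is-active firewalld   # expected: inactive
$ getenforce                      # expected: Permissive now, Disabled after a reboot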
1.2. hosts Configuration
$ vi /etc/hosts
########## Hadoop host ##########
192.168.30.139 localhost
Note: run the commands above as root. Pinging the hostname should return the corresponding IP.
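For example, a single ping verifies the mapping; the reply should come from 192.168.30.139:
$ ping -c 1 localhost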
1.3. Configure Passwordless SSH Access
First create the hadoop user, then configure passwordless SSH access for it. Since this is a single-host pseudo-distributed setup, the key only needs to be copied to the local machine.
Generate the private/public key pair:
$ ssh-keygen -t rsa
Copy the public key to the host (a password prompt appears this one time):
$ ssh-copy-id hadoop@localhost
Note: run these commands as the hadoop user. Afterwards, ssh-ing to the host as the hadoop user should not prompt for a password.
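If a password is still requested, the usual cause is over-permissive file modes; a quick sketch of the check and the standard OpenSSH fix:
$ ssh hadoop@localhost hostname   # should print the hostname with no password prompt
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys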
1.4. Java Environment Setup
1.4.1. Download the JDK
Note: perform this step as the hadoop user.
$ cd /home/hadoop
$ curl -o jdk-8u151-linux-x64.tar.gz http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516091623_fa4174d4b1eed73f36aa38230498cd48
1.4.2. Install Java
Install Java as the hadoop user:
$ mkdir -p /home/hadoop/app/java
$ tar -zxf jdk-8u151-linux-x64.tar.gz
$ mv jdk1.8.0_151 /home/hadoop/app/java/jdk1.8
- Configure the Java environment variables:
$ vi /home/hadoop/.bash_profile
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Load the environment variables:
$ source /home/hadoop/.bash_profile
Note: running java -version should print the Java version information.
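For JDK 8u151 the output should look roughly like the following (exact build strings may differ):
$ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)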
2. Install Hadoop
2.1. Download the CDH Release of Hadoop
$ cd ~
$ curl -O http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.9.0.tar.gz
$ mkdir -p app/hadoop
$ tar -zxf hadoop-2.6.0-cdh5.9.0.tar.gz -C ./app/hadoop/
2.2. Install and Configure Hadoop
Perform the Hadoop installation and configuration as the hadoop user.
- Create the directories that will hold Hadoop data:
$ mkdir -p /home/hadoop/app/hadoop/hdfs/{name,data}
2.2.1. Configure core-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/app/hadoop/tmp</value>
</property>
</configuration>
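Because hadoop.tmp.dir points at a local path, it is worth creating it up front as the hadoop user (Hadoop usually creates it on demand, but pre-creating it avoids permission surprises):
$ mkdir -p /home/hadoop/app/hadoop/tmp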
2.2.2. Configure hdfs-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/app/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/app/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
2.2.3. Configure mapred-site.xml
$ cd /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/
$ cp mapred-site.xml.template mapred-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
2.2.4. Configure yarn-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
2.2.5. Configure slaves
$ vi app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/slaves
localhost
2.2.6. Configure hadoop-env
Set the JAVA_HOME environment variable in hadoop-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
2.2.7. Configure yarn-env
Set the JAVA_HOME environment variable in yarn-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/yarn-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
2.2.8. Configure mapred-env
Set the JAVA_HOME environment variable in mapred-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/etc/hadoop/mapred-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
2.2.9. Configure HADOOP_PREFIX
$ vi /home/hadoop/.bash_profile
####HADOOP_PREFIX
export HADOOP_PREFIX=/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
Load the environment variables:
$ source /home/hadoop/.bash_profile
Note: echo $HADOOP_PREFIX should print the Hadoop installation directory.
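The hadoop command itself is another quick check; since the binaries come from the CDH tarball, the first line of output should read something like Hadoop 2.6.0-cdh5.9.0:
$ hadoop version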
3. Start Hadoop in Pseudo-Distributed Mode
3.1. Start HDFS
- Format HDFS
$ hdfs namenode -format
- Start DFS
$ start-dfs.sh
- Processes after startup
$ jps
15376 NameNode
15496 DataNode
15656 SecondaryNameNode
15759 Jps
Note: stop DFS with stop-dfs.sh.
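Beyond jps, a NameNode report is a useful health check; with this single-node setup it should list exactly one live DataNode:
$ hdfs dfsadmin -report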
3.2. Start YARN
$ start-yarn.sh
Note: stop YARN with stop-yarn.sh.
3.3. Start the Cluster
HDFS and YARN can also be started and stopped with a single command:
- Start:
$ start-all.sh
- Stop:
$ stop-all.sh
- All processes after startup:
$ jps
15376 NameNode
16210 Jps
15811 ResourceManager
15907 NodeManager
15496 DataNode
15656 SecondaryNameNode
- MapReduce Pi estimation
$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.9.0.jar pi 5 10
The result returned is: Estimated value of Pi is 3.28000000000000000000
- YARN web UI: http://192.168.30.139:8088
- HDFS web UI: http://192.168.30.139:50070
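If no browser is available on the host, the two UIs can be probed from the shell; an HTTP status of 200 means the daemon is up and serving:
$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.30.139:50070
$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.30.139:8088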
4. HDFS Shell Operations and a WordCount Demo
4.1. Basic HDFS Shell Operations
- Create directories
$ hadoop fs -mkdir /input
$ hadoop fs -mkdir /output
- List the root directory
$ hadoop fs -ls /
Found 4 items
drwxr-xr-x   - hadoop supergroup          0 2018-01-19 10:56 /input
drwxr-xr-x   - hadoop supergroup          0 2018-01-19 10:56 /output
drwx------   - hadoop supergroup          0 2018-01-19 10:51 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-01-19 10:51 /user
- Upload a file
$ hadoop fs -put /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/LICENSE.txt /input
- View the contents of a text file
$ hadoop fs -cat /input/LICENSE.txt
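The reverse of -put is -get; pulling the file back and diffing it against the original makes a simple round-trip check (local paths chosen for illustration):
$ hadoop fs -get /input/LICENSE.txt /tmp/LICENSE.txt
$ diff /tmp/LICENSE.txt /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/LICENSE.txt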
4.2. WordCount
Use Hadoop's built-in WordCount example jar to count the words in /input/LICENSE.txt on HDFS.
- Run the job
$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.9.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.9.0.jar wordcount /input /output/wordcounttest
- Check the results
$ hadoop fs -ls /output/wordcounttest
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-01-19 11:04 /output/wordcounttest/_SUCCESS
-rw-r--r--   1 hadoop supergroup      22117 2018-01-19 11:04 /output/wordcounttest/part-r-00000
$ hadoop fs -cat /output/wordcounttest/part-r-00000 | sort -k2 -nr | head
the 641
of 396
or 269
and 255
to 241
this 164
in 162
OR 161
OF 160
a 128
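Note that MapReduce refuses to write into an existing output directory; to re-run the job, remove the old output first (or choose a new path):
$ hadoop fs -rm -r /output/wordcounttest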
5. References
http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.5/hadoop-project-dist/hadoop-common/SingleCluster.html