Building a Hadoop 1.2.1 Cluster on CentOS 6.5
1. Operating System Configuration
1.1. Operating System Environment
Hostname | IP Address | Role | Hadoop User |
---|---|---|---|
hadoop-master | 192.168.30.50 | Hadoop主節點 | hadoop |
hadoop-slave01 | 192.168.30.51 | Hadoop從節點 | hadoop |
hadoop-slave02 | 192.168.30.52 | Hadoop從節點 | hadoop |
1.2. Disable the Firewall and SELinux
1.2.1. Disable the Firewall
service iptables stop
chkconfig iptables off
1.2.2. Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux
1.3. Configure /etc/hosts
vim /etc/hosts
########## Hadoop host ##########
192.168.30.50 hadoop-master
192.168.30.51 hadoop-slave01
192.168.30.52 hadoop-slave02
Note: the steps above must be performed as root. Verify that pinging each hostname returns the corresponding IP address.
1.4. Configure Passwordless SSH
Configure passwordless SSH as the hadoop user on all three hosts. The steps are identical on every host; hadoop-master is shown here as the example.
Generate the private/public key pair:
ssh-keygen -t rsa
Copy the public key to each host (you will be prompted for the password):
ssh-copy-id hadoop@hadoop-master
ssh-copy-id hadoop@hadoop-slave01
ssh-copy-id hadoop@hadoop-slave02
Note: the steps above must be performed as the hadoop user. Verify that the hadoop user can ssh to each of the other hosts without a password prompt.
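The verification above can be scripted. The following is a minimal sketch (the `check_ssh` helper name is ours, not part of Hadoop); `BatchMode=yes` makes ssh fail immediately instead of prompting, so any host still requiring a password is reported as FAILED:

```shell
# Sketch: confirm passwordless SSH works from this host to every node.
# BatchMode=yes makes ssh fail rather than prompt for a password.
check_ssh() {
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "hadoop@$host" true 2>/dev/null; then
      echo "$host: OK"
    else
      echo "$host: FAILED"
    fi
  done
}

check_ssh hadoop-master hadoop-slave01 hadoop-slave02
```

Run it once from each of the three hosts; all nodes should report OK before moving on.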
2. Java Environment
2.1. Download the JDK
mkdir -p /home/hadoop/app/java
cd /home/hadoop/app/java
wget -c http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin
2.2. Install the JDK
cd /home/hadoop/app/java
chmod +x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin
2.3. Configure Java Environment Variables
vim .bash_profile
export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Load the environment variables:
source .bash_profile
Note: install the JDK as the hadoop user on every machine, and verify with java -version that the expected Java version is reported.
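Beyond `java -version`, it is worth confirming that the JDK actually unpacked to the path the environment variables point at. A small sketch (the `check_jdk` helper name is ours; the path is the install location used above):

```shell
# Sketch: confirm the JDK landed where JAVA_HOME will point.
check_jdk() {
  if [ -x "$1/bin/java" ]; then
    echo "JDK found at $1"
  else
    echo "JDK NOT found at $1"
  fi
}

check_jdk /home/hadoop/app/java/jdk1.6.0_45
```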
3. Hadoop Installation and Configuration
Install and configure Hadoop as the hadoop user.
3.1. Install Hadoop
- Download Hadoop 1.2.1
mkdir -p /home/hadoop/app/hadoop
cd /home/hadoop/app/hadoop
wget -c https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1-bin.tar.gz
tar -zxf hadoop-1.2.1-bin.tar.gz
- Create the Hadoop temporary directory (the path must match hadoop.tmp.dir in core-site.xml below)
mkdir -p /home/hadoop/app/hadoop/hadoop-1.2.1/tmp
3.2. Configure Hadoop
The Hadoop configuration files are XML; edit them as the hadoop user.
3.2.1. Configure core-site.xml
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/app/hadoop/hadoop-1.2.1/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.30.50:9000</value>
</property>
</configuration>
3.2.2. Configure hdfs-site.xml
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
3.2.3. Configure mapred-site.xml
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>192.168.30.50:9001</value>
</property>
</configuration>
3.2.4. Configure masters
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/masters
hadoop-master
3.2.5. Configure slaves
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/slaves
hadoop-slave01
hadoop-slave02
3.2.6. Configure hadoop-env.sh
vim /home/hadoop/app/hadoop/hadoop-1.2.1/conf/hadoop-env.sh
Set JAVA_HOME as follows:
export JAVA_HOME=/home/hadoop/app/java/jdk1.6.0_45
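Before copying the tree to the slaves, a quick grep can confirm that each key property actually landed in its file. A minimal sketch (the `check_setting` helper name is ours; the paths assume the layout used above):

```shell
# Sketch: verify each key property is present in its Hadoop config file.
check_setting() {  # usage: check_setting <file> <property-name>
  if grep -q "<name>$2</name>" "$1" 2>/dev/null; then
    echo "$2: present"
  else
    echo "$2: MISSING"
  fi
}

conf=/home/hadoop/app/hadoop/hadoop-1.2.1/conf
check_setting "$conf/core-site.xml"   fs.default.name
check_setting "$conf/hdfs-site.xml"   dfs.replication
check_setting "$conf/mapred-site.xml" mapred.job.tracker
```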
3.3. Copy Hadoop to the Slave Nodes
From /home/hadoop on the master:
scp -r app hadoop-slave01:/home/hadoop/
scp -r app hadoop-slave02:/home/hadoop/
3.4. Configure Hadoop Environment Variables
On every machine, edit the hadoop user's .bash_profile and append the following:
vim /home/hadoop/.bash_profile
### Hadoop PATH
export HADOOP_HOME_WARN_SUPPRESS=1
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the environment variables:
source /home/hadoop/.bash_profile
3.5. Start Hadoop
Format the HDFS filesystem on the master node, then start the cluster.
3.5.1. Format the HDFS Filesystem
hadoop namenode -format
3.5.2. Start and Stop the Hadoop Cluster
- Start:
start-all.sh
- Stop:
stop-all.sh
3.5.3. Processes on Each Node After Startup
- master
$ jps
22262 NameNode
22422 SecondaryNameNode
24005 Jps
22506 JobTracker
- slave
$ jps
2700 TaskTracker
2611 DataNode
4160 Jps
3.5.4. Verify After Startup
- Simple file operations
hadoop fs -ls hdfs:/
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2018-01-09 16:15 /home
drwxr-xr-x - hadoop supergroup 0 2018-01-10 10:39 /user
- A simple MapReduce job
hadoop jar /home/hadoop/app/hadoop/hadoop-1.2.1/hadoop-examples-1.2.1.jar pi 10 10
The result of the computation:
Number of Maps = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/01/10 13:49:35 INFO mapred.FileInputFormat: Total input paths to process : 10
18/01/10 13:49:36 INFO mapred.JobClient: Running job: job_201801101031_0002
18/01/10 13:49:37 INFO mapred.JobClient:  map 0% reduce 0%
18/01/10 13:49:49 INFO mapred.JobClient:  map 10% reduce 0%
18/01/10 13:49:50 INFO mapred.JobClient:  map 30% reduce 0%
18/01/10 13:49:51 INFO mapred.JobClient:  map 40% reduce 0%
18/01/10 13:49:59 INFO mapred.JobClient:  map 50% reduce 0%
18/01/10 13:50:00 INFO mapred.JobClient:  map 60% reduce 0%
18/01/10 13:50:02 INFO mapred.JobClient:  map 80% reduce 0%
18/01/10 13:50:07 INFO mapred.JobClient:  map 100% reduce 0%
18/01/10 13:50:12 INFO mapred.JobClient:  map 100% reduce 33%
18/01/10 13:50:14 INFO mapred.JobClient:  map 100% reduce 100%
18/01/10 13:50:16 INFO mapred.JobClient: Job complete: job_201801101031_0002
18/01/10 13:50:16 INFO mapred.JobClient: Counters: 30
18/01/10 13:50:16 INFO mapred.JobClient:   Job Counters
18/01/10 13:50:16 INFO mapred.JobClient:     Launched reduce tasks=1
18/01/10 13:50:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=95070
18/01/10 13:50:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
18/01/10 13:50:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
18/01/10 13:50:16 INFO mapred.JobClient:     Launched map tasks=10
18/01/10 13:50:16 INFO mapred.JobClient:     Data-local map tasks=10
18/01/10 13:50:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=25054
18/01/10 13:50:16 INFO mapred.JobClient:   File Input Format Counters
18/01/10 13:50:16 INFO mapred.JobClient:     Bytes Read=1180
18/01/10 13:50:16 INFO mapred.JobClient:   File Output Format Counters
18/01/10 13:50:16 INFO mapred.JobClient:     Bytes Written=97
18/01/10 13:50:16 INFO mapred.JobClient:   FileSystemCounters
18/01/10 13:50:16 INFO mapred.JobClient:     FILE_BYTES_READ=226
18/01/10 13:50:16 INFO mapred.JobClient:     HDFS_BYTES_READ=2450
18/01/10 13:50:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=682653
18/01/10 13:50:16 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=215
18/01/10 13:50:16 INFO mapred.JobClient:   Map-Reduce Framework
18/01/10 13:50:16 INFO mapred.JobClient:     Map output materialized bytes=280
18/01/10 13:50:16 INFO mapred.JobClient:     Map input records=10
18/01/10 13:50:16 INFO mapred.JobClient:     Reduce shuffle bytes=280
18/01/10 13:50:16 INFO mapred.JobClient:     Spilled Records=40
18/01/10 13:50:16 INFO mapred.JobClient:     Map output bytes=180
18/01/10 13:50:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=1146068992
18/01/10 13:50:16 INFO mapred.JobClient:     CPU time spent (ms)=7050
18/01/10 13:50:16 INFO mapred.JobClient:     Map input bytes=240
18/01/10 13:50:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1270
18/01/10 13:50:16 INFO mapred.JobClient:     Combine input records=0
18/01/10 13:50:16 INFO mapred.JobClient:     Reduce input records=20
18/01/10 13:50:16 INFO mapred.JobClient:     Reduce input groups=20
18/01/10 13:50:16 INFO mapred.JobClient:     Combine output records=0
18/01/10 13:50:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1843138560
18/01/10 13:50:16 INFO mapred.JobClient:     Reduce output records=0
18/01/10 13:50:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=7827865600
18/01/10 13:50:16 INFO mapred.JobClient:     Map output records=20
Job Finished in 41.091 seconds
Estimated value of Pi is 3.20000000000000000000
The Hadoop cluster setup is complete. If errors occur, check the log files for details.
4. References
- [Hadoop Cluster Setup documentation](http://hadoop.apache.org/docs/r1.0.4/cn/cluster_setup.html)