Hadoop Pseudo-Distributed Environment Setup
1.1 Install a CentOS 7 virtual machine in VMware
1.2 System configuration
Configure the network:
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.120.131
GATEWAY=192.168.120.2
NETMASK=255.255.255.0
DNS1=8.8.8.8
DNS2=4.4.4.4
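Restart the network service so the static address takes effect, then check it (a quick verification, assuming the interface is ens33 as above):
# systemctl restart network
# ip addr show ens33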
1.3 Configure the hostname
# hostnamectl set-hostname master1
# hostname master1
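Confirm the new name with hostnamectl; optionally, map it to the static IP in /etc/hosts so local tools can resolve it (the address assumes the one set in 1.2):
# hostnamectl status
# echo '192.168.120.131 master1' >> /etc/hosts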
1.4 Set the time zone (if it is not Asia/Shanghai)
# ll /etc/localtime
lrwxrwxrwx. 1 root root 35 Jun 4 19:25 /etc/localtime -> ../usr/share/zoneinfo/Asia/Shanghai
If the time zone is wrong, fix it with:
# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
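Alternatively, on CentOS 7 timedatectl can set and verify the time zone in one step:
# timedatectl set-timezone Asia/Shanghai
# timedatectl | grep 'Time zone'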
1.5 Upload the packages
hadoop-2.9.1.tar
jdk-8u171-linux-x64.tar
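One way to get the tarballs onto the VM is scp from the host machine (a sketch; it assumes the files sit in the host's current directory and uses the VM address from 1.2):
$ scp hadoop-2.9.1.tar jdk-8u171-linux-x64.tar root@192.168.120.131:~/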
2 Set up the environment
2.1 Create the user and group
[root@master1 ~]# groupadd hadoop
[root@master1 ~]# useradd -g hadoop hadoop
[root@master1 ~]# passwd hadoop
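To confirm the account exists and its primary group is hadoop:
[root@master1 ~]# id hadoop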
2.2 Extract the packages
Switch to the hadoop user:
[root@master1 ~]# su hadoop
Create a directory to hold the packages:
[hadoop@master1 root]$ cd
[hadoop@master1 ~]$ mkdir src
[hadoop@master1 ~]$ mv *.tar src
Extract the tarballs:
[hadoop@master1 ~]$ cd src
[hadoop@master1 src]$ tar -xf jdk-8u171-linux-x64.tar -C ../
[hadoop@master1 src]$ tar xf hadoop-2.9.1.tar -C ../
[hadoop@master1 src]$ cd
[hadoop@master1 ~]$ mv jdk1.8.0_171 jdk
[hadoop@master1 ~]$ mv hadoop-2.9.1 hadoop
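After the renames, the home directory should contain hadoop, jdk, and src; a quick listing confirms it:
[hadoop@master1 ~]$ ls
hadoop  jdk  src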
2.3 Configure environment variables
[hadoop@master1 ~]$ vi .bashrc
export JAVA_HOME=/home/hadoop/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
Apply the changes:
[hadoop@master1 ~]$ source .bashrc
Verify:
[hadoop@master1 ~]$ java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
[hadoop@master1 ~]$ hadoop version
Hadoop 2.9.1
Subversion https://github.com/apache/hadoop.git -r e30710aea4e6e55e69372929106cf119af06fd0e
Compiled by root on 2018-04-16T09:33Z
Compiled with protoc 2.5.0
From source with checksum 7d6d2b655115c6cc336d662cc2b919bd
This command was run using /home/hadoop/hadoop/share/hadoop/common/hadoop-common-2.9.1.jar
2.4 Edit the Hadoop configuration files
[hadoop@master1 ~]$ cd hadoop/etc/hadoop/
[hadoop@master1 hadoop]$ vi hadoop-env.sh
export JAVA_HOME=/home/hadoop/jdk
[hadoop@master1 hadoop]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.120.131:9000</value>
  </property>
</configuration>
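fs.defaultFS is the NameNode RPC endpoint every HDFS client will use. Since HADOOP_HOME is already on PATH (section 2.3), you can read the value back to confirm the edit:
[hadoop@master1 hadoop]$ hdfs getconf -confKey fs.defaultFS
hdfs://192.168.120.131:9000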
[hadoop@master1 hadoop]$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/dn</value>
  </property>
</configuration>
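For a single-node setup it is also common to set dfs.replication to 1, because the default factor of 3 can never be satisfied by one DataNode. An optional extra property to place inside the <configuration> block above:
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>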
[hadoop@master1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@master1 hadoop]$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
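Setting mapreduce.framework.name to yarn routes MapReduce jobs through YARN rather than the local runner. A quick grep confirms the copy from the template picked up the edit:
[hadoop@master1 hadoop]$ grep -A1 'framework.name' mapred-site.xml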
[hadoop@master1 hadoop]$ vi yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Address of the ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.120.131</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/hadoop/yarn/nm</value>
  </property>
</configuration>
2.5 Create the data directories and set ownership
[hadoop@master1 hadoop]$ exit
[root@master1 ~]# mkdir -p /data/hadoop/hdfs/{nn,dn,snn}
[root@master1 ~]# mkdir -p /data/hadoop/yarn/nm
[root@master1 ~]# chown -R hadoop:hadoop /data
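Before formatting, confirm the tree matches the paths configured in 2.4 and is owned by hadoop:
[root@master1 ~]# ls -lR /data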
2.6 Format the filesystem and start the services
[root@master1 ~]# su hadoop
[hadoop@master1 ~]$ cd hadoop/bin
[hadoop@master1 bin]$ ./hdfs namenode -format
[hadoop@master1 bin]$ cd ../sbin
[hadoop@master1 sbin]$ ./hadoop-daemon.sh start namenode
[hadoop@master1 sbin]$ ./hadoop-daemon.sh start datanode
[hadoop@master1 sbin]$ ./yarn-daemon.sh start resourcemanager
[hadoop@master1 sbin]$ ./yarn-daemon.sh start nodemanager
[hadoop@master1 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
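If everything came up, jps should list NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer, and the Hadoop 2.x web UIs are reachable at http://192.168.120.131:50070 (HDFS), http://192.168.120.131:8088 (YARN), and http://192.168.120.131:19888 (JobHistory). As a smoke test, you can run the pi example that ships with the 2.9.1 distribution:
[hadoop@master1 sbin]$ jps
[hadoop@master1 sbin]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 2 5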