
hadoop/hbase Environment Setup

Hadoop Installation

(1) Create the hadoop user group and user (see 1.1, 1.5)

(2) Configure SSH trust for the hadoop user

1) Generate an RSA key pair: ssh-keygen -t rsa  // press Enter through every prompt

2) Add trust relationships among znode01, znode02, znode03

(1) Give znode01 passwordless access to znode02

(2) On znode02: mkdir ~/.ssh/tmp; touch ~/.ssh/authorized_keys

(3) On znode01: scp ~/.ssh/id_rsa.pub hdpusr@znode02:~/.ssh/tmp  // enter the znode02 password when prompted

(4) On znode02: cat ~/.ssh/tmp/id_rsa.pub >> ~/.ssh/authorized_keys

(5) On znode01, enable passwordless SSH to itself: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

...... Repeat the same steps for the remaining machine pairs; a scripted version is sketched below.
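The per-pair copy steps above can also be scripted. A minimal sketch, assuming the hdpusr account on every node and that ssh-copy-id is available (it appends the local public key to the remote authorized_keys for you):

# run on each node as hdpusr; enter the remote password once per host
for h in znode01 znode02 znode03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hdpusr@$h
done
# verify: each line should print a date without prompting for a password
for h in znode01 znode02 znode03; do ssh $h date; done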

(3) Prepare the hadoop-2.6.0.zip package

mkdir ~/env; mkdir ~/env/hadoop; extract hadoop-2.6.0.zip into ~/env/hadoop

(4) Modify the configuration files

(1) Edit hadoop-env.sh

Set JAVA_HOME: export JAVA_HOME=/usr/software/jdk1.7.0_79

(2) Edit core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdpcls1</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hdpusr/env/hadoop/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>znode01:29181,znode02:29181,znode03:29181</value>
    </property>
</configuration>
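Note that fs.defaultFS points at the nameservice ID hdpcls1 rather than at a single host; clients resolve it to the active NameNode through the failover proxy provider configured in hdfs-site.xml below. Once the environment variables from step (5) are in place, the effective value can be checked with:

hdfs getconf -confKey fs.defaultFS    # expected: hdfs://hdpcls1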

(3) Edit hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hdpcls1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.hdpcls1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hdpcls1.nn1</name>
        <value>znode01:8920</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdpcls1.nn1</name>
        <value>znode01:59070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hdpcls1.nn2</name>
        <value>znode02:8920</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdpcls1.nn2</name>
        <value>znode02:59070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://znode01:8485;znode02:8485;znode03:8485/hdpcls1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hdpusr/env/hadoop/dfs/jn</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.https-address</name>
        <value>0.0.0.0:8481</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.hdpcls1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hdpusr/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hdpusr/env/hadoop/dfs/dn/data01</value>
    </property>
</configuration>
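Once the cluster is running (step (6) below), the HA role of each NameNode defined above can be queried from the bin directory with the standard haadmin tool; with dfs.ha.automatic-failover.enabled=true, the ZKFC processes decide which of nn1/nn2 is active:

./hdfs haadmin -getServiceState nn1    # prints active or standby
./hdfs haadmin -getServiceState nn2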

(4) Edit mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

(5) Edit yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>rmcls1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>znode01</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>znode02</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>znode01:29181,znode02:29181,znode03:29181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
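Likewise, after YARN is started, the state of the two ResourceManagers configured above can be checked with rmadmin:

./yarn rmadmin -getServiceState rm1
./yarn rmadmin -getServiceState rm2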

(6) Edit slaves

vim slaves

Add:

znode02

znode03

(7) Create the hadoop directories

mkdir -p ~/env/hadoop/{tmp,dfs/jn,dfs/dn/data01}

(5) Configure environment variables

su - hdpusr

vim ~/.bash_profile

export HADOOP_HOME=${HOME}/env/hadoop

export HADOOP_CONF_DIR=$HOME/env/hadoop/etc/hadoop

#export HADOOP_PID_DIR=$HOME/env/hadoop/pid

export HADOOP_LOG_DIR=$HOME/env/hadoop/logs

export HADOOP_LIBEXEC_DIR=$HOME/env/hadoop/libexec

export HADOOP_PREFIX=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export HADOOP_LIB=${HADOOP_HOME}/lib

export HADOOP_LIBRARY_PATH=${HADOOP_HOME}/lib/native

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HADOOP_MAPRED_HOME=${HADOOP_HOME}

export HADOOP_YARN_USER=pochdp04

export YARN_CONF_DIR=${HADOOP_CONF_DIR}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_YARN_HOME=${HADOOP_HOME}

export YARN_LOG_DIR=${HADOOP_LOG_DIR}

export YARN_PID_DIR=${HADOOP_CONF_DIR}/../yarn

source ~/.bash_profile  # make the environment variables take effect
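A quick sanity check that the variables took effect, assuming the layout above:

which hadoop      # should resolve to ~/env/hadoop/bin/hadoop
hadoop version    # should report Hadoop 2.6.0
echo $HADOOP_CONF_DIR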

(6) Formatting, start, and stop operations

(1) Make sure ZooKeeper is running normally

(2) On znode01, znode02, and znode03, run from the sbin directory: ./hadoop-daemon.sh start journalnode

28365 JournalNode  -- this process should be visible (via jps) when everything is normal

(3) Format HDFS

On znode01, run from the bin directory: ./hdfs namenode -format

Output:

15/12/18 13:34:35 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1518459121-10.46.52.30-1450416875417

15/12/18 13:34:35 INFO common.Storage: Storage directory /aifs01/users/hdpusr01/hadoop/tmp/dfs/name has been successfully formatted.

15/12/18 13:34:35 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

15/12/18 13:34:35 INFO util.ExitUtil: Exiting with status 0

[Copy /home/hdpusr/env/hadoop/tmp to /home/hdpusr/env/hadoop/tmp on znode02]

(4) Format ZK

On znode01, run from the bin directory: ./hdfs zkfc -formatZK

Output:

15/12/18 13:49:04 INFO ha.ActiveStandbyElector: Session connected.

15/12/18 13:49:04 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/hdpcls1 in ZK.

15/12/18 13:49:04 INFO zookeeper.ZooKeeper: Session: 0x251b09471e80001 closed

15/12/18 13:49:04 INFO zookeeper.ClientCnxn: EventThread shut down

On success, the hadoop-ha znode appears in ZK:

[zk: 127.0.0.1:29181(CONNECTED) 1] ls /

[consumers, config, hadoop-ha, zookeeper, brokers, admin, controller_epoch]
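The [zk: ...] prompt above is the ZooKeeper CLI; a session like that can be opened with zkCli.sh from the ZooKeeper installation's bin directory (the path depends on where ZooKeeper was installed on the znode hosts):

zkCli.sh -server 127.0.0.1:29181
ls /hadoop-ha          # should show [hdpcls1]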

(5) Start distributed storage. On znode01, run from sbin: ./start-dfs.sh

After a successful start:

Processes on znode01:

15507 NameNode

14371 JournalNode

23478 DFSZKFailoverController

Processes on znode02:

23870 DataNode

23474 JournalNode

23998 DFSZKFailoverController

Processes on znode03:

16268 JournalNode

16550 DataNode

At the same time, znodes are registered in ZK:

[zk: 127.0.0.1:29181(CONNECTED) 4] ls /hadoop-ha/hdpcls1

[ActiveBreadCrumb, ActiveStandbyElectorLock]

At this point the web UI is reachable at http://118.190.79.15:59070

(6) Start distributed resource management

On znode01, run from sbin: ./start-yarn.sh

znode01 now shows the process: 17746 ResourceManager

znode02 shows the process: 24975 NodeManager

znode03 shows the process: 17389 NodeManager

ZK registers the znodes rmstore and yarn-leader-election:

[zk: 127.0.0.1:29181(CONNECTED) 0] ls /

[hadoop-ha, admin, zookeeper, consumers, config, rmstore, yarn-leader-election, brokers, controller_epoch]

(7) Start everything: ./start-all.sh; stop everything: ./stop-all.sh

[hdpusr@znode01 sbin]$ ./stop-all.sh

This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh

17/03/25 09:56:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Stopping namenodes on [znode01 znode02]

znode02: no namenode to stop

znode01: stopping namenode

znode03: stopping datanode

znode02: stopping datanode

Stopping journal nodes [znode01 znode02 znode03]

znode03: stopping journalnode

znode02: stopping journalnode

znode01: stopping journalnode

17/03/25 09:57:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Stopping ZK Failover Controllers on NN hosts [znode01 znode02]

znode02: stopping zkfc

znode01: no zkfc to stop

stopping yarn daemons

stopping resourcemanager

znode02: stopping nodemanager

znode03: stopping nodemanager

no proxyserver to stop

[hdpusr@znode01 sbin]$ jps

22992 Jps

[hdpusr@znode01 sbin]$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

17/03/25 09:57:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [znode01 znode02]

znode02: starting namenode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-namenode-znode02.out

znode01: starting namenode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-namenode-znode01.out

znode03: starting datanode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-datanode-znode03.out

znode02: starting datanode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-datanode-znode02.out

Starting journal nodes [znode01 znode02 znode03]

znode02: starting journalnode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-journalnode-znode02.out

znode03: starting journalnode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-journalnode-znode03.out

znode01: starting journalnode, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-journalnode-znode01.out

17/03/25 09:57:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting ZK Failover Controllers on NN hosts [znode01 znode02]

znode02: starting zkfc, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-zkfc-znode02.out

znode01: starting zkfc, logging to /home/hdpusr/env/hadoop/logs/hadoop-hdpusr-zkfc-znode01.out

starting yarn daemons

starting resourcemanager, logging to /home/hdpusr/env/hadoop/logs/yarn-hdpusr-resourcemanager-znode01.out

znode03: starting nodemanager, logging to /home/hdpusr/env/hadoop/logs/yarn-hdpusr-nodemanager-znode03.out

znode02: starting nodemanager, logging to /home/hdpusr/env/hadoop/logs/yarn-hdpusr-nodemanager-znode02.out

(8) Check cluster status from the bin directory: ./hdfs dfsadmin -report
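Two more read-only checks that are useful at this point, run from the same bin directory:

./hdfs dfsadmin -safemode get    # expected: Safe mode is OFF
./hdfs fsck /                    # expected: The filesystem under path '/' is HEALTHY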

(9)http://118.190.79.15:59070/dfshealth.html#tab-overview

//////////////////////////////////////////////////////////////

***********************************************Hadoop fully distributed deployment**********************************************
0. Cluster plan

Hostname   IP address    Installed software          Running processes
HBS01      10.46.52.30   hadoop, hbase               namenode, zkfc, resourcemanager
HBS02      10.46.52.31   hadoop                      namenode, zkfc, resourcemanager
HBS03      10.46.52.32   hadoop, hbase               datanode
HBS04      10.46.52.33   hadoop, zookeeper, hbase    datanode, nodemanager, journalnode
HBS05      10.46.52.34   hadoop, zookeeper, hbase    datanode, nodemanager, journalnode
HBS06      10.46.52.35   hadoop, zookeeper, hbase    datanode, nodemanager, journalnode




1. Create users and configure hostnames
mkdir -p /zn/users
useradd -u 351   -g hadoop -G ibss -d /zn/users/hdpusr01 -m hdpusr01
passwd hdpusr01     --stdin <<< Linuhdp_0805
echo -e "\n. ~puwadm/wprofile\n" >> ~hdpusr01/.bash_profile 


useradd -u 352   -g hbase -G ibss -d /zn/users/hbsusr01 -m hbsusr01
passwd hbsusr01     --stdin <<< Linuhbs_0805
echo -e "\n. ~puwadm/wprofile\n" >> ~hbsusr01/.bash_profile 




vi /etc/hosts            
10.46.52.30 HBS01
10.46.52.31 HBS02
10.46.52.32 HBS03
10.46.52.33 HBS04
10.46.52.34 HBS05
10.46.52.35 HBS06


2. Configure mutual SSH trust
-- perform on every host
su - hdpusr01
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
chmod 600 .ssh/authorized_keys


-- run on one chosen master (HBS01)
ssh HBS02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS03 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS04 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS05 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS06 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 


scp ~/.ssh/authorized_keys HBS02:/zn/users/hdpusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS03:/zn/users/hdpusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS04:/zn/users/hdpusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS05:/zn/users/hdpusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS06:/zn/users/hdpusr01/.ssh/authorized_keys


-- verify the configuration works
ssh HBS01 date
ssh HBS02 date
ssh HBS03 date
ssh HBS04 date
ssh HBS05 date
ssh HBS06 date


3. Install the JDK
which java




4. Install Hadoop
4.1 Install the ZooKeeper cluster (HBS04 - HBS06)
-- configure the zookeeper cluster
[root@HBS04 hdpusr01]# tar xvf /opt/software/zookeeper.ccs01.tgz
[root@HBS04 hdpusr01]# chown -R hdpusr01:hadoop zookeeper/
[root@HBS04 hdpusr01]# rm -rf zookeeper/logs/*
[root@HBS04 hdpusr01]# rm -rf zookeeper/zk-data/version-2/
[root@HBS04 hdpusr01]# rm -rf zookeeper/zk-data/zookeeper_server.pid
[root@HBS04 hdpusr01]# vi zookeeper/conf/zoo.cfg
dataDir=/zn/users/hdpusr01/zookeeper/zk-data
server.1=HBS04:29888:39889
server.2=HBS05:29888:39889
server.3=HBS06:29888:39889


hdpusr01@HBS04:/zn/users/hdpusr01> vi zookeeper/bin/zkEnv.sh
export ZOOCFGDIR=/zn/users/hdpusr01/zookeeper/conf
export ZOO_LOG_DIR=/zn/users/hdpusr01/zookeeper/logs


[root@HBS04 hdpusr01]# su - hdpusr01
hdpusr01@HBS04:/zn/users/hdpusr01> tar cvf zkper.tar zookeeper/
hdpusr01@HBS04:/zn/users/hdpusr01> scp zkper.tar HBS05:/zn/users/hdpusr01
hdpusr01@HBS04:/zn/users/hdpusr01> scp zkper.tar HBS06:/zn/users/hdpusr01


hdpusr01@HBS05:/zn/users/hdpusr01> tar xvf zkper.tar
hdpusr01@HBS05:/zn/users/hdpusr01> echo 2 > zookeeper/zk-data/myid


hdpusr01@HBS06:/zn/users/hdpusr01> tar xvf zkper.tar
hdpusr01@HBS06:/zn/users/hdpusr01> echo 3 > zookeeper/zk-data/myid


-- start zookeeper
hdpusr01@HBS04:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh start
hdpusr01@HBS05:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh start
hdpusr01@HBS06:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh start
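Each instance can also be probed with ZooKeeper's four-letter ruok command on the client port (29181, as used throughout this cluster); a serving instance answers imok. This sketch assumes nc (netcat) is installed:

for h in HBS04 HBS05 HBS06; do echo "$h: $(echo ruok | nc $h 29181)"; done
# each line should end with imok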




4.2 Install the Hadoop cluster (AI-OPT-HBS01 - HBS06)
[root@HBS01 hdpusr01]# tar xvf /opt/software/hadoop-2.6.0.tgz
[root@HBS01 hdpusr01]# chown -R hdpusr01:hadoop hadoop-2.6.0/
hdpusr01@HBS01:/zn/users/hdpusr01> mv hadoop-2.6.0 hadoop


-- edit hadoop-env.sh
hdpusr01@HBS01:/zn/users/hdpusr01> vi hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/freeware/jdk1.7.0_79


-- edit core-site.xml
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/etc/hadoop> vi core-site.xml
<configuration>
        <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdpcls1</value>
    </property>
        <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/zn/users/hdpusr01/hadoop/tmp</value>
    </property>
        <property>
        <name>ha.zookeeper.quorum</name>
        <value>HBS04:29181,HBS05:29181,HBS06:29181</value>
    </property>
</configuration>




-- edit hdfs-site.xml
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/etc/hadoop> vi hdfs-site.xml
<configuration>


<property>
<name>dfs.nameservices</name>
<value>hdpcls1</value>
</property>




<property>
<name>dfs.ha.namenodes.hdpcls1</name>
<value>nn1,nn2</value>
</property>



<property>
<name>dfs.namenode.rpc-address.hdpcls1.nn1</name>
<value>HBS01:8920</value>
</property>


<property>
<name>dfs.namenode.http-address.hdpcls1.nn1</name>
<value>HBS01:59070</value>
</property>


<property>
<name>dfs.namenode.rpc-address.hdpcls1.nn2</name>
<value>HBS02:8920</value>
</property>


<property>
<name>dfs.namenode.http-address.hdpcls1.nn2</name>
<value>HBS02:59070</value>
</property>


<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://HBS04:8485;HBS05:8485;HBS06:8485/hdpcls1</value>
</property>


<property>
<name>dfs.journalnode.edits.dir</name>
<value>/zn/users/hdpusr01/hadoop/dfs/jn</value>
</property>


<property>
  <name>dfs.journalnode.rpc-address</name>
  <value>0.0.0.0:8485</value>
</property>


<property>
  <name>dfs.journalnode.http-address</name>
  <value>0.0.0.0:8480</value>
</property>


<property>
  <name>dfs.journalnode.https-address</name>
  <value>0.0.0.0:8481</value>
</property>




<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>


<property>
<name>dfs.client.failover.proxy.provider.hdpcls1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>


<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>


<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/zn/users/hdpusr01/.ssh/id_rsa</value>
</property>


<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>


<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/zn/users/hdpusr01/hadoop/dfs/dn/data01</value>
</property>
</configuration>
 
-- edit mapred-site.xml
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/etc/hadoop> vi mapred-site.xml
<configuration>


<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
 
-- edit yarn-site.xml
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/etc/hadoop> vi yarn-site.xml
<configuration>


<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>


<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rmcls1</value>
</property>


<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>


<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>HBS01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>HBS02</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>


<property>
<name>yarn.resourcemanager.zk-address</name>
<value>HBS04:29181,HBS05:29181,HBS06:29181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>




-- edit slaves
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/etc/hadoop> vi slaves
HBS03
HBS04
HBS05
HBS06


-- create the hadoop directories
hdpusr01@HBS01:/zn/users/hdpusr01> mkdir -p hadoop/{tmp,dfs/jn,dfs/dn/data01}




-- copy the configured hadoop to the other nodes
hdpusr01@HBS01:/zn/users/hdpusr01> tar cvf hdp.tar hadoop
hdpusr01@HBS01:/zn/users/hdpusr01> scp hdp.tar HBS02:/zn/users/hdpusr01
hdpusr01@HBS01:/zn/users/hdpusr01> scp hdp.tar HBS03:/zn/users/hdpusr01
hdpusr01@HBS01:/zn/users/hdpusr01> scp hdp.tar HBS04:/zn/users/hdpusr01
hdpusr01@HBS01:/zn/users/hdpusr01> scp hdp.tar HBS05:/zn/users/hdpusr01
hdpusr01@HBS01:/zn/users/hdpusr01> scp hdp.tar HBS06:/zn/users/hdpusr01




-- extract the hadoop package (AI-OPT-HBS02 - HBS06); a scripted fan-out is sketched below
tar xvf hdp.tar
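The scp-then-untar fan-out can be collapsed into one loop on HBS01; a sketch assuming the paths above and working ssh trust:

for h in HBS02 HBS03 HBS04 HBS05 HBS06; do
    scp hdp.tar $h:/zn/users/hdpusr01 && ssh $h "cd /zn/users/hdpusr01 && tar xf hdp.tar"
done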






-- check zookeeper cluster status (AI-OPT-HBS04 - HBS06)
hdpusr01@HBS04:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /zn/users/hdpusr01/zookeeper/conf/zoo.cfg
Mode: follower


hdpusr01@HBS05:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /zn/users/hdpusr01/zookeeper/conf/zoo.cfg
Mode: leader


hdpusr01@HBS06:/zn/users/hdpusr01> zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /zn/users/hdpusr01/zookeeper/conf/zoo.cfg
Mode: follower


-- start journalnode (AI-OPT-HBS04 - HBS06)
hdpusr01@HBS04:/zn/users/hdpusr01/hadoop/sbin> ./hadoop-daemon.sh start journalnode
hdpusr01@HBS04:/zn/users/hdpusr01/hadoop/sbin> jps
27035 QuorumPeerMain
28365 JournalNode  -- this process should be visible when everything is normal




-- format HDFS (AI-OPT-HBS01)
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/bin> ./hdfs namenode -format
15/12/18 13:34:35 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1518459121-10.46.52.30-1450416875417
15/12/18 13:34:35 INFO common.Storage: Storage directory /zn/users/hdpusr01/hadoop/tmp/dfs/name has been successfully formatted.
15/12/18 13:34:35 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/12/18 13:34:35 INFO util.ExitUtil: Exiting with status 0


-- copy /zn/users/hdpusr01/hadoop/tmp to the standby NameNode (AI-OPT-HBS01)
hdpusr01@HBS01:/zn/users/hdpusr01> scp -r /zn/users/hdpusr01/hadoop/tmp/* HBS02:/zn/users/hdpusr01/hadoop/tmp




-- format ZK (AI-OPT-HBS01)
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/bin> ./hdfs zkfc -formatZK
15/12/18 13:49:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=HBS04:29181,HBS05:29181,HBS06:29181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@6456c5f4
15/12/18 13:49:04 INFO zookeeper.ClientCnxn: Opening socket connection to server HBS05/10.46.52.34:29181. Will not attempt to authenticate using SASL (unknown error)
15/12/18 13:49:04 INFO zookeeper.ClientCnxn: Socket connection established to HBS05/10.46.52.34:29181, initiating session
15/12/18 13:49:04 INFO zookeeper.ClientCnxn: Session establishment complete on server HBS05/10.46.52.34:29181, sessionid = 0x251b09471e80001, negotiated timeout = 80000
15/12/18 13:49:04 INFO ha.ActiveStandbyElector: Session connected.
15/12/18 13:49:04 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/hdpcls1 in ZK.
15/12/18 13:49:04 INFO zookeeper.ZooKeeper: Session: 0x251b09471e80001 closed
15/12/18 13:49:04 INFO zookeeper.ClientCnxn: EventThread shut down


/* znodes added in zookeeper after formatting ZK */
[zk: 10.46.52.34:29181(CONNECTED) 0] ls /
[zookeeper]
[zk: 10.46.52.34:29181(CONNECTED) 7] ls /
[hadoop-ha, zookeeper]
[zk: 10.46.52.34:29181(CONNECTED) 8] ls /hadoop-ha
[hdpcls1]
[zk: 10.46.52.34:29181(CONNECTED) 9] ls /hadoop-ha/hdpcls1
[]




-- start HDFS (AI-OPT-HBS01)
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/sbin> ./start-dfs.sh
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/sbin> jps
29964 DFSZKFailoverController
29657 NameNode


hdpusr01@HBS02:/zn/users/hdpusr01/hadoop> jps
28086 NameNode
28185 DFSZKFailoverController


hdpusr01@HBS03:/zn/users/hdpusr01/hadoop/etc/hadoop> jps
27826 DataNode    -- (AI-OPT-HBS03 - HBS06)




/* znodes added in zookeeper after starting HDFS */
[zk: 10.46.52.34:29181(CONNECTED) 15] ls /hadoop-ha/hdpcls1
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: 10.46.52.34:29181(CONNECTED) 16] ls /hadoop-ha/hdpcls1/ActiveBreadCrumb
[]
[zk: 10.46.52.34:29181(CONNECTED) 17] ls /hadoop-ha/hdpcls1/ActiveStandbyElectorLock
[]
[zk: 10.46.52.34:29181(CONNECTED) 18] get /hadoop-ha/hdpcls1/ActiveBreadCrumb


hdpcls1nn1
          HBS01 ?E(?>
cZxid = 0x10000000a
ctime = Fri Dec 18 13:55:41 CST 2015
mZxid = 0x10000000a
mtime = Fri Dec 18 13:55:41 CST 2015
pZxid = 0x10000000a
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 34
numChildren = 0
[zk: 10.46.52.34:29181(CONNECTED) 19] get /hadoop-ha/hdpcls1/ActiveStandbyElectorLock


hdpcls1nn1
          HBS01 ?E(?>
cZxid = 0x100000008
ctime = Fri Dec 18 13:55:41 CST 2015
mZxid = 0x100000008
mtime = Fri Dec 18 13:55:41 CST 2015
pZxid = 0x100000008
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x351b09482220000
dataLength = 34
numChildren = 0




-- start YARN (on HBS01; the resourcemanager process consumes a lot of resources, so with enough machines consider running it separately from the namenode process)
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/sbin> ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-resourcemanager-HBS01.out
HBS03: starting nodemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-nodemanager-HBS03.out
HBS04: starting nodemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-nodemanager-HBS04.out
HBS05: starting nodemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-nodemanager-HBS05.out
HBS06: starting nodemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-nodemanager-HBS06.out
Note: at this point the resourcemanager process on the master node has not started yet, while the nodemanager processes on the slave nodes have.
hdpusr01@HBS03:/zn/users/hdpusr01/hadoop/etc/hadoop> jps
28004 NodeManager  -- (AI-OPT-HBS03 - HBS06)


-- start resourcemanager
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/sbin> ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-resourcemanager-HBS01.out
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/sbin> jps
30921 ResourceManager  -- the process finally shows up
Note: the resourcemanager on host AI-OPT-HBS02 has not started either and must be started manually.
hdpusr01@HBS02:/zn/users/hdpusr01/hadoop/sbin> ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /zn/users/hdpusr01/hadoop/logs/yarn-hdpusr01-resourcemanager-HBS02.out
hdpusr01@HBS02:/zn/users/hdpusr01/hadoop/sbin> jps
28945 ResourceManager




/* znodes added in zookeeper after starting YARN and the resourcemanagers */
[zk: 10.46.52.34:29181(CONNECTED) 22] ls /
[rmstore, yarn-leader-election, hadoop-ha, zookeeper]
[zk: 10.46.52.34:29181(CONNECTED) 23] ls /rmstore
[ZKRMStateRoot]
[zk: 10.46.52.34:29181(CONNECTED) 24] ls /rmstore/ZKRMStateRoot
[AMRMTokenSecretManagerRoot, RMAppRoot, RMVersionNode, RMDTSecretManagerRoot]
[zk: 10.46.52.34:29181(CONNECTED) 25] ls /yarn-leader-election
[rmcls1]
[zk: 10.46.52.34:29181(CONNECTED) 26] ls /yarn-leader-election/rmcls1
[ActiveBreadCrumb, ActiveStandbyElectorLock]








-- check cluster status via command
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/bin> ./hdfs dfsadmin -report








5. Install HBase
5.1 Install the HBase cluster (AI-OPT-HBS01 - HBS06)
-- run on one chosen host (AI-OPT-HBS01)
[root@HBS01 hbsusr01]# tar xvf /opt/software/hbase-1.1.2.tgz
[root@HBS01 hbsusr01]# mv hbase-1.1.2 hbase
[root@HBS01 hbsusr01]# chown -R hbsusr01:hbase hbase




-- set the cluster node file
hbsusr01@HBS01:/zn/users/hbsusr01/hbase/conf> vi regionservers
HBS03
HBS04
HBS05
HBS06




-- set the hbase configuration file
hbsusr01@HBS01:/zn/users/hbsusr01/hbase/conf> vi hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://HBS01:8920/hbase</value>
</property>


<property>
<name>hbase.master.port</name>
<value>60900</value>
</property>


<property>
<name>hbase.regionserver.port</name>
<value>60920</value>
</property>


<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>


<property>
<name>hbase.zookeeper.quorum</name>
<value>HBS04,HBS05,HBS06</value>
</property>


<property>
<name>hbase.tmp.dir</name>
<value>/zn/users/hbsusr01/hbase/tmp</value>
</property>


<property>
<name>hbase.zookeeper.peerport</name>
<value>29888</value>
</property>


<property>
<name>hbase.zookeeper.leaderport</name>
<value>39888</value>
</property>


<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>29181</value>
</property>


<property>
<name>hbase.rest.port</name>
<value>8980</value>
</property>
</configuration>






-- set the hbase environment
hbsusr01@HBS01:/zn/users/hbsusr01/hbase/conf> vi hbase-env.sh
export HBASE_CONF_DIR=/zn/users/hbsusr01/hbase/conf
export HBASE_PID_DIR=/zn/users/hbsusr01/hbase/pid
export HBASE_LOG_DIR=/zn/users/hbsusr01/hbase/logs


# export HBASE_MANAGES_ZK=true
export HBASE_MANAGES_ZK=false




-- create the related directories
hbsusr01@HBS01:/zn/users/hbsusr01> mkdir -p hbase/{conf,pid,logs,tmp}




-- copy the archive to the other nodes and extract it
hbsusr01@HBS01:/zn/users/hbsusr01> tar cvf hbase.tar hbase
hbsusr01@HBS01:/zn/users/hbsusr01> scp hbase.tar HBS03:/zn/users/hbsusr01
hbsusr01@HBS01:/zn/users/hbsusr01> scp hbase.tar HBS04:/zn/users/hbsusr01
hbsusr01@HBS01:/zn/users/hbsusr01> scp hbase.tar HBS05:/zn/users/hbsusr01
hbsusr01@HBS01:/zn/users/hbsusr01> scp hbase.tar HBS06:/zn/users/hbsusr01




                         
-- configure ssh trust (for the hbsusr01 account)
su - hbsusr01
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
chmod 600 .ssh/authorized_keys


-- run on one chosen master (HBS01)
ssh HBS02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS03 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS04 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS05 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
ssh HBS06 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 


scp ~/.ssh/authorized_keys HBS02:/zn/users/hbsusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS03:/zn/users/hbsusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS04:/zn/users/hbsusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS05:/zn/users/hbsusr01/.ssh/authorized_keys
scp ~/.ssh/authorized_keys HBS06:/zn/users/hbsusr01/.ssh/authorized_keys


-- verify the configuration works
ssh HBS01 date
ssh HBS02 date
ssh HBS03 date
ssh HBS04 date
ssh HBS05 date
ssh HBS06 date


-- extract the hbase package (AI-OPT-HBS03 - HBS06)
tar xvf hbase.tar




-- start hbase
start-hbase.sh
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbsusr01, access=WRITE, inode="/":hdpusr01:supergroup:drwxr-xr-x


Fix: create the directory and grant permissions with hadoop fs
hdpusr01@HBS01:/zn/users/hdpusr01> hadoop fs -mkdir /hbase
-mkdir: java.net.UnknownHostException: host-10-1-241-18


Temporary workaround, running hadoop from the cluster's own bin directory:
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/bin> ./hadoop fs -mkdir /hbase
hdpusr01@HBS01:/zn/users/hdpusr01/hadoop/bin> ./hadoop fs -chown hbsusr01:hbase /hbase
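A quick check that the workaround took effect, from the same bin directory:

./hadoop fs -ls /    # /hbase should now be listed as owned by hbsusr01:hbase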


/*
Proper fix: configure the environment variables
su - hdpusr01
export HADOOP_CONF_DIR=$HOME/hadoop/etc/hadoop
#export HADOOP_PID_DIR=$HOME/hadoop/pid
export HADOOP_LOG_DIR=$HOME/hadoop/logs
export HADOOP_LIBEXEC_DIR=$HOME/hadoop/libexec
export HADOOP_PREFIX=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_LIB=${HADOOP_HOME}/lib
export HADOOP_LIBRARY_PATH=${HADOOP_HOME}/lib/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_YARN_USER=pochdp04
export YARN_CONF_DIR=${HADOOP_CONF_DIR}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export YARN_LOG_DIR=${HADOOP_LOG_DIR}
export YARN_PID_DIR=${HADOOP_CONF_DIR}/../yarn
*/




su - hbsusr01
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/zn/users/hdpusr01/hadoop/lib/native
export HBASE_HOME=/zn/users/hbsusr01/hbase
export HBASE_CONF_DIR=$HBASE_HOME/conf
export HBASE_LOG_DIR=$HBASE_HOME/logs
export HBASE_PID_DIR=$HBASE_HOME/pid
export PATH=$PATH:$HBASE_HOME/bin
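With the environment set and /hbase owned by hbsusr01, start-hbase.sh can be re-run; afterwards a quick health check from the HBase shell (which locates the cluster through the hbase.zookeeper.quorum setting configured earlier):

start-hbase.sh
hbase shell
hbase(main):001:0> status 'simple'    # lists the live region servers
hbase(main):002:0> list               # a fresh cluster returns an empty table list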


http://blog.itpub.net/20777547/viewspace-1745820/
http://www.aboutyun.com/thread-11909-1-1.html
http://www.ibm.com/developerworks/cn/opensource/os-cn-hadoop-yarn/






***********************************************Hadoop pseudo-distributed deployment**********************************************
1. Create the user and configure the hostname
mkdir -p /zn/users
useradd -u 451   -g hadoop -G ibss -d /zn/users/hdpusr02 -m hdpusr02
passwd hdpusr02     --stdin <<< Linuhdp_0805
echo -e "\n. ~puwadm/wprofile\n" >> ~hdpusr02/.bash_profile 




2. Configure SSH trust
su - hdpusr02
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh AI-UAT-OPT-HBASE03 date


3. Extract the archive
tar xvf /opt/software/hadoop-2.6.0.tgz 
mv hadoop-2.6.0/ hadoop


4. Configure hadoop
vi hadoop-env.sh
export JAVA_HOME=/opt/freeware/jdk1.7.0_79


vi core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/zn/users/hdpusr02/hadoop/tmp</value>
</property>


<property>
<name>fs.defaultFS</name>
<value>hdfs://AI-UAT-OPT-HBASE03:9000</value>
</property>
</configuration>


vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>


vi yarn-site.xml 
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>


vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>


<property>
<name>dfs.namenode.name.dir</name>
<value>file:/zn/users/hdpusr02/hadoop/dfs/name</value>
</property>


<property>
<name>dfs.datanode.data.dir</name>
<value>file:/zn/users/hdpusr02/hadoop/dfs/data</value>
</property>
</configuration>


vi slaves
AI-UAT-OPT-HBASE03


5. Create the corresponding directories
mkdir -p hadoop/dfs/{name,data}


6. Start hadoop
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh
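A minimal smoke test for the pseudo-distributed setup, using the wordcount example jar shipped with 2.6.0 (run from the hadoop directory; relative HDFS paths resolve under /user/hdpusr02):

bin/hdfs dfs -mkdir -p /user/hdpusr02/input
bin/hdfs dfs -put etc/hadoop/*.xml /user/hdpusr02/input
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output
bin/hdfs dfs -cat output/part-r-00000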

