
# Installing Hadoop 2.6.1


---
categories:
- Hadoop
date: 2015-11-23 21:35:22
---


## 1. Install JDK 7


### 1.1 Create the directory
```shell
mkdir /usr/local/jdk/
```
Extract jdk1.7.0_79.tar.gz into this directory; the resulting layout is:
/usr/local/jdk/jdk1.7.0_79
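A minimal sketch of the extraction step, assuming the archive sits in the current directory:
```shell
# Unpack the JDK archive into /usr/local/jdk/
tar -zxvf jdk1.7.0_79.tar.gz -C /usr/local/jdk/
```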


### 1.2 Configure global environment variables
Edit /etc/profile with vi and append the following:
```
export JAVA_HOME=/usr/local/jdk/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
```


Reload the profile so the variables take effect:
```
source /etc/profile
```
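To confirm that the JDK is now on the PATH:
```shell
# Should print the Java 1.7.0_79 version banner
java -version
```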


## 2. Passwordless SSH Login


### 2.1 Enable public key authentication
To enable passwordless SSH login on CentOS, uncomment the following lines in /etc/ssh/sshd_config (remove the leading `#`):
```
#RSAAuthentication yes
#PubkeyAuthentication yes
```
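After editing sshd_config, restart the SSH daemon so the change takes effect (a sketch assuming CentOS 6; on CentOS 7 this would be `systemctl restart sshd`):
```shell
# Restart sshd to pick up the new configuration
service sshd restart
```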


If the SSH client is not installed, install it:
```
yum install openssh-clients
```


### 2.2 Generate the public key
On each machine, run the following command as root:
```
ssh-keygen -t rsa
```
This generates the public key file id_rsa.pub under /root/.ssh/. Append it to authorized_keys:
```
cat id_rsa.pub >> authorized_keys
```
Append the public keys of the other machines to this file as well:
```
ssh [email protected] cat ~/.ssh/id_rsa.pub >> authorized_keys
```
### 2.3 Distribute the keys
Copy the Master server's authorized_keys and known_hosts files to the /root/.ssh directory on each Slave server, then test whether `ssh [email protected]` and `ssh [email protected]` still prompt for a password.
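A minimal sketch of the copy step, assuming a slave hostname of node2 (substitute the real slave hostnames or IPs):
```shell
# Push the master's key material to a slave node
scp /root/.ssh/authorized_keys /root/.ssh/known_hosts root@node2:/root/.ssh/
```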


## 4. Install Hadoop
Create the folders used for data storage:
/home/hadoop
/home/hadoop/tmp
/home/hadoop/hdfs
/home/hadoop/hdfs/data
/home/hadoop/hdfs/name
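These can be created in one step (mkdir -p also creates the parent directories):
```shell
# Create the Hadoop tmp, HDFS data, and HDFS name directories
mkdir -p /home/hadoop/tmp /home/hadoop/hdfs/data /home/hadoop/hdfs/name
```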


## 5. Create the Installation Directory
```
mkdir /usr/local/hadoop
```
Download hadoop-2.6.1.tar.gz from
http://mirrors.advancedhosters.com/apache/hadoop/common/hadoop-2.6.1/hadoop-2.6.1.tar.gz
and extract it into /usr/local/hadoop/.
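For example, assuming wget is installed and the mirror above is still reachable:
```shell
# Download and unpack Hadoop 2.6.1 into /usr/local/hadoop/
wget http://mirrors.advancedhosters.com/apache/hadoop/common/hadoop-2.6.1/hadoop-2.6.1.tar.gz
tar -zxvf hadoop-2.6.1.tar.gz -C /usr/local/hadoop/
```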




## 6. Configure hosts on Each Machine
Edit /etc/hosts with vi and add one line per machine in the form `IP hostname` (node1, node2, and so on for every node in the cluster).
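A minimal sketch, assuming the master is 192.168.1.100 and one slave is 192.168.1.200 (the addresses used in the configuration and error sections below), with hostnames node1 and node2:
```
192.168.1.100 node1
192.168.1.200 node2
```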


## 7. Configure Hadoop Parameters
### 7.1 Configure hadoop-2.6.1/etc/hadoop/core-site.xml
```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.100:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
```


### 7.2 Configure hadoop-2.6.1/etc/hadoop/hdfs-site.xml
```xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.1.100:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
```


### 7.3 Configure hadoop-2.6.1/etc/hadoop/mapred-site.xml
```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.1.100:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.1.100:19888</value>
    </property>
</configuration>
```


### 7.4 Configure hadoop-2.6.1/etc/hadoop/yarn-site.xml
```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.1.100:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.1.100:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.1.100:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.1.100:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.1.100:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>22528</value>
    </property>
</configuration>
```


### 7.5 Configure hadoop-2.6.1/etc/hadoop/hadoop-env.sh and hadoop-2.6.1/etc/hadoop/yarn-env.sh
Set JAVA_HOME in both files:
```
export JAVA_HOME=/usr/local/jdk/jdk1.7.0_79
```


### 7.6 Configure slaves to add the slave nodes
Edit hadoop-2.6.1/etc/hadoop/slaves with vi and list one slave hostname per line.
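A minimal example, assuming node2 (192.168.1.200) from the /etc/hosts sketch above is the only slave:
```
node2
```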


## 8. Commands
Format the NameNode (initialization, only needed before the first start):
```
bin/hdfs namenode -format
```


Start HDFS and YARN:
```
sbin/start-dfs.sh
sbin/start-yarn.sh
```

Stop them:
```
sbin/stop-dfs.sh
sbin/stop-yarn.sh
```
Run the jps command to verify that the daemons are running.
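On the master node configured above, the output might look roughly like this (process IDs will differ):
```
2481 NameNode
2662 SecondaryNameNode
2821 ResourceManager
3105 Jps
```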


Run a jar:
```
hadoop jar xxxxx.jar arg1 arg2 
```


**HDFS commands**
List the files in a directory:
```
hadoop fs -ls /
```


Create a directory:
```
hadoop fs -mkdir /newdir
```


Copy a local file to HDFS:
```
hadoop fs -copyFromLocal /home/input/a.txt /input/a.txt
```


Copy an HDFS file to the local filesystem:
```
hadoop fs -copyToLocal /input/a.txt /home/input/a.txt
```


Delete an HDFS directory and the files in it:
```
hadoop fs -rm -f -r /output1
```


Move files:
```
hadoop fs -mv URI [URI …] <dest>
```


Kill a job:
```
hadoop job -kill <id>
```
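The job ID can be looked up first (`hadoop job` is deprecated in 2.x in favor of `mapred job`, but both list the running jobs):
```
hadoop job -list
```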


Leave safe mode:
```
hadoop dfsadmin -safemode leave 
```


Cluster web UI (YARN ResourceManager):
http://192.168.1.100:8088/


HDFS web UI (NameNode):
http://192.168.1.100:50070/


## 9. Errors
### Error during NameNode format
host = java.net.UnknownHostException: centos: centos


Check the /etc/sysconfig/network file:
```
NETWORKING=yes
HOSTNAME=centos
```


The HOSTNAME is centos, but no matching IP for it can be found in /etc/hosts.


Edit /etc/hosts and add:
```
127.0.0.1   centos
```


### Error on startup:
/hadoop-2.6.1/sbin/hadoop-daemon.sh: Permission denied


The Hadoop directory on the slave nodes must have execute permission:
```
chmod -R 755 hadoop-2.6.1
```
### ShuffleError


```
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
        at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
        at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
        at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:305)
        at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:295)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:514)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
```
Solution: add the following property to mapred-site.xml:
```xml
<property>
    <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
    <value>0.10</value>
</property>
```




### Bad connect ack

```
15/11/30 20:15:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.1.200:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
```


Solution: open port 50010 in the firewall on 192.168.1.200.
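A sketch of one way to do this, assuming CentOS 6 with iptables (adjust to the firewall actually in use):
```shell
# Allow the DataNode data-transfer port through the firewall
iptables -I INPUT -p tcp --dport 50010 -j ACCEPT
# Persist the rule across reboots (CentOS 6)
service iptables save
```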