Adding a JournalNode service node to an Ambari-managed cluster
Native (manual) approach:
Perform the following steps as the hadoop user:
1. Edit etc/hadoop/hdfs-site.xml and add the new JournalNode's address and port to the dfs.namenode.shared.edits.dir property.
2. Distribute etc/hadoop/hdfs-site.xml to every server in the cluster.
3. Copy the data directory from an existing JournalNode to the new JournalNode server.
4. On the new JournalNode server, run hadoop-daemon.sh start journalnode to start the JournalNode.
5. On the standby NameNode server, run hadoop-daemon.sh stop namenode to stop the NameNode service.
6. On the standby NameNode server, run hadoop-daemon.sh start namenode to start the NameNode service. The new JournalNode should now appear in the web UI.
7. Run hdfs haadmin -failover nn1 nn2 to switch the active NameNode.
8. On the formerly active NameNode, restart the NameNode with:
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode
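The manual steps above can be sketched as a small dry-run script. The host names are placeholders for your own environment, and every command is echoed rather than executed, so nothing touches a live cluster:

```shell
#!/bin/sh
# Dry-run sketch of the manual procedure; run as the hadoop user.
# NEW_JN and STANDBY_NN are placeholder host names (assumptions, not real hosts).
NEW_JN="new-jn-host"
STANDBY_NN="standby-nn-host"

# Step 2: distribute the updated hdfs-site.xml (repeat for each cluster node).
echo scp etc/hadoop/hdfs-site.xml "${NEW_JN}:etc/hadoop/"
# Step 4: start the JournalNode on the new server.
echo ssh "${NEW_JN}" hadoop-daemon.sh start journalnode
# Steps 5-6: restart the standby NameNode so it picks up the new shared edits dir.
echo ssh "${STANDBY_NN}" hadoop-daemon.sh stop namenode
echo ssh "${STANDBY_NN}" hadoop-daemon.sh start namenode
# Step 7: fail over, then restart the former active NameNode the same way.
echo hdfs haadmin -failover nn1 nn2
```

Remove the leading echo from each line to run the commands for real.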
If you installed the Hadoop cluster manually, the method above is all you need. If the cluster was installed through Ambari and you also want the manually added JournalNode to show up in the Ambari UI, use the following method.
Ambari approach:
Ambari sets up 3 JournalNodes by default, but if one of them fails and needs to be replaced, the Ambari UI offers no option for this, so it has to be done through other commands.
I saw an earlier article that downgraded HA and then rebuilt it; that is far too risky and complicated. I found another approach online and am sharing it for anyone who needs it. I also hope a future Ambari release adds this add-JournalNode feature. A warning before you start: if you are not familiar with these Ambari operations at all, do not attempt this, or the Ambari management UI may end up in an abnormal state and become unmanageable.
Practice in a test environment first, and only move to production once you have verified the procedure.
Adding the JournalNode
1. Assign the role:
curl -u admin:admin -H 'X-Requested-By: Ambari' -X POST http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
My environment:
curl -u admin:admin -H "X-Requested-By: Ambari" -X POST http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
Verify the result:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
2. Install the JournalNode:
curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://10.11.32.50:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
My environment:
curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
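The two API calls above can also be parameterized. The following dry-run sketch uses the values from this article's environment and only echoes the curl commands, so nothing hits a live Ambari server:

```shell
#!/bin/sh
# Build the Ambari REST URL for the JournalNode host component,
# using this article's Ambari server, cluster name, and new host.
AMBARI_HOST="10.11.32.50:8080"
CLUSTER_NAME="testhadoop"
NEW_JN_NODE="testserver2.bj"
URL="http://${AMBARI_HOST}/api/v1/clusters/${CLUSTER_NAME}/hosts/${NEW_JN_NODE}/host_components/JOURNALNODE"

# Step 1: assign the role (POST); step 2: install it (PUT state=INSTALLED).
echo curl -u admin:admin -H 'X-Requested-By: Ambari' -X POST "$URL"
echo curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  "$URL"
```

Drop the leading echo from each line to perform the calls for real.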
3. Update the HDFS configuration
Login to Ambari Web UI and modify the HDFS Configuration. Search for dfs.namenode.shared.edits.dir and add the new JournalNode. Make sure you don’t mess up the format for the journalnode list provided. The following is a format of a typical 3 JournalNode shared edits definition.
qjournal://my-jn-node-1.host.com:8485;my-jn-node-2.host.com:8485;my-jn-node-3.host.com:8485/MyLAB
My environment's setting:
qjournal://testserver4.bj:8485;testserver1.bj:8485;testserver2.bj:8485;testserver3.bj:8485/testcluster
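The shared-edits value is easy to get wrong by hand; a small sketch that assembles it from a host list, using the values from this article's environment:

```shell
#!/bin/sh
# Assemble dfs.namenode.shared.edits.dir from the JournalNode host list.
NAMESERVICE="testcluster"     # dfs.nameservices
JN_PORT=8485                  # default JournalNode RPC port
JN_HOSTS="testserver4.bj testserver1.bj testserver2.bj testserver3.bj"

hosts=""
for h in $JN_HOSTS; do
  hosts="${hosts}${hosts:+;}${h}:${JN_PORT}"   # join host:port entries with ';'
done
SHARED_EDITS="qjournal://${hosts}/${NAMESERVICE}"
echo "$SHARED_EDITS"
```

This prints exactly the qjournal URI shown above, ready to paste into the Ambari configuration field.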
4. Create the JournalNode directory
Time to create the required directory structure on the new Journalnode. You have to create this directory structure based on your cluster installation. If unsure, you
can find this value from $HADOOP_CONF/hdfs-site.xml file. Look for the parameter value for dfs.journalnode.edits.dir. In my case, it happens to be /hadoop/qjournal/namenode/.
My environment:
dfs.journalnode.edits.dir /hadoop/hdfs/journal
ll -d /hadoop/hdfs/journal
drwxr-xr-x 3 hdfs hadoop 4096 Feb 2 10:56 /hadoop/hdfs/journal
Make sure you add the HDFS Nameservice directory. You can find this value from $HADOOP_CONF/hdfs-site.xml file. The value can be found for parameter dfs.nameservices.
In my example, I have "MyLAB". So I will create the directory structure as /hadoop/qjournal/namenode/MyLAB.
My environment:
dfs.nameservices testcluster
ll -d /hadoop/hdfs/journal/testcluster/
drwxr-xr-x 3 hdfs hadoop 4096 Mar 16 18:40 /hadoop/hdfs/journal/testcluster/
mkdir -p /hadoop/hdfs/journal/testcluster/
chown hdfs:hadoop -R /hadoop/hdfs/journal/
5. Sync the data
Copy or Sync the directory ‘current’ under the ‘shared edits’ location from an existing JournalNode. Make sure that the ownership for all these newly created directories and sync’ed files is right.
My environment:
scp -r /hadoop/hdfs/journal/testcluster/* 10.11.32.51:/hadoop/hdfs/journal/testcluster/
chown hdfs:hadoop -R /hadoop/hdfs/journal/
The path where Ambari stores the JournalNode files is /hadoop/hdfs/journal
ll /hadoop/hdfs/journal/testcluster/
-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 11:39 edits_inprogress_0000000000000014177
-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 11:39 edits_inprogress_0000000000000021001
-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 13:44 edits_inprogress_0000000000000021215
-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 14:30 edits_inprogress_0000000000000021306
When you inspect the new node you may see 4 edits_inprogress files. The older ones should be the files copied over by scp; after startup, 2 new edits_inprogress files are created. The old ones are unused and can be deleted or kept.
Start the JournalNode service from the Ambari UI
2016-03-17 11:48:27,644 INFO server.Journal (Journal.java:scanStorageForLatestEdits(187)) - Scanning storage FileJournalManager(root=/hadoop/hdfs/journal/testcluster)
2016-03-17 11:48:27,791 INFO server.Journal (Journal.java:scanStorageForLatestEdits(193)) - Latest log is EditLogFile
(file=/hadoop/hdfs/journal/testcluster/current/edits_inprogress_0000000000000010224,first=0000000000000010224,last=0000000000000010232,inProgress=true,hasCorruptHeader=
false)
2016-03-17 11:49:37,304 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(133)) - Finalizing edits file
/hadoop/hdfs/journal/testcluster/current/edits_inprogress_0000000000000010238 -> /hadoop/hdfs/journal/testcluster/current/edits_0000000000000010238-0000000000000010251
Reference:
http://gaganonthenet.com/2015/09/14/add-journalnode-to-ambari-managed-hadoop-cluster/