Common problems when deploying CDH Manager and a CDH cluster on a minimal CentOS 7.2 installation
Note: this article is aimed at developers and operations engineers who already have some Hadoop experience. If this is your first time setting up a Hadoop cluster, refer instead to the step-by-step Ambari installation guide: http://blog.csdn.net/balabalayi/article/details/64920537
Deploying and installing CDH Manager is remarkably similar to installing Ambari; the differences amount to little more than "different packages and different directory layouts".
The process is only sketched here; for details, follow the official documentation (the recommended route): https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html
I. Install the base dependencies:
yum install -y net-tools ntp psmisc perl libxml2 libxslt lrzsz httpd telnet wget bind-utils
II. Prepare the environment:
This covers Java, passwordless SSH, NTP time synchronization, and /etc/hosts entries, as sketched below.
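A minimal per-node sketch, assuming OpenJDK from the OS repositories and placeholder hostnames/IPs (cdh01/cdh02, 192.168.1.x) that you should replace with your own:
# Java
yum install -y java-1.8.0-openjdk-devel
# NTP time synchronization
systemctl enable ntpd && systemctl start ntpd
# hosts entries for every node in the cluster
cat >> /etc/hosts <<'EOF'
192.168.1.11 cdh01
192.168.1.12 cdh02
EOF
# passwordless SSH from the Manager node to each agent node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@cdh02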
III. Download and deploy CDH Manager:
The recommended approach is to install via yum, either online or offline (download the rpm packages to the hosts beforehand and point a yum .repo file at them), then simply run yum install; a sketch follows.
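A minimal sketch of the offline route, assuming the rpms sit in a placeholder directory /opt/cm-repo on each host:
# build the local repo metadata once
yum install -y createrepo
createrepo /opt/cm-repo
# point yum at the local repo
cat > /etc/yum.repos.d/cloudera-manager.repo <<'EOF'
[cloudera-manager]
name=Cloudera Manager (local)
baseurl=file:///opt/cm-repo
enabled=1
gpgcheck=0
EOF
# on the Manager server node
yum install -y cloudera-manager-daemons cloudera-manager-server
# on the agent nodes
yum install -y cloudera-manager-daemons cloudera-manager-agent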
IV. Deploy and install CDH:
The recommended approach is to download the CDH parcel offline in advance and place it in the designated parcel-repo directory; CDH Manager can then unpack, install, and deploy it directly.
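A sketch of staging the parcel on the Manager node, assuming the default repository directory /opt/cloudera/parcel-repo (the exact parcel file names depend on the version you downloaded; the ones below are placeholders):
cd /opt/cloudera/parcel-repo
# parcel, checksum, and manifest all live here;
# Cloudera Manager expects the checksum file to be named <parcel>.sha
cp /path/to/CDH-5.x.y-1.cdh5.x.y.p0.NN-el7.parcel .
cp /path/to/CDH-5.x.y-1.cdh5.x.y.p0.NN-el7.parcel.sha1 ./CDH-5.x.y-1.cdh5.x.y.p0.NN-el7.parcel.sha
cp /path/to/manifest.json .
chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/*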
Troubleshooting:
I. Common host inspector warnings:
1. "Cloudera recommends setting /proc/sys/vm/swappiness to a maximum of 10. The current setting is 30. Use the sysctl command to change the setting at runtime and edit /etc/sysctl.conf so the setting survives a reboot. You may continue with the installation, but Cloudera Manager may report that your hosts are unhealthy because they are swapping."
Fix:
Run:
sysctl vm.swappiness=10
Then open /etc/sysctl.conf:
vi /etc/sysctl.conf
and add:
vm.swappiness=10
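To confirm both the runtime value and the persisted entry took effect (a quick sanity check):
sysctl vm.swappiness          # should print vm.swappiness = 10
grep swappiness /etc/sysctl.conf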
2. "Transparent hugepage compaction is enabled and can cause significant performance problems. Run "echo never > /sys/kernel/mm/transparent_hugepage/defrag" and "echo never > /sys/kernel/mm/transparent_hugepage/enabled" to disable it, then add the same commands to an init script such as /etc/rc.local so they are applied on reboot."
Fix:
Run:
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Then open /etc/rc.local:
vi /etc/rc.local
and add:
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
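One CentOS 7 caveat: /etc/rc.local only runs at boot if the underlying script is executable, which it is not by default, so also run:
chmod +x /etc/rc.d/rc.local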
II. Permission problems (during parcel activation, the directories created under /var/lib end up with the wrong permissions):
For example:
ls -l /var/lib/ |grep hadoop
d---------. 2 root root 4096 Jul 17 16:19 hadoop-hdfs
d---------. 2 root root 4096 Jul 17 16:19 hadoop-httpfs
d---------. 2 root root 4096 Jul 17 16:19 hadoop-kms
d---------. 2 root root 4096 Jul 17 16:19 hadoop-mapreduce
d---------. 3 root root 4096 Jul 17 17:54 hadoop-yarn
Fix (adapt to the services actually installed; a loop version follows the list):
chown -R flume:flume /var/lib/flume-ng
chown -R hdfs:hdfs /var/lib/hadoop-hdfs
chown -R httpfs:httpfs /var/lib/hadoop-httpfs
chown -R kms:kms /var/lib/hadoop-kms
chown -R mapred:mapred /var/lib/hadoop-mapreduce
chown -R yarn:yarn /var/lib/hadoop-yarn
chown -R hbase:hbase /var/lib/hbase
chown -R hive:hive /var/lib/hive
chown -R impala:impala /var/lib/impala
chown -R llama:llama /var/lib/llama
chown -R oozie:oozie /var/lib/oozie
chown -R sentry:sentry /var/lib/sentry
chown -R solr:solr /var/lib/solr
chown -R spark:spark /var/lib/spark
chown -R sqoop:sqoop /var/lib/sqoop
chown -R sqoop2:sqoop2 /var/lib/sqoop2
chown -R zookeeper:zookeeper /var/lib/zookeeper
chmod -R 755 /var/lib/flume-ng
chmod -R 755 /var/lib/hadoop-hdfs
chmod -R 755 /var/lib/hadoop-httpfs
chmod -R 755 /var/lib/hadoop-kms
chmod -R 755 /var/lib/hadoop-mapreduce
chmod -R 755 /var/lib/hadoop-yarn
chmod -R 755 /var/lib/hbase
chmod -R 755 /var/lib/hive
chmod -R 755 /var/lib/impala
chmod -R 755 /var/lib/llama
chmod -R 755 /var/lib/oozie
chmod -R 755 /var/lib/sentry
chmod -R 755 /var/lib/solr
chmod -R 755 /var/lib/spark
chmod -R 755 /var/lib/sqoop
chmod -R 755 /var/lib/sqoop2
chmod -R 755 /var/lib/zookeeper
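Since every entry follows the same owner-equals-service pattern, the same fix can be written as a short loop (a sketch; the user/directory pairs simply mirror the list above):
while read user dir; do
  chown -R "$user:$user" "/var/lib/$dir"
  chmod -R 755 "/var/lib/$dir"
done <<'EOF'
flume flume-ng
hdfs hadoop-hdfs
httpfs hadoop-httpfs
kms hadoop-kms
mapred hadoop-mapreduce
yarn hadoop-yarn
hbase hbase
hive hive
impala impala
llama llama
oozie oozie
sentry sentry
solr solr
spark spark
sqoop sqoop
sqoop2 sqoop2
zookeeper zookeeper
EOF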
III. Parcel distribution/activation hangs at "Acquiring installation lock":
Fix:
On the affected node, run:
rm -rf /tmp/scm_prepare_node.*
rm -rf /tmp/.scm_prepare_node.lock
Then retry.
IV. Parcel distribution/activation fails with "ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>":
Fix:
On the affected node, find and kill the stale supervisord process:
ps -ef | grep supervisord
kill -9 <processID>
Then retry.
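The supervisord in question is the one launched by the Cloudera Manager agent; after killing it, restarting the agent should bring up a fresh copy (assuming the standard systemd unit name):
systemctl restart cloudera-scm-agent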
V. After HDFS is deployed and started, the health check reports "Canary test failed to create parent directory for /tmp/.cloudera_health_monitoring_canary_files":
Cause: this commonly appears right after the cluster starts, or when the cluster is unhealthy. In the former case it is harmless; HDFS simply has not left safe mode yet.
Fix:
On the NameNode, run:
sudo -u hdfs hdfs dfsadmin -safemode leave
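Before forcing the exit, you can confirm that safe mode really is the culprit:
sudo -u hdfs hdfs dfsadmin -safemode get    # prints "Safe mode is ON" / "Safe mode is OFF"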
As for uninstalling a CDH cluster: much like Ambari, there is no clean, one-shot way to do it; Cloudera only documents how to uninstall the installed components:
https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh_comp_uninstall.html
As a small supplement, here is how to also remove the directories and users (note: before deleting anything, make sure the CDH Manager server-side and agent-side processes have all been stopped, e.g. as shown below):
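Stopping them on CentOS 7, using the standard service names shipped with the Cloudera Manager packages:
# on every node
systemctl stop cloudera-scm-agent
# on the Manager node
systemctl stop cloudera-scm-server
With those stopped, the cleanup itself: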
rm -rf /var/run/hadoop-*/ /var/run/hdfs-*/
rm -rf /var/lib/hadoop-* /var/lib/impala /var/lib/llama /var/lib/solr /var/lib/zookeeper /var/lib/hbase /var/lib/hue /var/lib/oozie /var/lib/pgsql /var/lib/sqoop* /var/lib/sentry /var/lib/spark*
rm -rf /var/log/hadoop*
rm -rf /usr/bin/hadoop* /usr/bin/zookeeper* /usr/bin/hbase* /usr/bin/hive* /usr/bin/hdfs /usr/bin/mapred /usr/bin/yarn /usr/bin/spark* /usr/bin/sqoop* /usr/bin/oozie
rm -rf /etc/hadoop* /etc/zookeeper* /etc/hive* /etc/hue /etc/impala /etc/sqoop* /etc/oozie /etc/hbase* /etc/hcatalog
rm -rf /dfs /hbase /yarn
userdel -rf oozie
userdel -rf hive
userdel -rf flume
userdel -rf hdfs
userdel -rf knox
userdel -rf storm
userdel -rf mapred
userdel -rf hbase
userdel -rf solr
userdel -rf impala
userdel -rf hue
userdel -rf tez
userdel -rf zookeeper
userdel -rf kafka
userdel -rf falcon
userdel -rf sqoop
userdel -rf yarn
userdel -rf hcat
userdel -rf atlas
userdel -rf spark
userdel -rf spark2
userdel -rf ams
userdel -rf llama
userdel -rf httpfs
userdel -rf sentry
userdel -rf sqoop2
userdel -rf cloudera-scm
groupdel cloudera-scm