Name or service not known errors when formatting (or starting) Hadoop
When formatting Hadoop for the first time, you may see error messages like the following:
- 14/08/10 07:07:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Stopping namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.
- It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
- cluster1]
- sed: -e expression #1, char 6: unknown option to `s'
- -c: Unknown cipher type 'cd'
- ^Ccluster1: stopping namenode
- cluster1: stopping datanode
- VM: ssh: Could not resolve hostname VM: Name or service not known
- stack: ssh: Could not resolve hostname stack: Name or service not known
- warning:: ssh: Could not resolve hostname warning:: Name or service not known
- will: ssh: Could not resolve hostname will: Name or service not known
- which: ssh: Could not resolve hostname which: Name or service not known
- fix: ssh: Could not resolve hostname fix: Name or service not known
- disabled: ssh: Could not resolve hostname disabled: Name or service not known
- have: ssh: Could not resolve hostname have: Name or service not known
- 64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
- guard: ssh: Could not resolve hostname guard: Name or service not known
- HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
- Java: ssh: Could not resolve hostname Java: Name or service not known
- VM: ssh: Could not resolve hostname VM: Name or service not known
- stack: ssh: Could not resolve hostname stack: Name or service not known
- The: ssh: Could not resolve hostname The: Name or service not known
- recommended: ssh: Could not resolve hostname recommended: Name or service not known
- have: ssh: Could not resolve hostname have: Name or service not known
- guard.: ssh: Could not resolve hostname guard.: Name or service not known
- Server: ssh: Could not resolve hostname Server: Name or service not known
- loaded: ssh: Could not resolve hostname loaded: Name or service not known
- It's: ssh: Could not resolve hostname It's: Name or service not known
- try: ssh: Could not resolve hostname try: Name or service not known
- the: ssh: Could not resolve hostname the: Name or service not known
- You: ssh: Could not resolve hostname You: Name or service not known
- that: ssh: Could not resolve hostname that: Name or service not known
- might: ssh: Could not resolve hostname might: Name or service not known
- you: ssh: Could not resolve hostname you: Name or service not known
- library: ssh: Could not resolve hostname library: Name or service not known
- fix: ssh: Could not resolve hostname fix: Name or service not known
- to: ssh: Could not resolve hostname to: Name or service not known
- highly: ssh: Could not resolve hostname highly: Name or service not known
- library: ssh: Could not resolve hostname library: Name or service not known
- the: ssh: Could not resolve hostname the: Name or service not known
- 'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
- '-z: ssh: Could not resolve hostname '-z: Name or service not known
- now.: ssh: Could not resolve hostname now.: Name or service not known
These strange hostnames are simply the words of the JVM stack-guard warning: the warning is printed on stderr, and Hadoop's ssh-based start/stop scripts split that text into words and try to resolve each word as a hostname. The errors can be eliminated by adding the following configuration to /etc/profile:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
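After editing /etc/profile, reload it and rerun the format. A minimal sketch, assuming $HADOOP_HOME/bin is on the PATH (hdfs namenode -format is the Hadoop 2.x form of the command):
# make the new environment variables visible in the current shell
source /etc/profile
# reformat the namenode (this wipes existing HDFS metadata, so only do it on first-time setup)
hdfs namenode -format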
Reformatting now produces no error messages, but a warning remains:
14/08/10 07:07:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
To get rid of the warning as well, add one more environment variable to /etc/profile:
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
Even with this variable set, the errors above can still appear if the host's word size (32-bit vs. 64-bit) does not match that of the Hadoop native libraries.
You can check the current system with uname -a:
Linux yitian1 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 18:01:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
The output shows that the Linux kernel is 2.6.32-573.8.1.el6.x86_64, i.e. a 64-bit system.
You can check the word size of the Hadoop native libraries with the following command:
file /usr/local/hadoop/lib/native/*
/usr/local/hadoop/lib/native/libhadoop.a: current ar archive
/usr/local/hadoop/lib/native/libhadooppipes.a: current ar archive
/usr/local/hadoop/lib/native/libhadoop.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
/usr/local/hadoop/lib/native/libhadooputils.a: current ar archive
/usr/local/hadoop/lib/native/libhdfs.a: current ar archive
/usr/local/hadoop/lib/native/libhdfs.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
/usr/local/hadoop/lib/native/libhdfs.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
The output shows that the Hadoop native libraries are also 64-bit. If the libraries and the system do not match, they must be brought into line; in practice this usually means replacing Hadoop's native libraries with ones built for your platform.
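The standard way to do that is to rebuild the native code from the matching Hadoop source release. A rough sketch, assuming the build prerequisites (JDK, Maven, protobuf 2.5, cmake, and the zlib/openssl development headers) are installed and the version is 2.2.0 as in this article; the download URL is illustrative:
# fetch and unpack the source release that matches the installed version
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
tar -xzf hadoop-2.2.0-src.tar.gz
cd hadoop-2.2.0-src
# compile the native libraries for the current platform
mvn package -Pdist,native -DskipTests -Dtar
# back up the bundled libraries and install the freshly built ones
mv /usr/local/hadoop/lib/native /usr/local/hadoop/lib/native.bak
cp -r hadoop-dist/target/hadoop-2.2.0/lib/native /usr/local/hadoop/lib/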
Note: after modifying /etc/profile, run source /etc/profile so that the changes take effect immediately.
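To confirm that the native library is now being picked up, you can ask Hadoop directly. Note that the checknative subcommand only ships with Hadoop 2.4 and later, so on 2.2.0 you would instead restart the daemons and check that the NativeCodeLoader warning no longer appears:
# reports whether libhadoop.so and the compression codecs were loaded (Hadoop 2.4+)
hadoop checknative -a
# on older releases: restart HDFS and watch for the absence of the warning
stop-dfs.sh && start-dfs.sh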